ChatGPT is the go-to solution for prototyping. If ChatGPT cannot solve it, the technology likely has yet to arrive. However, to bring features into production, the market will split into specialized LLMs catering to specific verticals, use cases, and budgets.
For AI, specialization will always beat generalization. If it doesn't for you, it means you are still in the regime where the Kaplan et al. (2020) scaling-law effects apply, and you have yet to reach the tipping point of ever-increasing saturation. Past that point, only a hyper-optimized feature space gets you out of the local extremum. Thus, eventually, feature-space optimization via specialization is the way to go.
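To make the saturation point concrete, here is a minimal numerical sketch of the power-law form Kaplan et al. (2020) fit for loss versus parameter count, L(N) = (N_c / N)^α_N. The constants are the values reported in the paper; treat them and the function name as illustrative only, not as a claim about any particular model.

```python
def loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Power-law prediction of cross-entropy loss vs. model size (Kaplan et al. 2020)."""
    return (n_c / n_params) ** alpha_n

# Each 10x increase in parameters buys a smaller absolute loss reduction,
# which is the "saturation" the comment above refers to:
gains = [loss(10**k) - loss(10**(k + 1)) for k in range(8, 12)]
assert all(a > b for a, b in zip(gains, gains[1:]))  # diminishing returns
```

Once those returns flatten, scaling alone stops helping, which is the argument for switching to specialization.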
Whoever best classifies and orchestrates hyper-optimized models will win the LLM wars. Ultimately, it's all about engineering, infrastructure, and data strategies.
I find ChatGPT extremely useful for language-related tasks. Especially as a non-native English speaker, it helps me find the most appropriate phrasing in terms of tone, style, and conciseness, and it is better at translating longer text from non-English languages (like Korean) than any other tool I have ever used. ChatGPT is also a convenient alternative to casual googling: when I need a quick answer on a topic widely covered across reliable sources such as encyclopaedias, it makes it very easy to get an answer to a specific question. However, ChatGPT is currently unreliable for professional-level research, because it can hardly provide specific sources for the information it gives.