What were the things that Cursor couldn't "fix" for you?
Hello PH!
I'm Hyuntak from South Korea, quite new to this community; this is my first time posting.
I'm a med student and solo developer.
Cursor has dealt with hassles that would otherwise have kept me glued to the monitor for days.
BUT, Cursor (actually, the AI) did make things harder in some cases.
Not to mention what I call "the loop of errors": you ask the AI to resolve error A, it creates error B; then to resolve error B, it brings back error A.
In the most extreme case, the AI suggested I create a whole new project and migrate my current codebase, which wasn't actually necessary.
As a fellow Cursor user, could you share your moments of the "loop of errors" or extreme suggestions that Cursor (the AI) has made?
Replies
When updating an Express app from ~v4 to ~v5, you no longer need try / catch statements for basic error handling. Express does this for you. So, I asked the Cursor agent to remove those statements from several controllers in my project. After plenty of thinking, the agent was able to fix 10 controllers. But, when it moved to the next set, it started to rewrite functionality and change endpoints entirely. This kicked off several rounds of back and forth with the agent that ended up taking over an hour, and I never got remotely close to finishing the task.
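For readers who haven't made the v4-to-v5 jump, here is a minimal sketch of the difference being described. All names (`findUser`, `getUserV4`, etc.) are illustrative placeholders, not from the project in the thread:

```javascript
// Placeholder data-access helper used by both handler styles below
async function findUser(id) {
  if (id !== '42') throw new Error('user not found');
  return { id, name: 'Ada' };
}

// Express ~v4 style: a rejected promise is NOT forwarded for you,
// so every async handler needs its own try/catch calling next(err)
async function getUserV4(req, res, next) {
  try {
    const user = await findUser(req.params.id);
    res.json(user);
  } catch (err) {
    next(err); // must be done by hand in v4
  }
}

// Express ~v5 style: the router awaits the returned promise and
// passes a rejection to error-handling middleware automatically,
// so the try/catch boilerplate can simply be deleted
async function getUserV5(req, res) {
  const user = await findUser(req.params.id);
  res.json(user);
}
```

In v5 you would register the lean handler as usual (`app.get('/users/:id', getUserV5)`) and let a single error-handling middleware catch any rejection, which is exactly the boilerplate removal the agent kept getting wrong.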
From time to time, I ask the Cursor agent to perform tasks like this, as an exercise, and it always falls way short. So, I've given up on the Cursor agent. Instead, I prefer using google search to find answers (often pulled from Gemini AI) and then incorporate those answers into the project myself (tailored to my liking).
With regards to the task above, I ended up doing a couple of global search and replace exercises narrowed to the controller directory and finished in about 5 minutes.
In sum, I don't like AI touching my code though I do enjoy the answers it gives me outside my project.
The experience above was using Claude Sonnet-3.5.
@erichubbell Ah, problems keep happening even with Sonnet 3.5! Reading your story of making changes to your Express app gives me hard flashbacks of my own struggles with AI. Thanks for sharing your experience!
I'm sure AI models have studied Stack Overflow and other expert articles, but I wonder why they sometimes can't be as "wise" as their learning materials.
Keeping the code safe from indiscriminate AI touches is crucial! Thanks for the comment!
@erichubbell Totally hear you on this. It’s wild how something that should take 5 minutes ends up turning into a debugging marathon because the AI agent “tries too hard.” I’ve seen the same thing - especially when the task looks pattern-based, but has nuance that the model can’t intuit.
Interesting you mentioned global search + replace - it’s amazing how often the simplest tools still outperform the most advanced ones in certain contexts.
Nothing so far for me, if you're on Sonnet 3.7 Max. It has solved pretty much everything with thinking enabled.
@ajinsunny Maybe I ran into such issues because it was before the Sonnet 3.7 era. Things got better really fast from then on, I guess.
@ajinsunny @hyuntak_lee Yes you are exactly right
@ajinsunny @hyuntak_lee @sandra_addison Ah...I see the problem now. Let me just...
I've been using Gemini 2.5 and it's incredible
really has contextual understanding of the entire codebase + using md files to layer instructions atop the dev process has really helped maintain cohesion and "house style" aka list of pet peeves I never want to see!!!
@matt__mcdonagh I've heard Gemini 2.5 is quite a player these days, but I've never tried it for coding. I've been on Claude's side for coding jobs, but maybe I should give Gemini a try!
I've been using Gemini for work related to articles / writing / language, and that definitely resonates with what you've mentioned - understanding the entire context, keeping the "style." Didn't know that works for code too.
I'm giving it a shot for revising my landing page. Thanks for letting me know!
Let me know how it works for you, and in general what your experience is.
Feedback from the front lines from builders here has 1000x more signal than the AI comparison charts being pumped out by influencer accounts on X and LNKD.
Shadow
At this point I’ve learned to treat Cursor like a very smart but overly confident intern. Great for ideas, shortcuts, or filling in boilerplate—but if you blindly say “yes” to everything, you might wake up in a new codebase you didn’t ask for LOL
Curious what you were working on when it told you to migrate the whole project?
@ashley_from_shadow That's a good simile about the intern! Cursor is a good tool when you manage it, not when it manages you.
It was when my current launch project, "Elissa," was almost done. What worked yesterday suddenly stopped working today with a serious native-level version compatibility issue (it's a Flutter project). Since the Stack Overflow solutions failed, I asked Sonnet how to resolve the issue, and it listed 5 possible solutions. I tried them all and fell into the "loop of errors." After several loops, Sonnet said exactly the following: "The most drastic but fastest way to resolve this kind of problem is to create a new project and migrate."
But guess what! I somehow managed to figure out the solution myself, and it turned out I didn't have to make such a dramatic change. The story got lengthy, but thanks for asking!
Cursor AI and really any LLM will struggle with:
Long contexts (it loses track of the main goal, what matters, and what doesn't)
Complex and unique domain logic - If you are building a todo list, or on the other hand, an HTTP server from scratch, it will do both quite well, simply because these are well-understood problems. If you are working on a product that is niche, doing something uncommon (e.g. Bill of Materials analysis, as an example of something there is not much code on GitHub for LLMs to learn from), it will do worse and go down the wrong path quicker.
Large codebases - a combination of long context, complexity, and simply not having the awareness of how various parts interconnect.
So I now mostly use AI for smart autocomplete, for building isolated functions/modules that can be defined and tested in isolation and are not too unique for an LLM to get confused by, and as a rubber duck to bounce ideas off and brainstorm.
@andriusbartulis I totally agree with your points. The models sometimes don't catch the main issue that should be addressed, and they even scratch up small details along the way when the context gets long.
And I've heard choosing which framework to go with is especially important when the goal is maximizing the AI assistant. For example, models have tons of knowledge about React, but one cannot expect the same AI performance or output when building the same thing with Svelte.
AI surely is a good tool for fast prototyping and accelerating the speed of building, but depending on it completely, or giving it a really deep, serious job? I'd still keep my control over the AI!
I have encountered this several times, where Cursor suggested I create a brand new project, and in the end, I had to revert.
@feifei_ai A decision to migrate a project should be made only after thorough review and consideration, but Cursor seems to suggest it too easily, as if it's not a big deal. Reverting your project back to the version that worked (maybe still with errors) must have been a big job.
Hey all, loving this thread. Seeing a few of you mention Cursor being like a “super smart intern” or getting caught in error loops - it’s such a real pain point. I think the future of dev support communities isn’t just about sharing fixes, but how we think through these tools together. The stuff that doesn’t get captured in docs or changelogs.
I'm curious: when Cursor gets it "almost right," what do you guys usually do?
Do you patch manually, ask it to try again, or jump back to classic docs/forums?
@ambika_vaish I love your expression "pain point," haha. And yes, I agree with you that there should be discussions about the tools themselves in the dev community, not to mention the bug fixes.
That's an interesting question you asked and I face such situations from time to time.
I personally give Cursor "hints" and see if it goes in the right direction. It's fun to see the AIs suddenly admit their misses, and sometimes they even apologize for mistakes!
But when hints fail, I usually do the job manually.
@hyuntak_lee Haha, yeah, 'pain point' is putting it lightly sometimes! Giving Cursor hints sounds like a cool way to guide it—I imagine it’s like nudging an intern in the right direction. Have you noticed if certain types of hints work better? Or does it just depend on the situation?
Hi!
I used it to build a whole plugin within Figma, and for me the hassle was making backups the whole time. This was months ago, and I haven't been able to update it due to time issues; I'll get back to it in a couple of weeks and am wondering if it has changed a lot.
The biggest issue was that when I had something fixed and then got into a new feature, it would break the whole plugin and I couldn't get back to a previous state. This made me manually make backups every time something worked.
For me this isn't a problem, as I'm not a developer, but I was interested in its capabilities, and they are amazing!
@tmathe Hello Tommie! Thanks for sharing your experience!
New features messing up the current plugin were a frequent source of headaches for me too. Manually backing up without a tool like Git must have been a real pain!
To share a developer joke from Korea: we say that keeping current things working while implementing a new feature (as in CI/CD) is like "changing the wheel of a moving car."
I'm sure things will have changed (new features again) by the time you get back, but you'll get used to it soon!
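For anyone making manual copies as backups, a minimal Git checkpoint workflow covers the same need with far less effort. This is a sketch (directory and file names are illustrative) and assumes `git` is installed:

```shell
# One-time setup inside the project folder
mkdir -p figma-plugin-demo
cd figma-plugin-demo
git init
git config user.email "you@example.com"   # commit identity (example values)
git config user.name "Your Name"

# Save a checkpoint whenever the plugin works
echo "console.log('v1: working')" > code.js
git add .
git commit -m "working baseline"

# ...a new feature breaks the plugin...
echo "console.log('v2: broken')" > code.js

# Discard the uncommitted edits and return to the last good checkpoint
git checkout -- code.js
```

Committing after every working state gives exactly the "previous state to go back to" that manual file copies were standing in for, plus a log of what changed between checkpoints.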
I built my product long before this "vibe coding" wave, so I personally had to struggle with AIs (teaching them solutions from Stack Overflow) along the way.
Maybe you could share your stories from back in the old days? I just wonder if anyone had similar experiences using Cursor on their journey to building something.
Coda
I had a basic PHP website that I was trying to get up and running after 6 years of not touching it. Cursor did add a bunch of debugging code to help visualize what the error was (I know I could've used the Browser Tools MCP to help with this). The issue I had wasn't so much the loop of errors but more the extra debugging code that I didn't need. It started making my files hard to read. I could have prompted it to get rid of all the debugging code, but then it became hard to see which diffs from 10 chats ago actually helped fix the site. You'll see a lot of the issues people are having in the Cursor forum, like this thread.
@alchen Debugging code that I created with Cursor really needed a lot of manual touch-ups too. Sometimes it was about the logic, sometimes about the correct order of checking things.
Having read the forum articles you linked, I remember having the same issue too - Cursor updates making things worse. The updates (which were quite frequent) were also one source of "what worked well yesterday suddenly stopped working today."
Seems like it wasn't only me having these Cursor issues. But I'm not totally disappointed with Cursor, though. It's still my little helper for a better coding experience! Thanks for sharing your story!