
If you fix something, it sticks: the AI won't keep making the same mistake, and it won't change code that already exists if you ask it not to. It actually ONLY works well when you're doing iterative changes rather than using it as a pure code generator; one-shot performance is kind of crap. When a mistake happens, you point it out to the LLM and ask it to update both the code and the instructions that produced it, or you just ask it to fix the code once. You add tests, partially generated by the AI and curated by a human, and the AI runs the tests and fixes the code if they fail (or fixes the tests).
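
To make that concrete, here is a minimal sketch of what one of those human-curated tests might look like, assuming a Python project using pytest; parse_price and the pricing module are made-up names standing in for whatever behavior you want pinned:

    # test_pricing.py -- human-curated test that pins the agreed behavior.
    # If the AI later regresses parse_price, this fails, and the agent
    # re-runs the suite and fixes the code (or flags the test for review).
    import pytest

    from pricing import parse_price  # hypothetical module being iterated on

    def test_parses_plain_dollar_amount():
        # Behavior we fixed once and never want silently changed again.
        assert parse_price("$1,299.00") == 1299.00

    def test_rejects_garbage_input():
        with pytest.raises(ValueError):
            parse_price("not a price")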


All I can really say is that doesn't match my experience. If I fix something it implemented due to a "misunderstanding", it usually breaks it again a few messages later. But I would be the first to say that experiences with these models are extremely subjective.


I think we have very different experiences then. I find that multiple narrowly focused prompts, each updating the same file, work much better than trying to one-shot the file. I think you would have a better experience if you used /clear (assuming you are using Gemini CLI); the problem probably isn't the change in the file, it's the conversation history.



