How to overengineer with an LLM: don't state the requirements clearly, shove your pet patterns in first, treat following the slice-redux-awareness-hook pattern as more important than having a working solution, never trust your developers to make decisions, and worry more about how it is built than about building a solution.
My way of working with an LLM is to start from a good, clear requirement, have the LLM write a possible file organization, then query it for the contents of each file (just the code, no comments) and assemble a working prototype fast. From there you can iterate over the requirements and evolve the system.
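In practice that's a short driver script. Here's a minimal sketch in TypeScript, assuming the official openai Node client and an OPENAI_API_KEY in the environment; the requirement text, the prompt wording, and the prototype/ output directory are all illustrative:

    import OpenAI from "openai";
    import { mkdirSync, writeFileSync } from "node:fs";
    import { dirname, join } from "node:path";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Illustrative requirement; swap in your own.
    const requirement =
      "Build an online whiteboard. Tech stack: React, Konva for the canvas, a test framework.";

    async function ask(prompt: string): Promise<string> {
      const res = await client.chat.completions.create({
        model: "gpt-4",
        messages: [{ role: "user", content: prompt }],
      });
      return res.choices[0].message.content ?? "";
    }

    async function main() {
      // Step 1: ask only for the layout; one path per line keeps parsing trivial.
      const tree = await ask(
        requirement +
          "\nPropose a file organization. Print one relative file path per line, nothing else."
      );

      // Step 2: a fresh prompt per file; write each answer straight to disk.
      for (const file of tree.split("\n").map((l) => l.trim()).filter(Boolean)) {
        const code = await ask(
          requirement +
            "\nFile layout:\n" + tree +
            "\nWrite the full contents of " + file + ". Just the code, no comments."
        );
        const out = join("prototype", file);
        mkdirSync(dirname(out), { recursive: true });
        writeFileSync(out, code);
      }
    }

    main().catch(console.error);

Each file gets its own chat, so the context stays small; you run the result, see what breaks, tighten the requirement, and go again.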
Generally, I agree that approach works well. It's going to perform better if it's not trying to fulfill your team's existing patterns. On the other hand, allowing lots of stylistic inconsistency in a large codebase seems like a quick way to create a hot mess. Chat prompts seem like a really difficult way to communicate code style and conventions, though. A sibling comment to yours mentions that Copilot-style autocomplete seems like a much better fit for working in an existing codebase, and I tend to agree that's more promising: read the existing code, and recommend small pieces as you type.
How often do you get working code that way? Unless it's something trivial that fits in its scope, I'd say that's going to produce garbage. I've seen it steer into garbage on longer prompt chains about a single class of medium complexity, so I doubt it would work at the project level. Mind sharing the projects?
I work only with closed-source codebases and use this approach for prototypes. Using the same example as the blog, I prompt: "The current system is an online whiteboard. Tech stack: React; use some test framework; use Konva for the canvas. Propose a file organization; print the file layout tree (without explanations)." The trick is that for every chat the context is the requirement + the file tree + the specific file, so you never have the entire codebase in the context, only the current file. Also, use GPT-4; GPT-3 is not good enough.
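Concretely, the per-chat context is assembled along these lines (a sketch; the helper name and prompt wording are made up, the point is only that the tree plus one file is everything the model sees):

    import { readFileSync } from "node:fs";

    // Each chat sees the requirement, the file tree, and ONE file,
    // never the whole codebase. Names and prompt wording are illustrative.
    function contextFor(requirement: string, tree: string, filePath: string): string {
      return [
        requirement,
        "File layout:",
        tree,
        "Current contents of " + filePath + ":",
        readFileSync(filePath, "utf8"),
        "Rewrite this file to satisfy the requirement. Just the code, no comments.",
      ].join("\n");
    }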
My main point is that the blog post's final output is a pile of mock-test-awareness-hook-redux patterns that an architect feels good looking at; with my approach you end up with a prototype of an online whiteboard system.