
I think this is a relatively succinct summary of the downside case for LLM code generation. I hear a lot of this, and as someone who enjoys a well-structured codebase, I have a lot of instinctive sympathy.

However I think we should be thinking harder about how coding will change as LLMs change the economics of writing code:

- If the cost of delivering a feature is ~0, what's the point in spending weeks prioritizing it? Maybe Product becomes more like an iterative QA function?

- What are the risks that we currently manage through good software engineering practices, and what's the actual impact of those risks materializing? For instance, if we expose customer data that's probably pretty existential, but most companies can tolerate a little unplanned downtime (even if they don't enjoy it!). As the economics change, how sustainable is the current cost/benefit equilibrium of high-quality code?

We might not like it, but my guess is that in ≤ 5 years actual code will be more akin to assembler: sure, we might jump in and optimize, but we are really just monitoring the test suites, coverage, and risks rather than tuning whether the same library function is being evolved in a way that gives leverage across the code base.
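If code really does become assembler-like, the artifact teams actively tune may be the test gate rather than the source. A minimal sketch of what "monitoring coverage rather than code" could look like, assuming Jest as the runner (the threshold numbers are illustrative, not from the comment above):

    // jest.config.ts -- illustrative sketch: CI fails when coverage regresses
    import type { Config } from 'jest';

    const config: Config = {
      collectCoverage: true,
      coverageThreshold: {
        // Humans tune these gates instead of reviewing every diff.
        global: { branches: 80, functions: 85, lines: 90, statements: 90 },
      },
    };

    export default config;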



> As the economics change, how sustainable is the current cost/benefit equilibrium of high-quality code

"High quality code"? The standard today is "barely functional", if we lower the standards any further we will find ourselves debating how many crashes a day we're willing to live with, and whether we really care about weekly data loss caused by race conditions.


And if that's what's economically beneficial then it shall be. Unfortunately.



> However I think we should be thinking harder about how coding will change as LLMs change the economics of writing code: - If the cost of delivering a feature is ~0, what's the point in spending weeks prioritizing it?

Writing code and delivering a feature are not synonymous. The time spent writing code is often significantly less than the time spent clarifying requirements, designing the solution, adjusting the software architecture as necessary, testing, documenting, and releasing. That effort won't be driven to 0 even if an LLM could be trusted to write perfect code that didn't need human review.


I agree with your point about finding a new standard for what developers should do given LLM coding. Something that mattered before may not be relevant in the future.

My experience so far boils down to four things: APIs, function descriptions, overall structure, and testing. In other words, ask a dev to become an architect who defines the project and lays out the structure. As long as the first three are well settled, code-gen quality is pretty good. Many people believe the last point (testing) should be automated as well. While an LLM may help with unit tests or tests of macro structure, I think people need to define high-level, end-to-end testing goals from a new angle.
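Concretely, the "architect" artifact might be little more than an annotated contract that the generator is asked to satisfy. A hedged sketch in TypeScript, with all names (RateLimiter and its methods) invented for illustration:

    // Human-authored contract; the LLM fills in the implementation.
    // Names and semantics here are hypothetical examples.
    export interface RateLimiter {
      /** Returns true if `key` may proceed, false if it is over its limit. */
      allow(key: string): boolean;
      /** Forget all state for `key` (e.g., after a manual reset). */
      reset(key: string): void;
    }

    // The end-to-end testing goal is also stated by the human, as a
    // property: "no key is ever allowed more than `limit` times per window."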


The question is whether the savings from treating code as a borderline black box balance out against the extra QA that approach requires (including automated tests).

Just as strong typing reduces the number of tests you need (because it shrinks the scope of potential errors), the scope of potential errors increases enormously when you can't assume the writer is rational.
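To make the typing half of that concrete: when invalid states can't be expressed, the tests that would have probed them never need to exist. A small sketch in TypeScript (the Status type and functions are invented for illustration):

    // Stringly-typed: any string compiles, so tests must probe bad inputs.
    function setStatusLoose(status: string): void {
      console.log(`status set to ${status}`);
    }

    // Union-typed: invalid states don't compile, so those tests vanish.
    type Status = 'queued' | 'running' | 'done';
    function setStatus(status: Status): void {
      console.log(`status set to ${status}`);
    }

    setStatus('done');     // ok
    // setStatus('dnoe');  // compile error: not assignable to type 'Status'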


Black box designs beget black swan events.



