Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

No, I 100% don't think it will happen.

LLMs have made the value of content worth precisely zero. Any content can be duplicated with a prompt. That means code is also worth precisely zero. It doesn't matter if humans can understand the code, what matters is if the LLM can understand the code and make modifications.

As long as the LLM can read the code and adjust it based on the prompt, what happens on the inside doesn't matter. Anything can be fixed with simply a new prompt.



But how do you know that it's "fixed", if you don't understand the code?

You can have functional tests, sure, but if there's one thing that LLMs (and AI in general) is good at, it's finding unconventional ways to game metrics.


TDD is perfect for vibe coding.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: