I agree with your first point; maybe AI will close some of those gaps with future advances, but I think a large part of the damage will have been done by then.
Regarding the memory of reasoning from LLMs, I think the issue is that even if you can solve it in the future, you already have code for which you've lost the artifacts associated with the original generation. Overall I find there's a lot of talk (especially in the mainstream media) about AI "always learning" when they don't actually learn anything new until a new model is released.
> Why does it require 100% accuracy 100% of the time? Humans are not 100% accurate 100% of the time and we seem to trust them with our code.
Correct, but humans writing code don't lead to a bus factor of 0, so it's easier to go back, understand what went wrong, and address it.
If the other gaps mentioned above are addressed, then I agree that this also partially goes away.
> Regarding the memory of reasoning from LLMs, I think the issue is that even if you can solve it in the future, you already have code for which you've lost the artifacts associated with the original generation. Overall I find there's a lot of talk (especially in the mainstream media) about AI "always learning" when they don't actually learn anything new until a new model is released.
But this already exists! At work, our codebase is full of code where the original reasoning is lost. Sometimes someone has forgotten, sometimes the person who wrote it is no longer at the company, and so on.
> Correct, but humans writing code don't lead to a bus factor of 0, so it's easier to go back, understand what went wrong, and address it.
But there are plenty of instances where I work with code that has a bus factor of 0.
The conclusion of your article is that vibe coding is "fundamentally flawed". But every aspect you've described about vibe coding has an analog in normal software engineering, and I don't think you would claim that is "fundamentally flawed".
I can't speak for the author, but I would definitely claim that having a bus factor of zero for any remotely-mission-critical piece of software is "fundamentally flawed", no matter the cause. I'd say the same for a bus factor of one in most settings.
I think that's moving the goalposts. The original post never talks about vibe-coding mission-critical software - and I wouldn't advocate for that, either. The post says that all vibe coding is "fundamentally flawed".
That's fair, and I agree with you that the generalizations in the article's conclusion go too far.
I added the "remotely-mission-critical" qualifier to capture additional nuance. Tolerance for a low bus factor should be inversely correlated with a project's importance. That wasn't explicitly stated in the article, but it seems uncontroversial, and I suspect the author would agree with me.
> But there are plenty of instances where I work with code that has a bus factor of 0.
Do you think this is a problem?
As per my other replies, if all of these instances are in completely unimportant projects, then I could see you answering "no" (but I'd be concerned if you're spending a lot of time on unimportant things). If they are important, isn't the fact that knowledge about them has been lost indicative of a flaw in how your team/company operates?
I do think this is a problem. But the article goes one step further than claiming it's a problem - it claims that a bus factor of 0 (and, by extension, vibe coding) is "fundamentally flawed". I don't think that a bus factor of 0 is indicative of a "fundamental flaw".
I read the article's argument as two claims:

1. If a process necessarily results in bus factors of zero, that process is flawed.
2. The nature of vibe coding is such that it always produces code with a bus factor of zero (i.e. this is a "fundamental" fact of vibe coding).
I definitely agree with the first point, and I think I agree with the second as well (at least if "vibe coding" carries its original implication that you don't even look at or care about the code produced by the LLM).
Did you have a different interpretation? Or do you disagree with one of these points?
It doesn't seem a priori evident that producing a bus factor of zero is bad. For instance, using a library which is not maintained has a bus factor of zero, and that doesn't seem "flawed" to me. I think the problem I find with the author's statements, and yours, is absolutism. Few things in engineering are ever truly 100% right or 100% wrong choices. Using a library with no maintainers is a decision with tradeoffs, but in some contexts those tradeoffs would be worth it. Similarly, vibe coding code that has a bus factor of zero is also a decision with tradeoffs, and sometimes those tradeoffs are worth it.
As for the second point - is it really so hard to read the code that an LLM produces? I am continuously reading the output from LLMs for any code which is even remotely important. Again, this is another decision with tradeoffs. Sometimes I vibe-code a 500 LOC script, but I can manually verify the script's output, so there is no reason to read every line. Sometimes I'm working on more important code that must be right; then I typically inspect it line by line, like a code review.
Document. Whenever a member of the team leaves, you should always make them document what is in their head and not already documented (or even better, make documenting a required part of normal development).
The thing to understand is that every (new) AI chat session is effectively a member entering and leaving your team. So to make that work well you need great onboarding (provide documentation) and great offboarding (require documentation).
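Concretely (and purely as a sketch of the idea, with ask_model as a hypothetical stand-in for whatever LLM API you actually use), the onboard/offboard loop could look something like this:

```python
from datetime import date
from pathlib import Path

DOCS = Path("docs")

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM API you actually use.
    raise NotImplementedError

def onboard() -> str:
    """Onboarding: start every session by handing it the existing docs."""
    context = "\n\n".join(p.read_text() for p in sorted(DOCS.glob("*.md")))
    return "Project documentation:\n\n" + context

def offboard(transcript: str) -> None:
    """Offboarding: make the session document its reasoning before it ends."""
    summary = ask_model(
        "Summarize the design decisions made in this session, "
        "including alternatives you rejected and why:\n\n" + transcript
    )
    log = DOCS / "decisions" / f"{date.today()}-session.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    log.write_text(summary)
```

The point isn't the specific mechanics; it's that the documentation the session consumed and the documentation it produced both outlive the session, the same way onboarding and offboarding docs outlive an employee.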