obviously you're not a devops eng. I think you're wildly underestimating how much pre-ai business-critical code is completely orphaned anyway.
the people who wrote it were contractors long gone, or employees who have since changed companies/departments/roles, or worked on projects that were wrapped up ages ago, or got laid off, or barely understood the code in the first place and certainly don't remember what they were thinking back then.
basically "what moron wrote this insane mess... oh me" is the default state of production code anyway. there's really no quality bar already.
I am a devops engineer and understand your point. But there's a huge difference: legacy code doesn't change. Yeah occasionally something weird will happen and you've got to dig into it, but it's pretty rare, and usually something like an expired certificate, not a logic bug.
What we're entering, if this comes to fruition, is a whole new era where massive amounts of code changes that engineers are vaguely familiar with are going to be deployed at a much faster pace than anything we've ever seen before. That's a whole different ballgame than the management of a few legacy services.
after a decade of follow-the-sun deployments by php contractors from vietnam to costa rica where our only qa was keeping an eye on the 500s graph, ai can't scare me.
That's actually a good comparison. Though even then, I imagine you at least have the ability to get on the phone and ask what they just did. Whereas an LLM would just be like, "IDK, that was my twin brother. I'd ask him directly, but unfortunately he has been garbage collected. It was very sad. Would you like a cookie?"
I wonder if there's any value in some system that preserves the chat context of a coding agent and tags the commits with a reference to it, until the feature has been sufficiently battle tested. That way you can bring them back from the dead and interrogate them for insight if something goes wrong. Probably no more useful than just having a fresh agent look at the diff in most cases, but I can certainly imagine scenarios where it's like "Oh, duh, I meant to do X but looks like I accidentally did Y instead! Here's a fix." way faster than figuring it out from scratch. Especially if that whole process can be automated and fast; worst case you just waste a few tokens.
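fwiw a crude version of this seems doable with plain git notes, no special tooling. a minimal sketch in python, where everything here (the archive dir, the notes ref, the transcript format) is a made-up name for illustration, not any real agent framework's API:

```python
# sketch: archive an agent's chat transcript and attach a pointer to it
# on the commit via git notes, so the "dead" agent can be interrogated later.
import hashlib
import subprocess
from pathlib import Path

ARCHIVE_DIR = Path(".agent-transcripts")   # hypothetical storage location
NOTES_REF = "refs/notes/agent-context"     # dedicated notes namespace

def archive_transcript(commit: str, transcript_path: Path) -> str:
    """Copy the transcript into an archive dir keyed by content hash,
    and record that key as a git note on the given commit."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    data = transcript_path.read_bytes()
    key = hashlib.sha256(data).hexdigest()[:16]
    (ARCHIVE_DIR / f"{key}.json").write_bytes(data)
    subprocess.run(
        ["git", "notes", f"--ref={NOTES_REF}", "add", "-f",
         "-m", f"agent-transcript: {key}", commit],
        check=True,
    )
    return key

def fetch_transcript(commit: str) -> bytes:
    """Resolve the note on a suspect commit back to the archived
    transcript, e.g. to feed a fresh agent as debugging context."""
    note = subprocess.run(
        ["git", "notes", f"--ref={NOTES_REF}", "show", commit],
        check=True, capture_output=True, text=True,
    ).stdout
    key = note.strip().split("agent-transcript: ")[1]
    return (ARCHIVE_DIR / f"{key}.json").read_bytes()
```

then when something breaks you fetch the transcript on the suspect commit and hand it to a fresh agent as context, and once the feature is battle tested you garbage collect the archive dir.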
I'm genuinely curious though if there's anything you learned from those experiences that could be applied to agent driven dev processes too.
it has been amazing to watch how much of agentic ai is driven by "can you write clear instructions to explain your goals and use cases" and "can you clearly define the rules of each step in your process."
Meanwhile new grads can't even start their careers to begin with and are left scrambling to even take a step into adulthood. They missed the boat. Kids aren't even on the horizon. What does that say?
almost. what you're seeing there is the too-cute-by-half smug-nugget-of-wisdom tone, which is really the trademark of the self-styled "writer", but because self-styled writers wrote most of the internet, it has carried over to become the trademark llm tone. but there are still og hacks in the game!