They come from science. Engineering applies laws, concepts, and knowledge discovered through science. Engineering and science are not the same; they are different disciplines with different expectations for outcomes.
My take on this is that, from a SW development POV, user stories are not the right unit of work. Instead, I treat user stories as "Epics". Stakeholders can track that Epic for progress; it is the unit of work from their POV.
Internally, the team splits Epics into "Spikes" (figure out what to do) and "Tasks" (executing on the things we need to do).
- Spikes are scoped to at most 3 days, and their outcome is usually a doc plus either a follow-up Spike or Tasks to execute.
- Tasks must be as small and unambiguous as possible (within reason).
Well OK, but that's just the same thing with extra steps.
The point I'm making is that there are large cross-cutting concerns that shouldn't be sliced up by feature, but rather that the features should arise out of the composition of the cross-cutting concerns.
A single user story commonly requires the holy trinity of UI, 'business logic' and data storage, and my contention is that it's more efficient and robust to build those three layers out holistically rather than try to assemble them from the fragments required for all the user stories.
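To make that concrete, here is a minimal sketch (all names hypothetical, Python for brevity) of what composing features out of a holistically built layer can look like: the storage layer is designed once as a coherent unit, and separate user stories then compose it, instead of each story growing its own persistence fragment.

    from dataclasses import dataclass

    # Hypothetical storage layer, designed once as a coherent unit.
    @dataclass
    class Order:
        id: int
        customer_id: int
        total_cents: int

    class OrderStore:
        def __init__(self):
            self._orders = {}

        def save(self, order):
            self._orders[order.id] = order

        def by_customer(self, customer_id):
            return [o for o in self._orders.values()
                    if o.customer_id == customer_id]

    # Two separate user stories compose the same layer instead of
    # each one adding its own storage fragment.
    def order_history(store, customer_id):
        return store.by_customer(customer_id)

    def customer_spend(store, customer_id):
        return sum(o.total_cents for o in store.by_customer(customer_id))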
Our job as SWEs is to convert the vertical slice of functionality into something that fits well and robustly in the various technical layers that need to be touched.
The process I outlined above explicitly creates the space for SWEs to consider the wider implications of the required changes to the architecture and to make them robust.
Part of that is understanding the roadmap and the mid-term product vision, so that the tech layer can be built, step by step, toward something that fits that vision.
Hasn't balancing quality (in this context, due diligence) and speed (AI code gen) been the name of the game in the industry forever? Management should have enough experience by now to understand the trade-off.
Sure, but for the most part people don't use them, because you don't have to; Python method calls are always potentially polymorphic, unlike Golang method calls.
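A minimal Python sketch of what "always potentially polymorphic" means here: any object with the right method works at the call site, with no interface declaration required, whereas in Go the parameter would have to be declared as an interface type for the call to dispatch.

    import math

    class Circle:
        def __init__(self, r):
            self.r = r

        def area(self):
            return math.pi * self.r ** 2

    class Square:
        def __init__(self, side):
            self.side = side

        def area(self):
            return self.side ** 2

    # Duck typing: resolved at runtime by attribute lookup, so any
    # object with an area() method works; no interface needed.
    def describe(shape):
        return f"{type(shape).__name__} area = {shape.area():.2f}"

    print(describe(Circle(1.0)))   # Circle area = 3.14
    print(describe(Square(2.0)))   # Square area = 4.00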
I have said many times to teammates: the only code that is perfect is the one that hasn't left our minds. The moment it's written down it becomes flawed and imperfect.
This doesn't mean we shouldn't try to make it as good as we can, but rather that we must accept that the outcome will be flawed and that, despite our best intentions, it will show its sharp edges the next time we come to work on it.
Math can be greasy and messy. Definitions can be clumsy in a way that makes stating theorems cumbersome, the axioms may be unintuitive, proofs can be ugly, and they can even contain bugs in published form. There can be annoying inconsistencies, like the optional constant factors in Fourier transforms, or the competing Hamilton vs. JPL quaternion conventions.
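To make the Fourier point concrete, here are three common conventions for the "same" forward transform; the inverse transform then has to absorb whatever factor of 2*pi the forward one leaves out:

    % Three common conventions for the forward Fourier transform.
    \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
    \hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
    \hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt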
Yes, prototypical school stuff like Pythagoras is "eternal", but a lot of math is designed, and the design can be ergonomic or not. Better notation can suggest solutions to unsolved problems. Clumsy axioms can hide elegant structure.
I think applied mathematicians started to encounter this reality of the impure world the first time someone taped a dead moth into the logbook of the Harvard Mark II.
Do you like writing all the if, def, public void, import keywords? That is what I’m talking about. I prefer an IDE for Java and other verbose languages because of the code generation. And I configure my editors with templates and snippets because I don’t like to waste time entering every single character (and I learned vim because it lets me act on bigger units: words, lines, whole blocks).
I'm not bothered by if or def. public void can be mildly annoying, but it's also fast to type, so it doesn't really bother me. For import I always try my best to have some kind of auto-import. I too use vim and use macros for many things.
To be honest, I'm more annoyed by having to repeat constructor parameters three times (the argument list, the member declaration, and the assignment), and I have a macro for that.
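For reference, the repetition in question, and how it can disappear when the language generates the constructor (a Python sketch; in Java the same field shows up three times, as the declaration, the constructor argument, and the this.x = x assignment):

    from dataclasses import dataclass

    # Manual version: each field appears in the parameter list and
    # again in the assignment (three times in Java-style languages).
    class PointManual:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    # Generated version: each field is named exactly once, and
    # __init__ is derived from the declarations.
    @dataclass
    class Point:
        x: float
        y: float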
The thing is, most of the time I know what I want to write before I start writing. At that point, writing the code is usually the fastest way to the result I want.
Using LLMs usually requires more writing and more iterations, plus waiting for whatever it generates, reading it, understanding it, and deciding whether that's what I wanted; and then it suddenly goes crazy halfway through a session and I have to start over...
This is how I feel. I mentioned this to a couple of friends over a beer, and their answer was that there are currently many in the industry who are not "decently competent programmers", and that they benefit immensely from this technology, at the expense of the stability and maintainability of the systems they are working on.
For whatever it is worth, I have reached the same conclusion and I have been building systems like you describe for the last few years.
Recently I changed jobs. In the new team they love their ORM with all the foreign keys and direct mapping between business logic data objects and the database schema.
Needless to say, it is a mess. Despite all the automated tooling they have around migrations, which "should make it easy" to change things, the reality is that the whole application depends on what the database schemas look like, and migrating the existing representation to a new one would break everything.
This has become an elephant in the room that nobody talks about and everyone works around.
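For contrast, a minimal sketch (hypothetical names) of the decoupled style the parent comment describes: the business logic sees its own domain type, and a thin mapping layer is the only code that knows the schema, so a migration only touches the mapper.

    from dataclasses import dataclass

    # Domain object: what the business logic sees; it knows nothing
    # about table layout.
    @dataclass
    class Customer:
        id: int
        display_name: str

    # Mapping layer: the only place that knows the current schema.
    # If a migration splits "name" into first/last columns, only
    # this function changes.
    def customer_from_row(row):
        return Customer(id=row["id"], display_name=row["name"])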
Then, there are companies that ran a bunch of them, which lowered the ratio even further.
IMO, it's more effective, cheaper, and easier to mod smaller forums (be they web communities or game server communities) than huge ones.