It's not an assumption of correctness. It's an assumption of purpose. Chesterton's fence[1].
Let's say you inherit a large codebase, and it in fact does not have any tests, and is relatively complex and convoluted. Obviously that's not going to be very maintainable without testing, and it may make sense to do a rewrite...
But first, write tests. Test your assumptions. Read the commit history and see how the codebase evolved over time. Maybe the reasons it was built the way it was are dumb ("I like naming my variables after my favorite flavors of pie!") or maybe it was done for really good reasons ("The database library breaks with odd numbers of connections, so the pool always has to be an even number."). The issue is that you simply don't know, and you risk repeating your ancestors' mistakes unless you do some investigation.
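As a rough sketch of what that first test might look like - JUnit 5, with ConnectionPool as a hypothetical stand-in for whatever the legacy class is actually called - a characterization test can pin that "even pool size" invariant down before any rewrite:

    // Hypothetical characterization test for the "pool size must be even" rule.
    // The goal is to document existing behavior, not to judge it.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ConnectionPoolCharacterizationTest {

        @Test
        void oddRequestsEndUpWithAnEvenPoolSize() {
            ConnectionPool pool = new ConnectionPool(7); // odd request
            // Pin down the current behavior so a rewrite can't silently change it.
            assertEquals(0, pool.size() % 2);
        }
    }

If a test like that fails, that's information too: either the invariant never held, or it's enforced somewhere you haven't found yet.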
You're assuming there's always 'commit history' to review :)
I completely agree you need to do investigation, and document what you can about the system. Some of that documentation will take the form of tests. Without a doubt.
The 'assumption' I refer to is either clients or other devs assuming something is correct. I've lost track of the number of times I've heard "it was working fine until 2 days ago", when, in fact, it was never actually working, just not throwing a visible error until 2 days ago.
I will look at the fence - I've seen references to it before.
"until the reasoning behind the existing state of affairs is understood." - you may also need to realize the original rationale may never quite be understood. I've hit this a few times, and we've ended up just scrapping a particular set of functionality because no one could actually tell why it was there any more - everyone involved who may have used it or wanted/needed it is gone, and it's useless (or is now a blocker for other progress).
I think you are making some great points here. I have seen code commits that were mostly "right-click in Eclipse and commit all modified files", irrespective of whether they were local config changes or preferences.
The senior devs on projects I've worked on have built 10-level-deep 'strategy patterns' for future enhancements which still had hardcoded values underneath and were absolutely non-extensible.
So in my book, "the senior developers knew what they were doing" comes with a huge assumption. I have noticed many sr engineers simply had delusions of grandeur far beyond their programming skills.
> So in my book, "the senior developers knew what they were doing" comes with a huge assumption.
This is another big assumption, and given that we know 'title inflation' is a real thing in our industry, "sr" just doesn't seem to mean much. I'm seeing lots of job postings today for a "sr foo engineer" that ask for only 3+ years of experience. My definition and expectations of 'sr' are far different.
Interestingly, I've only been in a couple of places that even defined what they actually meant by that title - what was expected was written down. It was still open to interpretation, but it gave a baseline to judge you against, and gave jr folks something to shoot for.
> which still had hardcoded values underneath and were absolutely non-extensible.
Don't even get me started on people that just learned the 'final' keyword and abuse the hell out of it. :)
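Purely as an illustration of that combination (Java, names made up), it usually looks something like this: everything locked down with final, while the one value you'd actually want to vary is hardcoded anyway:

    // Illustrative only - a made-up example of the anti-pattern being described.
    // 'final' everywhere blocks subclassing and overriding, yet the value that
    // was supposed to be the "future enhancement point" is hardcoded.
    public final class RetryStrategy {                  // cannot be subclassed
        public final int maxAttempts(final String op) { // redundant 'final' on a method in a final class
            final int attempts = 3;                     // hardcoded anyway
            return attempts;
        }
    }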
[1] - https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence