Hacker News
Imaginary problems are the root of bad software (cerebralab.com)
946 points by deofoo on June 18, 2023 | 393 comments


If anything, it's the incentive system in the software industry that's at fault.

1. No designer is given a promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.

2. No engineer is paid extra for keeping the codebase from growing too much. It's rewrites and the effort put into churning out more solutions (than there are problems) that offer a chance to climb the ladder.

3. No product manager can put "Made the product more stable and usable" on their resume. It's all the new extra features they thought up that will earn them a reputation.

4. No manager is rewarded for how lean a team they manage and for getting things done with a tiny & flat team. Managers pride themselves on how many people work under them and how high up the hierarchy they are.

Our industry thrives on producing more solutions than are needed. Effort is rewarded based on conventional measurements, without thinking through which direction that effort was pointed in.

Unless the incentives of everyone involved are aligned with what's actually needed, we'll continue solving imaginary problems, I guess.


> “No designer is given a promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.”

This is a massive change from my first software industry job in 1997.

I was essentially a “design intern who knows HTML” on a team that built a shrinkwrap Windows application for enterprises. The core of the design team was a graphic designer, a cognitive scientist, an industrial designer, and a product manager with industry experience in the customer domain.

The application was Windows native. User research was conducted on site with scientific rigor. Adhering to platform guidelines and conventions was a priority. I think I spent a few weeks redrawing hundreds of toolbar icons so they’d be in line with the Office 97 look (the kind of boring job you give to the junior). If the app stood out on the Windows desktop, that would have been considered problematic.

Today a similar design team would only have a graphic designer and a PM, and neither of them would care in the slightest about platform guidelines or the customer domain. The UI is primarily an extension of the corporate brand. Hiring a cognitive scientist? Forget about it…

Everything certainly wasn’t perfect in the Windows/Mac desktop golden era. But the rise of the web wiped out a lot of good industry practices too.


Remember the times when you could change the theme and all of the apps followed suit?

Even on Linux there were tools to sync the GNOME and Qt looks, so you could have one theme applied to every app for a nice, consistent look, all the way down to how the common icons look.

Nowadays? Every fucking app's gotta have its own different styling. Will the settings icon be three dots, a gear, or a honey badger? WHO FUCKING KNOWS. You'd be lucky if you even get a choice between a light and a dark theme.

But hey, we can write the same code to run on Windows, Mac and mobile! It will work like shit on all of them and be slow, but we don't care!


> Will the settings icon be three dots?

Multiple hamburger menus with a scattering of cryptic icons stuck at arbitrary places on the screen. What does the swirly icon with up arrow do? No text label for you!

Oh and let's move the next button to the top left of the screen and not highlight it. Mmmm that's some good UI design.


Might be a coincidence, but that's a weirdly accurate description of MS Teams.


Not to mention that the wording (at least in Spanish) is awful too.


I remember that theory. I also remember the reality that if you changed your background colour to anything but white, some app, somewhere, was going to become an unreadable black text on black background mess.


> Nowadays? Every fucking app's gotta have its own different styling.

This has more to do with the current state of GUI frameworks than with developer mindset. Microsoft is between GUI frameworks, rumoured to have deprecated everything between Win32 and WPF, and in the meantime they are pushing React Native. Apple doesn't seem to know what to do with desktop environments and is stuck between their legacy Objective-C frameworks, for which they seem to purposely hide any form of documentation, and Swift-based frameworks which are broken out of the box. Linux has a couple of options which have been ugly as sin forever. There's Qt, but their licensing scares away anyone with two brain cells to rub together.

So where are we left?

Well, with webview-based frameworks, which are the worst of both worlds, but at least they don't look half bad.

Except that webview-based frameworks are a far lower-level abstraction than any native widget-based framework. Developers are forced to reinvent the wheel, and this means dropping any standard look and feel, because it's already ugly to start with and takes even more work to get into a working state.

And all you want to do is to provide a GUI for users to click around.


And it got worse with client side decorations. Now even window interactions are app-specific and don't adhere to global settings.


> Even in Linux there were tools to sync gnome with QT look

There are still such tools; you don't have to use the past tense here.


Even better, every app is doing their own styling so they can all look like Discord


Is it just me, or do others find Discord's handling of threads terrible?! I mean, Slack is far from my favourite, but at least they treat a thread like a thread, where I can see the entire conversation in one place instead of visually parsing questions and replies while frantically scrolling up and down.


Discord and Teams both have terrible UX. Discord has the look and feel of a poorly designed game and Teams can't even get highlight and focus right.


Threads in Discord appear in the sidebar usually. Are you thinking of replies?


I was thinking more of seeing all the replies to a comment as a thread, without the explicit "create thread" step, which most people don't use anyway. E.g. I right-click on a comment that has replies and I see the root comment along with the replies in the sidebar.


Oh man I totally forgot about that. Thank you for the reminder.

Total flashbacks to Windows 95 and every so often changing the window colors, text font, etc. for the entire system.

Good times


A lot of native apps on iOS still at least follow the light/dark theme and global font sizes. But I’m not sure if that works by default with Flutter or React Native etc. or if they’d have to implement it explicitly.


> Every fucking app's gotta have its own different styling

Have you considered that the average user never cared and never will?

Having ultra consistent styling across all apps ruins the ability to... you know... sell the software. It gives far too much power to some group of annoying elitist nerds in denial of their opinions preaching UI/UX pseudoscience.

> But hey, we can write the same code to run on Windows, Mac and mobile! It will work like shit on all of them and be slow, but we don't care!

Ain't nobody got time for reading even shittier documentation for badly written OS APIs.


From a person who started using computers in the early 2000s:

THANK YOU!

None of the current SaaS apps I use can come close to the experience of using software from that era.

Take a simple list view in a typical Windows/Mac application:

1. Command-clicking selected multiple objects.

2. Shift-clicking selected a range.

3. Right-clicking brought up selection actions.

4. Double-clicking opened an object.

This pattern was followed in almost all list views, so there was no re-learning and no surprises.

Now can you say the same about the list views of modern web apps?

Can you apply the same list selection experience across Google Drive and Microsoft OneDrive and Apple iCloud? Nope.

That's where we failed as an industry. We let a lot of designers run too wild with their ideas, to put it bluntly.
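To be fair, none of this is hard to reproduce on the web. Here is a minimal sketch of the four conventions above in plain DOM TypeScript, with hypothetical markup (a <ul id="list"> of <li> rows), a made-up "selected" class and placeholder handlers; it's an illustration, not any particular product's implementation:

    // 1-4 below map to the classic list-view conventions.
    const list = document.getElementById("list")!;
    const items = Array.from(list.children) as HTMLElement[];
    let selected = new Set<HTMLElement>();
    let anchor: HTMLElement | null = null; // last plain-clicked row, used for Shift ranges

    function render() {
      items.forEach(it => it.classList.toggle("selected", selected.has(it)));
    }

    // Placeholder handlers; a real app would open the item or show a context menu here.
    function openItem(row: HTMLElement) { console.log("open", row.textContent); }
    function showSelectionMenu(sel: Set<HTMLElement>, x: number, y: number) {
      console.log("actions for", sel.size, "item(s) at", x, y);
    }

    list.addEventListener("click", (e: MouseEvent) => {
      const row = (e.target as HTMLElement).closest("li");
      if (!row) return;
      if (e.shiftKey && anchor) {
        // 2. Shift-clicking selects the range between the anchor and this row.
        const [a, b] = [items.indexOf(anchor), items.indexOf(row)].sort((x, y) => x - y);
        selected = new Set(items.slice(a, b + 1));
      } else if (e.ctrlKey || e.metaKey) {
        // 1. Ctrl/Cmd-clicking toggles individual rows in and out of the selection.
        if (selected.has(row)) selected.delete(row); else selected.add(row);
        anchor = row;
      } else {
        // A plain click selects just this row.
        selected = new Set<HTMLElement>([row]);
        anchor = row;
      }
      render();
    });

    // 3. Right-clicking brings up actions for the current selection.
    list.addEventListener("contextmenu", (e: MouseEvent) => {
      e.preventDefault();
      showSelectionMenu(selected, e.clientX, e.clientY);
    });

    // 4. Double-clicking opens the object.
    list.addEventListener("dblclick", (e: MouseEvent) => {
      const row = (e.target as HTMLElement).closest("li");
      if (row) openItem(row);
    });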


The problem isn't really with designers per se (though they played a role). The problem is with the web as an application delivery platform - it was never designed for this.

Windows and macOS both published design guidelines that they expected native apps to follow. Most native apps do (the few that don't either have a good reason, or are unique enough that their customers don't care - think Photoshop).

With the advent of the web, such guidelines no longer mattered, because the controls and UI elements are all custom - HTML is not an application GUI library!

So every man and their dog has a different UX and UI interaction built for their own app, because the web encourages it. The designers are also at fault for not standardizing on a set of common UI widgets, but I cannot blame them, as this isn't the easiest path.


> design guidelines that they expected native apps to follow.

It almost sounds old-fashioned in 2023 to talk about usability, affordance, and user-experience.

Part of being a native application was/is for the application to look and behave like the rest of the user interface. Standards are important because learning how to use a tool is important. Users are important.

Software has become a way to make users miserable. Oh, and while confusing them, throw some advertising at them too. ^_^


And everything had a keybind, so if you worked in the software every day you could be as fast as the CLI nerds.


Absolutely... closing a popup with 'Esc'? Naah, wasn't requested, so the message stays up.


And not even be slowed down by animations.


I read "even" as "ever" - still made complete sense.

Animations may seem fine and fun the first time you encounter one, but when you think you'll have to suffer through it every time an action is taken, it becomes a whole different story.


Also:

CTRL+A selects all

CTRL+Shift+End selects all from where you are to the end

CTRL+Shift+Home selects all from where you are to the top

One of the (many) problems of web UIs is they often ignore the keyboard completely.
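And honouring these in a custom web list widget isn't much work either. A rough sketch, plain DOM TypeScript, assuming a hypothetical focusable (tabindex) <ul id="list"> of <li> rows and a made-up "selected" class; the selection state here is my own simplification, not any framework's API:

    const list = document.getElementById("list")!;
    const rows = Array.from(list.querySelectorAll("li"));
    let cursor = 0;                         // index of the currently focused row
    const selectedRows = new Set<number>(); // indices of selected rows

    function render() {
      rows.forEach((r, i) => r.classList.toggle("selected", selectedRows.has(i)));
    }

    list.addEventListener("keydown", (e: KeyboardEvent) => {
      const ctrlOrCmd = e.ctrlKey || e.metaKey;
      if (ctrlOrCmd && e.key.toLowerCase() === "a") {
        // Ctrl/Cmd+A: select everything.
        rows.forEach((_, i) => selectedRows.add(i));
      } else if (e.shiftKey && e.key === "End") {
        // Shift(+Ctrl)+End: extend the selection from the cursor to the last row.
        // (A flat list has no line/document distinction, so Ctrl changes nothing here.)
        for (let i = cursor; i < rows.length; i++) selectedRows.add(i);
        cursor = rows.length - 1;
      } else if (e.shiftKey && e.key === "Home") {
        // Shift(+Ctrl)+Home: extend the selection from the cursor to the first row.
        for (let i = cursor; i >= 0; i--) selectedRows.add(i);
        cursor = 0;
      } else {
        return; // anything else falls through to the browser/OS defaults
      }
      e.preventDefault();
      render();
    });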


Or as is increasingly the case: they actively hijack what should be system-wide keybindings - making it even worse.


>CTRL+Shift+End selects all from where you are to the end

>CTRL+Shift+Home selects all from where you are to the top

Those two don't need CTRL, just so you know.

>One of the (many) problems of web UIs is they often ignore the keyboard completely.

They are also starting to ignore the mouse.


> Those two don't need CTRL, just so you know

Well, yes they do, only it's different with and without CTRL. For example in a text editor (or a normal webpage without any fancy JS):

- Shift+End goes to the end of the line

- CTRL+Shift+End goes to the end of the document

and the same is true for Home (substitute "end" with "beginning").

I have seen people, even "technical" people, select text with the mouse from the middle of a large Word document to the end, because they didn't know this.

They also had to do it often (several times a day) and it had become a significant part of their workload...


Emacs would like to have a word.


Emacs is a text ecosystem. And it's trivial to add these shortcuts. Evil[0] basically rewires everything to be Vim.

[0]: https://github.com/emacs-evil/evil


It is a desktop application that doesn't adhere to the user experience guidelines of the OS it runs on. I'm an emacs user and fan.


I can't help but feel that Agile is at least partly to blame for this. Things like the ones you're describing usually don't come under the “minimum viable product” purview and thus get pushed out indefinitely until the product is at the “very mature” stage. At that point there's the risk that the product will be re-written and the cycle reset again.


Nah. If anything a lot of these trends are directly anti-agile, e.g. avoiding labels and using icons so that it's easier to translate your app, even though the icons take longer initially and you're never actually going to translate your app.


I’m not really arguing that. I’m saying there is a trade-off between having the common OS feature set described above (Ctrl/Cmd-select, etc) and implementing/iterating a web/mobile product quickly. Often standard UI paradigms are never implemented or implemented inconsistently. AWS, widely known for their Agile practices, might be the epitome of this with their web console. Sometimes I can sort columns by clicking on them, sometimes I can’t. Some products allow shift or control click and some do not. Etc etc. I can only assume the product teams are doing their best within their constraints but as an end user it’s a piece of suck.


The problem with the controls is mostly that companies don't want to pay for Qt (a multiplatform toolkit), so instead every company (badly) implements its own controls in HTML to save money. I suspect they ultimately waste much more money than they save.


Qt isn't even on the radar of most companies currently building multi-platform applications in HTML. And it won't be soon, for two main reasons: developers able to use it are expensive, and most Qt applications in the wild still have an "uncanny valley" look and feel about them on every OS but Linux.

Not to mention that with SaaS being more profitable than selling unlimited-use licenses, a lot of apps also have HTTP backends and webapp versions, which can share a lot of HTML/CSS/JS code with a browser-based desktop version. Think of Slack/Discord/VSCode/etc. Sure: Qt, Flutter, etc also have web versions, but they just don't look/feel as good in the browser as an HTML app normally can.

If you want a "Premium" native look and feel, you gotta go directly to the source: native APIs. Qt won't do it without a lot of work. Lots of companies have separate Android and iOS teams. Or they go directly to HTML when there's not enough cash (or even things like Flutter, which looks ok on mobile). High-quality macOS apps, like those made by Panic, Rogue Amoeba, etc., use Cocoa directly.

Standardized controls and shortcuts unfortunately end up being collateral damage in all of this.


All this sounds great from the perspective of retail shrink-wrap software. Today, the major use cases for Qt on the desktop are in-house corporate software. And no one cares about the "uncanny valley". Only HN and online nerds care about that stuff. Average, non-technical corporate users care about function, not form. Hell, they are happy with a VBA app! Also, lots of in-car and in-seat (aeroplane) entertainment systems are built in Qt.

> If you want a "Premium" native look and feel

Who is doing this in 2023? No one outside my examples above.


There's a reason in-house corporate software started becoming web based 15-or-so years ago: web is cheaper, easier to hire for, faster to develop and deploy, doesn't require installation or updates, easier to troubleshoot, doesn't need special pipes for communicating with the server (yeah I remember the days of DCOM, CORBA, WCF and other weird protocols instead of HTTP).

It is things like Qt that only HN and online nerds care about.

Don't get me wrong: I really like well made native apps and I think they're great. But Qt apps are almost never as great as software made with native toolkits directly.


No, long term, web apps are more expensive, because the front end needs to be completely re-written every few years. A desktop app written in Qt can be kept running for a decade with very little work. (This assumes no Qt library upgrades, which for in-house apps are mostly unnecessary.)

The primary advantage of web apps is "zero install", which is a major point of friction at large corps.


> ...needs to be completely re-written every few years.

No it doesn't; the HTML spec hasn't changed, whereas Qt has. The issue is how the webapps were built in the first place. You don't need Next.js and all its friends to post a form to a backend once an hour, but somehow that is becoming the standard approach...


“because the front end needs to be completely re-written every few years”

Citation needed. I'm currently working on a jQuery+CoffeeScript app that still works like a charm.

Like the sibling post says, Qt itself had breaking changes that HTML/JS/CSS didn’t.


> Who is doing this in 2023? No one outside my examples above.

For macOS, check apps made by Apple (Logic, Final Cut), Panic, Rogue Amoeba, or apps like Pixelmator, Affinity, TablePlus, Tower, Dash... there's more.

There are even open source ones like IINA that provide a premium look and feel.


> If you want a "Premium" native look and feel, people gotta go directly to the source: native APIs.

I've seen this regurgitated a hundred times, but I don't really know any more what that "native look and feel" is. On Windows, it might be the old Windows 95 controls, but those are quite limited. What system today is made of dropdowns, checkboxes and OK buttons?


Well, yeah. Windows is... special. Even Microsoft stopped using the normal controls. But the problem is that Qt looks like an uncanny valley version of those Windows 95-style apps.

But by Premium I'm talking more about macOS, iOS and Android.

For macOS, check apps made by Apple, Panic, Rogue Amoeba, or apps like Pixelmator, Affinity, TablePlus, Fantastical, Tower, Dash. Or even "non-flashy" apps like Pacifist.

Those all use Cocoa and have a bit more of a slick appearance, even though they're mostly using native controls, with very few special parts here and there.


According to Microsoft, native look and feel includes tiny tiny dialog boxes with even smaller panes inside that can't be resized, so yeah, not going to bother with that


Qt has the uncanny valley problem... do you think people using HTML care to make things look native, then?


No. Why would they? The reason HTML apps don't suffer from the uncanny valley problem is that they don't try to look native at all; it's just something totally different.

When they suck it's not because it looks "almost but not quite there".

But there are exceptions: when Cordova/Ionic tries to imitate the look of iOS/Android's native controls. Then we have an uncanny valley problem.


> None of the current SaaS apps I use can come close to the experience of using softwares from that era.

I'm afraid you're looking at the past with rose-tinted glasses. In general, software back then sucked hard. User experience did not exist at all. Developers bundled stuff around, and users were expected to learn how to use software. Books were sold to guide desperate users through simple user flows. Forms forms forms forms everywhere, and they all sucked, no exception. Forget about localization or internationalization. Forget about accessibility. Developers tweaked fonts to be smaller to shove more controls into the same screen real estate, and you either picked up a magnifying glass to read what they said or you just risked it and clicked on it anyway.

Software back then sucked, and sucked hard. Atrocities like Gimp were sold as usability champions. That's how bad things were.


Those four actions worked for me on the Google Drive web interface


Great! Now try it in any other list view in any other app.

Maybe the list of docs in https://docs.google.com?


I'm on my phone so it's hard to check, but Gmail's ctrl- and shift-clicks work really well and are intuitive to me. I'm shocked they wouldn't use the same mechanics everywhere.

Best example: Gmail is one of the only webapps where I'll ctrl-click a few items at the top of the list, ctrl-click a few in the middle, and then shift-click to the bottom, and it works exactly how I'd expect - everything stays selected and the shift-click selects from your previous click to the next item. I think it gets wonky if you change directions, but I can't imagine how I'd expect ctrl-clicking items 1, 2, 9, 10, then shift-clicking 4 to work.


For myself, I'd expect the items between 4 and 10 to be added to the selection: everything but 3 would be selected.

Having just checked, this is indeed what the file explorer on my desktop does, and I'm pretty sure windows file explorer does the same.


Today they might have a psychologist on the team to research which buttons may serve as dopamine triggers so you're a lot more likely to upgrade to Premium before thinking it over


Right. Making people click on things they didn’t mean to and buy things they didn’t want — those goals were not even a part of the 1990s UI paradigm.


1990s design was all about preventing people from accidentally doing actions they might not have wanted to. Even down to the “Are you sure you want to quit” dialogues that were all the rage.

It’s sad just where we are now compared to the design goals of old.


There's been a large influx of people chasing dollars and status; they could not care less about the product or who uses it. You can still find orgs that do care, but they're swiftly pushed out by the VC-funded "growth at any cost" model or are acquired into oblivion.

The indie scene is looking excellent, however: a lot of programmers who have already made their money seem to be pushing out excellent hobby projects.


There is a fundamental difference between working on software products vs bespoke software development:

In the former you make money from selling the result, in the latter you make money from selling the hours spent creating the thing.

If the former is unusable it will lead to bad sales. In the latter it might even lead to additional hours sold in change requests.

The former is often bought after evaluation and comparison by the user of the software. The latter is sold as a project to an executive who will never have to use the software.


Microsoft basically used to do this too. We can see that they clearly don't anymore.


I have another theory: it's all about the screen size. When you had only 320x240 to 1024x768, you simply HAD to think about UX (in the sense of "How can I fit all this info in this little space?").

Now you don't have to. So no one does.


This is true in a sense, but we now have a different restriction, called a mobile view.


That sounds expensive AF though. Was that really necessary?

I’m all for craftsmanship, but having four top guys fiddling with design.. it all depends on the domain I guess.


Considering most of us here are ready to murder a few designers for wasting so much of our time now?

Yes, absolutely necessary cost.


Uh ... plenty of companies have a UX team. They're mostly all graphic designers. "four guys fiddling with design" is nothing. Either you care about having a well-designed UI or you just want a pretty one with a high conversion rate, and who you hire reflects what you really value.


Uber was a massive example of this. The best engineers kept their head down and just tried to keep things afloat, fixing bugs, etc.

However, a large and insidious cadre of B-tier engineers was constantly writing docs proposing meaningless, arbitrary system changes and new designs for the sake of the changes themselves. These new projects were the only way to get promoted. The entire e4->5->6->7 track was written in such a way that it only ever encouraged “TL”/“architect” types to grow.

This led to constant churn, horrible codebases, and utter bullshit self made problems which could only be solved by digging a deeper hole.

There are companies who handle this well. Ultimately it comes down to engineering culture.


The career ladder is among the biggest fuck-ups of the tech industry. It incentivizes bullshittery more than actual innovation. There are more rewards for BS RFCs than for keeping the ship running.


I had a completely different view of RFCs before coming into contact with some peers who followed this approach to the letter. RFCs for such small issues would take 10 pages and barely mean anything. Of course they would be praised by upper management (it didn't matter that the RFCs would be ignored most of the time).


> No engineer is paid extra for keeping the codebase from growing too much.

I am. I'm paid more than most developers to run a team doing just this. We make minimal change, have an absolutely non-negotiable focus on stability and minimalism and reject any change that isn't absolutely driven by validated majority user need. Even then, the bar is high.

I'm not saying this is a common situation, but it certainly isn't rare in my experience. Software spans a vastly wide scope of types and requirements. I'm paid to be ruthless, and to know what ruthless looks like in terms of delivering consistently without downtime/data loss/critical issues.


Confirming a hypothesis I put forth in a different comment: Would you say you and your team have ownership of the product?

That is to say, there isn't one team doing what you're doing and then another separate team trying to graft new features on all the time, is there? Maybe there is, and maybe that causes issues down the line.


We do have ownership, and I try and structure the development such that every engineer has ownership, decision making power and accountability. I aim for a flat responsibility structure as much as possible. We have lots of work to do, lots of changes in process and despite the constraints we have a steady stream of features we do add.

The trick is to ensure the culture of solid engineering goes right through the organisation and informs everything from commercial/financial through to QA.


Are you guys hiring? I pride myself on writing short, simple and readable code.


Minimal change, or minimal code? Refactoring can make code smaller but depends on good testing. Applying minimal changes results in redundant and complicated code, but is less likely to break existing functionality.


In the first instance, both. However, I'd take more code that was better reasoned and easily understood over less verbose code that was smaller for its own sake.

In terms of minimal change, we refactor when there's a clear business case to permit taking on the risk. Otherwise, we make the most minimal, most stable, least risk change to the existing code even if that code isn't optimal/pretty/well-structured/has-errors/...

Like most other engineering in the world really.


IME this can be hard if built on a platform or dependencies one doesn't control, which is common at early stage companies.

Because often the dependencies require tracking the latest, or close-enough, versions to maintain a secure system, or to avoid stalls when jumping major versions. Sometimes even core languages and standard libraries may require staying at least near the latest versions.


This is true, but in our case all dependencies are vendored and frozen.

We _do_ have instances where target systems for deployment become ABI/API incompatible with the libraries, which is rare and happens roughly every 5 years.

The project was structured to put stability at the core, rather than being cutting edge.


What kind of software do you work on? At what company?

I have seen low level parts that are managed well, because employees have skin in the game

But I’ve also seen a lot of what this post is talking about


I'm in the same boat as GP. I work in finance. CTO said he wanted to keep things simple and stable, so I do it.

It is only possible because I'm a technical manager myself and I have a very competent (and technical) product manager working with me.

But for every person like me, there are 10 other devs trying to cram every pattern from the GoF book in their corner of the codebase, so I have to spend my scarce time reviewing PRs.


Am GP, it is finance based, but not commercial finance.

Most of the systems handle complex calculations. The system is a monolith that has been around for 15 years or so.

It isn't cool. It isn't pretty. Lots of it would be better for a refactor, but absolute stability is the goal. Refactoring things may result in long-term cost savings, but with risk. The business has no risk appetite, so it doesn't make sense. If it works, it stays, however ugly and costly.

That isn't to say some things don't get refactored, but there's a strict business case that needs to be met: usually if the system is underperforming, error-prone, or end users want features/performance that can't be accommodated without a refactor.

It's nice. The latest framework isn't being integrated year after year, there's no microservices, nothing fancy.

It's Java. It's tested. It works. It makes money. It pays.


Interesting. I believe there is something about finance that allows or forces things this way. There's much less fidget-spinning and much more business in it for some reason.


Regulations (as the sibling comment mentioned) are one aspect, but the cost of screwing up is also real-world. A status quo system that isn't screwing up has a much higher bar to replace, given the risk that the replacement system will screw up.

Much of the same reasoning, applied even more extremely, holds in medtech - hence why you see many medical imaging setups still obviously running a version of Windows XP.


Regulation and risk. The same applies to government too.

The cost (financial, legal, reputation) of certain classes of bugs is so high that avoiding those risks becomes top concern.

One calculation issue, one buggy float operation means millions or billions in damage and the loss of your clients ... never to return ... because your name is tarnished.

I actually think this is _wrong_ and we need better resiliency in finance: being able to roll back transactions and to rewind and replay with adjustments.


Regulations, I’m guessing.


Regulations are a good excuse, but there is still a lot of fuckery in Finance apps, especially in the Fintech space.

Stability is something that must be culturally hammered and enforced by leadership.

If it's done by stakeholders, the app will look simple but still be a juggernaut of over-engineering underneath.


I work at a consulting firm.

Our salary is loosely linked to what percentage of our work is billable (with leniency for inexperienced staff, who aren't expected to be profitable while they're learning their craft).

If you spend three hours figuring out why things fall apart on the 31st of the month... that generally can't be billed to the client, and therefore it's bad for your salary.

On the other hand, if you spend three hundred hours writing tests and implementing an awesome multi-stage deployment process that avoids one production bug a month? Your manager can totally bill that work (with the right client).


I would argue the billing model, client relationship and everything else commercial isn't running effectively at that firm.

If I were a client, I wouldn't want these perverse incentives to exist. I would want a razor sharp focus on _my_ needs, and assurance that _my_ needs are modelled in the billing.

And for that, I would pay more.


Why is the bug fix not billable, but the test writing is?


Until you have the star developer who starts promising the product manager he can do all the extra features in a week. And then of course none of them actually work decently, or at all. But at that point they must be maintained by the whole team anyway.


Well yes, but I have veto power for that reason, as lead engineer.

I am the lead because I know, from experience, not to allow this kind of nonsense to happen.


I'm also in a similar situation, but I get the call when the thing has been on fire for a while, so it's a lot easier.

I can't imagine a software engineer who develops an interest in defensive software engineering will be very visible until after there has already been a crisis to screw people's heads on straight.

A lot of people seem to see “Do things that don’t scale” and think that’s a phrase meant for engineering.


For that to happen, two things must match: a product guy who knows your job, and you knowing how to make products. It doesn't even have to be stable/featureless, in my experience. New developers tend to worship some new paradigm that focuses on “how” instead of “what”, which is all paradigms can do. And once they're in, it goes downhill, because the how dominates the what. Add a clueless product guy into the mix and it loses all limits, including budget. In the end they proclaim “software is hard” and move on.


I'm not GP but I'm in a similar situation, and yeah, this is how I do it.

My Product Manager is competent both in technical and product/design matters and is also able to call BS on complexity for complexity sake. I ensure that the development part is focused.

New developers have to prove with technical and business arguments any new paradigm or random refactoring they want to do. If there is no immediate need, we just skip it.


I'm fortunate enough to be both the lead engineer and the person making product/design/feature decisions with the engineering team.


> I'm not saying this is a common situation, but it certainly isn't rare in my experience.

I think your experiences may be skewed by the position you find yourself in?


Bless you man, you're doing the lord's work


It ain't much, but it is honest work ...

... and well paid.


I've been working in consultancy for most of my career and have by now been in so many projects that seemed to be bullshit rewrites in the $tech of the month; at least two projects where microservices were pushed through. The last one I was in was funny because, while they had a small army of consultants and self-employed engineers vying for influence and carving out their own slice of the pie, the existing team behind the running, working, money-earning .NET software was just going about their day.

It was quite telling that after 1.5 years of that project, in which all staff had already been replaced once, all they had to show for it was a fancy product name and a presentation. And that the manager who led the project for ~2 years left right before or right when it went live - and he did that in a previous project too, where a working .NET backend for ecommerce was replaced with a Scala-microservices-on-AWS system.

I did hear about the latter; I heard they went back to .NET, but the Scala services are still up and running and a maintenance nightmare.

But the lead developer got to play with his favorite tool and landed a job at Lightbend because of it. Career-driven development, and I don't even believe he did it for his own career, but for self-gratification. Ecommerce is boring; REST APIs and a front-end are boring. But Scala, distributed systems, AWS, that's all cool and new.

I'm so tired.


New is not always better, but many times it is. We see this for example in programming languages, where newer ones incorporate the best features of their predecessors.

I think there are two things to be wary of: 1) Selecting a new technology just because it's hot, and 2) Refusing to consider new technology because the old stuff "just works." A good engineer looks at the requirements and selects the best tool to solve the problem while weighing the costs and benefits. Sometimes that's microservices. Sometimes it's monoliths. Granted, I don't know anything about the developers or business problems at that company, but to say that Scala microservices are just bad without justification doesn't sit right with me. It's all situational.

If an engineer comes to me and asks to use something like Scala, he'd better know all the upsides AND downsides (e.g. effect and streaming abstractions, ease of long-term maintenance, referential transparency, vs learning curve, hire-ability, 100 different ways of doing things, etc).


If new is not always better, then you’re stuck with the really hard job of knowing when it’s worth moving to the new thing.

Worse, you'll be blinded by survivorship bias. One easily notices the good rewrites and can easily ignore the bad ones.

Even worse, bad rewrites may be noticed in a place that a year or two ago was deemed a success story. I’ve seen many such cases due to misunderstandings or just political dynamics.

And lastly, don't let that engineer do Scala; they'll brush off the compilation time regression and make all developers' lives slightly worse (assuming the project is big enough).


Yeah, good point--when I said new wasn't always better, I was just talking about the case where the new tech solves a problem, but it's not the one you have.

Like choosing GraphQL just because it's new, even if your data doesn't have the structure for it.

Will have to disagree with you on Scala for several reasons I won't go into here--but the point was just that, in order to make these arguments in the first place, you need to do your research. Seems commonsense, but surprisingly many people don't do it (including younger me).


With developers, incentive misalignment is just insane at all levels.

- There is bias towards rewarding more lines of code or more code commits (which is often the exact opposite of what characterizes good software design).

- There is bias towards rewarding speed of initial implementation (which often goes against medium-term and long term maintainability and ability to handle requirement changes which is usually far more important). The costs of this tends to fall on other developers who come in later and act as scapegoats for the 'apparent 10x developer'.

- The industry keeps dismantling narratives which would otherwise help to reward talent. For example, many companies dismiss the idea of a '10x developer' - Probably because they are aware of the previous point about the fast developer who cuts corners and creates technical debt for others. 10x developers do exist, but they're not what most people expect because paradoxically, they may be slow coders when it comes to LOC metrics and speed of implementation for new features; their code really only shines in the medium and long run and it improves the productivity of their colleagues too so it's hard to properly allocate credit to 10x devs and they only really matter on greenfield projects.


Mega agree with this. It was really bad for my personal/career growth to get a ton of praise for doing things fast: granted, a lot of the people doing the praising had precious little experience in tech themselves. I probably have 2-3 whole dead years where I could have been learning/improving a lot more, but I got put on “10x developer” expectation projects where I'd churn something out, get a big shiny star sticker for it, and then 2 years later it would be abandoned because there was no incentive for anyone but me to maintain it - and who would want to, since it was shitty code with hacks and tech debt? Anything that isn't writing a fucking mountain of new garbage code gets in the way of shiny star collection.


> but got put in “10x developer” expectation projects where I’d churn something out, get a big shiny star sticker for it, and then 2 years later it would be abandoned

I feel like I've fallen into this hole at my current gig, where I just churn shit out to solve a problem as quickly as possible

I get away with it just because general code quality was already not good to begin with

Biggest mistake was going fast the first time, now I'm getting assigned way more shit

Word of advice to readers: don't make the same mistake I made. You'll just get taken advantage of


I really think we have too many people working at most companies. It pushes people to the extremes and edges just to have something to work on. Managers need more people under them to get promotions. And managers want to manage managers to keep moving up. They fill teams of people on products that could really be run by a fraction of the engineers. But that's not where we are: we are on large teams working on small areas of the product, inventing areas to build in and often ruining the product as a result.

We also get slower with so many people. The coordination overhead is a killer, and you lose context as the product is sliced up into small parts that move on without you.


> we have too many people working at most companies.

I half-disagree with this. My take is significantly more top-down: senior management has a deficient concept of how product development works. They believe Manpower is to be spent to achieve revenue, either by directly selling the result as a product (e.g. airplanes selling wifi to passengers) or by it being a differentiating feature for the sales department. This causes every allocation decision (like hiring) to fundamentally be biased around getting a tangible return: by creating new projects, new features, and new buggy microservices.

Further, since management only has two knobs (manpower and timeline) to play with, they like to move them to feel like they're optimizing the project. It's always the same fallacies, too: "we hired more people so we can create explosive growth", "we created ambitious timelines, now we're striving to fill them" etc.

I don't have a solution for this, except to note that it can be mitigated by managing up. Construct your own narrative, and take advantage of the fact that the non-technical people above you govern almost entirely by gut feeling.


Yeah I dunno, I hear this a lot, but there has universally been way more work to do than people to do it at every company I've worked for. But that doesn't mean the right things are being prioritized.


There is a lot of work, but a lot of that work is generated by people doing the wrong thing too often.

If we had a smaller and more competent team, the initial build might have been marginally slower, but we wouldn't have to spend a permanent 50% just keeping down the technical debt.


You’re working from a cost-efficiency / cost optimization perspective. That’s a great perspective in some contexts, for example, mature late-stage products, fully saturated markets, etc.

Is cost efficiency an effective perspective for innovation or revenue growth? Mostly, no. As long as your risk-of-ruin is low, then you want to fail. Sometimes people misinterpret this as “doing the wrong thing”. But it takes doing a lot of wrong things to do the right thing.

The difference between right and wrong, if there ever was such a simple dichotomy, is so marginal and only understood in hindsight.


You ask the executive if he wants to get to his goal with a team of 200 after lighting 20M on fire, or with a team of 20 after paying 2M.

Ultimately you end up in the same spot, but one choice is fairly suboptimal there.


The whole point of the comment you replied to is that, if the "goal" is something innovative rather than just sustaining, then no, you don't end up in the same spot in those two scenarios.

But this is why some executives are better for some kinds of businesses and others are better for other kinds. Some executives don't understand your parent comment's point (or just don't find it comfortable), and will be very allergic to the "waste" necessary to experiment and iterate on poorly understood projects. Other executives will be uncomfortable just constantly figuring out how to optimize costs without damaging revenues.

A very tricky part of the lifecycle of many companies that get gigantic is to figure out when to flip this and start switching out the executive team to focus on a different model.


Maybe I am biased by my background in finance tech but I saw a ton of guys get rewarded for what (at the time) seemed like boring maintenance of systems.

In retrospect - it was recognized that they built or took good care of money making systems with little drama and that was well appreciated by the companies.

In FAANGs I see now more of "what will get me promoted" versus "what is gonna make the company money" ethos.


"finance tech" -- those people maintaining those systems are middle-aged, comfortable, and well-paid. If you were junior under them, you would never want to stay. Fifteen years ago, there were large fresh hire classes each year. Tons of juniors around the office. Plenty of young ones dreaming up unnecessary "upgrades". Most of that is gone as the industry has matured. If anything, the hoards of junior hires have moved from finance to Big Tech.


Promotion driven features are definitely not entirely a myth IMO, but on the other hand, anecdotally I saw quite a few people get promoted at Google for doing make-things-work-better work. The trick was figuring out which were the important things to maintain and make small improvements to, rather than just which were the things that seemed fun to tinker with, but weren't as impactful.


They're not mutually exclusive.

FOSS software has vastly different incentives than commercial software, yet suffers from many of the same problems: bugginess, poor performance, lack of documentation, feature misprioritization, bad UI.

That alone indicates that the problem is not merely "misaligned incentives".

Actually, you can reduce most problems down to "misaligned incentives" if you're reductive enough. That doesn't mean it's a useful way to think about the world.


I think Free Software suffers from the misaligned incentives. Take documentation for example. Why would I write it? I already know how the system works. I designed it! If I forget in a few years, a quick glance at the code will refresh my memory. One would argue that you should write documentation so that people will use your thing. That's true! But there is almost no incentive to have users; you pay a cost, but they pay nothing in return. (Someone will send a bugfix now and again, of course, but it's very very rare.)

Some other incentives are balanced, though. Persistent low performance or bugginess affects the author and end users equally; the more the author uses their own software, the more this will hurt. Sometimes the low performance is a design trade off; Python isn't Rust, and the users seem to be okay with that. It was done on purpose. Sometimes low performance is a factor of the author's needs; you're trying to run the thing on 1 billion machines, they only have 1; something has got to give. But that's not misaligned incentives so much as it is lack of suitability for a particular purpose. A screwdriver is terrible for hammering in nails. That's not the screwdriver's fault.


> I think Free Software suffers from the misaligned incentives.

It's really hard to tell what the motivation of any given free software author is. That makes it really hard to even know what incentives matter to any given author, team or community. It's just really diverse.

> That's true! But there is almost no incentive to have users; you pay a cost, but they pay nothing in return.

It's fascinating to see free software with huge user bases getting on with a tiny number of contributors. Good code + near-zero support cost + near-zero support expectation seems to work.


Many, likely most, FS projects seem to fall under "likes tinkering." So it's often a different spin on "add shiny tech", except without any PM at all.


Reminds me of one of my favorite papers ever: "Nobody Ever Gets Credit for Fixing Problems that Never Happened". https://web.mit.edu/nelsonr/www/Repenning%3DSterman_CMR_su01...

Unfortunately, for my money, I think the only real way you can create an incentive structure which emphasizes stability and change is by offering some form of insurance.

My father was an electrician who often complained about how he never got paid adequately for the stellar, stable work he did, and one day I asked him whether he ever thought of raising his rates but providing a kind of service guarantee, where if a problem occurred that could be traced back to his own work, he would step in and perform the additional work at a reduced fee. Naturally he laughed out loud, because that's not how business works.

Ownership of an already-mature product is sort of like providing an insurance policy by default, of course. And sticking with conventional designs can be a solid business strategy if you use their slow-changing nature to e.g. build the thing faster than you could otherwise. That's the strategy I'm using for my consulting: Stick with what we know best (Hugo+Bootstrap for a napkin sketch UI demo as fast as possible, then SQLite+Django+React to build out the main functionality ASAP too). Emphasize solving the _business_ problem over the shiny tech.


I don't know if there is a name for it, but this is a plague for every security-related thing, or any jobs where the more skilled you are, the more people forget you (like a sound engineer for a movie production).

An ex-director of a French national security agency complained about exactly that during an interview: you get more budget after a terrorist attack, or after you stop one that was well under way, but never if you prevented the conditions that create a terrorist cell altogether, or nipped it in the bud.


I don't know; the more I advance in my career, the more I see it as the opposite. Wide-eyed developers with big designs, obsessed with the technical aspects of a solution and disregarding the practicalities and the long-term implications at the social level (who is going to maintain this, do we have people with that skillset, is this worth the effort, does it really matter to be this elegant, or is it more important to ship quickly and economically?), come off as a bit immature, while the more effective engineers who understand these priorities are given more respect and authority.


As a team lead, I’ve found it really difficult to keep curious, smart, young engineers on track. Everyone wants to go off and build shiny things instead of solving real problems. I have to find enough shiny problems that actually need solving to balance out the daily grind. Interestingly, I also find it difficult to instill a sense of meticulousness, and how important it is to write code in a way that reduces bugs. Clever engineers come up with clever, complicated solutions that are written quickly and rely on coincidence to function. Life experience is the best teacher for this, but I often need to step in. I’m still not sure what the balance is there.


> I’ve found it really difficult to keep curious, smart, young engineers on track

I’ve found the opposite. The young engineers are generally willing to listen to reason. The older Enterprise Architects are the ones that want to keep making things more complicated, or want to keep using suboptimal solutions because we’ve been using them for years.

Now that I write it down it’s kind of curious how on one hand it’s complicating things with stuff they already know, and on the other hand it’s absolute rejection of stuff they don’t.

Maybe I’m the same?


> the more effective engineers who understand these priorities are given more respect and authority.

The problem with "given more authority" I see is that management plucks these engineers out to make their day job basically "sit in meetings" if you're even slightly effective at simplifying life for everyone else.

Because that is the place of most leverage to place those people, but then those people are in a constant tug-of-war with the first group of "fresh ideas".

Eventually, the people who are in charge of the "prevention of bad architecture" become the bad guys because they (or me, I'm projecting) get jaded into just finding out what's wrong with something as fast as possible to be able to keep up with that workload.

You go from a creative role to a fundamentally sieve role with destructive tendencies, where you are filtering out the good from the bad as fast as possible.

First of all, not all new ideas are bad and "there's something bad about X" is not a "let's not do X".

Secondly, going from making things to shooting down things is intellectual suffering if you have a bit of empathy.

Some people on the "committee" with you don't have empathy & literally enjoy it - you're either trying to do damage control case by case or building a "these assholes need to be fired" doc out of the meetings.

I realized what I would become if I conflated authority and respect ("respect my authoritah!").

Quitting was really the only way out of it. But it wasn't hard to explain to my spouse that I needed to leave for a job a level down that paid half as much, because she could see me bringing my "why the hell do we have to do this" attitude home & into family decisions.


The problem with a lot of the social-level issues is that they are pure politics.


There are a couple of woodworking hand-tool companies who, among other things, make replicas of old-school Stanley tools the way Stanley used to make them (materials and tolerances). They also fuse the best elements of several eras or manufacturers to make slightly better versions: surfaces from this one, handles from that one, the adjustment mechanism from a third.

I hope that I live to see a time when software applies modern algorithms to classic designs and produce “hand tools” in software.


The field of software is maturing as we reach the end of Moore's Law and time passes. The era of constant innovation is very slowly coming to an end; the curve is slowly flattening. You can already see it in general trends like type safety, DX features becoming universal in all languages (linting etc.), browsers finally becoming the universal OS (Wasm, WebUSB, GPU), and more and more things being standardized every day.


Proebsting's Law says compilers double code efficiency every 18 years. I wonder what the doubling interval is for algorithmic performance. I expect it would be tough to calculate, like the cost of living, because algorithmic improvements rarely affect all aspects of code performance equally. Incremental improvements in sorting efficiency likely have one of the broadest reaches, followed by concurrency improvements and object lifetime analysis. Then there's a long tail of niche improvements that only apply to certain domains. Only the Amdahl's Law parts of the code have a substantial impact on performance.
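Back-of-the-envelope (my own arithmetic, not from Proebsting's paper): an 18-year doubling time works out to only about 4% per year, versus roughly 41% per year for a 2-year Moore's Law doubling, and Amdahl's Law caps whatever fraction of the runtime an algorithmic improvement actually touches:

    // Annual improvement implied by a given doubling time (in years).
    function annualRate(doublingYears: number): number {
      return Math.pow(2, 1 / doublingYears) - 1;
    }

    console.log(annualRate(18)); // Proebsting's Law: ~0.039, i.e. ~4% per year
    console.log(annualRate(2));  // Moore's Law at a ~2-year doubling: ~0.41, i.e. ~41% per year

    // Amdahl's Law: overall speedup when only a fraction p of the runtime
    // benefits from a speedup factor s.
    function amdahl(p: number, s: number): number {
      return 1 / ((1 - p) + p / s);
    }

    // Even an effectively unlimited speedup of 25% of the runtime tops out at ~1.33x overall.
    console.log(amdahl(0.25, 1e9));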


> No designer is given a promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.

This. I recall the case of a couple of FANGs who on one hand expect their engineers to deliver simple, maintainable and robust systems to minimize operational costs, but on the other hand demand that engineers operate at the next level to be considered for a promotion, which means they are expected to design non-trivial systems which have a certain degree of complexity and require significant amounts of work to pull off. Therefore, as an unintended consequence, this pressures inexperienced engineers to push needlessly complex projects where they are required to design solutions well above their level of expertise, and puts them in a position where their career is personally threatened if anything gets between them and their promotion-driven project.


A coworker of mine wrote a thing in Java… it was too slow (intensive locking between threads)… so he rewrote it in C… it was crashing all the time (because he sucks), then he rewrote it in Go. Got promoted for this feat.


That's kinda what Go was initially made for, IIRC. They noticed that people whose job is not programming, but who had to write some code (say some analytics or sth), often did it in Python, and if it was too slow they moved to C or Java and were predictably terrible at it. That's why Go was made simple and with built-in concurrency primitives.


But his job is programming. He is very proud of his C skills. If you listen to him, it crashed because C is intrinsically bad (it is difficult yes), but I guess it was also such a bad codebase that rewriting it made sense.

He also holds a grudge against me for having dared to rewrite a sacred C library he wrote (which was a constant source of segfaults and which I rewrote in one afternoon).


Sounds more like his job is producing tech debt. I've seen a few people like that; basically none of their code was left, as most of it eventually needed to be replaced because it was shit.


Yet, the people who wrote the minimalist, elegant and usually open source software we all rely on (e.g. sqlite) are highly regarded.

All of what you said is true, but there are still people who think a minimalist, rugged and reliable solution is superior. That maintainability is a value in itself (and thus, one should not choose the wildest, most experimental dependencies).


I'm not sure it's just incentives. Inexperienced early stage founders often end up solving imaginary problems, despite having a real incentive to get it right. The Y Combinator motto is "make something people want" because so many people don't.


^ this ^

Until we figure out a nice metric for "removing complexity" and then rewarding for it, it's not likely to change, IMO.


Definitely.

I’ll say one thing though: all of those skills you mentioned as not being valued are extremely useful for indie dev/taking ownership. Shipping simple + correct code frequently is extremely possible with sufficient practice and discipline.

More to your point, this is why I switched toward being a research engineer. There is a higher barrier of entry, projects are quite technically challenging, and constraints often force thinking of a world of computing beyond the tiny sphere of the web browser.

It’s hard work, but I love it.

If this resonates, you are in the US, and looking for a change, drop me a note (see profile).


Just want to throw out a way we could align everyone’s incentives: a robust universal economic safety net. If people were working to make a good product rather than stop their children from starving, our natural inclination to take pride in our work would be allowed to flourish.

Not gonna convince anyone but hopefully someone reads this and starts thinking about such options. They are not as impossible as those in power would have us believe.


You can still get away with these things if your only user is yourself or maybe a small handful of non-enterprise folks. This could be why the so-called "scientific" programmer feels like they can be as productive as a team of 10+ software developers.

And also why the most frequent request from the users is: Please don't change anything.

The principal-agent problem looms large in software.


"And also why the most frequent request from the users is: Please don't change anything."

I think it is: "Please don't change anything I did not request"

(who does not hate updates that break your workflow?)

But they usually very much like changes that make life easier for them. The best way to find out is to watch them using your tool and to talk with them.


I like that it's this way so we can easily compete with these dysfunctional companies. Don't fix them :)


There are exceptions to all of the above, but usually only after a worst-case burn from failing to do those things. If you are losing customers to competition that doesn't crash, then suddenly you can be the hero by making yours more stable. Of course, only if you can do this before the company dies.


I agree with just about everything you've said, except that bit about rewrites. Rewrite projects typically go down in flames, and on the rare occasions that they don't, the business stakeholders are still mad because their precious feature factory was down for maintenance for months.


I don't think you two disagree, GP is just saying that large-scale rewrites are rewarded, regardless of the result. I've seen that happening even when stakeholders were unsatisfied.


> months

Please.


Not universally true. Just find a manager who values successful delivery of project goals and is fine with "boring" i.e. tried-and-tested technology.

I blame Career-Driven Development for a lot of "shiny toy" (i.e. failed and complicated) projects.

Whether that's wrong is a deeper question - if it leads to better salary through job-hopping and promotions, despite a catalog of failures, is it actually the wrong approach from an individual engineer perspective?

I still say yes, FWIW; I enjoy seeing my projects succeed and thrive. But I know others may disagree.


And at least at the “enterprise” level for B2B software, there is intense customer demand for more features and, simultaneously, more stability.

From my view, that and pressure from the analyst racket are the main drivers behind feature bloat with self promotion a distant third.


You can think of it in even broader terms: being/staying lean, re-using tried and tested things, adapting your requirements to widely available solutions rather than developing custom solutions to fit your requirements, making everything work more efficiently, etc -- all those resource-minimization issues are "less" problems: not good problems to be working on when your raison d'être is "more" -- and that's the only raison d'être for a lot of people and actually for all businesses. (Obviously there are also specialists for doing "less" "more": unsurprisingly, they are generally paid a cut of savings.)


and the re-writes have to be in a trendy language that other companies are using, just for the engineer to stay relevant.

everything about engineering group decisions is about creating a reason to use a newer framework in a way other people can vouch for.


"Show me the incentive, I'll show you the outcome." - Charlie Munger


Isn’t he the guy who tried to get a university to build a giant windowless dormitory cube?


I mean, he's not wrong though - the incentive was "University gets a large amount of cash" and the outcome, predictably, was "University bends over backwards to accommodate insane requests of donor".

(The correct solution, obviously, is that universities should be sustainably state-funded and not require mega-donors with their associated insanities, etc. to survive.)


Still don't understand the opposition to that. Students get affordable, safe, private housing on campus, where there's a million places to hang out beyond your windowless bedroom. Libraries, open spaces, study halls, cafes.

The alternative is often paying $1K for some barely maintained triplex basement shared bedroom off of campus, from a negligent landlord.


I don't think it's as easy as you're making it seem. The issue is that there are unintended consequences to each one of those points you mentioned. I'm pretty sure a substantial amount of thinking goes into software design from all aspects, and it's a bit reductive to say that it's just a lack of incentive. Humans doing software are not some machine learning algorithm to train with reinforcement learning techniques.


All of those are absolutely real. There is a lack of economics and a surplus of optics in the game for all players.


I don't think these "No"s are entirely right, though they are directionally right. But there are actually healthy businesses (and healthy divisions within less healthy businesses) out there that do incentivize those behaviors, and it's a huge competitive advantage for them.


Just for context for others: This is an extreme description that doesn't match many of the actual jobs. My team/environment is the exact opposite of this, for example. I think parent projected their experience way too far on the whole industry.


I’m going to reference this comment in the future when I build my next software product. Thanks


This kind of problem surfaces in a large amount of systems involving humans and roles.


I call this "the tragedy of software development"


How do companies keep pushing their quarterly numbers higher and higher? By manufacturing innovations! Welcome to 21st century capitalism.


The author hits the nail on the head with his claim that imaginary problems are more fun than real ones.

As developers and smart folks in general, we like complicated problems that are big and far away. How many times have I heard in a meeting, "Yeah, but when we have 1M users..."

It's great fun to think your product will get to 1M users. It's also very unlikely. It's not nearly as fun to finish and ship and market and monetize the half-broken thing the team is working on now. Yet that's the only way out and the only way anyone gets to 1M users to begin with.


> The author hits the nail on the head with his claim that imaginary problems are more fun than real ones.

Not necessarily. It's just that most developers have never worked in a setting where they got to work on problems properly.

Solving real problems for real people is very addictive. There is a reason some people like working for startups. It's because you live very close to your users and when you make them happy you know.

The second interesting fact is that if you just plow ahead and solve many real problems fast you will eventually run into problems that are both real and interesting.

After having tried that there has been no going back for me. I am allergic to imaginary problems. It feels as pointless as watching people who are famous for being famous argue on TV.

I think we are all victims of our feedback loops. (University) Education subtly teaches us that the only important problems are those that are very difficult and preferably have never been solved before. Those same problems also make for better blog posts. In the real world the incentives are mostly the opposite. Problems with no known solutions (or only really difficult solutions) are generally bad. They can be worth it, but you should stay away from them until you know they are worth it. Software engineers seem to almost pride themselves on not knowing what their users want.

It takes a while to scrub all that bad learning out and replace it with something better. Unfortunately some people are stuck.


> (University) Education subtly teaches us that the only important problems are those that are very difficult and preferably have never been solved before. Those same problems also make for better blog posts. In the real world the incentives are mostly the opposite. Problems with no known solutions (or only really difficult solutions) are generally bad.

This is a great insight. One can add value to other people's lives by applying known solutions in relatively novel contexts (e.g. building a CRUD form at XYZ employer), whereas it's very hard to add value to other people's lives by trying to develop entirely novel solutions (because the probability of success is so low). Most of our training, however, focuses on the methodology used to develop these novel solutions, rather than on the application of the solutions themselves.


> Solving real problems for real people is very addictive.

This was the feedback loop that worked best for me. Imaginary problems and needless complexity go hand in hand; ruthless editing at the planning stage is necessary to combat them.


Second this.

The endorphins from making someone's job less sucky are a way better high than solving some code puzzle.


Reminds me of a PM I used to work with. "Will this work for 1000 simultaneous users?" After almost 2 months, we have less than 100 users total, maybe 5 of them log in a day, and maybe 1 will actually do anything of interest.

There is no technical problem. The problem is nobody worked on actually marketing the product. Build it and nobody shows up is the norm.


I was interviewing with a company that had barely any customers and they were asking scaling questions with Spark, etc. The salaries they paid could barely hire a team capable of dealing with the complexities of Spark, so they asked, "what would you do."

I told them I'd buy another stick of RAM and scale vertically until I had more customers, and save money on staff in the meantime. The interviewer went cold; I didn't get the job.


About 10 years ago, I worked on a project where I had to develop some sort of elaborate chain of map/reduce jobs because "big data" and "Hadoop." We were processing about 10 megabytes for each run. Most of the processing was consumed in job scheduling / overhead.


Sounds like a dodged bullet.


""Will this work for 1000 simultaneous users?""

Whenever someone asks me this question, I reply with a question: "How many simultaneous users do you/we have today, and what is our projection for, say, 12-18 months from now?" If the answer is not clear, I tell them not to worry yet. If the answer is very clear but the numbers are much smaller today (say 5-10), then I challenge them on where they think they/we could be in 12-18 months. A lot of the time, it helps the other side see that they are mostly asking "How long is a piece of string?"


I’ve worked somewhere like that. Our baseline requirements for concurrent users were based on the numbers required for the product launch team to maximise their bonus.

We never saw anywhere near those numbers in production, but I don’t really blame them - it was a big company and you do what you can to get ahead. A lot of money was spent on infrastructure that wasn’t needed but nobody seemed to care.


And people underestimate how well some solid, dumb solutions can scale. Boring Spring Boot with a decent data model and a bit of effort to stay stateless scales to the moon. Or: we have a grand "data export" system customers use to collect data from us for their own DWHs. It has survived two attempts at replacement so far. At its core it's psql + rsync, recently migrated to psql + S3 when we decommissioned our FTP servers. It's easy to extend, and customers are happy because it integrates well.


> And people underestimate how well some solid, dumb solutions can scale.

I'd say they underestimate how long "just throwing money" (servers) at the problem can work.

If you earn decent money now, scaling the number of app servers 10x to serve 10x the traffic will still earn decent money. It doesn't matter that "PHP is slow"; deal with it when your infrastructure costs warrant hiring more/better developers to fix it.

Especially now. A pair of fat servers with some NVMe drives and 1 TB of RAM will cost you less than a few dev-months, and that can serve plenty of users in most use cases, even before any extra caching is needed.


Even then, you don't have to throw money at the problem right away. If you feel you can save time and money by using Rust instead of PHP (just using the two languages as examples, not a specific indication of Rust or PHP's resource drain), go ahead. Making that decision early on costs nothing.

It's only after a project is off the ground that caring about these decisions winds up wasting everyone's time; that's when you wind up slowing momentum tremendously by dangling a potential new toy in front of your team.


>> And people underestimate how well some solid, dumb solutions can scale.

I think this started with the old Apache web server. When the www got started, that server did the job, so everyone used it. The problem was it didn't scale, so all kinds of cool solutions (load balancers and such) were developed, and everyone building something bigger than a personal blog used that stuff. For most, the root problem was that Apache had terrible performance. Nginx has solved that now, and we also have faster hardware and networks, so anything smaller than HN can probably be hosted on an RPi on your home network. OK, I'm exaggerating, but only a little. The bottom line is that scaling is still treated like a big fundamental problem for everyone, but it doesn't need to be.


Yep. Facebook got pretty far down the road with PHP, MySQL, memcached, etc.


All they had to do was write a PHP compiler and a new storage engine for MySQL.


HipHop (later HHVM) was around 2010, so they scaled from 2004-2010 before that became needed. MyRocks was 2015. Wikipedia says FB was around 300 million users in 2009, then 400 million users in 2010.


Yes, good point. But you have to wonder what kind of engineering effort went into scaling PHP and MySQL up to the point where they decided to build a compiler and a storage engine.


When you have half a billion users that both read and write all day, you have to optimize, no matter the tech.


That is undeniably true, but I do think the starting point still matters.


It was much, much, much cheaper than a rewrite would have been, that's why they did it.

Edit: also, in 2004 when they got started, what else could they have used?


The trick was there was enough growth that the savings from the compiler were massive. (I worked there at the time.) The inefficiency of the PHP interpreter was a great problem to have, because it came from the success it enabled.


So I think the interesting question is whether the rest of us can learn anything from what happened there.

I believe Mark Zuckerberg simply used the technology he knew and took it from there. That's fine. I probably would have done the same thing.

But many people are making an ideology out of this, arguing that not giving a shit about performance is always the right choice initially because that's how you grow fast enough to be able to retool later.

I think this is based on assumptions that are no longer true.

In the early 2000s, mainstream programming languages and runtimes were either fast and low productivity or slow and high productivity (Exceptions such as Pascal/Delphi did exist but they were not mainstream). And the cost of scaling up was prohibitive compared to scaling out.

Today, you can choose any fast high productivity language/runtime and go very far mostly scaling up.


I take away two lessons from it, which are in a kind of essential tension that can only be mediated by wisdom and experience:

1) Pick technologies that you can optimize.

2) Don't over-optimize.

Also, the concept of "optimization" here has very little to do with the language itself. It's far more about the overall stack, and it definitely includes people and processes (like hiring). It's not like FB invested $0 toward performance before swapping out PHP interpreters! Its massive caching layer, for example, was already taking shape well before HPHP (the C++ transpiler which preceded HipHop), not to mention the effort and tooling behind the MySQL sharding and multi-region that still exists in some form today. Many backend FB services were already written in C++ by 2010. But they had already gone very, very far—farther than most businesses ever will—on "just" PHP. Heroics like HPHP only happened after enormous pipelines of money were already flowing into the company.


Learn what? That you should use the language that you're more comfortable with and then scale? Or that languages have become more efficient? PHP 8, for example, is many times faster than the PHP 4 and 5 that Facebook was using.


Part of the reason PHP 8 (and it was 7 that had the quantum leap in perf) is now so fast is precisely Hack - it was easy to accept the status quo on performance until Hack showed there really was a lot of performance left on the table.

For me the biggest win was the changes they made to how arrays are stored in memory; I saw some systems drop by half in memory usage and had to change basically nothing - those kinds of wins are rare.


Yeah, I know the performance optimizations were in part because of HHVM.


I think using what you already know remains a choice that is very hard to criticise. But we didn't have to learn that, did we?

Beyond that I think there is more to unlearn than to learn from the history of the Y2K batch of startups. The economics of essentially everything related to writing, running and distributing software have changed completely.


Did they succeed because of PHP, or was it just the tech they used at the time, and anything else similar would have been fine either way?


They succeeded because of PHP. It was easy for them to use, so it enabled them to materialize their ideas. It was the right tool for them. Anything else would have been fine too, if it was the language they were the most comfortable with. In their case, it happened to be PHP.


That just sounds like "they succeeded because they knew a programming language", not that it was the right one compared to the competition.


No, they totally succeeded because they used PHP. I think Zuckerberg said it himself that PHP allowed them to add new features easily. I think he also mentioned that it was easy for new people to pick up. I'm pretty sure Facebook wouldn't exist today if it had been written in one of the more corporate/esoteric languages available at the time.

Its ease of use allowed him to launch the site from his dorm room. IIRC, YouTube was also written in PHP (it had .php URLs) before Google bought it and rewrote it in Python, so you could probably thank PHP for that site too.


Just checked. It appears it was indeed first written in PHP, then changed to Python, then to Java.


Well, OK. But by that logic, if the language they had been most familiar with was Fortran, should they have used Fortran for Facebook? I tend to think that there are actually material differences between languages and technologies, and it's worth knowing more than one language and not using terrible ones.


“ But by that logic, if the language they had been most familiar with was Fortran, should they have used Fortran for Facebook”

Absolutely. Otherwise they wouldn’t have been able to release the actual product and keep adding features to it the way they did with Facebook. They’d spend half the time learning the “right” language & environment. That would have slowed them down to the point they wouldn’t have been able to work on the actual product as much as they did.

And feature-wise, Facebook evolved really quickly.


I don't think there was anything similar to PHP that wasn't proprietary (Cold Fusion etc.), and FB engineering culture was to avoid vendor lock-in.

In any case, in the 2000s a PHP programmer/designer was analogous to a JavaScript developer today. Lots of talent out there, and it only took a few weeks of orientation and familiarizing for new hires to be productive.


Your comment implies your understanding of the timeline is backwards: they had to do those things after they had gotten hundreds of millions of users.


Depends on what you consider "far down the road" and what they had to do before writing a compiler and a storage engine.

How long did it take until Facebook engineers realised that their technology stack was not the best tool for the job? It definitely wasn't the day when they decided to build a compiler and a storage engine.


I'm not sure there was really a best tool for the job in 2003-2004 that would have been high-level enough to be productive, and scalable enough to stay mostly as-is. Java, maybe.


I agree, and I'm not criticising the choices that Mark Zuckerberg made at the time. But we are no longer facing the same situation he did. We do now have high productivity, high performance language runtimes. And scaling up has become much cheaper (relative to the number of users you can serve).

That's why I think it can't hurt to remind people of the great lengths to which Facebook had to go in order to deal with the limitations of their chosen platform.


Yeah, I kinda don’t agree with the dichotomy of “you either optimize or you build features”. They’re not exclusive. If you understand the tools and their trade offs you should be able to use the right tools for the job which won’t hinder you in the future.

Of course if all you know is JavaScript, then that requires going a bit outside your comfort zone.


> And people underestimate how well some solid, dumb solutions can scale.

And overestimate how expensive it is to just add another server, or underestimate how expensive a rebuild is. But then, part of that is also that IT departments want the budget; if they don't spend their annual budgets on innovation, their budget gets cut.

Or in my neck of the woods, the EU subsidies will stop if they don't have anything they can file as "innovation".


I worked with a project manager who went too far in the opposite direction, though. Their pushback against premature optimization manifested in wanting to start with a functional-ish "proof of concept" without any real design phase so they could say we cranked out an MVP in the first sprint... and before you know it, like most non-blocking technical debt, the "migrate functionality to final codebase" kanban card moves to the "long term goals" column (aka trash) and you're stuck with a shitty, fragile production codebase. The opposite extreme, trying to get everything into a final state right off the bat, is like trying to play an entire 8-measure song in one measure.

At the beginning of a project, before I write a line of code, I try to:

a) If it's user-facing software, get some UI designer input to shape functionality/interactions, not styling. To end users, the UI is the software, not just the shell, so mediocre UI = mediocre software. It can also illuminate needs you didn't consider that affect architecture, etc.

b) Block out the broad-stroke architecture, usually on paper.

c) Intentionally choose languages/environments/tooling/etc. rather than reflexively going with whatever we've been using recently.

d) Spend some design time on a reasonably sane and extensible, but not overly detailed, data model.

There's no perfect solution, but at least in my cases, it seems like a good middle ground.


The trouble is, an 'MVP' is often far from 'minimum' in those sorts of situations.

The reality is that an MVP should be missing most planned functionality and should really just be a few core features and functions that the rest of the application builds off of: the trunk of the dependency tree, so to speak. That idea is, unfortunately, lost on the majority of PMs, and ultimately it costs more time/money to get to a finished v1 because of it.


It was actually the appropriate scope for an MVP; the time frame was just unreasonable for building a solid codebase for anything other than a demo, given the complexity of the project. That's fine for a genuine proof of concept/rapid prototype you're going to be disciplined enough to trash, but letting that slip into the role of your core codebase is like pouring a sloppy, scaled-down concrete foundation as a test for a house and then trying to just expand it into what you need.


As a solo dev on a project, I constantly re-evaluate whether the thing I am working on is beneficial to my users or whether it's an "imaginary problem" as this post describes. In a large software project you always have a laundry list of things to do, from "fix the date format on this email template" to "implement a better feature flag system". While I'm tempted to always work on the "fun stuff" (the feature flag system), I make myself work on the "boring stuff" (fixing the date format on an email template) because I know that's what users need. Occasionally you get an intersection where the cool, fun, and interesting problem is also the most pressing one, but I've found those times are few and far between; most of the time I have to decide between the "cool, fun [imaginary] problem" and the "boring [real] problem".


I remember having a similar argument with someone saying that your C code has to compile and work on every platform that exists, including weird CPUs with known bugs.

Unless you're working on something like the Linux kernel, that's an imaginary problem.


"Newer versions of the compiler build your code with security vulnerabilities" is a very real problem in C. E.g. since x86_64 has no aligned memory access instructions, a lot of programmers assume there's nothing wrong with doing unaligned memory accesses, but actually recent gcc/clang will happily compile those into RCE vulnerabilities.


This is why I think running a small business was the best thing I ever did for my software career.


How do you manage sales?


Now you have a whole bunch of NewSQL databases that will scale past whatever number of users you can imagine. So your good old Django or Rails app can scale to 1M or 10M users without you doing anything exotic. That's not "fun" though.


I wonder if you've ever seen this classic video mocking such marketing claims which is called "MongoDB is Web Scale": https://youtu.be/b2F-DItXtZs


Have you ever seen Spanner? Things don't stay static. And yes, I have seen those videos; I've been doing web dev since 1999.


I recently added a couple of constants to some project. One of my teammates said it wasn't a good idea, because we could have hundreds of similar constants eventually.

Those constants represent the markets supported by that app. By the time the app supports even a few dozen markets, every engineer involved will have exercised their stock options and left.
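For illustration only (the names are hypothetical), the kind of thing being argued about is roughly this; a handful of plain constants is trivial to read now and cheap to replace later if the list ever actually grows:

    /* A few supported markets as plain constants. A registry, config file,
     * or plugin system can wait until the list actually grows. */
    enum market {
        MARKET_US,
        MARKET_UK,
        MARKET_DE,
        MARKET_COUNT   /* keep last */
    };

    static const char *const market_names[MARKET_COUNT] = {
        [MARKET_US] = "United States",
        [MARKET_UK] = "United Kingdom",
        [MARKET_DE] = "Germany",
    };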


I think this is an attitude shift that a lot of developers need to get over. They like writing code, they like working with computers, and they pick that to do as their day job.

But their day job actually isn't writing code, it's solving a problem for an end-user; the programming language is just a tool.

Rethink your job from a coder to a problem solver and you should be able to get over the compulsion to overcomplicate things for your own gratification.


Not the first time the issue has been pointed out:

> Simplify the problem you've got or rather don't complexify it. I've done it myself, it's fun to do. You have a boring problem and hiding behind it is a much more interesting problem. So you code the more interesting problem and the one you've got is a subset of it and it falls out trivial. But of course you wrote ten times as much code as you needed to solve the problem that you actually had.

[1] http://www.ultratechnology.com/1xforth.htm


I am dealing with that today. Talking about scaling out to hundreds if not thousands of AWS accounts, and I'm like, "we've added 6 in two years?" Why are we wasting time on this?


Reddit has 1M users and is half-broken, yet monetization is still a problem.



1M+ was implied. I am sorry I forced you to go on a stats-collection side quest.


So what’s your point exactly? The type of problems you have to solve for 1M users is completely different than 500M users including profitability.


I agree with this to some extent. But there's a flip side too.

This mentality is often taken way too far. I had an old boss who wouldn’t allow me to write unit tests citing this thought process.

Even at places with decent engineering practices, I’ve seen so many examples of software where you’re limited to a one to many relationship for something that could and easily should have been implemented as many to many, rendering the product useless to many because a product person couldn’t stand to think a couple months ahead.

Some people seem to take this idea too far and basically assume that if a problem is interesting or takes away a tedious task, it must be overengineering, premature optimization, or an imaginary problem.

Perhaps a better way to phrase the issue would be “artificial constraints”, which would encompass the flip side too.


Yes. While it’s less common, I’ve seen orgs struggle because they didn’t have enough imagination.

Every feature is done quick’n’dirty and eventually you have people whose full time job is to respond to customer complaints and fix data straight in the production database.


Bad engineering but potentially good business if it’s all billed to the customer…


No, it’s bad business because it doesn’t scale. Software is lucrative because you make it once and sell it to thousands of customers. If you’re making every customer their own bespoke thing, you’ll spend all your time for little return.


“Billed to the customer” means you’re charging the customer by the hour / project. You can get plenty of return selling bespoke things this way. Accenture is a $200 billion company.


That’s called Professional Services. Professional Services assemble a solution for a customer from a variety of components and maybe build some glue or the equivalent of a dashboard. This is not the same as having a ton of “if” statements in code to handle customer X vs customer Y.

The secret, as a software vendor, is to generalise these bespoke customer requests so you can sell the solution to all your customers (and get more customers!). If you are really cheeky, you can even get that customer to help fund the development that will make your business more money (hey, it’s win-win). You need to ruthlessly follow this approach though, as the rot of bespoke code will quickly become an insurmountable quality nightmare that can sink your business.


Earlier in my career I ate up the lean startup, move-fast-and-break-things, Y Combinator stuff. And while there are some very good lessons there, I've also come to realize that when you stop working on a part of the code, that may very well be the last time someone goes in there to make serious changes for a while. So sometimes it makes sense to do it right, even if it takes a few days longer (but not if it's going to turn into some massive overengineering project).


Yeah I agree. I think the biggest mistake people make when applying YAGNI is not considering how difficult it will be to change later. If it's just some hard-coded value that you could easily add to a config later? Fine. YAGNI.

If it's something more fundamental like language choice or system architecture... Well fine YAGNI now but if you ever do need it you're screwed.


I've seen a lot of engineers complain about YAGNI being taken too far, but none who have seen their concerns validated by reality.


I have seen it validated by reality several times… more times than the opposite. I had a boss refuse to let me do a refactor that changed these sketchy dynamic field tables into json columns because “it’s not customer facing.” They were unable to show off features in an important demo because the endpoints were timing out despite putting 2 other people on it for 2 weeks to find code-based optimizations.

3 days later I deployed my “nice to have” fix and the performance issues disappeared.

I’ve also seen a company stall out scaling for years and lose multiple million-dollar customers despite having a novel in-demand, market leading product because they refused to do anything to clean up their infrastructure.


>I had a boss refuse to let me do a refactor that changed these sketchy dynamic field tables into json columns because "it's not customer facing"

YAGNI isn't about not refactoring existing technical debt. It's about not trying to pre-empt future requirements.

If you're refactoring in anticipation of as-yet-unmaterialized requirements then YAGNI applies - e.g. generalizing code when today there is 1 specific use case because tomorrow you think there will be 3+.

If you're cleaning up existing code while working on it and the boss stops you because "it's not customer facing" then he's just asking you to violate the boy scout rule.


All of these definitions are fuzzy... refactor versus upgrade versus feature. When the people wrote it the way they did, they were almost certainly thinking that they don't need to overthink or over-engineer, and that they should discount hypothetical future concerns.

I can give you an abundance of examples. We were creating a page that was going to use state in a certain way. I was trying to insist that we address the way state will be handled across pages ahead of time. These concerns were dismissed as premature optimization. A few months later we had 5 pages with the state being handled in 5 different ways, and being synced in different ways between each page, complete with if statements, sometimes passing state through URLs, sometimes through local storage, sometimes through the session, sometimes through JWT data, generally through a combination of several of them. Then we'd end up with confusing redirect loops for certain edge cases, state getting overwritten, etc. We spent weeks fixing these bugs and, eventually, weeks refactoring to manage state in a simpler way. These bugs often got caught by customers, drawing us away from feature delivery that was critical for demos to large customers.

All of that could have been avoided by spending 1 day thinking a little harder and planning for the future.

It ultimately boils down to a couple of assumptions that people like to make. (1) Engineers know nothing about the domain and can never predict what will be needed. That might be true in a large company with obscure domain-specific things for engineers who work far away from the day-to-day, but sometimes the engineers know exactly what's going to come up. (2) You can hill-climb your way into an optimal program implementation. You can get to local maxima this way, but there are regular ways that programs grow based on how the business is growing and you can predict certain places where you will soon hit diminishing returns for current implementations. As long as you're up front about it and double-check your assumptions about the way the business is growing (and hence the application), I think there are ample places where you actually are going to need it.


>I can give you an abundance of examples. We were creating a page that was going to use state in a certain way. I was trying to insist that we address the way state will be handled across pages ahead of time. These concerns were dismissed as premature optimization. A few months later we had 5 pages with the state being handled in 5 different ways.

The right time to address this was probably a bit at a time after the 1st, 2nd and 3rd pages. Certainly not before the 1st and definitely not after the 5th.

>All of that could have been avoided by spending 1 day thinking a little harder and planning for the future.

The reason why you try as hard as possible to avoid planning for the future is that it's really hard to predict the future. Moreover, humans have an inbuilt bias towards thinking we are better at it than we actually are (hence the gambling industry).

Refactoring as soon as possible after the fact will always produce better designs than up front planning for this reason.

>there are regular ways that programs grow based on how the business is growing and you can predict certain places

This is the kind of phrase that sets off alarm bells in my head that somebody SHOULD be following YAGNI and isn't.

If it's defaults in a well-worn framework that doesn't railroad you, then fine, but anything more than that: red flags all around.


The rule of three is often correct: the first time, just do it; the second time, consider whether it's likely to happen a third time; and when the third time happens, it's darn well time to do it!

HOWEVER, this only works if you have the agency within the organization to allocate time for it. When, on the contrary, you are under management that doesn't understand technical debt (or is fine with it because it just means more consulting hours), then it's absolutely the correct choice to stall and/or fix things "prematurely" (if you can see what product they are trying to create without being told), because otherwise you'll be left holding the shit-can of duplicated crap down the line, getting knuckled because things aren't going fast enough due to technical debt.


The problem comes from the emotional pressure to finish tickets quickly - this can be external pressure but it can also be internal.

In that case the temptation to close the ticket and skip the refactoring step can be too great.

If you're begging the PM for time to refactor, you're doing it wrong.


The good old "We have other priorities right now and lack of resources"

Preventing fires will never be a priority. Not even if you smell smoke.


I’ve had tons of times where YAGNI has bitten teams at a FAANG. It’s been responsible for re-orgs as products have to pivot to meet the goals that were dismissed but turned out to be needed.

I was creating a very important demo once; features I had said were important were classified as YAGNI. Leadership eventually saw that we couldn't deliver without said features. YAGNI bit those teams in the butt.

These things happen all the time internally to companies, but they get ironed out internally as well.


It all depends on what the I is in YAGNI. I have seen development be parallelised such that the same work was being done again and again by different developers in different ways, because YAGN { maybe 2-3 days of upfront architecture and design for a 6-month project }. This results in bugs and maintenance nightmares. This was before unit tests were common though, so maybe unit tests would have saved it. But surely it was slower to develop that way.

But tautologically you can't take YAGNI too far, if the "YAGN" part is actually true :-). But that is always under debate.


It certainly feels that way. 2 or 3 days of up-front architecture and design with hindsight is always better than 2 or 3 days of up-front design in reality, but of course you don't have that hindsight when you start.

I've had to do up-front design on multiple projects and it always results in over-engineering - we focused on things that didn't matter, designed things that were inappropriate, etc.

I'd always rather take those 3 days and redistribute them as extra refactoring time.


I don’t know what is going on with this article. The first half is a maybe reasonable description of a common way for certain kinds of contracts to go wrong. But obviously lots of software doesn’t get developed in this sort of arms-length way. I would say that imaginary problems (as the author defines them) cause failed projects by consultants/contractors.

I find the rest of the article to be bizarre. The discussion around retail banking software seems unacceptably incurious and a very likely incorrect diagnosis of the cause of the problems (it basically stoops to an ‘I could do that in a weekend’ level of criticism[1]). It then transitions to a screed about Goldman Sachs which is, as far as I can tell irrelevant (Goldman do very little retail banking; their software development will be quite different to that done for retail banking), and then some description of how the author thinks (all?) large companies are (mis)run. I don’t know if Goldman was meant to be a prototype for this model of company management but it seems like a particularly strange example (in particular, they still will have some remnants from the culture of being a partnership, so they’ll be run somewhat differently from other big investment banks).

I found the second half did not ring true. I’m sure software projects fail at big companies (including retail banks, Goldman Sachs, other investment banks, tech companies, and so on) but I don’t find the reasons given in the article convincing to the extent that I think that section could have been written by someone who had only ever worked at very small companies. But maybe it’s just me and most companies are so obviously terribly wrong in these ways that no one even bothers to write about them and so I only see subtle ways they go wrong like projects dying off due to management acting in bad-faith ways or rewarding people for things that aren’t actually so good for the company or whatever.

If you’re interested in better discourse around certain kinds of bureaucracy, look into moral mazes.

[1] generally ‘I could do that in a weekend’ is code for ‘I could do some minimum thing that doesn’t actually solve whatever the important problems are in a weekend’


The second part of the article makes it clear that the author has never worked in online banking (I have), and possibly any other complex domain.

> Have you ever heard about those three web engineers who figured out that secure online banking is actually quite an easy problem to solve?

> The storage and transfer of numbers is not a particularly hard problem.

These quotes are so incredibly disingenuous that they make me question any advice OP has to offer.

First, banking is quite a complex domain, and its complexity increases exponentially with the kinds of services that you offer.

Second, banking is a highly regulated industry, which makes everything way harder than "it should" be. In fact many "neobanks" have appeared in the last decade, and this is usually their biggest hurdle.

Third, online banking needs to deal with quite a few hard technical challenges. That's why the likes of Monzo, Starling or Revolut often give tech talks.

So no, imaginary requirements weren't the root cause of bad software when I worked in banking. A 20+ years old big ball of mud, inability to pay off any tech debt (unless you wanted to get literally yelled at in front of the entire team), flaky and severely insufficient tests, and a very toxic working environment were all causes of bad software.


Yeah, I deliberately didn't want to write that much about the retail banking stuff because I don't know why it is the way it is (though there are a few reasons to guess at). People would often give regulations/compliance as an excuse for e.g. not being able to set up an account online, but then the pandemic happened and somehow this stopped being such a problem. I feel like either those people were just not knowledgeable about the reasons, or they were rationalising business reasons for the bank not to do those things.


Yeah I agree the author got a bit dismissive about the inherent complexity of solving business problems on an ongoing basis. He even links to a Wikipedia article about Google and offhandedly claims that the problem of indexing the whole web was solved by a couple of guys. We all know Sergey and Larry created the original Pagerank algorithm, but it's farcical to believe that their original algorithm would have stood the test of time without input from hundreds of engineers who had to deal with the rapidly evolving web and all the ensuing SEO spam, ad scams, revenge porn, illegal content, international firewalls, international regulations, scaling their infrastructure to handle billions of requests, creating an ad network to support the endeavor, etc etc. That all cannot be done by two guys in a dorm room.

I'm sure Google as an org has accrued plenty of staff that are working on mild to non important tasks over the years, and I get where he's coming from, but reality is far more nuanced.


The second part might be summarized as “when technology starts to diverge from the business model, or vice versa, both become messy.”


This resonates, and one way to describe it is as an incentive problem. Someone whose incentives are tightly aligned with the business is going to solve the actual problem as simply and effectively as possible. Someone who is incentivized to build career capital and experience other than via impact (e.g. so they can get uplevelled, pass an external interview loop, etc.) is much more likely to focus on unimportant hard problems and/or over-engineer.


> Someone whose incentives are tightly aligned with the business is going to solve the actual problem as simply and effectively as possible.

Equity is entirely the answer for cutting through all the bullshit. At least in my head. I don't know how it plays in other people's minds, but mine sounds like: "If we ship and go live, I get x% of all profit moving forward in my personal Scrooge McDuck money bin". Pretty big carrot. It's kind of like a time share in my own personal business, but I don't have much of the headache that goes along with running my own 100%.

This has some caveats, namely that equity in a 10k person org is often times not nearly as meaningful as equity in a 10 person org. Shipping your code 2 weeks early at Ford or Dell means what, exactly? If the code you are shipping is the business, then things are different. It also really helps if you care about the problem you are solving.

I'd say this - if the idea of direct equity/ownership doesn't get you excited about pushing simple & robust techniques, then you are completely in the wrong place. You should probably find a different problem space or industry to work in. Hollywood might be a better option if the notion of equity or straight ownership in the business still isn't enough to put your tech ego into a box.


>> Equity is entirely the answer for cutting through all the bullshit.

I agree for small companies which are largely founder-owned. Outside of that, I think equity doesn't do much, because so much effort is put into obfuscating the value/share of the equity. If you can't see the cap table, and you can't see the preference overhang, the equity is as good as worth zero. There is no discernible value for a fraction with no denominator.


I have a little bit of equity in the company that I work for now. It's super small and early stage, and yet between me and any product decision there is a designer who reports to a CTO who reports to a CEO. For everything that I want to see done differently, I have to make a case that convinces all these stakeholders that it's the right way. Ultimately, equity or not, my job is to row the ship where the cap'n tells me.


Equity is the answer. I work in investment banking and we all get a share of firm profits; I'll often sideline small projects in favour of projects that I think will be more valuable to the org and increase my/our pay cheque come bonus time.


You hit the nail on the head. There are different motivations for different roles within the same company, and sometimes those motivations clash internally, all while each individual IS acting completely logically from their own unique perspective.


RDD: Resume Driven Development


This is absolutely a thing, but I'd say there's a related option which is "Job Listing Driven Development". The more niche, dated, or specific your platform is, the harder it is to hire people onto the team who don't need months of on-the-job practice and training to be useful.

You see the most extreme versions of the dangers of this in stories about governments and older companies having to pay insane salaries to bring FORTRAN or COBOL developers out of retirement to keep systems running. If you keep doing the simple solutions within the existing system, you risk creating a system so inbred that only the folks who built it can maintain it effectively.

For less extreme setups, it's still a balancing act to consider how much the unique and specific solution that is the simple option for your company starts closing you off from the larger hiring pools in more common technologies and patterns.


What's kind of funny is that MUMPS is just as archaic and idiosyncratic as Fortran or COBOL, yet there are companies willing to put new hires through a bootcamp to make them productive. Are all the Fortran and COBOL companies too small to afford a month or three of training time for new devs?


As someone who maintains a large Fortran codebase that has been actively developed since the '50s, I can say with 100% confidence that syntax, the compiler, and other tools aren't even 10% of getting up to speed. It's some of the worst code you will ever see. A lot of it predates "GOTO considered harmful." It also comes from an era where different common blocks and subroutines were moved into and out of memory using a custom virtual memory system.

The demand for Fortran/Cobol experience has nothing to do with training. We need to make sure you are masochistic enough to trudge through the sludge.


My guess would be that the entities short-sighted enough to still be using those languages in 2023 are also short-sighted enough not to invest in training: preemptively hiring juniors without the skillset and training them up.


In a large government IT department:

“I think we should use a Kubernetes cluster!”

“You’re joking, surely? This is a tiny web site with mostly static content!”

Next project:

“For this web app, I propose we use Kubernetes…”


I will take that!!!


Doing the sort of simple solutions to your specific job's actual problems can also be something that constrains your ability to work anywhere else. Often the best simple solution that's tightly integrated into your job's environment is something that is inconceivable as a good idea anywhere else. You're optimizing around other old decisions, good or bad. You're often correctly overfitting a solution to your specific problems.

I've often found myself having issues even updating my resume, because what I did for the last year at work is barely explainable to other people on my team, let alone to someone in HR at another company. Or the simpler explanation is something that sounds like I'm doing work barely more complex than an intern could have done. Which often isn't wrong, but the intern wouldn't know which simple work to do.

My years of experience in the company's stack and org is valuable to the company, and nontransferable elsewhere.


I've shared this problem over the last year-plus of job searching I've been doing.


And thus we will see the rise of the software solopreneur.


That's been a thing for 30 years. Entrepreneurship is HARD, and tech salaries are fat right now. I think we'll see a lot more software entrepreneurship when there's another recession.


Makes you wonder what the actual state of the industry is right now with thousands of layoffs, but then comments like this one. Probably it's a bifurcation and an uneven distribution of reality.


There were layoffs in the big tech companies, but the sector itself is strong. Still very low unemployment. They over-hired. It happens. It's been a relatively minor correction.


> Someone whose incentives are tightly aligned with the business is going to solve the actual problem and simply and effectively as possible.

On average, and depending on skill. Incentives are hugely important (probably the most important metric any manager could work on), but even they do not guarantee results. If you hire so many juniors that nobody is there to upskill them fast, you only get one lottery ticket per employee. Conversely, if you hire a bunch of geniuses and fail to give them incentives to work on realisable, useful problems together, you get two lottery tickets per employee at twice the price.

(This comment feels woefully incomplete. Does anyone know of good resources to learn more about incentive structures and how they relate to individual and company success? I feel like the problem is that incentive structures change massively when companies grow, so even for unicorns there's just a short sweet spot where we can actually learn how they are supposed to look.)


I don't think it's just an incentive problem. I know plenty of engineers doing premature optimization or scope creep in good faith.


Great advice, spot on

At work we have this terrible "Enterprise Architecture" team made up of highly paid people who haven't written a single line of code and who don't know the intricacies of our business, but keep proposing complicated "Event Driven Architectures" and "Micro this and Micro that", reciting the latest buzzwords just to keep appearing cool.

It’s insane how much total cost they add to the organization, both directly and indirectly.


I always find it bizarre how people like this can operate. After almost 20 years of software development, I've considered seeking some kind of an architect role, but I cannot, for the life of me, imagine operating as one without working closely and collaboratively with the development team on a solution, rather than just dictating how things should be done "from on high". But that may just be a personality thing, I don't know.


"real architects" write code and simply hop from one team to the other so that they have a reasonable picture of the overall system and can try to guide all the teams to make harmonious choices and possibly even reach some goal.

This still results in a lot of compromises and problems. Anyhow, they should be there talking to the developers in a two-way fashion so that the end result is not entirely "from on high".


I've seen this even without the code - them just bouncing between teams and sitting in on many meetings and sort of being the common note-taker who then knows where almost every team is going, what they need, and what cross-team work could be done to improve everyone's life.


I haven't ever seen one of them.

IME architects tend to be people who tell your team that you should be using Azure Cosmos after a Microsoft salesman takes them out to lunch. They last coded 5 years ago.


They exist - I am one - but it doesn't work unless your leadership wants someone who doesn't fit in existing hierarchies. I tend to consult and if I go somewhere where I have prior relationships with leadership, it works. If not, I get treated as part of someone's narrow reporting chain and all the incentives are wrong.


> I always find it bizarre how people like this can operate.

They are acting rationally within their belief system of making money.

Enterprise software is exactly that, software used by enterprise -- it doesn't signify any good qualities (far from it) only that it provides a cost-effective software solution to a defined business need.

The issue is when corps get bailed out, overfunded, or have revenue mostly outside of software (e.g. gov't contracts) as it eliminates cost-effective from the equation... so you just end up with buggy messes (code as a cost center).

You'd have to work in the tiny niches where tech is the true product to find good development ...and even then...


I consult as a software architect and my job is mostly the opposite of this: asking people what problem they are trying to solve and why they haven't considered $EXISTING_SOLUTION


I don't want to get paid and have fun.

I want to get paid to solve real fucking problems that have a factual, validated need from real people.

Most people are terrible professionals. They write code to have fun and expect to be paid for it. And they even blog about it too.

I want to be in the trenches. Hard work. Real work. Not some glamorous bullshit that doesn't last. Quality, long-lived treasure is what I strive for.

And I don't want to progress by applying pop-culture technologies because some punk's subjective opinion wants to have fun. One has to be a prick and tell these people off, because they shovel shit and pat their own backs when they re-shovel it with a new fad.

I want to progress by doing slow, surgery-like precision work. I want to make sure that what I do sticks and is of sound quality; no code is written for fun. Code is a fucking liability.

Between the scams and the hustle, the number runners and the pick pockets, real people with real quality minds do real good work. Those are the only programmers worth being around and hire. Everything else is just a waste of time and mental capacity.

So many morons in this business. It's really too much.


A drop of anecdata: Once had a several-days argument about deploying a fix involving an SQL query analysing information from our deployed devices because the other developers were convinced it wouldn't work efficiently for 1000+ devices. Client was threatening to withdraw their money and support -- which would have killed the company I was working for -- if we didn't make things work RIGHT DAMN NOW.

Reader, we had 25 devices.


L'Enfer c'est les autres programmeurs.


Hell is other programmers: Existential programming and the perpetual ontological struggle of solving imaginary problems together.


I absolutely agree with the premise. People just love building things even when they're not needed. After a while, your ego gets attached to whatever it is you've built and you can't let it go.

I remember working at one place where somebody built a new framework to solve a common problem we all had. He pitched it to all the other devs in a meeting and I remember being confused about it because there was a standard framework that solved the problem already and did it in a much simpler and more elegant way. (To be fair to him, this standard functionality was only recently introduced.)

During his presentation, I asked why the standard solution wouldn't work for him. It turns out he wasn't familiar with it. Fair enough, so later I messaged him and showed him the standard way to do it and how much simpler it was. He couldn't be swayed.

He just couldn't accept that his complicated solution wasn't necessary. He constructed scenarios where his idea was needed, even though I saw solutions to those scenarios using the standard framework.

Interestingly enough, one of the scenarios where his custom thing was needed was in some tests he had written where he did some complicated things to set things up. I looked at the tests and even there saw those complicated things weren't necessary! There were ways to simplify what he was doing so that the tests were better written and didn't need his custom tool.

Anyway, he wouldn't be convinced. And because he couldn't be convinced, we got stuck with his solution and saw people continue to work on it, add more functionality to it, fix bugs, etc. All of that work was just a waste of time when we could've relied on a standard solution, which was way more mature and way simpler.

All of this drove me crazy, but I realized that sometimes people are just unable to see simple solutions to problems. Worse, having one complex solution begets more complex solutions elsewhere.


> people are just unable to see simple solutions to problems

This person might just have been terrified of admitting that all the (very tangible) time they spent on payroll building this thing was for naught. It’s unclear how their manager might have responded to that. And they wouldn’t have been able to put „built system doing X used by Y developers and deployed to Z customers“ in their resume.

The reasonable choice often doesn’t have much skin in the game.


Your guy clearly had an emotional attachment to his work, not an inexplicable intellectual attachment.

It's hard to admit that your baby is ugly.


This is one reason I’m glad I came from a research background into software. If something doesn’t work it doesn’t upset me and I don’t personally feel aggrieved, you just chalk it up to experience and move on. The number of times I worked on a study that had to be abandoned either because it didn’t work or because another research group beat us to it!


You're right. It's a good lesson to take away when it isn't you because when you're that guy, it's so hard to separate yourself from your work.


How do you manage the boring parts, though? Going mad because of boredom is a real thing. I definitely agree that some of my job is caused exclusively by my need to keep myself entertained. But what's the solution?

Another factor is resume-driven development. Yes, you can frown upon it all day, but in the end I'll switch companies and need to find a new job. And, like it or not, everyone these days wants a lot of experience from their workers. I'd love to write C89 in a dark corner for the rest of my days for reasonable compensation, but I don't see those jobs; what I see is a billion keywords: k8s spring boot react query metrics jaeger aws yada-yada.


i think it's important to acknowledge when you're working and when you're playing. working on fun things isn't inherently bad, and can lead to actual productivity when the things you learn during your fun geeky tangents turn out to be useful to the actual work.

but if you start convincing yourself that the fun distraction is the actual work you need to be getting done, then you might have a problem. (not to say that actual work can't be fun too. just saying, make sure you know which is which)


Try to find other real problems to work on that are different from your current boring parts? The new problems might eventually become boring as well, but often the change and fresh perspective is enough to pique your interest in a motivational and productive way.


Try to make it not about the job itself. Reward yourself with other things that are non-code related.

"If I get 5-10 of these done today, I'll reward myself with..."

Or use your imagination in some way, like kids do with action figures. It can seem strange, but there are ways to make it less mundane indirectly.


I do it by building side projects. As they’re purely experimental, I can use whatever I want and learn a ton.


My venom for design patterns has risen over the years, and I think the reason is that Patterns always represent concrete architecture changes long before the last responsible moment.

In my code I tend to leave negative space. A spot where a feature could logically fit, without having to really design that feature before we need it. And as my comfort with compound refactoring has improved, some code smells have fallen off of my veto list. If we need to do X then this solution will stand in our way, but we can alter it here and here if that becomes a problem. It works well for a team of one, but it can be difficult to express in a code review, when someone is adding the fourth bad thing to the same block of code and now I’m pushing back for odd sounding reasons because my spidey sense is tingling about bridges too far.


this is definitely one of the pieces of advice i give to the senior devs i work with: if you're proud of how smart your solution is, there's a high chance you overengineered and made a mess of a simple problem.

i now take great pride when my code looks boringly obvious.


Related: my favorite pull requests are the ones that remove more lines of code than they add. People think you need to hoard old code that’s not used anymore like it’s made of gold. It’s not. You aren’t gonna need it, and if you do, you can find it in the git history.


Antoine de Saint-Exupéry — 'Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.'


I push that so hard. Put the removal in an isolated, clearly named commit that will be easy to search later, tag that commit so it never gets garbage collected, then take a deep breath and say goodbye.

You'll be better off without it and 99.9% of the time you won't have to retrieve it later anyway.
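
Something like this, for example (module path made up; the git mechanics are real, in that a tag keeps that point in history reachable even if branches later get rewritten):

    # remove the code in its own, clearly named commit
    git rm -r src/legacy-report/
    git commit -m "Remove unused legacy-report module"

    # tag it so the commit never gets garbage collected
    git tag -a removed/legacy-report -m "legacy-report removed here"

    # if you ever do need it back, restore from the tag's parent
    git checkout removed/legacy-report^ -- src/legacy-report/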


Author gets it half right. Too many developers trying to have fun by doing new things. Affordable initial software development tends to come from places that have boring templated solutions using 1-2 tools.

But the main argument is off - What some classify as imaginary problems is actually building in the wrong types of affordances. The only way to know the difference is by knowing your tools and the problem domain.

The root of bad software is leaders lacking practiced learning.


---

They’ve put their heart and soul into creating this app, and it has some amazing features:

    A state of the art recommendation system

    An algorithm generating the transcript of all your streams, in real time

    Your front page loads in sub 200ms times all over the world

    A streaming protocol and client build almost from scratch, in case you don’t want to rely on Facebook live

    A service that allows you to easily integrate over 20 ad exchanges
---

What a dumb article.

This is just made up. If you hired consultants and they pulled this, lawyers would be involved.

I stopped reading here. Why continue reading something based on a fake premise?


A long article that can be succinctly expressed by the many variations of this cartoon: https://softwarehamilton.com/wp-content/uploads/2013/02/swin...


> They might realize Debra has been sitting in that corner, staring at uptime graphs of the internal server farm for 10 years, despite the fact that the company moved to AWS five years ago.

Ouch. The tone of the article is a little harsh, and I’m not sure if snippets like the above are intentionally hyperbolic, but there is a fair amount of truth in it.

Most of my career has involved convincing peers to do less, and solve simpler problems with simpler solutions.


This had good points up until the point where it conflated banking software with ‘moving a few numbers around’.

There are vast differences between the pathologies that affect small scale contract web app development as detailed at the start of the article, and those that affect global enterprise development such as is required to build large scale online banking systems. The biggest difference being that many of the things which are ‘imaginary problems’ for the small time web app are very much ‘real problems’ for a publicly traded company with responsibilities to several government regulatory agencies.

And sure, these institutions are just as prone to conjuring imaginary requirements, but it requires considerably more sophistication to tell the difference between ‘something someone in the German compliance office dreamed up to make themselves seem important’ and ‘something that if we get it wrong will result in billion euro fines’ when you’re building a banking system rather than a podcast website.


Well said. It's a relative of the premature optimization problem. I often think of them as unearned problems.


Alternately, it's more fun to write sci-fi than to deal with today's reality.

And the gap is widening...


Isn't this why companies have employees with different levels of experience?

Need a bog standard app to stream audio files? Give it to the person you hired right out of college, or maybe even the summer intern. She's never built something like this before, so to her it will be a challenging novel problem. A (somewhat) more experienced developer may need to provide initial guidance and review code, but that's a comparatively minor time investment, and besides, the act of mentoring someone else should keep it interesting.


> Isn't this why companies have employees with different levels of experience?

this runs into another common mistake businesses make: putting people onto a task because they're available, not because they're suited to it (in any sense).

the junior person (+ a little mentorship) would be great for the job, but they're mired in some big project. but hey, you've got this super senior fellow sitting around waiting for work, and we can bill the customer more for them, anyways.


I spend a lot of time saying “I told you so” to people who were sure my problems were imaginary. When you don’t stay somewhere long or you have a short time horizon it’s hard to connect cause and effect. Also pretending uncomfortable things don’t exist is a very popular character trait.

It’s not impossible that some of the problems I see others manufacture have a genesis in past traumas they are trying to avoid (some coping mechanisms are healthier than others).


Premature optimization is the root of all evil.

Simple > Complex.

It's amazing how many otherwise brilliant people dive headlong into a project without considering these basic principles, or even intentionally brush them aside.


I believe saying simple > complex doesn’t actually mean anything, because it’s effectively impossible to pin down the definitions. They are totally in the eye of the beholder. Solutions that are simple along one axis almost always trade off complexity along other axes.


That's why I prefer the form "Do the simplest thing that could possibly work."

There's always going to be a minimum level of complexity. Sometimes that minimal level calls for throwing a PHP script up on some shared hosting provider somewhere. Sometimes that minimal level calls for an Enterprise-level design with ten layers of abstraction because you need to be able to individually unit test every aspect.

Being aware of where and when to introduce complexity is half the battle.


I still don’t think that’s a very useful mental tool because it also hinges on what _you_ think is simple.


As a good illustration, consider FEniCS[1], where you can write a few lines of Python code which looks almost exactly like the math you're trying to solve, and have it compute the answer. Very simple!
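
For instance, the classic Poisson demo is roughly this (quoting from memory of the legacy FEniCS tutorial, so treat it as a sketch rather than copy-paste-ready; the exact API differs between versions):

    from fenics import *

    # Poisson: -laplace(u) = f on the unit square, u = u_D on the boundary
    mesh = UnitSquareMesh(8, 8)
    V = FunctionSpace(mesh, "P", 1)

    u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)

    def boundary(x, on_boundary):
        return on_boundary

    bc = DirichletBC(V, u_D, boundary)

    u, v = TrialFunction(V), TestFunction(V)
    f = Constant(-6.0)
    a = dot(grad(u), grad(v)) * dx      # reads almost like the weak form on paper
    L = f * v * dx

    u = Function(V)                     # reuse the name for the solution, tutorial-style
    solve(a == L, u, bc)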

Except to make that work there's a lot of infrastructure, including runtime-generated-and-compiled C++ code that gets dynamically loaded by said Python code to perform the actual calculations. Quite complex!

The true skill comes in finding the right balance between simplicity and complexity for a given situation.

In the case of FEniCS, the complexity is worth it because it allows the system to be used by less skilled programmers (who might know more about the math and physics), with the complexity handled by experienced programmers.

For our codebase we've got junior programmers who might need to read and understand my code if I'm on vacation and shit hits the fan, so I err on the side of making it easy to read and reason about. Which might not be the "simplest" for some measures of simplicity (like fewer lines of code).

[1]: https://fenicsproject.org/


I’d argue that FEniCS was a prime example of this in some ways, at least earlier in its history.

I started using it in about 2014. It was not exactly what I would call a simple project. It used to be an ordeal to build, could only easily be used via Docker, was ported to Python 3 a long, long time after most of its dependencies were, had an unstable C++ API whose changes were not documented but which you were nonetheless required to use if you wanted reasonable performance for some calculations, etc. The national supercomputing centre in my country managed to get only one version to build, because it was so poorly specified at the time and basically only supported Ubuntu!

FEniCSx is considerably better usability-wise, but that’s the result of hard lessons learnt.


People need to stop throwing around 'Premature optimization is the root of all evil'. Simple > Complex <--- Look at that. It's optimization! Complex != optimized


It needs to be thrown around more. I cannot count on one hand the designs I've come across that were needlessly complicated and that, rather than solving a pain point, became a problem in themselves.


Please stop taking it out of context.

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."


And now we have AAA games with poor performance and DLSS slapped on.

Maybe optimal performance is a feature? A feature worth working on from the start?


And here's the deal: with more years in this industry, I've found that optimal performance almost always comes from a simple architecture.

And that performance optimization done prematurely often achieves the complete opposite result, while performance optimization done out of necessity rarely has the same problem.


yes, we have experienced very poorly performing games; i hope they're working on it


Rollbacks of an integrated financial system, live and mid-collapse, built with a shockingly complex set of integrations and features on top of a series of underprovisioned distributed databases and queues, probably aren't as simple as the author believes.

Yes, they should have planned to be able to roll back, but it wouldn't have been simple. It would have been very difficult, and if they were fastidious enough to plan for a rollback they presumably would have done the development and migration in stages and performed parallel testing on the old and new systems, which would have removed the need for the rollback in the first place.


The hard part here is telling the difference between a pure phantasm of an imaginary problem and innovation. Things that have never been done (or never really been done well) might look as far off as hypotheticals.

I do think that “imaginary problem” is the answer most of the time though. Most things that look like unnecessary complexity or hobby horses really are.


And here I am, struggling not to fix memory leaks or improve the build time or speed of our app, because I am stuck implementing the most boring of features, already digested by a team of business analysts and a design team.

Sometimes boring is just pure torture.


Sort of related, but this is one reason I moved into “Frontend“ about 10 years ago. I started seeing over and over that when our (SaaS) projects didn’t start from what the customer sees, teams usually got distracted with imaginary or hypothetical design issues. It was a lot more effective to iterate from the visible features, and then let that drive much of the backend design, APIs, development timeline, etc.

This meant I needed to deal with more JavaScript than I originally intended in my career, and with eyerolls from backend architect types, but projects go much more smoothly than in my past, that's for sure.


> It should be noted that this issue isn’t unique to developers. Management, sales, HR, support, legal, and even accounting departments have their own unique ways of creating imaginary problems. They try to involve themselves too much in a decision, when their presence at a meeting is just a formality or wasn’t requested at all. They overemphasize a minute problem that is related to their role, or hire teams much larger than necessary to illustrate their importance.

I run into this more often than problems I imagine. A whole series of what ifs and folks imagining things. The worst part is the fixes to their imaginary problem are usually not well thought out and drive things towards worse choices.

I’m a big believer in getting version 1 out the door as problems people imagine often… are never relayed by the actual customer.

I often work with some routing software (let’s say routing packages). There is a simple mode that works great. Anyone can use it.

The issue is people want to establish say 20 rules about how / when / what is routed. Business folks insist that it be “easy” for “anyone to use” just like the existing easy mode.

This is doomed from the start. We can make it easier for sure, but if you have 20 rules with different weighted priorities:

1. It is complex for most people to think of. It will look complex because it is complex.

2. That’s ok because the guy with 20 rules has probably thought about them and understands that he has 20 rules.

Then we give them a UI to visualize it all and the customer is happy.

But the business folks are upset because the visualization is complex… and there we are again.

For the record I usually get through this slog and everyone is happy in the end, but it is a slog due to imaginary problems.
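
(For illustration, with purely hypothetical rule names: the machinery behind a weighted rule set is tiny; the complexity the business folks react to lives in the rules themselves, which is exactly the point.)

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        weight: int                          # higher weight wins among matching rules
        matches: Callable[[dict], bool]
        destination: str

    # a hypothetical subset of "the 20 rules"
    RULES = [
        Rule("hazmat to special depot", 100, lambda p: p["hazmat"],  "DEPOT-H"),
        Rule("heavy goes by freight",    50, lambda p: p["kg"] > 30, "FREIGHT"),
        Rule("everything else",           0, lambda p: True,         "STANDARD"),
    ]

    def route(package: dict) -> str:
        # the highest-weight matching rule decides where the package goes
        best = max((r for r in RULES if r.matches(package)), key=lambda r: r.weight)
        return best.destination

    print(route({"hazmat": False, "kg": 42}))   # -> FREIGHT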


Even tho I'll get flak for it, I'll call bs on the article.

It's the same as the phrase "(premature) optimization is the root of all evil". Does it mean you should never optimize? No. Does it mean you should always optimize as a last step? Also no.

Here's the full quote: "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

It's way more nuanced and basically says: "Stop wasting time on optimizations with little gain, but still do them where they are useful/necessary."

Right now I'm working on an embedded device running a small Linux. Our resource usage is well within bounds, so thinking about better architectures or optimizations is an imaginary problem, right? No. Not at all. Making our software smaller and/or faster doesn't only mean we can put more software on the device; it also means we could produce it more cheaply because we need fewer resources.

Thinking about and experimenting with different architectures or newer technologies also seems like imaginary problem solving at first, but there is a good possibility that you improve your system as well. A better architecture could make your software more maintainable or give you the flexibility to implement new features in a way, that was really cumbersome with the old code.

So while I agree with the sentiment that you should not implement things you don't need, I also think that there should be room for people to experiment and try out different things. Because sometimes the people who have worked with the code the longest are blind to its many shortcomings, since it's normal for them to just work around them. But getting rid of those shortcomings can save you hundreds of man-hours of work in the long run.

To cut a long story short: Do experiment. Do think about problems you might have in the future. Do the mental exercise and think about how to improve your current code and architecture. But don't blindly implement it.

Always evaluate what you're trying to do. Check if it improves the things it's supposed to improve, and also check that it doesn't make matters worse elsewhere. Get to know the tradeoffs and make an informed decision about whether changing something that works into something that's better is worth it to you.


This is such a great article. Amazon tackles this problem by emphasizing "working backwards", but it ultimately depends on the people who enforce such cultural values. I remember in one of the Uber all-hands meetings, an engineer asked the CTO whether Uber engineers should always make sure their systems can handle at least a million requests per second. To Thuan's credit, he unequivocally said no.


The species of this I most commonly encounter, and IMO the most illuminating of the issue, is Solution in Search of a Problem. People love having a solution to a problem so much, they will hallucinate the existence of the problem the solution is for. Especially when social approval has replaced any actual mission of the org.

In other words, when you have a hammer, everything looks like a nail.


OMG, never has truth been so funny and yet so tragic.

This article, its art and its font choice, shall be preserved forever in the archive of Historical Documents.

Our eternal response to unnecessary complexity? "Never give up! Never surrender!"

--

ADDENDUM:

The article makes a good case for formally adding "manufactured complexity" and "miscommunication complexity" to "accidental complexity" and "necessary complexity".

They are quite common distinct causes.


Ah yes. This explains very well why, when I ask our corporate IT for a static website with essentially text + some PDF downloads, I keep ending up in a “web platform” project based on a fiendishly complex CMS that is made for running large-scale e-commerce sites — of course delivered after ages and requiring multiple rounds with the CFO to justify the increased budget.

Been through that at several companies now.


On one project I did, it was essential to be able to record how much a tool was used so that we could charge for it. The tool ran locally on the customers' machines and reported to our service. The overall mechanism that sent and received this information had to be reliable or we'd lose money, but even worse would be to somehow overcharge customers. Lots of aspects of the design were complicated by this concern.

Then we ended up deciding not to make money out of it that way. So we burned enormous effort and created a horrible design for no reason.

So IMO the problem is that requirements are usually not well understood, even by the people asking for them. Later on it becomes clearer what is needed, but you're stuck with false assumptions baked into your design in a way that you never have the bandwidth to remove, because you need so much bandwidth just to do normal work... because of those assumptions.


I think the most important function of a good Product Management team is to understand which parts of the go-to-market impact tech decisions, and to spend 80% of their energy on pinning those down as firmly as possible. There is a happy medium between JIT delivery of specs for random features and a hard two-year roadmap that can't react to business changes.

YAGNI is generally true, but Product should have a very clear vision of what kinds of entities the system is going to handle over the next 36 months before they start asking for specific functionality. I've seen extra shit get built, but I've also seen, e.g., a travel booking system that was built without a "flight" being a first-class entity. Flights were deduced on the front end from attributes attached to seats... which worked well until a PM asked for the UI to show fully-booked flights, which HAVE no available seats that make it to the front end. The same product couldn't handle the booker and traveler being different people, when they knew from day 1 that it would be a necessary feature. It would have been little extra work to incorporate into the data model from the beginning, even if the two values were always the same for a while.
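
(A sketch with hypothetical types, just to show how cheap it is to get right up front: the flight as a first-class entity, and booker and traveler kept separate even while they happen to be the same person.)

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        email: str

    @dataclass
    class Flight:                # first-class entity, even if v1 only ever displays seats
        number: str
        origin: str
        destination: str
        capacity: int
        seats_booked: int = 0

        @property
        def fully_booked(self) -> bool:
            return self.seats_booked >= self.capacity

    @dataclass
    class Booking:
        flight: Flight
        booker: Person           # often the same person as the traveler...
        traveler: Person         # ...but the model doesn't assume it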

I think the majority of the technical debt I've seen that isn't CI/CD related is a disconnect between the domain model the product team is working in and the data model the engineering team is working with. Formalizing that domain model is now one of the first things I do when joining a team, so everyone agrees on precisely what the major nouns and verbs are and how they interact. Not just for the current system, but for where we think we will be in 2-3 years. With everyone doing agile, it's amazing how many incompatible, unwritten assumptions you discover that hadn't been ironed out.


It seems like the fundamental problem in the scenario presented in the article was the customer's inability to communicate their strategy, combined with a completely hands-off approach during implementation, when they should have had at the very least biweekly check-ins with developers and pre-negotiated milestones.


The fundamental problem is that a non-technical person has specced out a technical product without input from technical people.

A restaurateur wouldn't develop a menu without input from a chef or prior experience as one. A layperson wouldn't design a real building without input from an architect or engineer.

Yet for some reason a myriad of non-technical people (read: laypeople) feel empowered to design, spec, and strategize about software. It still boggles my mind that "product management" is a real profession.


Yes, I agree. This is fundamentally a communication issue not a “developers run amok” issue to me.

They shouldn’t be checking in after two months. They should have regular check ins.

They shouldn’t have had such a vague brief.

They should have discussed a wide range of off the shelf options from the get go to see if they needed something bespoke or not.


It might be more general than that: imaginary problems are at the root of bad___

Where ___ could be something produced, like software (or furniture, etc.), or something theorised, such as scientific theorems (as even though thought experiments are useful, if we don't go beyond them, we are often led to bad science), etc.


It’s impossible for a large organization to operate as efficiently as a tiny one but I also don’t really believe the article’s implied claim that “a couple smart guys” could solve essentially any given problem to clients’ satisfaction within a reasonable timeframe.


I frequently refer to this as “It would be cool if” driven development.

Crucial to fight against this kind of stuff.


If you only paid $15,000 for all of that technology after only two months and you're salty about not having automatons carrying out your commands, you probably should just stick to your podcast show and thank everyone for delivering something, even if it's a few shades different than what you asked for.

These aren't imaginary problems but interesting solutions. I'd bet that card punching programmers working on the first computers were the first to build interesting solutions along with addressing requirements. Look at how far managers have evolved since then.


> Most complicated or broken software is not designed to be overly complex or dysfunctional.

I beg to differ. This might have once been true, but no longer. Now developers demand a massive arsenal of dependencies before they are even willing to start on any project.

You say you need a web page and a few events? No problem. I will just sprinkle in some React, Redux, Grunt, GraphQL, PM2, and a plethora of plugins for each with a cascading list of dependencies for all those plugins. We absolutely cannot do less, because the risk of writing original code is too costly and we value retaining employment (blaming someone nameless outside the company).


The author is right, but doesn't seem to mention that this could be solved, at least in theory, with better project management. Devs focusing on the wrong thing? Project manager should be on it. Scope creep? Project manager should be on it. Client asking for irrelevant features? Project manager should be on it.

Replace those devs with ones who are focused on the right things? Great, but you still have the other problems to deal with (scope creep, uninformed clients), not to mention a host of other things that can derail a project, such as poor communication.

tl;dr Most projects are poorly managed. Poorly managed projects tend to fail.


I think the move from project management to "product management" really exacerbated this. The type of people that gravitate towards "product" vs "project" management is one thing.

Further, project managers tend to focus on delivering the product stakeholders have asked for on time and on budget.

Product managers however I find are frequently searching for a new expanded scope of stakeholders to brag they're solutioneering to. I find them more often in the "solutions looking for problems" business than I find developers, or at least developers are only doing it on a micro-scale, while product managers will take an entire team/org on 6 month fruitless missions to build software destined for the dustbin.


The moment project manager stops taking care of minutes, meeting agenda, getting all the stakeholders in the same meeting and ensuring all issues have an assignee and instead starts taking interest in the specifics of the project, I bail.


If the company structure is too complex or too big and your actual impact is either hard to measure or is held back by the system or by colleagues, you have to find a way to stay sane and find a tangential meaning in work. Funnily enough, this doesn't mean it is wasteful. It could turn out that it makes you put more energy in honing your skills, or perhaps you find a way to contribute to open source within your job. Neither are bad things. So I don't completely share the pessimistic outlook of the article.


Big tech internal incentives really exacerbate this. The entire performance system incentivizes engineers to create imaginary problems and solve them. Honestly, I think a good chunk of engineering activity is along these lines: literally just imaginary problems being created, which compound into more imaginary problems, all of which are used by engineers to justify their existence.


The article rests on the idea that the management knows what needs to get built, but my experience so far was that they are usually even worse at that.


I like the first part of the article, but then he goes off on some weird rant about banking software that gets old quickly.


I do agree with the general premise but there are a lot of passages that kinda trivialise a lot of complicated phenomena, e.g.:

> Much like victims of childhood hardship or abuse can find escape in fantasy books, victims of enterprise programming or freelance web development can find their escape in solving imaginary problems.


I know it’s just an arbitrary number picked but this bit jumped out at me:

“You’ve just wasted $15,000 on two months work with a team of contractors”

The project may also have been doomed because $15k is not very much for something like they described.

But again, fully aware that they probably picked a random number. I’d have just added another zero to make it more realistic.


In the world of entrepreneurship, you mostly find the extremes.

Either you have the bootstrap founders that will trawl through fiverr to find the cheapest labor (the same types that will scoff and get insulted at anything over $10k), or you have the well-funded founders that will pay whatever it takes.

From experience: Cheapskates and low-ballers will never be happy. They always want something free, more haggle room, more discounts, and always have high demands and expectations. The best thing one can do is to price yourself away from them.


In our agency 10 years ago we built such websites; they would cost 3,000-5,000 dollars max, just with PHP and a simple self-built CMS, including a simple responsive design. We also hosted around 250 such websites on a single dedicated server. It was very, very fast.


I totally agree with this; normally that should be the cost of any content website for clients. The site is fast, and these days we just need to deploy it on one dedicated server, or even for free, with a simple maintenance cost per month. We still do the same for small clients who build applications with us and want a content page for their blogs.


Where are your developers located and what are their salaries?


In the Netherlands, Europe. We had a small agency with 3 friends. We did around 250k a year in revenue.

Exactly the type of team that is super productive for these kinds of jobs, imo.


i think you might be trying to solve the author's imaginary problems.

all the author needs is a wordpress site with a couple plugins and a few weeks of back and forth on the design work. if you can't make a good profit on that with a $15k invoice, you're doing something very wrong.


The problem is that their brief is too vague and the real issue is communication.

At no point is it clear that they’ve discussed off-the-shelf solutions, or infrastructure (storage, server hosting, CDN, etc.).

If they had, then they either messed up because 15k over two months is too high or too low. It’s squarely in the: “nobody has actually defined the project territory” for me.

Yes, a WordPress solution would work, but then why even budget two months for it, unless bespoke design is involved? Again, that becomes: this is too low or too high to be realistic.


>At no point is it clear that they’ve discussed off-the-shelf solutions, or infrastructure (storage, server hosting, CDN, etc.).

why should they? the requirements are listed in the article. the job is to meet the requirements. "where should we store the files" is a job the client hires you to answer, not something requiring communication.


The requirements are insufficient if you’re going to be so particular as to stick exactly to them. You can’t guarantee the uptime requested without talking about infrastructure.


$150,000 for a podcast app with Zazzle shop and Google Ads integration?

That sounds really high to me. I personally found 15k way too high already; guess it depends on the details/market/circumstances


It could be both too high or too low depending on the communicated expectations of the client. Which is I think the real issue here anyway.

But let’s say 15k for two months of a team of contractors. That’s 7.5k/mo. A team means at least two people, but given they mention sales etc., I think that’s 3-5 people. That nets 2.5k/person before you take out built-in profit margins, healthcare (because it’s dollars, I assume America) and more; that’s roughly 1.5k/person per month. (Of course they could have multiple projects, but even then that’s a low amount imho for a contractor with sales.)

If they’re expecting an off-the-shelf solution, then they’re getting fleeced on the costs and 15k is too high, but if they’re going to a company that does big solutions then they spent too little.

There’s too little info in general


15k is rather high for those specs. A single competent dev can build that in a month.


15k for a single dev over a month, sure. 15k for a team over multiple months is different though.


Summing it up:

* Communicate effectively - avoid middle layers

* Avoid over-imagination / premature optimization

* Incentivise for organisational efficiency


This is why I truly love support-driven development.

While it’s possible to end up prioritizing problems that don’t affect most people (squeaky wheels), having a very low barrier for users to contact you, and fixing the things that come up, is a hell of a lot more effective than most of the methods I know.


Engineers are terrible prophets.

We should focus on the problems of "now". The future will catch up.


I really like this quantification of crash rate, and being so up front about it and about what’s acceptable. Having spent such a long part of my career in the critical software space, it’s kind of refreshing to imagine a more lax world out there.


This is a good addendum to the Bullshit Jobs thing that was posted a few days ago.


I gave up on this article when I found out that the first hypothetical scenario has no relation to reality.

Implementing something else “because if they implemented the real spec they would get bored” is too much psychology.


The author seems to infer whatever he needs to affirm his opinions; the examples are poor and the whole thing is just a rant disguised as reasoning.


So it's resume driven development that's causing issues?


My favorite example of this is the grocery store checkout kiosks that make you weigh each item. So much wasted effort both to produce and use those machines.


In the imaginary case of the merch-selling Android app, I don't think it is justified to solely blame the developers for the bad product. The root cause of the bad outcome lies in the simple fact that nobody wanted to tell the client that there was already a solution for 20 bucks and absolutely no need for something custom-built. NOBODY except the business owner gains anything in this, and the client deserves the ripoff too. It comes as no surprise that developers look for interesting challenges without appearing too distracted by the stupid and boring bs work they are stuck with.


Good reputation?


Pure luxury if you live from hand to mouth.


Reputation is what gets you more business in the future. You can expand your business or raise rates, or both.

Yes, this won't help if you're literally in danger of not meeting payroll next week, but I would hope most businesses don't operate so close to the edge.


Imagine someone writing small webapps for small companies and extrapolating to assume this is applicable for extremely large scale software design…


Developers have a lot in common with Rube Goldberg


"Rule Eight: Don’t try to create and analyze at the same time. They are two different processes."

— Today You Need a Rule Book, 1973.


We have beef with AI hallucinating - get 5 people to work on a problem and measure how much stuff they will make up from thin air.


> secure online banking is actually quite an easy problem to solve . . . The storage and transfer of numbers is not a particularly hard problem.

I don't know anything about banking software, but I have a hunch the author underestimates the complexity (as is typical for HN). The Efficient Markets Hypothesis would suggest that if it were so simple to make banking software, then someone would do it.


The share price of META fell by 75% from August 2021 to August 2022, wiping about three quarters of a trillion dollars off its market cap.

From October 2022 to now the share price has tripled, adding half a trillion dollars to its market cap.

Explain that in terms of the Efficient Markets Hypothesis. "share prices reflect all information" - what half a trillion dollars' worth of information change came out in 2022? And what in 2023?

What about insider trading laws? How can we say "share prices reflect all information" when we know there are people who have more information which would give them an unfair advantage, which means the current shareprice cannot be reflecting the information they have?


Not to disagree with you, but if there are people with insider knowledge they will either dump a massive amount of stock or buy it up, changing the stock price. So the insider knowledge should be priced in quite fast, thanks to their greed.


1. Apple made iPhones more private so Meta can't target ads as efficiently

2. Interest rates affected the entire market


Easy, banking and especially banking software is not an efficient market.


The main imaginary problems I commonly see are wheel reinventions.

For the most part, the software I see works well for what it does, and either has far too few features, or else all the extra stuff they added is stuff that people actually use.

On social media things are different, that's been bad and unsalvageable from the moment endless scrolling made it into something people spend significant time on.


I'd prefer to see wheel reinvention, because it leads to better software.

1) it's debuggable, 2) it's fixable, 3) it does exactly what you need, how you want to do it, without any extra cruft and nonsense.

Libraries and game engines are too generic to be fast and easy to use, because they need to solve for every possible use case. And even then, there are still edge cases where what you want to do cannot be done because of the architecture of the thing, and so you're stuck with a slow, weird duct-tape solution to get around the third-party code's limitations.


Games already cost an insane amount to develop. I would imagine that they would either have to have less content or less realism or cost more.

Plus, there's a limited pool of devs who can do 3D game engines (I sure can't!); the more wheel reinvention, the fewer resources available to do new things.

And then wheel reinvention also leads to incompatibility. For some reason it's fashionable for formats and protocols to include optional features instead of making everything mandatory. The big implementations support all the common options, the DIY ones usually just support the options they need, and exporting from one and importing into another might do something weird; it takes a lot of work to support the de facto undocumented standard that emerges from sets of optional features with a few popular implementations.

Large libraries are debuggable too, because the reuse lets devs throw insane amounts of resources at debugging them even if it's really hard. And for the same reason, they can often be pretty well optimized. Modern software seems to be pretty fast now that Moore's law has slowed a bit.

In theory, it's really cool that smaller solutions are fixable, but I'm just not sure we could actually have all this software everywhere running the whole world with small and simple in house code.... I mean, that's kind of what we had in the early 2000s, and while most things in general seemed better and people were happier.... everything that ran on a PC from the Win95 to the Win11 era seemed pretty insecure and unreliable.


What does ICO stand for? Or ICO-ed, as it is used in the article


Initial Coin Offering


Ted Kaczynski’s “surrogate activities” term is very relevant here.


“Imaginary problems are the roots of negative developments”


"Premature optimization is the root of all evil".


Imaginary problems are the root of all problems

Maybe a Buddhist could say


What's a "real" problem, though?


Imaginary problems aka gold plating and yagni.


So very funny! In a highly cynical way.


that implies:

bad software = -1 problems


This sentiment gets repeated a bunch, but I doubt it's really true.

Bad languages and tooling are the root of bad software.



