Hacker News | new | past | comments | ask | show | jobs | submit | login

Honestly I couldn’t disagree more. I built a startup and paid little attention to perf for years 1-5, and finally in year 6 we started to get bitten by some perf issues in specific tables, and spent a few engineer-months optimizing.

In terms of tech debt it would have been way more expensive to make everything perform well from the start, we would have moved much slower and probably failed during a few crunch points.

Instead we paid probably a few $k/mo more than we really needed to on machines, and in return saved man-months of effort at a time when we couldn’t hire enough engineers and the opportunity cost for feature work was huge. (Keep in mind that making everything perform well would have required us to do 10-20x as much work, because we could not know ahead of time where the hot spots would be. Some were surprising.)

Joins may be evil at scale, but most startups don’t have scale problems, at least not at first.

Denormalizing can be a good optimization but you pay a velocity cost in keeping all the copies in sync across changes. Someone will write the bug that misses a denormalized non-canonical field and serves up stale data to a user. It’s usually cheaper (in total cost, ie CapEx+OpEx) to write the join and optimize later with a read-aside cache or whatever, rather than contorting your schema.
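A minimal sketch of the "write the join, add a read-aside cache later" approach, using only the stdlib `sqlite3` module. The schema, table names, and cache-key scheme are illustrative assumptions, not the commenter's actual system: the point is that writes touch one canonical row and invalidate the cache, so there are no denormalized copies to drift out of sync.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         total REAL);
    INSERT INTO users  VALUES (1, 'ada');
    INSERT INTO orders VALUES (10, 1, 42.0);
""")

cache = {}  # read-aside cache: key -> query result

def orders_with_user(user_id):
    """Normalized read: one join, with a cache on the side."""
    key = ("orders_with_user", user_id)
    if key not in cache:
        cache[key] = db.execute(
            "SELECT o.id, u.name, o.total FROM orders o "
            "JOIN users u ON u.id = o.user_id WHERE u.id = ?",
            (user_id,),
        ).fetchall()
    return cache[key]

def rename_user(user_id, name):
    """Write to the single canonical row, then invalidate the cache.
    No denormalized non-canonical copies to hunt down."""
    db.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))
    cache.pop(("orders_with_user", user_id), None)

print(orders_with_user(1))  # [(10, 'ada', 42.0)]
rename_user(1, 'grace')
print(orders_with_user(1))  # fresh read: [(10, 'grace', 42.0)]
```

The same read against a denormalized `orders.user_name` column would be faster, but every code path that renames a user would then need to remember to update it; forgetting is exactly the stale-data bug described above.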



> a few $k/mo

Isn’t that the cost of one engineer already?


In straight dollars, perhaps yes. But new servers don't show up and then spend 3 to 6 months ramping before accomplishing anything meaningful, don't take sick time that causes the optimizations to slip, and don't take 3 months of searching to find the right hire.

Part of the cost consideration is deterministic results. I will pay a premium for near-guaranteed good but probably sub-optimal results and will actively avoid betting on people I haven't met and don't know exist.

In my hiring, I hire now to solve problems we expect to hit after 4 quarters. It almost never makes sense to hire anyone into a full-time role for any project in a shorter timeframe.

If you were wrong about the specific problems you expect to have in a year, you have a person who is trained in your development environment, tooling, and projects, and you already budgeted to use them in-depth in a year. There's no emergency. There is time to pivot.

But if you're wrong about the need to hire someone now full time, you front-load all of the risk, and if it doesn't work out, you are stuck with an employee you do not need (and stuck is the right word. Have you ever terminated someone? It is harder than you think it is, and I don't mean just for emotional reasons).

Buy hardware over people. Treat the people you have as if the business depends on them. Let them know that it does. Everyone is happier this way.


> > a few $k/mo

> Isn’t that the cost of one engineer already?

Only for very cheap engineers and very large values of “a few”. $120k/year is pretty low total compensation for an engineer (and the cost of an engineer exceeds their total comp because there is also gear, and the share of management, HR, and other support they consume) and amounts to $10k/month.


In the Bay Area, no, an engineer costs an order of magnitude more. (For a round number, think $15-20k/mo including office space, benefits, etc. for a senior engineer; that's perhaps a bit high for the period I'm discussing but it also isn't attempting to price the cost of equity grants. At that time Google was probably spending something like $35-40k/mo (maybe higher, I don't know their office/perk costs) on equivalent talent at SWE5 including the liquid RSU grants.) But of course run the cost/benefit calc for your own cost of labor.

More importantly, it's critical to think in terms of opportunity cost. Like I said, we couldn't hire engineers fast enough at that time, so if I put someone on this work it would be taking them off some other important project. Plausibly for a fast-growing startup that means eschewing work that's worth $1-2m/eng-yr or more (just looking at concrete increases in company valuation, not present value of future gains). So we're talking on the order of $100k/eng-mo opportunity cost.
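Back-of-the-envelope version of those numbers. The $1.5m midpoint and $5k server overspend are my assumptions filling in the parent's "$1-2m/eng-yr" and "a few $k/mo" ranges:

```python
server_overspend_per_month = 5_000    # "a few $k/mo" of extra machines (assumed)
value_per_eng_year = 1_500_000        # assumed midpoint of the $1-2m/eng-yr range

# Opportunity cost of diverting one engineer for a month
opportunity_cost_per_eng_month = value_per_eng_year / 12
print(opportunity_cost_per_eng_month)  # 125000.0 -- "on the order of $100k/eng-mo"

# Ratio: one engineer-month of diverted feature work vs. a month of overspend
print(opportunity_cost_per_eng_month / server_overspend_per_month)  # 25.0
```

Under those assumptions, a month of over-provisioned hardware pays for itself roughly 25x over if it frees one engineer-month for feature work.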


> I built a startup and paid little attention to perf for years 1-5, and finally in year 6 we started to get bitten by some perf issues in specific tables, and spent a few engineer-months optimizing.

This screams of an "if I don't see it, the problem doesn't exist" view of the world.

How do you know it's not a problem? Perhaps more customers would have signed up if it was faster?

Part of the problem is also treating performance purely in terms of business value and/or cost.

A lot of these things are "free" and yet they're ignored.

For most people, simple things like turning on HTTP/3 or Brotli, or switching to newer instance types, are quick wins that I see ignored 90% of the time.
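As a hedged sketch of what two of those quick wins look like: in nginx 1.25+ built with HTTP/3 support and the third-party ngx_brotli module, both are a handful of directives. Certificate paths are placeholders; verify the directives against your particular build.

```nginx
server {
    # HTTP/3 (QUIC) alongside TCP-based HTTP/2 -- requires nginx 1.25+
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;
    http3 on;

    ssl_certificate     /etc/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    # Advertise HTTP/3 to clients that first connect over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400';

    # Brotli compression -- requires the ngx_brotli module
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/javascript application/json image/svg+xml;
}
```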

A good design and some good practices are themselves performance wins, and they don't always cost more.


A denormalized database model is considered bad design to begin with, and has performance costs of its own. This is why the OP says this is a "hot take". :)

Maybe there are situations where this actually helps, although the resulting data structure looks to me more like a multi-key cache.


Thank you. I’ve been reading these comments and thinking I’m losing my mind.



