Optimization takes up time, and often it takes up the time of an expert.

Given that, people need to accept higher costs, longer development times, or reduced scope if they want better optimized games.

Worse, trying to optimize software is not the same as successfully optimizing it. Time and money spent on optimization might yield no results: there may not be any more efficiency to gain, the person doing the work may lack the technical skill, the gains may be part of a tradeoff that can't be justified, or the person doing the work may not be able to make the change at all (e.g., a third-party library is the problem).

The lack of technical skill is a big one, IMO. I'm personally terrible at optimizing code, but I'm pretty good at building functional software in a short amount of time. We have a person on our team who is really good at it, and sometimes he'll come in after me to optimize work that I've done. But he'll spend several multiples of the time I took, making it work and hammering out edge cases. Sometimes the savings are worth it.


> Given that, people need to accept higher costs, longer development times, or reduced scope if they want better optimized games.

God why can’t it just be longer development time. I’m sick of the premature fetuses of games.


The trade off they're talking about is to arrive at the same end product.

The reason games are typically released as "fetuses" is because it reduces the financial risk. Much like any product, you want to get it to market as soon as is sensible in order to see if it's worth continuing to spend time and money on it.


And this really shouldn't surprise professionals in an industry where everything's always about development velocity and releasing Minimum Viable Products as quickly into the market as possible.

> God why can’t it just be longer development time.

Where do you stop? What do the 5 tech designers do while the 2 engine programmers optimise every last byte of network traffic?

> I’m sick of the premature fetuses of games.

Come on, keep this sort of crap off here. Games being janky isn't new - look at old console games and they're basically duct-taped together. Go back to Half-Life 1 in 1998 - the Xen world is complete and utter trash. Go back farther and you have stuff that's literally unplayable [0], things that were so bad they destroyed an entire industry [1], or bugs that rendered the game uncompletable [2].

[0] https://en.wikipedia.org/wiki/Dr._Jekyll_and_Mr._Hyde_(video... [1] https://www.theguardian.com/film/2015/jan/30/a-golden-shinin... [2] https://www.reddit.com/r/gamecollecting/comments/hv63ad/comm...


Super Mario 64, widely recognized as one of the most iconic and influential games ever, was released with a build that didn't have compiler optimizations turned on. This was proven through decompilation: with exactly the right compiler and tools, recompiling with the non-optimized flags reproduces the shipped binary. Recompiling with optimizations turned on caused no problems and gave a significant performance boost.

One of the highest-rated games ever was released without the devs turning on the "make it faster" button, which would have required approximately zero effort and had zero downsides.

This kind of stuff happens because, in the end, the difference between result A and result B doesn't matter that much.

And it's very hard to have a culture of quality that doesn't get overrun by zealots who will bankrupt you while they squeeze the last 0.001% of performance out of your product before releasing. It is very hard to have a culture of quality that does the important things and skips the unimportant ones.

The people who obsess with quality go bankrupt and the people who obsess with releasing make money. So that's what we get.

A finely tuned ability to evaluate quality, combined with the pragmatism to choose what to spend time on and when, is rare.


> The people who obsess with quality go bankrupt and the people who obsess with releasing make money. So that's what we get.

I think this is a little harsh, and I’d rephrase the second half to “the people who obsess with releasing make games”.


Just wait until after launch. You get a refined experience and often much lower prices.

Obligatory mention of Kaze, who has spent the past several years optimizing Mario64 using a variety of interesting methods. Worth a watch if your interests are at the intersection of vintage gaming and programming.

https://www.youtube.com/@KazeN64


I was just about to post his video from August explaining how much excess RAM Mario 64 uses and where, which was the first serious mention I'd seen of a PS1 port being possible. He uses the PS1's smaller RAM size as a kind of benchmark.

I did not expect it to happen so soon.

https://www.youtube.com/watch?v=oZcbgNdWL7w - Mario 64 wastes SO MUCH MEMORY


I've been working on it since mid 2024, so that video was a funny coincidence :)

He's great. (And ripped!)

I wonder what someone with PS1 knowledge equivalent to Kaze's N64 knowledge could do on that console, perhaps using Mario 32 as the benchmark.

(Mario 32 = Mario 64 on PS1.)


The people who vote are also the people glued to 24-hour news.

Plus, they already own all of the online media. The important bits, anyway.


I highly doubt this will happen. It will be natural gas all the way, and maybe some coal once energy prices finally make it profitable again.

If for no other reason than that they're actively attacking renewable capacity even amid surging demand.

Tokens are, roughly speaking, how you pay for AI. So you can approximate revenue by multiplying tokens per year by the revenue per token.

(6.29 × 10^16 tokens per year) × ($10 per 10^6 tokens)
= $6.29 × 10^11
= $629,000,000,000 per year in revenue

Per the article

> "It's my view that there's no way you're going to get a return on that, because $8 trillion of capex means you need roughly $800 billion of profit just to pay for the interest," he said.

$629 billion is less than $800 billion. And we are talking raw revenue (not profit). So we are already in the red.
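As a quick sanity check, here's that arithmetic as a minimal Python sketch; the token volume and price are the assumed figures from above, and the $800 billion is the article's interest estimate, not a measured number:

    # Back-of-the-envelope: annual token revenue vs. the article's interest figure.
    tokens_per_year = 6.29e16        # assumed annual token output (from above)
    usd_per_million_tokens = 10.0    # GPT-5.1-class retail output pricing (from above)
    annual_interest = 800e9          # article's rough interest on $8T of capex

    revenue = tokens_per_year / 1e6 * usd_per_million_tokens
    print(f"revenue:   ${revenue:,.0f}")                     # $629,000,000,000
    print(f"interest:  ${annual_interest:,.0f}")             # $800,000,000,000
    print(f"shortfall: ${annual_interest - revenue:,.0f}")   # $171,000,000,000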

But it gets worse: that $10 per million tokens is the price for GPT-5.1, one of the most expensive models. The estimate also doesn't account for input tokens, which usually cost about a tenth as much as output tokens. And using the bulk API instead of the regular one halves costs again.

Realistic pricing for a datacenter's output is closer to sub-$1 per million tokens, which puts revenue at more like $70-150 billion per year. And that's revenue, not profit.

To turn a profit at current prices, the chips need to improve in performance by some factor and power costs need to fall by another. The combination of those factors needs to be at least something like 5x, but realistically more like 50x.
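To illustrate how sensitive that revenue figure is to pricing, here is the same calculation swept over a few hypothetical prices per million tokens (illustrative numbers only, not anyone's actual rate card):

    # Same assumed token volume, at a few hypothetical prices per million tokens.
    tokens_per_year = 6.29e16
    for usd_per_million_tokens in (10.0, 5.0, 1.0, 0.50):
        revenue = tokens_per_year / 1e6 * usd_per_million_tokens
        print(f"${usd_per_million_tokens:.2f}/MT -> ${revenue / 1e9:,.1f}B per year")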


The math here is mixing categories. The token calculation for a single 1-GW datacenter is fine, but then it gets compared to the entire industry’s projected $8T capex, which makes the conclusion meaningless. It’s like taking the annual revenue of one factory and using it to argue that an entire global build-out can’t be profitable.

On top of that, the revenue estimate uses retail GPT-5.1 pricing, which is the absolute highest-priced model on the market, not what a hyperscaler actually charges for bulk workloads. IBM’s number refers to many datacenters built over many years, each with different models, utilization patterns, and economics.

So this particular comparison doesn’t show that AI can’t be profitable; it’s just comparing one plant’s token output to everyone’s debt at once. The real challenges (throughput per watt, falling token prices, capital efficiency) are valid, but this napkin math isn’t proving what it claims to prove.

> but then it gets compared to the entire industry’s projected $8T capex, which makes the conclusion meaningless.

Aren't they comparing annual revenue to the annual interest you might have to pay on $8T? Which the original article estimates at $800B. That seems consistent.


But it's only for one datacenter...

I'm a little confused about why you are using revenue for a single datacenter against interest payments for 100 datacenters.

I just misread the article; it seems to bounce around between different capex figures and gigawatt figures in every paragraph.

But it looks like the investment is $80B for 1GW, which, if true, has the potential to be profitable, depending on depreciation and electricity costs.


Broad estimates I'm seeing for the cost of a 1GW AI datacenter are $30-60B. So by your own revenue projection, you can see why people think it looks like a pretty good investment.

Note that if we're including GPU prices in the top-line capex, the margin on that $70-150B is very healthy. From above, at 0.4 J/token, I'm getting 9 million tokens (MT) per kWh, or about $0.01/MT in electricity cost at $0.10/kWh. So if you can sell those MT for $1-5, you're printing money.
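For anyone checking that arithmetic, a small sketch; the energy-per-token figure and the prices are the assumptions quoted in this thread, not measured data:

    # Electricity cost per million tokens (MT), using the assumptions above.
    joules_per_token = 0.4
    usd_per_kwh = 0.10
    joules_per_kwh = 3.6e6                                # 1 kWh = 3.6 MJ

    tokens_per_kwh = joules_per_kwh / joules_per_token    # 9,000,000 tokens/kWh
    cost_per_mt = usd_per_kwh / (tokens_per_kwh / 1e6)    # ~$0.011 per MT
    sell_price_per_mt = 1.00                              # hypothetical low-end price
    margin = 1 - cost_per_mt / sell_price_per_mt
    print(f"{tokens_per_kwh / 1e6:.0f} MT/kWh, ${cost_per_mt:.3f}/MT electricity")
    print(f"gross margin vs. electricity alone at $1/MT: {margin:.1%}")

Even at the $1 end of that range, electricity is only about a 1% cost, which is the "printing money" point above.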


> So if you can sell those MT for $1-5, you're printing money.

The IF is doing a lot of heavy lifting there.

I understood the OP in the context of "human history has not produced enough tokens to feed into the machines to make the return on investment mathematically possible".

Maybe the "token production" accelerates and the need for so much compute materializes, who knows.


Ship them somewhere else, then print a banner saying, "mission accomplished."

It worked at a state level for years, with certain states bussing their homeless to other states. And recently, the USA has been building up the capability to do the same thing on an international scale.

That's the "solution" we are going to be throwing money at. Ship them to labor camps propped up by horrible regimes.


Absolutely. And in the process they will figure out how to bankrupt any utilities and local governments they can, by offloading as much of their power-generation cost overhead as possible and shopping for tax rebates.

> Now, everybody knows enough of how to do it that it's assumed for many roles

Is it? I don't know anyone who can code proficiently outside of people who work tech jobs (or used to).


The thing is that there are enough good-enough enterprise CRUD developers - especially for remote roles and/or as outsourced developers - that it’s hard to stand out from the crowd or command increasingly higher salaries. Gen AI has made the problem worse.

Even if you are targeting a major tech company, if you are trying to differentiate yourself by how well you can reverse a btree on a whiteboard, there are plenty of people who can do the same. Previous experience at BigTech isn’t a differentiator any more either - thousands of others have it too.


> why are companies going to hire me, who have gap years and are older, but not some fresh graduates who can work 80 hours per week and only demand half of the salary?

Given the cost of living, I have a hard time believing young people come out cheaper. I live in an area with cheap rents, and my mortgage payment is still less than the average rent for a one-bedroom apartment here. My cars are new and paid off, and I have pretty much all of the stuff I'd ever need in my life. Plus, no student loans.

That might be one of the real root causes of the job market for new grads being shit. The amount of money people need to meet their basic needs has skyrocketed, and young people bear the brunt of the burden. The only people who can readily afford to work too cheap are those with parents who can continue to support them to a degree.


This is what I do. I leave just enough on my resume to look "senior" while not appearing to be older than 30 or so.

Having a great, timeless LinkedIn profile picture helps too.

