Since he might not be known to most (especially a younger audience), the author is a writer best known for many of the Choose Your Own Adventure books that were hugely successful in the 80s.
"Today, it’s all too easy to see all of the limitations and infelicities of The Cave of Time and its successors: a book of 115 pages that had, as it proudly trumpeted on the cover, 40 possible endings meant that the sum total of any given adventure wasn’t likely to span more than about three choices if you were lucky. But to a lonely, hyper-imaginative eight-year-old, none of that mattered. I was well and truly smitten, not so much by what the book was as by what I wished it to be, by what I was able to turn it into in my mind by the sheer intensity of that wish."
These books were incredibly important to me as an 80s kid. I was a voracious reader in general, but I absolutely loved these because they had replay value! I remember scouring through them on long family trips in the car to find every possible ending.
The parallels with modern video games are obvious.
The first video game (and one of the first programs) I wrote was a self-styled Choose Your Own Adventure on a C64 with ASCII art and maybe a total of 10 pages.
The only person who acted impressed by it was my grandmother - who had paid for the C64 - but that was enough for me.
In fact, this inspired me to buy such a book for my 9-year-old son! They've grown in size, apparently (250-300 pages). Let's see how he likes it in the age of omnipresent screens :)
I was pleased that at my local toy store (yes, we still have one, The Time Machine in Manchester, CT) they carry Choose Your Own Adventure books. What’s more, last week we picked up a copy of “The Cave of Time”. So many memories of that book growing up.
Sherlock Holmes stories are very interesting to read in order because they span a good 40 years. The first stories are set in the classic Victorian setting with horses and carriages and in the later ones the first cars appear, WW1 happens, etc…
I don’t see anything inherently wrong in a news site reporting different views on the same topic.
I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line, cherry-picking which news to cover and how to spin it, which seems to be the case for most (I'm talking in general terms).
That makes no sense. No one sane wants to go back to a time when all mobile electronics had separate chargers, especially since the number of mobile devices we use on a daily basis is higher than ever before.
If anything, manufacturers that are able to provide working, compatible solutions should be preferred by consumers to those that don't, and the laws of economics will take care of the rest.
But some of those manufacturers have large loyal customer bases that will find ways to justify them even if they were to employ child labor, so there is that.
Yes, thanks OP for sharing. I check the HN front page almost every day and had no clue such sophisticated scams existed (I pretty much don't use social media).
It's easy to think "eh, it will never happen to me", but hindsight is 20/20. I've impulse-donated to things like Wikipedia in the past, and I'm as susceptible to FOMO as most people.
Actually, this trick has been shown to be useless in a lot of cases.
LLMs don’t inherently know what they are because "they" are not themselves part of the training data.
However, maybe it works because the information is somewhere in their pre-prompt; if it weren't, the model wouldn't say "I don't know" but would rather hallucinate something.
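To illustrate what I mean by "pre-prompt": the identity information is typically injected as a system prompt by whoever deploys the model, rather than being something the weights inherently contain. A minimal sketch in Python; the model id and the wording of the system prompt are just illustrative assumptions, not the provider's actual setup:

    # Minimal sketch: the model's "identity" is injected as a system prompt
    # by whoever deploys it; it is not something the weights inherently know.
    # The model id and the system prompt wording are illustrative assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=200,
        system="You are Claude, a large language model made by Anthropic.",
        messages=[{"role": "user", "content": "What model are you, exactly?"}],
    )
    print(response.content[0].text)

Strip out that system line and the model has no reliable way of knowing what it is.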
Does it even make sense to keep calling them 'GPUs'? (I just checked NVIDIA's product page for the H100, and it is indeed labeled that way.)
There should be a quicker way to differentiate between 'consumer-grade hardware that is mainly meant for gaming and can also run LLM inference in a limited way' and 'business-grade hardware whose main purpose is AI training or running inference for LLMs'.
I remember building high-end workstations for a summer job in the 2000s, where I had to fit Tesla cards into the machines. I don't remember what their device names were; we just called them Tesla cards.
I think Apple calls them NPUs and Broadcom calls them XPUs. Given they're basically the number 2 and 3 accelerator manufacturers, one of those probably works.
Consumer GPUs in theory, and by a large margin (ten 5090s will eat an H100's lunch, with 6x the bandwidth, 3x the VRAM, and a relatively similar compute ratio), but your bottleneck is the interconnect, and that is intentionally crippled to keep Beowulf-style GPU clusters from eating into their datacenter market.
The last consumer GPU with NVLink was the RTX 3090. Even the workstation-grade GPUs lost it.
The H100 also has custom async WGMMA instructions, among other things. From what I understand, the async instructions at least formalize the notion of pipelining, which engineers were already using implicitly: to optimize memory accesses, you're effectively trying to overlap them with compute in that kind of optimal parallel manner.
Although the RTX Pro 6000 is not consumer-grade, it does come with graphics ports (four DisplayPorts) and does render graphics like a consumer card :) So it seems the difference between the segments is getting smaller, not bigger.
Sure, but it still sits in the 'business-grade hardware whose main purpose is AI training or running inference for LLMs' segment the parent mentioned, yet has graphics connectors, so all I'm saying is that just looking at those won't tell you which segment a GPU falls into.
I'd like to point at the first-revision AMD MI50/MI60 cards, which were at the time the most powerful GPUs on the market, at least by memory bandwidth.
Defining a GPU as "can output a contemporary display connector signal and is more than just a RAMDAC/framebuffer-to-cable translator, starting with even just some 2D blitting acceleration".
If I understand correctly, looking at API pricing for Sonnet, output tokens are 5 times more expensive than input tokens.
So, if rate limits are based on an overall token cost, one will likely hit them sooner if CC reads a few files and writes a lot of text as output (comments/documentation) than if it analyzes a large codebase and then makes a few code edits.
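To make that concrete, here's a rough back-of-the-envelope sketch. The prices (a 5x output/input ratio) and the token counts are illustrative assumptions, not exact figures:

    # Back-of-the-envelope sketch of the 5x input/output price asymmetry.
    # Prices and token counts are illustrative assumptions, not exact figures.
    INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
    OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens (5x input)

    def cost_usd(input_tokens: int, output_tokens: int) -> float:
        """Weighted cost that an overall rate limit might be based on."""
        return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
             + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

    # Scenario A: read a few files, write a lot of comments/documentation.
    print(cost_usd(input_tokens=50_000, output_tokens=200_000))   # ~3.15

    # Scenario B: analyze a large codebase, make a few small edits.
    print(cost_usd(input_tokens=500_000, output_tokens=20_000))   # ~1.80

With these made-up numbers, the write-heavy scenario costs more despite touching far fewer total tokens, which is the asymmetry I mean.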
If we assume that AI coding actually increases productivity of a programmer without side effects (which of course is a controversial assumption, but not affecting the actual question):
1) If you are a salaried employee and you are seen as less productive than your colleagues who use AI, at the very least you won't be valued as much. Either you will eventually earn less than your colleagues or be made redundant.
2) If you are a consultant, you'll be able to invoice more work in the same amount of time. Of course, so will your competitors, so rates for a set amount of work will probably decrease.
3) If you are an entrepreneur, you will be able to create a new product while hiring fewer people (or on your own). Of course, so will your competitors, so the expectations for a viable MVP will likely be raised.
In short, if AI coding assistants actually make a programmer more productive, you will likely have to learn to live with it in order to not be left behind.
This is only true if the degree to which they increase productivity meaningfully rises above the level of noise.
That is to say: "Productivity" is notoriously extremely hard to measure with accuracy and reliability. Other factors such as different (and often terrible) productivity measures, nepotism/cronyism, communication skills, self-marketing skills, and what your manager had for breakfast on the day of performance review are guaranteed to skew the results, and highly likely, in what I would guess is the vast majority of cases, to make any productivity increases enabled by LLMs nearly impossible to detect on a larger scale.
Many people like to operate as if the workplace were a perfectly efficient market system, responding quickly and rationally to changes like productivity increases, but in fact, it's messy and confusing and often very slow. If an idealized system is like looking through a pane of perfectly smooth, clear glass, then the reality is, all too often, like looking through smudgy, warped, clouded bullseye glass into a room half-full of smoke.
The problem is that it doesn't actually matter if it really makes a programmer more productive or not.
Because productivity is hard to measure, if we just assume that using AI tools makes us more productive, we're likely to end up making stupid choices.
And since I strongly believe that AI coding is not making me personally more productive, it puts me in a situation where I have to behave irrationally in order to show employers that I'm a good worker bee.
I increasingly feel trapped between two losing choices: I take the mental anguish of using AI tools against my better judgment, or I take the financial insecurity (and associated mental anguish) of just being unemployed.
Jimmy Maher wrote about them recently https://www.filfre.net/2025/09/choose-your-own-adventure/