
Since he might not be known to most (especially to a younger audience), the author is a writer best known for writing many of the Choose Your Own Adventure books that were hugely successful in the 80s.

Jimmy Maher wrote about them recently: https://www.filfre.net/2025/09/choose-your-own-adventure/


Thanks for sharing. This part resonated :)

"Today, it’s all too easy to see all of the limitations and infelicities of The Cave of Time and its successors: a book of 115 pages that had, as it proudly trumpeted on the cover, 40 possible endings meant that the sum total of any given adventure wasn’t likely to span more than about three choices if you were lucky. But to a lonely, hyper-imaginative eight-year-old, none of that mattered. I was well and truly smitten, not so much by what the book was as by what I wished it to be, by what I was able to turn it into in my mind by the sheer intensity of that wish."


These books were incredibly important to me as an 80s kid. I was a voracious reader in general, but I absolutely loved these because they had replay value! I remember scouring through them on long family trips in the car to find every possible ending.

The parallels with modern video games are obvious.


The first video game (and one of the first programs) I wrote was a self-styled Choose Your Own Adventure on a C64 with ASCII art and maybe a total of 10 pages.

The only person who acted impressed by it was my grandmother - who had paid for the C64 - but that was enough for me.


Same. I would compulsively graph the options as I went so I could backtrack on decisions.


In fact, this inspired me to buy such a book for my 9-year-old son! They've grown in size, apparently (250-300 pages). Let's see how he likes it in the age of omnipresent screens :)


He's also the grandfather of David Corenswet, the actor playing Superman in the latest movie.


nice trivia


Discussed yesterday, for anyone curious:

Choose Your Own Adventure - https://news.ycombinator.com/item?id=45337450 - Sept 2025 (80 comments)


Thanks for noting this! I had no idea while I was reading the piece, but I loved those books as a kid. What a delightful connection.


I was pleased that at my local toy store (yes, we still have one, The Time Machine in Manchester, CT) they carry Choose Your Own Adventure books. What’s more, last week we picked up a copy of “The Cave of Time”. So many memories of that book growing up.


The original headline actually said that, before it got edited out. :/


Sherlock Holmes stories are very interesting to read in order because they span a good 40 years. The first stories are set in the classic Victorian setting with horses and carriages, while in the later ones the first cars appear, WW1 happens, etc.


I don’t see anything inherently wrong in a news site reporting different views on the same topic.

I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line, cherry-picking which news to cover and how to spin it, which seems to be the case for most (I’m talking in general terms).


That makes no sense. No one sane wants to go back to a time where all mobile electronics had separate chargers, especially since the number of mobile devices we use on a daily basis is higher than ever before.

If anything, manufacturers that are able to provide working, compatible solutions should be preferred by consumers over those that don't, and the laws of economics will take care of the rest.

But some of those manufacturers have large loyal customer bases that will find ways to justify them even if they were to employ child labor, so there is that.


Yes, thanks OP for sharing. I check the HN front page almost every day and had no clue such sophisticated scams existed (I pretty much don’t use social media).

It’s easy to think “eh, it will never happen to me”, but hindsight is 20/20. I impulse-donated to things like Wikipedia in the past, and I’m as susceptible to FOMO as most people.


"what model are you?"

ChatGPT said: You're chatting with ChatGPT based on the GPT-4o architecture (also known as GPT-4 omni), released by OpenAI in May 2024.


Actually, this trick has been proven to be useless in a lot of cases.

LLMs don’t inherently know what they are because "they" are not themselves part of the training data.

However, maybe it’s working here because the information is somewhere in their pre-prompt; if it weren’t, the model wouldn’t say "I don’t know" but would rather hallucinate something.

So maybe that’s true but you cannot be sure.


If you believe 'leaked system prompts', it tends to be part of the system prompt.

I believe most of these came from asking the LLMs, and I don't know if they've been proven to not be a hallucination.

https://github.com/jujumilk3/leaked-system-prompts


It's injected into their system prompt.
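As an illustration of that mechanism, here is a minimal sketch assuming an OpenAI-style chat message format; the exact wording and the build_messages helper are hypothetical, since the real ChatGPT system prompt isn't public (the "gpt-4o" identifier is just the one quoted in the answer above).

  # Hypothetical sketch only: a deployment can make a model "know" its own name
  # by stating it in the system prompt. This does not reflect the actual
  # (non-public) ChatGPT system prompt.

  MODEL_ID = "gpt-4o"  # assumed identifier, taken from the answer quoted above

  def build_messages(user_question: str) -> list[dict]:
      """Assemble the chat transcript that would actually be sent to the model."""
      system_prompt = (
          f"You are ChatGPT, a large language model based on the {MODEL_ID} "
          "architecture."
      )
      return [
          {"role": "system", "content": system_prompt},
          {"role": "user", "content": user_question},
      ]

  if __name__ == "__main__":
      # Without the system line, the model has no reliable way to answer this;
      # with it, "what model are you?" is just the prompt being read back.
      for message in build_messages("what model are you?"):
          print(f"{message['role']}: {message['content']}")

Of course, if the backend swaps the underlying model between responses but keeps the same injected text, the answer stays the same, which is the caveat raised in the reply below.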


...which is useless when the model gets changed in-between responses.


Does it even make sense to call them 'GPUs'? (I just checked NVIDIA's product page for the H100 and it is indeed labelled that way.)

There should be a quicker way to differentiate between 'consumer-grade hardware that is mainly meant to be used for gaming and can also run LLM inference in a limited way' and 'business-grade hardware whose main purpose is AI training or running inference for LLMs'.


We are fast approaching the return of the math coprocessor. In fashion they say that trends tend to reappear roughly every two decades, so it's overdue.


Yeah, I would love for Nvidia to introduce a faster update cycle for their hardware, so that we'll have models like "H201", "H220", etc.

I think it would also make sense to replace "H" with a brand number, sort of like they already do for consumer GPUs.

So then maybe one day we'll have a math coprocessor called "Nvidia 80287".


I remember building high-end workstations for a summer job in the 2000s, where I had to fit Tesla cards in the machines. I don't remember what their device names were; we just called them Tesla cards.

"Accelerator card" makes a lot of sense to me.


It's called a Tensor Core, and it's in most GPUs.


"GPGPU" was something from over a decade ago; for general purpose GPU computing


Yeah, Crysis came out in 2007 and could run physics on the GPU.


I think Apple calls them NPUs and Broadcom calls them XPUs. Given they’re basically the number 2 and 3 accelerator manufacturers, one of those probably works.


By the way, I wonder: what has more performance, a $25,000 professional GPU or a bunch of cheaper consumer GPUs costing $25,000 in total?


Consumer GPUs, in theory and by a large margin (10 5090s will eat an H100's lunch with 6 times the bandwidth, 3x the VRAM and a relatively similar compute ratio), but your bottleneck is the interconnect, and that is intentionally crippled to avoid Beowulf GPU clusters eating into their datacenter market.

The last consumer GPU with NVLink was the RTX 3090. Even the workstation-grade GPUs lost it.

https://forums.developer.nvidia.com/t/rtx-a6000-ada-no-more-...
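For a rough back-of-envelope sketch of that trade-off: the spec figures below are approximate public numbers treated as assumptions (they won't reproduce the exact 6x/3x ratios above), and the only point is that the raw aggregate dwarfs a single card while the link between cards does not.

  # Back-of-envelope comparison, not a benchmark. Spec figures are approximate
  # and assumed for illustration; real throughput depends heavily on how often
  # data has to cross the much slower link between consumer cards.

  H100 = {"vram_gb": 80, "mem_bw_tbs": 3.35}      # assumed SXM-class figures
  RTX_5090 = {"vram_gb": 32, "mem_bw_tbs": 1.79}  # assumed consumer figures
  N_CONSUMER = 10                                 # roughly $25,000 worth

  aggregate_vram = N_CONSUMER * RTX_5090["vram_gb"]
  aggregate_bw = N_CONSUMER * RTX_5090["mem_bw_tbs"]

  print(f"Aggregate VRAM: {aggregate_vram} GB vs {H100['vram_gb']} GB "
        f"({aggregate_vram / H100['vram_gb']:.1f}x)")
  print(f"Aggregate memory bandwidth: {aggregate_bw:.1f} TB/s vs "
        f"{H100['mem_bw_tbs']} TB/s ({aggregate_bw / H100['mem_bw_tbs']:.1f}x)")

  # The catch: without NVLink, every hop between consumer cards goes over PCIe
  # (tens of GB/s), so any workload sharded across them is paced by the
  # interconnect rather than by these aggregate numbers.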


H100s also have custom async WGMMA instructions, among other things. From what I understand, the async instructions at least formalize the notion of pipelining, which engineers were already using implicitly: to optimize memory accesses you're effectively trying to overlap them with compute in that kind of optimal parallel manner.


I just specify SXM (node) when I want to differentiate from PCIe. We have H100s in both.


We could call the consumer ones GFX cards, and keep GPU for the matrix-multiplying ones.


GPU stands for "graphics processing unit" so I'm not sure how your suggestion solves it.

Maybe renaming the device to an MPU, where the M stands for "matrix/math/MIPS", would make it more semantically correct?


I think that G was changed to "general", so now it's "general processing unit".


This doesn't seem to be true at all. It's a highly specialized chip for doing highly parallel operations. There's nothing general about it.

I looked around briefly and could find no evidence that it's been renamed. Do you have a source?


The CPU is already the general (computing) processing unit, so that wouldn't make sense.


Well, does it come with graphics connectors?


Nope, doesn't have any of the required hardware to even process graphics iirc


Although the RTX Pro 6000 is not consumer-grade, it does come with graphics ports (four DisplayPorts) and does render graphics like a consumer card :) So it seems the difference between the segments is becoming smaller, not bigger.


That’s because it’s intended as a workstation GPU, not one used in servers.


Sure, but it still sits in the 'business-grade hardware whose main purpose is AI training or running inference for LLMs' segment the parent mentioned, yet it has graphics connectors. So the only thing I'm saying is that just looking at that won't help you understand which segment a GPU goes into.


I'd like to point at the first-revision AMD MI50/MI60 cards, which were at the time the most powerful GPUs on the market, at least by memory bandwidth.

That is, defining a GPU as "can output a contemporary display connector signal and is more than just a RAMDAC/framebuffer-to-cable translator, starting with even just some 2D blitting acceleration".


You can install Ollama with a script fetched with curl and run an LLM with a grand total of two bash commands (including the curl one).


If I understand correctly, looking at API pricing for Sonnet, output tokens are 5 times more expensive than input tokens.

So, if rate limits are based on an overall token cost, it is likely that one will hit them sooner if CC reads a few files and writes a lot of text as output (comments/documentation) than if it analyzes a large codebase and then makes a few edits in code.
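To make that concrete, here is a small sketch; the per-million-token prices are assumptions chosen to match the roughly 5:1 output:input ratio above (e.g. $3 in, $15 out), and the two usage profiles are made up for illustration.

  # Illustrative only: assumed Sonnet-style API prices with a 5:1 output/input
  # ratio, and two made-up Claude Code sessions with the same total token count.

  PRICE_IN_PER_M = 3.00    # assumed dollars per million input tokens
  PRICE_OUT_PER_M = 15.00  # assumed dollars per million output tokens

  def session_cost(input_tokens: int, output_tokens: int) -> float:
      """Dollar cost of one session under the assumed prices."""
      return (input_tokens * PRICE_IN_PER_M + output_tokens * PRICE_OUT_PER_M) / 1e6

  # Same 1.1M total tokens, opposite read/write balance.
  read_heavy = session_cost(input_tokens=1_000_000, output_tokens=100_000)
  write_heavy = session_cost(input_tokens=100_000, output_tokens=1_000_000)

  print(f"Read-heavy (large codebase, few edits):  ${read_heavy:.2f}")
  print(f"Write-heavy (few files, lots of output): ${write_heavy:.2f}")
  # If limits track overall cost, the write-heavy session exhausts them roughly
  # 3.4x faster under these assumptions.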


If we assume that AI coding actually increases a programmer's productivity without side effects (which of course is a controversial assumption, but one that doesn't affect the actual question):

1) If you are a salaried employee and you are seen as less productive than your colleagues who use AI, at the very least you won't be valued as much. Either you will eventually earn less than your colleagues or be made redundant.

2) If you are a consultant, you'll be able to invoice more work in the same amount of time. Of course, so will your competitors, so rates for a set amount of work will probably decrease.

3) If you are an entrepreneur, you will be able to create a new product hiring fewer people (or on your own). Of course, so will your competitors, so the expectations for viable MVPs will likely be raised.

In short, if AI coding assistants actually make a programmer more productive, you will likely have to learn to live with it in order to not be left behind.


This is only true if the degree to which they increase productivity meaningfully rises above the level of noise.

That is to say: "Productivity" is notoriously extremely hard to measure with accuracy and reliability. Other factors such as different (and often terrible) productivity measures, nepotism/cronyism, communication skills, self-marketing skills, and what your manager had for breakfast on the day of performance review are guaranteed to skew the results, and highly likely, in what I would guess is the vast majority of cases, to make any productivity increases enabled by LLMs nearly impossible to detect on a larger scale.

Many people like to operate as if the workplace were a perfectly efficient market system, responding quickly and rationally to changes like productivity increases, but in fact, it's messy and confusing and often very slow. If an idealized system is like looking through a pane of perfectly smooth, clear glass, then the reality is, all too often, like looking through smudgy, warped, clouded bullseye glass into a room half-full of smoke.


The problem is that it doesn't actually matter if it really makes a programmer more productive or not.

Because productivity is hard to measure, if we just assume that using AI tools makes us more productive, we're likely to be making stupid choices.

And since I strongly think that AI coding is not making me personally more productive, it puts me in a situation where I have to behave irrationally in order to show employers that I'm a good worker bee.

I am increasingly feeling trapped between two losing choices: I take the mental anguish of using AI tools against my better judgment, or I take the financial insecurity (and associated mental anguish) of just being unemployed.

