Hacker News | Rebuff5007's comments

That's true, but I think the blame is more on "American society" and not the kids working through the system.

50 years ago, college was cheaper. From what I understand, getting a job with a college degree was much easier. Social media didn't exist and people weren't connected to a universe of commentary 24/7. Kids are dealing with all this stuff, and if requesting a "disability accommodation" is helping them through it, that seems fine?


That seems naive. It would be like dumping tons of deer food into the woods and then, the next year, when deer are grossly overpopulated, wondering "why are there so many deer now?".

Humans, en masse, are dumb animals: if we give them the opportunity for individual gratification at long-term cost to the group, they are going to take it immediately.


Indeed, it's much more reflective of American society in 2025 than it is of the individual students (or even Stanford in general).

2025 didn't invent sneaky.

"Snake oil" is over 100 years old.


Failing out of college can be life-ruining. Tens or hundreds of thousands of dollars of high-interest non-dischargeable debt and employment opportunities completely nuked.

Come on, let's be serious. Most Stanford undergraduate courses aren't that tough, grade inflation is rampant, and almost anyone who gets admitted can probably graduate regardless of accommodations or lack thereof. We're talking about the difference between getting an A or A- here. And Stanford has such generous financial aid that students from families earning less than $150K get free tuition, so no one should be leaving with huge student debts.

> no one should be leaving with huge student debts.

"In the 2023-24 academic year, 88% of undergraduates graduated without debt, and those who borrowed graduated with a median debt of $13,723." Source: https://news.stanford.edu/stories/2025/02/stanford-sets-2025...

So strictly speaking, not "no one". (But certainly smaller than the national averages.)


Sorry, I often forget that I went to a university that's actually challenging, and that's not the typical case.

But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

As an average consumer, I actually feel like I'm less locked into Gemini/ChatGPT/Claude than I am to Apple or Google for other tech (e.g. photos).
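
For what it's worth, here's a minimal sketch of what "fine-tuned for personal use" can look like with the Hugging Face stack; the model name is just a placeholder for whatever small open-weight model fits your hardware:

    # attach a small LoRA adapter to an open-weight model (sketch only, no training loop shown)
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("your-org/small-open-model")  # placeholder name
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # typically well under 1% of the base model's weights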


> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

It was already tough to run flagship-class local models and it's only getting worse with the demand for datacenter-scale compute from those specific big players. What happens when the model that works best needs 1TB of HBM and specialized TPUs?
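
For a rough sense of scale, my own back-of-envelope arithmetic (the 500B-parameter figure is just an assumption for illustration):

    # weight memory alone for a hypothetical 500B-parameter model stored in bf16 (2 bytes/param)
    params = 500e9
    bytes_per_param = 2
    print(params * bytes_per_param / 1e12, "TB")  # ~1.0 TB, before KV cache, activations, or batching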

AI computation looks a lot like early Bitcoin: first the CPU, then the GPUs, then the ASICs, then the ASICs mostly being made specifically by syndicates for syndicates. We are speedrunning the same centralization.


It appears to me the early exponential gains from new models have plateaued. Current gains seem very marginal; it could be that the future best model that needs "1TB of HBM and specialized TPUs" won't be all that much better than the models we have today. All we need to do is wait for commodity hardware that can run current models, and OpenAI / Anthropic et al. are done if their whole plan to monetize this is to inject ads into the responses. That is, unless they can actually create AGI that requires infrastructure they control, or some other advancement.

That's what I was thinking as I was listening to the "be like clippy" video linked in the parent. Those local models probably won't match the quality of the big guys' for a long time to come, but for now the local, open models have a lot of potential to let us escape this power consolidation before it's complete while still giving users 75-80% of the functionality. That remaining 20-25%, combined with the new skill of managing an LLM, is where the self-value comes in: the bit that says, "I do own what I built or learned or drew."

The hardest part with that IMO will be democratizing the hardware so that everybody can afford it.


Hopes that we'll all be running LLMs locally, in the face of skyrocketing prices for all kinds of memory, sound very similar to the cryptoanarchists' ravings about full copies of the blockchain stored locally on every user's device in the face of the exponential growth of its size.

The only difference is that memory prices skyrocketing is a temporary thing resulting from a spike in demand from incompetent AI megalomaniacs like Sam Altman who don't know how to run a company and are desperate to scale because that's the only kind of sustainability they understand.

Once the market either absorbs that demand (if it's real) or else over-produces for it, RAM prices are going to either slowly come back down (if it's real) or plunge (if it isn't).

People are already running tiny models on their phones, and there's a Mistral 3B model that runs locally in a browser (https://huggingface.co/spaces/mistralai/Ministral_3B_WebGPU).
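
As a hedged sketch of how low the barrier already is, something like this runs a small open-weight model entirely on local hardware (the model name is a placeholder, not a recommendation):

    # local text generation via Hugging Face transformers; downloads once, then runs offline
    from transformers import pipeline

    pipe = pipeline("text-generation", model="your-org/small-3b-instruct")  # placeholder name
    out = pipe("Explain in one sentence why local inference matters:", max_new_tokens=64)
    print(out[0]["generated_text"])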

So we'll see what happens. People used to think crypto currencies were going to herald a new era of democratizing economic (and other) activity before the tech bros turned Bitcoin into a pyramid scheme. It might be too late for them to do the same with locally-run LLMs but the NVidias and AMDs of the world will be there to take our $.


There is a case that the indices owned by the major search engines are a form of centralization of power. Normal people and smaller companies would have to pay a lot of money to get indices for their new competing search engine. However the analogy falls apart when you look at a) the scale of the investments involved and b) the pervasiveness of the technology.

Creating a search engine index requires several orders of magnitude less computing power than creating the weights of an LLM. Like, it is theoretically possible for somebody with a lot of money to spare to create a new search index, but only the richest of the rich can do that with an LLM.

And search engines are there to fulfill exactly one technical niche, albeit an important one. LLMs are stuffed into everything, whether you like it or not. Like, if you want to use Zoom, you are not told to “enrich your experience with web search”; you are told, “here is an AI summary of your conversation”.


Exactly. I was paying for Gemini Pro, and moved to a Claude subscription. I'm going to switch back to Gemini for the next few months. The cloud centralization, in its current product stage, allows you to be a model butterfly. And these affordable and capable frontier model subscriptions help me train and modify my local open-weight models.

Economies of scale make this a space that is really difficult to be competitive in as a small player.

If it's ever to be economically viable to run a model like this, you basically need to run it non-stop, and make money doing so non-stop in order to offset the hardware costs.


I think the good news is that open-source models are a genuine counterweight to these closed-source models. The moment ads become egregious, I expect to see and use services offering an affordable "private GPT on demand, fine-tuned as you want it".

So instead of a single everything-LLM, I will have a few cheaper subscriptions: a coding LLM, a life-planning LLM (recipes, and some travel advice?). That's probably it.


Hot take: a flagship Silicon Valley startup built on hype and overzealous ambition crashing and burning in 2026 is exactly what the industry needs right now.

You seem to be complicating your thinking here.

They are spending more money than they are bringing in. This means they are losing money.


Amazon had a product on day 1... particularly one that was unique and made a lot of sense for the moment it was introduced.

OpenAI doesn’t have a product? Have we existed in the same reality for the last 3 years? Something something fastest-growing user base in the history of tech.

No, they don't. The value of AI isn't the AI itself; it's purely the output.

If someone else can achieve the same output as OpenAI at a similar price, they are completely toast. There is absolutely nothing tying you to ChatGPT because ChatGPT doesn't matter, only what it produces.

Amazon was in a (similar) situation, but not quite, because they offered a unique experience. But I strongly believe that if Sears had just kept their catalogue going for another decade, Amazon would not exist.


Something Something every one of those users (even the paying ones!) loses OpenAI money

Selling books online was certainly profitable, but I'm not sure about unique. Amazon's big success is that they had no particular ties to any existing publisher, so they didn't have the corporate headwinds of "this will kill our brick-and-mortar stores and their distribution systems!".

No, it was unique. Being able to browse such an extensive collection of books and order them from your computer was mind-blowing.

Prominent AI researcher (currently at liquid.ai, affiliated with MIT) found in recently released Epstein files intellectualizing racial superiority.


And a sizable portion of the population believes vaccines don't work and/or contain 5G!

I feel like I'm watching a tsunami about to hit while literally already drowning from a different tsunami.


Here's a definition: how impressive is the output relative to the input? And by input, I don't just mean the prompt, but all the training data itself.

Do you think someone who has only ever studied pre-calc would be able to work through a calculus book if they had sufficient time? How about a multi-variable calc book? How about grad-level mathematics?

IMO intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.


Hear, hear!

I have long thought this, but never had as good a way to put it as you did.

If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite; they fail to understand things after untold effort, training data, and training.

So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.

Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption, an incorrect piece of reasoning generating deterministic wrong answers? LLMs don't do that; they just lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about - you may have also had this experience with someone :)


> So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence

This would be the equivalent of removing all of a human's senses from birth and expecting them to somehow learn things. They will not. Therefore humans are not intelligent?

> LLMs don't do that; they just lack understanding.

You have no idea what they are doing. Since they are smaller than the dataset, they must have learned an internal algorithm. This algorithm is drawing patterns from somewhere - those are its internal, incorrect assumptions. It does not operate in the same way that a human does, but it seems ridiculous to say that it lacks intelligence because of that.

It sounds like you've reached a conclusion, that LLMs cannot be intelligent because they have said really weird things before, and are trying to justify it in reverse. Sure, it may not have grasped that particular thing. But are you suggesting that you've never seen a human who is feigning understanding of a particular topic say some really weird things, akin to an LLM? I'm an educator, and I have heard the strangest things that I just cannot comprehend no matter how much I dig. It really feels like shifting goalposts. We need to do better than that.


> and are trying to justify it in reverse

In split-brain experiments this is exactly how one half of the brain retroactively justifies the action of the other half. Maybe it is the case in LLMs that an overpowered latent feature sets the overall direction of the "thought" and then inference just has to make the best of it.


You might be interested in reading about the minimum description length (MDL) principle [1]. Despite all the dissenters to your argument, what you're positing is quite similar to MDL. It's how you can fairly compare models (I did some research in this area for LLMs during my PhD).

Simply put, to compare models, you describe both the model and the training data using a code (usually reported as a number of bits). The trained model that represents the data within the fewest bits is the more powerful model.

This paper [2] from ICML 2021 shows a practical approach for attempting to estimate MDL for NLP models applied to text datasets.

[1]: http://www.modelselection.org/mdl/

[2]: https://proceedings.mlr.press/v139/perez21a.html
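
To make the two-part code idea concrete, here's a toy comparison in the spirit of MDL (the numbers are purely illustrative, not taken from [2]):

    # two-part MDL: total cost = L(M) + L(D|M), both in bits; the smaller total wins
    def mdl(model_bits, data_bits_given_model):
        return model_bits + data_bits_given_model

    candidates = {
        "small_model": mdl(1e6, 9e6),  # compact model that compresses the data poorly
        "large_model": mdl(5e6, 3e6),  # bigger model that compresses the data well
    }
    print(min(candidates, key=candidates.get))  # "large_model", at 8e6 total bits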


Animals think but come with instincts, which breaks the output-relative-to-input test you propose. Behaviors are essentially pre-programmed input from millions of years of evolution, stored in the DNA/neurology. Their learning is thus typically associative and domain-specific, not abstract extrapolation.

A crow bending a piece of wire into a hook to retrieve food demonstrates a novel solution extrapolated from minimal, non-instinctive, environmental input. This kind of zero-shot problem-solving aligns better with your definition of intelligence.


I'm not sure I understand what you're getting at. You seem to be deliberately comparing apples and oranges here: for an AI, we're supposed to include the entire training set in the definition of its input, but for a human we don't include the entirety of that human's experience and only look at the prompt?


> but for a human we don't include the entirety of that human's experience and only look at the prompt?

When did I say that? Of course you look at a human's experience when you judge the quality of their output. And you also judge their output based on the context they did their work in. Newton wouldn't be Newton if he was the 14th guy to claim that the universe is governed by three laws of motion. Extending the example I used above, I would be more impressed if an art student aced a tough calc test than if a math student did, given that the math student has probably spent much more time with the material.

"Intelligence and "thinking" are abstract concepts, and I'm simply putting forward a way that I think about them. It works very much outside the context of AI too. The "smartest" colleagues I've worked with are somehow able to solve a problem with less information or time than I need. Its usually not because they have more "training data" than me.


That's an okay-ish definition, but to me this is more about whether this kind of "intelligence" is worth it, not whether it is intelligence itself. The current AI boom clearly thinks it is worth putting in that much input to get the current frontier-model level of output. Also, don't forget the input scales across roughly 1B weekly users at inference time.

I would say a good definition has to, minimally, take on the Turing test (even if you disagree, you should say why). Or in current vibe parlance, it does "feel" intelligent to many people; they see intelligence in it. In my book this allows us to call it intelligent, at least loosely.


There are plenty of humans who will never "get" calculus, despite numerous attempts at the class and countless hours of 1:1 tutoring. Are those people not intelligent? Do they not think? We could say yes they aren't, but by the metric of making money, plenty of people are smart enough to be rich, while college math professors aren't. And while that's a facile way of measuring someone's worth or their contribution to society (some might even say "bad"), it remains that even if some people can't understand calculus, they may be intelligent enough about other humans to get rich in some fashion that wasn't simply handed to them.


I don't think it's actually true that someone with:

1. A desire to learn calculus
2. A good teacher
3. No mental impairments such as dementia or other major brain drainers

could not learn calculus. Most people don't really care to try or don't get good resources. What you see as an intelligent mathematician is almost always someone born with better resources who was also encouraged to pursue math.


1 and 3 are loopholes large enough to drive a semi truck through. You could calculate how far the truck traveled with a double integral, however, if you have its acceleration.
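
(For the record, assuming constant acceleration a and zero initial velocity, that double integral works out to:)

    x(T) = \int_0^T \int_0^t a \, ds \, dt = \frac{1}{2} a T^2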


Yeah, that's compression. Although your later comments neglect the many years of physical experience that humans have as well as the billions of years of evolution.

And yes, by this definition, LLMs pass with flying colours.


I hate when people bring up this “billions of years of evolution” idea. It’s completely wrong and deluded in my opinion.

Firstly, humans have not been evolving for "billions" of years.

Homo sapiens have been around for maybe 300,000 years, and the "Homo" genus for 2-3 million years. Before that we split from the chimp lineage, and that's 6-7 million years ago.

If you want to look at the entire arc of brain development, i.e. from mouse-like creatures through to apes and then humans, that's 200M years.

If you want to think about generations, it's only 50-75M generations, i.e. "training loops".

That’s really not very many.

Also, the bigger point is this: for 99.9999% of that time we had no writing, nor any kind of complex thinking required.

So our ability to reason about maths, writing, science, etc. only dates to the last 2,000-2,500 years! I.e. only roughly 200 or so generations.

Our brain was not “evolved” to do science, maths etc.

Most of evolution was us running around just killing stuff and eating and having sex. It’s only a tiny tiny amount of time that we’ve been working on maths, science, literature, philosophy.

So actually, these models have had a massive, massive amount more training than humans to do roughly the same thing, while using insane amounts of computing power and energy.

Our brains evolved for a completely different world, environment, and daily life than the life we lead now.

So yes, LLMs are good, but they have been exposed to more data and training time than any human could be unless we lived for 100,000 years, and they still perform worse than we do on most problems!


Okay, fine, let's remove the evolution part. We still have an incredible amount of our lifetime spent visualising the world and coming to conclusions about the patterns within. Our analogies are often physical and we draw insights from that. To say that humans only draw their information from textbooks is foolhardy; at the very least, you have to agree there is much more.

I realise upon reading the OP's comment again that they may have been referring to "extrapolation", which is hugely problematic from the statistical viewpoint when you actually try to break things down.

My argument for compression asserts that LLMs see a lot of knowledge, but are actually quite small themselves. To output a vast amount of information in such a small space requires a large amount of pattern matching and underlying learned algorithms. I was arguing that humans are actually incredible compressors because we have many years of history in our composition. It's a moot point though, because it is the ratio of output to capacity that matters.


They can't learn iterative algorithms if they cannot execute loops. And blurting out an output which we then feed back in does not count as a loop. That's a separate invocation with fresh inputs, as far as the system is concerned.

They can attempt to mimic the results for small instances of the problem, where there are a lot of worked examples in the dataset, but they will never ever be able to generalize and actually give the correct output for arbitrarily sized instances of the problem. Not with current architectures. Some algorithms simply can't be expressed as a fixed-size matrix multiplication.


>Most of evolution was us running around just killing stuff and eating and having sex.

Tell Boston Dynamics how to do that.

Mice inherited their brain from their ancestors. You might think you don't need a working brain to reason about math, but that's because you don't know how thinking works; it's an argument from ignorance.


You've missed the point entirely.

People argue that humans have had the equivalent of training a frontier LLM for billions of years.

But training a frontier LLM involves taking multiple petabytes of data, effectively all of recorded human knowledge and experience, every book ever written, every scientific publication ever written, all of known maths, science, encyclopedias, podcasts, etc. And then training that for millions of years' worth of GPU-core time.

You cannot possibly equate human evolution with LLM training, it's ridiculous.

Our "training" time didn't involve any books, maths, science, reading, 99.9999% of our time was just in the physical world. So you can quite rationally argue that our brains ability to learn without training is radically better and more efficient that the training we do for LLMs.

Us running around in the jungle wasn't training our brain to write poetry or compose music.


> Us running around in the jungle wasn't training our brain to write poetry or compose music.

This is a crux of your argument; you need to justify it. It sounds way off base to me. It kinda reads like an argument from incredulity.


No, I think what he said was true. Human brains have something about them that allows for the invention of poetry or music. It wasn't something learned through prior experience and observation, because there aren't any poems in the wild. You might argue there's something akin to music, but human music goes far beyond anything in nature.


We have an intrinsic (and strange) reward system for creating new things, and it's totally awesome. LLMs only started to become somewhat useful once researchers tried to tap into that innate reward system and create proxies for it. We definitely have not succeeded in creating a perfect mimicry of that system though, as any alignment researcher would no doubt tell you.


So you're arguing that "running around in the jungle" is equivalent to feeding the entirety of human knowledge into LLM training?

Are you suggesting that somehow there were books in the jungle, or perhaps boardgames? Perhaps there was a computer lab in the jungle?

Were apes learning to conjugate verbs while munching on bananas?

I don't think I'm suggesting anything crazy here... I think people who say LLM training is equivalent to "billions of years of evolution" need to justify that argument far more than I need to justify that running around in the jungle is not equivalent to mass-processing petabytes of highly rich, complex, dense, and VARIED information.

One year of running around in the same patch of jungle, eating the same fruit, killing the same insects, and having sex with the same old group of monkeys isn't going to be equal to training with the super varied, complete, entirety of human knowledge, is it?

If you somehow think it is though, I'd love to hear your reasoning.


There is no equivalency, only contributing factors. One cannot deny that our evolutionary history has contributed to our current capacity, probably in ways that are difficult to perceive unless you're an anthropologist.

Language is one mode of expression, and humans have many. This is another factor that makes humans so effective. To be honest, I would say that physical observation is far more powerful than all the bodies of text, because it is comprehensive and can respond to interaction. But that is merely my opinion.

No-one should be arguing that an LLM training corpus is the same as evolution. But information comes in many forms.


You're comparing the hyper-specific evolution of one individual (an AI system) to the more general evolution of the entire human species (billions of individuals). It's as if you're forgetting how evolution actually works - natural selection - and forgetting that when you have hundreds of billions of individuals over thousands of years, even small insights gained from "running around in the jungle" can compound in ways that are hard to conceptualize.

I'm saying that LLM training is not equivalent to billions of years of evolution because LLMs aren't trained using evolutionary algorithms; there will always be fundamental differences. However, it seems reasonable to think that the effect of that "training" might be more or less around the same level.


I'm so confused as to how you think you can cut an endless chain at the mouse.

Were mammals the first thing? No. Earth was a ball of ice for a billion years - all life at that point existed solely around thermal vents at the bottom of the oceans... that's inside of you, too.

Evolution doesn't forget: everything that all life has ever been "taught" (violently had programmed into it over incredible timelines), all that has ever been learned in the chain of DNA from the single cell to human beings, it's ALL still there.


This feels too linear. Machines are great at ingesting huge volumes of data, following relatively simple rules and producing optimized output, but are LLMs sufficiently better than humans at finding windy, multi-step connections across seemingly unrelated topics & fields? Have they shown any penchant for novel conclusions from observational science? What I think your ratio misses is the value in making the targeted extrapolation or hypothesis that holds up out of a giant body of knowledge.


Are you aware of anything novel, produced by an LLM?


For more on this perspective, see the paper "On the Measure of Intelligence" (F. Chollet, 2019). And more recently, the ARC challenge/benchmarks, which are early attempts at using this kind of definition in practice to improve current systems.


Is the millions of years of evolution part of the training data for humans?


Millions of years of evolution have clearly equipped our brain with some kind of structure (or "inductive bias") that makes it possible for us to actively build a deep understanding of our world... In the context of AI, I think this translates more to representations and architecture than it does to training data.


Because genes don't encode the millions of years of experience from ancestors, despite how interesting that is in, say, the Dune universe (with the help of the spice melange). My understanding is genes don't even specifically encode the exact structure of the brain. It's more of a recipe that gets generated than a blueprint, with young brains doing a lot of pruning as they start experiencing the world. It's a malleable architecture that self-adjusts as needed.


If you're in the EU


Tried using it from outside the EU (even from outside Europe) and it seems to work just fine. Where are you getting the "only in the EU" from?


I wrote “if you’re in the EU” because the project only has resolvers in the EU, so accessing it from the rest of the world will likely not be worth it over local alternatives.

As for the legality, the following text is from their website:

> Yes, our DNS4EU Public Service is completely free for citizens. Although primarily intended for users within the European Union due to our infrastructure's geographic distribution, we impose no restrictions on users from other locations.


The parent initially wrote "If you're in Europe" and you tried to correct them with "If you're in the EU", which is an irrelevant correction really; the resolvers are in Europe (and the EU), so it obviously makes sense for anyone in Europe or nearby to use it. And for others, as a secondary/verification option. And yes, the primary usage is obviously for Europeans/people within the EU; makes sense.

