Regardless of correctness, as a DSP dork I really identified with the question: "What kind of a monster would make a non-power of two ring anyway?" I remember thinking similarly when requesting a power-of-two buffer from a 3rd party audio hardware device and having it corrected to a nearby non-power-of-two size. Latency-adding ring buffer to the rescue.
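(For anyone outside DSP land: the reason a non-power-of-two ring feels monstrous is that with a power-of-two capacity, index wraparound is a single bitwise AND instead of a modulo. A minimal illustrative sketch in Python -- my own toy example, not the device's code:)

    class RingBuffer:
        # capacity must be a power of two so wraparound is a cheap bitmask
        def __init__(self, capacity):
            assert capacity > 0 and capacity & (capacity - 1) == 0
            self.mask = capacity - 1
            self.buf = [0.0] * capacity
            self.w = 0  # total samples written
            self.r = 0  # total samples read

        def push(self, sample):
            self.buf[self.w & self.mask] = sample  # '& mask' replaces '% capacity'
            self.w += 1

        def pop(self):
            sample = self.buf[self.r & self.mask]
            self.r += 1
            return sample

A non-power-of-two size forces a modulo (or a compare-and-reset branch) on every index update instead.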
Humans can fail at some of these qualifications, often without guile:
- being consistent and knowing their limitations
- demonstrating effective understanding and mental modeling (something people do not universally manage)
I don't believe the "consciousness" qualification is at all appropriate, as I would argue that it is a projection of the human machine's experience onto an entirely different machine with a substantially different existential topology -- relationship to time and sensorium. I don't think artificial general intelligence is a binary label which is applied if a machine rigidly simulates human agency, memory, and sensing.
If this quantification of lag is anywhere near accurate (it may be larger and/or more complex to describe), open source models will soon be "simply good enough". Perhaps companies like Apple could be 2nd-round AI growth companies -- marketing optimized, private AI devices via already capable MacBooks or rumored appliances. While not obviating cloud AI, they could cheaply provide capable models without a subscription while driving their revenue through increased device sales. If the cost of cloud AI increases to support its expense, this use case will act as a check on subscription prices.
Google already has dedicated hardware for running private LLMs: just look at what they're doing on the Google Pixel. The main limiting factor right now is access to hardware that's powerful enough, and especially has enough memory, to run a good LLM -- which will come eventually. If trends hold, by 2031 we should have devices with 400 GB of RAM, but the current RAM crisis could throw off my calculations...
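For what it's worth, the kind of back-of-the-envelope projection that lands near that number (my assumptions, not necessarily the parent's): start from roughly 24 GB in a current high-end consumer device and assume capacity doubles about every 18 months.

    base_ram_gb = 24        # assumed starting point: high-end consumer device, 2025
    doubling_years = 1.5    # assumed doubling cadence
    years = 2031 - 2025

    projected = base_ram_gb * 2 ** (years / doubling_years)
    print(projected)        # 384.0 -- roughly the 400 GB ballpark

Stretch the doubling period even slightly (or factor in the RAM crisis) and the date slips well past 2031.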
Hmmm, maybe use a different OS? I would never dream of using Windows to get any type of work done myself, and there are many others like me. There certainly are choices. If you prefer to stay, MCP services can be configured to use local models, and people are doing so on Windows as well (and definitely with MacOS and Linux). From an OS instrumentation perspective, I think MacOS is probably the most mature -- Apple has acknowledged MCP and intends a hybrid approach defaulting to its own in-house, on-device models, but by embracing MCP it appears to be allowing local model access.
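To make the local-model part concrete: most tools that speak the OpenAI-style chat API can be pointed at a local server (Ollama, llama.cpp's server, LM Studio, etc.) rather than a cloud endpoint. A hypothetical sketch, assuming an Ollama instance on its default port with a model already pulled (the model name here is illustrative):

    from openai import OpenAI

    # assumed: local Ollama server exposing its OpenAI-compatible API on the default port
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused-locally")

    resp = client.chat.completions.create(
        model="llama3.1",  # hypothetical: whatever model you have pulled locally
        messages=[{"role": "user", "content": "Draft a shell command to list my largest files."}],
    )
    print(resp.choices[0].message.content)

Nothing leaves the machine, which is the whole appeal.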
Exactly. I was paying for Gemini Pro, and moved to a Claude subscription. I'm going to switch back to Gemini for the next few months. The cloud centralization, in its current product stage, allows you to be a model butterfly. And these affordable, capable frontier model subscriptions help me train and modify my local open weight models.
I think it is incredibly healthy to be critical and perhaps even a tinge cynical about the intentions of companies developing and productizing large language models (AI). However, the argument here completely ignores the evolving ecosystem of open weight models. Yes, the prominent companies developing frontier models are attempting to build markets and moats where possible, and the cloud capital investments are incredibly centralized. But even in 2025 the choice is there, with your own capital investment (RTX, MacBook etc.), for completely private and decentralized AI. You can choose your own cloud too -- Cloudflare just acquired Replicate. If enough continue to participate in the open weight ecosystem, this centralization need not be totalitarian.
Taken together, as Andrew Tsang (all too) beautifully depicts, the United States healthcare system is arguably the largest bureaucracy on planet Earth -- larger in employees and collective spending than any comparable bureaucracy in India or China.
This is such a naive, simplistic, distrusting and ultimately monastic perspective. An assumption here is that university students are uncritical and incapable of learning while utilizing AI as an instrument of mind. I think a much more prescient assessment would be that the presence of AI demands a transformation and evolution of university curricula and assessment -- the author details early attempts at this, but declares them failures and uncritical acquiescence. AI is literally built from staggeringly large subsets of human knowledge -- university cultures that refuse to critically participate and evolve with this development, and react by attempting to deny student access, do not deserve the title "university" -- perhaps "college", or the more fitting "monastery", would suffice. The obsession with "cheating", the fallacy that every individual needs to be assessed hermetically, has denied the reality (for centuries) that we are a collective and, now more than ever, embody a rich mass mind. Successful students will grow and flourish with these developments, and institutions of higher learning ought to as well.
Even conceding that you, the person reading this comment, will only use AI the right way, with diligence and curiosity: it takes a significant amount of denial not to understand that the majority of people see AI as a shortcut to do their job with the least possible amount of effort, or as a way to cheat. These are the people you will be interacting with for the coming decades of your life.
If a student is given a task that a machine can do, and there is some intrinsic value for the student to perform this task manually and hermetically, this value ought to be explained to the student, and they can decide for themselves how to confront the challenge. I think LLMs pose an excellent challenge to educators -- if they are lazily asking for regurgitation from students they are likely to receive machine-aided regurgitation in 2025.
> This is such a naive, simplistic, distrusting and ultimately monastic perspective
This is such a disingenuous take on the article; there's nothing naive or simplistic about it. It's literally full of critical thought, linking to further critical thought from other academic observers on what's happening at the educational level. The context of your reply implies you read at most the first 10% of the article.
The article flagged numerous issues with LLM application in the educational setting, including:
1) Critical thinking skills, brain connectivity and memory recall are falling as usage rises; students are turning into operators and are not getting the cognitive development they would through self-learning.
2) Employment pressures have turned universities into credentialing institutions rather than learning institutions, and LLMs have accelerated these pressures significantly.
3) Cognitive development is being sacrificed, with long-term implications for students.
4) School admins are pushing LLM programs without consultation, as experiments instead of in partnership with faculty. Private-industry-style disruption.
The article does not oppose LLMs as a learning assistant; it does oppose them as the central tool for cognitive development, which is the opposite of what they accomplish. The author argues universities should be primarily for cognitive development.
> Successful students will grow and flourish with these developments, and institutions of higher learning ought to as well.
Might as well work at OpenAI marketing with bold statements like that.
The core premise is decidedly naive and simplistic -- AI is used to cheat and students can't be trusted with it. This thesis is carried through the entirety of the article.
That's not the core premise of this article; go read it to the end and don't use your LLM to summarize it.
The core premise is that students' cognitive development is being impaired, with long-term implications for society, without any care or thought from university admins and corporate operators.
It's disturbing when people comment on things they don't bother reading, literally aligning with the point the article is arguing: that critical thinking is decaying.
That's an utterly hilarious straw man, a spin worthy of politics, and, as someone else might label it, a tautological "cheat". Students "cheated" hundreds of years ago. Students "cheated" 25 years ago. They "cheat" now. You can make an argument that AI mechanizes "cheating" to such an extent that the impact is now catastrophic. I argue that the concern for "cheating", regardless of its scale, is far overblown and a fallacy to begin with. Graduation, or measurement of student ability, is a game, a simulation that does not implicitly test or foster cognitive development. Should universities become hermetic fortresses to buttress against these untold losses posed by AI? I think this is a deeply misguided approach. While I was a professor myself for 8 years, and do somewhat value the ideal of The Liberal Arts Education, I think students are ultimately responsible for their own cognitive development. University students are primarily adults, not children and not prisoners. Credential provision, and graduation (in the literal sense) of student populations, are institutional practices to discard and evolve away from.
Seriously, you’re arguing with people who have severe mental illness. One loon downthread genuinely thinks this will transform these students into “geniuses”
You can straw man all you like; I haven't used an LLM in a few days -- definitely not to summarize this article -- and what you claim is the central idea is directly related to my claim. It's very easy to combine them directly: students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically. I disagree.
When AI tools make it easy to cruise through coursework without learning anything, then many students will just choose to do that. Intellectual development requires strenuous work, and if universities no longer make students strain then most won’t. I don’t understand why you think otherwise.
I’m not sure how you lived through the last decade and came to the conclusion that people aged 17-25 make rational decisions with novel technologies that have short-term gain and long-term (essentially hidden) negative side effects.
It seems that 10% of college students in the U.S. are younger than 18, or do not have adult status. The other 90% are adults and are trusted with voting, armed services participation and most other rights that adults enjoy (with several obvious and notable exceptions -- car rental and legal controlled substance purchase, etc.). Are you saying that these adults shouldn't be trusted to use AI? In the United States, and much of the world, we have drawn the line at 18. Are you advocating that AI use shouldn't be allowed until a later cutoff in adulthood? It is not at all definitively established what these "essentially hidden" negative side effects you allude to are, or whether they actually exist.
Your argument seems overly reliant on the definition of an adult. What is an adult? Is it a measure of responsibility, of mental maturity? Because I would wager the level of responsibility and mental maturity of the average 18-year-old has been trending down.
I’m not advocating for completely restricting access to AI for certain age groups. I’m pointing out that historically we have restricted prolonged interactions with certain stimuli that have been shown to be damaging to cognitive development, and that we should make the same considerations here.
I think it’s hard to deny that younger generations have been negatively affected by the proliferation of social media engineered around fundamentally predatory algorithms. As have the older generations.
No one is misrepresenting your argument, it's well understood and being argued that it is false.
> students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically.
This debate is going nowhere so I'll end here. Your core premise is on trust and student autonomy, which is nonsense and not what the article tackles.
It argues LLMs literally don't facilitate cognitive development and can actually impair it, regardless of how they are used, so it's malpractice for university admins to adopt them as a learning tool in a setting where the primary goal should be cognitive development.
Students are free to do as they please; it's their brain, money and life. Though I've never heard anyone argue they were at their wisest in their teens and twenties as a student, so the argument that students should be left unguided is also nonsense.