
Prepare for war, Western man!


So happy humans will be able to stop using their brains soon thanks to AI!


Maybe this is propaganda and you are being told what to think.

Of course AI isn’t going to be a good thing for the majority of humans at all. But it’s very important to manage sentiment within tech audiences.


Luddites would have argued that textile machinery isn’t going to be a good thing for the majority of humans at all.

And they were wrong.


Why not? Similarly to how "Google" and "Wikipedia" will make people dumber? Or how video killed the radio star? I think the cultural or intellectual decline does not have much to do with Google (in terms of search) or Wikipedia. Heck, some people do not even know how to search properly.


A little sugar is fine. A lot of it kills you.


Yes, the dose makes the poison; water can kill you too in excess, and so can oxygen.

Using LLMs "responsibly" is OK; I do it all the time. I am productive and I still learn a lot.

LLMs are not a substitute for thinking, as many people would like to believe or pretend.


Do we need it?


Judgment Day was supposed to happen on August 29, 1997. We're really behind schedule.


Modern tech "progress" feels like you're drowning in a lake and someone throws you a bag of M&M's. Sure, I like M&M's, but they really aren't anywhere near the top of my list right now.


Do "we" need anything else than food and sleep? Humanoid automata trope was invented in ancient times, and it's everywhere in sci-fi since about a century ago. Is it really surprising that someone actually tries to make it happen just for the sake of it?


Human-induced climate change is drastically worsening the living conditions for billions of people who will fight to survive in the coming decades and centuries.


And we still have a chance to handle the situation in peaceful and equitable ways - by transitioning from a competitive to a cooperative (non-commercial, trade-free, open-access, open-source) commons economy! Once human labour is no longer competitive (2027-28?), we have to change our economic framework anyway. That's why I'm rooting for the aforementioned option with all the zest left in my aging body.


And energy-intensive humanoid robots are gonna help them fight that fight?


He might be arguing for "no".


Yes, and this will allow us to reduce the number of humans.


(Wait, another Pilkington reference in this thread?)

I look at it as art meets tech. I'm pretty sure Da Vinci would have loved this.


it may be the last straw a drowning market is grasping at


Why are they like this?

Why don't they stop the nuclear posturing and just get on with reindustrializing?


The Canadians will probably want to protect themselves in case Trump really means it with the 51st-state stuff. They will have an interest in getting nuclear weapons, and they can perhaps get them from their allies.


Which is not the same as avoiding collateral damage.


Well, my point is more that the damage cannot be considered "collateral," but yes, it will yield more of it, not less.


Precision targeting should or could yield less, but what's happened is more like what you do to a Windows 98 machine every year or two.


We are all sleepwalking into a Terminator scenario, or at least a dangerous surveillance state. Not just Israel. This kind of tech is eventually turned inwards.


This is known as the "imperial boomerang"[1] for anyone who would like to do further research and look at historical examples.

[1]: https://en.wikipedia.org/wiki/Imperial_boomerang


Once you figure out what your issues are, however, you might find yourself going to therapists with the question “what are we to do, or what am I to do, about it?” and finding they have no answer. Just, “yeah, it’s a process…”


There’s a form of therapy that can help you in that case as well: behaviour therapy. But if you want to understand the root causes (in the present) of your problem(s), you need psychodynamic psychotherapy. Both can be helpful, depending on your needs.


Exactly this. There are multiple types of psychotherapy, each with a particular focus oriented toward particular themes and goals.

It’s easy for the layman to misunderstand how these different types work in practice and for what circumstances they’re well suited.

I once underwent psychodynamic psychotherapy for a serious interpersonal relationship problem that was taking a devastating toll on my life. When I had reached a point where I was ready to discuss what (if any) therapy came next, I thought CBT (cognitive behavioral therapy) would be right for me after reviewing the particulars.

It’s important to note that along with different therapies, psychologists are also quite different. There are different schools and concepts that a psychologist may subscribe to or favor, and of course each has their own personal approach and style.

It turned out that, while on paper CBT looked like a good next step, when I got there with a therapist who specialized in it, it wasn’t what I needed or wanted; and while I liked the therapist, I didn’t much care for their style of rapport-building.


I feel this way about AI. Oh wait, AI is actually an existential risk to humanity, also.


Generative AI is hardly an existential risk. There are huge fears around AGI, but that’s not what people are building.


Look at (the comments on) the Genie announcement on the front page today or yesterday, and earlier generative "world models". People are itching to use those kinds of models for the internal world representation of autonomous robots. More generally, the fact that a model is "generative" does not mean it cannot become an effective component in, or a pathway to, AGI.


> There are huge fears around AGI, but that’s not what people are building.

Everyone is trying to build this.


“Trying” is an overly generous interpretation of what’s going on.

Training an LLM is not actually working on AGI, just as building skyscrapers isn’t getting to the moon. It’s an inherent limitation of the approach.


Training LLMs is not the only thing people are trying. They dominate public attention right now, but there are people everywhere trying all kinds of approaches. Here's one from IBM: https://research.ibm.com/topics/neuro-symbolic-ai

First sentence: "We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence"


I agree some people are doing novel work, but that’s a long way from “Everyone”.


Everyone is trying to get to AGI, and yes, mostly through LLMs for now.

You said you don't believe LLMs are capable of ever getting there, so I offered a link showing people are trying other things as well. My point was never "Everyone is doing novel, non-LLM work towards AGI".

But everyone is in fact trying to get to AGI:

Google: https://www.fastcompany.com/91233846/noam-shazeer-back-at-go... https://deepmind.google/research/publications/66938/

Microsoft: https://www.microsoft.com/en-us/bing/do-more-with-ai/artific...

Meta: https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-...

Salesforce: https://www.forbes.com/sites/johnkoetsier/2023/09/12/salesfo...

Not to mention obvious suspects (OpenAI, Anthropic etc). Just because you think it won't work doesn't mean they're not trying. Everyone is trying to get to AGI.


OpenAI has specifically said LLMs aren’t a path to AGI, though they think LLMs have utility in understanding how society can and should interact with a potential AGI, especially from a policy perspective.

Your other examples are giant companies with many focuses that can trivially pay lip service to fundamental research without spending any particular effort. Take your link:

“Benioff outlined four waves of enterprise AI, the first two of which are currently real, available, and shipping:

  Predictive
  Generative
  Autonomous and agents
  Artificial general intelligence”
That’s a long-term mission statement, not actual effort toward AGI. So if you’re backing down from “actual work” to “trying to get to AGI” to include such aspirational statements, then sure, I’m also working on AGI and immortality.


Please, before we discuss this further (and I would like to), provide some idea of what would qualify as an "actual effort toward AGI" for you.


I exclude things like increasing processing power/infrastructure, as slow AGI is still AGI even if it’s not useful. Yes, AGI needs energy; no, building energy infrastructure doesn’t qualify as actually working on AGI. You’re also going to need money, but making money isn’t inherently progress.

IMO, AGI requires at minimum: a system which operates continuously, improves in operation, and can set goals for itself. If you know the work you’re doing isn’t going to result in that, then working towards AGI implies abandoning that approach and trying something new.

Basically, researching new algorithms or types of computation could qualify, but iterative improvement on well-studied methods doesn’t. So some research into biological neurons/brains qualifies, but optimizing A* doesn’t, even if it’s useful for what you’re working on. There’s a huge number of spin-offs from AI research that are really useful and worth developing, but also inherently limited.

I’m somewhat torn as to the minimum threshold for progress. Tossing a billion dollars’ worth of computational power at genetic algorithms wouldn’t produce AGI, but there are theoretical levels of processing power where such an approach could actually work, even if we’re nowhere close to building such systems. It’s the kind of moonshot that 99.99…% wouldn’t work, but maybe…

So, it may seem like moving the goalposts, but I think the initial work on LLMs could qualify, while subsequent refinement doesn’t.

Edited with some minor clarification.


> It’s an inherent limitation on the approach.

What's your evidence for this?


AGI needs to be able to generalize to real-world tasks like self-driving without needing task-specific help from its creators.

But the current LLM process separates learning from interacting, and the learning process is based on huge volumes of text. It’s possible to bolt on specific capabilities like, say, a chess engine, but then you’re building something different, not an LLM.


I assure you people are very much trying to build the titular Torment Zone in the hit sci-fi novel "Don't Build the Torment Zone".


Either way, it’s headed off the rails. Sloppification of everything, followed by eventual machine takeover.


There’s still room for it to become an inflation hedge; we still haven’t really seen much in terms of USD inflation and people scrambling for the exit.

OTOH, government crackdowns might be able to tank the price.


> There’s still room for it to become an inflation hedge; we still haven’t really seen much in terms of USD inflation and people scrambling for the exit.

Bitcoin is no more an inflation hedge than gold is, and gold isn't one:

* https://www.nber.org/papers/w18706

* https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3667789

and both are something worse, deflationary:

* https://www.nber.org/books-and-chapters/financial-markets-an...

* https://www.goodreads.com/book/show/775143.Golden_Fetters

Gold did not help in stabilizing currencies or economies:

* https://archive.is/https://www.theatlantic.com/business/arch...

and neither would Bitcoin.


The bet is that the incoming regime won't crack down and may even do the opposite.

