Why not? Similar to how "Google" and "Wikipedia" were supposed to make people dumber? Or how video killed the radio star? I think the cultural or intellectual decline doesn't have much to do with Google (in terms of search) or Wikipedia. Heck, some people don't even know how to search properly.
Modern tech "progress" feels like you're drowning in a lake and someone throws you a bag of m&m's. Sure I like m&m's but it really isn't anywhere near the top of my list right now.
Do "we" need anything else than food and sleep? Humanoid automata trope was invented in ancient times, and it's everywhere in sci-fi since about a century ago. Is it really surprising that someone actually tries to make it happen just for the sake of it?
Human-induced climate change is drastically worsening the living conditions for billions of people who will fight to survive in the coming decades and centuries.
And we still have a chance to handle the situation in peaceful and equitable ways - by transitioning from a competitive to a cooperative (non-commercial, trade-free, open access, open source) commons economy!
Once human labour is no longer competitive (2027-28?), we have to change our economic framework anyway. That's why I'm rooting for the aforementioned option with all the zest left in my aging body.
The Canadians will probably want to protect themselves in case Trump really means it with the 51st-state stuff. They will have an interest in getting nuclear weapons, and they can perhaps get them from their allies.
We are all sleepwalking into a terminator scenario or dangerous surveillance state at least. Not just Israel. This kind of tech is eventually turned inwards.
Once you figure out what your issues are, you might, however, find yourself going to therapists with the question "what are we, or what am I, to do about it?" and finding they have no answer. Just, "yeah, it's a process…"
There's a therapy form that can help you in that case as well: behaviour therapy. But if you want to understand the root causes (in the present) of your problem(s), you need psychodynamic psychotherapy. Both can be helpful, depending on your needs.
Exactly this. There are multiple types of psychotherapy, each with a particular focus oriented toward particular themes and goals.
It’s easy for the layman to misunderstand how these different types work in practice and for what circumstances they’re well suited.
I once underwent psychodynamic psychotherapy for a serious interpersonal relationship problem that was taking a devastating toll on my life. When I had reached a point where I was ready to discuss what (if any) therapy came next, I thought CBT (cognitive behavioral therapy) would be right for me after reviewing the particulars.
It’s important to note that along with different therapies, psychologists are also quite different. There are different schools and concepts that a psychologist may subscribe to or favor, and of course each has their own personal approach and style.
On paper I had thought CBT would be a good next step, but when I got there with a therapist who specialized in it, it wasn't what I needed or wanted, and while I liked the therapist, I didn't much care for their style of rapport building.
Look at (the comments on) the Genie announcement on the front page today or yesterday, and earlier generative "world models". People are itching to use those kinds of models as the internal world representation of autonomous robots. More generally, the fact that a model is "generative" does not mean it cannot become an effective component in, or pathway to, AGI.
“Trying” is an overly generous interpretation of what’s going on.
Training an LLM is not actually working on AGI, just as people building skyscrapers aren't getting to the moon. It's an inherent limitation of the approach.
Training LLMs is not the only thing people are trying. They dominate the public attention right now but there are people everywhere trying all kinds of approaches. Here's one from IBM: https://research.ibm.com/topics/neuro-symbolic-ai
First sentence: "We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence"
Everyone is trying to get to AGI, and yes mostly through LLMs for now.
You said you don't believe LLMs are capable of ever getting there, so I offered a link showing people are trying other things as well. My point was never "Everyone is doing novel, non-LLM work towards AGI".
Not to mention obvious suspects (OpenAI, Anthropic etc). Just because you think it won't work doesn't mean they're not trying. Everyone is trying to get to AGI.
OpenAI has specifically said LLMs aren't a path to AGI, though they think LLMs have utility in understanding how society can and should interact with a potential AGI, especially from a policy perspective.
Your other examples are giant companies with many areas of focus who can trivially pay lip service to fundamental research without spending any particular effort. Take your link:
“Benioff outlined four waves of enterprise AI, the first two of which are currently real, available, and shipping:
Predictive
Generative
Autonomous and agents
Artificial general intelligence”
That's a long-term mission statement, not actual effort toward AGI. So if you're backing down from "actual work" to "trying to get to AGI" to include such aspirational statements, then sure, I'm also working on AGI and immortality.
I exclude things like increasing processing power/infrastructure, as slow AGI is still AGI even if it's not useful. Yes, AGI needs energy; no, building energy infrastructure doesn't qualify as actually working on AGI. You're also going to need money, but making money isn't inherently progress.
IMO, AGI fundamentally requires at minimum a system which operates continuously, improves in operation, and can set goals for itself. If you know the work you're doing isn't going to result in that, then working towards AGI implies abandoning that approach and trying something new.
Basically researching new algorithms or types of computation could qualify, but iterative improvement on well studied methods doesn’t. So some research into biological neurons/brains qualifies but optimizing A* doesn’t even if it’s useful for what you’re working on. There’s a huge number of spin-offs from AI research that are really useful and worth developing, but also inherently limited.
I'm somewhat torn as to the minimum threshold for progress. Tossing a billion dollars' worth of computational power at genetic algorithms wouldn't produce AGI, but there are theoretical levels of processing power where such an approach could actually work, even if we're nowhere close to building such systems. It's the kind of moonshot that 99.99…% wouldn't work, but maybe…
So, it may seem like moving the goalposts, but I think the initial work on LLMs could qualify, while subsequent refinement doesn't.
AGI needs to be able to generalize to real world tasks like self driving without needing task specific help from its creators.
But the current LLM process separates learning from interacting, and the learning process is based on huge volumes of text. It's possible to bolt on specific capabilities, like say a chess engine, but then you're building something different, not an LLM.
> There’s still room for it to become an inflation hedge, we still haven't really seen much in terms of USD inflation and people scrambling for the exit.
Bitcoin is no more an inflation hedge than gold is, and gold isn't one either: