“The company’s mission is to understand the true nature of the universe”
- There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
Considering what’s at Tesla, I don’t think it makes sense to assume they’ll be constraining themselves to text/LLM.
But on the philosophical side, if an understanding can’t be communicated, does it exist? We humans only have various movements and vibrations of flesh, sensing those, text, and images to communicate.
> But on the philosophical side, if an understanding can’t be communicated, does it exist?
There are deep mathematical results about the limits of our understanding that follow simply from the fact that we communicate through finite series of symbols from finite dictionaries. Basically, what we can express and prove is infinite but discrete, yet there are much larger infinities beyond that which will stay beyond our grasp forever: things like theorems that are true but cannot be proven to be true, or properties of individual real numbers that exist but cannot be expressed.
And there is no reason to believe the universe doesn't have the same kind of thing: it remains to be shown whether or not you can describe or understand the universe with a finite set of symbols.
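A rough sketch of the counting argument, for anyone who wants it spelled out (standard results, stated informally):

    % Finite strings over a finite alphabet are countable; the reals are not,
    % so almost every real number escapes any finite description or proof.
    \[
      |\Sigma^*| = \Bigl|\bigcup_{n \ge 0} \Sigma^n\Bigr| = \aleph_0,
      \qquad
      |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 .
    \]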
Yep. Expanding on that: before AI, everyone I knew would speculate about the fictional Library of Babel. The idea was a thought experiment where you assume there exists a library containing every possible combination of words and letters written down in one of its books. There would be millions of volumes filled with garbled and meaningless text; only a few would be legible, and fewer yet understandable.
It raises the question of whether sifting through noise is a meaningful way to look for scientific progress. And of course, what if it's wrong? Both the Library of Babel and AI are fully capable of leading us down untested and nonsensical rabbit holes. The difference between Alice in Wonderland and Jabberwocky is unknown to us; we wouldn't know which books are worth reading and which are not.
On the one hand, you have people excited by this idea. Some people really do think that the world's answers are up on a bookshelf in the Library of Babel, somewhere. The philosophical angle runs deeper yet, though; what kind of cargo-cult society would we build relying on a useful AI? Are we guaranteed meaningful progress because an AI model can keep pressing the "randomize" button? Do we eventually hit a point where fiction and reality are indistinguishable? It's all hard to say.
" Considering what’s at Tesla, I don’t think it makes sense to assume they’ll be constraining themselves to text/LLM. " Tesla is losing money and cant fulfill its promises about AI. What do you mean?
Could you name the competing self driving systems (as in currently competing, with similar performance) that are available to the public, for private transport, that you have in mind?
Waymo is operating and doing passenger miles commercially with no one behind the wheel. Tesla hasn't yet done that even for the controlled Vegas loop they said they would do it in. Waymo still has remote operators who can handle unusual situations but they handle multiple cars and only the car itself responds to sudden events. They are operating at level 4.
Tesla still has one local operator per car who has to be able to have twitch reactions at all times.
Competitors like Honda and Mercedes also let you take your hands off the wheel and eyes off the road in certain areas (level 3), which Tesla hasn't yet achieved.
Many are not available in the US. Audi have a leading system, and Mercedes have the highest-rated system available in the US and are officially at level 3. The problem is that Musk sucks up so much air marketing Tesla as the leader that people have come to believe it. The leading systems aren’t super impressive yet, but Musk’s lies about his system, which doesn’t work, aren’t proof of anything but hubris. He’s just pumping stock to the ignorant.
And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
For this reason I don’t think LLMs are going to be good filmmakers, for instance. Sure, an LLM will be able to spit out the script for the next action movie; those already seem to be automatically generated anyway. But making a film that resonates with humans takes a lot that can’t be formulated in language.
> And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
I don't know what you mean by that.
If you mean qualia, then sure. Unsolved and undescribed. But other than that, I think everything has a linguistic form; perhaps inefficient, but it is possible.
Separately, transformers don't have to use what humans recognise as a language, which means they can use things such as DNA sequences and pictures. They're definitely not the final answer to how to do AI, because they need so many more examples than us, but I don't have confidence that they can't do these things, only that they won't.
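A toy illustration of that point (the alphabet and sequence here are made up purely for illustration): any discrete data can be mapped to token ids before it ever reaches the model.

    # Toy sketch: a sequence model doesn't care that the "symbols" are nucleotides.
    # Vocabulary and sequence are invented for illustration only.
    vocab = {"A": 0, "C": 1, "G": 2, "T": 3}       # four-symbol "alphabet"
    dna = "ACGTTGCA"
    token_ids = [vocab[base] for base in dna]      # -> [0, 1, 2, 3, 3, 2, 1, 0]
    print(token_ids)                               # ready to feed to a sequence model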
Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
This is both extremely powerful and limiting.
An LLM is never going to give you some of the most famous films, like "Star Wars", which bounced around before 20th Century Fox finally took a chance on it because they thought Lucas had talent. Is that what we want? A society that just uses machines to produce variations of things that already exist, all the time? It's hard enough for novel creative projects to succeed.
> Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
Yes, state-of-the-art models like midjourney and sd3 are _really_ good. You are bounded only by your imagination.
The idea that generative AI is only derivative was never an empirical claim; it's always been a cope.
Yes... I'm not sure what the archetype of intelligence is, but for practical purposes I'd say: Humans have some of it. And it's not clear to me that what humans have is very far from what AI is starting to have. The hallucinations are weird and wonderful, but so are some of the answers I saw from below-average students when I was in university. Can't tell whether the two weirdnesses are different or similar. Exciting times lie ahead.
> Can't tell whether the two weirdnesses are different or similar
Because you focus on how they are similar and not how they are different. To me it is extremely obvious they are very different. Students make mistakes, learn, and then stop making them soon after; when I taught students at college I saw that over and over. LLMs, however, still make the same weird mistakes they did 4 years ago; they just hide it a bit better today. The core difference in how they act compared to humans is still the same as in GPT-2 to me, because they are still completely unable to learn from or understand their mistakes like almost every human can.
Without being able to understand your own mistakes you can never reach human intelligence, and I think that is a core limitation of current LLM architecture.
Edit: Note that many/most jobs don't require full human general intelligence. We used to have human calculators etc.; the same will happen in the future, but we will continue to use humans as long as we don't have generally intelligent computers that can understand their mistakes.
I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Maybe others can complete it, maybe it'll be easy to complete it in twenty years, with a little more hindsight. Maybe.
> Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Ok, but that's more on you than on current AI; the models which get distributed (both LLMs and Stable Diffusion based image generators) already have re-trained and specialised derivatives, created by people who know how and who have a sufficiently powerful graphics card.
Which is a kind of workaround for the inability to learn after the end of training… It's not clear to me how much this workaround mitigates the inability to learn after training. Is it clear to you? If so, please feel free to post a wall of text ;)
>“The company’s mission is to understand the true nature of the universe” - There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
I disagree. The day is coming when some *BIG* problem is solved by AI just because someone jokingly asks about it.
I regularly try to ask them to give me fluid dynamics simulation code to see what level they are at. Right now, they can't do that kind of thing all by themselves, and I don't know enough to debug the code they give me.
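To give a sense of the shape of thing I mean (a deliberately tiny stand-in; the grid size, viscosity, and time step below are arbitrary illustrative choices, nowhere near real CFD):

    import numpy as np

    # Toy 1D viscous Burgers' equation, explicit finite differences.
    nx, nt = 101, 500
    dx = 2.0 / (nx - 1)
    nu = 0.07                      # viscosity
    dt = 0.2 * dx**2 / nu          # small step keeps the explicit scheme stable

    x = np.linspace(0.0, 2.0, nx)
    u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)   # initial "hat" profile

    for _ in range(nt):
        un = u.copy()
        # upwind advection + central diffusion on interior points
        u[1:-1] = (un[1:-1]
                   - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
                   + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
        u[0], u[-1] = u[-2], u[1]                      # crude periodic boundaries

    print(u.max(), u.min())

Something at this level runs; getting a correct, stable scheme for a real problem out of the models unprompted is where they still fall over.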
But even without any questions about free will or consciousness or whatever, a sufficiently capable (not yet existing) transformative search engine (as it has been derided) and a logical inference engine (which it isn't, but which it can use) could have produced the Alcubierre metric with nothing newer than the Einstein field equations and someone asking the right question.
I do not expect transformer models to be good enough to do that given their training requirements, but I wouldn't rule it out either.
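For reference, the two objects in question (standard forms; c = 1 in the second):

    % Einstein field equations and the Alcubierre line element.
    \[
      G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
      \qquad
      ds^2 = -dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2 .
    \]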