And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
For this reason I don’t think LLMs are going to be good filmmakers, for instance. Sure, an LLM will be able to spit out the script for the next action movie; those already seem to be automatically generated anyway. But making a film that resonates with humans takes a lot that can’t be formulated in language.
> And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
I don't know what you mean by that.
If you mean qualia, then sure. Unsolved and undescribed. But other than that, I think everything has a linguistic form; perhaps an inefficient one, but it is possible.
Separately, transformers don't have to use what humans recognise as a language, which means they can work with things such as DNA sequences and pictures. They're definitely not the final answer to how to do AI, because they need so many more examples than we do, but I don't have confidence that they can't do these things, only that they won't.
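To make the non-language point concrete, here's a minimal sketch of feeding DNA sequences to a standard transformer encoder in PyTorch. The toy vocabulary and the model sizes are my own illustration, not anything from a real genomics model; the point is only that any discrete sequence can be embedded and attended over:

```python
import torch
import torch.nn as nn

# Toy vocabulary: the four DNA bases plus a padding token. The transformer
# doesn't care that these aren't words in a human language.
VOCAB = {"<pad>": 0, "A": 1, "C": 2, "G": 3, "T": 4}

def encode(seq: str) -> torch.Tensor:
    """Map a DNA string like 'ACGT' to a tensor of token ids."""
    return torch.tensor([VOCAB[b] for b in seq], dtype=torch.long)

embed = nn.Embedding(num_embeddings=len(VOCAB), embedding_dim=32)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)

tokens = encode("ACGTTGCA").unsqueeze(0)   # shape: (batch=1, seq_len=8)
hidden = encoder(embed(tokens))            # shape: (1, 8, 32)
print(hidden.shape)
```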
Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
This is both extremely powerful and limiting.
An LLM is never going to give you some of the most famous films, like "Star Wars", which bounced around the studios before 20th Century Fox finally took a chance on it because they thought Lucas had talent. Is that what we want? A society that just uses machines to endlessly produce variations of things that already exist? It's hard enough for novel creative projects to succeed.
> Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
Yes, state-of-the-art models like Midjourney and SD3 are _really_ good. You are bounded only by your imagination.
The idea that generative AI is only derivative was never an empirical claim; it's always been a cope.
Yes... I'm not sure what the archetype of intelligence is, but for practical purposes I'd say: Humans have some of it. And it's not clear to me that what humans have is very far from what AI is starting to have. The hallucinations are weird and wonderful, but so are some of the answers I saw from below-average students when I was in university. Can't tell whether the two weirdnesses are different or similar. Exciting times lie ahead.
> Can't tell whether the two weirdnesses are different or similar
That's because you focus on how they are similar and not on how they are different. To me it is extremely obvious they are very different: students make mistakes, learn, and then stop making them soon after. When I taught students at college I saw that over and over. LLMs, however, still make the same weird mistakes they made 4 years ago; they just hide it a bit better today. The core difference in how they act compared to humans is, to me, still the same as in GPT-2, because they are still completely unable to learn from or understand their mistakes the way almost every human can.
Without being able to understand your own mistakes you can never reach human intelligence, and I think that is a core limitation of the current LLM architecture.
Edit: Note that many/most jobs don't require full human general intelligence. We used to have human calculators etc.; the same will happen in the future, but we will continue to use humans as long as we don't have generally intelligent computers that can understand their mistakes.
I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Maybe others can complete it, maybe it'll be easy to complete it in twenty years, with a little more hindsight. Maybe.
> Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Ok, but that's more on you than on current AI; the models which get distributed (both LLMs and Stable Diffusion based image generators) already exist in re-trained and specialised derivatives, created by people who know how and have a sufficiently powerful graphics card.
Which is a kind of workaround for the inability to learn after the end of training… It's not clear to me how much that workaround actually mitigates the problem. Is it clear to you? If so, please feel free to post a wall of text ;)
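For context, the kind of workaround being described is usually parameter-efficient fine-tuning. Here's a minimal sketch of the idea in PyTorch, a LoRA-style low-rank adapter; the layer sizes and names are illustrative assumptions, not any specific distributed model. The pretrained weights stay frozen and only a small number of new parameters are trained:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update.

    Output = W x + (B A) x * scale, where only A and B are trained.
    """
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# A stand-in "pretrained" layer; in practice this would be a layer
# inside the distributed checkpoint.
pretrained = nn.Linear(64, 64)
adapted = LoRALinear(pretrained, rank=4)

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"training {trainable} of {total} parameters")
```

The design point: because only the small A and B matrices get gradients, specialising a model this way fits on a single consumer graphics card, which is why these derivatives are so common. Whether that counts as "learning" in the sense being debated above is exactly the open question.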