That's what people said about AI art, yet here we are.


Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.

This is both extremely powerful and limiting.

An LLM is never going to give you some of the most famous films, like "Star Wars", which bounced around before 20th Century Fox finally took a chance on it because they thought Lucas had talent. Is that what we want? A society that just uses machines to produce endless variations of things that already exist? It's hard enough for novel creative projects to succeed.


> Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.

Yes, state-of-the-art models like Midjourney and SD3 are _really_ good. You are bounded only by your imagination.

The idea that generative AI is only derivative was never an empirical claim, it's always been a cope.



And on the same theme, but a totally different example in a different medium: https://youtu.be/5pidokakU4I


Is the current studio system?


Yes... I'm not sure what the archetype of intelligence is, but for practical purposes I'd say: Humans have some of it. And it's not clear to me that what humans have is very far from what AI is starting to have. The hallucinations are weird and wonderful, but so are some of the answers I saw from below-average students when I was in university. Can't tell whether the two weirdnesses are different or similar. Exciting times lie ahead.


> Can't tell whether the two weirdnesses are different or similar

That's because you focus on how they are similar and not on how they are different; to me it is extremely obvious they are very different. Students make mistakes, learn, and then stop making them soon after; when I taught students at college I saw that over and over. LLMs, however, still make the same weird mistakes they did 4 years ago, they just hide them a bit better today. To me the core difference in how they act compared to humans is the same as it was in GPT-2: they are still completely unable to learn from or understand their mistakes the way almost every human can.

Without being able to understand your own mistakes you can never reach human intelligence, and I think that is a core limitation of current LLM architecture.

Edit: Note that many/most jobs don't require full human general intelligence. We used to have human calculators, etc.; the same will happen in the future, but we will continue to use humans as long as we don't have generally intelligent computers that can understand their mistakes.


> because they are still completely unable to learn or understand their mistakes like almost every human can

So far as I know, all current AI need far more examples than we do.

But that's not why LLMs are "unable" to learn: the part which does the learning is simply not included when the model is deployed for inference.
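
For what it's worth, here is a minimal sketch of what "not included" means (assuming a PyTorch-style model; the toy network and shapes are made up for illustration): deployment runs only the forward pass with gradients disabled, and the loss/backprop/optimizer machinery never ships with it.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)   # stand-in for a trained network
    model.eval()               # inference mode: disables dropout, etc.

    x = torch.randn(1, 16)
    with torch.no_grad():      # no gradient graph is built...
        y = model(x)           # ...so no weight update is even possible

    # During training, the missing pieces would be a loss,
    # loss.backward(), and an optimizer.step() -- none of which
    # ever run at inference time.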


I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"

Maybe others can complete it, maybe it'll be easy to complete it in twenty years, with a little more hindsight. Maybe.


> Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"

Ok, but that's more on you than on current AI; the models which get distributed (both LLMs and Stable Diffusion based image generators) already circulate in re-trained and specialised derivatives, created by people who know how and have a sufficiently powerful graphics card.


Which is a kind of workaround for the inability to learn after the end of training… It's not clear to me how much that workaround actually mitigates the inability. Is it clear to you? If so, please feel free to post a wall of text ;)


To me, that seems like describing ovens and stoves as work-arounds for supermarkets providing frozen food?

The weights are frozen on purpose. You can "thaw" them.
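
Concretely (a minimal PyTorch-style sketch; the toy model, learning rate, and batch are made up for illustration), "thawing" is just re-enabling gradients and resuming training, which is exactly what the fine-tuned derivatives mentioned upthread do:

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)         # stand-in for a distributed checkpoint

    # "Frozen": no gradients flow to the weights.
    for p in model.parameters():
        p.requires_grad = False

    # "Thawed": switch gradients back on and fine-tune as usual.
    for p in model.parameters():
        p.requires_grad = True

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x, target = torch.randn(8, 16), torch.randn(8, 4)   # toy batch

    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()                 # the weights have now "learned" again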


Training an AI model is comparable to natural selection of DNA, not comparable to human learning. We have no clue how to replicate human learning.


Ah, the ambiguity of "like".


Where is that?


AI doesn't make art though, it just paints whatever it's told to


So do human artists, if they want to get paid. And then you have the discussion about auteurs.



