
All art is derivative and there's no such thing as originality. Every human artist draws inspiration from their visual and emotional experiences, copyrighted or otherwise; how is this different? If I watch Star Wars and then make a space opera film that's aesthetically similar to Star Wars, that's not copyright laundering, it's inspiration! The same principle applies here.


Because the AI doesn't have "experience": it has training data from which it derives the work.

People have shown fairly convincing examples of this in the more general sense: e.g., well-known stock-image watermarks (e.g., iStockPhoto's) have shown up in model output without being prompted for. An artist with "experience" would not reproduce a watermark. Or see this article[1], where an AI was asked to mimic another artist's style and the output attempted to reproduce the artist's signature.

(IANAL.) If you make a film that directly incorporates elements of Star Wars (which I believe is the more accurate description of what these models do), then yes, I would expect you to be handed a C&D. "Glowing space swords" aren't copyrighted, but if you include something indistinguishable from a lightsaber and call it a lightsaber? I bet Disney would have something to say about that.

[1]: https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion...


I don't personally see much difference between how I trained myself to be a portrait artist and how diffusion models are trained. In order to learn to draw stylized portraits, I looped over:

1. Find a photo of a person as a reference.
2. Create the portrait.
3. Compare the portrait to the reference and to the stylized art I was drawing inspiration from.

The work I was doing was original in the colloquial sense, but I also see zero reason to think the AI's process is fundamentally inferior to mine.
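
For anyone curious what that loop looks like on the model side, here is a minimal, numpy-only sketch of the analogous denoising training step. The tiny linear "model" and every name in it are illustrative assumptions chosen for the analogy, not how any real diffusion system is implemented:

    import numpy as np

    rng = np.random.default_rng(0)

    # Step 1: "find a reference" -- a batch of flattened 8x8 grayscale "images".
    references = rng.random((256, 64))

    # Toy denoiser: a single linear layer that predicts the noise that was added.
    weights = np.zeros((64, 64))
    learning_rate = 0.01

    for step in range(1000):
        batch = references[rng.integers(0, len(references), size=32)]

        # Step 2: "create" -- corrupt the reference, then have the model try
        # to predict exactly which noise was added.
        noise = rng.normal(size=batch.shape)
        noisy = batch + noise
        predicted_noise = noisy @ weights

        # Step 3: "compare" -- score the attempt against the ground truth and
        # nudge the model so the next attempt lands a little closer.
        error = predicted_noise - noise
        loss = np.mean(error ** 2)
        gradient = noisy.T @ error / len(batch)
        weights -= learning_rate * gradient

    print(f"final loss: {loss:.4f}")

The parallel to the three steps above is loose but real: take a reference, make an attempt, score the attempt, and adjust before the next one.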


I am pretty sure that children learning to draw will in fact include some of those “copyright markers” when not explicitly admonished by adults. What humans do is not some magical “experience”; they just have worse memories and better self-censorship.


This Kotaku article is really spreading misinformation about this kind of model. The image shown in the article was not trying to imitate anyone, as the author of the image stated: https://twitter.com/illustrata_ai/status/1558559036575911936 (the artist's name was not in the prompt). It was only RJ Palmer who, for no reason, assumed this was the case. The signature does not even come close to the original, because the model is not actually trying to copy anything; the signature, like the rest of the image, is completely made up. The article you linked also claims there are programs to explicitly remove signatures, which is likewise not true. Articles like the one you posted are usually full of nonsense, written by people who don't really understand this kind of technology, and I wouldn't use them as a source of any kind. RJ Palmer's reaction to the image in the article was: "This literally tried to recreate the signature of Michael Kutsche in the corner. This is extremely fucked up." These people are good at creating controversy, even when it is based on claims that are not true.



