For example, it's trivial to post an advertisement without disclosure. Yet it's illegal, so large players mostly comply and harm is less likely on the whole.
> I don't think it will be easy to just remove it.
No, but model-training technology is out in the open, so it will remain possible to train models and build model toolchains that simply don't incorporate watermarking, which is exactly what any motivated actor seeking to mislead will do. The only thing watermarking will accomplish is training people to accept its absence as a sign of reliability, which makes fakes by motivated bad actors more effective.
It's an image. There's simply no way to add a watermark to an image that's both imperceptible to the user and non-trivial to remove. You'd have to pick one of those options.
I'm not sure that's correct. I'm not an expert, but there's a lot of literature on digital watermarks that are robust to manipulation.
It may be easier if you have an oracle on your end that can say "yes, this image does/does not have the watermark," which could be the case for some proposed implementations of an AI watermark. (The usual use case for digital watermarks assumes the watermarker keeps the evaluation tool secret; that lets them find, e.g., people who leak early screenings of movies.)
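Worth noting the flip side: if the detection oracle is public, as some AI-watermark proposals envision, an attacker can query it repeatedly and degrade an image just until the answer flips. A minimal sketch of that oracle attack, using entirely hypothetical stand-ins (a pseudo-random additive watermark and a simple correlation detector; real schemes are more sophisticated, but the query-until-it-flips loop is the same idea):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical watermark: a faint pseudo-random pattern added to the image
# (a stand-in for real spread-spectrum-style embedding).
H = W = 64
watermark = rng.choice([-1.0, 1.0], size=(H, W))
image = rng.uniform(0.0, 1.0, size=(H, W))
marked = image + 0.05 * watermark

def oracle(img, threshold=0.02):
    """Stand-in public detector: correlate the image with the pattern."""
    score = np.mean((img - img.mean()) * (watermark - watermark.mean()))
    return score > threshold

def box_blur(img, k=3):
    """Simple k-by-k box filter used as a cheap degradation step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Oracle attack: apply the weakest degradation that flips the detector.
attacked = marked
passes = 0
while oracle(attacked) and passes < 10:
    attacked = box_blur(attacked)
    passes += 1

survived = oracle(attacked)  # False: detector no longer sees the mark
```

The attacker never needs to know how the watermark works, only what the detector says; keeping the detector secret is precisely what blocks this loop.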
Exactly: a diffusion model can denoise the watermark out of the image. If you wanted to be doubly sure, you could add noise first and then denoise, which should completely overwrite any encoded data. Those are trivial operations, so it would be easy to create a tool or service explicitly for that purpose.
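The noise-then-denoise attack can be sketched in a few lines. This is a toy illustration with assumed stand-ins, not any real scheme: the "watermark" is a faint pseudo-random additive pattern, the "detector" is plain correlation, and a box blur stands in for a learned denoiser. The point it shows is that the attack wipes out the high-frequency watermark signal along with the added noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding: a faint pseudo-random pattern added to the image.
image = rng.uniform(0.0, 1.0, size=(128, 128))
watermark = rng.choice([-1.0, 1.0], size=(128, 128))
marked = image + 0.05 * watermark  # small amplitude, visually negligible

def detect(img, wm):
    """Correlation detector: larger score => watermark more likely present."""
    return float(np.mean((img - img.mean()) * (wm - wm.mean())))

def box_blur(img, k=3):
    """Simple k-by-k box filter standing in for a denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Attack: add fresh noise, then "denoise" it away. The denoiser cannot
# distinguish the watermark from the noise, so both get smoothed out.
attacked = box_blur(marked + rng.normal(0.0, 0.02, size=marked.shape))

score_before = detect(marked, watermark)   # clearly positive
score_after = detect(attacked, watermark)  # driven toward zero
```

A real attacker would use an actual diffusion model rather than a blur, but the operations are just as cheap as the comment suggests.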
I don't see how it would defeat the cat and mouse game.