
Reminder that even in the hypothetical world where every AI image is digitally watermarked, and all cameras have a TPM that writes a hash of every photo to the blockchain, there’s nothing to stop you from pointing that perfectly-verified camera at a screen showing your perfectly-watermarked AI image and taking a picture.

Image verification has never been easy. People have been airbrushed out of and pasted into photos for over a century; AI just makes it easier and more accessible. Expecting a “click to verify” workflow is as unreasonable as it has ever been; only media literacy and a bit of legwork can do the job.



Competent digital watermarks usually survive the 'analog hole'. Screen-cam resistant watermarks have been in use since at least 2020, and if memory serves, back to 2010 when I first started reading about them, but I don't recall what they were called back then.
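
Not the schemes the parent is describing, but as a sketch of the basic mechanics, here is a toy blind spread-spectrum watermark in the global DCT domain (BAND, strength and the helper names are all illustrative). Something this simple survives JPEG compression and mild filtering; surviving an actual photo of a screen additionally needs synchronization marks and error correction so the pattern can be re-found after perspective distortion and resampling, which is what screen-cam resistant schemes add.

    import numpy as np
    from scipy.fft import dctn, idctn

    BAND = slice(32, 128)  # mid-frequency band of the global DCT (illustrative choice)

    def _pattern(key, shape):
        # Keyed pseudo-random pattern; only the key holder can regenerate it.
        return np.random.default_rng(key).standard_normal(shape)

    def embed(gray, key, strength=3.0):
        # Add the keyed pattern to mid-frequency DCT coefficients of a
        # grayscale image (must be at least 128x128 for this BAND).
        c = dctn(gray.astype(float), norm="ortho")
        c[BAND, BAND] += strength * _pattern(key, c[BAND, BAND].shape)
        return np.clip(idctn(c, norm="ortho"), 0, 255).astype(np.uint8)

    def detect(gray, key):
        # Correlation of the mid-frequency band with the keyed pattern:
        # roughly 0 for unmarked images, clearly positive for marked ones.
        c = dctn(gray.astype(float), norm="ortho")
        band = c[BAND, BAND]
        p = _pattern(key, band.shape)
        return float(np.corrcoef(band.ravel(), p.ravel())[0, 1])
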


I just tried asking Gemini about a photo I took of my screen showing an image I edited with Nano Banana Pro... and it said "All or part of the content was generated with Google AI. SynthID detected in less than 25% of the image".

Photo-of-a-screen: https://gemini.google.com/share/ab587bdcd03e

It reported 25-50% for the original image, which hadn't been through that analog hole: https://gemini.google.com/share/022e486fd6bf
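
For what it's worth, roughly the same check can be scripted with the google-genai SDK. This is only a sketch: whether the API-served model reports SynthID the way the Gemini app does here is an assumption, and the model name, file name and prompt are illustrative.

    from google import genai
    from PIL import Image

    client = genai.Client()  # picks up GEMINI_API_KEY from the environment
    resp = client.models.generate_content(
        model="gemini-2.5-flash",  # illustrative model choice
        contents=[Image.open("photo_of_screen.jpg"),
                  "Was any part of this image generated or edited with AI?"],
    )
    print(resp.text)
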


Thanks for testing it!



