This is exactly why I'm against Apple's CSAM scanning. It's just a matter of time before someone innocent gets flagged.
Lots of parents take a photo of their child having a bath or something similar (at least in Europe/Asia), and it's clearly non-sexual in nature. Now, how does the Apple algo know that?
I'm genuinely curious: have you tried to contact Google before about any kind of issue? It took me like a day or so to chat with an actual person the last time I had an issue.
I agree with your point as a whole, but the Apple CSAM system wouldn't really have this issue, as it compares the images' hashes against the hashes of a specific list of known CSAM.
Reminder that these are perceptual hashes, not cryptographic hashes. So it's enough for images to have sufficient visual resemblance, according to the model. Natural collisions have been observed, and generating collisions to plant on someone else's device is trivial.
Apple also only receives a DB of hashes and so has no way to verify that it's only scanning for CSAM and not other "undesirable" content.
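To make the perceptual-vs-cryptographic distinction concrete, here's a rough Python sketch of a classic perceptual hash (average hash). To be clear, this is not Apple's NeuralHash, which is a neural-network model, and the file names are just placeholders; it only illustrates why small edits to an image barely move a perceptual hash, whereas a cryptographic hash like SHA-256 changes completely.

    # Minimal average-hash (aHash) sketch -- an illustration of the perceptual
    # hash idea, not Apple's actual NeuralHash. Requires Pillow (pip install Pillow).
    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to an 8x8 grayscale image so only coarse visual structure survives.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # Each bit records whether a pixel is brighter than the average.
        bits = "".join("1" if p > avg else "0" for p in pixels)
        return int(bits, 2)

    def hamming_distance(h1, h2):
        # Number of differing bits; a small distance means "visually similar".
        return bin(h1 ^ h2).count("1")

    # Hypothetical files: a re-compressed or slightly cropped copy of the same
    # photo typically lands within a few bits of the original.
    # print(hamming_distance(average_hash("photo.jpg"),
    #                        average_hash("photo_recompressed.jpg")))

Matching is then a threshold on that distance rather than exact equality, which is exactly why adversarially crafted or naturally similar images can land inside the match radius.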