> From what you say it's clear you never read Apples paper on this. ... You seem to have an over simplified view of how it all works. They don't just throw hashes in. ... Again it's an over simplification. If the US government did do that. ... I'd recommend you read up on all of it a bit more. Most of your claims are unfounded in relation to the CSAM.
The posturing about supposed expertise adds nothing. If you want to make an argument, make it. Vague appeals to technical depth are just noise.
> The client puts a flag on a match. It is [...] verified on the server [...] The current system just checks everything.
Sure, that’s how the flagging process works. It’s also beside the point. Listing technical details doesn’t change the core issue: this system performs scanning on the user device, which is what makes it problematic.
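To make the concern concrete, here is a minimal, self-contained sketch of that on-device step (toy hash and invented names, not Apple's actual code; the real design uses NeuralHash plus private set intersection, so the device itself never even learns the match result). The point is simply where the matching runs: locally, before anything reaches the server.

```python
# Toy sketch of client-side matching (hypothetical names, not Apple's API).
import hashlib

# In the real design the hash list ships inside the OS in blinded form and
# its contents are opaque to the user. A plain set stands in for it here.
BLINDED_HASH_DB = {hashlib.sha256(b"known-bad-sample").digest()}

def toy_perceptual_hash(image_bytes: bytes) -> bytes:
    """Stand-in for a perceptual hash like NeuralHash (which tolerates
    resizing and recompression; a plain SHA-256 does not)."""
    return hashlib.sha256(image_bytes).digest()

def prepare_upload(image_bytes: bytes) -> dict:
    """Runs on the device, before the photo ever leaves it. The match
    result rides along in a 'safety voucher' that the server can only
    open once enough matches accumulate."""
    matched = toy_perceptual_hash(image_bytes) in BLINDED_HASH_DB
    return {"ciphertext": image_bytes, "voucher": {"matched": matched}}

print(prepare_upload(b"holiday photo"))     # voucher: matched=False
print(prepare_upload(b"known-bad-sample"))  # voucher: matched=True
```

However the voucher is encrypted or verified afterwards, the scan itself runs on hardware the user owns, against a list the user can't see. That's the objection.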
> If the client doesn't flag a file, it can never be decrypted on the server by anyone except the device owner. [...] If your device never talks to the cloud in both scenarios nothing happens.
Correct, but not relevant here. No one is arguing that airgapped devices leak information. The issue is what happens when devices are online.
> [On the structural problem of inability of independent oversight] They can verify it by the chain of custody and documentation that is stored about that hash.
What specific documentation would allow actual evaluation? And who can access it? The process is opaque by design: the list of neural hashes is private, matching and flagging happen silently, and escalation logic like threshold levels or safety-voucher generation is not open to inspection. Whatever theoretical accountability might exist is irrelevant in a system built on secrecy that cannot be independently observed or audited.
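For anyone unfamiliar with the threshold part: it is essentially a secret-sharing scheme, where the server can only recover a decryption key once enough matching vouchers arrive. A toy sketch (Shamir secret sharing over a prime field; the parameter names and values are mine, not Apple's) makes the opacity obvious: the threshold and the hash list are vendor-chosen inputs that nobody outside can inspect or verify.

```python
# Toy Shamir secret sharing to illustrate the voucher threshold
# (illustrative only; Apple's scheme differs in the cryptographic details).
import random

P = 2**61 - 1     # prime field for the toy example
THRESHOLD = 3     # vendor-chosen; the device owner has no way to verify it

def make_shares(secret: int, n: int, t: int = THRESHOLD) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0; meaningful only with >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789                    # per-account decryption key
shares = make_shares(key, n=10)    # one share accompanies each flagged upload
print(reconstruct(shares[:3]) == key)  # True: threshold reached, key recovered
print(reconstruct(shares[:2]) == key)  # False: below threshold, key stays hidden
```

Nothing in that mechanism is observable from outside: you can't check what the threshold is, when it changes, or what the shares are attached to.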
> CSAM is a UN protocol [...] countries [...] work with NCMEC on ensuring the CSAM hashes are correct. Germany is also one of the strictest countries in relation to CSAM.
Yes, Germany has police and of course works to fight CSAM. That doesn’t change the concern: the system design is extensible and unverifiable. If a U.S. administration wanted to expand the scope (say, to terrorism, extremism, drugs, or IP enforcement), who exactly stops them? Not a German agency. Certainly not NCMEC.
> [On the obvious loophole of policy change] If the US government did do that. - It would first be challenged in the courts.
That is... optimistic. What legal mechanism, exactly, would allow a challenge to, say, a classified National Security Letter expanding the hash set? What court would even have jurisdiction to hear it? What precedent makes you believe such a challenge would surface in time?
> - They would not be able to hide the fact they have changed it.
Why not? The hashes are not reversible. The list is not public. The matches are not auditable. Gag orders are legal. What in this system ensures visibility or accountability?
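To see why "not auditable" is a structural property and not rhetoric, consider a toy version of the blinding (a discrete-log construction standing in for Apple's elliptic-curve PSI). Without the server's secret key, a legitimate hash list and a tampered one are indistinguishable to any outside auditor:

```python
# Toy blinding sketch (illustrative; Apple's PSI uses elliptic curves,
# but the auditability property is the same).
import hashlib
import secrets

P = 2**127 - 1                                 # a Mersenne prime, toy modulus
SERVER_SECRET = secrets.randbelow(P - 2) + 1   # known only to the vendor

def blind(image_hash: bytes) -> int:
    """What ships to devices: H(x)^k mod p with k secret. Given only the
    output, no one can tell which image it corresponds to."""
    h = int.from_bytes(hashlib.sha256(image_hash).digest(), "big") % P
    return pow(h, SERVER_SECRET, P)

db_legitimate = [blind(b"csam-hash-1"), blind(b"csam-hash-2")]
db_tampered   = [blind(b"csam-hash-1"), blind(b"added-non-csam-hash")]

# Both lists are just uniform-looking numbers; an auditor holding them
# cannot distinguish a legitimate entry from a quietly added one.
print(db_legitimate)
print(db_tampered)
```

Combine that with gag orders and there is no point in the pipeline where an addition to the list would become visible.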
> - This would lead to service providers not assisting with the corrupted CSAM. - As this is a worldwide initiative the rest of the world can just disconnect the US from the CSAM until what is put in is confirmed.
Apple's system is not a worldwide initiative but a US-driven proposal involving a handful of US organizations. EU involvement in the whole issue has been comparatively lacking and is often dependent on US lobbying and funding. The idea that the rest of the world could or would opt out assumes a degree of transparency and technical independence that simply does not exist on this planet right now.
If you want to argue that the system is technically robust against political misuse, then please do. If there are decent guardrails in place, I genuinely want to know about them. But so far, it mostly reads like a wish list.