I thought privacy was really well handled for a high-level overview. Basically, it seems like anything it can't do on-device uses ephemeral private compute with no stored data.
Sending any data to third-party AI models requires your consent first.
The details will need to emerge on how they live up to this vision, but I think it's the best AI privacy model so far. I only wish they'd go further and release the Apple Intelligence models as open source.
If the servers are so private, why is on-device such a win? Here are some irrelevant distractions:
- the CPU architecture of the servers
- mentioning that you have to trust vendors not to keep your data, then announcing a cloud architecture where you have to trust them not to keep your data
- pushing the verifiability of the phone image, when all we ever cared about was what they sent to servers
- only "relevant" data is sent, but over time that amounts to everything, and since they never give anyone fine-grained control over anything, the LLM will quietly determine what counts as relevant
- the mention that the data is encrypted, which of course it isn't, since they couldn't run inference on it otherwise. They mean encrypted in flight, which hopefully _everything_ is these days, so it's irrelevant
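To make that last point concrete, here's a toy sketch (XOR stands in for TLS, which is obviously not real crypto): in-flight encryption only protects the data on the wire; the server must hold the key and decrypt the prompt before it can run inference on it.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for TLS record encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

session_key = os.urandom(16)          # key negotiated with the server (as in a TLS handshake)
prompt = b"summarize my private email"

ciphertext = xor_cipher(prompt, session_key)   # this is what travels over the wire
assert ciphertext != prompt                    # an eavesdropper sees only ciphertext

# Server side: decryption is mandatory before any inference can happen,
# so "encrypted" here never means "hidden from the server".
plaintext = xor_cipher(ciphertext, session_key)
assert plaintext == prompt
```

So "your data is encrypted" only rules out on-path eavesdroppers; it says nothing about what the operator can see or retain.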
Considering they spent the first half of that segment throwing shade at people who claim privacy guarantees without any way to verify them, hopefully Apple will provide a very robust verification process.