The rumor: Apple is working on new software to scan your photo library for images of child abuse, according to the Financial Times.
- The upcoming tool, dubbed neuralMatch, “would proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US…Apple’s neuralMatch algorithm will continuously scan photos that are stored on a US user’s iPhone and have also been uploaded to its iCloud back-up system. Users’ photos, converted into a string of numbers through a process known as ‘hashing’, will be compared with those on a database of known images of child sexual abuse,” the FT explains.
- Numerous security researchers, while supportive of efforts to stop child exploitation, have voiced grave concerns about whether these ends justify the means, and about how the system could be exploited in malicious ways by governments in the future.
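To make the hash-matching idea concrete, here's a minimal sketch in Python. Note this is purely illustrative: Apple's reported system uses a perceptual, neural-network-based hash that tolerates resizing and re-encoding, whereas the cryptographic hash below only matches byte-identical files, and the database contents here are hypothetical.

```python
import hashlib

# Hypothetical database of hashes of known illegal images (hex digests).
# A real deployment would use perceptual hashes resilient to cropping,
# resizing, and re-encoding; SHA-256 is used here only to illustrate
# the "convert to a string of numbers, then compare" flow the FT describes.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def hash_bytes(data: bytes) -> str:
    """Convert a photo's raw bytes into a fixed-length hash string."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(photo_bytes: bytes) -> bool:
    """Flag a photo for human review if its hash matches the database."""
    return hash_bytes(photo_bytes) in KNOWN_HASHES
```

The key design point critics focus on: the comparison happens against whatever hashes are loaded into the database, and the phone's owner has no way to audit what that database actually contains.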
Our take: I don’t think there’s a single rational person who thinks that less child abuse imagery in the world is a bad thing. But how Apple plans to get there is concerning.
Apple has publicly shown its dedication to privacy and security time and time again, but this move jeopardizes that in a way we haven’t seen before. We’ll have to wait for the official word from Apple before passing final judgment, but those who study this kind of thing for a living don’t seem to think it sets a good precedent in any way.