Apple announces major AirTag updates amidst tracking scrutiny

The news: Nearly a year after AirTag first went on sale, Apple has issued its most substantive response yet to broad concerns about unwanted tracking and privacy.

Here are the details from the press release…

  • A sterner privacy warning will be shown when setting up an AirTag for the first time
  • Previously, an unknown pair of AirPods traveling with you triggered a generic “Unknown Accessory Detected” alert; going forward, the alert will name AirPods specifically
  • Updated support documentation is available now on Apple’s website
  • Precision Finding, which works on iPhone 11, 12, and 13, can now be used to locate an AirTag that may be tracking you, so you can pinpoint the threat as quickly as possible
  • If an unwanted AirTag near you is emitting sound to announce its presence, an alert will now appear on your display as well (in case the speaker has been tampered with)
  • Apple is updating its unwanted tracking alert logic to notify users even sooner when an AirTag is following them
  • The sound emitted by AirTag is being tuned so that the loudest tones are even louder

Our take: Wow. This is incredibly comprehensive and I’m honestly impressed. I see why Apple waited so long to formally respond to these AirTag concerns: they’ve been working on all of this. I think they nailed it, and there’s no doubt in my mind that this will have a positive effect against bad actors.

Apple announces controversial new child safety features coming later this year

The news: Following rumors of an imminent announcement, Apple has officially confirmed new plans to scan users’ iCloud Photos and bolster child safety on their platforms.

The new plan takes a three-pronged approach:

  • Messages app: When a child sends or receives sexually explicit material, new pop-ups will help them navigate the situation, and notifications can be sent to parents letting them know whether or not their child viewed the content.
  • iCloud photo scanning (US only): Apple is rolling out technology later this year that will scan your iCloud Photos for known Child Sexual Abuse Material (CSAM). The system looks for existing content that can be matched against a database of CSAM hashes provided by partner organizations. If illegal content is confirmed after a manual review, the user’s account will be reported and disabled. Apple says there is a one-in-one-trillion chance of a false positive and will offer an appeals process for these cases. (A simplified sketch of this matching flow follows the list below.)
  • Siri and Search guidance improvements: “Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report. Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”
  • These new features are being rolled out later this year in updates across Apple’s platforms
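For a concrete picture of what the described flow could look like, here is a heavily simplified Swift sketch. It is an assumption-laden illustration, not Apple’s implementation: the type names, the hash strings, and the 30-match threshold are invented for the example, and the real system reportedly compares perceptual image hashes rather than plain strings.

```swift
import Foundation

// Hypothetical sketch of the described flow, not Apple's implementation:
// per-photo matching against a partner-supplied hash database, an
// account-level threshold, and a human-review gate before any report.
// All names and the threshold value are illustrative assumptions.

struct PhotoRecord {
    let id: UUID
    let imageHash: String        // stand-in for the on-device image hash
}

struct MatchingPipeline {
    let knownHashes: Set<String> // database of known-CSAM hashes from partners
    let reviewThreshold: Int     // matches required before human review

    /// True when an account has enough matches to be escalated to reviewers.
    func shouldEscalate(_ uploads: [PhotoRecord]) -> Bool {
        let matches = uploads.filter { knownHashes.contains($0.imageHash) }.count
        return matches >= reviewThreshold
    }
}

// Below the threshold nothing is surfaced; above it, a human reviewer
// confirms the matches before the account is reported and disabled.
let pipeline = MatchingPipeline(knownHashes: ["a1b2c3"], reviewThreshold: 30)
let uploads = [PhotoRecord(id: UUID(), imageHash: "d4e5f6")]
print(pipeline.shouldEscalate(uploads)) // false: no matches, so no escalation
```

Presumably the account-level threshold and the manual review step are what let Apple quote a one-in-one-trillion false-positive figure: a single stray match does nothing on its own.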

Our take: All of these features sound great on paper, and the first and third genuinely are. As for the second, we can all agree that CSAM is terrible and has no place anywhere, ever. But Apple’s method of finding CSAM is what I’m concerned about: specifically, the precedent this sets and the door it opens for finding other illegal content in users’ iCloud Photos down the line.

Years from now, will we see this photo scanning feature expanded to other illegal content? This is a slippery slope.

New report says Apple plans to scan your iPhone for images of child abuse

The rumor: Apple is working on new software to scan your photo library for images of child abuse, according to the Financial Times (not yet tracked).

  • The upcoming tool, dubbed neuralMatch, “would proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US…Apple’s neuralMatch algorithm will continuously scan photos that are stored on a US user’s iPhone and have also been uploaded to its iCloud back-up system. Users’ photos, converted into a string of numbers through a process known as ‘hashing’, will be compared with those on a database of known images of child sexual abuse,” FT explains (see the hashing sketch after this list)
  • Numerous security researchers, while supportive of efforts to stop child exploitation, have voiced grave concerns about whether these ends justify the means and about how the system could be exploited maliciously by governments in the future
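To make the quoted description a bit more concrete, here is a tiny Swift sketch of the hash-and-compare idea. It is only an illustration under clear assumptions: it uses SHA-256 from CryptoKit as a stand-in, whereas the reported neuralMatch approach would use a perceptual hash so that resized or recompressed copies of a known image still match.

```swift
import Foundation
import CryptoKit

// Minimal illustration of the "hashing" step the FT describes: convert image
// bytes into a fixed string of numbers, then compare it against a set of
// known hashes. SHA-256 is a stand-in here; the reported neuralMatch system
// would rely on a perceptual hash, which (unlike SHA-256) is designed to
// match visually similar images rather than only identical files.

func digest(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

func isKnownImage(_ imageData: Data, knownDigests: Set<String>) -> Bool {
    knownDigests.contains(digest(of: imageData))
}

// Usage with placeholder data: with a cryptographic hash, only an exact
// byte-for-byte copy of a database entry would ever match.
let sample = Data("example image bytes".utf8)
let database: Set<String> = [digest(of: sample)]
print(isKnownImage(sample, knownDigests: database))                   // true
print(isKnownImage(Data("other bytes".utf8), knownDigests: database)) // false
```

That trade-off, resilience to edits instead of exact file matching, is part of why a perceptual-hash system is harder to reason about than a simple duplicate check.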

Our take: I don’t think there’s a single rational person who thinks that less child abuse imagery in the world is a bad thing. But how Apple plans to get there is concerning.

Apple has publicly shown their dedication to privacy and security time and time again, but this move jeopardizes that in a way we haven’t seen before. We’ll have to wait for official word from Apple before passing final judgment, but those who study this kind of thing for a living don’t seem to think it sets a good precedent in any way.