Apple has unveiled the specifics of a technology that would detect child sexual abuse material (CSAM) on devices in the United States.
Before a photo is saved to iCloud Photos, the system looks for CSAM matches.
Apple says that if a match is found, a human reviewer will then assess the case and report the user to law enforcement.
However, privacy advocates have raised concerns that the technology could be expanded to scan phones for other prohibited content, or even political speech. Experts fear that authoritarian governments could use the technology to spy on their citizens.

Apple says that new versions of iOS and iPadOS, due to be released later this year, will include “new applications of cryptography to help limit the spread of CSAM online while designing for user privacy”.
The technology compares photos to a database of known child sexual abuse images collected by the National Center for Missing and Exploited Children (NCMEC) and other child safety groups in the US.
These images are translated into “hashes”, numerical codes that can be “matched” to an image on an Apple device. Apple says the technology will also catch edited but visually similar versions of original images.
“Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes,” Apple said.
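Apple has not published the internals of its matching system, which relies on a proprietary neural hash rather than anything as simple as what follows. Still, the core idea described above, that similar images produce similar or identical hashes so that edited copies still match, can be sketched with a toy “average hash”. Everything here (the hash scheme, the 8x8 image size, the pixel values) is an illustrative assumption, not Apple's method:

```python
# Toy sketch of perceptual-hash matching. NOT Apple's NeuralHash; this is
# a simple "average hash" used only to show why an edited copy of an image
# can still produce the same hash as the original.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (a list of 64 ints, 0-255) into 64 bits:
    each bit is 1 if that pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count the bits on which two hashes differ; 0 means an exact match."""
    return bin(h1 ^ h2).count("1")

original = [10] * 32 + [200] * 32     # a toy "image": dark half, bright half
brightened = [30] * 32 + [220] * 32   # the same image, uniformly brightened

h1, h2 = average_hash(original), average_hash(brightened)
assert hamming_distance(h1, h2) == 0  # the edit did not change the hash
```

Because every pixel is compared to the image's own mean, a uniform brightness edit leaves the bit pattern, and therefore the hash, unchanged; that resilience to alteration is the property the article describes.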
The company claimed the system had an “extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account”.
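Apple has not explained how it derives the one-in-a-trillion figure. One common way such a guarantee can arise is by requiring a threshold number of matches before an account is flagged: even if individual photos occasionally false-match, many simultaneous false matches are astronomically unlikely. The sketch below illustrates that arithmetic with entirely assumed numbers (per-photo error rate, photo count, and threshold are hypothetical, not Apple's):

```python
# Hypothetical illustration of how a tiny account-level error rate can
# follow from a match threshold. All numbers below are assumptions.
from math import exp, factorial

def poisson_tail(k, lam, terms=100):
    """P(X >= k) for X ~ Poisson(lam), approximated by summing `terms`
    tail terms (later terms are vanishingly small)."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k, k + terms))

# Assumed: 10,000 photos uploaded per year, each false-matching
# independently with probability 1e-6, account flagged after 30 matches.
# The number of false matches is then roughly Poisson(10_000 * 1e-6).
p_flag = poisson_tail(30, 10_000 * 1e-6)
assert p_flag < 1e-12  # far below one in a trillion per year
```

With these assumptions an account averages only 0.01 false matches a year, so accumulating 30 of them is essentially impossible, which is how a modest per-photo error rate can still yield the kind of account-level figure Apple quotes.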
Apple says each report will be manually reviewed to confirm it is accurate. The company can then disable the user’s account and report the case to the authorities.
Apple says the new method offers “significant” privacy benefits over earlier techniques, since it learns about users’ photos only if a collection of known CSAM is found in their iCloud Photos account.
Privacy experts, however, have voiced misgivings.
“Regardless of what Apple’s long-term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” Matthew Green, a security researcher at Johns Hopkins University, said.
“Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone.”