Apple Confirms Existing iCloud Photos Will Be Scanned For Child Abuse

Apple won't begin scanning iCloud Photos libraries for potential child abuse imagery until later in 2021, the company has confirmed, though the controversial system won't be limited to new uploads. Announced last week, the upcoming feature will rely on automated, on-device hash matching to flag possible child sexual abuse material (CSAM), a move that has left some privacy advocates concerned.

Part of the controversy stems from Apple's decision to announce two child-protection features at the same time. In addition to the iCloud Photos scanning system, Apple will also offer parents the ability to have potentially offensive images blurred automatically in their children's Messages conversations.

The scanning and recognition will take place on the phone itself, in a process that seems to have been misunderstood in some quarters. For the iCloud Photos CSAM scanning, Apple will use unreadable hashes – strings of numbers representing known CSAM images – and compare them against hashes of the images a user chooses to upload to the cloud gallery service.

"This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations," Apple explained in a new FAQ about the system. "Using new applications of cryptography, Apple is able to use these hashes to learn only about iCloud Photos accounts that are storing collections of photos that match to these known CSAM images, and is then only able to learn about photos that are known CSAM, without learning about or seeing any other photos."

According to Apple, the system won't go into operation until later this year, when iOS 15, iPadOS 15, watchOS 8, and macOS Monterey are released. However, that doesn't mean images uploaded to iCloud Photos between now and then – or indeed uploaded to the service before the new system was announced – will escape scanning.

Images that have already been uploaded to iCloud Photos will also be processed, an Apple representative told CNBC today. However, that will still rely on local, on-iPhone scanning. Photo libraries not marked for upload to iCloud Photos will not be examined for CSAM content by the new tool, and "the system does not work for users who have iCloud Photos disabled," the company adds.

As for concerns that the same approach could be used to target someone with fraudulent claims, Apple seems confident that's impossible. The company does not add to the existing CSAM image hashes, it points out; that database is created and validated externally by child safety organizations. "The same set of hashes is stored in the operating system of every iPhone and iPad user," Apple adds, "so targeted attacks against only specific individuals are not possible under our design."

Although the system is designed to spot CSAM content automatically, it won't make reports directly to law enforcement. While "Apple is obligated to report any instances we learn of to the appropriate authorities," the company highlights, any flagged occurrence will first be checked by a human moderator. Only after that review confirms the match will a report be made.
