It has been a busy week for Apple after unveiling its new CSAM detection system, a tool intended to protect minors. The announcement that Apple plans to scan the photos of iPhone and iPad users for child sexual abuse imagery before they are uploaded to iCloud did not go over well with the public.
In a background paper published on August 13, 2021, the company explains the details of the planned detection function, including the question of which databases should be used to compare users' photos with known images of abuse.
According to the document, the photos are scanned directly on the users' devices, not on the iCloud servers. For this purpose, hashes are created from the existing files and compared locally with a database that contains hashes of known child sexual abuse material (CSAM). In addition, the hash values of user photos are compared not against just one database but against two or more.
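The on-device comparison can be sketched as a set-membership check. This is an illustration only: Apple's actual system uses its NeuralHash perceptual hash (so visually identical images match even after resizing or re-encoding) together with blinded, encrypted database entries, whereas the sketch below substitutes a plain SHA-256 and an unencrypted hash set; all names and byte strings are hypothetical.

```python
import hashlib

def image_hash(data: bytes) -> str:
    # Hypothetical stand-in for a perceptual hash such as NeuralHash;
    # a plain SHA-256 only matches byte-identical files.
    return hashlib.sha256(data).hexdigest()

# Hashes from the on-device CSAM database (illustrative entries only).
known_hashes = {image_hash(b"flagged-image-bytes")}

def matches_database(photo_bytes: bytes) -> bool:
    # Compare a photo's hash locally, before the photo is uploaded to iCloud.
    return image_hash(photo_bytes) in known_hashes
```

In the real protocol the device never learns the result of this comparison; the outcome is sealed into an encrypted safety voucher, as the quoted passage below describes.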
“The on-device encrypted CSAM database contains only entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions, i.e. not under the control of the same government. Mathematically, the result of each match is unknown to the device. The device only encodes this unknown and encrypted result into what is called a safety voucher, alongside each image being uploaded to iCloud Photos. The iCloud Photos servers can decrypt the safety vouchers corresponding to positive matches if and only if that user’s iCloud Photos account exceeds a certain number of matches, called the match threshold.” — Apple writes.
Apple assumes that this threshold will initially be 30 hits. The value includes a large safety margin, reflecting a worst-case assumption about the system's real-world performance, and it can still be changed after an empirical evaluation. According to Apple, a false positive should occur for a given user account in only one out of a trillion cases.
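The threshold rule can be reduced to a simple counting sketch. Note the simplification: in Apple's design the gating is cryptographic (threshold secret sharing), so the server mathematically cannot decrypt any voucher while the count is below the threshold; the plain counter, the constant name, and the function below are illustrative assumptions, not Apple's implementation.

```python
MATCH_THRESHOLD = 30  # Apple's assumed initial value; adjustable after evaluation

def account_is_reviewable(voucher_matches: list[bool]) -> bool:
    # Server-side rule: safety vouchers for positive matches become
    # decryptable only once the number of matches reaches the threshold.
    return sum(voucher_matches) >= MATCH_THRESHOLD
```

Only when this condition holds does the pipeline proceed to the additional checks and human review described below.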
The company assured that the same software would be distributed worldwide on all devices. It is therefore not possible to change the hash algorithm, the threshold value, or the comparison software for individual users.
If the match threshold is exceeded, Apple carries out a further database comparison to rule out possible false hits on the devices. Only then do Apple employees check whether the flagged photos actually contain abuse material. “In which case they disable the offending account and refer the account to a child safety organization – in the United States, the National Center for Missing and Exploited Children (NCMEC) – who in turn works with law enforcement on the matter.”