Apple Defends CSAM Technology And Clears Up Miscommunication About New Child Protection Measures

It has been a busy week for Apple after the company unveiled its new CSAM monitoring system, a tool intended to protect minors. The announcement that Apple plans to scan the photos of iPhone and iPad users for child sexual abuse material before they are uploaded to iCloud did not go over well with the public.

In a background paper published on August 13, 2021, the company explains the details of the planned monitoring function, including, among other things, which databases will be used to compare users’ photos with known images of abuse.

According to the document, the photos are to be scanned directly on the users’ devices, not on the iCloud servers. For this purpose, hashes are created from the existing files and compared locally with a database containing hashes of known child sexual abuse material (CSAM). In addition, the hash values of user photos are compared not just with one database but with two or more.
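To make the on-device matching idea concrete, here is a minimal, hypothetical sketch in Swift. SHA-256 from CryptoKit is used purely as a stand-in; Apple’s actual system uses a perceptual hash (NeuralHash) and blinded, encrypted databases rather than plain hash sets, and none of the names below are Apple’s API.

```swift
import CryptoKit
import Foundation

/// Hypothetical sketch: checks a photo against the intersection of two
/// independently supplied hash databases, mirroring the "two or more
/// databases" requirement described above. Not Apple's actual API.
func isKnownCSAM(photoData: Data,
                 databaseA: Set<String>,
                 databaseB: Set<String>) -> Bool {
    // Only hashes present in BOTH databases are eligible for matching.
    let eligibleHashes = databaseA.intersection(databaseB)

    // SHA-256 stands in for the perceptual hash used in the real system.
    let digest = SHA256.hash(data: photoData)
    let hexHash = digest.map { String(format: "%02x", $0) }.joined()

    return eligibleHashes.contains(hexHash)
}
```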

“The on-device encrypted CSAM database contains only entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions, i.e. not under the control of the same government. Mathematically, the result of each match is unknown to the device. The device only encodes this unknown and encrypted result into what is called a safety voucher, alongside each image being uploaded to iCloud Photos. The iCloud Photos servers can decrypt the safety vouchers corresponding to positive matches if and only if that user’s iCloud Photos account exceeds a certain number of matches, called the match threshold,” Apple writes.
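The passage can be read as follows: for every uploaded photo, the device attaches an encrypted voucher whose contents it cannot read itself. The sketch below is a loose, hypothetical illustration of that packaging step only; the real design relies on private set intersection and threshold secret sharing rather than a simple symmetric cipher.

```swift
import CryptoKit
import Foundation

/// Hypothetical stand-in for the "safety voucher" described in the quote:
/// an opaque, encrypted payload uploaded alongside each photo.
struct SafetyVoucher {
    let encryptedPayload: Data
}

/// Illustrative only: AES-GCM here merely represents "encrypted and
/// unreadable"; in Apple's design the device never learns the match
/// result in the first place.
func makeSafetyVoucher(matchResult: Bool, key: SymmetricKey) throws -> SafetyVoucher {
    let plaintext = Data([matchResult ? 1 : 0])
    let sealedBox = try AES.GCM.seal(plaintext, using: key)
    // combined is non-nil when the default nonce size is used.
    return SafetyVoucher(encryptedPayload: sealedBox.combined!)
}
```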

Apple assumes that this threshold will initially be set at 30 matches. The value includes a large safety margin that reflects a worst-case assumption about the real-world performance of the system, and it can still be changed after empirical evaluation. Even with that margin, a false positive should occur for a given user account in only about one in a trillion cases.
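A simplified, hypothetical sketch of the server-side threshold rule: vouchers are merely counted, and the account is flagged only once the number of positive matches exceeds the threshold. In the real system the server cannot see individual results at all until the threshold is crossed; they are visible here only for illustration.

```swift
/// Hypothetical server-side sketch of the match-threshold rule.
struct DecodedVoucher {
    let isPositiveMatch: Bool
}

let matchThreshold = 30  // initial value cited by Apple, subject to change

/// Returns true only when the account's positive matches exceed the threshold.
func shouldTriggerReview(vouchers: [DecodedVoucher]) -> Bool {
    let positiveMatches = vouchers.filter { $0.isPositiveMatch }.count
    return positiveMatches > matchThreshold
}
```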

The company assured that the same software would be distributed worldwide on all devices, so it is not possible to change the hash algorithm, the threshold value, or the comparison software for individual users.

If the match threshold is exceeded, Apple carries out a further database comparison to rule out possible false matches from the on-device check. Only then do Apple employees review whether the flagged photos actually contain abuse material, “in which case they disable the offending account and refer the account to a child safety organization – in the United States, the National Center for Missing and Exploited Children (NCMEC) – who in turn works with law enforcement on the matter.”
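Putting the final steps together, here is a hypothetical sketch of the escalation flow: the threshold must be exceeded, a second independent comparison must uphold the matches, and a human reviewer must confirm the material before any action is taken. The names are illustrative, not Apple’s.

```swift
/// Hypothetical sketch of the escalation flow described above.
enum ReviewOutcome {
    case noAction
    case accountDisabledAndReferred(to: String)
}

func escalate(positiveMatches: Int,
              threshold: Int,
              secondaryComparisonConfirms: Bool,
              humanReviewConfirms: Bool) -> ReviewOutcome {
    // All three conditions must hold before any action is taken.
    guard positiveMatches > threshold,
          secondaryComparisonConfirms,
          humanReviewConfirms else {
        return .noAction
    }
    // In the United States, the referral goes to NCMEC.
    return .accountDisabledAndReferred(to: "NCMEC")
}
```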

Avinash A
Meet Avinash, a tech editor with a Master's in Computer Science and a passion for futuristic tech, AI, and Machine Learning. Known for making complex tech easy to understand, he's a respected voice in leading tech publications and podcasts. When he's not deciphering the latest AI trends, Avinash indulges in building robots and dreaming up the next big tech breakthrough.
