Is Apple’s Controversial Child Safety Plan Dead? | CSAM Detection MIA

CSAM guidance in Siri and Search (Credit: Apple)

A few months ago, Apple found itself mired in controversy after announcing several new child safety initiatives, and now it looks like it may want us to forget that one of them was ever on the table in the first place.

In August, Apple unveiled a plan to scan photos being uploaded to iCloud Photo Library for child sexual abuse material (CSAM) alongside another feature that would warn kids about inappropriate photos and even notify their parents in some cases.

These two larger initiatives were joined by proposed enhancements to Siri and Spotlight search that would offer guidance on how to deal with CSAM and how to report child exploitation, warn users of the potential legal consequences of searching for CSAM, and provide links to anonymous helplines for “adults with at-risk thought and behaviour.”

Within hours of Apple’s announcement, numerous privacy advocates began vocally criticizing the plan, particularly as it pertained to CSAM detection, calling it a slippery slope that could lead to abuse by authoritarian regimes.

While the CSAM Detection algorithm was designed to scan only for known images of CSAM, based on a database supplied by the U.S. National Center for Missing and Exploited Children (NCMEC), opponents of the plan raised the valid concern that once the technology existed, it could be easily subverted simply by replacing the database.

For instance, if a government agency wanted to track down political dissidents, it could theoretically slip in a database of well-known memes and similar photos that are commonly shared within those groups, allowing it to identify users who have those photos in their libraries.

Apple insisted that numerous safeguards in the system would prevent this from happening, including having the database vetted by multiple independent agencies and requiring Apple staff to review any suspect material with their own eyes to confirm that it was actually CSAM before reporting it anywhere outside of Apple’s walls.

In our opinion, Apple’s CSAM Detection was a very noble idea with solid technology behind it, but unfortunately, Apple’s self-assured hubris caused the whole initiative to blow up in its face.

Fear, Uncertainty, and Doubt

Apple’s precautions weren’t enough to quell the fears of privacy advocates, and the backlash against its plans intensified as many others misunderstood the technical details, blowing things even further out of proportion.

Even though the two main child safety initiatives — CSAM Detection and Communication Safety — were totally distinct from each other, many folks conflated them, believing that Apple’s plans were considerably more insidious.

For example, the proposed CSAM Detection would have only scanned photos being uploaded to iCloud Photo Library against a database of known images. No machine learning or AI analysis would have been involved here. This means there was basically zero risk of a cute picture of your toddler in the bathtub triggering this algorithm — at least not unless that photo had previously been stolen from your personal library and circulated among child predators widely enough to have ended up in the NCMEC database.

Further, although CSAM Detection occurred on the device, the stated goal was only to scan photos that were being uploaded to iCloud Photo Library — before they arrived at Apple’s servers. This scanning would not occur unless iCloud Photo Library was enabled on a user’s device, and it would only apply to those photos in the process of being uploaded.
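To make that distinction concrete, here is a minimal, purely illustrative sketch of what “matching against a database of known images” means. Apple’s published design used a perceptual hash (NeuralHash) and a blinded on-device database rather than anything this simple, so the plain SHA-256 digest, the in-memory set, and every name below are assumptions made only for illustration.

```swift
import Foundation
import CryptoKit

// Hypothetical, placeholder set of fingerprints for known images (not real values).
let knownImageFingerprints: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

/// Returns true only if the photo's fingerprint exactly matches a known entry.
/// A real perceptual hash tolerates resizing and re-encoding; a cryptographic
/// digest like this one does not, which is why this is only a conceptual sketch.
func matchesKnownImage(_ photoData: Data) -> Bool {
    let digest = SHA256.hash(data: photoData)
    let fingerprint = digest.map { String(format: "%02x", $0) }.joined()
    return knownImageFingerprints.contains(fingerprint)
}
```

The point the sketch illustrates is that nothing in this flow analyzes what a photo depicts; a photo either matches a fingerprint already in the database or it is ignored entirely.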

On the other hand, Communication Safety in Messages was designed to use machine learning to analyze photos being sent and received in the iOS Messages app to detect if they contained sexually explicit nudity.

These algorithms would be triggered by any suspect photo, but they had nothing to do with the CSAM Detection feature.

With Communication Safety, the algorithm would simply blur out the photos for users under the age of 18 — and only if the feature had been enabled by a parent as part of Family Sharing and Screen Time. Kids would have the ability to override this to view the content of the photo, but only after receiving a warning that they may want to think twice about doing so.

As originally proposed, Communication Safety would have also notified parents when a child under the age of 13 chose to view or send a sexually explicit photo. This would only have happened after the child acknowledged a second warning, telling them that their parents would be notified. This was the only scenario under which any information about the photo would be sent out to anybody outside that particular conversation in Messages.
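As a rough illustration of that originally proposed decision flow, here is a hypothetical sketch. Only the age thresholds, the parent-enabled requirement, and the notify-under-13 step come from the description above; the types, function name, and parameters are invented for the example and are not Apple’s code.

```swift
// Possible outcomes for an incoming or outgoing photo in Messages (illustrative only).
enum CommunicationSafetyAction {
    case showNormally
    case blurWithWarning(notifyParentsIfViewed: Bool)
}

func actionForIncomingPhoto(isSexuallyExplicit: Bool,
                            userAge: Int,
                            featureEnabledByParent: Bool) -> CommunicationSafetyAction {
    // Applies only when a parent has enabled the feature via Family Sharing and
    // Screen Time, the user is under 18, and on-device ML flags the photo.
    guard featureEnabledByParent, userAge < 18, isSexuallyExplicit else {
        return .showNormally
    }
    // As originally proposed: children under 13 who chose to view or send the photo
    // anyway, after a second warning, would trigger a notification to their parents.
    return .blurWithWarning(notifyParentsIfViewed: userAge < 13)
}
```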

Still, we couldn’t blame people for confusing the two features, as they sound so similar at a glance. Many misunderstood what was happening, believing that Apple would be scanning all the photos on their devices, and monitoring their messaging conversations for CSAM.

As Apple’s Senior VP of Software Engineering, Craig Federighi, candidly admitted, Apple blew it in the way that it handled the announcement, creating “a recipe for this kind of confusion.”

Gone But Not Forgotten

In early September, Apple announced it would be delaying its plans to implement all these features so that it could “take additional time […] to collect input and make improvements.”

At the time, Apple didn’t offer a timeframe for when any of the features would arrive, nor did it say what changes it might consider making, but it clearly wanted to make sure it wasn’t seen to be making these policy decisions in a vacuum.

It’s not the first time Apple has made this kind of error, either, and sadly, it probably won’t be the last.

When Apple unveiled its AirTags in the spring, it faced criticisms from domestic safety advocates, who feared that Apple wasn’t doing enough to prevent its tracking tags from being used by stalkers. When Apple was asked whether it had consulted any organizations that specialize in domestic violence when developing AirTags, company executives declined to comment.

So, it’s probably not surprising that Apple similarly developed these CSAM Detection and Communication Safety initiatives without much outside consultation. Major advocacy groups like the Electronic Frontier Foundation (EFF) were clearly blindsided when Apple made its announcement, and the company could probably have avoided much of the controversy if it had gotten organizations like these on its side in the first place.

Nonetheless, even though Apple should have arguably listened to these experts in the first place, it’s shown a willingness to make course corrections when necessary. In the case of AirTags, it reduced the time it takes to sound an audible alert when an AirTag is left behind by its owner, and just this week released an app to let Android users scan for unknown AirTags nearby.

Apple also turned the key on Communication Safety in iOS 15.2, with one small but important change. After child safety advocates suggested that parental notifications could put children at risk of parental abuse in some homes, Apple nixed that aspect of the feature.

As it’s now been implemented, Communication Safety will treat all users under 18 years of age in the same way. Potentially explicit photos will be blurred, and kids will be warned that they probably don’t want to be looking at them. They will also be offered guidance on how to get help from a trusted adult if they’re receiving photos that are making them uncomfortable.

No matter what the youngster chooses to do with the photo, the system will not send out any notifications to parents or anybody else. In this sense, the feature will work in much the same way as other Screen Time restrictions.
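For comparison with the earlier sketch of the original proposal, here is the same kind of hypothetical flow adjusted to the iOS 15.2 behavior described above: the blur and warning apply to everyone under 18, and no notification is ever sent. Again, these names are illustrative, not Apple’s.

```swift
// Possible outcomes under the shipped iOS 15.2 behavior (illustrative only).
enum ShippedSafetyAction {
    case showNormally
    case blurWithWarning   // the child can still choose to view the photo after the warning
}

func shippedAction(isSexuallyExplicit: Bool,
                   userAge: Int,
                   featureEnabledByParent: Bool) -> ShippedSafetyAction {
    guard featureEnabledByParent, userAge < 18, isSexuallyExplicit else {
        return .showNormally
    }
    // Unlike the original proposal, there is no parental-notification branch at all.
    return .blurWithWarning
}
```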

Now that Communication Safety is out in iOS 15.2, Apple has updated its Expanded Protections for Children page, removing all references to the CSAM Detection feature.

In fact, a search of Apple’s website reveals no mention of “CSAM Detection” on any public-facing pages. The white papers that Apple released back in August are the only indications that the feature ever existed in the first place. However, while these can be turned up in a targeted Google search, they’re basically just historical documents from Apple’s archives, much like you can still find platform security guides for now-defunct versions of iOS.

This strongly suggests that Apple has abandoned its plans for CSAM Detection entirely — at least for now. While it’s not impossible that the initiative could rear its head again at some point in the future, it’s highly unlikely that Apple would make it vanish this way if it were still actively working on it.

Not only did the idea continue to face controversy, but it’s likely that Apple was never able to find a middle ground that would satisfy privacy rights groups and get them on board. For a company that prides itself on privacy being a “fundamental human right,” it was much more politic for Apple to simply make the whole idea go away and hope that we’ll forget it ever suggested it in the first place.
