What followed — outraged tweets, critical headlines and an outcry for more information — put the tech giant on the defensive just weeks ahead of the next iPhone launch, its biggest event of the year. It was a rare misstep for a company known for its meticulous PR efforts.
Many child safety and security experts praised the intent, recognizing the ethical responsibility a company has for the products and services it creates. But they also called the efforts “deeply concerning,” largely because part of Apple’s checking process for child abuse images runs directly on user devices.
“When people hear that Apple is ‘searching’ for child sexual abuse materials (CSAM) on end user phones they immediately jump to thoughts of Big Brother and ‘1984,’” said Ryan O’Leary, research manager of privacy and legal technology at market research firm IDC. “This is a very nuanced issue and one that on its face can seem quite scary or intrusive. It is very easy for this to be sensationalized from a layperson’s perspective.”
Apple declined to comment for this story.
How Apple’s tool works
“There is rightful concern from privacy advocates that this is a very slippery slope and basically the only thing stopping Apple [from expanding beyond searching for CSAM images] is their word,” O’Leary said. “Apple realizes this and is trying to put some extra transparency around this new feature set to try and control the narrative.”
The messaging, however, comes at a time of increased distrust and scrutiny of tech firms, coupled with hypersensitivity around surveillance or perceived surveillance. “The messaging needs to be airtight,” O’Leary said.
The lack of detail on how the full operation would work also contributed to the muddled messaging. When asked on one press call about the human review team, for example, Apple said it wasn’t yet sure what that would entail because it needs to learn what resources are required based on a testing phase.
Mary Pulido, executive director of the New York Society for the Prevention of Cruelty to Children (NYSPCC), called these technologies important, noting they can “help the police bring traffickers to justice, accelerate victim identification, and reduce investigation time.” She’s also in the camp that believes “protecting children from any potential harm trumps privacy concerns, hands down.”
Where Apple went wrong
While no one is disputing Apple’s motivation, Elizabeth Renieris, professor at the University of Notre Dame’s IBM Technology Ethics Lab, said the timing was “a bit odd” given all of the privacy-focused announcements at its Worldwide Developers Conference in June. Apple declined to share why the new tool was not presented at WWDC.
Renieris also said Apple erred by announcing other seemingly related though fundamentally different updates together.
The new iMessage communication safety feature, which must be turned on in Family Sharing and uses on-device processing, will warn users under age 18 when they’re about to send or receive a message with an explicit image. Parents of children under age 13 can additionally turn on a notification feature for when a child is about to send or receive a nude image. Apple said it will not have access to the messages, though people still expressed concerns that Apple might someday gain such access.
Threading the needle of protecting user privacy and ensuring the safety of children is difficult, to say the least. In trying to bolster protections for minors, Apple may have also reminded the public about the potential control it can wield over its own products long after they’re sold.
“Announcements like this dilute the company’s reputation for privacy but also raise a host of broader concerns,” Renieris said.