Apple’s approach to finding the worst predators is both a blessing and a problem. The company will scan for pictures that should never exist and alert law enforcement. That is good, and others such as Dropbox, Google, and Microsoft have done it before, but they did it entirely in the cloud. Now the scanning is moving onto the device, and therein lies the problem. It feels like the first step in a long journey toward monitoring our phones for purposes far less noble, or toward exposing them to malicious third-party attacks through text messages. I can’t argue with the initial goal, but Pandora’s box is infamous.

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material (CSAM) to law enforcement in a way it says will preserve user privacy. Apple told TechCrunch that CSAM detection is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Most cloud services, including Dropbox, Google, and Microsoft, already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM.
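To make the idea of detecting "known" material concrete, here is a minimal, purely illustrative Swift sketch of matching an image against a database of fingerprints of already-identified images. The type name, the hash set, and the use of SHA-256 are all my assumptions for illustration; real systems, including Apple's, use perceptual hashes (which survive resizing and re-encoding) and privacy-preserving matching protocols rather than a plain lookup like this.

```swift
import Foundation
import CryptoKit

// Illustrative sketch only. Real CSAM-detection systems rely on perceptual
// hashing and private matching against a vendor-supplied database, not an
// exact cryptographic hash and a local Set as shown here.
struct KnownImageMatcher {
    // Hypothetical set of fingerprints of known illegal images,
    // supplied by a clearinghouse rather than computed locally.
    let knownHashes: Set<String>

    // Fingerprint the raw image bytes; SHA-256 stands in for a perceptual hash.
    func fingerprint(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // Returns true if the image's fingerprint appears in the known-bad set.
    func matches(_ imageData: Data) -> Bool {
        knownHashes.contains(fingerprint(of: imageData))
    }
}
```

The key point the sketch captures is that nothing about the image's content is interpreted on the fly: detection only fires when a file matches material that has already been identified and catalogued, which is what distinguishes this approach from general content scanning.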