Google has stepped in to clarify that the newly introduced Android System SafetyCore app does not perform any client-side scanning of content.
“Android provides many on-device protections that safeguard users against threats like malware, messaging spam and abuse protections, and phone scam protections, while preserving user privacy and keeping users in control of their data,” a spokesperson for the company told The Hacker News when reached for comment.
“SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users are in control over SafetyCore and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
Google first introduced SafetyCore (package name “com.google.android.safetycore”) in October 2024 as part of a set of security measures designed to combat scams and other content deemed sensitive in the Google Messages app for Android.
The feature, which requires 2GB of RAM, is rolling out to all devices running Android 9 and later, as well as those running Android Go, a lightweight version of the operating system for entry-level smartphones.
Client-side scanning (CSS), by contrast, is seen as an alternative approach that enables on-device analysis of data instead of weakening encryption or adding backdoors to existing systems. However, the method has raised serious privacy concerns, as it’s ripe for abuse: a service provider could be compelled to search for material beyond the initially agreed-upon scope.
In some ways, Google’s Sensitive Content Warnings feature for the Messages app is similar to Apple’s Communication Safety feature in iMessage, which employs on-device machine learning to analyze photo and video attachments and determine whether they appear to contain nudity.
The maintainers of the GrapheneOS operating system, in a post shared on X, reiterated that SafetyCore doesn’t perform client-side scanning and is mainly designed to offer on-device machine-learning models that other applications can use to classify content as spam, scams, or malware.
“Classifying things like this is not the same as trying to detect illegal content and reporting it to a service,” GrapheneOS said. “That would greatly violate people’s privacy in multiple ways and false positives would still exist. It’s not what this is and it’s not usable for it.”
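What GrapheneOS describes is a familiar Android pattern: a model shipped to the device, invoked locally by an app, with results that never leave the phone. SafetyCore’s own API is not publicly documented, so the Kotlin sketch below is purely illustrative; it uses TensorFlow Lite (a real on-device inference library) and hypothetical names (`OnDeviceClassifier`, `ContentLabel`) to show how on-device classification differs from client-side scanning that reports findings to a server.

```kotlin
// Hypothetical sketch of on-device classification, NOT SafetyCore's
// actual (undocumented) API. Uses TensorFlow Lite to illustrate the
// pattern GrapheneOS describes: a local model classifies content, and
// the result stays on the device.

import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Assumed label set for an illustrative message classifier.
enum class ContentLabel { SAFE, SPAM, SCAM, MALWARE }

class OnDeviceClassifier(modelFile: File) {
    // The model is loaded from local storage; no network access is needed.
    private val interpreter = Interpreter(modelFile)

    // `features` stands in for whatever preprocessed representation the
    // model expects (e.g., token IDs derived from message text).
    fun classify(features: FloatArray): ContentLabel {
        val input = ByteBuffer.allocateDirect(4 * features.size)
            .order(ByteOrder.nativeOrder())
        features.forEach { input.putFloat(it) }

        // One score per label; the scores are only used locally, e.g.
        // to show the user a warning. Nothing is reported to a service.
        val scores = Array(1) { FloatArray(ContentLabel.values().size) }
        interpreter.run(input, scores)

        val best = scores[0].withIndex().maxByOrNull { it.value }!!.index
        return ContentLabel.values()[best]
    }
}
```

The design point this illustrates is the one GrapheneOS makes: classification output that stays on the device and merely informs the user is categorically different from a scanner that detects targeted content and reports matches to an external party.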