The New Era of Social Media Looks as Bad for Privacy as the Last One


When Elon Musk took over Twitter in October 2022, experts warned that his proposed changes—including less content moderation and a subscription-based verification system—would lead to an exodus of users and advertisers. A year later, those predictions have largely been borne out. Advertising revenue on the platform has declined 55 percent since Musk’s takeover, and the number of daily active users has fallen from 140 million to 121 million over the same period, according to third-party analyses.

As users moved to other online spaces, the past year could have been a moment for rival social platforms to change the way they collect and protect user data. “Unfortunately, it just feels like no matter what their interest or cultural tone is from the outset of founding their company, it’s just not enough to move an entire field further from a maximalist, voracious approach to our data,” says Jenna Ruddock, policy counsel at Free Press, a nonprofit media watchdog organization, and a lead author of a new report examining Bluesky, Mastodon, and Meta’s Threads, all of which have jockeyed to fill the void left by Twitter, now renamed X.

Companies like Google, X, and Meta collect vast amounts of user data, in part to better understand and improve their platforms but largely to sell targeted advertising. But collecting sensitive information about users’ race, ethnicity, sexuality, or other identifiers can put people at risk. For instance, earlier this year, Meta and the US Department of Justice reached a settlement after it was found that the company’s algorithm allowed advertisers to exclude certain racial groups from seeing ads for things like housing, jobs, and financial services. In 2019, the company was slapped with a $5 billion fine—one of the largest in history—after a Federal Trade Commission probe, triggered by an investigation into data shared with British consulting firm Cambridge Analytica, found multiple instances of the company failing to protect user data. (Meta has since made changes to some of these ad targeting options.)

“There’s a very strong correlation between the data that’s collected about us and then the automated tools that platforms and other services use, which often produce discriminatory results,” says Nora Benavidez, director of digital justice and civil rights at Free Press. “And when that happens, there’s really no recourse other than litigation.”

Even for users who want to opt out of ravenous data collection, privacy policies remain complicated and vague, and many users don’t have the time or the familiarity with legalese to parse them. At best, says Benavidez, users can figure out what data won’t be collected, “but either way, the onus is really on the users to sift through policies, trying to make sense of what’s really happening with their data,” she says. “I worry these corporate practices and policies are nefarious enough and befuddling enough that people really don’t understand the stakes.”

Mastodon, according to the report, offers users the most protection, because it doesn’t collect sensitive personal information or geolocation data and doesn’t track user activity off the platform, at least not on the platform’s default server. Other servers—or “instances,” in Mastodon parlance—can set their own privacy and moderation policies. Bluesky, founded by Twitter cofounder and former CEO Jack Dorsey, also doesn’t collect sensitive data but does track user behavior across other platforms. But there are no laws that require platforms like Bluesky and Mastodon to keep their privacy policies this way. “Folks can sign on with particular privacy expectations, feeling satisfied by a privacy policy or disclosures,” says Ruddock. “And that can still change over time. And I think that’s what we’re going to see with some of these emerging platforms.”


