How X Is Suing Its Way Out of Accountability


On July 19, Bloomberg News reported what many others have been saying for some time: Twitter (now called X) was losing advertisers, in part because of its lax enforcement against hate speech. The story quoted heavily from Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms. Hood's work has highlighted several instances in which Twitter allowed violent, hateful, or misleading content to remain on the platform.

The next day, X announced it was filing a lawsuit against the nonprofit and the European Climate Foundation, alleging that their misuse of Twitter data had cost the company advertising revenue. In the suit, X claims that the data the CCDH used in its research was obtained with login credentials belonging to the European Climate Foundation, which had an account with Brandwatch, a third-party social listening tool licensed to access Twitter's data through its API, and that the CCDH was not authorized to access that data. The suit also accuses the CCDH of scraping Twitter's platform without authorization, in violation of the company's terms of service.

X did not respond to WIRED’s request for comment.

“The Center for Countering Digital Hate’s research shows that hate and disinformation is spreading like wildfire on the platform under Musk’s ownership, and this lawsuit is a direct attempt to silence those efforts,” says Imran Ahmed, CEO of the CCDH.

Experts who spoke to WIRED see the legal action as the latest move by social media platforms to restrict access to their data for researchers and civil society organizations that seek to hold them accountable. “We’re talking about access not just for researchers or academics, but it could also potentially be extended to advocates and journalists and even policymakers,” says Liz Woolery, digital policy lead at PEN America, a nonprofit that advocates for free expression. “Without that kind of access, it is really difficult for us to engage in the research necessary to better understand the scope and scale of the problem that we face, of how social media is affecting our daily life, and make it better.”

In 2021, Meta blocked researchers at New York University’s Ad Observatory from collecting data about political ads and Covid-19 misinformation. Last year, the company said it would wind down its monitoring tool CrowdTangle, which has been instrumental in allowing researchers and journalists to monitor Facebook. Both Meta and Twitter are suing Bright Data, an Israeli data collection firm, for scraping their sites. (Meta had previously contracted Bright Data to scrape other sites on its behalf.) Musk announced in March that the company would begin charging $42,000 per month for its API, pricing out the vast majority of researchers and academics who have used it to study issues like disinformation and hate speech; the API has supported more than 17,000 academic studies.

There are reasons platforms don’t want researchers and advocates poking around and exposing their failings. For years, advocacy organizations have used examples of violative content on social platforms to pressure advertisers to withdraw their support, forcing companies to address problems or change their policies. Without the underlying research into hate speech, disinformation, and other harmful content on social media, these organizations would have little leverage to force companies to change. In 2020, advertisers including Starbucks, Patagonia, and Honda left Facebook after the platform was found to have a lax approach to moderating misinformation, particularly posts by former US president Donald Trump, costing the company millions.
