On Saturday, the official Israel account on X posted a picture of what looks like a child’s bedroom with blood covering the floor. “This could be your child’s bedroom. No words,” the post reads. There is no suggestion that the picture is fake, and no notes appear publicly on the post. In the Community Notes backend, however, viewed by WIRED, multiple contributors are engaging in a conspiracy-fueled back-and-forth.
“Deoxygenated blood has a shade of dark red, therefore this is staged,” one contributor writes. “Post with manipulative intent that tries to create an emotional reaction in the reader by relating words and pictures in a decontextualized way,” another writes.
“There is no evidence that this picture is staged. A Wikipedia article about blood is not evidence that this is staged,” another contributor writes.
“There is no evidence this photo is from the October 7th attacks,” another claims.
Exchanges like these raise questions about how X approves contributors for the program, but that process, along with precisely what factors are weighed before each note is approved, remains unknown. X’s Benarroch did not respond to questions about how contributors are chosen.
According to every contributor WIRED spoke to, those approved for the system are given no training, and the only restriction placed on new contributors is that they cannot write notes until they have first rated a number of others. One contributor claims this approval process can take fewer than six hours.
For a note to be attached to a post publicly, it needs to be rated “helpful” by a certain number of contributors, though how many is unclear. X describes “helpful” notes as ones rated that way by “enough contributors from different perspectives.” Benarroch did not say how X evaluates a user’s political leanings.
“I don’t see any mechanism by which they can know what perspective people hold,” Anna, a UK-based former journalist whom X invited to become a Community Notes contributor, tells WIRED. “I really don’t see how that would work, to be honest, because new topics come up that one could not possibly have been rated on.” Anna asked to be identified only by her first name for fear of backlash from other X users.
For all the notes that do become public, there are many more that remain unseen, either because they are deemed unhelpful or, in the majority of cases reviewed by WIRED, because they simply didn’t get enough votes from other contributors. One contributor tells WIRED that 503 notes they had rated in the past week remained in limbo because not enough people had voted on them.
“I think one of the issues with Community Notes at its core, it’s not really scalable for the amount of media that’s being consumed or posted in any given day,” the contributor, who is known online as Investigator515, tells WIRED. They asked to be identified only by their handle for fear of damage to their professional reputation.
All of the contributors who spoke to WIRED feel that Community Notes is not up to the task of policing the platform for misinformation, and none of them believes the program will improve in the coming months if it remains in its current form.
“It’s much harder to deal with misinformation when there isn’t the top-down moderation that Twitter used to have, because accounts willfully spreading misinformation would get suspended before they could really do a lot of harm,” the longtime contributor says. “So a reliance on Community Notes is not good. It’s not a replacement for proper content moderation.”