The horrors of October 7th and the war that ensued did not occur in an ideal information ecosystem. Unfortunately for all of us, this war is being waged at a time when X, formerly Twitter, has become a chaotic information platform stripped bare of safety mechanisms and content controls while continuing to serve as an important source of news around the world.
While other social media companies have made concerted efforts to remove posts inciting terror, X has barely intervened at all, even when it comes to terror-related content.
Two months into Israel’s war with Hamas, X has become a central platform for disseminating anti-Israeli and antisemitic messages worldwide.
Elon Musk, who purchased the platform last year, met yesterday with key leaders in Israel, including the Prime Minister. The lack of content monitoring and X’s transformation into a platform for spreading antisemitic messages have prompted major advertisers in the United States to announce that they would stop advertising on it. If the advertisers can’t stomach it, why can the State of Israel?
Our leaders are calling for an end to disinformation in the war while shaking hands with the man behind so much of it. Such meetings with Musk won’t make X a more friendly platform for the State of Israel. The harm he has generated is baked into the platform through features that encourage the spread of harmful messages. Banning the expression “from the river to the sea,” as Musk did earlier this month, will not change this.
These frightening developments at X are the result of three processes that have taken place on Musk’s watch. First is the almost complete destruction of X’s content monitoring operation, which has allowed for the proliferation of blatantly antisemitic, anti-Israeli, pro-Russian, and neo-Nazi content.
A report published by X in October reveals that the company employs only two content reviewers responsible for eight million potential Hebrew-speaking users. That number has not increased during the war, despite a 1,000% increase in requests to remove content inciting terror, according to a report from the Israel State Attorney’s Cyber Unit.
That report notes that Meta (Facebook) responded to around 90% of these requests, while TikTok addressed 85% of inquiries. It shared no data on X (Twitter), stating instead that the unit is working to improve contact with the company and, in turn, the company’s responsiveness to requests.
For instance, when Roger Waters claimed on X that Hamas did not kill Israeli civilians but rather the IDF shot them all, he echoed The Grayzone, an account owned by Max Blumenthal, an avowed Israel hater who equates Israelis with Nazis and ISIS. X has become a key platform for the publication and promotion of propaganda, while financially rewarding those behind it.
The second process involves the emergence of new accounts that have overtaken those of the established media, which are still committed to fact-checking.
Earlier this year, Musk dismantled the institution of “Verified Accounts,” allowing anyone to purchase the “blue checkmark,” thus lending rumor-mongers perceived legitimacy and hindering the dissemination of news from reliable sources. Musk also introduced an ad-reward program tied to exposure, incentivizing creators to generate shocking, viral content, even if it’s fake, and removed headlines and article summaries from links attached to tweets, making it even harder to identify reliable sources.
A study from the University of Washington reveals that a small group of seven accounts, mostly unknown a year ago, gained hundreds of millions of views daily, significantly influencing the discourse around the war. Musk has directly interacted with six of these accounts, drawing more public attention to them and exposing them to his 162 million followers.
Three of these accounts belong to a young British man who posts antisemitic content, an American soldier in Georgia who sources his content from dubious pro-Russian news sites, and an extreme right-wing group in Poland. Musk recommended two of them (@sentdefender, @warmonitors), a recommendation he later deleted when it became clear they were spreading disinformation, though he still follows both accounts.
These accounts use X in a way that disorients its users, posting a steady stream of new information with extreme graphics and images that are, in theory, forbidden by X but in practice are not removed. Their posts do not include links to additional information, and they achieve even greater dissemination as a result, because Musk has reduced the visibility of tweets containing links.
The third process involves X’s recommendation algorithm, whereby it suggests accounts full of disinformation or recommends tweets with the expression “Hitler was right” in English to Israeli users.
X’s conduct in the war has been one of the main factors reinforcing the connection between the information crisis, the weakening of nation-states, and extreme identity politics. A direct line can be drawn between this and Musk’s dismissive response to the European Union’s request that X meet its obligations under European legislation; his threat to connect Gaza to the internet through Starlink, thereby harming Israeli interests in the war; and his use of over 160 million followers to amplify messages and accounts carrying false and antisemitic information. The current war has turned X into a human behavior experiment, and we are the lab rats. Those who sought out a photo-op with Musk yesterday must not be fooled into believing otherwise.