Can AI Decode the Language of Modern Antisemitism?
Online antisemitism is a slippery entity. It no longer reveals itself through blatant slurs or overt hate speech but hides behind cryptic symbols, seemingly harmless memes, and coded language that only the trained eye can decipher. Think of the notorious “triple parentheses” (((echo))), used to mark Jewish names, or references to “global bankers,” which insinuate antisemitic stereotypes without ever stating them explicitly. It’s a sophisticated linguistic game, and AI algorithms—advanced as they are—always seem one step behind in deciphering it.
In my previous article, I highlighted how digital antisemitism is in constant flux, perversely adapting itself to the moderation rules of social media platforms. But the question remains: Can AI, with all its computational power and Natural Language Processing (NLP) models, truly learn to recognize these codes? Or are we doomed to a never-ending chase, in which hate reinvents itself faster than technology can detect it?
Take the case of the “triple parentheses.” Born in the darkest corners of the web, among far-right forums and digital subcultures, it became a silent sign of recognition: devoid of offensive words, yet unmistakable to those who know its meaning. For a human with even minimal contextual knowledge, it’s a clear signal. But for an algorithm? It’s just a string of characters, meaningless unless specifically programmed to recognize it. Even the most advanced NLP models, like BERT and its successors, can analyze massive amounts of text and detect patterns, but without targeted training and clear cultural context, they remain ineffective.
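To make the idea concrete, here is a minimal sketch in Python (entirely illustrative; the pattern and function name are my own assumptions, not any platform’s actual detector) of what a purely rule-based approach looks like. It catches the surface form of the triple parentheses but has no way to tell a hostile post from the same characters used in solidarity or by coincidence:

```python
import re

# Matches three or more opening parentheses, a name-like span, and matching closers.
ECHO_PATTERN = re.compile(r"\({3,}\s*([\w\s]+?)\s*\){3,}")

def find_echo_markers(text: str) -> list[str]:
    """Return any spans wrapped in (((...))) markers."""
    return [m.group(1) for m in ECHO_PATTERN.finditer(text)]

if __name__ == "__main__":
    samples = [
        "Look who funds this: (((Goldberg))).",       # coded, hostile usage
        "Reclaiming the symbol: (((my own name))).",  # same surface form, opposite intent
    ]
    for s in samples:
        print(s, "->", find_echo_markers(s))
```

Both posts trigger the same match: the rule sees identical strings where a human reader sees opposite intentions. That gap between surface form and meaning is exactly where pattern matching ends and contextual understanding has to begin.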
The problem isn’t just technological—it’s conceptual. Modern antisemitism thrives on ambiguity. A meme featuring Pepe the Frog—originally an innocent cartoon but later co-opted by the alt-right—or a phrase like “who controls the media?” may be harmless in some contexts and toxic in others. AI excels at finding statistical correlations but struggles with cultural nuance and hidden intent. Training a model to recognize these signals requires datasets curated by human experts who can teach the machine what to look for. But who decides what to include in these datasets? And how can we keep pace with a language that constantly evolves?
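What such curation might look like in practice can be suggested with a small, purely hypothetical example (the texts, context notes, and labels below are my own assumptions, not drawn from any real dataset): the same phrase receives different labels depending on its surroundings, and that judgment is precisely what human annotators have to encode for the machine.

```python
# Illustrative only: a context-aware labeling scheme for training data.
curated_examples = [
    {
        "text": "Who controls the media?",
        "context": "reply in a thread about newsroom ownership statistics",
        "label": "benign",
    },
    {
        "text": "Who controls the media?",
        "context": "posted alongside (((echo))) markers and 'global bankers' tropes",
        "label": "coded_antisemitism",
    },
]

# A classifier fine-tuned on pairs like these sees text and context together,
# never the phrase in isolation.
for example in curated_examples:
    print(example["label"], "<-", example["text"], "|", example["context"])
```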
AI alone isn’t enough. Platforms like X (formerly Twitter) use automated moderation systems to filter problematic content, but with mixed results. While a swastika or explicit slur is quickly flagged, hate disguised as irony or buried in wordplay often slips through. A study by the Anti-Defamation League (ADL) showed that many automated moderation tools fail to detect up to 40% of coded antisemitic content. Human intervention is still needed to bridge that gap and guide AI in understanding the problem. But the volume of data is overwhelming: billions of posts, images, and comments appear every day, and that sheer scale makes the race against time ever harder to win.
Another recent example is the use of the watermelon as a political symbol. Seemingly innocuous, it echoes the colors of the Palestinian flag—red, green, white, and black—and carries a history of creative resistance. After 1967, it was used to circumvent the Israeli ban on displaying the Palestinian flag. Today, it appears on X and TikTok, often as an emoji, to express solidarity with the Palestinian cause. But when paired with conspiracy theories about “hidden powers” or antisemitic stereotypes, its meaning shifts. Is it just a fruit, a political statement, or a coded message? Algorithms, lacking human intuition for context and ambiguity, struggle to interpret it—once again revealing how elusive digital antisemitism can be.
Is there hope? Perhaps. The newest AI models, developed by companies like xAI and other cutting-edge labs, are beginning to integrate more complex signals—not just text, but also images, emojis, and user behavior patterns (who posts, who shares, who amplifies). In theory, they could learn to connect symbols like (((echo))) or a watermelon to patterns of hate, if fed with sufficiently contextualized data. But the limitation remains: digital antisemitism is a human creation, and understanding it requires a sensitivity that no machine yet fully possesses.
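As a rough sketch of that idea (every name, feature, and weight below is an illustrative assumption, not a description of any company’s actual system), signal fusion might combine a text score with symbol counts and behavioral features into a single risk score that gets surfaced to a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    text_toxicity: float       # 0..1, from an upstream text classifier (assumed to exist)
    coded_symbols: int         # count of known coded markers, e.g. (((...))) or certain emojis
    conspiracy_terms: int      # co-occurring phrases such as "hidden powers"
    amplification_rate: float  # normalized 0..1 measure of how fast the post is reshared

def risk_score(s: PostSignals) -> float:
    """Weighted fusion of weak signals; the weights are placeholders, not tuned values."""
    score = (
        0.5 * s.text_toxicity
        + 0.2 * min(s.coded_symbols, 3) / 3
        + 0.2 * min(s.conspiracy_terms, 3) / 3
        + 0.1 * s.amplification_rate
    )
    return min(score, 1.0)

if __name__ == "__main__":
    ambiguous_post = PostSignals(text_toxicity=0.1, coded_symbols=1,
                                 conspiracy_terms=2, amplification_rate=0.8)
    # Prints roughly 0.33; whether that crosses a review threshold is a policy choice.
    print(f"risk: {risk_score(ambiguous_post):.2f}")
```

The point of the sketch is not the numbers but the shape of the approach: no single signal condemns a post, and the output is a prompt for human judgment rather than an automatic verdict.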
So perhaps the real question isn’t whether AI can decode these codes, but whether we’re willing to give it the tools and time it needs to do so. Online antisemitism is not just a technological challenge, but a cultural and social one. Until society confronts the root of the issue, AI will remain an imperfect ally in a fight that keeps evolving. And the beast, for now, will keep running.