Carmen Dal Monte
A minority is compelled to think

Online antisemitism is a beast

Online antisemitism is a constantly evolving beast. From social media posts to viral memes, hate rhetoric spreads at an astonishing speed, adapting to new digital languages and evading automatic controls. But can artificial intelligence really be the answer to stopping it? It’s a question I often ask myself, especially when I see certain content circulating freely on platforms while others are removed without any apparent logic.

For some time now, major platforms have relied on sophisticated algorithms to detect offensive content, removing thousands of hate-filled posts in real time. However, reality is far more complex than it seems. Online antisemites are becoming increasingly cunning, using cryptic symbols and wordplay to evade automatic detection. And this is where technology’s limitations emerge: AI, no matter how advanced, struggles to distinguish between an ironic post and a veiled insult, between a legitimate denunciation and a conspiracy theory. The result? Sometimes harmless content gets censored, while more insidious material continues to circulate unchecked. And anyone who spends time on social media knows this all too well.
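To make the evasion problem concrete, here is a minimal sketch, assuming nothing about any real platform’s pipeline: a naive blocklist filter with some text normalization. The blocklist entries and the character map are invented placeholders; the point is only that mechanical tricks like leetspeak can be folded away, while coded dog whistles (such as the triple parentheses) sail straight through.

```python
# A minimal sketch (not any platform's actual pipeline) of why naive
# keyword filtering struggles against obfuscated hate speech.
import re
import unicodedata

# Hypothetical blocklist a simple filter might use (placeholder phrases).
BLOCKLIST = {"hate phrase", "coded slur"}

# Fold common leetspeak substitutions back into plain letters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Collapse homoglyphs, leetspeak and padding into plain lowercase text."""
    text = unicodedata.normalize("NFKD", text)
    text = text.translate(LEET_MAP).lower()
    text = re.sub(r"[^a-z ]", "", text)      # drop punctuation used as padding
    return re.sub(r"\s+", " ", text).strip()

def naive_filter(post: str) -> bool:
    """Return True if the normalized post contains a blocklisted phrase."""
    clean = normalize(post)
    return any(term in clean for term in BLOCKLIST)

if __name__ == "__main__":
    print(naive_filter("h4te phr@se"))            # True  -- obfuscation caught by normalization
    print(naive_filter("the usual (((hints)))"))  # False -- coded dog whistles pass untouched
```

Real moderation systems are of course far more elaborate than this, but the asymmetry stays the same: spelling tricks are easy to undo, shared cultural codes are not.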

But the problem isn’t just technological. The datasets used to train these models are often incomplete and outdated, especially for languages other than English. Then there’s the issue of disinformation, a fertile ground for spreading antisemitism. Here, some AI tools attempt to combat the problem by analyzing sources and flagging misleading content. But can they really replace human fact-checking? Probably not. Verifying information requires critical thinking and cultural awareness that machines, at least for now, do not possess. Anyone who has ever had to explain why certain conspiracy theories about Israel are absurd knows how difficult it is—now imagine entrusting this task to an algorithm.

Beyond detecting hate speech, AI is also used to counter antisemitic disinformation, which often takes the form of conspiracy theories or historical revisionism. Natural Language Processing (NLP) models, for instance, analyze textual content to detect falsehoods and point users toward reliable sources. However, these tools’ effectiveness depends on the quality of the data they are trained on. The most advanced models are increasingly precise in identifying fake news, but their understanding of context remains a weak point. No matter how sophisticated, algorithms lack the historical and cultural background needed to interpret certain linguistic nuances, making their intervention ineffective in some cases.
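For readers curious about what such a tool looks like in practice, here is a hedged sketch using the Hugging Face transformers text-classification pipeline. The model name is a placeholder, not a specific published classifier, and the labels and scores in the comments are purely illustrative; the sketch only shows where the context problem bites.

```python
# A hedged sketch of scoring posts with an NLP classifier.
# "some-org/hate-speech-model" is a placeholder, not a real published model.
from transformers import pipeline

classifier = pipeline("text-classification", model="some-org/hate-speech-model")

posts = [
    "Quoting a conspiracy theory in order to debunk it.",   # legitimate denunciation
    "Quoting the same conspiracy theory approvingly.",      # veiled endorsement
]

for post in posts:
    result = classifier(post)[0]   # e.g. {"label": "HATE", "score": 0.87} (illustrative)
    print(f"{result['label']:>8} {result['score']:.2f}  {post}")

# The surface text of the two posts is almost identical; without the historical
# and cultural context a human reader brings, the scores can come out nearly
# the same -- which is exactly the weakness described above.
```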

I am well aware of AI’s strengths and weaknesses because, for over twenty years, I have been analyzing, dismantling, and countering online disinformation. It’s not just a matter of code or computing power—recognizing and exposing toxic narratives requires experience, critical thinking, and a deep understanding of how hate infiltrates public discourse.

The fight against online hate, therefore, cannot be left solely to technology. Jewish organizations, along with institutions such as the ADL and the Simon Wiesenthal Center, collaborate with platforms to improve algorithms and raise public awareness. But we, as users, can also play our part. Reporting problematic content, avoiding the spread of dubious posts, and engaging with reliable sources can help algorithms recognize and reduce the visibility of online hate. In a world where artificial intelligence is becoming increasingly pervasive, real change does not depend solely on code but also on the choices made by those who navigate the web daily. Ranking algorithms rely on engagement signals: the more a piece of content is reported or ignored, the less visibility it gets. Likewise, commenting on and sharing verified information helps boost reliable content, training AI to recognize it as trustworthy.
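As a toy illustration of that feedback loop, here is a deliberately simplified scoring sketch. No platform publishes its real ranking formula; the weights and numbers below are invented, and the only point is that reports push content down while genuine engagement lifts it up.

```python
# A toy illustration (not any platform's real ranking formula) of how user
# signals can feed back into a content-visibility score.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    reports: int

def visibility_score(post: Post, report_weight: float = 5.0) -> float:
    """Toy score: shares boost reach, reports push it down (weights are invented)."""
    return post.shares - report_weight * post.reports

feed = [
    Post("verified historical source, widely shared", shares=120, reports=1),
    Post("conspiracy meme, heavily reported", shares=80, reports=40),
]

# Rank the feed by the toy score: the reported meme sinks, the sourced post rises.
for post in sorted(feed, key=visibility_score, reverse=True):
    print(f"{visibility_score(post):7.1f}  {post.text}")
```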

AI is a powerful tool, but without widespread digital education and a clear strategy, it risks becoming little more than a flawed filter. And like many others, I would love to believe it can truly work—but I know that’s not enough. A collective effort is needed.

And most importantly, let’s remember: don’t feed the trolls.

About the Author
Carmen Dal Monte (PhD) is an Italian entrepreneur and Jewish community leader. Founder and CEO of an AI startup, she is also president of the Jewish Reform Community Or 'Ammim in Bologna.