Carmen Dal Monte
A minority is compelled to think

Antisemitism in the Machine

Generative artificial intelligence is rapidly transforming how we communicate, learn, and produce content. But what happens when these models—trained on massive datasets scraped from the internet—absorb and reproduce both explicit and coded forms of antisemitism? The issue isn’t just technical; it’s cultural, ethical, and deeply political.

Generative models—whether they create images, text, or video—learn from the associations, visuals, and language patterns they find online. When antisemitism is present in the training data, even in the form of ironic memes, conspiracy narratives, or persistent stereotypes, the AI tends to absorb it as part of its “understanding of the world”—and may replicate it uncritically.
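The mechanism is easy to make concrete. The Python sketch below is a deliberately toy illustration, not how any production model is trained: it counts word co-occurrences in a three-sentence corpus, using the placeholder token "group_x" instead of any real group name. Because the conspiracy pattern appears more often, the model's statistics associate the group with "controls" more strongly than with anything neutral.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: "group_x" is a placeholder, not real data. The point is
# mechanical: a statistical model has no notion of truth, only of
# frequency and co-occurrence.
corpus = [
    "group_x controls the banks",       # conspiracy narrative
    "group_x controls the media",       # conspiracy narrative
    "group_x celebrates its holidays",  # neutral sentence
]

# Count how often each pair of words appears in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        cooccurrence[(a, b)] += 1

# The association "group_x" <-> "controls" is now twice as strong as
# "group_x" <-> "celebrates", purely because the hateful pattern is
# more frequent in the data, not because it is true.
for pair, count in cooccurrence.most_common():
    if "group_x" in pair:
        print(pair, count)
```

Scaled up from three sentences to billions, this is the sense in which a model "absorbs" a narrative: repetition in the data becomes association in the system.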

Today’s antisemitism often hides in plain sight. It’s coded, ironic, wrapped in pop culture references or ambiguous symbols. At present, AI systems struggle to distinguish context or intent. They can reproduce, remix, or even “generate” antisemitic content without recognizing its cultural weight or historical significance.

At the same time, there’s a risk in the opposite direction: in trying to avoid controversy, some systems may erase references to Jewish identity altogether, flagging words like “Jewish” as sensitive or problematic. This overcorrection may result in the disappearance of Jewish culture and history from digital representation.
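This failure mode, too, can be sketched in a few lines. The filter below is hypothetical (no vendor's actual moderation system is this crude, and the term list is invented for illustration), but it shows the structural problem: a context-free blocklist cannot tell a hateful sentence from a museum announcement, so it suppresses both.

```python
# A minimal sketch of the overcorrection failure mode: a hypothetical
# keyword filter that treats an identity term itself as "sensitive".
SENSITIVE_TERMS = {"jewish"}  # naive blocklist, no notion of context


def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    return any(term in SENSITIVE_TERMS for term in text.lower().split())


# An innocuous sentence and a hateful one are blocked alike:
print(naive_filter("Jewish history museum opens in Bologna"))   # True
print(naive_filter("a hateful sentence targeting Jewish people"))  # True
```

A filter like this does not remove antisemitism from the output; it removes Jews.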

This isn’t about malevolent intent—it’s about systemic bias embedded in data. If the web includes antisemitic narratives, then AI trained on that data may reflect and even amplify those narratives, regardless of how subtle they are. That’s why we urgently need a broader conversation: not just about moderating online hate, but about which data we are feeding into the technologies that increasingly shape what we see, read, and believe.

What’s needed is an alliance between developers, historians, antisemitism scholars, educators, and users—one that can help build AI systems that are not only powerful, but culturally aware and ethically responsible.

An algorithm that replicates prejudice isn’t making a mistake. It’s simply giving back what we gave it.

About the Author
Carmen Dal Monte (PhD) is an Italian entrepreneur and Jewish community leader. Founder and CEO of an AI startup, she is also president of the Jewish Reform Community Or 'Ammim in Bologna.