Lev Topor

AI-Generated Imagery of Hate

AI-Generated Image of the Antisemitic "Happy Merchant" Image (Source: X)

In recent years, artificial intelligence (AI) has revolutionized many fields, from healthcare to entertainment. However, alongside its positive contributions, AI has also been harnessed for more sinister purposes, particularly in the realm of online hate speech and antisemitism. One of the most concerning developments is the use of AI tools to create graphic illusions that embed antisemitic content, visible only when the viewer's eyes are de-focused or when the image is seen from a particular angle or distance.

AI tools, especially those designed for graphic manipulation, have become increasingly accessible and sophisticated. These tools allow users to create intricate images and visual content with minimal effort. Unfortunately, this technology is being exploited by hate groups and individuals seeking to spread antisemitic messages covertly. How do they do this? They upload an age-old antisemitic image and then prompt the AI tool to generate a similar-looking image composed of other elements, such as coins, food, or an arrangement of human figures.

One disturbing trend is the creation of AI-generated images that appear harmless at first glance but reveal antisemitic symbols or messages when viewed in a certain way. For instance, an image might depict a simple scene, such as a man hugging a woman, but when the viewer’s eyes are de-focused, a hateful caricature, such as the notorious “Happy Merchant” stereotype, emerges from the background (see the attached images, including the featured image, at full size).

In the attached examples, several AI-generated versions of the antisemitic “Happy Merchant” image can be seen, each depicting a greedy Jewish figure plotting a sinister scheme. Online users created these versions with such AI tools, rendering the caricature out of snakes and apples, coins, food, natural scenery, space imagery, and combinations of people.

This use of AI for creating antisemitic graphic illusions is an extension of what I refer to as “Memetic Antisemitism”—the spread of antisemitic ideas through memes and other easily shareable content online. Memes have long been used to propagate hate speech because of their viral nature and ability to convey complex ideas succinctly. AI enhances this by enabling the creation of content that can bypass initial scrutiny, embedding hateful messages that may not be immediately recognized.

Such AI-generated illusions take memetic antisemitism to a new level, making the content even harder to detect and counteract. The subtlety of these images means they can be shared widely on social media platforms, chat groups, and forums without triggering content moderation algorithms that typically flag overtly hateful symbols or language. Mainstream monitoring tools will thus remain ineffective until they are updated to detect such illusions. Counter-speech is also hampered: users cannot report content as hate speech if they do not recognize it as such.

The danger of AI-generated antisemitic content lies in its ability to spread without detection, subtly influencing perceptions and reinforcing negative stereotypes among those who encounter it. The use of graphic illusions also creates a sense of exclusivity or “inside” knowledge among those who recognize the hidden messages, potentially strengthening group identities within hate communities.

Moreover, the spread of such content is not limited to niche platforms. As AI tools become more user-friendly, even those with little technical expertise can create and disseminate these illusions, contributing to a broader culture of online hate.

Combating AI-generated antisemitic content presents a significant challenge. Traditional methods of detecting hate speech, such as keyword filtering, linguistic analysis, and image recognition, may be inadequate against these sophisticated graphic illusions. New approaches are needed, including AI-driven tools capable of identifying subtle manipulations in images and recognizing patterns of antisemitic content that are not immediately apparent. In effect, AI could be used to counter AI.
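One possible direction, offered here purely as an illustrative sketch and not as a description of any existing moderation system, follows from the mechanism of the illusion itself: because the hidden caricature lives in an image's low-frequency content (that is why it emerges when the eyes de-focus), a detector could simulate that blur computationally, reduce the result to a coarse perceptual hash, and compare it against hashes of known hate imagery. All names, values, and thresholds below are hypothetical.

```python
import numpy as np

def defocus_hash(img: np.ndarray, grid: int = 8) -> np.ndarray:
    """Reduce a grayscale image to a 64-bit 'average hash' of its
    low-frequency content. Block-averaging down to grid x grid is a
    crude stand-in for the blur of de-focused viewing."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = img[: bh * grid, : bw * grid].reshape(grid, bh, grid, bw)
    small = blocks.mean(axis=(1, 3))            # grid x grid block means
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing hash bits; a low count suggests a match."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)

# Hypothetical reference pattern standing in for a known hate image.
template = np.zeros((256, 256))
template[:, 128:] = 1.0

# The same pattern faintly embedded in visual noise: hard to see up close,
# but it dominates the low frequencies that survive blurring.
candidate = rng.uniform(0, 1, (256, 256)) + 0.05 * template
innocent = rng.uniform(0, 1, (256, 256))

ref = defocus_hash(template)
print("candidate vs reference:", hamming(defocus_hash(candidate), ref))  # low
print("innocent vs reference:", hamming(defocus_hash(innocent), ref))    # high
```

In this toy setup the faintly embedded pattern produces a hash nearly identical to the reference, while an unrelated image does not. A real moderation pipeline would need robust perceptual hashing, a curated database of hate imagery, and human review, but the core idea, matching on blurred low-frequency content rather than on the sharp image, targets exactly what makes these illusions evade current filters.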

In addition to technological solutions, there must be a concerted effort to educate users about the presence of such content online. Awareness campaigns can help individuals recognize when they are being exposed to harmful material, even if it is not immediately obvious.

As AI continues to evolve, so too will its applications—both positive and negative. The use of AI tools to create antisemitic graphic illusions online is a stark reminder of the dark potential of this technology. While the battle against online hate speech is ongoing, understanding and addressing the ways in which AI is used to spread antisemitism is crucial. By staying vigilant and developing new strategies to counteract these threats, we can work towards a safer and more inclusive digital world.

About the Author
Dr. Lev Topor is the co-author (with Prof. Jonathan Fox) of 'Why Do People Discriminate Against Jews?', published by Oxford University Press in 2021, and the author of 'Phishing for Nazis: Conspiracies, Anonymous Communications and White Supremacy Networks on the Dark Web', published by Routledge in 2023. Lev also published 'Cyber Sovereignty: International Security, Mass Communication, and the Future of the Internet' with Springer in 2024. He publishes scholarly works and reports on antisemitism, anti-Zionism, racism, and cyber issues. Previously, Dr. Topor was a research fellow at ISGAP, the Woolf Institute (Cambridge), the CCLP (Haifa University), and the International Institute for Holocaust Research at Yad Vashem (Jerusalem).