Big data is helping us to understand the world. Though there are important caveats and notable drawbacks, these large and complex datasets are already helping us to get better at identifying cancer, combating climate change and even understanding and fighting racism.
In 2019, colleagues from the Community Security Trust (CST) and I worked with big data guru Seth Stephens-Davidowitz to examine Google, using its information to probe and understand anti-Jewish racism on the platform. Seth found that the most common antisemitic Google searches in the United Kingdom are for jokes mocking Jews. There is a direct correlation between these searches, the report found, and those mocking other minorities. Someone who searches for “Jew jokes”, for example, is 100 times more likely to also search for n-word jokes.
Google has taken some steps to address the discoverability of hate materials through its search index. We know, for example, that when it removed the autocomplete suggestion “evil” for those typing the words “Jews are…”, searches for “Jews are evil” fell by 10% from the previous year.
However, significant problems remain, as the shocking story of Google’s image search returning pictures of portable barbecue grills in response to a search for “Jewish baby stroller” demonstrated.
We have just released further analysis. Working together with the CST again and the Woolf Institute at Cambridge University, we’ve found that Google’s current tools for filtering offensive images are not fit for purpose.
If, as a user, you wish to protect someone you live with, a child perhaps, from encountering offensive and harmful content on Google’s search result pages, you can use the ‘SafeSearch’ function to filter content. Sadly, that filter has no impact on the number of antisemitic images returned by Google’s image searches, despite the tool being promoted to parents as one that can block most inappropriate content. In fact, for “Jewish jokes” and “Jew jokes” we found that a high proportion of the images returned were antisemitic irrespective of whether the SafeSearch function was switched on or off.
Worryingly, the impact of these system deficiencies is potentially wider than for Google users alone. The company has software that enables developers to identify explicit or offensive content – sometimes categorized as “spoof” – but that software is not yet capable of accurately identifying antisemitic images either.
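To see why this matters for anyone building on top of such software, consider how a developer typically consumes these annotations. The sketch below is hypothetical: the likelihood levels and category names (including “spoof”) mirror those used by Google Cloud Vision’s SafeSearch annotation, but the filtering helper itself is an illustration, not Google’s code. The point it demonstrates is the one in the report: if antisemitic imagery is not scored highly in any category, no threshold a developer chooses will catch it.

```python
# Hypothetical sketch of a developer-side filter consuming SafeSearch-style
# annotations. Category and likelihood names follow Google Cloud Vision's
# SafeSearch annotation; the should_block() helper is an illustration only.

LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def should_block(annotation: dict, threshold: str = "POSSIBLE") -> bool:
    """Block an image if any category meets or exceeds the threshold.

    `annotation` maps category names (e.g. "adult", "spoof", "violence")
    to likelihood strings from LIKELIHOODS.
    """
    cutoff = LIKELIHOODS.index(threshold)
    return any(LIKELIHOODS.index(level) >= cutoff for level in annotation.values())

# An antisemitic image that the classifier scores VERY_UNLIKELY in every
# category slips through, no matter how strict the developer's threshold
# for blocking is set.
hateful_but_unflagged = {"adult": "VERY_UNLIKELY", "spoof": "VERY_UNLIKELY",
                         "violence": "VERY_UNLIKELY"}
print(should_block(hateful_but_unflagged))  # False: the image is not filtered
```

In other words, the developer’s filtering logic can only be as good as the categories and scores the underlying software provides; if no category captures antisemitic content, downstream filters inherit that blind spot.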
So, whether they are building a website, a new platform, or some other tool, designers seeking to use Google’s software won’t be able to accurately identify and filter antisemitic content (and, one could presume, many other forms of racism) for their users or audience, should they choose to do so. It isn’t just designers who are affected: academics and others seeking to study and probe hate online are restricted by the same deficiencies. And if a company as large as Google is failing to provide the tools to identify and filter antisemitism, it is highly likely other tech firms are failing too.
Our findings underline the importance of the Online Safety Bill, announced in this year’s Queen’s Speech. Technology companies, and particularly those housing user-generated content, like Google and social media companies, have still not proven themselves in the fight against online hate. Despite many great efforts, unbridled hate – both illegal and legal but harmful – remains easily accessible online.
Risk assessments and safety by design should be central to any technology company’s thinking, establishment, and development. The proposed statutory duty of care in the Online Safety Bill is likely to focus minds and should help to ensure that problems like those we have highlighted are not put in the ‘too difficult’ box by companies large or small. This duty will need to be accompanied by robust Codes of Practice and appropriate penalties, properly enforced by the proposed regulator, Ofcom. For Google, its image filtering and developer tools require urgent improvement, and until those changes are made, Unsafe Search is perhaps a better description of what is on offer.