TikTok: The Moral Arc of the Social Media Universe
In October of 2021, Frances Haugen, the “Facebook whistle-blower,” went wide with the Facebook Papers – showing that Facebook and Mark Zuckerberg were well aware of the harm created on the ubiquitous social media platform. None of this was news to anybody, but there was a great deal of hope that this incontrovertible evidence would be the shot heard around the world for social media platforms. It wasn’t.
On a lark, I created a TikTok account for Gidon Lev, my beloved life partner, a Holocaust survivor about whom I co-authored a book. Gidon was the second Holocaust survivor on the wildly popular, (very) young-skewing social media platform, preceded by Auschwitz survivor Lily Ebert. After Gidon “onboarded,” Tova Friedman, another Auschwitz survivor, also appeared on TikTok, with the help of her grandson. The presence of all three Holocaust survivors on the platform has opened up opportunities for Holocaust education that had previously gone untapped. In fact, the three survivors inspired a program launching in 2022 to help other elderly Holocaust survivors tell their stories on social media.
Predictably, as the engagement and followers on Gidon’s account went up, the antisemitic trolls showed up like mushrooms after the rain, with comments like “I hope you enjoyed your stay at Imaginationwitz” and “Did you enjoy the showers?”
Perhaps as a way of coping, I began keeping track of the comments and categorizing them – who were these keyboard warriors hiding in anonymity? I noticed significant differences in the trolls’ levels of sophistication, ranging from their use of language to particular emojis, dog-whistle numbers and symbols, and other “tells” – things like whether they had a profile picture of a human being or, instead, an overtly antisemitic symbol like Pepe the Frog or the SS insignia (both of which violate TikTok’s community guidelines). I spoke to a former neo-Nazi about the use of social media to influence, persuade, and recruit. I noticed that the antisemitic trolls fall into categories of behavior ranging from attention-seeking to genuine ignorance to active recruitment into the white supremacy movement. Many, in other words, are just idiots – teenagers hoping to shock others. But some are deadly serious.
I was able to share my experiences on TikTok with scholars at Hebrew University. Through that relationship, I received a direct line of communication to a representative of TikTok in Germany. (TikTok, it seems, has offices all over the world, with various people doing – something.)
Ahead of the appointed hour, I collected several antisemitic comments and profile “bios” that I had reported for hate speech – and which had been returned to me with “no community violations found.” Some came from my own account; others were shared with me by other Jewish TikTok users and creators. I was eager to speak to an actual human at TikTok to find out just what these “community guidelines” entail and how and why they are – or are not – enforced.
I came away from the conversation discouraged; I learned exactly what I had suspected. Namely, that weeding out online hate on a social media platform as huge as TikTok (which has one billion monthly users – you read that right – one BILLION) isn’t simple or easy on such a vast, automated system. Yes, there are some actual human moderators at TikTok. Still, they simply look at a reported comment or profile, refer to the checklist of complaints one could report for, see what is allowed and disallowed, and make their decision from that. Thus, a profile I reported, which has “stop reporting me ya k!k3$” in its bio, was found not to be violating community standards because I hadn’t clicked the correct box of complaint when reporting. (I had reported the content of the account, not the user profile.) Thus, the post of a different Jewish TikTok user, which someone “dueted” with a photograph of an oven next to it, was not found to violate community guidelines because the oven was an “appliance.” Neither the AI nor the moderator was able to draw the logical conclusion about intent. The moderators, TikTok told me, are vetted and trained, but they can’t know every nuance of hate for every “protected group.”
As I listened to the clearly intelligent, thoughtful, committed TikTok representative explain to me over Zoom how challenging this type of reporting and moderation is – especially, as he put it, “on the scale” of TikTok – I couldn’t help but think to myself: hold on – isn’t TikTok (owned by ByteDance) making mountains of money from this platform with, let me remind you, a billion users a month?
Gun advocates say, “guns don’t kill people; people kill people” – a logic that social media platforms seem to agree with, if not in word, then in deed. Haters gonna hate. But where is their responsibility in this brave new metaverse, given that they pocket unimaginable amounts of money?
If social media platforms’ claims that they genuinely care about countering online hate are more than lip service – what does that effort really amount to? What resources are they actually putting toward this, and with what results? Who is accountable for the gap between the intention to do better and the daily harm?
Look, I get it. Hate is not new, and as the technology we use gets more sophisticated, spreaders of hate adapt to new means – and memes. TikTok is a thriving, sprawling platform used by a billion people. Data privacy laws prohibit requiring users to register with their real names. “Doxing” individuals, a common occurrence on social media, is not something TikTok can get behind, for ethical and legal reasons. Trying to block and report trolls on TikTok is an endless and sometimes futile exercise, so quickly do the truly committed trolls (and they are legion) create new profiles and have another go.
The other day, an ominous-looking TikTok profile left me a message. It was one of those online vigilante types with the Guy Fawkes mask that Anonymous uses. He swore, he said, to “protect” my account. He even made a nice TikTok about it. He didn’t ask me for money, but I found myself thinking, wildly, that maybe I could just hire someone like this so that Gidon and I could keep making educational, fun TikToks in peace and not suffer the psychological harm of dealing with trolls every day. And why not? We hire a content creator to help us with our content – maybe now we need a virtual bodyguard too.
The TikTok representative assured me that TikTok is very cognizant of the problem of online hate and is working to do a better job of mitigating, moderating, and otherwise discouraging the hate directed toward protected groups on its platform. I showed him a TikTok video that had been posted only the day before, in which a young blond woman holds her blond baby before the camera, looking oddly confrontational. The text on the screen: “All you do is say I am conflicted and act like you are smart (laughing emoji) the 6 bajillion who got gassed were rightfully punished for their crimes.” The post was reported for hate speech. TikTok replied that it had “not violated community guidelines.” The TikTok representative frowned and peered at the screen. “That should not have happened,” he said.
Even if we take the good intentions of TikTok, Facebook, Twitter, and others at face value, those intentions don’t match up with concrete actions to make social media platforms a safe space for marginalized groups. This missing safety net has caused some creators to want off the platform. And then what? Voices that are already marginalized become even more so. The moral arc of the social media metaverse bends and bends.