The fog of war, a term often attributed to Prussian military theorist Carl von Clausewitz, describes the chaos that clouds a battlefield commander’s vision. But in the digital age, this fog obscures everyone’s vision. Just ask Joe Kahn, the executive editor of The New York Times.
For the first time since assuming the helm in June 2022, Kahn recently issued an editor’s note (read: apology) for a headline stating that an Israeli airstrike was responsible for 500 deaths at Gaza’s al-Ahli Arab Hospital, with an attribution to the Palestinian Health Ministry. “Times editors should have taken more care with the initial presentation,” he wrote.
In the week between the Times’ headline and Kahn’s statement, it became clear that practically everything about the initial story was off. The blast hit the parking lot, not the hospital. There were far fewer deaths than initially reported. And video analyses point to the damage coming from a misfired rocket, aimed at Israel, launched from Gaza itself.
Along with my colleagues at Stanford University, I have spent the last seven years studying how people learn to make better decisions about what to believe online. We can partially ascribe the Times’ confusion to conflicting reports in the immediate aftermath of this event.
Not only should the Times have known better than to swallow whole the claims of the Palestinian Health Ministry and its Hamas overlords; it did know better. Before the banner headline went up, a senior editor on the International desk warned colleagues on the Times’ internal Slack channel, leaked to Vanity Fair, that “we can’t just hang the attribution of something so big on one source without having tried to verify it.”
The senior editor’s warning fell on deaf ears. It’s a warning we would all be wise to heed.
Clausewitz’s metaphor of fog aptly fits the challenge of seeing straight in an information war. The current conflict is being waged not only between Israel and Hamas but by a host of foreign actors, all of whom take to cyberspace to advance their interests. Add to this brew the supercharged capacity of AI to spew disinformation, and the result is much more than fog; it is near-zero visibility.
Here are four guidelines for seeing through the fog and staying sane amid this information war.
First, be honest with yourself. Can you really tell whether a video that appears in your feed is genuine footage or a deepfake? Most of us can’t. If you’re not familiar with geolocation and metadata, what special powers do you have that make you more adept than the average person at distinguishing truth from lies? Can you be certain that the video claiming that the U.S. has boots on the ground in Gaza, viewed more than 700,000 times on TikTok, was actually shot in Gaza? How many viewers took the time to investigate the source before passing it on? Those who did learned that the footage came from the 2019 pullout of American troops from Kurdish-controlled northern Syria.
Second, ask yourself if the post that pops up in your feed is actually from someone in a position to know. Increasingly, leading commentators on X, formerly Twitter, earn their status not because of their expertise or deep knowledge, but because their incendiary messages, filled with lightning bolt emojis and “breaking news” headings, have been elevated by Elon Musk and broadcast to his 160 million followers.
Before forwarding a message, ask what its author has to lose if the message turns out to be wrong. The reputation of someone like the BBC’s preternatural fact-checker Shayan Sardarizadeh rests on being accurate and documenting his sources. On the other hand, scores of random rage merchants go from one act of online arson to the next, gaining followers with each post.
Third, go beyond the headline. The Times fiasco is a good example of why. The initial headline, “Israel Strike Kills Hundreds in Hospital, Palestinians Say,” tortures both grammar and attribution. As media critic Parker Molloy pointed out, had the order been switched, starting with “Palestinians Say,” we would immediately have been cued to the source and better able to weigh its possible interest. But the whole point of clickbait headlines is to circumvent rational processes and engage our reptilian brain. Rage sells.
Fourth, if you do search for confirmation, check multiple sources in a process known as lateral reading. Remember, Google is a search engine, not a truth engine. Its algorithms pick up on the slightest scent of bias in your search terms and will, accordingly, serve up what it thinks you want—reliable or not. Don’t just click on the first result. Scan the full set of results and make a wise first click.
Search engine optimization, the process of gaming search results and kicking them to the top of the list, is a $46 billion industry, supported not just by advertisers but by lobby groups and, yes, foreign governments and non-state actors. If you’re unsure about an image, plug it into Google Lens or TinEye and see if it’s actually from the event you’re reading about or from something that happened years before.
Here’s the most important piece of advice. Take a deep breath and ask yourself why you feel compelled to share a particular post. Is doing so really helping the cause? Or are you retraumatizing people who are already struggling to cope with grief?
It took The New York Times seven days to issue its correction. In the interim, various congresspeople retweeted the story, further brutalizing raw emotions. By the time the story was finally amended, the truth lagged miles behind. Sadly, it only rarely catches up.