“Misinformation” is Dictionary.com’s word of the year. The site defines it as “false information that is spread, regardless of whether there is intent to mislead” and is careful to distinguish it from disinformation, which does require a deliberate intent to mislead. Note that the word of the year is not “fake news.” That’s SO 2016.
For anyone concerned about the varieties of false information, the recent U.S. midterm elections were seen as a test of whether we’ve learned anything in the past two years about how to deal with them.
Good news: we kind of have! Unlike in 2016, this election cycle did not have a huge spike in misinformation, according to media researchers at the University of Michigan. Facebook and Twitter were much more vigilant this election cycle. (The night before the election, Facebook shut down 115 accounts for suspected “coordinated inauthentic behavior” linked to foreign groups trying to interfere with the midterms.)
But the nature of false information itself has fundamentally changed in the past two years. As CNN’s Brian Stelter wrote a few days before the election: “Are midterm voters being fooled by made-up stories? I’ve been talking with experts and scouring social media websites for answers. My impression is that the specific ‘fake news’ problem is less pronounced this election season. But the threats have morphed and multiplied.”
About a week before the election, I decided to go hunting for some of these threats. I aggregated the most-shared articles about 2018’s midterm elections and used the Newstrition web browser extension — a tool developed by my organization, the First Amendment Center — to quickly fact-check them. Who were the publishers behind them? What kind of content was being shared? News? Opinion pieces? Full-blown hoaxes?
Here’s what I found:
Content from lesser-known publishers can rack up a lot of engagement, even when users have no idea what kind of publisher it is.
Unsurprisingly, a lot of the most-shared content was from well-known national media outlets like CNN, Fox News, The Washington Post and Breitbart. But social media still affords plenty of opportunities for articles from lesser-known media outlets to go viral. That’s not necessarily a bad thing, but it does mean that users aren’t always fully aware of what they’re sharing. Case in point: satirical news site The Babylon Bee published a story with the headline, “Dems: Trump’s Refusal To Admit Caravan Into Country Is An Egregious Act of Voter Suppression,” which was shared thousands of times on Twitter and Facebook. Just to be clear, The Babylon Bee isn’t a fake news or hoax site; it has plenty of fans who immediately knew that this wasn’t a real headline, and if you look at its website, it doesn’t really hide the ball about being satirical. But given some of the outraged reactions on Facebook, my guess is that not everyone knew that.
Outright hoaxes going viral have been dramatically reduced, but misleading headlines are still going strong.
One thing I came across quite a bit was headlines that were a lot more salacious than the actual articles. I’d click on a link titled, “Joe Biden THREATENS Republican Candidate — Tells Union GOONS to ‘Show Him a Threshold of PAIN!’” and find a story about Biden expressing his support for unions while campaigning in North Dakota. A quick fact-check revealed that the “Threshold of PAIN!” quote was accurate but taken out of context; in any case, the article wasn’t a case for Biden being a homicidal maniac so much as a piece of political commentary about Democrats “pretending they’re just as bada** as Trump, while simultaneously pretending they are the party of civility.” Which is fine, but...the headline promised me an ARMY of GOONS!
This bait-and-switch also cropped up in articles with fewer all caps in their headlines. In most cases, the full article would provide nuance and context that the headline didn’t. The only problem is that nobody reads full articles anymore; research shows that 59 percent of the links shared on social media have never actually been clicked.
That leads to another related phenomenon.
Hyperpartisan news may be the toughest problem for platforms, and for all of us.
Hyperpartisan news is an interesting thing. It’s not fake news, per se — the events aren’t fabricated, although they’re often sensationalized and viewed through a very specific lens. You can argue with the underlying point of view, but you can’t really debunk something that’s essentially just opinion. As Claire Wardle, the head of First Draft, says, “[C]urrently there is little the platforms can do with this type of content. It can not be fact-checked in a formal sense and some would argue that this type of content is ‘politics as normal.’ What we don’t know is how to measure the drip, drip, drip of these divisive hyperpartisan memes on society.”
My guess would be that the impact of these divisive hyperpartisan memes on our society isn’t great. And it looks like Russia agrees with me! According to a former NSA official (now a cybersecurity threat analyst), “Russian accounts have been amplifying stories and internet ‘memes’ that initially came from the U.S. far left or far right. Such postings seem more authentic, are harder to identify as foreign, and are easier to produce than made-up stories.”
These types of articles don’t have misleading headlines. Their headlines are perfectly in sync with what’s in the full article. You can probably predict exactly what they have to say without even clicking on them.
And that might be the point. We’re increasingly swapping the sort of stories that aren’t really meant to be read. Instead, this kind of content is designed solely to be shared on social media, as a kind of badge of who you are and a signal to others about where you stand.
Lata Nott is executive director of the First Amendment Center of the Freedom Forum Institute.