A contrarian take on the disinformation panic

Jake Angeli, 33, a.k.a. Yellowstone Wolf, from Phoenix, wrapped in a QAnon flag, addresses supporters of US President Donald Trump as they protest outside the Maricopa County Election Department as counting continues after the US presidential election in Phoenix, Arizona, on November 5, 2020. | Olivier Touron/AFP via Getty Images

Joe Bernstein on what we know — and don’t know — about disinformation.

If you’ve followed the news over the last few years, you’re probably convinced that we’re living in a golden age of conspiracy theories and disinformation.

Whether it’s QAnon or the January 6 insurrection or anti-vaccine hysteria, many have come to believe that the culprit, more often than not, is bad information — and the fantasy-industrial complex that generates and propagates it — breaking people’s brains.

However, I read an essay recently in Harper’s magazine that made me wonder whether the story was as simple as that. I can’t say that it changed my mind in any profound way about the real-world consequences of lies, but it did make me question some of my core assumptions about the information ecosystem online. It’s called “Bad News: Selling the Story of Disinformation,” and the author is Joseph Bernstein, a senior technology reporter for BuzzFeed News.

Bernstein doesn’t deny that disinformation is a thing. The problem is that we don’t have a consistent definition of the term. What you find in the literature, Bernstein says, is a lot of vague references to information “that could possibly lead to misperceptions about the state of the world.”

A definition that broad, he argues, isn’t all that useful as a foundation for objective study. It’s also not clear how disinformation is distinct from misinformation, except that the former is intentionally misleading. All of this leads Bernstein to conclude that even the people researching this stuff can’t agree on what they’re talking about.

But the bigger — and much less understood — issue is that certain interests are invested in over-hyping disinformation as an existential crisis because it’s good for business and because it’s a way of denying the real roots of our problems.

I reached out to him for this week’s episode of Vox Conversations to talk about where he thinks the disinformation discourse went wrong and why it’s not all that clear whether the internet broke American society or merely unmasked it.

Below is an edited excerpt from our conversation. As always, there’s much more in the full podcast, so subscribe to Vox Conversations on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you listen to podcasts.


Sean Illing

I’ve spent a lot of time the last few years making noises about disinformation and misinformation and what a great problem it is, and I have to say, you’ve really made me pause and think hard about how easily I’ve bought into the conventional wisdom on this stuff.

But let’s just start there: Do you think people like me, who have been worrying publicly about disinformation, have been part of a panic?

Joe Bernstein

I think that the idea of bad information on the internet is a poorly understood and at times poorly discussed topic. It’s a huge topic, a new topic, a very important topic. But like many problems, it helps to define it. And if you have trouble defining it, it helps to think about why. And when you start thinking about why, it helps to think about who is trying to define the problem and why.

And so, I’m not comfortable even calling it a panic, because I think these are real problems, especially as we’ve seen with the series of revelations in the Wall Street Journal over the past couple of weeks and then the testimony of the Facebook whistleblower. It’s just not clear to me that we understand completely what’s at stake, or how these categories that get tossed around (and I’ve at times tossed them around too), mis- and disinformation, are being used.

And that’s really what I wanted to do: not to say that several private companies having monopoly power over the flow of information is a thing we should just be happy with and live with, but that when we talk about the problem, we should understand who wants to address it and why.

Sean Illing

It might surprise people to learn that even the researchers studying disinformation can’t come up with a coherent or consistent definition of the term.

Joe Bernstein

This is one of the things that I played for laughs in the piece. What scholars would say is that they have a lexical problem. Everyone knows there’s an issue, but everyone is attacking this issue using the same word, with a different idea in their head.

So the most comprehensive survey of the scholarly field is from 2018. It’s a scientific literature review called “Social Media, Political Polarization, and Political Disinformation.” And the definition they give of disinformation — and this is a good, broad survey of the field — this is the definition they give: “Disinformation is intended to be a broad category describing the types of information that one could encounter online that could possibly lead to misperceptions about the actual state of the world.”

Now, as far as I can tell, that definition basically applies to anything you could come in contact with online. And Sean, I should make the point that this trickles down to the definitions that tech companies use when they define mis- and disinformation. So — I’m not going to get this exactly right — but TikTok’s definition of misinformation is something like, “information that is not true or that could mislead.” There’s just not a lot of there there. There’s a lot of good research, but for something that aspires to be an objective science, there’s not a good objective foundation.

Sean Illing

A big problem here is that we’re desperate for some kind of neutral definition of disinformation so that it’s possible to call something “disinformation” without it appearing political, but that doesn’t seem possible.

Joe Bernstein

Yeah. And then, one of the interesting things to me was when I looked up the etymology of the term — it’s actually a borrowing from a Russian word that was popularized in the early years of the Cold War: dezinformatsiya. It was initially defined in the 1952 Great Soviet Encyclopedia, which was kind of a propaganda encyclopedia meant for English consumption. Its definition was as follows: “dissemination in the press or on the radio of false reports intended to mislead public opinion. The capitalist press and radio make wide use of dezinformatsiya.”

I don’t mean to be a complete relativist and say there aren’t things that are true or false. Of course there are. But on the internet especially, context is very, very important, and it’s very hard to isolate particular nuggets of information as good or bad information.

Sean Illing

What’s a better definition of “disinformation”? How’s it distinct from “misinformation” or “propaganda”?

Joe Bernstein

I like the word propaganda better than I like the words mis- and disinformation because I think it has a stronger political connotation. I think there is a broad understanding among the people who study and the people who talk about mis- and disinformation in the media, that disinformation is more intentional than misinformation, and misinformation tends to be poorly contextualized but nevertheless true or “truthy” information.

What I wanted to do with this piece is make it clear that these definitions have politics behind them, in the same way that the people who use them have politics behind them. I don’t even think there’s necessarily anything wrong with using these terms, as long as it’s clear that there are interests.

And I’m not implying some kind of broad conspiracy. I take pains to say — maybe I didn’t say it enough in the piece — that there are people who are operating in utter good faith, who care deeply about public discourse, who are studying this problem. I just want some recognition that the use of these terms has a politics behind it, even if that’s a centrist or kind of a conventional liberal politics. I would like that to be a feature of the discussion.

Sean Illing

A big claim in your piece is that the disinformation craze has become a vehicle for propping up the online advertising economy, and it might sound counterintuitive to say that Big Tech companies like Facebook would enthusiastically embrace the idea that “disinformation” is a major problem.

What does a company like Facebook stand to gain here? Why are they selling this so hard?

Joe Bernstein

Well, one of the things that got me thinking about this was a buzzword that I have used myself: the “information ecosystem.” It just kind of makes intuitive sense. We have a world, the natural world of information, and then something’s polluted it. And so then I started thinking about other industries that pollute, and that have gotten in trouble for polluting.

So like the tobacco industry — which has been a major point of comparison to big tech recently — well, cigarettes give people cancer. Or the fossil fuel industry, it pollutes and it’s contributing to climate change. And there’s good science behind that. And yet these industries have spent years fighting the science, trying to undermine the science.

And I was very surprised when I thought about the timeline: how little time passed from Facebook being blamed for throwing the 2016 election in Trump’s favor and for Brexit to Mark Zuckerberg essentially admitting in public that misinformation was a problem. And we intuit that it’s true, but I don’t think the science is necessarily there. I don’t think the study of media effects on politics is necessarily there yet.

I mean, we’re still getting the political science on the effect of Father Coughlin on, I believe, the 1936 election. These are questions that are going to be resolved over time. But you had Mark Zuckerberg out there in public basically saying, “We’re going to fight misinformation.”

Partially, that’s because I think Facebook has never had a particularly coherent press strategy. But part of it, I think, is that Facebook realized very quickly, as did the other big tech companies, that rather than saying in a kind of blanket way, “This isn’t true; these claims have no empirical basis behind them,” it was a better strategy to co-opt, or at least sort of put their arms around, the people who are doing this research.

And I started to wonder why. From a public relations perspective, it makes good sense. But I also started to think about the nature of the claim itself: that people who are exposed to bad information are necessarily convinced by that information. And that’s when I kind of had a “eureka” moment, which was that that’s exactly the same way Facebook makes money. It rests on what Hannah Arendt calls the “psychological premise of human manipulability,” which is kind of a mouthful.

And so, if we accept that people are endlessly convincible by whatever bullshit they see on Facebook and across the internet, in some ways we’re contributing to the idea that the ad duopoly of Facebook and Google, and online advertising in general, works.

I’m kind of going on, but there’s a terrific book that I read around that time by Tim Hwang, a guy who worked at Google for a long time and is now the general counsel of Substack. The book is called Subprime Attention Crisis, and it’s basically about how much of the online ad industry is a house of cards.

One very interesting fact about the Facebook whistleblower’s disclosures to the SEC, and one that got almost no press attention, is that she claims, based on internal Facebook research, that the company was badly misleading investors about the reach and efficacy of its ads. And to me, the most damaging thing you could say about Facebook is that this kind of industrial information machine doesn’t actually work.

And so that kind of flipped everything I thought about this on its head. And that’s when I started to write the piece.

To hear the rest of the conversation, click here, and be sure to subscribe to Vox Conversations on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you listen to podcasts.