Warning: your Google search is "customized" for the narrative you prefer
Simple, common-sense steps to personally filter out garbage info
The quality of your conclusion is influenced by the quality of your data analysis
The McAlvany Weekly Commentary
with David McAlvany and Kevin Orrick
Dr. Justin McBrayer: Being A Steward Of Truth In A Fake News World
November 11, 2020
“It’s not just that we lack truth-based incentives. That would be bad enough, but we also have perverse incentives. You have incentives to be entertained, to fit in with the people around you, to cheer for the tribe that you identify with and so forth. And those incentives are pushing against whatever limited incentive you have to get to the truth. And it makes us, unfortunately, irresponsible consumers of information.”
– Justin McBrayer
Kevin: Our guest today, Dr. Justin McBrayer, has written a book on fake news. That's sort of the vernacular now, Dave. We have all heard this, and we think we know what fake news is. But he's actually calling us to personal responsibility and saying, "Hey, you had better check out your sources and understand why you think the way you do."
David: He is a philosophy professor at Fort Lewis College, a liberal arts college here in the Colorado state system, and he's written a number of books. He co-edited the Blackwell Companion to the Problem of Evil, and he has also done Introducing Ethics and Skeptical Theism: New Essays. Now Beyond Fake News: Finding the Truth in a World of Misinformation is hot off the press.
Kevin: Well, the timing of this interview was well chosen by you, Dave, with what’s been going on, not only with the election but everything that surrounds the election.
David: What matters to us is tools and engagement and perspective, and we want our listeners to be able to engage in a way that is winsome. And I hope the conversation today enables you to cut through some of the chaos of the current moment and see that there are practical things that can be done to improve the situation that we see unfolding in the political sphere. And as Justin says, there is a hopeful aspect to this.
* * *
Justin, you’ve studied and taught philosophy for several decades. You’re a Fulbright scholar. You’re the Executive Director of the Society of Christian Philosophers, and play a role in administration at the liberal arts college where you teach. Apparently, ideas matter to you. I’ll assume this is one of the underlying motivations for writing the book.
Obviously the topic of fake news is relevant given the political context we find ourselves in. So perhaps you could start by telling us what you think is at stake today, with a 24-hour news cycle, a constant bombardment of information from both traditional and nontraditional sources, and an electorate that is stepping into the fray either more or less informed.
Justin: Sure. Yes, I’m interested in ideas because ideas have consequences. What people believe alters the actions they take every day. It changes how people vote. It changes how people buy. It changes how people interact with their family members. So as a philosopher I’m intensely interested in ideas, and it turns out that it’s getting harder and harder to sort ideas or to evaluate ideas, and in particular, harder and harder to try to figure out what’s true because our world is getting ever more complex.
This matters in our personal lives, but it also matters for the political body as a whole. When we’re thinking about a democracy, a place where people are supposed to have the power, if you strip those people of knowledge, you’re stripping them of power. And so if we want our democracy to be functional, we have to make it the kind of place where citizens can find the truth, where they can make arguments pro and con for various ideas, and then go to the ballot box being well informed.
David: I like that you set this up as a market-based problem. There may be a market-based solution as well, but you start with the littered landscape of information. Then you look at how we operate as consumers with certain blind spots and preconditions that lead us towards error. And then toward the end of the book you consider how we can do better. The last part you call applied epistemology. So for the non-philosophers out there, what does that mean?
Justin: Epistemology, most generally, is just the study of knowledge. What does it mean to know something? In what cases do we know, and in what cases should we be skeptical? How is justified or reasonable belief different from a mere guess? At the highest level of abstraction, philosophers working in epistemology are trying to answer these theoretical questions about our cognitive grasp of the world. Applied epistemology, then, takes the lessons from that more theoretical domain and applies them to some concrete problem.
For example, you could look at the epistemology of medical diagnoses. When a medical doctor is trying to get a handle on what kind of condition ails the patient, what is she looking for? What kinds of things count as evidence for this diagnosis rather than that one, and so forth? My project is really an applied epistemology project when it comes to public consumption of information.
When you’re trying to figure out whether some political party holds some particular platform, when you’re trying to figure out how an economic market works, when you’re trying to figure out whether a vaccine is effective, those are all tough questions. We all want to try to figure out the answers to those questions. And so my project is an applied epistemic project in trying to think of how we, as individuals, can sort through all of the information that bombards us every day to draw reasonable conclusions to those big questions.
David: We look at things from both a qualitative and quantitative standpoint, and it would seem that, in addition to that, there has been a shift in tone through the years. I don’t know if this is the last five or 10 years or if we could stretch back even further. But it seems that for civil discourse to occur, we need both civility and quality content upon which to linger in conversation. How do we move toward both of those, the civility aspect as well as the qualitative aspect?
Justin: The fake news problem is also intertwined with this problem that you pointed out, this rise in partisanship and this rise of a kind of animosity toward people with whom we disagree. And both the fake news and the partisanship have been rising steadily for at least 40 years. My thought is that they’ve been rising in tandem because they’re both part of a positive feedback loop. Getting fake news makes you more partisan, partisans then seek out more fake news, and so forth. So it’s a kind of cycle. I could talk more about why I think we’re in this cycle in just a moment, but let me say briefly, what I think we need to break the cycle is A) as you pointed out, some measure of civility, and B) as you pointed out, we need some kind of access to the facts.
But C), and maybe more importantly than either of those things, we need a measure of intellectual humility. Right now, I think we're too arrogant when it comes to our grasp of the facts. We just assume that we have it all right and that our opponents have it all wrong, and so they must be evil or ignorant or whatever. I think we need a serious dose of epistemic humility as we're trying to figure out what the world is like. And once we realize that our grasp on the world is fragile, it's much easier to be civil with other people who have drawn different conclusions than we have.
David: So we’re in the cycle, and it seems that one of the issues, too, is that we can’t agree on what truth is, or that it exists. Some say we live in a post-truth world. How do you go about anchoring a civil conversation if we can’t even agree on what is factual?
Justin: Yes, this is a stunning problem that’s really gotten worse and worse over the last 40 years or so. Think of how different the world is now than, say, in 1970. In 1970, if you wanted access to news, you had to pick up a newspaper, maybe your local daily. If you lived in a big enough city, you could get a national newspaper, and then you had to wait for the news to come on at six o‘clock and you had three stations that you could turn to. It was a pretty good bet that if you were to take any two people across the country, they would have been exposed to roughly the same sorts of news content. They would have tuned into one of the three legacy broadcasters, or they would be reading major American newspapers.
In other words, our exposure to the facts was much more limited and much more consistent. It was much easier for religious people and non-religious people, or conservatives and liberals to agree on the same basis of fact. They might disagree about what to do about these facts or how to respond to them, so they would have policy disagreements and that sort of thing. But they at least had the shared basis of fact. That era is gone. Watching the news at six o’clock has become a thing of the past. We now have 24-hour news channels, and furthermore, you can tune to the news channel that tells you what you want to hear. News is being catered to partisans in a way that was not happening in 1970.
You probably don’t have a local newspaper anymore. Newspapers are drying up. Instead, you’re relying on blogs or you’re relying on satire shows like The Late Show to get your information. So it’s just a radically different environment. And in fact, what’s happened is this information marketplace has been fragmented in a way that was just not possible in 1970. And that’s what’s driving a lot of this partisanship that we see today.
David: We have a new phrase for it. We call it fake news, but it's as old as any form of public communication. In past periods we would have called it propaganda. And in the past, propaganda flowed through controlled channels, primarily government-controlled or government-approved ones. It was largely limited to official, state-approved outlets.
Cable and the Internet have obviously changed that. So quality, actually, in some respects has improved. And in other respects it’s massively deteriorated. And this is a part of our issue, how do you sort out the good information from the bad information as it proliferates, and it doesn’t have those earlier era filters?
Justin: David, first let me say something about the nature of fake news. We use this moniker, fake news, and it’s been around a decade or so. But fake news is kind of a sloppy term that covers a wide range of different epistemic sins, so to speak. So here’s what I think is a clearer way to think through our informational environment.
First, there's information. Information is just a true proposition, something that's true. But note that even true propositions can mislead. So there's a second category, true propositions that mislead you in some way. We can call this misleading information. Just the other night, my kids were eating cereal and I asked my youngest son whether he left me any Cap'n Crunch. He said, "Yes, there's some left." Well, I pick up the box and there are two pieces of Cap'n Crunch in the bottom of the box. So okay, yes, he spoke truly. He told me something that was correct, but it was misleading. He gave me misleading information that distorted my overall picture of things.
So there’s information, and there’s misleading information, and next there’s misinformation. Misinformation is false information, false propositions that are being conveyed unintentionally. When you read, for example, some liberal blogger’s case for why a city should impose rent control, that’s a case of misinformation. It’s stuff that this person really believes, but it turns out that it isn’t true.
And lastly, there's disinformation, and this is the thing that you were mentioning, David, with regard to propaganda. Disinformation is when false propositions are conveyed with the purpose of deceiving someone else. So fake news covers this wide range of different things. Sometimes newspapers or blogs or whatever give you information, but they frame it in a misleading way. Sometimes they tell you what's false through no fault of their own; they're trying to get to the truth, but they give you misinformation instead.
And then finally, sometimes we’re subject to disinformation campaigns. When we talk about a foreign government, for example, interfering in American democracy by purposefully spreading lies, that would be an example of disinformation. So there’s fake news all around us, and it’s coming in all different guises and different shapes.
David: And we have fake news being tied, in part, to the Internet, and this raises an issue of jurisdictional responsibility. If we were, again, talking about a past period of time, when television channels had to have licenses and radio stations had to comply with certain mandates from the FCC in order to operate, there was a structure of accountability involved, and even things like the Fairness Doctrine were there up through the 1980s to govern what was said and how it was presented.
But here we have something very different. The Internet is transnational, and if you wanted to govern the space, who would do it? It sounds like this actually points us to the conclusion of your book, which is that we ultimately have to take responsibility and do better.
Justin: Yes, I think that’s right. I think what has happened is our regulatory environment has shifted, as you indicated, and our technology has shifted, and both of those have created a sort of Wild West for the media. Again, in the 1980s, to contrast what’s happening now with what happened earlier, we lived in a very structured environment, and if you were in the market for selling news, then you had particular strictures, particular rules, that you had to work within, things like the Fairness Doctrine and so forth.
And technology made it very difficult to create material and to distribute it widely. This is why there were only three legacy TV networks in the United States for so long. With the advent of the Internet, and of things like smartphones, all of a sudden we can produce content very cheaply. We can distribute it around the world with the press of a button, and we are no longer bound by the confines of federal regulation like the Fairness Doctrine. And this has opened up the information and misinformation marketplace in a way that was previously inconceivable.
Right now, a hacker can open her laptop in suburban Atlanta and create fake stories complete with faked images, faked video called deep fakes, and a storyline full of outrage and emotionally jarring verbiage. She can post this to a site, send it to an email list, circulate it around the world with the push of a button, and get paid by the clicks.
That was something that was impossible 30 years ago, and now that it’s possible, our news market has opened up itself to all kinds of abuse that we didn’t see earlier. And the fact of the matter is, since there’s no one to serve this role of global overseer, there’s no ministry of truth, this leaves us on our own to try to sort through all of the information that is out there and try to figure out what’s true and what’s not.
David: I think of the Arab Spring and how powerful it was for people to be able to send messages to each other around the traditional channels of communication, and how a news and social phenomenon arose to upset power that had been in place for decades. It seems to me there is a danger for established political power, and perhaps we should look at both the positive and the negative aspects of this accessibility. As you say, it's very inexpensive to produce information, put it out there, and distribute it, and there's a pro and a con to that in terms of public and political change.
Justin: I think that’s exactly right. Our technology, combined with the Internet, holds both promise and peril. In the early part of the 2000s, there were democratic activists on both the right and left who were hailing the Internet as this kind of force that would level the playing field.
Long ago, it was the people with the guns or the money or the political power who controlled the messaging. And now, all of a sudden, ordinary Joes could access information on the Internet, ordinary people could see through the bollocks of official messaging from government entities, and people hailed the Internet as a democratizing force. All of a sudden there was a sort of egalitarian distribution of information, and the thought was that, as a result, democracy would flourish.
And of course, in some cases it has. The Arab Spring is a good example of that. But in other cases it's been worse. And here's why. Technology in general, and the Internet in particular, have become incredibly personalized. And far from just leveling the playing field and providing this egalitarian access to information, what's happened instead is that our access to information has been fragmented by our own personal interests.
Let me just give you an example of what I mean. If I were to search "climate change" on my computer, I would get very different results from someone else who lived in Nebraska and searched "climate change." Literally, we could both open a Google browser, type in the words "climate change," hit Go, and we would get very different returns. That's because Google personalizes your returns based on everything that it knows about you, and it knows a hell of a lot. It knows where you live. It knows your gender. It knows your age. It likely knows your political profile. It knows what other pages you've looked at, and so forth.
Now, when it comes to making good on our desires or preferences, this is a really good thing. We want Google to be sensitive to our interests. When I search "best pizza place," I want it to bring up places where I live, not places in New York City.
But personalization is a terrible thing when it comes to facts. Those are different from values. We don't want Google to serve up the websites it thinks I'm likely to click on. It would be better if it served up websites that are more reliable or accurate. This is a stunning way in which the Internet has fragmented our access to information. When someone who believes in climate change does a search, they'll get different results from someone who denies the human impact on climate change. So it's no wonder that these two sides can't reach a kind of shared agreement. And so I think you're right, on the one hand, the Internet has this promise of leveling the informational playing field and taking power out of the hands of the political elites, so to speak.
But on the other hand, given that our technology is so personalized, it also renders us vulnerable to being manipulated and exploited by the same political powers who can then tailor their message to exactly what they think we need to hear in order to get our votes or our money or whatever else.
David: Justin, it seems that downstream from the egalitarian voice we have gained through the change in technology is a struggle for power, and all of a sudden we come back to this issue of civility. Part of the outgrowth, again downstream from the egalitarian benefits, is that our voices have to get louder and louder. Hyperbole becomes more common as we try to have our voices heard. Perhaps that's just part of what's baked into the cake.
One of our past guests was a cybersecurity expert out of Princeton, and he said that if you're getting something for free on the Internet, you are the product. Social media is free, except that our profiles and our histories, as you said, are all things that go into customizing, and that detailed data is sold for the purpose of advertising, which is the business model of the social media and search giants. It's advertising. What are the implications of these curated news feeds and customized information for our ability to engage civilly? Does it become an impossibility when we can't agree on what the facts are?
Justin: There's no doubt it's an impediment to our ability to be civil. Just think of it this way. Take all the things that you think are true, and then find someone else who thinks you should vote differently on the basis of those same things. If they know everything you do but vote differently, that must mean they value different things, or that they're evil people, or whatever. If you both start from the same basis of knowledge but respond to it in different ways, the only inference you can draw is that the other person has some kind of character or motivational defect that you lack. After all, you have all the same facts, but you're responding to them in different ways.
The problem is, as you point out, we don’t share all the same facts, and we don’t share all the same facts because we’re all doing different Google searches and watching different news shows and that sort of thing. So I think the cure to this is twofold. One, it’s to seek out better information, and two, it’s to be willing to grant this kind of epistemic humility in your life and in your grasp of the facts, to realize that we’re deeply fragile creatures that miss the truth all the time and therefore hold our views more tentatively, and be willing to seriously engage with someone else with the understanding that you might have it wrong.
In the past, we might have called this being open-minded. I prefer to call it being epistemically humble. Just because you get it right some of the time doesn't mean you get it right all of the time. And when you recognize that, it's much easier to sit down with someone across the table knowing that they were exposed to different facts, knowing that they processed those facts in different ways, and that just because they come to a different conclusion than you, it doesn't mean that they're wicked or evil or anything else.
David: It's a great starting point. And as for better information, you have a pyramid of strategies in the book, which graphically lays out how news sources compare to each other. Could you explore that?
Justin: If you're interested in knowing why we face this kind of fake news epidemic now, one we didn't face 40 years ago, I think there are two key ingredients to the puzzle. One ingredient is the technology piece, which we've already briefly discussed, and the second ingredient comes down to incentives. The key thing to note is that the information market is just that. It's a market.
When we buy toasters, we realize that we’re in a market. We have incentives that we’re acting on. The manufacturers have incentives that they’re acting on. We’re trying to meet in the middle to get a toaster that is a good enough quality and a good enough profit for the manufacturer. And that’s how the free market works.
The free market for information is no different. The people who are producing and disseminating information have an incentive. They want to make a profit. But the fact of the matter is, there are a number of different ways to make a profit in the information market, and many of those ways don’t require you to tell the whole truth and nothing but the truth.
So really, you can think about strategies for people who are in that information business as a kind of pyramid. Toward the top of the pyramid are the business strategies that require you to tell the whole truth, nothing but the truth, in an unbiased way. And then sloping down on either side of that pyramid there are all these different business niches that allow you to get away from telling the whole truth as long as you tell it in some kind of partisan direction.
Let me give you an example of what I have in mind. Suppose you're the Associated Press. The Associated Press sells news stories to news companies and blogs all across the world. They want to make money through sales to these retailers, so the AP is a kind of news wholesaler.
Given that they have an incentive to make money, they have an incentive to sell their stories to as many places as possible. But given that incentive, they had better make darn sure that the stories they're selling are accurate and unbiased. They want to be able to sell their stories both to Fox News and to MSNBC. So the Associated Press has the kind of incentive that puts it at the top of this pyramid, an incentive to be unbiased and to sell high-quality, highly accurate stories.
As you slide down from there, you start moving out on one side of the pyramid or the other. USA Today, a national newspaper, has to sell to both sides, so it's pretty high on this pyramid. MSNBC is definitely down on the left side of the pyramid. Their goal is to build not just a general audience, but a kind of liberal audience. If you can attract just some subset of the market and make them reliable consumers of what you're selling, then that's a business model that will make you money. MSNBC has staked out that business model on the left. Fox News has staked out that business model on the right.
In neither case does the business model require telling the whole truth and nothing but the truth. In fact, telling the whole unvarnished truth, in ways that aren't framed to appeal to your partisan audience, would actually go against their business strategy. They would lose readers, they would lose viewers, and thus they would lose money if they sold news in that way.
So if you’re interested in this idea of why we face this fake news epidemic, part of the story is fake news is distributed in a free market, and in a free market there are a lot of different strategies for making money.
David: It seems to me that there is a degree of bias in any news, even with the AP, whose business model is to sell as close to the unvarnished truth as they can get. You mentioned Ad Fontes Media and its study of various news outlets: AP News would be at the top of your pyramid of strategies, and Ad Fontes Media would also put them at the pinnacle of reliable news sources for original fact reporting.
Yet they still have a stylebook which determines how they frame the news they report. Even outlets like the AP that focus on facts still have biases in how they frame those facts. So there's still a degree of discernment required. We're talking about degrees of bias here; there is no one who doesn't have a lens through which they see the "facts." Everyone does. How colored is the lens?
Justin: Yes, that’s exactly right. So to put it slightly differently, there’s no God’s-eye view of what’s going on for anyone but God. Anyone else who’s trying to tell you what’s going on has a particular background, a particular perspective, a particular way of framing things. And that’s going to affect two things in particular. One, it is going to affect which stories get reported. Just imagine you’re a journalist or a blogger or whatever. There are thousands of things that happen on any given day that would be of interest. You can report on a mere handful.
Well, you can bet, if you're a political liberal, the handful that you pick is going to look very different than if you were a conservative. If, say, you are religious, the handful you pick is going to look very different than if you were non-religious, and so forth. So part of the explanation for how we get bias in reporting is just in which stories get selected. What you find interesting or important is going to be a function of your background.
But then second, even on stories where both conservatives and liberals agree it's an important story to report, they frame stories differently. Framing has to do with how you present some bit of fact to an audience.
Let me just give you an example of how framing can affect your presentation of the news. Suppose the economy's growth rate doesn't change from one quarter to the next. Suppose growth is 2% in the first quarter, and it is 2% again in the next quarter. So there's the unvarnished fact.
How might you report that bit of information? Well, suppose you're a liberal blogger and the sitting president is a Republican. Here's how you might frame that data: "In the 2nd quarter, the economy fails to grow yet again." So you're taking this bit of information, growth was 2% in quarter 1 and 2% in quarter 2, but you're framing it as the economy failing to grow.
Now, consider a different perspective. Suppose you're a conservative blogger reporting that same fact. If the sitting president is a Republican and you want to tell this economic news, here's how you might frame the story: "The economy remains strong in the 2nd quarter." It's the exact same fact, but it's framed in very different ways. And there are all kinds of really interesting studies showing that how information is framed changes how people interpret it, and changes how people respond to it.
So I think for any of us, when we're reporting on any kind of information or passing it along to our friends, A) there's this issue that we select the stories that are important based on our own background, and B) there's this issue that we frame stories in ways that tell the story we want told.
David: This is a lot of what you cover in the second part of the book, which is not so much that there are alternative facts, but there are biases that we bring that allow us to see things differently and maybe even come to different conclusions. And so that’s an important piece to realize that there’s the supply side, but there is also the demand side, and we are the consumers of information, and we are the people who are filtering further what is being provided to us. So maybe you could say something about the demand side of the equation and the potential missteps that we make in the consumption of information.
Justin: Right. Think about the information and misinformation system as a free market. I've said something about the supply side of that market: suppliers of information seek a profit, profit is tied to readership, but readership is tied to all kinds of things beyond truth. It's tied to things like entertainment, and to telling the stories partisans want to hear, and so forth. The other side of this free market equation, then, is the consumer side, the demand side for misinformation. And on its face you might think this is crazy. Nobody wants to be misinformed. Everybody wants the truth. So there's nothing going on here on the demand side of the misinformation market.
That’s a mistake. Let me try to illustrate the mistake by going back to the example about toasters. When you buy a toaster from Walmart, the fact of the matter is, it’s not going to last very long. It’s going to be very different than the toaster you would buy from Williams Sonoma. You might put it crassly by saying that the toaster that you buy at Walmart is going to be of cheap quality, and that’s probably right. And the point is, many of us are okay with that. We know that we could spend three times more and get a toaster from Williams Sonoma that will be higher quality and yet many of us are just perfectly fine with a Walmart-level toaster.
Well, look, the information market is no different. Many of us are just fine with getting something other than the truth and nothing but the truth. We’re fine with getting certain levels of misinformation. We’re fine with even certain levels of disinformation as long as it meets our other needs, our other incentives. So the fact of the matter is people who are consuming information have mixed motives. Economists would say that we don’t have a kind of material cost of error when it comes to much of what passes for fake news.
Let me try to illustrate what they have in mind. Suppose you go to a restaurant and you have to buy every dish that you consume individually. In that case, you have a strong incentive to buy all and only the dishes that you’ll actually enjoy and be able to finish. You don’t want to order extra food and leave it on the table. There’s a kind of material cost to misjudging your appetite in that case.
Compare that with an all-you-can-eat buffet. When you walk into the all-you-can-eat buffet and you’re putting things on your plate, you now have a negligible cost of error. If you put that extra piece of chicken on your plate and you’re too full before you get to it, no big deal, no skin off your back. This is why huge amounts of food are wasted in all-you-can-eat buffets. There’s no cost of being wrong, so you might as well put it on your plate and have the option to eat it later.
Well look. News is like that, too. For many of the things that we believe, there’s no material cost for getting it wrong. Think, for example, of the big kerfuffle about whether President Trump’s inaugural crowd was as big as Obama’s. The photographic evidence clearly shows that it was not, and yet there was this huge dispute between conservatives and liberals about whether the inaugural crowd was the same size or larger.
What’s going on here? Why would anybody endorse that, especially in the face of really clear photographic evidence to the contrary? Well look, what’s the material cost of error for misjudging the size of an inaugural crowd? And the answer is precisely zero. If you have a false belief about the size of that crowd, that doesn’t affect you whatsoever. You don’t become poor. You don’t miss out on opportunities. It just doesn’t matter whether that’s true or not.
And if that’s so, there’s no material cost of error for you getting it wrong. And any time there’s not a material cost of error, we should expect people to behave the way they do with all-you-can-eat buffets. In other words, behave irresponsibly and consume things carelessly.
So on this demand side of the misinformation market, it turns out for many of these things it doesn’t matter to us whether they’re true or false, we don’t have incentives to believe truly, and so we just consume information in irresponsible ways even though the information that we’re consuming is less than fully accurate.
David: To a degree, you could say that the entertainment value of the news fits into that as well, where we say to ourselves, I like it not because it’s true or false, but because it’s entertaining. I know that I’ve done various interviews with Bloomberg and CNBC where, even within the sphere of business journalism, they’ve specifically structured the segment so that there is an organized conflict between the two people they’re interviewing, an MMA-style, mixed martial arts style, of interview.
The question is, do they care what we’re saying? Are they trying to ferret out the truth, or is this really just an excuse for entertainment? And thus you keep people in their seats long enough to sell the advertising spots that go along with it?
Justin: Yes, that’s exactly right. You think about it from their perspective. Which kind of interviews are likely to go viral, the one where people throw chairs at one another, or the one where they have a reasoned discourse? And the sad fact of the matter is, it’s the former. And if it’s the former, then you can’t blame Bloomberg for setting it up in that way. They’re trying to make a profit. Those were their incentives. And then on the flip side, as you pointed out, you can’t blame the consumer who is interested in watching chairs being thrown.
So again, back to this demand side of the misinformation market, it’s actually worse than I suggested just a moment ago. It’s not just that we lack truth-based incentives. That would be bad enough. But we also have perverse incentives that cancel out whatever truth incentives you might have. So A) you lack an incentive to believe truly, but B) you have incentives to be entertained, to fit in with the people around you, to cheer for the tribe that you identify with, and so forth. And those incentives are pushing against whatever limited incentive you have to get to the truth. And it makes us, unfortunately, irresponsible consumers of information.
David: Also on the demand side, maybe you can contrast the approaches and assumptions we bring to the news we read as we engage with big questions and little questions, and what you specifically mean by those terms, as well as some of the blind spots that we have, our biases. As we read, we don’t do ourselves any favors if we don’t look critically at how we are reading. So errors of bias, errors of quality, but maybe you could start with the contrast between big questions and little questions.
Justin: Right. Part of the overall lesson is that if we want to do better we need to be epistemically humble. We need to shed this know-it-all arrogance that permeates so much of our political discourse or our medical discourse, or even our scientific and empirical discourse when you’re thinking about things like pandemics or climate change or whatever. And being more humble means in part that you recognize the limits of what you can reliably figure out on your own.
In the book I draw a distinction between two sorts of questions, little questions and big questions. Little questions are questions that you can reliably answer on your own without any kind of special technology or anything like that. For example, you want to know if it’s hot outside. You can walk outside, and you can pretty reliably determine if it’s hot outside, given the kind of cognitive equipment that you have. On the other hand, big questions are questions that we cannot reliably answer just using our own natural faculties.
An example of a big question is this – has the average world temperature gone up by two degrees over the last century? There’s no way you can walk outside and tell that. Answering that kind of question would be really, really hard. And we need to do a better job of sorting the issues that we face into big questions and little questions, and here’s why. If we don’t sort them in that way, we will be tempted to rely on our natural automatic thinking to try to answer really big questions. And when we do that we run into all kinds of mistakes because our cognitive equipment, while perfectly suited to answer some questions, leads us astray when we try to answer others.
Fake news often involves those kinds of big questions, but pitches them or frames them in ways that tempt us to answer them with our natural thinking, and that leads us into all kinds of mistakes.
David: So then the second part is, as we’re analyzing big questions and little questions, trying to figure out what we bring to the table, what biases do we have in the area of business and economics? This is behavioral finance, the application of heuristics to understanding how we make decisions for allocating assets. In this book, you actually talk about heuristics as well, and that gets to, again, what we bring to the table, the biases, the lens that we uniquely see the world through.
Justin: That’s right. Human minds are amazing things, but one thing they’re not is perfectly reliable calculators. They don’t perfectly, reliably tell us what’s going on 100% of the time, but they do a pretty darn good job most of the time. The problem is the way our minds are wired. Psychologists call these mental shortcuts heuristics, the shortcuts that we take when we’re thinking about things. Even though they work very quickly and they get us the right answer much of the time, they can also lead us astray.
Let me just give you a concrete example so that listeners can see what I have in mind when I talk about a heuristic. It’s just a kind of mental shortcut. One kind of mental shortcut that we take that can lead us astray is something called an anchoring error, or an anchoring bias. Think of an anchor of a boat. Once you drop the anchor, the boat can only move so far around that first spot where it was anchored. It can drift to one side, it could drift to the other, but it’s held in place by that anchor and it can’t get very far away from that.
It turns out human minds are deeply susceptible to things like anchoring bias. Let me give you an example of one of the studies that demonstrated this particular bias.
Suppose I have a bucket that has 500 marbles in it, and suppose I take some random person off the street, and then I ask the person how many marbles are in the bucket. And then I tell them before they guess what my guess is. I bring them in off the street, I say, “I think there are 200 marbles in here. How many do you think are in here?” As soon as I say that there are 200 marbles, that number acts like an anchor in that person’s mind. Their guess will wander to and fro off of 200 rather than coming up with a guess that they would have made on their own without my anchoring them.
So suppose I say, “I think there’s 200 in here, what do you think?” The average participant is likely to say something like 350. They start at 200, they realize that’s too low. They push it up a notch and they end up somewhere in the 350 range. Now suppose, same bucket, I call in somebody else off the street and I say, “Hey, how many marbles do you think are in this bucket? I really think that there’s 900 in here.” Now, the number 900 is functioning as an anchor, and the person’s mind will wander off of that. They’ll say, “Well, 900 is too many.” The average guess in this case will be 650.
So look at this. Same bucket of marbles. I take random people off the street. I anchor some of them by saying 200, putting that in their mind. I anchor others by saying 900, putting that in their mind. And their guesses are wildly different from one another, based off of nothing more than the fact that I anchored them. This happens all the time in the news, and it happens all the time in finance, too. If you’re the one making an opening offer on a house and you’re the one who gets to state the price first, then the other person does all of their thinking based on that initial price point offering. You’ve anchored them in a particular way, which is why you’re at an advantage to be the one to throw that price out first.
That’s an example of a heuristic, and it turns out our minds have all kinds of heuristics, and the people who are trying to sell us misinformation know about those, and they pitch their stories in ways that appeal to those shortcuts misfiring, and that gives us this kind of confidence in what they’re saying, even though it’s not actually right.
David: We’ve talked about information, misinformation, disinformation, malinformation, and we’ve talked about the supply side and the demand side. Toward the end of the book, you start discussing ways that we can improve this, both as consumers, what we might consider, and I wonder if you would share some of those ideas with us.
Justin: First we should be clear about what we mean by “improve the situation.” One thing you might do is improve yourself, to make yourself a better learner, or to make it more likely that the stock of beliefs that you have in your head are true rather than false. Another thing you might do is you might try to improve the informational environment in certain ways. There the goal is not so much to help you, but to help other people, or maybe our country as a whole, to think better about these kinds of issues.
So let me say something about the first sense of improvement. If what you’re interested in doing is being a better thinker, if what you’re interested in is having a higher ratio of true beliefs to false beliefs as you wade through all of the stuff that’s out there, here are four things that you can do, in this order. First, when you read some kind of story on social media, listen to something in a podcast, or see it in a newspaper, when you consume that information the very first thing you should do is check yourself. See what’s going on emotionally and intellectually with yourself when you engage with that piece of information. If you find yourself getting angry, if you find yourself getting outraged, if you find yourself wanting to take some kind of immediate action, those are signs that the affective side of your mind is being engaged rather than the cognitive side of your mind. That’s a sign that this story is being written or pitched in such a way as to engage your emotions, the reptile side of your brain, rather than the cognitive side of your brain. That should make you worry.
Second, you should ask yourself clearly, who benefits from whether you consume this story? If you come to agree with the author, what hangs on that? Do they sell more papers? If that’s right, does some political party benefit? If that’s right, ask yourself who benefits from this story?
Third, you should decide what the central issue is in this article, or in this podcast, and you should ask yourself whether the question being discussed is a little question or a big one. This is important because it will help you to calibrate your expectations for what would count as a good answer. For example, if you’re listening to a podcast from some mother who is telling you that her son got autism because of a vaccination, then you should ask yourself: is the question of whether vaccines cause autism a big question or a little question? And once you’ve decided that it’s a big question, you know that this mother’s testimony should not help settle the question. Her natural cognitive abilities are not the kind of abilities that could reliably answer this question. And so you know we need to look somewhere else to get those kinds of answers.
And lastly, you need to fact check it, and by fact check it, what I mean is deploy all the kind of tools that are taught in media literacy classes to think carefully about what you’re consuming. Open up new web pages, do new searches, go into an incognito window and do a search. Go look at the Wikipedia page for what’s going on. Do a reverse image search to see if someone’s found a stock image and changed it in various ways. There’s an awful lot that we can do to evaluate information that we’re consuming before we just readily accept it.
Those are the four guidelines that I suggest. First, check yourself to see whether you’re being emotionally manipulated in some way. Second, try to figure out who benefits from the spread of this story or your believing it. Third, try to find out whether the issue under discussion is a big question or a little question so you can adjust your expectations for what a good answer would look like. And lastly, fact check it.
David: You had a subtle reference to Aristotle’s virtue ethics when you’re talking about how we engage in virtuous news consumption, and of course, habits of character are things that you practice, and ultimately the more you practice them, that’s what you become. So the question is, what are the habits that you practice to become a virtuous news consumer? I think that was very insightful in terms of reading widely, reading quality and limiting your exposure to bad sources. Are any of those ones that you’d like to expand on?
Justin: The basic idea is this. If we could train ourselves to respond automatically and intuitively to the information with which we come into contact, we’re much more likely to get it right. And a kind of disposition to respond properly is what philosophers would call a virtue. So there are ethical virtues. For example, if you’re an honest person, what that means is it just comes natural to you to tell the truth. It’s just easy. You don’t even think about it. You don’t wrestle with it. It’s just a natural disposition that you have to tell the truth when you’re asked about some particular thing.
If you have the virtue of being courageous, it means that you’re willing to put yourself at risk, naturally and easily for the things that you care about. The courageous person doesn’t sit there and deliberate or think twice, she’s the one who jumps right in and does what needs to be done.
There are also epistemic virtues, virtues that are not aimed at the moral life but aimed at your intellectual life. And so I think, more than following any set pattern of rules, we should just try to develop intellectual character that will allow us to consume information in responsible ways. And, as you know, when it comes to fake news, some of those character traits include reading widely. So don’t just read the Wall Street Journal, read the Wall Street Journal and the New York Times. Don’t just read national newspapers, read national and local newspapers.
It means reading quality. It’s easy to find out whether some particular news source is of one quality or another. That’s a big question, but there are lots of people who have answered that question in a variety of ways. It’s really easy to do Google searches and get information about the overall quality of sources. And once you know that, you should dedicate yourself to reading the ones of higher quality.
And lastly, once you know that a source is making its money by entertaining you, once you know that it’s making its money by outraging you, once you know that it’s making its money by passing on misinformation at the behest of corporate sponsors or whatever, you should avoid those sources. If those kinds of things came naturally to us, then we would be better consumers of the news.
David: Now, there’s the consumption side. There is also the distribution side of fake news, which includes this issue of when you encounter it, what do you do in terms of passing it along? It is fascinating that those over the age of 65 are seven times more likely to share fake political news. I thought it was interesting that age is such a factor. But what might we do to not perpetuate that trend?
Justin: This is a great question because the distribution of fake news often, not always, but often runs through social media. So, for example, if you look at the top 20 fake stories that were shared on Facebook and Twitter in the months leading up to the 2016 election, you’ll see that almost all the shares happened through social media, as opposed to someone reading a story on a website and then forwarding it. And as it turns out, the fake stories actually spread farther and wider in those months leading up to the 2016 election than the genuine ones. So our sharing behavior has an awful lot to do with this.
Let me just say two things about this. One, the convention of sharing a story on social media is poorly defined in the contemporary age. By poorly defined, what I mean is, it’s not clear what your intentions are or what signal you’re sending when you forward a story. Are you saying that you agree with it? Are you saying that it’s interesting? Are you saying that it’s nonsense? Are you just trying to point out what someone else is doing? It’s opaque what it is you’re trying to do when you share a story or re-tweet a story.
And in fact, we’ve seen politicians rely on this ambiguity when they get called out for forwarding or spreading fake news. They will forward a story that comes from some bogus site but makes their opponent look bad. The opponent’s political campaign will call them on it and they’ll say something like, “Oh, I was just re-tweeting it. I didn’t say it was true, I was just re-tweeting it.”
Well look, our convention for sharing it for re-tweeting is ambiguous, and we need to clear that up. So one thing we could do is when we share a story, we could say things like, “I think this is true.” Or, “this has really good evidence for such and such.” Or, “here’s the best argument I can think of against my position.” Or, “this is fake, and I can’t believe people are sharing it.” We should be explicit about our intentions when we share a story.
And then, second, we could just read it. As it turns out, most stories shared on social media, at least the ones that are inflammatory, are shared without people actually clicking on the link and reading the story first. They’re reading the headline and immediately sharing it. In fact, there was a study that demonstrated this by tracking how many times people liked stories or forwarded them versus how many times they clicked on them. The researchers released the study, and it made the news.
And one really clever news agency picked up on it and wrote a headline that said, “Study finds that people share news without reading more than the headline.” And then in the body of the story they literally just included Latin nonsense. So there’s just a long string of Latin prose for the story. But of course, no one would be able to read it, and even if they could, it would be nonsense. Well, they posted it to their news site and guess what? It was forwarded a huge number of times. The very story about the study demonstrating that people forward news stories without reading them was itself forwarded without being read. So one, we could be clear about what it is we share. That would play some role in the distribution of fake news. And two, we could just not share things until after we’ve read and digested them.
David: You raise two really interesting points about speed and volume having a huge impact on the amount of information we encounter. And I wonder if that doesn’t factor into almost an information fatigue, where the only thing that really stands out to us is that which is hyperbole and sensationalized. What sells? What gets someone’s attention with an increase in volume and speed? Is it the somber, or is it the sensational?
There’s a professor in the political science department at Harvard University, Graham Allison, who wrote a book on China/US relations titled Destined for War. And it’s not actually about the inevitability of war. But that’s what sells. It actually is about learning from Thucydides and asking the question, “Is it possible to find another way?” But what sells is the sensational, not the somber. You could actually quote him as saying, “A political science professor from Harvard predicts that we’re destined for war with China,” just on the basis of the title of his book.
So again, I wonder if there’s not this combination of sheer volume meets how do you get anyone’s attention anymore? Because we’re all growing numb to what we’re inundated with. So the deluge is a part of the issue. The deluge of information is a part of the issue. The only thing that gets our attention anymore is the sensational. So I like your idea of being more discerning as to what we read and that we actually do read before we send something on. But how do we gain more sensitivity once again for content as opposed to living our lives leapfrogging from one hyperbole to the next?
Justin: (laughs) That’s right. Well, it goes back to the fact that we have mixed motives. We’re not pure truth consumers. We want to be entertained. We want to be outraged. We want to cheer for our team. We want to signal our identity. A lot of this is posting and re-posting a story to show that I’m one of you, a conservative or a Democrat or whatever. And as long as humans are going to have mixed motives, you can bet that our sharing patterns are going to continue to be ambiguous.
So the question is, what can we do given that humans are messy when it comes to their incentives? And given that the producers of the news have messy incentives as well, what can we do to make it easier to find the truth? Well, here’s one idea that doesn’t come from me. It comes from another philosopher by the name of Regina Rini, and she suggests that social media posts could have reputational scores.
So what if in the lower right-hand corner of your icon on Facebook you had some kind of reliability rating? And when you pass on a bunch of disinformation that’s coming from some Eastern European hacker, your reputational score goes down, and when you pass on high-quality stories that have been vetted across a variety of different political platforms, your reliability rating goes up?
Well, in that case, when your crazy uncle shares some story on Facebook, you can see at a glance what kind of stuff he shares in general. If he’s got this kind of red or negative reputation for passing on fake things, then you know what’s happening. Even though he didn’t signal his intention, he’s probably sending you something for some non-truth related reason, outrage or identity or whatever.
On the other hand, if you’ve got people in your life who are really interested in things like evidence and arguments and trying to be clear and trying to sort the true from the false, their reputational scores would be higher and you’ll be able to tell at a glance that when they share a story, it’s likely to be one that’s at least vetted.
David: As we move towards wrapping up I have a question that relates to the difference between fact and theory because some of our conversation today is related to getting to good facts, but are facts independent of theory?
Justin: I think the answer to that question is yes. Here is the simplest way that I know to think about this distinction. Just take the world of facts to be the world of whatever it is that’s true. There are things that are true. There are things that are false. The things that are true are themselves the facts. Then there’s our grasp of the facts. Our grasp of the facts are our beliefs.
Think about a map being in your head. There’s a world out there. In your head you have a map of the world, and you’re doing what you can to make sure that your map is suitably representative of what’s really out there. If there really is a mountain in front of you, then there is a fact. If your map inside your head says that there is a mountain out there, then you have a belief that corresponds with the facts. You’ve gotten it right. So there are facts. Those are in the world. There are beliefs. Those are in your head.
So where does theory come in? Theory is an attempt to unite the beliefs in your head in a way that they make sense. For example, you believe that pens fall when you drop them. You believe apples fall when you drop them. You believe that the moon moves around the earth. You believe that trees fall when you cut them and so forth. These are a bunch of features of your map of the world.
Enter the theory of gravity. The theory of gravity explains all of those different beliefs in your head. It provides a kind of high-level or abstract explanation for why those beliefs are the correct ones. So the facts are the things in the world, the beliefs are the things in our heads, and the theories are these explanations or stories that we make up to unite the beliefs in our heads. So good theories are good precisely because they’re explanatory. They can take a range of data and unite them in a way that we weren’t able to see before.
David: The reason I ask is because as you move toward the end of the book you have this distinction between little questions and big questions, and when you get to big questions and the need for input, expert opinions, experts still are basing their understanding of the facts on a certain set of theories, and the interpretations that they have of those facts seem theory-dependent. So if the theory should change, those facts have a tendency to shift in the direction of falsehoods.
That’s the nature of Kuhn’s idea of a paradigm shift, a Gestalt shift, where we understood the world to be a certain way, and now we don’t understand the world to be that way. So this is one of the critical aspects I think of when you talk about reading broadly. You may still have a difference of opinion, a theoretical difference about how facts should be treated and dealt with. Is that fair?
Justin: I think it’s fair in two senses. There are actually two different things going on. One, your background theories can affect what you think the facts are. So the theories that unite and explain your beliefs can also give you evidence for further beliefs or help you to shed other beliefs that you think were mistaken.
And so what happens is your theoretical background impacts your map in your head, so to speak, in certain ways. Some things will show up on that map given your background theories. Other things will not. So it’s not that the theory changes the facts. The theory changes what you think the facts are. And so that’s one important difference.
Then, too, your theories can also affect how you think we ought to respond to certain things on the ground. So it’s not just that they change your picture of what is, they can also change your picture about what ought to be. So in other words, theories are norm-laden. And this was actually part of Kuhn’s criticism, too. It’s not just that there was this paradigm shift where people couldn’t go from thinking about the world in one way to another without this kind of gestalt, but also when you think about how we should respond to the world, our values, those are theoretically laden, too.
So our theories are not things that live out in the world. Those are the facts. Theories are things that live in our heads. But it’s true that our theories affect the way the maps in our heads are laid out, and it’s true that our theories affect how we then respond to what we think the facts are.
David: So when I reference a Ph.D. economist from the University of Chicago who has a deep monetarist influence, there are background theories he might use to look at the facts. Contrast that with a Harvard Ph.D. who’s running things through a very Keynesian grid. Is it fair for us to have disagreement about the facts? And isn’t this the point of bringing civility into dialogue, that it’s not that we’re ever going to be in a place where we all agree to exactly the same things, but we have to figure out how to engage each other in a way that’s productive and respectful, regardless of our disagreements?
Justin: That’s exactly right. So our background assumptions, in this case macro-economic assumptions about how economies work, are going to shift our maps in certain ways. When the University of Chicago Ph.D. looks at the world, when she looks at earnings data or whatever, she will see a very different picture than someone with a different economic background. You can feed them, in other words, the same kind of information, and they’ll draw very different conclusions.
The reason they draw different conclusions is because they take those facts to signal different things. Some facts will be important for one school of thought, not so important for another, and so forth. So this just goes back to this more general point that philosophers have worried about, really since Plato. Our observations of the world are theory-laden. This is the point that no one has this God’s-eye view.
When you look at a computer and recognize it as a computer, you brought a whole bunch of sophisticated theory to bear on the mere light that was impinging on your eyes. And so once we get to the point where we realize that everyone is doing this, surely that takes us some way down the road toward being epistemically humble.
If you’re the economist from the University of Chicago, you know full well that had you gone to Harvard, you would have ended up with a different background theory, and you would have seen the world differently. So given that, you should be more tentative about the conclusions that you draw and more willing to engage with someone else who sees the macro-economic world differently.
David: As we wrap up, I know your conclusion is quite hopeful. There is an issue here. It’s a big issue. It’s a problem. It is important because it’s redefining relationships. It redefines the way people within a community engage with each other, tolerate each other, love or hate each other. So this is a very consequential issue that you’re talking about in Beyond Fake News: Finding the Truth In A World of Misinformation. At the same time, you’re hopeful. And what is that based on? What are you seeing that is positive in terms of our way forward?
Justin: Well, first, I’m hopeful because this is not a partisan problem, and I’m so delighted that it’s not. This is a problem for you if you’re a liberal. This is a problem for you if you’re conservative. This is a problem for you if you’re an independent. If you live in a democracy and your fate hangs on the votes of your fellow man, then this is a problem for you.
So one reason I’m hopeful is that there’s this kind of overlap across religious, non-religious, different branches of political philosophy, and so forth. There’s this overlapping concern. This is a problem that touches everyone. And in that sense it’s in everyone’s interest to try to clean it up, and I find that really hopeful. This is something that all of us should care about, and all of us have some skin in the game.
And second, I’m hopeful because we’ve recognized this problem in a way that maybe we didn’t 10 years ago. And then third, I’m hopeful because there are lots of cool ideas for how we might do better. And again, I think of the fake news problem as a kind of problem with our informational environment, and I use the analogy of litter to try to describe what our informational environment is like.
If you look at the environmental progress that the United States has made in the last 50 years and then compare that with the kind of informational litter that I think we face right now, I’m hopeful that we can fix the one just like we made so much progress on the other.
David: You describe carrots and sticks, and one of the most powerful elements there is education, a carrot which you say is available to fight against the fake news epidemic. And if we care, and as you say this is not a partisan issue, then education certainly is a part of that.
I would encourage anyone listening who wants to take a step further in that educational process, both understanding the issue and figuring out how to solve it themselves, to get a copy. Order Beyond Fake News: Finding the Truth in a World of Misinformation as a way of starting a very helpful conversation. Whether that’s a conversation and dialogue in your head or amongst friends and family, you might find it very constructive in an environment which has become hyper-partisan and even seemingly dangerous.
So thank you for adding to the dialogue, Justin, and for taking the time to write a book at a very critical point in our national history.
Justin: Thank you, David. It was a pleasure to be here. We can do a lot better on fake news, and I think this is a good start.