Mike Caulfield’s Twitter profile states he is “radically rethinking how information literacy is taught.” He has had a lot of experience doing just that since he first designed educational games, created educational wikis, and co-founded a 5,000-member online community, Blue Hampshire. He took his interests in civic media to positions as an instructional designer at Keene State College and as the director for the OpenCourseWare Consortium at MIT before becoming a national figure in promoting a practical and effective approach to digital literacy. 

Currently, Mike is the director of Blended and Networked Learning at Washington State University Vancouver, and directs the Digital Polarization Initiative (Digipo), a cross-institutional initiative to improve civic discourse by developing web literacy skills in college undergraduates as part of the American Democracy Project. Digipo has reached thousands of students, both through Mike’s collaboration with faculty in its formal nine-school, 50+ section institutional pilot and through the use of its materials at dozens of other institutions.

We caught up with Mike in May 2019 to ask about his approach to learning web literacy, why it matters, and why we should help students use his heuristic for making good decisions in an era when information is abundant and attention is scarce. (Interview posted: May 31, 2019)

PIL: There have been countless initiatives launched since 2016 to address the “fake news crisis.” In a recent interview with the Leading Lines podcast, you talked about the problems with checklist approaches for evaluating sources. Your “four moves and a habit” heuristic focuses more on understanding that “the truth is in the network.” How does practicing these moves change how students think about information? How are you adapting the courses to new developments like video fakes? 

Mike: Let’s start with the first question. Most approaches out there push students to interrogate the thing in front of them by looking deeply at it. In conversations I’ve had with researcher Sam Wineburg, he’s talked about these as “recognition heuristics”—does this look like misinformation?

This is a flawed approach on at least three levels. First, when there are only one or two things you need to pay attention to, recognition strategies are useful. But these strategies in use, whether “critical thinking” approaches or library “checklist” approaches, ask students to weigh dozens of different criteria. And that produces what I call the “sleazy car salesperson” effect.

You know when you go in to buy a car and you just want automatic braking, a rear camera, and satellite radio? What happens? Well, they’ve got a car that has all three, but it’s also the luxury sedan and it’s $8,000 more. You do get seat warmers and remote start, which are worth x hundred dollars, and the new side collision detection that’s at least $1,000 as an aftermarket addition. Or you can get this other car that has two features but not the third, and if you add on the aftermarket cost it’s less expensive than the luxury car, but you don’t get that side collision detection, etc., etc. And…well, it’s exhausting even to read that, right?

Car dealers set up the cars on the lot this way intentionally, so that you can never weigh a couple of factors in isolation, which leads to us being cognitively overwhelmed and making bad decisions. The effects of this sort of criteria overload are well documented, going back decades (see, for example, Barry Schwartz’s The Paradox of Choice). Yet methodologies like CRAAP set up processes in the same way. So on our pre-assessment, for example, we have students evaluate a statistical claim about firearm background checks. The sort of response we see repeatedly is, “Well, it’s a .org so it’s likely to be trustworthy, but it has an ad on the page so maybe not.” This is “car lot” cognition. There’s a tidal wave of conflicting signals, so in the end we throw up our hands and say, “Who can know?”

So that’s thing one—these other approaches address the problem of cognitive overload by giving students techniques that increase cognitive overload rather than reduce it. Students then glom on to the most salient signals and end up, in many cases, making worse decisions than if we had taught them nothing at all.

The second way this fails is that the signals students are taught to look for are easily counterfeited or outright wrong. The answer above is an example of that, too. At some point the student was taught—actually taught by someone—that “.org’s are more trustworthy than .com’s” and that pages with ads (e.g., The New York Times, Wall Street Journal, and CNN) are less trustworthy than those without (such as the Nazi site Stormfront.org). And this failure isn’t hypothetical; we see it throughout our pre-assessments.

But even where the signals are a bit more correlated with reliability, they lead students astray. So one thing students have been taught in the past is to look for markers of professionalism. One of our prompts is from a conspiracy site that presents itself as a news site. And so what the students say is, “Well, there are no typographical errors and it’s well written with a professional layout—it must be a news site because it looks like a news site.” We also have used the SHEG coal video prompt in the past. This is a video produced by a coal industry group about the benefits of coal. But the students never even notice or comment on that—what they say is, “Look, it has statistics and it’s very professional.” Others say they don’t trust it, and you’re excited to see that until you read the reason: “They are appealing to emotion and the graphics don’t look professional to me.”

Look through a list of the markers that students are currently taught and just ask the question, “How expensive is it for someone to fake?” It’s very easy for someone to set up a nice looking site and use spell-check. It’s easy for someone to use deceptive language on an “About” page.

The third level where this fails: when you ask students to engage deeply with disinformation on a regular basis, you are likely harming them. This point is more complex than I can fully make here, but there’s a lot of evidence backing it up. If something is disinformation, you want to stop reading it as soon as possible, not give it more attention.

So what’s different with our approach? Well, our four moves—which we now refer to by the acronym SIFT—move students from a recognition heuristic to networked reputation heuristics, and from thinking about to doing. The moves are:

  • (S)top.
  • (I)nvestigate the source.
  • (F)ind better coverage.
  • (T)race claims, quotes, and media to the original context.

Initially you might think that these just get us back to the car lot cognition problem. For example, Investigate the Source asks students to do a quick Wikipedia or Google News search to get information about the source they are looking at. But the point is not an hour-long investigation. The point is to quickly see whether what this source is surprises you. That’s actually a really simple rule of thumb: “I thought this ‘cancer clinic’ was a hospital, but it turns out it’s run by someone known as the ‘Cancer Quack’ using methods that got their originator convicted of manslaughter—maybe my initial impression wasn’t all that sound.” People are lousy at figuring out which signal on a page to pay attention to out of three dozen, but they are pretty good at recognizing things that surprise them. And that surprise can be a powerful key as to what to pay attention to.
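To make the spirit of that 30-second check concrete, here is a minimal sketch of what a quick “who is this source, really?” lookup might look like if expressed as code. This is only an illustrative analogue—the move itself is a quick human search, not a script—and it assumes the requests library, Wikipedia’s public page-summary endpoint, and a hypothetical organization name chosen purely as an example.

```python
# Illustrative sketch only: a rough programmatic analogue of the
# "Investigate the source" move (the real move is a quick human search).
# Assumes the `requests` library and Wikipedia's public REST summary API.
import requests


def quick_source_check(source_name: str) -> str:
    """Fetch the short Wikipedia summary for a source, if one exists."""
    title = source_name.strip().replace(" ", "_")
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        # No article found; the absence of coverage is itself useful context.
        return f"No Wikipedia summary found for {source_name!r}."
    return resp.json().get("extract", "")


if __name__ == "__main__":
    # Hypothetical example: a quick look at who a polling source actually is.
    print(quick_source_check("Public Policy Polling"))
```

The point of the sketch is the same as the classroom move: a single, fast reputation lookup, then a check for surprise, rather than an exhaustive investigation.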

I won’t go into all the moves here, but we’ve been happy to find that they have been surprisingly good at dealing with new trends in the disinformation space. With both deep fakes and shallow fakes, for example, the “T”—trace claims, quotes, and media to the original context—turns out to be the best defense. Take a “shallow fake” like the recent Pelosi video: you know it’s altered because when you look at the original video it doesn’t look like that. Similarly, “find better coverage” encourages students to just ignore the video that reached them and do a Google News search to see what the news consensus is on the video—in this case, a 10-second search quickly alerts you to the fact that it has been altered.

None of this replaces the work you have to do afterwards. But it’s the important sifting and contextualization you have to do before deeper engagement.

PIL: During the 2018 Project Information Literacy News Study, we found slightly more than half of the nearly 6,000 students surveyed weren’t confident they could tell fake news from reliable reporting, and more than a third said they doubted the credibility of all news. You’ve written about the importance of learning where to put your trust. What’s the best way to approach teaching trust when so much schooling focuses on being critical of sources, and a significant percentage of Americans are convinced mainstream news is by definition “fake”? 

Mike: I’m incredibly glad you asked me this question, because I’ve seen two presentations of this in the media and both drive me a bit batty.

The first is that “media literacy” makes people more cynical. Well, fine, but what exactly is “media literacy”? This isn’t how we talk about things in education. It’s like saying, “History makes people more racist,” or, “Sociology makes people more tolerant.” Sure, maybe—but what method of teaching it produces those results? Are there methods out there that do better or worse with this? The object of study when we look at education isn’t disciplines or learning goals, but those things taken together with specific methods of teaching them.

On the other hand, people often defend certain approaches to media literacy by stressing how the learning goals of this or that approach address concerns about cynicism. That seems to me less misguided, but it still misses the mark, because what we see in our pre-assessments is that what people thought they taught students and what students actually learned are two very different things. Goals are not enough.

As far as trust, we’ve found that our approach increases trust in trustworthy prompts, and decreases or maintains a low level of trust in untrustworthy prompts.

The reasons for that are partially course design (we make sure that we train students on at least as many true prompts as false prompts), but the biggest reason is we encourage students to limit their investigation to the depth required for their purpose.

We don’t have students look for photoshopping on images, but some of what we’ve seen there applies here, at least as an analogue. Imagine we give two people a photograph and ask them to verify whether it is real, but we give one person 30 seconds and the other 20 minutes. Who is more likely to say it is fake, even if it isn’t? The person who is given a long time with it. Why? Because all the stuff that makes it look credible is discovered pretty early on, but as you look at it intensely over 20 minutes there’s a lot of stuff there that just doesn’t seem right. Not slam-dunk wrong, mind you, but dubious. Again, this is “car lot” cognition. You get overwhelmed by all sorts of details and you can’t correctly weight them. Stare at anything long and hard enough and it will look fake.

The web works like this, too. People complain about Google, but the truth is it’s good enough that, in general, you find credible stuff fairly early on. “Hey, this is a major newspaper, founded in 1859. They won awards for reporting. They are highly cited.”

If you just want to know, “Did this thing reported likely happen or not?” at a decent confidence level that’s maybe enough for your purpose.

But dig deeper—here’s something they got wrong last year. One of their reporters was fired for sexual harassment. Here’s an article from a blog saying that they are Soros-funded. Here’s an article saying that Soros charge is false. Here’s something weird about Chinese interests in the paper.

It’s not that the above stuff is unimportant. One can advance a very credible claim that a culture of sexual harassment and unfair reporting on female political candidates go hand in hand. Foreign interest in U.S. media ownership is something to keep an eye on. But bit by bit, a simple question a citizen has about whether they can trust well-sourced reporting on whether Arctic ice is disappearing turns into cognitive overload. And the usual response to that is cynicism. There are too many data points, etc. Even worse, the least relevant and most dubious data points are the ones found last, so due to the recency effect, fringe stuff looms larger than your initial finding that this is a well-known national newspaper or a respected Arctic research center.

If you want to reduce the media cynicism, you have to reduce the overload. That’s hard for academics to grapple with sometimes, since we want to think of every question as a master’s thesis waiting to happen. But you start with the environment students will be applying the skills in, then you provide techniques suited to that environment. If you provide techniques ill-suited to that environment, students will either fall back on disempowered cynicism or choose an alternative heuristic like “both sides” or “non-profit equals good, for-profit equals bad.” Everything we know about human behavior says that students will fall back on a rule of thumb, one way or another. It’s just a question of which rule of thumb they adopt.

PIL: Tell us about the Digital Polarization Initiative. In the Leading Lines interview you noted that students not only scored better on the post-class assessment, but that they completed the work faster. Do you see a way to scale up this approach? Are there plans for checking back to assess long-term effects?

Mike: So yeah, let’s talk about the good stuff. Educational interventions have been tested with successful results. We ran a high-fidelity one at my school where we doubled student accuracy on prompts in about four hours of instruction plus some online homework. Those numbers are actually better than they seem, because when we looked at why students got things right, we found that a lot of students were rating things as trustworthy or dubious on the pre-assessment based on emotion or on meaningless signals like whether something was a .org or a .com. On the pre-test their reasoning often looks like, “There’s a lot of ads, so likely fake,” or, “It’s believable. There are many terrible things in the world” (an actual quote). On the post-test they’re saying things like, “The group is an advocacy group from the left wing so it can be a little biased. However, the polls are from Public Policy Polling which seems to be a legit organization.” Not all students get to that point, but the majority move in that direction.

We are still finalizing our report, but we did have an independent study of its effectiveness at one of our campuses (CUNY Staten Island). In that instance, researchers Patricia Brooks and Jessica Brodsky found that the intervention increased students’ use of lateral reading strategies from an average of 0.16 to 1.79 prompts (out of four), with 72% of students showing gains. This was against a control using a traditional media literacy curriculum, where only 5% of students showed gains and there was actually a slight decrease in the average number of prompts on which students applied lateral reading strategies.

Can it scale? Importantly, the CUNY Staten Island test did use our online materials, but the in-class instructors were only lightly trained in the methods. CUNY Staten Island is putting together a test of the online materials alone, to see if they can have an impact on their own. If we see a significant fraction of the impact we saw more generally from the online materials in isolation, I have a high degree of confidence that we can scale this by having the online materials work the skills and the teachers work on the dispositional aspects.

We also have a project—the Check, Please! Project, funded by RTI International and the Rita Allen Foundation—that allows people to make short, shareable tutorials on how to do this. So part of scaling is also helping our students educate others, and projects like Check, Please! are making that possible.

We would love to check back for long-term effects, but we haven’t yet. We did do the testing three to four weeks after the intervention to allow for some skills decay, so we know that the skills persist at least that long.

PIL: You mentioned on Twitter in April (2019) that there’s no natural disciplinary home for learning web literacy. While librarians claim some ownership of web literacy in their instructional work, their access to students is largely dependent on faculty and administrators who may have other priorities. In an ideal world, how would you like to see higher ed address the need for civic education about non-academic information networks?

Mike: In my ideal world, I’d love to think about web literacy the way we think about writing: targeted instruction by professionals, reinforced through practice in many different disciplines. Students would not only engage with research in their discipline, but also with public and civic information around their discipline. As an example, we have a neuroscience capstone class here at WSU where students choose science clickbait to look into and develop a presentation around it, but we also have embedded more direct training in one of our first-year experience courses. The approaches that have resonated with institutions in our pilot draw heavily from Writing Across the Curriculum (WAC) initiatives, which use this sort of hybrid approach.

In reality, there are some challenges above and beyond what WAC deals with, since most faculty do have training around writing (even if they haven’t been taught how to teach it), but very few faculty have training in the web. And as I mentioned in response to an earlier question, a lot of what faculty do “know” is actually harmful.

For the immediate future, we’re looking at two paths forward. The first is to partner with library staff, who are very supportive of this work and more than willing to put in the curricular redesign work. The second path forward we’re testing is to put a lot of the skills work into freely available online modules and have the faculty deal with the more dispositional aspects. These are stopgap measures, but given where we’re at, stopgaps are incredibly important.

Longer term, we have to make sure every education degree comes with training in these skills. I know the teacher education curriculum is already jam-packed, but graduating teachers who don’t know how to use the web at this point is just not an option. I think we also have to attach these skills to things like the Connected Learning movement.

PIL: Crowning a decade of research on how college students interact with information, PIL embarked on a new study, Information Literacy in the Age of Algorithms. It’s hard for professors and librarians to keep up with the ways our information environment is increasingly influenced by Google, Facebook, and other platforms that rely on trade-secret computer code that shapes what we see. On Twitter and your blog, you reflect often on how bad actors can manipulate our understanding and how algorithmic solutions don’t always help. What are the most critical gaps in people’s knowledge about how our current information environment influences the information we encounter? What can we do about it, not just as educators but as citizens? ​

Mike: Critical digital literacy is—well, critical. Students need to think about where information comes from, and deal with issues of power, bias, and agenda in that. Most importantly, they have to realize that no technology is neutral. Technologies are designed, and designs encode values—for better or worse. In our classroom discussions, we draw on work from Safiya Noble, Ruha Benjamin, Chris Gilliard, Whitney Phillips, Jessie Daniels, and Joan Donovan. For people wanting to discuss these things with students, we have an embarrassment of riches—not only stunning scholarship, but accessible, socially engaged public scholarship. So my first advice to any educator is to plug into those communities and listen.

As far as algorithms, I focus less on the algorithms themselves and more on the understandings around environments and incentives that make those black-box algorithms legible.

The biggest one is the centrality of attention in a world where publication is ubiquitous and cheap. Herbert Simon said it best back in the early 1970s—an abundance of one thing creates a scarcity of whatever it consumes, and information consumes attention. So when we look at something like free speech, for example: is letting disinfo-bots flood out the message of activists “free speech” because it’s unregulated? Or is it anti-free speech because it denies people access to the only sort of speech that matters—speech that has a chance of capturing attention? Once you understand that attention is the scarce resource now, these questions look very different.

Framing is another central concept. What we find more and more with disinformation is that some of the 2016 techniques around outright fakery have become harder to pull off. So we’re back to some older techniques, but they are just as damaging. You’ll have a story or an image or a quote, and maybe it’s about something consequential. But the story that reaches you—via link, or headline, or meme, or Google result—has the most inflammatory or deceptive framing of any version. That’s an issue at the intersection of humans and algorithms; as the story moves across the web it gets sharpened and reframed, and algorithms reward that.

Data voids are another concept students need to be aware of—the problem with searching on terms such as “black on white crime.” Search is tuned to vocabulary, and terms that are specific to a subgroup (e.g., antivax groups or white supremacist groups) tend to return results disproportionately from those subgroups. Google has gotten much better about this over the past few years, but there will always be a gap. I think we also need to be thinking more about algorithms and influencers, and how those two elements work together to produce a type of “precarious gatekeeping” that looks very different from what we’ve seen in the past.

Regarding what we can do as educators and citizens, I agree with Safiya Noble and others that we should be funding alternative public information architecture, since there’s a good case to be made that the current financial models are too corrosive. That doesn’t mean complete replacement, in my opinion—the establishment of libraries was not the end of bookstores. But there should be options.

I also think we need more algorithmic transparency, and we need students to think about what that transparency should look like, and through what social and political mechanisms we should obtain it. I’m a regulation hawk in this respect, but I’m happy for students to come to any informed opinion.

Finally, a cautionary note. I think people get too obsessed with the secretiveness of the search or recommendation code itself. There are very good reasons not to make code public or even semi-public, because doing so makes the platform more easily gamed by bad actors. Looking at how the code is tuned—the QA process, the training sets, and the broader concepts the model incorporates—these are things that are often available to us now, and yet underused by educators.

I can’t tell you how many times I’ve heard someone talking about Google’s black box algorithm, who is somehow completely unaware that the tester protocols that tune that algorithm have been publicly available for years, and are arguably more important to look at than the code itself. It’s important that we talk about algorithms from a stance of empowering students. Otherwise “black box algorithm” simply becomes another term like “deep state” or “shadowy global elites,” which bad actors use to spin deceptive narratives. If there is one thing I have learned in this work, it’s that it’s completely possible for well-intentioned educators to make things much worse. As always, the important thing is not what you teach students, but what they learn, and the two things are often not aligned.


Mike Caulfield works to change how we think about and teach online media literacy; he blogs insightfully at Hapgood and shares his thoughts on Twitter. He is the author of an open access textbook, Web Literacy for Student Fact Checkers … and Other People Who Care About Facts (2017), which NPR has said “is relevant to everyone.”

Mike is a widely recognized expert in civic literacy and digital citizenship, informal learning, online communities, and open educational resources. His commentary and insights have appeared in NiemanLab’s Predictions for Journalism, the Observer, and Teaching in Higher Ed. The Digital Polarization Initiative that Mike directs is a broad, cross-institutional project to improve students’ civic digital literacy by teaching them to fact-check and contextualize information they encounter online, as well as alerting them to mechanisms bad actors use to sow confusion and create social discord.

Smart Talks are informal conversations with leading thinkers about new media, information-seeking behavior, and the use of technology for teaching and learning in the digital age. The interviews are an occasional series produced by Project Information Literacy (PIL). This interview with Mike Caulfield was made possible with generous support from the Knight Foundation. PIL is an ongoing national research study about how students find, use, and create information for academic courses, for solving information problems in their everyday lives, and as lifelong learners. Smart Talk interviews are open access and licensed under Creative Commons.

Suggested citation format: “Mike Caulfield: Truth is in the network” (email interview) by Barbara Fister, Project Information Literacy, Smart Talk Interview, no. 31 (4 June 2019). This interview is licensed under a Creative Commons Attribution-Non-commercial 3.0 Unported License.