Disinformation Handbook: A Concise Guide to Countering Disinformation (2)
Part Two: Tips for protecting yourselves and others, plus an introduction to some disinformation researchers
As the Natto Team discussed previously, disinformation – the deliberate spread of false or misleading information – can do grave political, social and psychological harm to those whom it mischaracterizes and to those who are misled by it.
Part 1 provided some concepts associated with disinformation as well as tactics that information operations use.
Part 2, the present section, suggests ways you can detect disinformation and avoid being harmed or manipulated by it. It lists organizations and people who work to detect and counter disinformation and the techniques they use, including developments related to artificial intelligence.
Part 3 provides links to handbooks on disinformation and how to counter it, as well as to the Natto Team’s own postings.
Introduction: Topsy-Turvy World
If you saw the movie “Oppenheimer,” you might remember this scene near the beginning: as a student in the 1920s, young Oppenheimer “reads T.S. Eliot’s ‘The Waste Land’, drops a needle on Stravinsky’s ‘The Rite of Spring’ and stands before a Picasso painting,” as a New York Times (NYT) review describes the scene. In those turbulent years after World War I wiped out whole generations of youth and erased whole empires from the map, these artists, as well as scientists like Sigmund Freud and Albert Einstein, were upending what people thought they knew about the world, exploring the unconscious mind, the vagaries of perception, and the principles of uncertainty. Surrounded by this ferment, the young Oppenheimer in the movie lies sleepless in his student lodgings, “tormented,” in the NYT’s words, “by fiery, apocalyptic visions” of matter swirling, with no solidity.
That image captured the disorienting atmosphere of a century ago. It also resonates with the disorienting present. Many global events today can leave us confused about what is really going on. We may feel caught in a dilemma between clashing values: for example, disliking China’s human rights violations but not wanting to come to blows with China; deploring Russian aggression against Ukraine but also fearing ramped-up weapons production in our own society (see “China-Friendly Peace Activists”); or remembering the millennia of persecution against Jews but also abhorring the wholesale bombing of Gaza villages.
At these times of uncertainty and mental conflict, we can be especially vulnerable to disinformation. As NattoThoughts noted previously, a study on Russian propaganda by the Brookings Institution, a Washington DC think tank, found that “Russian narratives seem to resonate with U.S. audiences when they find common cause with domestic concerns, exploit ambiguities, and/or obfuscate highly technical topics.” That is, these messages work especially well when people are already confused by complex situations.
How to Protect Yourself and Others
All this talk about disinformation can leave us paralyzed and suspicious. What can you trust? What can you do? In the previous section we mentioned the “liar’s dividend,” in which awareness of deceit can lead people to distrust even authentic sources of information.
The structure of our information environment can make us lose perspective. Media survive by attracting eyeballs – “if it bleeds, it leads” – and often ignore less-exciting stories of things like step-by-step progress or everyday good news. Short-form social media such as X (Twitter) and TikTok accustom users to reducing complex human situations to a few hundred characters or a few seconds of video, making it easy for consumers to jump to conclusions or dehumanize the participants in those situations.
Learn from the Experts:
We can learn from different groups of professionals whose job is to make sense of masses of confusing information from people who might be misinformed or lying. Such professionals include:
Litigators: If you have ever served on a trial jury, you had to weigh the credibility of witnesses. Lawyers and investigators must follow rules that bar hearsay testimony and must observe chain-of-custody guidelines, so that everyone knows exactly where a piece of evidence came from.
Intelligence analysts: As two CIA analysts explained it in a 2018 commentary, “the surest guardian against deception rests between our ears - in our abilities to resist confirmation bias, think independently, and assess information with rational detachment.” They note that intelligence professionals are trained to “fight groupthink, question assumptions, and ensure the credibility of evidence before making conclusions.”
Fact-checkers: A 2018 Stanford study found professional fact checkers to be highly efficient at evaluating digital content. They “engaged in three practices—taking bearing, lateral reading and click restraint—that allowed them to read less but learn more about the topics they investigated.”
Historians: Just like fact-checkers, historians “interrogate” their sources, asking why someone said something to a particular audience at a particular time and place. Like intelligence analysts, they also look at the context of any words or actions to help understand the mindset of the person who spoke or acted at the time. Even people who work in a more “practical” field, such as accounting or computer science, can benefit from the perspective that history, cultural knowledge and the other liberal arts provide. As historian Bret Devereaux put it, “learning in many different forms of knowledge teaches the humility necessary to accept other points of view in a pluralistic and increasingly globalized society.” In short, there is a lot we don’t know, but we can do our best to think things through.
Where to Start
You can take some steps to help spot disinformation and protect yourself from being manipulated by it.
Deep Cleansing Breath: First of all, step back and resist the surge of outrage or other emotion that you might feel upon reading some provocative posting. Maybe even put down the phone and go for a walk. As disinformation expert Nina Jankowicz put it, in a nod to COVID-era social distancing, we need “informational distancing”—stopping to question sensationalist claims—to slow the “viral” spread of disinformation. As the US Cybersecurity and Infrastructure Security Agency (CISA) put it, “Think before you link.”
SIFT Out Disinformation: Next, as we wrote in our first report, “Putin: The Spy As Hero,” the SIFT (Stop, Investigate, Find, Trace) method can be helpful. Numerous universities have adopted the mnemonic SIFT and an accompanying media literacy instructional unit developed by University of Washington digital information literacy expert Mike Caulfield. He explained the mnemonic as follows, with helpful illustrations, in 2020 on his site https://infodemic.blog/. Natto Team additions or suggestions appear in italics.
Stop: “When you feel strong emotion, surprise, or just an irrepressible urge to share something… stop. Then use your other moves…”
Investigate the source.
First, “Let’s Hover!”: use your mouse to hover over the source link in a desktop browser. “When you hover, ask yourself, ‘Is this source what I thought it was? Is this source credible enough to share without any further checking?’” (Note: Caulfield suggests that if you are on a mobile phone, you simply click through to the webpage to investigate it further. Given the risk of landing on a malicious page, it is best to wait until you can check the link on a laptop, where you can hover over it.)
Next: “Just Add Wikipedia”: Search for the domain name of the source (in quotation marks) plus “Wikipedia” to find what Wikipedia says about the source publication. “Does this source have the expertise and/or resources to do original reporting in this area?” In another approach that two Stanford researchers have suggested, “Search the organization’s name along with a canny keyword like ‘funding’ or ‘credibility’.”
Also, scroll through the source’s social media account before resharing a posting; its posts over the past few months can give you a sense of its likely biases and whether it seems authentic. In addition, check the creation date of the social media account; if it is newly created, it may be a sock-puppet account set up specifically to amplify that posting.
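For readers who like to tinker, here is a minimal Python sketch of that account-age check. It assumes you have already read the creation date from the account’s public profile (the helper name and the dates are hypothetical); a young account is only a rough warning sign, not proof of a sock puppet.

```python
from datetime import date
from typing import Optional

# Rough heuristic: flag accounts created only recently relative to the post
# they are amplifying. The creation date is assumed to come from the
# account's public profile page; nothing here calls any platform API.
def looks_newly_created(created: date, today: Optional[date] = None,
                        min_age_days: int = 90) -> bool:
    """Return True if the account is younger than min_age_days."""
    today = today or date.today()
    return (today - created).days < min_age_days

# Hypothetical example: an account created three weeks before the posting it amplifies.
print(looks_newly_created(date(2024, 7, 14), today=date(2024, 8, 5)))  # True
```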
Find better coverage
“Cross-check with news search” to see if another, perhaps more reputable, source is reporting on the same story.
Reverse image search: Someone may misleadingly post an image that was taken long ago or of something totally unrelated to what they say it is. You can use Google Images, TinEye, or other reverse image search services. A brief guide to Google Images is here and a video with more detail is here.
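If you keep local copies of images, a perceptual hash can complement those services by telling you whether a “new” image is a near-duplicate of one you saved earlier. The sketch below is a minimal illustration using the third-party Pillow and imagehash Python packages (the file names are hypothetical); it is not how Google Images or TinEye work internally.

```python
from PIL import Image      # pip install Pillow
import imagehash           # pip install ImageHash

# Perceptual hashes change little when an image is resized, recompressed,
# or lightly edited, so a small Hamming distance suggests the same picture.
def near_duplicate(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

# Hypothetical files: a viral post's image vs. a photo archived from an older event.
print(near_duplicate("viral_post.jpg", "archive_2015_photo.jpg"))
```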
Trace claims, quotes and media to the original context
Check the date: someone may be citing an out-of-date source that has nothing to do with what they say it proves.
“Click through and find”: if someone points you to a source, go to that source and skim through it to check that it really says what they claim it says. You can use Ctrl-F (Cmd-F on a Mac) or the “search in page” tool on your cellphone browser to look for a particular word on the source website.
As you are doing brief research on your sources, here are some tips.
Be careful when opening new links that make too-good-to-be-true claims; bad actors sometimes put malware on sites making such appealing claims. Also, be wary of clicking on a domain ending in .ru, .su, .cn, or .ir; these country-code domains are administered by registries in Russia (.ru and the Soviet-era .su), China (.cn), and Iran (.ir), although the sites themselves can be hosted anywhere. Such sites also often show as “not secure” in the browser, meaning the connection is not encrypted: “Information you enter on the website, such as passwords, credit card numbers, or personal information, could potentially be intercepted by a third party.” (A short sketch after these tips illustrates both checks.)
It is helpful to right-click a link and choose “Open link in new tab” (or hold Ctrl, or Cmd on a Mac, while clicking) rather than simply left-clicking it. That way the new webpage opens in a new tab of the same window, so you will have fewer confusing windows piled on top of each other.
When searching for key terms, you will be more efficient if you choose distinctive terms (“COVID-19” rather than “health”) and put quotation marks around a phrase you want to find, so you don’t get results that match any single word in the phrase.
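As promised above, here is a minimal sketch of the two quick link checks, using only Python’s standard library. The watch-list of country-code domains is illustrative rather than an authoritative block list, and the example URL is hypothetical.

```python
from urllib.parse import urlsplit

# Country-code endings the reader has decided to treat with extra caution.
CAUTION_TLDS = {".ru", ".su", ".cn", ".ir"}

def quick_link_checks(url: str) -> list:
    """Return warnings for an unencrypted connection and/or a watch-listed TLD."""
    parts = urlsplit(url)
    warnings = []
    if parts.scheme != "https":
        warnings.append("connection is not encrypted (no HTTPS)")
    host = (parts.hostname or "").lower()
    if any(host.endswith(tld) for tld in CAUTION_TLDS):
        warnings.append("domain ends in a watch-listed country code: " + host)
    return warnings

print(quick_link_checks("http://example.ru/breaking-news"))  # hypothetical URL
```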
How can you detect an audio or video deepfake? Criminals have been using deepfake or otherwise doctored video or audio communications to defraud unsuspecting people. They might pose as a loved one, claiming to be in jail or held hostage, and demand bail money or ransom. A May 2023 article from govtech.com offers additional suggestions for detecting deepfake video calls. Examples include:
“Have a secret code word that every family member knows, but that criminals wouldn't guess,” in case someone synthesizes the voice of a loved one.
“look for certain clues on video calls, including their supposed paramour blinking too much or too little, having eyebrows that don't fit the face or hair in the wrong spot, and skin that doesn't match their age….”
“Ask the other person in the video call to turn their head around and to put a hand in front of their face….deepfakes often haven't been trained to do them realistically.”
Most alterations of visual images are not deepfakes but “cheapfakes,” made with lower-tech tools such as Photoshop. Advice on how to detect cheapfakes runs the gamut from simple school lessons to high-tech approaches such as CLIP-based Named Entity Swapping. But the first step is the reverse image search, mentioned above, to look for other places where the same image appears and to see whether it was borrowed from an unrelated event.
If You Want to Know More:
Many researchers and organizations work hard to monitor and detect disinformation operations. Major media employ fact-checkers, and organizations like Snopes, Politifact, FactCheck.org, and the Washington Post’s Fact Checker verify or disprove claims made online. Over the past several years, major tech companies and social media platforms have detected and taken down thousands of false personas and disinformation networks. This section briefly describes some tools that specialists use and lists some of the organizations and researchers who study disinformation.
Disinformation Researchers and Their Tools
Many organizations work in the field of detecting disinformation. They analyze media and social media content, asking questions such as whether a social media account has existed for a long time or was created just in time to post a message, whether whole groups of online personas repeat the same verbatim text, and who follows and quotes whom. They also use metadata, image search, and other tools to verify video and images. Technical means to do this analysis at large scale are ever-improving. The evolving toolkit for analyzing the networks and behavior of social media users has included tools such as MentionMapp, the Social Media Analysis Toolkit (SMAT), and VAST (Veracity Authentication Systems Technology) OSINT, now called Hometree Data. Organizations are also increasingly using artificial intelligence in this analysis.
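One of the simplest behavioral signals mentioned above, many accounts posting the same text verbatim, can be illustrated in a few lines of Python. This is only a sketch, assuming you have already collected posts as (account, text) pairs; real toolkits work at far larger scale and with fuzzier matching.

```python
from collections import defaultdict

def verbatim_clusters(posts, min_accounts=5):
    """Group posts by normalized text and keep texts shared by many distinct accounts."""
    by_text = defaultdict(set)
    for account, text in posts:
        normalized = " ".join(text.lower().split())  # collapse case and whitespace
        by_text[normalized].add(account)
    return {t: accts for t, accts in by_text.items() if len(accts) >= min_accounts}

# Hypothetical input: a list of (account_handle, post_text) tuples collected earlier.
# Large clusters returned by verbatim_clusters(posts) merit a closer look at the accounts.
```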
Frameworks
In order to do research at scale and be able to share data, researchers have developed frameworks for systematizing and standardizing disinformation-related data.
They often adapt frameworks that cybersecurity researchers already use, such as the so-called ATT&CK framework that MITRE, a US federally funded research center, has developed to describe and categorize cyber threat tactics. Similarly, Structured Threat Information Expression (STIX) and Trusted Automated eXchange of Intelligence Information (TAXII) provide standard formats for recording and transmitting cyber threat information. Disinformation researchers are drawing on these as they develop frameworks to standardize data so they can compare it and write meaningful analysis.
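As a concrete illustration of what reusing these standards can look like, the sketch below uses the OASIS stix2 Python package to record a suspected disinformation domain as a standard STIX Indicator that could then be shared over TAXII. The domain, name, and description are hypothetical examples, not real findings or any particular organization’s practice.

```python
from stix2 import Indicator   # pip install stix2

# A STIX 2.1 Indicator describing a (made-up) domain observed amplifying a false narrative.
indicator = Indicator(
    name="Suspected disinformation domain (illustrative)",
    description="Domain observed amplifying a coordinated false narrative.",
    pattern="[domain-name:value = 'example-fake-news.invalid']",
    pattern_type="stix",
    valid_from="2024-01-01T00:00:00Z",
)

print(indicator.serialize(pretty=True))  # JSON that other tools and TAXII servers can consume
```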
RICHDATA Framework, a disinformation kill chain.
AMITT (Adversarial Misinformation and Influence Tactics and Techniques) was created in 2019 as a disinformation counterpart to MITRE’s ATT&CK framework and shares the use of STIX and MISP. It has since been superseded by DISARM (Disinformation Analysis & Risk Management Framework), which combines AMITT with MITRE’s SP!CE framework to provide “additional computation and scoring functionality.”
The Online Operations Kill Chain from the Carnegie Endowment for International Peace, a US-based think tank, seeks “to be applied to a wide range of online operations…[including] cyber attacks, influence operations, online fraud, human trafficking, and terrorist recruitment.”
Influence Operations Kill Chain, by Bruce Schneier, looks broadly at influence operations and suggests countermeasures.
The Coalition for Content Provenance and Authenticity (C2PA) develops “technical standards for certifying the source and history (or provenance) of media content.” C2PA brings together Adobe, Arm, Intel, Microsoft and Truepic.
OASIS Open, a non-profit standards body that developed the STIX threat intelligence sharing standard, has developed an open-source “data exchange standard for normalizing and sharing disinformation and influence campaigns,” known as the Defending Against Disinformation Common Data Model (DAD-CDM). Respected organizations in this field are among the project’s dozen or so sponsors.
In the regulatory sphere, in August 2023 the European Union’s Digital Services Act (DSA) went into effect for major social media platforms, seeking to protect the public from harm while preserving freedom of expression. The DSA requires the platforms to evaluate their services – particularly their recommendation algorithms, content moderation, advertisement policies, and “data related practices” – for potential risks to fundamental human rights and public well-being.
The Natto Team notes that any standardized approach risks giving a false veneer of scientific accuracy to facts that may be confusing, cloudy, nuanced, and uncertain. The criteria used to label a message as disinformation can be subjective. (See more under “Caveats” below.)
Truth in a Post-Truth World (or) Disinformation Detectives
The respected investigations organization Bellingcat is associated with the motto “Truth in a post-truth world.” Organizations like Bellingcat, Graphika, the Atlantic Council’s DFR Lab, and the Stanford Internet Observatory [which closed in June 2024] are only a few of the widely trusted names in this area. Below is a non-exhaustive list, in alphabetical order, of researchers and organizations – whether in academia, government, the non-profit sector, or the commercial sector – that analyze disinformation. The Natto Team has not vetted all of these entities, and inclusion in this list does not necessarily imply endorsement. Following this list is a separate list of entities that feature the use of artificial intelligence in their analysis.
The Atlantic Council’s Digital Forensic Research Lab “has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.”
Bellingcat, an internationally recognized investigations group with the motto “Truth in a Post-Truth World”, sets up trainings and provides tools and guides on its website.
Center for an Informed Public (CIP) at the University of Washington, a research center that aims to “resist strategic misinformation, promote an informed society and strengthen democratic discourse.” Headed by Kate Starbird. Mike Caulfield, who developed the SIFT method, has an affiliation with CIP. Its website offers escape rooms and other game-based activities for “building resilience to misinformation.”
Center for Countering Digital Hate, a non-government organization that “works to stop the spread of online hate and disinformation through innovative research, public campaigns and policy advocacy.” It developed the STAR framework, urging legislators to regulate social media on the principles of “Safety by design…Transparency around platform algorithms, rule enforcement and advertising; Accountability to democratic and independent bodies; Responsibility…for omissions that lead to harm.”
Center for Information, Technology, and Public Life (CITAP) at the University of North Carolina Chapel Hill looks at disinformation in the broader context of “how social differences—including race and ethnicity, class, gender, and sexual identity—shape unequal information systems.”
Emmi Bevensee, former University of Arizona PhD student and Mozilla Open Web Fellow, developer of the Social Media Analysis Toolkit (SMAT).
East StratCom Task Force (ESCTF) is a part of the European Union’s diplomatic service and maintains a database of articles it considers to be disinformation.
Tel Aviv-based FakeReporter describes itself as an “Online watchdog group that uses intelligence experts and crowd-sourced research to combat malicious online activity.”
The Global Engagement Center at the US State Department has a “Disarming Disinformation” website with reports on Kremlin disinformation campaigns. It also led a working group in 2022 to develop an International Counter-Disinformation Research Agenda to share with universities and think tanks to inspire research and collaboration.
Kristofer Goldsmith, Chief Investigator and Associate Director for Policy and Government Affairs, Vietnam Veterans of America. Researches “disinfo+extremism among vets+military” according to his bio on X (Twitter).
At Harvard University’s Kennedy School of Government, the Shorenstein Center on Media, Politics and Public Policy says “we study and publish research on the causes of disinformation, how it spreads through online and offline channels, why people are susceptible to believing bad information, and successful strategies for mitigating its impact.” Its publications include “Misinformation Review.”
Between 2019 and 2023, the Shorenstein’s work on disinformation centered on the Technology and Social Change (TaSC) Project. In 2023 TaSC was shuttered, but TaSC publications, including the “Media Manipulation Casebook,” “True Costs of Misinformation,” and “Political Pandemonium 2020,” can still be accessed through links from cached versions of its webpage.
Institute for Strategic Dialogue, a UK-based think tank, says, “We combine anthropological research, expertise in international extremist movements and an advanced digital analysis capability that tracks hate, disinformation and extremism online, with policy advisory support and training to governments and cities around the world.”
International Panel on the Information Environment, patterned on the Intergovernmental Panel on Climate Change, is a new international body dedicated to studying disinformation. This Swiss-based consortium of experts aims to provide “independent scientific assessments on the global information environment by organizing, evaluating, and elevating research, with the aim of providing recommendations for improving the global information environment.”
Nina Jankowicz, American researcher and writer, author of How to Be a Woman Online and How to Lose the Information War, was Executive Director of the short-lived Disinformation Governance Board of the United States.
Marc Owen Jones of Hamad bin Khalifa University, Qatar, researches disinformation and digital media with a focus on the Middle East.
Pekka Kallioniemi provides little biographical information but is associated with NAFO – the North Atlantic Fellas Organization, which Politico called “The shit-posting, Twitter-trolling, dog-deploying social media army taking on Putin one meme at a time.” Pekka creates “Vatnik Soup” postings and “The Soup Central” videos, profiling “pro-Russian actors and propagandists from around the world, be they so-called ‘independent journalists’, politicians, military personnel or just regular grifters looking to get some easy money.”
Dr. Benjamin Lange of Ludwig-Maximilians-Universität in Munich serves as a consultant on digital ethics and responsible innovation, including the ethics of AI and disinformation.
Network Contagion Research Institute offers a proprietary platform to identify and forecast “emerging threats that threaten the economic, physical and social health of civil society,” including disinformation-fueled vaccine reluctance; disinformation operations targeting American businesses; and pro-Iranian anti-Semitic trolling during a May 2021 Israel-Hamas conflict. They offer an interactive tool called Mapping Mistrust, showing COVID-19 related civil unrest across the US.
Nisos, a managed intelligence company, has conducted research on topics such as election disinformation and tracking potential disinformation domains.
Stanford Internet Observatory at Stanford University has published studies such as “Potemkin Pages and Personalities,” a whitepaper on Russian online operations, produced on request of the United States Senate Select Committee on Intelligence (SSCI). Update August 5 2024: strong political opposition has silenced the Stanford Internet Observatory and other anti-disinformation researchers.
Stopfake, an NGO founded by Ukrainian journalists in 2014, aims to inculcate media literacy and “western standards of journalism” in Ukraine through trainings and the Stopfake fact-checking website.
University of Maryland researchers focusing on disinformation include Dr. Caroline Orr Bueno, of the Applied Research Laboratory for Intelligence and Security (ARLIS), and Dr. Sarah Oates of the Philip Merrill College of Journalism.
Virality Project: this “global study aimed at understanding the disinformation dynamics specific to the COVID-19 crisis” brought together researchers from the Stanford Internet Observatory, New York University, the University of Washington, the National Conference on Citizenship, and Graphika.
Using Artificial Intelligence for Verification
The rapid spread of artificial intelligence has shaken up technology and society. Innumerable commentaries share hopes and fears about what it can do. Officials and experts have warned about misleading artificial intelligence-generated “deepfakes” and discussed whether artificial intelligence will “supercharge the age of disinformation.” The Natto Team has already discussed the use of digitally altered images to influence politics and the stock market here.
For their part, white-hat researchers are also putting AI’s abilities to use for large-scale verification of messages and images. Following is a non-exhaustive, unvetted list of organizations that have prominently featured their AI-based approaches. Inclusion on this list does not necessarily imply endorsement.
Alethea offers disinformation-detection tools to businesses. In October it launched Artemis, “an artificial intelligence platform that uses advanced analytics to fight disinformation, misinformation, and social media manipulation at scale….going beyond traditional mainstream social media sources to include alternative platforms, blogs, and forums…to rapidly identify, investigate, and proactively respond to threats targeting customer loyalty, brand and reputation, stock and shareholder value, market capitalization, and physical safety of employees.”
Graphika, a New York-based company, “leverages the power of artificial intelligence to create the world’s most detailed maps of social media landscapes.” It was featured on Time magazine’s 2023 list of the 100 most influential companies.
Hometree Data, formerly known as the VAST (Veracity Authentication Systems Technology) OSINT system, describes itself as “an automated global platform content data ingestion and enrichment system. …designed to tag and track disinformation in real time at scale… [with] natural language processing that works across multiple languages…[and] connection to billions of online sites….”
Logically AI says “We combine advanced artificial intelligence with human expertise to tackle harmful and problematic online content at scale.”
Osavul, a Kyiv-based startup, develops software “to explore disinformation threats, assess them and counter malicious actors.”
New York-based NewsGuard advertises “the trust industry’s most accountable and largest dataset on news. These data are deployed to fine-tune and provide guardrails for generative AI models, enable brands to advertise on quality news sites and avoid propaganda or hoax sites, provide media literacy guidance for individuals, and support democratic governments in countering hostile disinformation operations targeting their citizens.” Shortly after the outbreak of the Israel-Hamas war in October 2023, NewsGuard established an Israel-Hamas War Misinformation Tracking Center, tracking dozens of myths about the conflict and the sites that spread them.
Caveats
Researchers seek to introduce scientific standards and accuracy to the study of disinformation, but it remains a human endeavor, an inexact science. As the Natto Team pointed out in Part 1, the term is often used against political opponents or hastily applied before the full facts of a situation are known.
AI Tools Fallible: Tools designed to spot AI-generated content can make mistakes. In June 2023 the New York Times tested five such tools and found they sometimes incorrectly identified an authentic image as a fake, or vice versa. In addition, the very fact that people are aware of the existence of fakes makes them reluctant to believe in authentic images; Natto Thoughts has previously discussed this phenomenon, which the NYT called a “liar’s dividend.”
Tendentious Fact-Checkers: Not all organizations claiming to be fact-checkers are honestly pursuing objective information. Some are, in effect, influencers-for-hire. For example, the Archimedes Group, an Israeli firm whose website boasts that it can “change reality according to our client’s wishes,” ran false fact-checking pages to influence politics in Mali and Tunisia, according to Buzzfeed. In India, a purported fact-checking network called India vs. Disinformation is actually run by Canadian PR firm Press Monitor, and “Nearly all the posts seek to discredit or muddy reports unfavorable to Prime Minister Narendra Modi’s government,” according to the NYT.
Controversies: Disinformation research involves making judgments and calling out other people for what they say. This is a recipe for controversy. In the United States, disagreements over the balance between dangerous misinformation and free expression, and between government oversight and business autonomy, have spawned charges of censorship, lawsuits, and Congressional investigations. Researchers become “lightning rods” for political controversy and have faced dismissals and death threats.
In Europe, efforts to counter disinformation are somewhat less controversial. In 2022, major social media platforms agreed to a voluntary code of conduct to avoid disinformation. In addition, many large platforms have adopted the Global Alliance for Responsible Media (GARM) classifications to help advertisers avoid associating their brands with harmful and sensitive content. The Digital Services Act, described above, added binding obligations for major platforms in August 2023. However, a study performed for the European Commission found that large social media companies were failing to comply even with the voluntary code of conduct in countering Russian disinformation; the Natto Team summarized the study here. In addition, as legal scholar Alexander Peukert has written, “the delicate task of identifying disinformation is being undertaken by….private organisations whose place of administration and activity, purpose, funding and organizational structure appear problematic….[these are] the ad industry, fact checking organizations and so-called source-raters.”
Updated August 5 2024 to reflect the June 2024 closing of the Stanford Internet Observatory and silencing of other information research organizations.