23 April 2022
Earlier I said that when a person’s uncertainty is resolved through false, misleading, or irrelevant information, they suffer from “Information Disorder.” As an abstraction, that was detailed enough for the introductory video, but it lacks the precision to be considered a definition. And to produce an algorithmic solution to a computational problem, we will need a very precise problem definition. This layer will therefore apply some CT rigour to decomposing and defining Information Disorder.
Recall the types of disordered information discussed earlier:
Misinformation: false information that is shared without the intention of causing harm.
Disinformation: false information that is intentionally shared to cause harm.
Malinformation: true information that is intentionally shared to cause harm.
Let us refer to these three categories collectively as MDMI. Now that MDMI has unambiguous definitions, we can recognize two patterns that signal a higher-level abstraction:
Transmissible. MDMI is shared; it has to be disseminated in some way. False, misleading, or irrelevant information that one keeps to oneself may be very bad, but until it is communicated to another person it remains a delusion or a false belief. It becomes MDMI only once it is communicated to others.
Destructive. MDMI causes harm to the person to whom it is communicated. By “causes harm” I simply mean that it leads that person to cognize reality in a way that could lead them to make a decision injurious to their own interests. For example, if the fire alarm went off right now, you would stop what you are doing to investigate or perhaps even flee the building. If another person pulled the fire alarm to get you to leave the building so that they could then burglarize it, you would be a victim of disinformation.
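To make the decomposition concrete, here is a minimal sketch in Python. It is my own illustration, not anything defined in the video or the other layers; the Message fields, the classify helper, and the category names are hypothetical. It encodes the two axes that separate the MDMI categories (veracity and intent to harm) together with the Transmissible requirement that uncommunicated falsehoods remain mere false beliefs.

from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch only: these names are illustrative, not a formal definition.

class Category(Enum):
    MISINFORMATION = auto()   # false, shared without intent to harm
    DISINFORMATION = auto()   # false, shared with intent to harm
    MALINFORMATION = auto()   # true, shared with intent to harm
    NOT_MDMI = auto()         # everything else, e.g. a private false belief

@dataclass
class Message:
    is_false: bool         # veracity axis
    intends_harm: bool     # intent axis
    is_communicated: bool  # the Transmissible requirement

def classify(msg: Message) -> Category:
    """Map a message onto the MDMI taxonomy using the two axes above."""
    if not msg.is_communicated:
        return Category.NOT_MDMI          # kept to oneself: a delusion, not MDMI
    if msg.is_false and not msg.intends_harm:
        return Category.MISINFORMATION
    if msg.is_false and msg.intends_harm:
        return Category.DISINFORMATION
    if not msg.is_false and msg.intends_harm:
        return Category.MALINFORMATION
    return Category.NOT_MDMI              # true and benign: ordinary information

# Example: a falsehood deliberately pushed into circulation is disinformation.
print(classify(Message(is_false=True, intends_harm=True, is_communicated=True)))

Note that the Destructive pattern is deliberately left out of the data model: whether a message actually harms its audience depends on how they act on it, which is harder to capture in a field than the other two axes.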
These patterns begin to demonstrate how malicious actors use the whole spectrum of disordered information as identified by FirstDraft:
The Mis/Dis end of the spectrum is easier to explain, so we'll start there. The goal of a disinformation campaign is to communicate destructive falsehoods to an audience so that the audience then propagates those falsehoods believing them to be true. Put more simply, the goal of disinformation is to become misinformation. Similarly, if a piece of misinformation already circulating in the information environment fits the aims of a particular disinformation campaign, the campaigner will amplify it to help it spread. The two techniques taken together, laundering disinformation and amplifying misinformation, can eventually make it impossible to tell where the falsehoods even began and who is amplifying whom. The point is that the information environment is now polluted with disordered information.
Consider this tweet from the Chinese foreign minister in the early days of the Covid-19 pandemic:
The report the foreign minister is retweeting is from a Canadian website called Global Research, the public face of an organization called The Centre for Research on Globalization in Canada. It has been shown by multiple investigators to have strong connections to The Main Directorate of the General Staff of the Armed Forces of the Russian Federation, better known as the GRU (U.S. Dept of State 30). The report itself is a piece of disinformation that intentionally spread a conspiracy theory claiming the coronavirus originated in the US Army Medical Research Institute of Infectious Diseases at Fort Detrick, Maryland.
Perhaps the Chinese foreign minister was genuine in his retweet of the report and was unknowingly laundering disinformation into misinformation. I doubt it. More likely he knew it was false and was amplifying it. He would do this because such a conspiracy theory helps sow confusion and distrust, which in turn degrades narratives calling for an investigation into a potential Chinese origin of the virus. So, while it is unlikely most audiences were ever going to believe the story about the Medical Research Institute at Fort Detrick, the disinformation still served its purpose, namely, to create apathy amongst the citizens of democratic countries. As Bernal et al. recognize, “The dissemination of these narratives with little to no concrete evidence at such early stages of the virus only functioned to sow doubt in the minds of American citizens and its allies” (14).
That use of the word “narrative” brings us to the other end of the spectrum. Malinformation is the use of truthful information to support a weaponized narrative. Weaponized narrative is still a very new concept, but it can be thought of as a type of asymmetric warfare. The Center on the Future of War, a think tank at Arizona State University, defines weaponized narrative this way:
“Weaponized narrative is an attack that seeks to undermine an opponent’s civilization, identity, and will. By generating confusion, complexity, and political and social schisms, it confounds response on the part of the defender.”
Two examples of malinformation will help illuminate this definition. Perhaps the best known example of malinformation being used to advance a weaponized narrative is the leak of Hillary Clinton's emails during the 2016 US election. Her campaign manager's personal Gmail account was compromised by Russian hackers, who then released very damaging (and truthful) emails between him and his candidate to WikiLeaks. The ensuing scandal focussed almost entirely on the damning revelations in the emails and not on the cyber attack by Russian intelligence services. The political fallout in the US further supported Putin's narrative that Western-style democracy is hypocritical and cannot be trusted.
A second (and very relevant) example of malinformation is the way the Kremlin exploits historically far-right militia groups in Ukraine (Raghavan), such as the Azov Battalion, to support the narrative that Ukraine must be de-Nazified. Though Ukraine is governed by a Russian-speaking, Jewish President who won the 2019 election with 73% of the vote (BBC), and there is absolutely no evidence that any far-right movement in the country has a political foothold (Farley), the Kremlin needs to propagate the narrative of a Nazi takeover because that is how Putin frames the invasion for his own domestic audience. To quote analyst Andreas Umland, “The primary reason that the Kremlin is doing this is because the defeat of the Nazis is the high point of modern Russian history… It is a major reference point for the Russian national identity. ‘We secured the victory over Hitler’ — is a principal source of Russian pride” (Farley).
Looking at these two examples side-by-side, one can see why mainstream conversations about mis- and disinformation are harmfully parochial. Such conversations conceal that the entire spectrum of MDMI serves to advance weaponized narratives. In that context, the truth or falsehood of a given piece of information is secondary to its ability to provoke an emotional reaction. MDMI does not weaponize information. It weaponizes the uncertainty created when reality-based discourse becomes impossible. That is why I favour the vocabulary of “Information Disorder.”
And I propose the following working definition:
Information Disorder is the hostile manipulation of uncertainty.
In the next layer, I will propose a model of the uniquely Russian approach to that manipulation.