Cognitive bias mitigation is the prevention and reduction of the negative effects of cognitive biases: unconscious, automatic influences on human judgment and decision making that reliably produce reasoning errors.
Coherent, comprehensive theories of cognitive bias mitigation are lacking. This article describes debiasing tools, methods, proposals and other initiatives associated with the concept of cognitive bias mitigation, drawn from academic and professional disciplines concerned with the efficacy of human reasoning; most address mitigation tacitly rather than explicitly.
A long-standing debate regarding human decision making bears on the development of a theory and practice of bias mitigation. This debate contrasts the rational economic agent standard for decision making versus one grounded in human social needs and motivations. The debate also contrasts the methods used to analyze and predict human decision making, i.e. formal analysis emphasizing intellectual capacities versus heuristics emphasizing emotional states. This article identifies elements relevant to this debate.
A large body of evidence has established that a defining characteristic of cognitive biases is that they manifest automatically and unconsciously over a wide range of human reasoning. Even those aware of the existence of the phenomenon are therefore unable to detect, let alone mitigate, their manifestation through awareness alone.
There are few studies explicitly linking cognitive biases to real-world incidents with highly negative outcomes. Examples:
Numerous incident investigations have determined that human error was central to actual or potential highly negative real-world outcomes; in such cases the manifestation of cognitive biases is a plausible contributing component. Examples:
Each of the approximately 100 cognitive biases known to date can also produce negative outcomes in our everyday lives, though rarely as serious as in the examples above. An illustrative selection, recounted in multiple studies:
An increasing number of academic and professional disciplines are identifying means of cognitive bias mitigation. Notable examples are the field of debiasing, and a model by the NeuroLeadership Institute that categorizes over 150 known cognitive biases into a decision-making framework.
What follows is a characterization of the assumptions, theories, methods and results, in disciplines concerned with the efficacy of human reasoning, that plausibly bear on a theory and/or practice of cognitive bias mitigation. In most cases this is based on explicit reference to cognitive biases or their mitigation, in others on unstated but self-evident applicability. This characterization is organized along lines reflecting historical segmentation of disciplines, though in practice there is a significant amount of overlap.
Decision theory, a discipline with its roots grounded in neo-classical economics, is explicitly focused on human reasoning, judgment, choice and decision making, primarily in 'one-shot games' between two agents with or without perfect information. The theoretical underpinning of decision theory assumes that all decision makers are rational agents trying to maximize the economic expected value/utility of their choices, and that to accomplish this they utilize formal analytical methods such as mathematics, probability, statistics, and logic under cognitive resource constraints.
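The rational-agent assumption can be made concrete with a small sketch. The following is an illustrative example, not drawn from the article: the option names and payoffs are hypothetical, and expected utility is computed directly as a probability-weighted sum.

```python
# Illustrative sketch of the rational-agent model: choose the option
# with the highest expected utility. Options and payoffs are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def rational_choice(options):
    """options: dict mapping option name -> list of (probability, utility)."""
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    "safe":  [(1.0, 50.0)],               # certain payoff of 50
    "risky": [(0.5, 120.0), (0.5, 0.0)],  # 50/50 gamble, expected value 60
}
print(rational_choice(options))  # a rational agent picks "risky" (60 > 50)
```

Descriptive research shows that many people prefer the certain 50 in exactly this kind of choice; it is that gap between model and behavior that the rest of this section concerns.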
Normative, or prescriptive, decision theory concerns itself with what people should do, given the goal of maximizing expected value/utility; in this approach there is no explicit representation in practitioners' models of unconscious factors such as cognitive biases, i.e. all factors are considered conscious choice parameters for all agents. Practitioners tend to treat deviations from what a rational agent would do as 'errors of irrationality', with the implication that cognitive bias mitigation can only be achieved by decision makers becoming more like rational agents, though no explicit measures for achieving this are proffered.
Positive, or descriptive, decision theory concerns itself with what people actually do; practitioners tend to acknowledge the persistent existence of 'irrational' behavior, and while some mention human motivation and biases as possible contributors to such behavior, these factors are not made explicit in their models. Practitioners tend to treat deviations from what a rational agent would do as evidence of important, but as yet not understood, decision-making variables, and have as yet no explicit or implicit contributions to make to a theory and practice of cognitive bias mitigation.
Game theory, a discipline with roots in economics and system dynamics, is a method of studying strategic decision making in situations involving multi-step interactions with multiple agents with or without perfect information. As with decision theory, the theoretical underpinning of game theory assumes that all decision makers are rational agents trying to maximize the economic expected value/utility of their choices, and that to accomplish this they utilize formal analytical methods such as mathematics, probability, statistics, and logic under cognitive resource constraints.
One major difference between decision theory and game theory is the notion of 'equilibrium': a combination of strategies from which no agent can profit by unilaterally deviating. Despite analytical proofs of the existence of at least one equilibrium in a wide range of scenarios, game theory predictions, like those of decision theory, often do not match actual human choices. As with decision theory, practitioners tend to view such deviations as 'irrational', and rather than attempt to model such behavior, by implication hold that cognitive bias mitigation can only be achieved by decision makers becoming more like rational agents.
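The equilibrium concept can be checked mechanically for small games. The sketch below is a hypothetical illustration using the classic Prisoner's Dilemma payoffs: it brute-forces every pure-strategy profile and keeps those from which neither player can gain by deviating unilaterally.

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player game.
# Payoffs are the standard Prisoner's Dilemma; (defect, defect) is the unique
# equilibrium because neither player can improve by unilateral deviation.

STRATEGIES = ["cooperate", "defect"]

# PAYOFFS[(row, col)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    r_pay, c_pay = PAYOFFS[(row, col)]
    # No unilateral deviation by either player may yield a higher payoff.
    row_ok = all(PAYOFFS[(r, col)][0] <= r_pay for r in STRATEGIES)
    col_ok = all(PAYOFFS[(row, c)][1] <= c_pay for c in STRATEGIES)
    return row_ok and col_ok

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # [('defect', 'defect')]
```

Note that the equilibrium is not the mutually best outcome (both cooperating pays more); real players often cooperate anyway, which is one form of the 'irrational' deviation discussed above.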
In the full range of game theory models there are many that do not guarantee the existence of equilibria, i.e. there are conflict situations in which no set of agents' strategies exists that all agents agree is in their best interests. However, even when theoretical equilibria exist, i.e. when optimal decision strategies are available for all agents, real-life decision makers often do not find them; indeed they sometimes apparently do not even try to find them, suggesting that some agents are not consistently 'rational'. Game theory does not appear to accommodate any kind of agent other than the rational agent.
Unlike neo-classical economics and decision theory, behavioral economics and the related field, behavioral finance, explicitly consider the effects of social, cognitive and emotional factors on individuals' economic decisions. These disciplines combine insights from psychology and neo-classical economics to achieve this.
Prospect theory was an early inspiration for this discipline and has been further developed by its practitioners. It is one of the earliest economic theories to explicitly acknowledge the notion of cognitive bias, though the model itself accounts for only a few, including loss aversion, anchoring and adjustment bias, the endowment effect, and perhaps others. Formal prospect theory makes no mention of cognitive bias mitigation, and there is no evidence of peer-reviewed work on cognitive bias mitigation in other areas of this discipline.
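The asymmetry the model captures can be stated compactly. In Tversky and Kahneman's 1992 cumulative formulation, outcomes are evaluated by a value function that is concave for gains, convex for losses, and steeper for losses:

```latex
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0 \\
-\lambda\,(-x)^{\beta} & x < 0
\end{cases}
```

with median parameter estimates of roughly $\alpha \approx \beta \approx 0.88$ and $\lambda \approx 2.25$; the condition $\lambda > 1$ is the formal expression of loss aversion, i.e. losses loom larger than equivalent gains.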
However, Daniel Kahneman and others have authored recent articles in business and trade magazines addressing the notion of cognitive bias mitigation in a limited form. These contributions assert that cognitive bias mitigation is necessary and offer general suggestions for how to achieve it, though the guidance is limited to only a few cognitive biases and is not self-evidently generalizable to others.
Neuroeconomics is a discipline made possible by advances in brain activity imaging technologies. This discipline merges some of the ideas in experimental economics, behavioral economics, cognitive science and social science in an attempt to better understand the neural basis for human decision making.
fMRI experiments suggest that the limbic system is consistently involved in resolving economic decision situations that have emotional valence, the inference being that this part of the human brain is implicated in creating the deviations from rational agent choices noted in emotionally valent economic decision making. Practitioners in this discipline have demonstrated correlations between brain activity in this part of the brain and prospection activity, and neuronal activation has been shown to have measurable, consistent effects on decision making. These results must be considered speculative and preliminary, but are nonetheless suggestive of the possibility of real-time identification of brain states associated with cognitive bias manifestation, and the possibility of purposeful interventions at the neuronal level to achieve cognitive bias mitigation.
Several streams of investigation in this discipline are noteworthy for their possible relevance to a theory of cognitive bias mitigation.
One approach to mitigation originally suggested by Daniel Kahneman and Amos Tversky, expanded upon by others, and applied in real-life situations, is reference class forecasting. This approach involves three steps: with a specific project in mind, identify a number of past projects that share a large number of elements with the project under scrutiny; for this group of projects, establish a probability distribution of the parameter being forecast; and compare the specific project with the group of similar projects, in order to establish the most likely value of the selected parameter for the specific project. This simply stated method masks potential complexity in application to real-life projects: few projects are characterizable by a single parameter; multiple parameters complicate the process exponentially; gathering sufficient data on which to build robust probability distributions is problematic; and project outcomes are rarely unambiguous, with their reporting often skewed by stakeholders' interests. Nonetheless, this approach has merit as part of a cognitive bias mitigation protocol when applied with a maximum of diligence, in situations where good data is available and all stakeholders can be expected to cooperate.
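The three steps above can be sketched in code. This is a minimal illustration under the simplifying assumptions the paragraph warns about (a single parameter, clean historical data); the reference-class figures and budget are hypothetical.

```python
# Sketch of reference class forecasting with one parameter: cost overrun
# as a fraction of budget. All figures are hypothetical.
import statistics

# Step 1: identify past projects that share many elements with this one.
# Their observed cost overruns form the reference class.
reference_class = [0.10, 0.25, 0.40, 0.15, 0.60, 0.30, 0.20, 0.45]

# Step 2: establish a probability distribution of the forecast parameter.
# Here the empirical distribution itself is used, read off at a percentile.
def percentile(data, q):
    ordered = sorted(data)
    index = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[index]

# Step 3: position the specific project within the reference class, i.e.
# adjust its own 'inside view' estimate by the distribution's overrun.
base_estimate = 1_000_000
typical_overrun = statistics.median(reference_class)
cautious_overrun = percentile(reference_class, 0.8)

print(f"median forecast:   {base_estimate * (1 + typical_overrun):,.0f}")
print(f"80th pct forecast: {base_estimate * (1 + cautious_overrun):,.0f}")
```

The choice of percentile encodes risk appetite: a funder wanting 80% confidence of staying within budget reads the distribution higher up than one content with the median outcome.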
Bounded rationality, a concept rooted in the actual machinery of human reasoning, may inform significant advances in cognitive bias mitigation. Originally conceived by Herbert A. Simon in the 1960s and leading to the concept of satisficing, as opposed to optimizing, the idea found experimental expression in the work of Gerd Gigerenzer and others. One line of Gigerenzer's work led to the "Fast and Frugal" framing of the human reasoning mechanism, which focuses on the primacy of 'recognition' in decision making, backed up by tie-resolving heuristics operating in a low-cognitive-resource environment. In a series of objective tests, models based on this approach outperformed models based on rational agents maximizing their utility using formal analytical methods. One contribution of this approach to a theory and practice of cognitive bias mitigation is that it addresses mitigation without explicitly targeting individual cognitive biases, focusing instead on the reasoning mechanism itself so as to avoid the manifestation of cognitive biases altogether.
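The primacy of recognition can be illustrated with Gigerenzer's recognition heuristic, sketched below under simplifying assumptions: the recognized-city set, the comparison question ("which city is larger?") and the trivial tie-breaker are all hypothetical stand-ins for the cue-based heuristics the framework actually studies.

```python
# Sketch of the recognition heuristic: if exactly one of two alternatives
# is recognized, infer that the recognized one scores higher on the
# criterion; otherwise fall back to further (tie-resolving) heuristics.

recognized = {"Berlin", "Munich", "Hamburg"}  # cities the agent has heard of

def recognition_heuristic(a, b, tie_breaker):
    """Choose between a and b by recognition alone where possible."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return tie_breaker(a, b)  # both or neither recognized: use further cues

# A deliberately trivial tie-breaker for illustration: pick the first option.
print(recognition_heuristic("Berlin", "Gelsenkirchen", lambda a, b: a))
# -> Berlin (recognized beats unrecognized)
```

The heuristic uses almost no cognitive resources, and in environments where recognition correlates with the criterion (larger cities are more often heard of), it performs remarkably well, which is the "fast and frugal" point.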
Intensive situational training is capable of providing individuals with what appears to be cognitive bias mitigation in decision making, but amounts to a fixed strategy of selecting the single best response to recognized situations regardless of the 'noise' in the environment. Studies and anecdotes reported in popular-audience media of firefighter captains, military platoon leaders and others making correct snap judgments under extreme duress suggest that these responses do not generalize beyond their domain, and may contribute to a theory and practice of cognitive bias mitigation only the general idea of intensive, domain-specific training.
Similarly, expert-level training in such foundational disciplines as mathematics, statistics, probability, logic, etc. can be useful for cognitive bias mitigation when the expected standard of performance reflects such formal analytical methods. However, a study of software engineering professionals suggests that for the task of estimating software projects, despite the strong analytical aspect of this task, standards of performance focusing on workplace social context were much more dominant than formal analytical methods. This finding, if generalizable to other tasks and disciplines, would discount the potential of expert-level training as a cognitive bias mitigation approach, and could contribute a narrow but important idea to a theory and practice of cognitive bias mitigation.
Laboratory experiments in which cognitive bias mitigation is an explicit goal are rare. One 1980 study explored reducing the optimism bias by showing subjects other subjects' outputs from a reasoning task; their subsequent decision making was somewhat debiased.
A recent research effort by Morewedge and colleagues (2015) found evidence for domain-general forms of debiasing. In two longitudinal experiments, training with interactive games that elicited six cognitive biases (anchoring, bias blind spot, confirmation bias, fundamental attribution error, projection bias, and representativeness) and provided participants with individualized feedback, mitigating strategies, and practice resulted in an immediate reduction of more than 30% in commission of the biases and a long-term (two- to three-month delay) reduction of more than 20%. Training with instructional videos was also effective, though less so than the games.
Evolutionary psychology explicitly challenges the prevalent view that humans are rational agents maximizing expected value/utility using formal analytical methods. Practitioners such as Cosmides, Tooby, Haselton, Confer and others posit that cognitive biases are more properly referred to as cognitive heuristics, and should be viewed as a toolkit of cognitive shortcuts, selected for by evolutionary pressure, that are features rather than flaws, as assumed in the prevalent view. Theoretical models and analyses supporting this view are plentiful. It suggests that negative reasoning outcomes arise primarily because the reasoning challenges faced by modern humans, and the social and political context within which they are presented, make demands on our ancient 'heuristic toolkit' that at best create confusion over which heuristics to apply in a given situation, and at worst generate what adherents of the prevalent view call 'reasoning errors'.
In a similar vein, Mercier and Sperber describe a theory of confirmation bias, and possibly other cognitive biases, that departs radically from the prevalent view that human reasoning evolved to assist individual economic decisions. They suggest instead that reasoning evolved as a social phenomenon whose goal is argumentation: to convince others, and to be wary when others try to convince us. It is too early to tell whether this idea applies more generally to other cognitive biases, but the point of view supporting the theory may be useful in the construction of a theory and practice of cognitive bias mitigation.
There is an emerging convergence between evolutionary psychology and the concept of our reasoning mechanism being segregated (approximately) into 'System 1' and 'System 2'. In this view, System 1 is the 'first line' of cognitive processing of all perceptions, including internally generated 'pseudo-perceptions', which automatically, subconsciously and near-instantaneously produces emotionally valenced judgments of their probable effect on the individual's well-being. By contrast, System 2 is responsible for 'executive control', taking System 1's judgments as advisories, making future predictions, via prospection, of their actualization and then choosing which advisories, if any, to act on. In this view, System 2 is slow, simple-minded and lazy, usually defaulting to System 1 advisories and overriding them only when intensively trained to do so or when cognitive dissonance would result. In this view, our 'heuristic toolkit' resides largely in System 1, conforming to the view of cognitive biases being unconscious, automatic and very difficult to detect and override. Evolutionary psychology practitioners emphasize that our heuristic toolkit, despite the apparent abundance of 'reasoning errors' attributed to it, actually performs exceptionally well, given the rate at which it must operate, the range of judgments it produces, and the stakes involved. The System 1/2 view of the human reasoning mechanism appears to have empirical plausibility (see Neuroscience, next) and thus may contribute to a theory and practice of cognitive bias mitigation.
Neuroscience offers empirical support for the concept of segregating the human reasoning mechanism into System 1 and System 2, as described above, based on brain activity imaging experiments using fMRI technology. While this notion must remain speculative until further work is done, it appears to be a productive basis for conceiving options for constructing a theory and practice of cognitive bias mitigation.
Anthropologists have provided generally accepted scenarios of how our progenitors lived and what was important in their lives. These scenarios of social, political, and economic organization are not uniform throughout history or geography, but there is a degree of stability throughout the Paleolithic era, and the Holocene in particular. This, along with the findings in Evolutionary psychology and Neuroscience above, suggests that our cognitive heuristics are at their best when operating in a social, political and economic environment most like that of the Paleolithic/Holocene. If this is true, then one possible means to achieve at least some cognitive bias mitigation is to mimic, as much as possible, Paleolithic/Holocene social, political and economic scenarios when one is performing a reasoning task that could attract negative cognitive bias effects.
A number of paradigms, methods and tools for improving human performance reliability have been developed within the discipline of human reliability engineering. Though there is some attention paid to the human reasoning mechanism itself, the dominant approach is to anticipate problematic situations, constrain human operations through process mandates, and guide human decisions through fixed response protocols specific to the domain involved. While this approach can produce effective responses to critical situations under stress, the protocols involved must be viewed as having limited generalizability beyond the domain for which they were developed, with the implication that solutions in this discipline may provide only generic frameworks to a theory and practice of cognitive bias mitigation.
One technique particularly applicable to cognitive bias mitigation is neural network learning and choice selection, an approach inspired by the structure and function of biological neural networks in the human brain. The multilayer, cross-connected signal collection and propagation structure typical of neural network models, in which weights govern the contribution of signals to each connection, allows very small models to perform rather complex decision-making tasks with high fidelity.
In principle, such models are capable of modeling decision making that takes account of human needs and motivations within social contexts, and they merit consideration in a theory and practice of cognitive bias mitigation. Challenges to realizing this potential include accumulating the considerable amount of appropriate real-world 'training sets' for the neural network portion of such models; characterizing real-life decision-making situations and outcomes well enough to drive models effectively; and the lack of a direct mapping from a neural network's internal structure to components of the human reasoning mechanism.
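The "very small models, complex tasks" point can be illustrated with a toy network. The sketch below is not drawn from the article: the architecture, learning rate and data are arbitrary stand-ins, and XOR is chosen only because no single-layer model can represent it.

```python
# Toy two-layer neural network trained by gradient descent to learn XOR.
# Weighted connections between layers are the 'weights govern the
# contribution of signals' idea described above.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(10_000):
    hidden = sigmoid(X @ W1 + b1)              # forward pass
    output = sigmoid(hidden @ W2 + b2)
    losses.append(float(np.mean((output - y) ** 2)))
    # backward pass: mean-squared-error gradients through both layers
    g_out = (output - y) * output * (1 - output)
    g_hid = g_out @ W2.T * hidden * (1 - hidden)
    W2 -= hidden.T @ g_out; b2 -= g_out.sum(axis=0)
    W1 -= X.T @ g_hid;      b1 -= g_hid.sum(axis=0)

print(np.round(output.ravel()))  # typically converges to [0. 1. 1. 0.]
```

The contrast with the challenges listed above is instructive: here the 'training set' is four rows and the outcome is unambiguous, whereas real-life decision data is scarce, noisy and contested.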
Software engineering, though not focused on improving human reasoning outcomes as an end goal, is a discipline in which the need for such improvement has been explicitly recognized, though the term "cognitive bias mitigation" is not universally used.
Another study takes a step back from focusing on cognitive biases and describes a framework for identifying "Performance Norms", criteria by which reasoning outcomes are judged correct or incorrect, so as to determine when cognitive bias mitigation is required, to guide identification of the biases that may be 'in play' in a real-world situation, and subsequently to prescribe their mitigations. This study refers to a broad research program with the goal of moving toward a theory and practice of cognitive bias mitigation.
Other initiatives aimed directly at a theory and practice of cognitive bias mitigation may exist within other disciplines under different labels than employed here.