Cognitive bias and community resilience

Beliefs are not like clothing: comfort, utility and attractiveness cannot be one’s conscious criteria for acquiring them.

It is true that people often believe things for bad reasons – self-deception, wishful thinking and a wide variety of other cognitive biases really do cloud our thinking.

Sam Harris

Have you ever tried to convince your boss, your spouse, or someone else of something?  And found your blood pressure rising as you thought to yourself, “Why can’t he or she keep an open mind?”  You may have been a victim of the other person’s cognitive biases (of course, there’s always the possibility that you were wrong!).

When we receive new information, we try to fit it into our existing mental models – the patterns we have formed to help us organize information.  These patterns are important and useful because they help us rapidly respond to threats.  However, sometimes our existing mental models act as barriers to incoming information, especially if the new information doesn’t fit an existing pattern very well.  When our mental models filter or distort incoming information in this way, the result is cognitive bias.

Community leaders are human.  They are just as subject to cognitive bias as anyone else.  That means they may under- or overestimate risks facing the community, ignore potential solutions to the community’s problems, or accept “solutions” that simply won’t work.  Thus, cognitive bias can have profound impacts on a community’s resilience.  In this post, I want to explore some common kinds of cognitive bias in a community context.

Perhaps the most important kinds of cognitive bias are what I call “delusions of competence.”  These appear in many different guises.  Sometimes we ignore new information because we don’t trust the source.  The messenger may be our political opponents.  (For example, a recent paper found that most Republicans who didn’t believe in climate change cited its promotion by liberal politicians as a primary cause of their disbelief.  The denial by progressive politicians [now there’s an oxymoron!] of recent revelations of Iranian nuclear misdeeds may have a similar cause.)  We may think we’re smarter than the messenger, or better at making decisions, or at predicting the future.  However it appears, this type of cognitive bias usually causes us to discount or ignore new information.  It introduces blind spots in our thinking.

Another type of cognitive bias arises because humans are social animals.  Most of us want to be part of “the group” (whatever that is).  If everybody (or nobody) in the group thinks X, then we feel we should think the same.  Or we let our instincts be overridden by trying to be politically correct, or polite.  Or we respond to the confidence exhibited by a squeaky wheel.  This type of cognitive bias often ends up in a sort of community groupthink and misdirected actions.

A third type of cognitive bias is “the Tyranny of the Status Quo.”  Often, we value what we have so much that we will do almost anything to avoid change.  This kind of bias can be summed up in something my friend Jim Kelley once said to me:  “People will only change when the pain of not changing becomes too great.”  This type of cognitive bias can also show up in more subtle ways.  We may downplay new information because it either conflicts with or pushes aside what we are concerned with now.  Or, rather than recognizing a new pattern, we may try to force-fit new information into an old mould.

Confirmation bias is closely related.  In this case, we pay attention to new information only if it buttresses previously held opinions.  This is particularly pernicious because we are flooded with so much information and so many studies that come to contradictory conclusions that it is way too easy to fall into this trap.  It seems that Climate Change Zealots on both sides are especially prone to this.

Every one of us will fall prey to cognitive bias at some point – pattern making and matching are important evolutionary advantages.  But the leadership of our communities is made up of more than one person, and the types of cognitive bias described above point to ways that community leadership can avoid their negative impacts.

  • Diversity.  The best way to counter groupthink is to have people with diverse mental models each grappling with new information.
  • Respect.  If people respect one another, then they are highly unlikely to overweight their own capabilities relative to someone else’s.  They are also more likely to listen to each other.
  • Good governance structures.  Diversity can lead to conflict; respect can lead to a desire to placate everyone.  Both can lead to inaction.  Good governance structures can achieve an appropriate balance and add other checks and balances that guard against cognitive bias.

Our communities need information to gauge the risks they face and to find ways to either adapt to or mitigate those risks.  They need information to find ways to grow healthier and to recognize and seize the opportunities around them.  They need information to strike a good balance among their myriad needs and competing priorities.  Cognitive biases disturb and distort the flow of information.  If our communities are to become more resilient, they must find ways to combat cognitive bias.

Even Pretty Models Can Give Ugly Results

All models are wrong; some are useful.

George Box

More and more, leaders of every sort of enterprise – from corporations to federal, state and local governments – are using mathematical models to help guide them in decision-making. Clearly, the US and UK governments’ approaches to dealing with the Covid-19 pandemic were greatly influenced by the model developed by Neil Ferguson of Imperial College London and his co-workers. The calls for the Green New Deal stand (or fall) in part on the accuracy (or not) of the predictions of numerous global climate models. Many companies rely on weather models to guide important operating decisions. Most financial institutions (e.g., banks and especially the Federal Reserve) rely on models to develop strategies for dealing with the future.

Leaders are increasingly relying on models because they are a convenient way to harmonize the cacophony of data that assails all of us daily. But as Mae West once said, “A model’s just an imitation of the real thing.” (For those of you who don’t remember Mae West, think of Dolly Parton delivering Nikki Glaser’s innuendo with a smirk.) Like a Monet landscape, a model accentuates certain facets of reality, ignores others and, sometimes, fills in blank spaces that can’t be seen. Thus, though models are produced by scientists, there is a certain art in crafting one – what to include, what to ignore, how to bridge regions where data may not be available.

The snare facing a decision maker in using the results of a mathematical model is that even the most elegant of models may mislead. The modeler, like Monet, has made choices about what data to include. If the model does not represent all of the data relevant to the decision to be made, then its usefulness is suspect. Decision makers need some sort of user’s guide to avoid that snare.

In my career, I have both developed models and used models developed by others (usually successfully!). I have learned that the precision of a model’s results provides an illusion of certainty; i.e., the results may have three decimal places, but sometimes can only be relied upon within a factor of ten. Along the way, I’ve developed a few rules of thumb that have served me well in using the results of mathematical models. I generally use these in the form of questions I ask myself.
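Before getting to those questions, here is a minimal, purely illustrative Python sketch of the “three decimal places, factor of ten” point. The toy_model function, its parameter range, and the ten-year horizon are all invented for illustration; the only message is that a crisp-looking point estimate can sit inside a very wide spread once input uncertainty is propagated through the model.

```python
import random

def toy_model(growth_rate, horizon_years=10, initial_value=1.0):
    """A deliberately simple stand-in for a real model:
    compound growth of some quantity over a fixed horizon."""
    return initial_value * (1.0 + growth_rate) ** horizon_years

# A single "best guess" input gives a crisp-looking answer...
best_guess = toy_model(growth_rate=0.25)
print(f"Point estimate: {best_guess:.3f}")          # three decimal places!

# ...but the growth rate is only known roughly. Propagate that uncertainty.
random.seed(1)
samples = sorted(toy_model(random.uniform(0.10, 0.40)) for _ in range(10_000))
low = samples[int(0.05 * len(samples))]             # ~5th percentile
high = samples[int(0.95 * len(samples))]            # ~95th percentile
print(f"Plausible range: {low:.2f} to {high:.2f} (a factor of ~{high / low:.0f})")
```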

What was the model developed for? If the model was developed for a different purpose, then I have to satisfy myself that it is appropriate for the decision I have to make – e.g., what data were included and what were omitted – and I need to dig into what important facets of my situation may not be represented in the model.

Has the model been successfully used before for my purpose? In the case of the Imperial College infectious disease model, it was developed to look at deaths from SARS and other infectious diseases; thus, presumably, it is suitable for use in the current pandemic. However, the model’s previous predictions of fatalities were off by orders of magnitude. Almost certainly, its current predictions are upper bounds; however, they are so high that their usefulness is questionable.

Is my situation included within the bounds of the model? The Federal Reserve’s actions in response to the pandemic are being driven, in part, by econometric models based on past history. Clearly, however, the usefulness of those models is open to debate; we’ve never been in this situation before – it’s like asking a blind man to paint a landscape. This question becomes especially important when two or more models are coupled, e.g., when modeling economic changes based on the results of a climate change model. If the climate change model’s results are based on an implausible scenario (RCP 8.5), then the results of the economic model are highly suspect.
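One crude check along these lines can even be automated: compare the inputs that describe my situation with the range of conditions the model was actually calibrated on, and flag anything that falls outside. The sketch below is hypothetical; the input names and calibration ranges are invented purely for illustration.

```python
# Hypothetical calibration ranges: the span of each input the model actually
# "saw" when it was built or fitted. Real ranges would come from the modeler.
CALIBRATION_RANGES = {
    "unemployment_rate": (0.03, 0.11),
    "interest_rate":     (0.00, 0.08),
    "gdp_growth":        (-0.03, 0.05),
}

def flag_extrapolation(scenario, ranges):
    """Return warnings for inputs that fall outside the model's calibrated bounds."""
    warnings = []
    for name, value in scenario.items():
        lo, hi = ranges[name]
        if not (lo <= value <= hi):
            warnings.append(f"{name}={value} is outside [{lo}, {hi}]")
    return warnings

# A pandemic-style shock: some inputs sit far outside the calibration data.
scenario = {"unemployment_rate": 0.15, "interest_rate": 0.0025, "gdp_growth": -0.09}
for warning in flag_extrapolation(scenario, CALIBRATION_RANGES):
    print("CAUTION, model is extrapolating:", warning)
```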

What is the uncertainty associated with the model’s results? In some cases, the uncertainty is so large that the model’s results are not useful for decision-making. And if the modeler can’t tell me how certain or uncertain the model’s results are, that’s a huge “Caution” flag.

How sensitive are the model’s results to variability in its inputs (e.g., initial conditions)? This is of crucial importance when considering large-scale mathematical models of complex phenomena (e.g., climate change). If the model’s results are very sensitive to its inputs, then those inputs must be known very precisely. If the model developer has not performed a sensitivity analysis, another “Caution” flag goes up.
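A first-pass way to answer this question is a one-at-a-time sensitivity check: nudge each input by a fixed percentage and see how much the output moves. The toy_epidemic_peak function below is an invented stand-in, not any real epidemiological model; only the checking pattern matters.

```python
def toy_epidemic_peak(r0, recovery_days, population):
    """Invented toy relationship between inputs and a 'peak load' output,
    used only to demonstrate the sensitivity check."""
    if r0 <= 1.0:
        return 0.0
    attack_fraction = 1.0 - 1.0 / r0              # crude stand-in for final size
    return population * attack_fraction / recovery_days

baseline_inputs = {"r0": 2.5, "recovery_days": 14.0, "population": 1_000_000}
baseline_output = toy_epidemic_peak(**baseline_inputs)

# One-at-a-time sensitivity: +10% on each input, holding the others fixed.
for name in baseline_inputs:
    perturbed = dict(baseline_inputs)
    perturbed[name] *= 1.10
    change = toy_epidemic_peak(**perturbed) / baseline_output - 1.0
    print(f"+10% in {name:13s} -> {change:+.1%} change in the output")
```

One-at-a-time checks miss interactions between inputs; when inputs interact strongly, a global method (e.g., variance-based indices) is the better tool. Even this simple pass, though, tells me which inputs the modeler must pin down most carefully.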

Has the model been validated in some way? Validation can be done in a variety of ways, but my order of preference is:

  1. Showing that model outputs are in reasonable accord with a real-world data set. “Reasonable” means that the agreement is good enough that I am convinced I can use the model’s results for my situation to make good decisions (a minimal sketch of this kind of comparison follows this list).
  2. Showing that each piece of the model is consistent with established principles. In some cases, there are no real-world data for comparison. If not, I want the modeler to be able to demonstrate that the algorithms in the model are consistent with accepted principles. This is fairly straightforward for physical phenomena unless the model assumes that they are coupled. It is much less so when one brings in social science constructs.
  3. Peer review (actually down about #22 on my list). Sometimes modeling results from peer-reviewed journal articles are offered as guides for decision-making. If the model has not been otherwise validated, I am wary of using its results. Peer review is not what it used to be (if it ever was!). I see it all too often becoming the last refuge of scoundrels – friends approving friends’ papers with limited review. The failure to replicate some of the most widely accepted results in psychological research (less than half could in fact be replicated), the David Baltimore scandal, and too many others lead me to accept peer review by itself as validation only if I have no other choice.
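For the first option, the comparison itself can be quite plain. Here is a minimal sketch with made-up “observed” and “predicted” numbers, just to show the shape of the check; whether the resulting error is “reasonable” remains a judgment call about the decision at hand.

```python
import math

# Made-up observations and model predictions (e.g., weekly counts of something).
observed  = [120, 135, 160, 210, 250, 240, 200]
predicted = [110, 150, 170, 230, 300, 280, 230]

# Two simple headline numbers: root-mean-square error and mean relative error.
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))
mean_rel_err = sum(abs(p - o) / o for p, o in zip(predicted, observed)) / len(observed)

print(f"RMSE: {rmse:.1f}")
print(f"Mean relative error: {mean_rel_err:.1%}")
# Is this level of disagreement small enough that decisions based on the
# model would still have been good decisions? That is the real test.
```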

Our leaders – at all levels – are increasingly relying on the results of a wide variety of models as decision-making aids. Often these are held up by experts as “the science” that must be followed. And yet, even the most elegant – the prettiest – of models may mislead. If a model’s results are accepted without question, the consequences for the community may be quite ugly. The wise leader trusts, but verifies by asking simple questions such as these.