All models are wrong; some are useful.
George Box
More and more, leaders of every sort of enterprise – from corporations to federal, state and local governments – are using mathematical models to help guide them in decision-making. Clearly, the US and UK governments’ approaches to dealing with the Covid-19 pandemic were greatly influenced by the model developed by Neil Ferguson of Imperial College London and his co-workers. The calls for the Green New Deal stand (or fall) in part on the accuracy (or not) of the predictions of numerous global climate models. Many companies rely on weather models to guide important operating decisions. Most financial institutions (e.g., banks and especially the Federal Reserve) rely on models to develop strategies for dealing with the future.
Leaders are increasingly relying on models because they are a convenient way to harmonize the cacophony of data that assails all of us daily. But as Mae West once said, “A model’s just an imitation of the real thing.” (For those of you who don’t remember Mae West, think of Dolly Parton delivering Nikki Glaser’s innuendo with a smirk.) Like a Monet landscape, a model accentuates certain facets of reality, ignores others and, sometimes, fills in blank spaces that can’t be seen. Thus, though models are produced by scientists, there is a certain art in crafting them – what to include, what to ignore, how to bridge regions where data may not be available.
The snare facing a decision maker in using the results of a mathematical model is that even the most elegant of models may mislead. The modeler, like Monet, has made choices about what data to include. If the model does not represent all of the data relevant to the decision to be made, then its usefulness is suspect. Decision makers need some sort of user’s guide to avoid that snare.
In my career, I have both developed models and used models developed by others (usually successfully!). I have learned that the precision of a model’s results provides an illusion of certainty; i.e., the results may have three decimal places, but sometimes can only be relied upon within a factor of ten. Along the way, I’ve developed a few rules of thumb that have served me well in using the results of mathematical models. I generally use these in the form of questions I ask myself.
What was the model developed for? If the model was developed for a different purpose, I have to satisfy myself that it is appropriate for the decision I have to make – e.g., what data were included and what were omitted – and I need to dig into what important facets of my situation may not be represented in the model.
Has the model been successfully used before for my purpose? In the case of the Imperial College infectious disease model, it was developed to look at deaths from SARS and other infectious diseases; thus, presumably, it is suitable for use in the current pandemic. However, the model’s previous predictions of fatalities were off by orders of magnitude. Its predictions are almost certainly upper bounds, but they are so high that their usefulness is questionable.
Is my situation included within the bounds of the model? The Federal Reserve’s actions to respond to the pandemic are being driven, in part, by econometric models based on past history. Clearly, however, the usefulness of those models is open to debate – we’ve never been in this situation before, and it’s like asking a blind man to paint a landscape. This question becomes especially important when two or more models are coupled, e.g., modeling economic changes based on the results of a climate change model. If the climate change model’s results are based on an implausible scenario (RCP 8.5), then the results of the economic model are highly suspect.
What is the uncertainty associated with the model’s results? In some cases, the uncertainty is so large that the model’s results are not useful for decision-making. And if the modeler can’t tell me how certain/uncertain the model’s results are, that’s a huge “Caution” flag.
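To make this concrete, here is a minimal Python sketch of one common way to put a number on that uncertainty: run the model many times with inputs drawn from a plausible range and look at the spread in the outputs. The toy_model function and the input distribution are hypothetical, chosen only for illustration, and are not drawn from any of the models discussed above.

```python
import random
import statistics

def toy_model(growth_rate, initial_value, days):
    """A hypothetical exponential-growth 'model', used only for illustration."""
    return initial_value * (1.0 + growth_rate) ** days

# Suppose the growth rate is only known to be roughly 0.10 +/- 0.03.
# Re-running the model with values drawn from that range shows how the
# input uncertainty translates into a spread of predictions.
random.seed(1)
outputs = [toy_model(random.gauss(0.10, 0.03), initial_value=100.0, days=30)
           for _ in range(10_000)]

ranked = sorted(outputs)
print(f"median prediction:   {statistics.median(outputs):,.0f}")
print(f"5th-95th percentile: {ranked[500]:,.0f} to {ranked[9500]:,.0f}")
```

If that 5th–95th percentile range spans a factor of ten, then quoting the median to three decimal places is exactly the illusion of certainty described above.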
How sensitive are the model’s results to variability in its inputs (e.g., initial conditions)? This is of crucial importance when considering large-scale mathematical models of complex phenomena (e.g., climate change). If the model’s results are very sensitive to its inputs, then those inputs must be known very precisely. If the model developer has not performed a sensitivity analysis, another “Caution” flag goes up.
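A crude but serviceable version of such a check, again sketched in Python with the same hypothetical toy_model, is a one-at-a-time sensitivity analysis: perturb each input by a modest amount and see how far the output moves.

```python
def toy_model(growth_rate, initial_value, days):
    """Same hypothetical model as above; any model function could stand in here."""
    return initial_value * (1.0 + growth_rate) ** days

baseline = {"growth_rate": 0.10, "initial_value": 100.0, "days": 30}
base_output = toy_model(**baseline)

# Perturb each input by +/-10% in turn and report the relative change in output.
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        change = toy_model(**perturbed) / base_output - 1.0
        print(f"{name} x {factor}: output changes by {change:+.0%}")
```

If a 10% wiggle in an input swings the output by a third, that input had better be known to far better than 10% before the results are used for a decision.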
Has the model been validated in some way? This can be done in a variety of ways, but my order of preference is:
- Showing that model outputs are in reasonable accord with a real-world data set. “Reasonable” means that the agreement is good enough that I am convinced I can use the model’s results for my situation to make good decisions (see the sketch after this list).
- Showing that each piece of the model is consistent with established principles. In some cases, there are no real-world data for comparison. If not, I want the modeler to be able to demonstrate that the algorithms in the model are consistent with accepted principles. This is fairly straightforward for physical phenomena unless the model assumes that they are coupled. It is much less so when one brings in social science constructs.
- Peer review (actually down about #22 on my list). Sometimes modeling results from peer-reviewed journal articles are offered as guides for decision-making. If the model has not been otherwise validated, I am wary of using its results. Peer review is not what it used to be (if it ever was!). I see it all too often becoming the last refuge of scoundrels – friends approving friends’ papers with limited review. The failure to replicate some of the most widely accepted results in psychological research (fewer than half could in fact be replicated), the David Baltimore scandal, and too many other episodes lead me to accept peer review by itself as validation only if I have no other choice.
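As a deliberately simple illustration of that first preference, assuming one has matched pairs of model predictions and real-world observations (the numbers below are made up), a Python sketch like this shows how one might test whether the agreement is “good enough” by whatever tolerance the decision at hand requires (here, hypothetically, a factor of two):

```python
# Hypothetical paired values: model predictions vs. what was actually observed.
predictions  = [120.0, 95.0, 430.0, 60.0, 210.0]
observations = [100.0, 110.0, 390.0, 150.0, 205.0]

# One possible "reasonable accord" test: is each prediction within a factor
# of two of the corresponding observation?
tolerance = 2.0
within = [1.0 / tolerance <= p / o <= tolerance
          for p, o in zip(predictions, observations)]

print(f"{sum(within)} of {len(within)} predictions are within a factor "
      f"of {tolerance:g} of the observations")
```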
Our leaders – at all levels – are increasingly relying on the results of a wide variety of models as decision-making aids. Often these are held up by experts as “the science” that must be followed. And yet, even the most elegant – the prettiest – of models may mislead. If a model’s results are accepted without question, the consequences for the community may be quite ugly. The wise leader trusts, but verifies by asking simple questions such as these.
John – I read your latest offering with interest. I had a professor in graduate school (mathematical psychology) who talked of modeling for psychological understanding. I must admit my interest was limited at that time, though I have now become more attuned to models, their verification, etc. I sometimes wish I could crawl in your mind and observe the workings of a true thinker like yourself. I look forward to your future thoughts.
John Hutcheson
John:-
I’m afraid you’d get pretty mucky crawling around in there – but thanks for the compliment! Best to you and the inimitable Miss S.