Political Risk Analysis: Qualitative Or Quantitative?

By

John F. Phillips

“Americans have been enamored of a nerdy-sounding priesthood, using esoteric terms like beta, gamma, sigma and the like. Beware of geeks bearing formulas.” – Warren Buffett*

Qualitative or quantitative analysis? Social scientists have been asking this question for as long as social science has been around.

What got me thinking about this again was a recent article I read touting algorithms and statistical models as the “be-all and end-all” of analyzing and understanding political risk. For someone like me, who came into the profession in the late 1970s and early 1980s, this was an important question to consider, as political science was attempting to transform itself into what I believed to be a quantitative “science” on par with economics, finance, and other quantitatively oriented disciplines.

As someone who has studied voting behavior, conflict resolution, and predictive modeling, I’ve used quantitative tools in order to better understand data sets. I can do a multivariable linear regression with the best of them.
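For readers who want a concrete picture of what that kind of quantitative tool looks like, here is a minimal sketch of an ordinary least squares fit in Python. Every variable name and number below is invented purely for illustration; this is not drawn from any real political risk data set.

```python
import numpy as np

# Toy example only: all numbers below are invented for illustration.
rng = np.random.default_rng(42)
n = 200

# Hypothetical predictors a political risk analyst might collect.
gdp_growth = rng.normal(2.0, 1.5, n)
protest_index = rng.normal(50.0, 10.0, n)
press_freedom = rng.normal(60.0, 15.0, n)

# Invented "risk score" built from those predictors plus noise,
# just so there is something to fit.
risk_score = (80.0 - 3.0 * gdp_growth + 0.4 * protest_index
              - 0.2 * press_freedom + rng.normal(0.0, 5.0, n))

# Ordinary least squares via NumPy's least-squares solver.
X = np.column_stack([np.ones(n), gdp_growth, protest_index, press_freedom])
coefs, _, _, _ = np.linalg.lstsq(X, risk_score, rcond=None)
print("intercept and coefficients:", np.round(coefs, 2))
```

Nothing exotic, and that is the point: the mechanics of a regression are easy. The hard part is everything around it.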

I think what concerns me about the above-mentioned article is that it reflects a mentality that seems so prevalent today: that algorithms and models are the only tools, no, the ultimate tools, and that any analysis of political risk that doesn’t worship at the altar of algorithms and models is faulty and somehow suspect. As a colleague once said to me during a discussion on this topic, “the algorithms never lie.”

I beg to differ.

In all the years that I’ve been doing this, I’ve found that political behavior in general, and political risk in particular, is hard to measure and difficult to quantify. Some aspects of political behavior, such as voting, are easier to quantify than others, such as political motivation and irrational behavior. Earlier in my career, I did work in predictive modeling and found it difficult to objectively define terms and variables, to make and incorporate valid assumptions into the model, and to set aside preconceived notions about how the data set should be constructed and analyzed. It was difficult as all get out.

In my experience, another problem with algorithms and models is intellectual arrogance on the part of the researcher. I’ve seen this in the physical sciences as well as the social sciences. Many times the algorithms and models will show correlation, and researchers will make “the great leap forward” and argue causality. That’s a huge leap, especially when the associated data set is small and the results cannot be reproduced. Many times the problem is “algorithm bias,” flaws in the model itself that skew the results. (Thien, A., Mkrtchyan, L., Haesebrouck, T., Sanchez, D., 2020)**
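To see why that leap is so risky with small data sets, here is a hedged little simulation, again in Python with made-up numbers: two series that are unrelated by construction will still show a sizeable correlation surprisingly often when the sample is small.

```python
import numpy as np

# Toy simulation, not any real study: how often do two completely
# unrelated random series of length n show |correlation| > 0.5?
rng = np.random.default_rng(0)

def spurious_rate(n, trials=10_000, threshold=0.5):
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = rng.normal(size=n)  # independent of x by construction
        r = np.corrcoef(x, y)[0, 1]
        if abs(r) > threshold:
            hits += 1
    return hits / trials

for n in (5, 10, 30, 100):
    rate = spurious_rate(n)
    print(f"n={n:3d}: |r| > 0.5 purely by chance in {rate:.1%} of trials")
```

With a handful of observations, an impressive-looking correlation appears by pure chance a meaningful fraction of the time; with a few hundred, almost never. A small data set plus a confident causal story should make any analyst nervous.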

Bottom line: algorithms and models are binary, doing a wonderful job of explaining the “what” but often giving little insight into the “why.” They are an awesome tool, but they are only that, a tool that helps us understand more fully the data sets used in research. They are not, however, much use in trying to understand the underlying rationale for why things occur. It’s difficult to quantify “why” in a way that creates confidence when trying to understand future events.

So the question becomes, why does political risk lend itself to a more qualitative approach? Political risk is really a measure of intent, and intent is difficult to assess and even harder to measure. (Rice, C., Zegart, A., 2018, p. 95)*** Political risk is hard to measure because it often entails anticipating events that may have a low probability of occurring but would involve major consequences for the business if they ever did occur. (Rice, C., Zegart, A., 2018, p. 95)***

So how are these events and possibilities best understood? The best way, in my experience, is to work hard at understanding past patterns of behavior and then to use that understanding when attempting to anticipate how an actor may behave in the future. That means studying the political, economic, diplomatic, and cultural history and the patterns of behavior that serve as indicators of possible future behavior. Remember, intent is really difficult to measure, and historical understanding isn’t foolproof. It can, however, help us better understand possible rationales for current behavior and serve as a benchmark when trying to anticipate the future.

The more things change, the more they stay the same.

I also believe that a qualitative approach toward understanding political risk allows for the development of the intuition that is sometimes needed when evaluating and trying to anticipate political risk. Sometimes, for whatever reason, the data just doesn’t “feel right”; your “gut,” built on a foundation of experience and knowledge of historical patterns, tells you that the model or algorithm “just isn’t right.” We saw this in the past two American presidential elections, where the algorithms used to analyze polling data were far off the mark in terms of the actual results. Analysts fell in love with the algorithms they had developed and were shocked when the analysis proved incorrect. It was a challenge for me when I studied Soviet military intervention patterns in Warsaw Pact countries during the Cold War: the model I developed just didn’t jibe with the history, and it didn’t square with my gut feeling. It’s a problem.

Statistical modeling and algorithms are important research tools and should be used when appropriate. I use them in my own work, especially when trying to establish a basic understanding of the relationships between variables in a data set. That being said, it is important to understand that they are a tool, just like a hammer or a screwdriver. They are not the “be-all and end-all.” Qualitative measures and historical patterns serve a purpose in helping us better understand the nuances of political behavior and political risk.

Forty-plus years of experience has taught me that. It has taught me to “beware of geeks bearing formulas.”

What has your experience taught you?

*Segal, David, “In Letter, Warren Buffett Concedes Tough Year,” New York Times, Feb. 28, 2009.

**Thien, A., Mkrtchyan, L., Haesebrouck, T., Sanchez, D. (2020), “Algorithm bias in social research: A meta-analysis,” PLoS ONE, 15(6): e0233625. doi: 10.1371/journal.pone.0233625.

***Rice, Condoleezza, and Amy B. Zegart, Political Risk: How Businesses and Organizations Can Anticipate Global Insecurity, Twelve/Hachette Book Group, 2018.
