Three Thoughts About AI and Political Risk


John F. Phillips

Artificial Intelligence (AI) is here, and those who fail to adapt are going to be left behind. As someone who came of age (I’m 70 years young) in the era of card catalogues, typewriters, and room-sized computers, and who then had to adapt to a world of apps, websites, and email, I understand the importance of getting up to speed on AI (yes, I have the ChatGPT app on my phone, and I use it).

AI is going to have a significant impact on the generation and distribution of information, perhaps on a greater scale than when the World Wide Web became a fact of life in the early 2000s.

As AI becomes more prevalent, there is a race to understand its uses, its potential, and the potential for its abuse. Here are three thoughts about the use of AI in analyzing political and geopolitical risk.

Accuracy of Information

My initial experience with AI is that it is useful for developing a rudimentary understanding of general concepts and ideas. The key seems to be how an inquiry is phrased on the platform. Straightforward inquiries seem to work best, as AI appears to struggle with technical jargon and with inquiries that are nebulous or esoteric in nature.

Accuracy seems to be a real issue with AI. Can it be trusted as an accurate source of information? A recent Purdue University study that analyzed ChatGPT’s responses to a set of programming questions determined that 52% of the responses were incorrect and 77% were overly verbose (The Register, “ChatGPT’s Odds of Getting Code Questions Correct Are Worse Than a Coin Flip,” 7 August 2023). Other studies conducted by the University of California, Berkeley and Stanford University seem to support the conclusions of the Purdue study (“ChatGPT’s Capabilities Are Getting Worse with Age, New Study Claims,” 20 July 2023). As a researcher and analyst, I find this raises real concerns.

AI is a tool, but like all tools, it should be one part of a complete research and analysis toolbox. Caution should be the operative approach when AI is used to understand complex concepts or sophisticated data sets. Fact-checking and multi-source verification are essential when AI is utilized in the research and analysis of issues, particularly complex ones.


Disinformation

The abuse of AI to spread disinformation is already a serious problem, particularly in political campaigns, where text, voice, and images can be artificially produced to convey messages that are inaccurate or entirely false. In the international arena, governments and nongovernmental actors can use AI to create deliberately false information: fabricated conclusions, inaccurate data, and questionable correlations that might be used to imply false causality. AI can be used to generate propaganda designed to manipulate impressions, perceptions, or opinions.

It is easy to see how governments or nongovernmental actors could use AI to penetrate social media platforms and spread disinformation. Again, caution has to be the operative approach when using AI as an information source. Nothing substitutes for the blocking and tackling of traditional research and the use of well-established research methodologies.

AI Doesn’t Replace Experience

AI is an information system and can be incredibly useful in gaining a basic understanding of general concepts and information. That being said, AI is a “quant” tool, much like many other quantitative analysis tools used in research.

What AI can’t do is apply qualitative approaches to understanding political and geopolitical risk. It can’t understand or observe human behavior. It can’t analyze current events. It’s not predictive. While it can regurgitate historical facts, it struggles to draw connections between the past and the present, to understand historical trends, and to draw conclusions based on historical events. It doesn’t have the ability, based on experience, education, and the wisdom accumulated over a lifetime of studying, observing, and analyzing politics, economics, and history, to draw specific and useful conclusions. If the studies that address accuracy are any indication, using AI as a data source is a crapshoot at best.

AI doesn’t replace the human element, the “gut feel” that is so essential for the analysis of political and geopolitical risk.

AI is the shiny new object in the tech toolbox. I use it as a means to kick-start my thought process and as a way to organize my thoughts. That said, I would never pass off AI’s output as my own work. My insight, based on my research, knowledge, experience, wisdom, observation, and understanding, is what makes my work unique and useful.

Each day, AI is becoming more integrated into business, academics, and research. AI is a useful tool, but it cannot, and should not, replace human intelligence.

AI cannot become a crutch.
