The Struggle To Regulate AI And Why That Matters

The Struggle to Regulate AI is Similar to Efforts to Address Climate Change

By

John F. Phillips

Artificial intelligence (AI) is the new cutting-edge technology, and it is rapidly becoming a significant presence in our society.

Log on to any news publication (we can’t really call them newspapers or magazines anymore), watch any news or business news program, or read any publication covering political and geopolitical risk, and you will see articles and essays about AI. This morning, CNBC offered wall-to-wall coverage of the ARM IPO and ARM’s presence in the AI industry. Even its interview with Ken Griffin of Citadel, while focused on his work with Success Academy charter schools in New York, addressed AI and how Success Academy was beginning to integrate it into its curriculum. The Financial Times has been running a series of articles on how AI works and on the efforts to regulate it. In the most recent issue of Foreign Affairs, Ian Bremmer, President and Founder of the Eurasia Group, and Mustafa Suleyman, CEO and Co-Founder of Inflection AI, contributed a compelling article about what they call “the AI paradox”: integrating AI into everyday life while regulating its economic, political, and social impact (you can find this piece at www.foreignaffairs.com).

If there is a common thread in all of this discussion, it is the understanding that the AI train has left the station, it is moving at great speed, and it is imperative that governments figure out the most efficient and effective ways to control the speed of the train before it gets out of control.

Like all technology, AI can be a great tool. It can be incredibly useful in terms of research, the management of data sets, and the organization of ideas (I use AI in my own work to help clarify ideas, but my writing is my own). Much like a scientific or financial calculator helps with mathematical and statistical calculations, AI can be thought of as an “idea calculator” that helps to manage and work with ideas and information.

That being said, the potential abuse of AI is a serious challenge that cannot be ignored. The pace of the technology’s development far outpaces the ability of government to move at speed to regulate its use. As we speak, the EU, the UK, the United States, and China are struggling with the question of how to regulate AI. Each actor is taking its own unique approach to regulation, and the piece cited here suggests that the ultimate outcome may be that AI regulation becomes an important element of the ongoing competition between China and the United States, while other countries may be left on the outside looking in (“The global race to set rules for AI,” Financial Times, 9/13/23, www.ft.com).

Most importantly, many of these efforts contemplate regulatory time horizons of two to five years before final regulations are in place, a timeline that lags far behind the pace of AI’s evolution.

Perhaps the struggle to address climate change can offer some “what not to do” lessons as the work to regulate AI moves forward.

Currently, two major international agreements address climate change. The Kyoto Protocol (part of the United Nations Framework Convention on Climate Change) entered into force in 2005 and set specific, binding emission targets for developed industrial countries (the United States signed but never ratified it, and China, as a developing country, was not bound by its targets). The Paris Agreement, which entered into force in 2016, is a non-binding agreement that set emission standards and global temperature goals intended to reduce the impact of manmade factors contributing to climate change. It has been signed by more than 190 countries.

These agreements took a long time to negotiate (1997–2016, 19 years). In the case of Kyoto, not all developed nations were bound by the agreement, and the non-binding nature of the Paris Agreement has contributed, in my opinion, to the lack of progress in addressing climate change. Instead, we have a situation in which everyone often does their own thing, with efforts that are cosmetic, counterproductive, short-sighted, driven by domestic political and economic pressures, and not really aimed at the underlying problem of climate change that the agreements were intended to address.

Is this really the best approach to dealing with the challenges of AI?

The lesson here is that a piecemeal approach to the challenges of AI isn’t going to accomplish the goal of effective regulation. Instead, like the approach to climate change, countries will work in their own interests, ignore the common problem, and fall into many of the traps that characterized the climate change effort. The end result is a solution that only partially or ineffectively addresses the challenge.

Regulation of AI is a complex challenge that requires comprehensive solutions. The challenges are many (legal, ethical, and moral questions; the balance between innovation and regulation; national interests; and geopolitical competition, among others) and they defy simplistic, uncoordinated solutions. A balance has to be found between technological development and the potential harm that development can impose upon society. The situation is similar to the challenge of balancing individual liberty against societal order in liberal, rule-of-law societies: the balance is difficult to attain and can shift in an instant.

AI, like climate change, knows no boundaries. It is not a US or China, EU or UK challenge, but rather a common challenge that calls for unified global regulation that allows progress but not abuse.

Can the international community get its act together and find solutions that address AI in a comprehensive way, or will it stumble forward, using band-aids to treat a sucking chest wound?

Only time will tell and the clock is ticking.