Technological rivalry over artificial intelligence is creating limitations on which systems can be deployed around the world and how they can be developed. Oxford Analytica’s Megha Kumar explains what businesses need to know

Businesses and governments risk serious curbs on their use and development of AI unless they better understand the implications of Western and Chinese rivalry over the technology – comparable to the race to deploy 5G but involving much higher stakes.

With the US and the EU vying with China to produce the most advanced AI systems, influence their technical specifications, and regulate their operation, users and developers of the technology must be alert to the geopolitical competition – or risk incurring substantial costs and penalties.


As well as generating ever more innovative products, the technological rivalry is creating limitations on which AI systems can be deployed around the world and on how they can be developed.

This will present big challenges for multinationals with operations in both the West and China, and in countries that fall under Western or Chinese influence.

“Purchasers of AI need to consider not only the vendors’ pricing and capabilities, but also their geopolitical orientation”

Given the power and potential impact of generative AI and multi-purpose foundation models, global powers are keen to leverage the technology for economic, social and security purposes.

As such, they want to have a say in who gets to buy, sell, or develop their respective products. Just as with 5G, purchasers of AI need to consider not only the vendors’ pricing and capabilities, but also their geopolitical orientation – arguably more so in the case of AI.

Two ends of a spectrum

The global AI regulatory landscape is essentially a broad spectrum, with the US at one end and China at the other.

The US has no federal policy in this area due to Congressional gridlock – though some states and federal agencies are introducing measures. These remain patchy, however, focusing mainly on regulating certain use cases without hindering overall private sector innovation.

“The global AI regulatory landscape is essentially a broad spectrum, with the US at one end and China at the other.”

The EU shares China’s hard-policy approach to regulation, but with a key difference: the EU’s AI Act prioritises the interests of European citizens, while China is prioritising the interests of the state.

Most other advanced countries, such as the UK, Japan, South Korea, Canada, and Australia, lie somewhere in between. In effect, they are taking a light regulatory touch, evaluating the risks of AI before deciding how to regulate it. And most emerging markets are still developing their approach to AI.

A growing compliance burden

Regulatory divergence will increase the compliance burden on multinational companies, yet even if they pay attention to the spectrum of regulations across the world, they could still be stymied by geopolitics.

For example, US firms with operations in China may be prepared to adhere to Chinese regulations, but Washington might not allow them to sell their AI to China. Similarly, emerging markets that are orientated towards the US, such as those in South-east Asia, could come under American pressure not to buy Chinese AI, or vice versa.

When deciding whether to buy American, European or Chinese AI, corporates may choose a system aligned with the geopolitical orientation of current and future growth markets, which could leave them having to swallow the risk of being shut out of other markets.

Those preferring to work with both Western and Chinese systems will have to absorb the high cost of completely isolating them.

“Regulatory divergence will increase the compliance burden on multinational companies”

So, for instance, a global company using Chinese AI in Hong Kong would need to insulate the system – and the data that’s fed into it – from its operations in America and Europe. In other words, boards will have to establish ‘geopolitical walls’ across their organisations.

Geopolitical factors also come into play when weighing AI capability. Currently, Western AI models are technically superior to China’s, and so will best suit purchasers interested in the most advanced systems.

However, Chinese systems are going to be more readily censorship-compliant than Western ones and might be preferred by authoritarian governments in Africa and the Middle East, enabling them, for example, to control online traffic.

If trade rather than censorship is the priority, emerging market authorities may opt for Chinese systems designed for export that are either cheaper than their rivals or come with packages of support.

Getting to grips with technical standards

Consumers and developers of AI may not see technical standards as an immediate concern, but they must nonetheless keep an eye on efforts by rival AI powers to put their stamp on how those standards develop. These are the parameters that AI models should conform to, covering areas such as data, model performance and governance – just as there are specifications for how 5G or 4G should operate on a smartphone to ensure cross-border and cross-network interoperability.

“Keeping track of the big power lobbying over technical standards is particularly advisable because of the high cost of regulatory compliance”

China in recent years has made a concerted push to be part of the rule-making establishment on technology at specialist UN committees and at technical standards-setting institutions. That has unnerved the West because whoever sets the rules fosters a market that favours their products or political agenda.

The competition over technical standards could determine what kind of compliance regime you will have to adhere to as a consumer or, as a developer, what technical specifications you will have to meet for certain markets.

Keeping track of the big power lobbying over technical standards is particularly advisable because of the high cost of regulatory compliance and because it could influence the many jurisdictions that have yet to decide on the kind of regulations they want.

Future collaboration? 

At the moment, there is no sign that China and the West are interested in converging their AI models, regulations and technical specifications – the technology serves their respective economic, political and security interests too well. But there may come a time when they will need, at the very least, to start talking to each other and be more cooperative over AI. There are two points at which this is likely to happen.

Firstly, when we are on the cusp of limited or full artificial general intelligence, or AGI – effectively, when computers approach or exceed human levels of intelligence. Experts suggest this may be between four and 20 years away.

The big worry is that AGI systems, although unlikely to have intrinsically malevolent goals, could acquire instrumental goals that create global or existential risks, such as repurposing resources or acquiring strategic power. Even near-future AI systems, if acquired by rogue actors, could be used to unleash deadly pathogens or chemical weapons.

“Companies and governments need to recognise that it is in their commercial interest to keep a close eye on the state of this geopolitical rivalry”

Secondly, there’s the matter of energy. AI systems are hugely resource-intensive, which will become an increasing concern for the West and China as both face climate change challenges.

While confident that looming AGI and growing climate concerns will prompt dialogue and negotiations between the West and China, I am less certain about the outcomes.

A willingness to talk is one thing, agreeing actions and policies quite another – witness the slow progress over climate action. AI divergence will very likely persist for some time to come.

Therefore, companies and governments need to recognise that it is in their commercial interest to keep a close eye on the state of this geopolitical rivalry, as this will enable them to better anticipate and respond to ensuing challenges.

Megha Kumar leads Oxford Analytica’s work on the Global Technology, Media and Telecommunications (TMT) sector.