Shaping AI’s future: can the world agree on regulation?
The US is opting for innovation-driven ‘light-touch’ AI regulation while Europe favours stricter safeguards. How will this global regulatory divide shape the future of AI development and trade dynamics, asks David Wood
November 19, 2024
Two years on from the launch of ChatGPT, the growth of AI has been rapid. As rapid, in fact, as the growth of our concerns about our ability to control it.
Alongside the promise that it is the next industrial revolution, the gateway to a new era of productivity and economic growth, come fears that it poses, at worst, an existential threat to humanity and, more likely, hands bad actors a powerful tool to do harm.
This is where AI regulation comes in: a set of rules designed to let us reap the benefits of AI, with its potential to transform our lives economically and socially, while protecting us from its dangers.
Major economies and political institutions have been grappling with this challenge for some time, and some clear differences have emerged, with the US mostly favouring light-touch regulation aimed at allowing the industry to grow.
In contrast, the EU has gone for top-down, heavier regulation with more emphasis on safety and protection.
The Trump 2.0 approach
While the headlines around Trump’s re-election in the US have gravitated towards the differences between the Biden/Harris and Trump camps, when it comes to AI there remains plenty of overlap.
Under the Biden administration, AI regulation has been fairly light touch, with many of the executive orders addressing the threat of AI amounting to voluntary codes rather than the EU’s hard-and-fast rules backed by financial penalties.
Trump 2.0, advised by tech figures such as Elon Musk, is likely to advocate a deregulatory stance, contrasting with the EU approach, but with specific areas of concern around AI safety and national security.
Trump likely to continue down deregulation path, but with safety and national security concerns
“I would be surprised if you saw the administration push for federal AI legislation, but I think we will see a stronger embrace around areas such as Defence, National Security and Intelligence,” says US technology expert Bill Whyman, principal of Tech Dynamics LLC and a former senior manager at Amazon Web Services.
“It will be agency-specific regulation, but there will be much more caution around general prescriptive rules.”
Gregory Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS), argues that Trump will be more about continuation than redirection, maintaining the broad thrust of policy under Biden.
Trump will tweak the 2023 executive orders on preventing AI technologies from limiting civil liberties but will broadly maintain the voluntary codes, says Allen, formerly director of strategy and policy at the Department of Defense (DOD) Joint Artificial Intelligence Center.
Trump’s focus will be deregulation to help the US ramp up to meet the energy needs of the fast-growing industry, enabling Big Tech companies like Amazon and Google to build power stations in the US rather than elsewhere, with all the jobs that will create.
Trump might be deregulatory but not at the expense of safety, given that one of his major influences will be OpenAI co-founder Elon Musk, who has often expressed concerns about the uncontrolled development of AI on human-extinction grounds, Allen added on a recent episode of the CSIS AI Policy Podcast.
Last week it was announced that Trump had handed Musk a government-efficiency role in the incoming administration.
The other area where Trump might take an active interest is semiconductor export rules. Biden has sought to restrict Chinese access to some of the US’s most powerful chips, and Trump is likely to continue this, perhaps using it as a bargaining chip in the wider dispute over tariffs on Chinese imports.
Stark contrasts
The US approach stands in stark contrast to the AI rules already in place in the EU and countries such as China, both of which favour more top-down regulation.
The likely result will be a fragmented global regulatory environment, creating trade frictions between the US and its trading partners despite early efforts at cooperation. Trump’s promise to introduce trade tariffs makes that outcome even more likely.
With the EU’s AI Act likely to influence other major nations, US leadership in AI development could become more difficult to sustain, with the likes of Amazon, Apple, Google, Meta, Microsoft and Nvidia facing multiple AI regimes around the world.
The EU’s AI Act
The European Commission is the organisation that has gone furthest, fastest in producing a regulatory framework with its AI Act, billed as the most comprehensive set of rules to date for managing the many risks that AI could create.
Turing Award-winning “godfathers” of AI Yoshua Bengio and Geoffrey Hinton have warned of dramatic risks that threaten humanity’s existence. Bengio, Hinton, Bill Gates, top executives from Google, Microsoft and OpenAI, and many AI luminaries from China and Russia have signed an open letter stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The precautionary principle says that low-probability, high-impact outcomes should be taken seriously.
European Commission executive vice president Margrethe Vestager, who headed up the EU AI Act’s development, admitted back in 2023 that while the threat of extinction shouldn’t be dismissed, the bigger threat is that AI tools may be used to discriminate against groups of people in society, perpetuating social inequality.
“There is also a significant risk that the criminal sector digitises much faster than the rest of us,” she added. “Developments in voice recognition mean that if your social feed can be scanned to profile you then the risk of being manipulated is enormous. If we end up in a situation where we believe nothing [is real] then we have undermined our society.”
Four-tier approach
The AI Act differentiates between four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. The higher the risk, the stricter the rules.
‘Unacceptable risk’ refers to AI that deploys subliminal techniques to distort behaviour, or that exploits the vulnerabilities of specific groups based on age, disability or racial grouping. Other prohibited uses include AI systems that manipulate people’s decisions or persuade them to engage in unwanted behaviours, or that use biometric data to infer things about them.
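To picture how the tiering works in practice, it helps to think of it as a lookup from use case to risk level, with obligations attached to the level rather than to the individual system. The sketch below is purely illustrative: the four tier names come from the Act, but the example use cases, the `RiskTier` type and the obligation summaries are hypothetical simplifications, not official classifications or tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk levels (names from the Act; the rest is illustrative)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases to tiers; the real classification
# depends on the Act's detailed definitions and annexes.
EXAMPLE_USE_CASES = {
    "subliminal behavioural manipulation": RiskTier.UNACCEPTABLE,
    "exploiting vulnerabilities of specific groups": RiskTier.UNACCEPTABLE,
    "general-purpose chatbot": RiskTier.LIMITED,   # transparency duties apply
    "spam filtering": RiskTier.MINIMAL,
}

# One-line summaries of what each tier implies under the Act's scheme.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited outright",
    RiskTier.HIGH: "strict obligations before and after deployment",
    RiskTier.LIMITED: "transparency requirements, e.g. disclosing AI-generated content",
    RiskTier.MINIMAL: "no additional requirements",
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```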
Generative AI like ChatGPT faces additional requirements, including disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
The Act, which will be fully applicable by 2026, has teeth: those who break its strictest prohibitions risk penalties of up to €35m or 7% of global annual turnover, whichever is higher.
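Because the penalty cap is expressed as the higher of a flat amount and a share of turnover, the ceiling scales with company size. A minimal sketch of the arithmetic, with invented turnover figures for illustration:

```python
def max_ai_act_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious breaches: EUR 35m or
    7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A mid-sized firm is capped by the flat EUR 35m figure...
print(f"EUR {max_ai_act_fine_eur(100_000_000):,.0f}")      # EUR 35,000,000
# ...while at Big Tech scale the 7% turnover test dominates.
print(f"EUR {max_ai_act_fine_eur(500_000_000_000):,.0f}")  # EUR 35,000,000,000
```

In other words, for the largest players the flat figure is irrelevant; the effective cap is the turnover percentage.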
According to Vestager, the EU’s AI Act points the way for a global dialogue on how to regulate. Speaking at an EC workshop on AI this summer, she noted that the Commission is also keen to regulate another risk that goes hand in hand with AI development: that of a handful of existing Big Tech companies using their market power to dominate a growing industry.
EC executive vice president Margrethe Vestager
“This isn’t like 20 years ago when we were just figuring out the internet and the digital economy was just budding,” declared Vestager. “We are now dealing with existing market power and all the issues that come with it.”
“Strong competition enforcement is always needed at times of big industrial and tech changes. It is then that markets can tip, that monopolies can be formed, and that innovation can be snuffed out.”
Why Europe has no Big Tech players
Not everybody is supportive of the EC’s AI Act, notably powerful voices in America’s tech world, as AJ Bell investment director Russ Mould points out.
“The US tech company view boils down to: ‘If you are wondering why Europe has no big tech companies, now you know. As soon as a new one appears, they try to regulate it, rein it in and protect existing industries.’”
Nicolai Tangen, CEO of Norges Bank Investment Management, told the Financial Times’ Unhedged podcast recently that heavy AI regulation in Europe meant it was likely that the big tech players would continue to come from the US. “In America, you have lots of AI and little regulation, and in Europe, you have little AI and lots of regulation,” he said.
AI enthusiast, influential venture capitalist and Trump proselyte Marc Andreessen sees the drive to regulate AI as rooted in hysterical, irrational fear, and adds that Big Tech in general is pushing AI regulation to protect its economic interests. That’s because big companies have the resources to cope with extensive AI rules, while small companies and start-ups (the kind venture capitalists invest in) mostly do not, says Andreessen.
The real risk is that Big Tech companies with an interest in AI are allowed to achieve ‘regulatory capture’, establishing a government-protected cartel that is insulated from market competition, say the venture capitalists.
Top-down or bottom-up
Tech sector expert Bill Whyman, who wrote a report on AI regulation for the Center for Strategic and International Studies, argues that top-down regulation is not necessarily the way to go, particularly in the US market.
“Yes, you need rules to protect people and make it sustainable, but being first to regulate is not a good or bad thing in itself. People who tend to favour AI regulation (whether in the US or EU) think that strong national top-down rules are good, but I don’t think that necessarily fits the US case.”
“A bottom-up decentralised approach has advantages and that shouldn’t be dismissed, especially at the initial development stage of an emerging technology. So, while it’s attractive to want to regulate, you need to figure out what you want to do first, then figure out the governance structure to implement that.”
Whyman argues that any regime needs to distinguish between legitimate businesses seeking to comply and malicious actors with bad intent, who need tougher rules and punitive deterrents.
For other, lower-risk areas, pre-approval or licensing of AI won’t produce the desired innovation, he declares.
“All say they want innovation, but pre-approval or licensing deliberately creates barriers to market entry and hence works against innovation and open competition. Outside of high-risk areas, a lighter-touch pre-release notification system may achieve similar goals.”