![Digital blue humanoid face dissolving into fragmented pixels on a dark background, symbolizing AI or technology. Futuristic and abstract.](https://static.wixstatic.com/media/6f60ff_4ab63a9a19b949b196f427cdb886cd5c~mv2.jpg/v1/fill/w_980,h_517,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/6f60ff_4ab63a9a19b949b196f427cdb886cd5c~mv2.jpg)
In a significant development at the Paris Global Summit on AI, both the United States and the United Kingdom have opted out of signing an international agreement aimed at fostering an "open," "inclusive," and "ethical" approach to artificial intelligence development. The move sets them apart from 60 other nations, including France, China, and India, that have committed to the initiative.
A Global Divide on AI Regulation
The UK government justified its decision by citing national security concerns and a lack of practical clarity on global AI governance. In a statement, the government acknowledged that while it agreed with much of the leaders' declaration, it felt the document fell short in addressing the tougher questions surrounding AI security.
"We felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it," a UK government spokesperson stated.
Similarly, the US remains hesitant about adopting stringent AI regulations. US Vice President JD Vance emphasized that overregulation could stifle AI's potential, aligning the decision with the Trump administration’s pro-business stance.
"AI is an opportunity that the Trump administration will not squander," Vance declared. "Pro-growth AI policies should be prioritized over safety. Too much regulation could kill a transformative industry just as it's taking off."
His comments put him at odds with French President Emmanuel Macron, who strongly advocated for AI safeguards.
"We need these rules for AI to move forward," Macron argued, underscoring the necessity of regulatory oversight to ensure AI's safe and responsible development.
UK's AI Leadership in Question
The UK’s decision to distance itself from the agreement has raised eyebrows, particularly given its recent role in spearheading global AI safety discussions. In November 2023, then-Prime Minister Rishi Sunak hosted the world’s first AI Safety Summit, championing ethical AI governance.
Andrew Dudfield, head of AI at fact-checking organization Full Fact, suggested the UK’s credibility on AI safety may be at risk.
"By refusing to sign today's international AI Action Statement, the UK government risks undercutting its hard-won credibility as a world leader for safe, ethical, and trustworthy AI innovation," Dudfield warned.
However, UKAI, a trade body representing AI businesses in the UK, endorsed the government’s approach.
"While UKAI agrees that being environmentally responsible is important, we question how to balance this responsibility with the growing needs of the AI industry for more energy," said UKAI Chief Executive Tim Flagg.
Flagg cautiously welcomed the government’s stance, noting, "UKAI sees this as an indication that the government will explore pragmatic solutions, ensuring close collaboration with our US partners."
A Broader Struggle for AI Governance
The Paris agreement outlines a framework to bridge digital divides, promote transparency, and ensure AI remains "secure and trustworthy." It also acknowledges concerns about AI’s escalating energy consumption, an issue highlighted for the first time at an international summit.
Yet, skepticism remains over whether global regulations can be effectively implemented, particularly when economic and geopolitical stakes are high.
"I’m not surprised that the US and UK wouldn’t sign up to the agreement suggested at the Global Summit on AI in Paris this week," said Adam Marrè, Arctic Wolf’s Senior Vice President, Chief Information Security Officer, and former FBI Special Agent. "It’s early days and world governments are jockeying for influence in this fast-evolving and changing AI debate. Neither government wants to risk losing ground in this critical period of AI adoption and evolution and probably want to keep an open mind on regulations and controls."
Marrè added, "Given the lack of meaningful regulation on technology companies in the U.S., it's no surprise that the U.S. and U.K. didn’t sign this AI declaration. While ensuring safe and ethical adoption of AI is critical, governments often struggle to effectively regulate emerging technologies, especially when economic and national interests are at play. Also, we have to be cognizant of the fact that rogue states and criminal elements will not be constrained by any regulation in their attempts to exploit AI for their own goals."
AI, Trade, and the US-UK Balancing Act
The AI summit comes against a backdrop of intensifying trade tensions between the US and Europe. The Trump administration recently announced tariffs on steel and aluminum imports, directly affecting the UK and EU economies. While the UK has refrained from immediate retaliation, it is delicately navigating its relationships with both the US and the EU.
Downing Street denied that its AI stance was influenced by Washington, insisting that the decision was based on national interests.
"This isn’t about the US, this is about our own national interest—ensuring the balance between opportunity and security," a spokesperson stated.
The Future of AI Policy
As global leaders continue to negotiate the future of AI governance, the refusal of two of the world’s most influential AI powers to sign the agreement underscores a deepening divide. While the UK and US argue for a more flexible, market-driven approach, others, like the EU and France, advocate for stronger regulatory frameworks.
Ursula von der Leyen, President of the European Commission, reiterated the EU's position, stating, "This summit is focused on action, and that is exactly what we need right now." She emphasized that Europe’s AI strategy would champion innovation, collaboration, and the power of open-source technology.
As AI continues to revolutionize industries and societies, the world’s major economies face a critical challenge: how to balance innovation with accountability. The decisions made today will shape the future of AI for generations to come.