AI Governance or AI Dominance? How the U.S. and China Are Weaponizing Regulation

In 2024, the world witnessed the release of OpenAI's GPT-5, followed swiftly by Baidu’s ERNIE 5.0 in China. Both were hailed as breakthroughs, but behind the scenes, something far more significant was underway: a geopolitical contest over who gets to control the rules of artificial intelligence.

What began as a race to build the most powerful models has become a race to write the laws that will shape how AI is built, deployed, and controlled — not just domestically, but globally.

The U.S. Playbook: AI as a Techno-Democratic Instrument

The United States has embraced a narrative of “responsible AI leadership,” championed by the Biden administration and figures like Senate Majority Leader Chuck Schumer. In October 2023, the Executive Order on Safe, Secure, and Trustworthy AI directed federal agencies to set new standards for model evaluation, red-teaming, and the watermarking of synthetic content.

But the real power move came when the U.S. Commerce Department began pushing for the adoption of U.S.-style AI standards in international trade and military alliances.

“This is not just about safety. It’s about soft power,” says Wendy Wong, a political scientist at the University of Toronto. “Whoever sets the AI norms gets to shape the future of global governance.”

In February 2024, at the Munich Security Conference, U.S. officials met with European allies to propose a NATO-aligned framework for defense applications of generative AI — a subtle but important signal that AI regulation is now being fused with foreign policy.

China’s Strategy: Regulation Meets State Leverage

China, meanwhile, has taken a radically different approach. Rather than letting industry lead, the Cyberspace Administration of China (CAC) issued sweeping rules, including its 2023 Interim Measures for the Management of Generative AI Services, mandating that all generative AI systems reflect “core socialist values” and pass content security reviews before release.

In March 2024, the Chinese government forced Alibaba and ByteDance to delay public releases of new AI chatbots until the CAC approved their training data and output filters. A leaked memo from a Baidu internal meeting (reported by Nikkei Asia) revealed concerns that excessive regulation was “blunting innovation” — but the state pushed forward.

Interestingly, Beijing has begun exporting its AI governance model. Through the Digital Silk Road initiative, China has promoted its AI content-moderation rules in partner countries like Pakistan, Laos, and Egypt, all under the banner of “cyber sovereignty.”

The EU: The Middle Power With Legal Muscle

Caught between Washington and Beijing, the European Union passed the landmark AI Act in May 2024, categorizing AI systems by risk level. High-risk systems (e.g., remote biometric identification) now face strict compliance demands, including documentation, transparency, and human oversight requirements.

But critics argue that the EU lacks the enforcement muscle to regulate global firms. When Meta released a new AI model trained on European user data without explicit consent, the French data protection authority CNIL issued a warning, but no fines were levied.

“The EU has the right ideas, but enforcement is the Achilles heel,” says Miriam Vogel, CEO of EqualAI.

Still, the EU’s Digital Markets Act and Digital Services Act are influencing regulation beyond its borders. Brazil and South Korea have borrowed parts of the AI Act’s risk-based structure, citing the EU’s GDPR as a global legal template.

The Risks of AI Nationalism

The rise of regulatory nationalism has real consequences. In June 2024, researchers at Stanford published a study comparing outputs from GPT-5 (U.S.), ERNIE 5.0 (China), and a model from Germany’s Aleph Alpha on politically sensitive questions.

The results showed stark ideological divergences:

  • The U.S. model leaned libertarian and pro-market.
  • The Chinese model avoided political topics entirely.
  • The German model emphasized environmental and human rights angles.

What this means is that AI models are becoming nationalized minds — reflecting not just data, but values.

Toward a Fragmented AI Future

In a recent interview, MIT professor Max Tegmark warned that “we're entering a dangerous phase of AI fragmentation, where different power blocs build incompatible systems with no shared guardrails.” He advocates for a UN-style treaty on AI development, akin to the Paris Agreement for climate change.

But talks at the UN’s AI for Good Global Summit in Geneva this June collapsed after the U.S. delegation refused to cede any enforcement power to multilateral institutions — and China did not attend at all.

Conclusion: Not Just Code, But Control

AI is not just a technology — it’s a governance arena, a strategic weapon, and a cultural force. Whether we like it or not, global AI regulation is being shaped not by scientists or ethicists alone, but by geopolitical maneuvering and ideological export.

For all the talk of alignment and cooperation, the reality in 2025 is this: the race isn’t just for AI dominance — it’s for AI governance supremacy.

As new models emerge and elections loom in more than 50 countries, the question for democracies, dictatorships, and digital citizens alike is no longer "what can AI do?" — but "who decides what AI is allowed to do?"