Rulemaking for AI: How Global Standards Are Shaping Safety and Interoperability

Jeffrey Bardzell / Oct 28, 2025 / Strategic Planning

Artificial intelligence isn’t just changing how we work or shop; it’s reshaping national security, healthcare systems, and democratic processes. But without clear rules, these systems can go off track. Countries are no longer waiting for disasters to happen. They’re building rules now, before AI causes real harm. The question isn’t whether we need rules for AI. It’s how we make them work across borders.

Why Global AI Rules Can’t Wait

In 2024, a hospital in Germany used an AI system to prioritize emergency patients. The algorithm, trained on U.S. data, kept deprioritizing older patients because it had learned to associate age with higher treatment costs. It wasn’t biased by design; it was biased by its training data. That’s the problem with unregulated AI: it doesn’t care about your country’s laws, ethics, or values. It only follows patterns.

When AI systems cross borders, whether through cloud services, global supply chains, or open-source models, they don’t stop at customs. A facial recognition tool built in China can end up in police departments in Brazil. A hiring algorithm trained in India can screen applicants in Canada. Without shared standards, every country ends up playing catch-up.

That’s why the European Union’s AI Act, the U.S. Executive Order on AI, and China’s AI governance guidelines aren’t just national policies. They’re starting points for something bigger: a global framework. The goal isn’t to make everyone think the same. It’s to make sure no one’s AI can hurt someone else.

Three Pillars of International AI Rulemaking

Right now, three core areas are driving international cooperation on AI:

  • Safety: How do we prevent AI from causing physical or psychological harm?
  • Interoperability: Can systems from different countries talk to each other without breaking?
  • Standards: What metrics, tests, and certifications count as proof of compliance?

These aren’t abstract ideas. They’re being tested right now.

In 2023, the OECD launched the AI Principles for International Collaboration. Over 50 countries signed on. They agreed that AI systems should be transparent, accountable, and safe. But signing a document is easy. Making it real? That’s where things get messy.

Take safety. The U.S. National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework. It’s detailed. It’s technical. But if a startup in Kenya uses the same framework, will their test results be accepted in Germany? Not unless there’s a shared way to measure risk.

Interoperability: The Hidden Key to Global AI

Most people think of interoperability as a tech problem. It’s not. It’s a political one.

Imagine two hospitals. One uses an AI diagnostic tool from a U.S. company. The other uses one from Japan. Both need to share patient data to coordinate care. But the U.S. tool only accepts data in HL7 format. The Japanese tool requires DICOM. Even if both systems are safe and accurate, they can’t talk. Patients suffer. Doctors waste time. Costs rise.

That’s why the World Health Organization and the International Telecommunication Union started working on AI Health Interoperability Standards in early 2024. Their first release, called AI-HI 1.0, defines common data formats, API protocols, and audit trails for medical AI. Countries that adopt it can plug their systems into a global network.
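The mechanics behind a shared exchange format can be sketched in a few lines: each vendor maps its native record into one agreed-upon schema, so any two compliant systems can exchange data. Everything below is illustrative; the field names and mappings are invented for this example and are not taken from AI-HI 1.0, HL7, or DICOM.

```python
# Hypothetical common schema for a medical-AI finding. In a real standard
# the field list, types, and audit requirements would be far richer.
COMMON_FIELDS = {"patient_id", "timestamp", "finding", "confidence"}

def to_common(record: dict, mapping: dict) -> dict:
    """Translate a vendor-specific record into the common schema."""
    common = {dst: record[src] for src, dst in mapping.items()}
    missing = COMMON_FIELDS - common.keys()
    if missing:
        raise ValueError(f"record is missing required fields: {missing}")
    return common

# Two vendors with different native formats (keys are invented):
us_record = {"pid": "A-102", "ts": "2024-05-01T09:30Z",
             "dx": "pneumonia", "score": 0.91}
jp_record = {"kanja_id": "A-102", "jikoku": "2024-05-01T09:30Z",
             "shindan": "pneumonia", "shinraido": 0.91}

us_map = {"pid": "patient_id", "ts": "timestamp",
          "dx": "finding", "score": "confidence"}
jp_map = {"kanja_id": "patient_id", "jikoku": "timestamp",
          "shindan": "finding", "shinraido": "confidence"}

# Both translate to the same shape, so the two hospitals can compare notes.
assert to_common(us_record, us_map) == to_common(jp_record, jp_map)
```

The point of the sketch: neither vendor has to adopt the other's format. Each writes one adapter to the common schema, and interoperability follows.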

It’s not mandatory. But it’s becoming the default. Why? Because companies don’t want to build five different versions of the same tool. Hospitals don’t want to buy incompatible tech. Governments don’t want to pay for siloed systems.

Interoperability isn’t about forcing everyone to use the same AI. It’s about making sure they can work together, even if they’re built differently.


How Standards Are Actually Made (And Why It Matters)

Standards don’t come from politicians alone. They come from engineers, doctors, ethicists, and civil society groups sitting in rooms together.

The ISO/IEC JTC 1/SC 42 committee is the main global body for AI standards. It includes experts from 90 countries. In 2024, the committee released ISO/IEC 24028:2024, the first international standard for assessing AI trustworthiness. It covers transparency, fairness, robustness, and accountability. Not as a checklist, but as measurable outcomes.

Here’s what that means in practice:

  • A bank in Singapore can use the same test to prove its loan-approval AI isn’t discriminating against women.
  • A city in Mexico can verify that its traffic-prediction AI works reliably in rain, snow, and fog.
  • A university in Nigeria can audit its student grading AI using the same metrics as a school in Finland.
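To see what "measurable outcomes" can look like in practice, here is a minimal sketch of one widely used fairness check: the four-fifths (disparate impact) rule, which the bank in the first bullet might run on its loan-approval AI. The metric and the 80% threshold are common regulatory practice, not quoted from ISO/IEC 24028 itself, and the approval data is invented.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of applicants approved (1 = approved, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(group_a: list[int], group_b: list[int]) -> bool:
    """True if the lower approval rate is at least 80% of the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) >= 0.8 * max(rate_a, rate_b)

# Invented loan-approval outcomes for two demographic groups:
women = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 70% approved
men   = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved

print(passes_four_fifths(women, men))  # 0.7 >= 0.8 * 0.8, so True
```

What makes this a *standard* rather than just a script is agreement on exactly this kind of detail: which metric, which threshold, which groups. That shared agreement is what lets a certificate issued in one country mean something in another.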

This isn’t theoretical. The University of Cape Town used ISO/IEC 24028 to certify its AI-driven scholarship selection tool. The results were published publicly. Now, 12 other African universities are adopting it.

Standards like this reduce risk. They build trust. And they give smaller countries a voice.

The Real Obstacles: Power, Politics, and Profit

Not everyone wants global rules.

Some tech giants prefer a patchwork of laws. It lets them play countries against each other. If the EU demands explainability, they’ll move development to Singapore. If the U.S. relaxes data rules, they’ll shift training there.

Then there’s the issue of control. China, the U.S., and the EU each have different visions for AI governance. China focuses on stability and state oversight. The U.S. leans on innovation and market forces. The EU prioritizes rights and risk. These aren’t just technical differences; they’re cultural and political.

But here’s the thing: you don’t need global unity to get global rules. You need minimum common ground.

Take the Global AI Safety Summit in Seoul, 2024. The U.S. and China didn’t agree on everything. But they both signed onto a joint statement: “No AI system should be deployed if it can cause irreversible harm without human oversight.” That’s it. One sentence. But it’s now the basis for new laws in South Korea, Brazil, and Canada.

Progress doesn’t require perfect agreement. It requires practical steps.


What’s Working Right Now

Real progress is happening, not in grand treaties but in quiet collaborations.

  • The Global AI Certification Alliance, formed in 2024, brings together regulators from 17 countries to recognize each other’s AI safety certifications. If a company gets certified in Canada, it can use that same badge in Australia.
  • The African AI Governance Network helps 30 nations build local capacity to audit AI systems using shared tools and training. No foreign consultants. No expensive software. Just open-source frameworks adapted for local contexts.
  • The Open AI Safety Benchmark is a public repository of test cases for harmful AI behavior. It’s maintained by researchers from MIT, ETH Zurich, and the University of Nairobi. Companies use it to check their systems before launch.

These aren’t perfect. But they’re real. And they’re growing.

What Comes Next?

The next five years will decide whether AI becomes a force for global equity or deepens existing divides.

Here’s what’s on the horizon:

  • By 2026, the UN is expected to launch a Global AI Governance Observatory to track compliance and report violations.
  • Several countries are testing AI licensing systems, similar to driver’s licenses, for high-risk applications like policing and healthcare.
  • Open-source AI models are being tagged with digital passports that show where they were trained, what data they used, and who certified them.
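The "digital passport" idea in the last bullet can be sketched as a provenance record protected by a content hash: any change to the record breaks the fingerprint. The field names here are hypothetical, and a real scheme would use cryptographic signatures from a certifier rather than a bare hash, but the tamper-evidence principle is the same.

```python
import hashlib
import json

def make_passport(model_name: str, training_data: str, certifier: str) -> dict:
    """Build a provenance record and fingerprint its contents."""
    body = {"model": model_name, "trained_on": training_data,
            "certified_by": certifier}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "fingerprint": digest}

def verify_passport(passport: dict) -> bool:
    """Recompute the fingerprint and check it still matches."""
    body = {k: v for k, v in passport.items() if k != "fingerprint"}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest == passport["fingerprint"]

p = make_passport("open-model-7b", "public-web-corpus-2023", "EU notified body")
assert verify_passport(p)          # untouched passport checks out
p["trained_on"] = "something else"
assert not verify_passport(p)      # any edit breaks the fingerprint
```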

None of this will stop bad actors. But it will make it harder for them to hide. And it will give smaller players a fighting chance.

The goal isn’t to control AI. It’s to make sure it serves people, not the other way around.

Why can’t each country make its own AI rules?

They can-and many do. But AI doesn’t respect borders. A biased hiring tool built in one country can screen applicants worldwide. A medical AI that misdiagnoses patients in Brazil can be downloaded by a clinic in Nigeria. Without shared standards, companies face confusing, conflicting rules. Governments waste resources duplicating efforts. And ordinary people pay the price with unsafe or unreliable systems.

Are international AI standards legally binding?

No, not by themselves. Standards like ISO/IEC 24028 are voluntary. But they become de facto requirements when governments reference them in laws. For example, the EU AI Act cites specific ISO standards as proof of compliance. Companies that want to sell in the EU must meet them. That turns a voluntary standard into a market rule.

How do small countries influence global AI rules?

Through coalitions. The African AI Governance Network, the ASEAN AI Working Group, and the Latin American AI Ethics Consortium are examples. By pooling resources and speaking with one voice, smaller nations can push for standards that reflect their needs, like low-bandwidth AI or local-language training data. Global standards aren’t written only by the biggest tech companies. They’re shaped by the people who use them.

Can open-source AI be regulated?

Yes, but not by banning it. The focus is on transparency. New governance tools require open-source models to include a “model card”: a document that lists training data, known biases, performance limits, and certification status. Platforms like Hugging Face now require these cards for models to be hosted. It’s not perfect, but it’s a start. People can still use any model, but they can’t pretend they don’t know what they’re using.
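As a rough illustration, a hosting platform’s completeness check for model cards might look like the following. The required fields mirror the ones named above; the flat dict is a simplification (Hugging Face’s actual model-card metadata lives in YAML front matter inside a model’s README), and the example values are invented.

```python
# Fields a platform might insist on before hosting a model. These mirror
# the article's list; a real platform's schema would differ.
REQUIRED_FIELDS = {"training_data", "known_biases",
                   "performance_limits", "certification_status"}

def card_is_complete(card: dict) -> tuple[bool, set]:
    """Return (ok, missing_fields): a field counts as missing if absent or empty."""
    missing = {f for f in REQUIRED_FIELDS if not card.get(f)}
    return (not missing, missing)

card = {
    "training_data": "filtered web-crawl snapshot, 2023",
    "known_biases": "underperforms on low-resource languages",
    "performance_limits": "not validated for medical use",
    "certification_status": "",   # empty, so the card is incomplete
}

ok, missing = card_is_complete(card)
print(ok, missing)  # prints: False {'certification_status'}
```

The check doesn’t judge whether a model is good; it only refuses to let a model ship with its documentation blank, which is exactly the transparency point made above.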

What happens if a country ignores global AI standards?

They risk isolation. Companies won’t invest in AI systems that can’t be sold abroad. Hospitals won’t buy tools that can’t integrate with global networks. Researchers won’t share data with institutions that don’t follow basic safety norms. The cost isn’t just economic; it’s reputational. In a world where AI trust matters, being an outlier is expensive.

Global AI rulemaking isn’t about control. It’s about coordination. It’s about making sure that when an AI decides who gets a loan, who gets treated first, or who gets hired, the decision is made with care, clarity, and accountability. The tools are here. The frameworks exist. What’s missing isn’t technology. It’s will.