By Samuel King
BRUSSELS, Belgium (IPS) – Algorithms decide who lives and dies in Gaza. AI-powered surveillance tracks journalists in Serbia. Autonomous weapons are paraded through Beijing’s streets in displays of technological might. This isn’t dystopian fiction – it’s today’s reality. As AI reshapes the world, the question of who controls this technology and how it’s governed has become an urgent priority.
AI’s reach extends into surveillance systems that can track protesters, disinformation campaigns that can destabilise democracies and military applications that dehumanise conflict by removing human agency from life-and-death decisions. All of this is enabled by the absence of adequate safeguards.
Governance failings
Last month, the UN General Assembly adopted a resolution establishing the first international mechanisms meant to govern the technology – an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance – agreed as part of the Global Digital Compact at the Summit of the Future in September 2024. This non-binding resolution marked a first positive step towards potentially stronger regulation. But its negotiation process revealed deep geopolitical fractures.
Through its Global AI Governance Initiative, China champions a state-led approach that entirely excludes civil society from governance discussions, while positioning itself as a leader of the global south. It frames AI development as a tool for economic advancement and social objectives, presenting this vision as an alternative to western technological dominance.
Meanwhile, the USA under Donald Trump has embraced techno-nationalism, treating AI as a tool for economic and geopolitical leverage. Recent decisions, including a 100 per cent tariff on imported AI chips and the purchase of a 10 per cent stake in chipmaker Intel, signal a retreat from multilateral cooperation in favour of transactional bilateral arrangements.
The European Union (EU) has taken a different approach, adopting the world’s first comprehensive AI law, the AI Act, which becomes fully applicable in August 2026. Its risk-based regulatory framework represents progress, banning AI systems deemed to present ‘unacceptable’ risks while requiring transparency measures for others. Yet the legislation contains troubling gaps.
While an unconditional ban on live facial recognition technology was initially proposed, the AI Act’s final version permits limited use, with safeguards that human rights groups argue are inadequate. Further, while emotion recognition technologies are banned in schools and workplaces, they remain permitted for law enforcement and immigration control, a particularly concerning decision given existing systems’ documented racial bias. The ProtectNotSurveil coalition has warned that migrants and Europe’s racial minorities are serving as testing grounds for AI-powered surveillance and tracking tools. Most critically, the AI Act exempts systems used for national security purposes and autonomous drones used in warfare.
The growing climate and environmental impacts of AI development add another layer of urgency to governance questions. Interactions with AI chatbots consume roughly 10 times more electricity than standard internet searches. The International Energy Agency projects that global data centre electricity consumption will more than double by 2030, with AI driving most of this increase. Microsoft’s emissions have grown by 29 per cent since 2020 due to AI-related infrastructure, while Google quietly removed its net-zero emissions pledge from its website as AI operations pushed its carbon footprint up 48 per cent between 2019 and 2023. AI expansion is driving the construction of new gas-fired power plants and delaying plans to decommission coal facilities, in direct contradiction to the need to end fossil fuel use to limit global temperature rises.
Champions needed
The current patchwork of regional regulations, non-binding international resolutions and lax industry self-regulation falls far short of what’s needed to govern a technology with such profound global implications. State self-interest continues to prevail over collective human needs and universal rights, while the companies that own AI systems accumulate immense power largely unchecked.
The path forward requires an acknowledgment that AI governance isn’t merely a technical or economic issue – it’s about power distribution and accountability. Any regulatory framework that fails to confront the concentration of AI capabilities in the hands of a few tech giants will inevitably fall short. Approaches that exclude civil society voices or prioritise national competitive advantage over human rights protections will prove inadequate to the challenge.
The international community must urgently strengthen AI governance mechanisms, starting with binding agreements on lethal autonomous weapons systems, discussions of which have stalled at the UN for over a decade. The EU should close the loopholes in its AI Act, particularly regarding military applications and surveillance technologies. Governments worldwide need to establish coordination mechanisms that can effectively counter tech giants’ control over AI development and deployment.
Civil society must not stand alone in this fight. Any hopes of a shift towards human rights-centred AI governance depend on champions emerging within the international system to prioritise human rights over narrowly defined national interests and corporate profits. With AI development accelerating rapidly, there’s no time to waste.
Samuel King is a researcher with the Horizon Europe-funded research project ENSURED: Shaping Cooperation for a World in Transition at CIVICUS: World Alliance for Citizen Participation. For interviews or more information, please contact research@civicus.org