‘Engine of inequality’: delegates discuss AI’s global impact at Paris summit

French President Emmanuel Macron speaks at the plenary session of the AI Action Summit. Screenshot Élysée/YouTube
This story was originally published by The Guardian and appears here as part of the Climate Desk collaboration.
The impact of artificial intelligence on the environment and on inequality featured in the opening exchanges of a global summit in Paris attended by political leaders, tech executives and experts.
Emmanuel Macron’s AI envoy, Anne Bouverot, opened the two-day gathering at the Grand Palais in the heart of the French capital with a speech referring to the environmental impact of AI, which requires vast amounts of energy and resources to develop and operate.
“We know that AI can help mitigate climate change, but we also know that its current trajectory is unsustainable,” Bouverot said. Sustainable development of the technology would be on the agenda, she added.
The general secretary of the UNI Global Union, Christy Hoffman, warned that without worker involvement in the use of AI, the technology risked increasing inequality. The UNI represents about 20 million workers worldwide in industries including retail, finance and entertainment.
“Without worker representation, AI-driven productivity gains risk turning the technology into yet another engine of inequality, further straining our democracies,” she told attenders.
On Sunday, Macron promoted the event by posting a montage of deepfake images of himself on Instagram, including a video of “him” dancing in a disco with various 1980s hairstyles, in a tongue-in-cheek reference to the technology’s capabilities.
Although safety has been downplayed on the conference agenda, some in attendance were concerned about the pace of development.
Max Tegmark, the scientist behind a 2023 letter calling for a pause in producing powerful AI systems, cautioned that governments and tech companies were inadvertently re-enacting the ending of the Netflix climate crisis satire Don’t Look Up.
The film, starring Leonardo DiCaprio and Jennifer Lawrence, uses a looming comet, and the refusal of the political and media establishment to acknowledge the existential threat, as a metaphor for the climate emergency – with the comet ultimately wiping out the planet.
“I feel like I have been living that movie,” Tegmark told the Guardian in an interview. “But now it feels like we’ve reached the part of the film where you can see the asteroid in the sky. And people are still saying that it doesn’t exist. It really feels like life imitating art.”
Tegmark said the promising work at the inaugural summit at Bletchley Park in the UK in November 2023 had been partly undone. “Basically, asteroid denial is back in full swing,” he said.
The Paris gathering has been badged as the AI action summit, whereas its UK cousin was the AI safety summit. Macron is co-chairing the summit with India’s prime minister, Narendra Modi. The US vice-president, JD Vance, and Chinese vice-premier, Zhang Guoqing, are among the other political attenders, although the UK prime minister, Sir Keir Starmer, is not attending.
Existential concerns about AI focus on the development of artificial general intelligence, the term for systems that can match or exceed human intellectual capabilities at nearly all cognitive tasks. Estimates of when, and if, AGI will be reached vary, but Tegmark said that based on statements from industry figures “the asteroid is going to strike … somewhere between one and five years from now”.
Developments in AI have accelerated since 2023, with the emergence of so-called reasoning models pushing the capabilities of systems even further. The release of a freely available reasoning model by the Chinese company DeepSeek has also intensified the competitive rivalry between China and the US, which has led the way on AI breakthroughs.
The head of Google’s AI efforts, Demis Hassabis, said on Sunday the tech industry was “perhaps five years away” from achieving AGI and safety conversations needed to continue. “Society needs to get ready for that and … the implications that will have.”
Speaking in Paris before the summit, Hassabis added that AGI carried “inherent risk”, particularly in the field of autonomous “agents”, which carry out tasks without human intervention, but those concerns could be assuaged.
“I’m a big believer in human ingenuity. I think if we put the best brains on it, and with enough time and enough care … then I think we’ll get it right.”
Comments
I really wonder, as governments start to talk about AI (artificial intelligence) or AGI (artificial general intelligence), how much they really understand what it is and what it can achieve. Given how politicians deal with many issues, history shows how misinformed and biased they can be on topics and the regulations they pass. This is very evident listening to some US Republicans' comments on AI that are based more on conspiracy nonsense than fact.
Are there safety concerns with AI or AGI? Sure, without a doubt. But do they think that the companies building these vast AI models have not taken the time to deal with safety concerns and the potential for misuse?
AI itself is a simulation of human intelligence and makes many of us more productive and quicker to solve the issues we are working on. It is so much more productive to use an AI query that returns results in seconds than to spend hours going through textbooks or databases. When developing applications, you can build the foundation of a functional app in a day, rather than the days or weeks it would otherwise take. But regardless, you still need to possess the background in whatever you are using AI for, as it makes mistakes or offers incorrect solutions.
AGI, on the other hand, is a more advanced form that can understand and learn. AGI can adapt and learn across diverse domains, compared with AI, which is narrow in scope. AGI can also make mistakes based on bad information, depending on how it is applied.
AGI is a game changer when it comes to healthcare, especially in diagnosing rare diseases. It has even been useful in analyzing X-rays, CT or MRI scans, spotting health issues before any symptoms appear.
With other uses of AGI, there is a risk that these systems could exceed human intelligence, and that is a major concern for the people building them. For now AGI remains aligned with humanity's interests, but more research is underway to build fail-safes and monitoring mechanisms to ensure it stays that way.