Business
In Seoul summit, heads of state and businesses commit to AI safety
Government officials and AI industry executives agreed on Tuesday to apply basic safety measures in the fast-moving field and to establish an international AI safety research network.
Nearly six months after the inaugural global summit on AI safety at Bletchley Park in England, Britain and South Korea are hosting the AI safety summit this week in Seoul. The gathering underscores the new challenges and opportunities the world faces with the advent of AI technology.
The British government announced on Tuesday a new agreement between 10 countries and the European Union to establish an international network of institutes similar to the U.K.’s AI Safety Institute, the world’s first publicly backed body of its kind, to accelerate the advancement of AI safety science. The network will promote a shared understanding of AI safety and align its work on research, standards, and testing. Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the U.K., and the U.S. have signed the agreement.
On the first day of the AI Seoul Summit, global leaders and leading AI businesses convened for a virtual meeting chaired by U.K. Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol to discuss AI safety, innovation, and inclusion.
During the discussions, the leaders agreed to the broader Seoul Declaration, emphasizing increased international collaboration in building AI to address major global issues, uphold human rights, and bridge digital gaps worldwide, all while prioritizing being “human-centric, trustworthy, and responsible.”
“AI is a hugely exciting technology — and the U.K. has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year,” Sunak said in a U.K. government statement. “But to get the upside, we must ensure it’s safe. That’s why I’m delighted we have got an agreement today for a network of AI Safety Institutes.”
Just last month, the U.K. and the U.S. signed a memorandum of understanding to collaborate on research, safety evaluation, and guidance on AI safety.
The agreement announced today follows the world’s first AI Safety Commitments from 16 businesses involved in AI, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai. (Zhipu.ai is a Chinese company backed by Alibaba, Ant, and Tencent.)
The AI businesses, which include companies from the U.S., China, and the United Arab Emirates (UAE), have committed to “not develop or deploy a model or system at all if mitigations cannot keep risks below the thresholds,” according to the U.K. government statement.
“It’s a world first to have so many leading AI businesses from so many different parts of the globe all agreeing to the same commitments on AI safety,” Sunak said. “These commitments ensure the world’s leading AI businesses will provide transparency and accountability on their plans to develop safe AI.”