White House announces new AI initiatives at Global Summit on AI Safety
Vice President Kamala Harris will outline several new AI initiatives today, laying out the US government’s plans to advance the safe and responsible use of machine learning technology. We already know what many of them will be.
The White House announced an executive order on AI regulation earlier this week, intended to protect US citizens from the potential harms the technology can cause. It is now building on that order, aiming to position the US as a global leader in ensuring AI is developed and used in the public interest.
Currently in London to attend the Global Summit on AI Safety, Harris is scheduled to deliver her live-streamed speech on the US’ approach to AI at approximately 1:35 p.m. GMT / 9:35 a.m. ET.
“Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Harris said in an excerpt from her prepared speech. “These threats are often referred to as the ‘existential threats of AI,’ because they could endanger the very existence of humanity.”
“So, the urgency of this moment must compel us to create a collective vision of what this future must be. A future where AI is used to advance human rights and human dignity; where privacy is protected and people have equal access to opportunity; where we make our democracies stronger and our world safer. A future where AI is used to advance the public interest.”
Here are the new announcements and government initiatives Harris will reveal.
1. The US is establishing a United States AI Safety Institute
The US government is establishing a United States AI Safety Institute (US AISI), which will be part of the National Institute of Standards and Technology (NIST). Created through the Department of Commerce, the US AISI will be responsible for applying NIST’s AI Risk Management Framework, developing benchmarks, best practices, and technical guidance to mitigate the risks of AI. These will then be used by regulators when developing or enforcing rules. The US AISI will also collaborate with similar institutions internationally.
2. The first draft of policy guidance for the US government’s use of AI is being made available for public comment
The US government is publishing the first draft of its policy guidance on its use of AI, with the public invited to comment. Released through the Office of Management and Budget, this policy is intended to outline tangible steps for the responsible use of AI by the US, and builds on previous guidance such as NIST’s AI Risk Management Framework. The policy is intended for application across a wide range of departments, including health, law enforcement, and immigration, and requires that federal departments monitor the risks of AI, consult the public regarding its use, and provide an avenue of appeal to those harmed by it.
You can read the draft policy and submit your comments here.
3. 30 nations have joined the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy
The US made its Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy in February, establishing standards for the lawful, responsible use and development of military AI. This included the requirement that it comply with international humanitarian law. Notably, a specific goal of the Political Declaration is to preserve nations’ “right to self-defense,” as well as their ability to develop and use AI for military purposes.
Thirty other nations have now endorsed this Declaration as well, specifically Albania, Australia, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Iceland, Ireland, Italy, Japan, Kosovo, Latvia, Liberia, Malawi, Montenegro, Morocco, North Macedonia, Portugal, Romania, Singapore, Slovenia, Spain, Sweden, and the UK.
4. 10 foundations have pledged over $200 million for public interest AI initiatives
Ten foundations are collectively committing over $200 million to fund AI initiatives intended to advance the interests of the global public, specifically workers, consumers, communities, and historically marginalised people. The foundations are also creating a funders’ network, which will coordinate such giving with the specific aim of supporting AI work that protects democracy and rights, drives innovation in the public interest, empowers workers amidst the changes being brought about by AI, improves accountability, or supports international rules regarding AI.
The 10 foundations involved are the David and Lucile Packard Foundation, Democracy Fund, the Ford Foundation, Heising-Simons Foundation, the John D. and Catherine T. MacArthur Foundation, Kapor Foundation, Mozilla Foundation, Omidyar Network, Open Society Foundations, and the Wallace Global Fund.
5. The US government will hold a hackathon to find a solution to scam AI robocalls
The US government will host a virtual hackathon with the goal of building AI models that can detect and block robocalls and robotexts used to scam people. The hackathon will have a particular focus on calls that use AI-generated voices.
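As a loose, hypothetical illustration of the kind of model a hackathon entry might prototype (none of this comes from the government’s announcement), the sketch below trains a toy classifier that labels a call clip as human or AI-generated from pre-extracted acoustic features; the feature matrix and labels are random placeholders.

```python
# Purely illustrative sketch of a "synthetic voice vs. human voice" classifier,
# the kind of model a robocall-detection hackathon entry might prototype.
# The feature matrix and labels are random placeholders, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in for per-call acoustic features (e.g. MFCC or spectral statistics).
X = rng.normal(size=(500, 20))
# Stand-in labels: 1 = AI-generated voice, 0 = human caller.
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy on placeholder data: {model.score(X_test, y_test):.2f}")
```

A real entry would of course need labelled recordings of genuine and AI-generated voices, plus a feature-extraction step, in place of the placeholder arrays.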
6. The US is calling for international authentication standards for digital government messaging
The US is calling on the global community to support the development of international standards for digital and AI content produced by governments. Such standards would be aimed at helping the public identify whether or not an apparent government message is authentic, and may include labelling such as digital signatures or watermarks.
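The digital-signature option in particular relies on standard public-key cryptography. As a rough illustration only, not any proposed standard, the sketch below uses Python’s third-party cryptography package to sign and verify a hypothetical agency announcement with an Ed25519 key pair; the agency and message are made up.

```python
# Minimal sketch of signing and verifying an official message with a digital
# signature, using the `cryptography` package. The agency and message are
# hypothetical; a real standard would also cover key publication, formats,
# and revocation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agency generates a key pair and publishes the public half.
agency_private_key = Ed25519PrivateKey.generate()
agency_public_key = agency_private_key.public_key()

announcement = b"Official notice: polling locations open at 7 a.m."
signature = agency_private_key.sign(announcement)

# Anyone holding the published public key can confirm the message is unaltered
# and was signed by the holder of the matching private key.
try:
    agency_public_key.verify(signature, announcement)
    print("Message verified as authentic.")
except InvalidSignature:
    print("Warning: message could not be verified.")
```

In practice, a standard of this kind would also have to settle how public keys are distributed, how signatures or watermarks travel with content across platforms, and how compromised keys are revoked.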
7. The US will develop a pledge committing to the responsible use of AI
Finally, the US government will work with the Freedom Online Coalition (FOC) to develop a pledge that its development and implementation of AI will incorporate responsible practices. The FOC is a group of 38 countries whose stated aim is to advance internet freedom and protect human rights online worldwide.