AI companies will be required to report their safety tests to the US government
The National Institute of Standards and Technology will develop a uniform framework for assessing safety
The Biden administration is instituting a new requirement for major artificial intelligence developers to disclose safety test results to the US government.
Here are a few key points:
The White House AI Council will review progress on this executive order, emphasizing the need for AI systems to meet safety standards before release.
Companies have committed to running certain categories of safety tests, but there is not yet a common standard for how those tests are conducted.
Nine federal agencies have conducted risk assessments regarding AI's impact on critical national infrastructure.