US Regulates the AI Industry

AI is going to have a huge impact on all of our lives in the near to medium term. What we have seen so far from current AI implementations is already impressive: in medical applications, for example, AI systems have proven more accurate than trained laboratory staff at diagnosing some ailments. However, this is only the start, and much more is coming that will dramatically change the way we live and work. Most of these changes will be good, but there is also huge potential for harm if AI is not regulated properly. Last week, President Biden's White House took action with an executive order intended to protect US society from the misuse of AI. Although the executive order only applies in the US, the US government intends to work with other countries and organizations around the world to build a global framework for governing AI.

The executive order will oblige developers of the most powerful AI systems to share safety test results and other critical information with the US government. Any company developing a foundation model that poses a risk to US national security, national economic security, or national public health and safety must notify the federal government when training the model and share the results of safety tests, so that AI systems are demonstrably safe, secure, and trustworthy before they are made public.

The government departments that will make and enforce the new rules include the National Institute of Standards and Technology, which will be tasked with setting rigorous standards for extensive safety testing. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address threats that AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Strong new standards for biological synthesis screening will be developed to prevent the use of AI to engineer dangerous biological materials, and the agencies that fund life-science projects will impose these standards as a condition of federal funding.

The executive order also includes measures to protect consumers from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will be responsible for developing guidance on content authentication and watermarking so that AI-generated content is clearly labeled. Once those standards are established, federal agencies will use these tools to assure consumers that government communications are authentic, setting an example for the private sector and for governments around the world.
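To make "content authentication" concrete: one common approach is for an issuer to sign content with a private key, so that anyone holding the matching public key can verify the content has not been altered in transit. The sketch below, using the Python cryptography library and Ed25519 signatures, is purely illustrative; the executive order does not specify a scheme, so the key type and message here are assumptions for the example.

```python
# Illustrative sketch of content authentication via digital signatures.
# NOT the scheme mandated by the executive order; just one common approach.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (e.g., a government agency): generate a key pair and
# sign an official message with the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Official agency announcement"  # hypothetical content
signature = private_key.sign(message)

# Consumer side: verify the signature against the published public key.
# verify() raises cryptography.exceptions.InvalidSignature if the content
# or the signature has been tampered with.
public_key.verify(signature, message)
print("Signature valid: content is authentic and unmodified.")
```

Watermarking AI-generated content works differently (the label is embedded in the content itself rather than attached as a signature), but the goal is the same: giving consumers a reliable way to tell official or human-made content from machine-generated material.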

An advanced cybersecurity program will also be established to develop AI tools that find and fix vulnerabilities in critical software.

The executive order arguably comes at the right time, before AI is too well established to steer. Hopefully the rest of the world will take note and pass the legislation necessary to ensure that AI is a force for good.