U.S. releases first regulations on artificial intelligence
On October 30, the White House issued what it described as “a landmark” Executive Order regulating artificial intelligence technologies, the first major federal move of its kind, in an effort “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI),” according to an official statement.
The Executive Order establishes new standards for AI safety and security, and contains provisions aimed at protecting Americans’ privacy, advancing equity and civil rights, standing up for consumers and workers, promoting innovation and competition, and more.
A connection to the September innovation-themed issue of EA stands out: the announcement laid out a strategy of what it described as “responsible innovation,” pointing to voluntary commitments it has secured from 15 companies to drive safe, secure, and trustworthy development of AI.
The document is primarily directed at addressing potential risks and threats. It is lengthy and contains many stipulations, so we at EA have done our best to break it down.
The first section contains new standards, mostly requiring that developers of “the most powerful AI systems” share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, companies must notify the federal government when training a new AI model and must share the results of all red-team safety tests. Red-team campaigns are defined as “threat-led penetration tests where the detection and response capabilities of an organization are tested…focusing on target objectives”; previously conducted in secret, these tests must now be disclosed. The National Institute of Standards and Technology will set standards for “extensive” red-team testing to ensure safety before public release.
Depending on your viewpoint, these moves could amount to federal overreach or to protection against dangerous technology. For example, the Department of Homeland Security will apply some of these new standards to critical infrastructure sectors and establish “the AI Safety and Security Board.” The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as “chemical, biological, radiological, nuclear, and cybersecurity risks.” Here, the language hints at a future overlap with environmental policy.
Further language in this section calls for new standards for “biological synthesis screening” and for protections against AI-enabled financial fraud and software threats.
The second section of the document primarily focuses on the protection of Americans’ data privacy, a goal that will require further action by Congress. A third section is devoted to “advancing equity and civil rights” and builds on the “Blueprint for an AI Bill of Rights” and an earlier Executive Order directing agencies to “combat algorithmic discrimination”. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment of people based on characteristics such as race, sex, or disability. This section’s stipulations focus on areas like housing, hiring processes, and the criminal justice system.
A fourth section zeroes in on AI in consumer markets, healthcare, and education. It promises to “advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs”; “shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools”; and “produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.”
Further sections of the document discuss the United States’ role in the international development of AI technology. America already leads in AI innovation: more AI startups raised first-time capital in the United States last year than in the next seven countries combined. These provisions likely carry corporate implications, but few at the ground level of a motor shop or manufacturer. EA will continue to monitor this Executive Order and its provisions as they develop over the next year.