
White House Releases Wide-Ranging Executive Order on Artificial Intelligence

November 3, 2023


On Monday, Oct. 30, President Biden signed an Executive Order (EO) on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (AI). The EO is intended to establish safety standards for leading AI companies to adhere to in the absence of legislation on the issue. The order reflects the growing sense in Washington that the United States must shape how AI evolves in order to maximize its potential, mitigate negative effects on vulnerable populations and limit the influence of foreign adversaries. The order also aims to address the safety, privacy and job security concerns raised by AI’s rapid development. The lengthy EO has directives for nearly all 15 executive departments, and it builds on voluntary commitments leading technology companies made to the White House over the summer. The EO also builds on the White House’s October 2022 Blueprint for an AI Bill of Rights.

The EO outlines eight guiding principles and priorities that will shape the administration’s path forward on AI:

  • Artificial Intelligence needs to be safe and secure. To achieve this goal, the EO recommends robust, reliable, repeatable and standardized evaluations of AI systems, and the development of policies and institutions to test, understand and mitigate the risks from AI systems.
  • The U.S. government should promote responsible innovation, competition and collaboration to unlock AI’s potential to solve the most challenging issues.
  • Responsible development and use of AI requires supporting American workers through job training and education programs.
  • AI policies must be consistent with policies to advance equity and civil rights.
  • The interests of Americans who increasingly use, interact with or purchase AI and AI-enabled products in their daily lives must be protected.
  • Americans’ privacy and civil liberties must be protected, and the collection, use and retention of data must be lawful, secure and confidential.
  • The federal government’s own use of AI must be closely monitored to manage the risks the technology poses, and agencies must increase their internal capacity to regulate, govern and support responsible use of AI to deliver results for all Americans.
  • The federal government should become a global thought leader on the societal, economic and technological progress in this new era of disruptive innovation and change.

The EO is the federal government’s first coordinated attempt to begin regulating AI, and the approach will impact a broad range of industries and businesses. The order relies on and expands the broad powers of the Defense Production Act to place reporting, safety and other regulatory requirements on AI companies. Additionally, several agencies are tasked with developing safety and security standards for AI, and the EO calls for the development of a national security strategy to address AI. These standards and strategies, along with future anticipated actions, will shape the AI landscape in the coming years.

Brownstein’s full section-by-section analysis of the EO can be found here, but at a high-level, the EO impacts key industries in the following ways:


The Department of Health and Human Services (HHS) is directed to create an HHS AI Taskforce and develop a strategic plan within one year for the responsible deployment and use of AI and AI-enabled technologies, including generative AI. HHS must determine whether AI-enabled technologies in the health care system maintain appropriate levels of quality across several areas, including drug and device safety, research and discovery, health care delivery and financing, and public health.

Within one year, the Secretary of HHS must develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes. The strategy at a minimum must define the objectives, goals and high-level principles required for appropriate regulation throughout each phase of drug development, identify areas where future rulemaking or additional statutory authority may be necessary to implement such a regulatory system, and identify the potential for new public/private partnerships necessary for such a regulatory system. The EO also launches an AI safety program that, in partnership with voluntary federally listed Patient Safety Organizations, creates a framework to identify and capture clinical errors resulting from AI in health care settings.

To advance health equity, the EO directs HHS to provide clear guidance to federal benefits programs to keep AI algorithms from being used to exacerbate discrimination, and to curb the irresponsible uses of AI that can lead to and deepen discrimination, bias and other abuses in health care systems. It also directs HHS to consider ways to advance compliance with federal nondiscrimination laws by providers that receive federal funding and create a safety program for detecting errors or tracking incidents that harm individuals.

The EO also broadly instructs HHS to identify and prioritize grantmaking and awards to advance responsible AI innovation for health care technology developers that promote the welfare of patients and workers, such as accelerating grants awarded through the NIH Artificial Intelligence Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD).

On the biosecurity front, the EO directs the Department of Commerce to impose reporting obligations on companies developing any foundation model that poses a serious risk to national public health and safety. To protect against the risk of using AI to engineer dangerous biological materials, the EO directs the Department of Homeland Security to evaluate the potential for AI to be misused and to develop strong new standards for effective nucleic acid synthesis procurement screening, and it instructs the Department of Defense (DOD) to enter into a contract with the National Academies of Sciences, Engineering and Medicine to conduct a study on the concerns and opportunities of AI and synthetic biology.


The EO requires the Treasury Department to assess ways AI may make critical infrastructure more vulnerable to critical failures, physical attacks and cyberattacks, as well as options for mitigating these vulnerabilities. This provision also encourages independent agencies to contribute to this effort. Separately, the Treasury Department is also directed to issue a public report outlining best practices for how financial institutions can manage AI-specific cybersecurity risks. Independent regulatory agencies are encouraged to use their authorities, including considering new rulemakings and clarifying existing regulations and guidance, to respond to the risks stemming from the rise of AI. Such risks include but are not limited to fraud, discrimination, privacy violations and threats to financial stability.

On the housing front, the EO directs the Department of Housing and Urban Development (HUD) and encourages the Federal Housing Finance Agency (FHFA) and Consumer Financial Protection Bureau (CFPB) to take steps to shield consumers from AI-driven discrimination and biases that may impact the availability of housing opportunities for protected groups. Specifically, the EO signals that FHFA and CFPB should require regulated entities to evaluate underwriting models, automated collateral-valuation, and appraisal processes to look for bias and find ways to reduce it. It also directs HUD and encourages CFPB to issue new guidance addressing the potential for AI biases in tenant screening and online advertising of housing opportunities.


Several provisions are included that would use AI to strengthen grid resiliency, combat climate change and support the Biden administration’s goal of transitioning to a clean energy economy. The order directs the heads of the Department of Energy (DOE), Council on Environmental Quality (CEQ), Office of Science and Technology Policy (OSTP), Federal Energy Regulatory Commission (FERC) and the Assistant to the President and National Climate Advisor to collaborate on ways AI could be used to mitigate climate risks, enable the deployment of clean-energy technologies and streamline permitting and environmental reviews. To promote innovation, the EO directs agencies to expand partnerships with private sector organizations, academia and others to support new applications of AI in science and energy. It also creates a new office that will oversee the development of AI across DOE programs and National Labs and directs the Secretary of Energy to evaluate how DOE can use AI-generated outputs representing critical infrastructure or energy security threats to develop guardrails and solutions that reduce those risks. Agencies are additionally tasked with issuing a report that outlines how AI could be used to improve project permitting, planning, investments and electric grid operations.


The EO’s primary focus is on federal agency use of AI technologies; however, there are some implications for AI developers and companies. The EO falls short of placing any hardline requirements on AI companies, instead focusing on reporting mechanisms and the development of voluntary standards and best practices. While the tech industry can expect the EO to lead to the emergence of new guidelines around the development and deployment of AI tools, the EO lacks any enforcement directives for adherence to these forthcoming standards. Among these voluntary standards is a call on the National Institute of Standards and Technology (NIST) to develop guidelines for AI developers to conduct “red-teaming” exercises, which use “adversarial methods to identify flaws and vulnerabilities,” before releasing a new AI model. The EO also calls for the development of guidance around watermarking and detection methods for AI-generated content. This guidance will be informed by a report that looks at both existing and future capabilities for content authentication, watermarking, detection and auditing. The guidance would remain voluntary but indicates more broadly that technologies exist to detect and label AI-generated content.

While the portion of the EO focusing on AI developers consists primarily of voluntary standards, there is one enforceable element that establishes notification requirements for newly developed “dual-use foundation models” that meet certain computing power levels. Using the powers granted by the Defense Production Act, the EO compels AI companies in the process of developing “dual-use foundation models” to report on the training, testing and security of those models, including any results from “red-teaming” exercises. The notification requirements will further include the acquisition, development or possession of a “potential large-scale computing cluster,” with such reporting requirements including information on total computing power and location of the cluster. The final reporting requirement outlined in the EO targets “the use of U.S. Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors,” with a particular emphasis on foreign resellers of these products. It is unlikely any of these reporting requirements will apply to previously deployed AI models, but most major AI models on the market today would have met the computing power thresholds that compel notification. AI companies looking to receive a patent for their technology can also expect to be impacted by the EO’s directive to the U.S. Patent and Trademark Office (USPTO) to issue guidance for patent applicants on AI. Tech industry stakeholders will have the opportunity to provide input to federal agencies on a wide range of issues emerging from the order. Importantly, the EO outlines several new and expanding opportunities to support and advance small businesses involved in the development of AI tools through research grants and funding.


Though not a mandate, the EO encourages the Federal Communications Commission (FCC) to examine how AI can support efficient spectrum sharing, network security and the mitigation of robocalls and robotexts. Importantly, the EO recognizes the role of AI in strengthening network security through AI-enabled communications technologies, such as “self-healing networks, 6G and Open RAN.”


Recognizing the need to help ensure the responsible development and deployment of AI in the education sector, the EO includes directives to develop resources that address safe, responsible and nondiscriminatory uses of AI, including the impact AI systems have on vulnerable and underserved communities. The EO also calls for the development of an “AI toolkit” for education leaders who are responsible for implementing the Department of Education’s recommendations on AI. More broadly, the EO focuses on the need for investments in AI-related education, training, development and research to create a diverse workforce with the requisite technical expertise. At the same time, the EO recognizes that a portion of the workforce will be displaced by AI, and that education and training opportunities that provide individuals with pathways to AI-related occupations will be necessary.


Reflecting growing interest in how AI could impact job quality, market competition, worker health and safety, and worker surveillance, the EO includes several directives focused on this space. This includes the completion of a report by the Council of Economic Advisers (CEA) on AI’s labor-market impacts. The Department of Labor (DOL) is also directed to craft a broader report that will encompass AI-related workforce disruptions and recommend potential legislative and regulatory solutions to address identified concerns. In coordination with other agencies and stakeholders, DOL will also craft best practices for how employers can mitigate the risks of AI and maximize its benefits. Separate DOL-prepared guidance will clarify that worker protection laws still apply when AI is used for employee surveillance or to augment an employee’s work. The EO also places an emphasis on the civil rights and equity implications of AI and algorithmic discrimination, and it directs the Department of Justice and other agencies to prepare several related reports and pieces of guidance.


The EO issues several directives to the Department of Transportation (DOT), including a requirement that the Nontraditional and Emerging Transportation Technology (NETT) Council assess the need for guidance regarding the use of AI in transportation. Further, the NETT Council is tasked with supporting existing and future AI-related transportation pilot projects and determining if the outcomes of the pilots should warrant additional regulatory action by DOT or other federal and state agencies. The NETT Council is also directed to establish a DOT Cross-Modal Executive Working Group to solicit feedback from relevant stakeholders. Certain DOT Federal Advisory Committees, including the Advanced Aviation Advisory Committee, the Transforming Transportation Advisory Committee, and the Intelligent Transportation Systems Program Advisory Committee, are directed to provide advice on responsible usage of AI in transportation. Lastly, DOT is required to direct the Advanced Research Projects Agency-Infrastructure (ARPA-I) to explore the transportation-related opportunities and challenges of AI, including through public consultation, and prioritize grants to those opportunities.


The Department of Defense (DOD) understands that the use of AI to augment decision-making, analytical and cyber capabilities at machine speed is key to the future of national security. DOD and the Intelligence Community have been first movers in advancing the study and use of AI technology to protect critical warfighting systems, analyze disparate data, identify vulnerabilities and deliver effects on the battlefield. In 2018, DOD created the Joint Artificial Intelligence Center to explore combat and communication capabilities for AI. The organization has since been integrated into the Chief Digital and Artificial Intelligence Office, which continues the mission of studying opportunities for AI to improve decision-making and analytics across the defense sector. While AI will enable new methods and scales of conflict, including cyberattacks, swarm tactics and misinformation operations, DOD is working domestically and with international partners to harness AI to advance national security and promote international stability. DOD understands that U.S. technological superiority over its strategic competitors is imperative for future conflicts and for maintaining the liberal world order. The 2022 National Security Strategy and 2023 National Defense Science and Technology Strategy acknowledged the transformative impacts of AI on the battlefield and reference investments toward the timely deployment of trusted AI.

The Biden administration has said that this EO takes steps to mitigate the risks of AI systems and ensure further developments adhere to national security objectives. The Department of Defense has led the U.S. government in navigating the use of AI through efforts such as the release of last year’s Responsible Artificial Intelligence (RAI) Strategy and Implementation Pathway and the establishment of the Generative AI Task Force, with which this EO is intended to align. DOD has a few specific tasks under the EO that are likely to impact members of the defense industrial base developing AI-related tools. These measures include requiring companies building dual-use foundation models to report physical and cybersecurity measures to the government, and implementing a pilot project to develop and deploy AI capabilities to improve cybersecurity in critical government software systems and networks. These reports are likely to be met with some resistance from industry partners that bristle at new requirements to share sensitive information about their products. The EO also directs the development of a National Security Memorandum, coordinated by the President’s National Security Advisor and Deputy Chief of Staff for Policy, to address the use of AI for national security, military and intelligence community purposes, as well as the risks that AI systems pose to the security of the United States.


As part of its efforts to address the challenges and opportunities presented by AI, the Biden administration is dedicated to continuing collaboration with other nations to ensure the trustworthy development and use of AI across the globe. The State Department will work with the Department of Commerce to establish international frameworks to harness the benefits of AI while also managing its risks. The administration will also work with international partners and international standards organizations to ensure the technology is safe, secure and interoperable during the development and implementation of vital AI standards. It also aims to ensure that the deployment of AI abroad can be used to advance sustainable development, mitigate dangers to critical infrastructure and solve other global challenges. Over the past several months, the Biden administration has also collaborated with world leaders to ensure that the EO supports and complements existing frameworks established through the UK Summit on AI Safety, ongoing discussions at the United Nations, Japan’s leadership of the G-7 Hiroshima Process and India’s chairmanship of the Global Partnership on AI.


The EO comes as agencies grapple with the growing implications of AI and as Congress examines the outlines of what AI safety legislation might look like. One of the champions of this effort is Senate Majority Leader Chuck Schumer (D-NY), who launched his SAFE Innovation Framework for AI in June 2023 before kicking off a series of senator-level forums and briefings on the topic. In late October, Leader Schumer held a second AI forum with dozens of tech executives and other interested parties where participants discussed the creation of a new regulatory agency to oversee AI’s development.

Additionally, several committees across both chambers recently hosted AI-centered oversight hearings to understand the promise and peril of the new technology. Meanwhile, U.S. allies in Europe are moving quickly to make their own imprint on AI policy. On Nov. 1, Vice President Kamala Harris attended the UK’s two-day AI Safety Summit meeting, which featured world leaders, including China’s tech vice minister, alongside tech company executives, researchers and nonprofits. The European Union is also close to approving its own AI legislation that would allow the EU to shut down services that are deemed harmful to society.

Implementing the EO will be a lengthy process. The deadlines assigned to directives in the EO cover a significant span of time; some projects are due by the end of November, while others will not be completed until late 2024 or early 2025. These deadlines are nonbinding, so agencies may end up ahead of or behind schedule. Shortly after the EO’s release, the White House Office of Management and Budget executed one of its directives by releasing a draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

Our Government Relations practice group is uniquely equipped to assist stakeholders looking to interface with federal agencies implicated by this announcement and with relevant policymakers in Congress and the administration, as well as to prepare comments in response to forthcoming proposed rules previewed in the EO.