The Office of Management and Budget (OMB) is seeking public input on a draft policy for the use of AI by the U.S. government. This draft policy would empower Federal agencies to leverage AI to improve government services and more equitably serve the American people. The document focuses on three main pillars: strengthening AI governance, advancing responsible AI innovation, and managing risks from the use of AI.
Deadline: December 5, 2023
This week, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the United States takes action to realize the tremendous promise of AI while managing its risks, the federal government will lead by example and provide a model for the responsible use of the technology. As part of this commitment, today, ahead of the UK AI Safety Summit, Vice President Harris will announce that the Office of Management and Budget (OMB) is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.
Strengthening AI Governance
To improve coordination, oversight, and leadership for AI, the draft guidance would direct federal departments and agencies to:
Advancing Responsible AI Innovation
To expand and improve the responsible application of AI to the agency’s mission, the draft guidance would direct federal agencies to:
Managing Risks from the Use of AI
To ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the draft guidance would:
We write to you, on behalf of the Center for AI and Digital Policy (CAIDP), regarding the need for the OMB to establish regulations for the use of AI by the federal agencies of the United States. These regulations are required by Executive Order 13960 and the AI in Government Act of 2020. Further delay by the OMB places at risk fundamental rights, public safety, and commitments that the United States has made to establish trustworthy AI.
1) Countries must establish national policies for AI that implement democratic values
2) Countries must ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems
3) Countries must guarantee fairness, accountability, and transparency in all AI systems
4) Countries must commit to these principles in the development, procurement, and implementation of AI systems for public services
5) Countries must halt the use of facial recognition for mass surveillance
The OMB Should Begin the AI Rulemaking Now
The OMB Should Follow the President’s Lead and Establish Safeguards for Trustworthy AI
We write to you, on behalf of the Center for AI and Digital Policy (CAIDP), regarding the need for the OMB to establish regulations for the use of AI by federal agencies. We wrote to OMB on October 19, 2021, explaining the urgency of the matter. Recent developments underscore the need for the OMB to begin the rulemaking process.
The OMB should issue the government-wide memorandum and begin the formal rulemaking for the regulation of AI, as required by E.O. 13960 and the AI in Government Act.
The Center for AI and Digital Policy (CAIDP) supports the request for comment on the Proposed Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, announced by the Office of Management and Budget on November 1, 2023. We intend to submit comments. In advance of the deadline, we offer several recommendations to commentators.
Good luck!
[This is an excerpt from the US country report, prepared by CAIDP]
The U.S. lacks a unified national policy on AI, but President Biden and his top advisors have expressed support for AI aligned with democratic values. The United States has endorsed the OECD/G20 AI Principles. The White House has issued two Executive Orders on AI that reflect democratic values, and a federal directive encourages agencies to adopt safeguards for AI. The most recent Executive Order also establishes a process for public participation in the development of federal regulations on AI, though the rulemaking has yet to occur. The overall U.S. policy-making process remains opaque, and the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector. But the administration has launched new initiatives and encouraged the OSTP, NIST, and other agencies to gather public input. The recent release of the Blueprint for an AI Bill of Rights by the OSTP represents a significant step forward in the adoption of a National AI Policy and in the U.S.’s commitment to implement the OECD AI Principles. There is growing opposition to the use of facial recognition, and both Facebook and the IRS have cancelled facial recognition systems following widespread protests. But concerns remain about the use of facial surveillance technology, supplied by U.S. companies such as Clearview AI, across federal agencies. The absence of a legal framework to implement AI safeguards, and of a federal agency to safeguard privacy, also raises concerns about the ability of the U.S. to monitor AI practices.
[More information about the AI and Democratic Values Index]