[From the NTIA - April 11, 2023]
WASHINGTON – Today, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) launched a request for comment (RFC) to advance its efforts to ensure artificial intelligence (AI) systems work as claimed – and without causing harm. The insights gathered through this RFC will inform the Biden Administration’s ongoing work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.
While people are already realizing the benefits of AI, there are a growing number of incidents where AI and algorithmic systems have led to harmful outcomes. There is also growing concern about potential risks to individuals and society that may not yet have manifested, but which could result from increasingly powerful systems. Companies have a responsibility to make sure their AI products are safe before making them available. Businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator. “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”
NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on the policies that can support the development of AI audits, assessments, certifications, and other mechanisms that create earned trust that AI systems work as claimed. Much as financial audits create trust in the accuracy of a business’s financial statements, such mechanisms can help provide assurance that an AI system is trustworthy, in that it does what it is intended to do without adverse consequences.
Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose.
President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems.
Comments will be due 60 days from publication of the RFC in the Federal Register.
The National Telecommunications and Information Administration (NTIA), part of the U.S. Department of Commerce, is the Executive Branch agency that advises the President on telecommunications and information policy issues. NTIA’s programs and policymaking focus largely on expanding broadband Internet access and adoption in America, expanding the use of spectrum by all users, advancing public safety communications, and ensuring that the Internet remains an engine for innovation and economic growth.
The RFC seeks feedback on policies that can support AI audits, assessments, certifications, and other mechanisms that create earned trust in AI systems. Just as financial accountability required policy and governance to develop, so too will AI system accountability.
NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:
What kind of data access is necessary to conduct audits and assessments
How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability
What different approaches might be needed in different industry sectors—like employment or health care
How to file a written comment
Written comments in response to the RFC must be provided to NTIA by June 12, 2023, 60 days from the date of publication in the Federal Register. Comments submitted in response to the RFC will be made publicly available via https://www.regulations.gov/.
Comments should respond to questions posed in the RFC, and commenters are encouraged to correlate the content of their comments to the pillars and questions set forth in the RFC. Commenters need not respond to every question. Comments should be typed, double-spaced, and signed and dated by the filing party or a legal representative of that party.
Summary of CAIDP Comments
CAIDP Comments to NTIA on AI and Accountability (June 12, 2023)
The Center for AI and Digital Policy (CAIDP) supports the Request for Comment concerning AI Accountability announced by the National Telecommunications and Information Administration (NTIA) on April 11, 2023. We intend to submit comments. In advance of the deadline, we offer several recommendations to commenters.
Good luck!
CAIDP has several recommendations for AI accountability:
An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, and its risks. Institutions must be responsible for decisions made by an AI system.
Accountable. Notices should clearly identify the entity responsible for designing each component of the system and the entity using it.
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
We would appreciate your support for these recommendations!
[This is an excerpt from the US country report, prepared by CAIDP]
The U.S. lacks a unified national policy on AI, but President Biden and his top advisors have expressed support for AI aligned with democratic values. The United States has endorsed the OECD/G20 AI Principles. The White House has issued two Executive Orders on AI that reflect democratic values, and a federal directive encourages agencies to adopt safeguards for AI. The most recent Executive Order also establishes a process for public participation in the development of federal regulations on AI, though the rulemaking has yet to occur. The overall U.S. policy-making process remains opaque, and the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector. But the administration has launched new initiatives and encouraged the OSTP, NIST, and other agencies to gather public input. The recent release of the Blueprint for an AI Bill of Rights by the OSTP represents a significant step forward in the adoption of a National AI Policy and in the U.S. commitment to implement the OECD AI Principles. There is growing opposition to the use of facial recognition, and both Facebook and the IRS have cancelled facial recognition systems following widespread protests. But concerns remain about the use of facial surveillance technology across federal agencies by such U.S. companies as Clearview AI. The absence of a legal framework to implement AI safeguards and of a federal agency to safeguard privacy also raises concerns about the ability of the U.S. to monitor AI practices.
[More information about the AI and Democratic Values Index]