California Gov. Gavin Newsom signed an executive order (Order) on Sept. 6 aimed at preparing the state for the use and regulation of generative artificial intelligence (AI). Calling California the world leader in AI innovation, the Order seeks to foster that innovation while protecting against potential harms and responsibly deploying AI tools in state government.

The Order sets a path and timeline toward “measured guardrails” for the use of AI by requiring various state agencies and departments to perform the following tasks:

Reports on AI Uses and Risks

Within the next two months, various state agencies must submit a report identifying the most beneficial potential uses of AI by the state. This report should also explain potential risks, including “risks stemming from bad actors and insufficiently guarded governmental systems, unintended or emergent effects, and potential risks toward democratic and legal processes, public health and safety, and the economy.” The report must focus on high-risk uses, such as where AI is used to make decisions that affect access to essential goods and services.

State agencies must also perform a joint risk analysis of potential threats from AI to California’s critical energy infrastructure, including threats that could lead to mass casualties and environmental emergencies, and must develop a strategy to assess similar threats to other critical infrastructure. Following this report, which is due in March 2024, the agencies are to submit a classified briefing on how to address these threats.

State Procurement of AI Tools

Also, within the next two months, all agencies and departments are required to submit an inventory of their current high-risk uses of AI (including those that affect essential goods and services) and must appoint a senior manager responsible for updating that inventory.

By January 2024, certain agencies must issue guidelines for state government procurement of AI tools. At a minimum, these guidelines should address the topics of safety, algorithmic discrimination, data privacy and notice to consumers, as those topics are described in the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. The state must update its existing procurement, approval and contract requirements based on these guidelines by July 2025.

State Use of AI Tools

By March 2024, the California Department of Technology must make programs available to all state agencies and departments to conduct pilots, or “sandboxes,” to test AI tools. In addition to testing the safety and efficacy of AI tools, these pilot programs should measure how AI can improve Californians’ experience with government services and how AI can support state employees in performing their duties. All state agencies are required to consider such pilot programs as well as “opportunities where AI can improve the efficiency, effectiveness, accessibility, and equity of government operations.”

In addition to the pilot programs, various agencies must, by July 2024, develop guidelines for all state agencies and departments to analyze the impact that AI tools may have on vulnerable communities and the state government workforce, including effects on social equity in high-risk use cases.

Also by July 2024, various agencies must provide guidelines for how to help government employees use AI tools effectively and must develop trainings for using state-approved tools. These trainings should be designed to achieve equitable outcomes; mitigate potential output inaccuracies, hallucinations and biases; and enforce privacy and other applicable state laws.

Legislative and Private Sector Support

State agencies and departments are required to engage with the legislature and “relevant stakeholders” while performing the tasks listed above. The Order specifically identifies historically vulnerable communities and organizations that represent state government employees as relevant stakeholders.

The Order also requires certain agencies to partner with the AI departments at Stanford University and U.C. Berkeley beginning in the fall of 2023. It directs this partnership to host a joint California-specific summit in 2024 to “engage in meaningful discussions and thought partnership about the impacts of AI on California and its workforce.”

Regulatory Landscape

The Order comes just three weeks after the California Legislature adopted a resolution, drafted by AI, expressing the state’s commitment to the regulation and exploration of AI. The White House is expected to issue an executive order on AI soon, following months of discussions with AI business leaders.

On Sept. 7, the Democratic and Republican leaders of the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law jointly announced a sweeping framework to regulate AI that would include licensing and auditing requirements, an independent federal oversight office, liability for privacy and civil rights violations, and requirements for data transparency and safety standards. Details of the Senate framework will be revealed on Sept. 12.