Keeping abreast of the latest developments in artificial intelligence (AI) is critical in this rapidly changing area of law. This newsletter highlights recent updates in AI regulation, covering major developments in the United States, China, Europe and elsewhere.

United States

White House Executive Order — President Joe Biden signed an executive order on Oct. 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the Order). The Order outlines a comprehensive approach to managing the development and deployment of AI in a manner that is safe, secure, ethical and beneficial to society. It establishes safety and security standards, seeks to protect privacy and civil rights, and requires federal agencies to complete a number of tasks that will govern the public release of AI tools and the federal government’s procurement of those tools.

Safety and Security

The Order requires any company developing an AI tool that poses a serious risk to national security, economic security or public health and safety to notify the federal government and share the results of safety tests before releasing its product. The Order establishes eight guiding principles to govern AI development, focusing on safety, security, ethical use and compliance with federal laws. The National Institute of Standards and Technology must set standards to ensure that AI tools are safe before public release, and the Department of Homeland Security will apply those standards in critical infrastructure sectors.

Civil Rights and Equality

The Order requires the Department of Justice and federal civil rights offices to coordinate on best practices for investigating and prosecuting civil rights violations related to AI and for using AI in a wide range of policing and criminal justice contexts. The Order promises to provide clear guidance to landlords, federal benefits programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

Consumer Protection

The Order requires the Department of Health and Human Services to establish a program to monitor and address unsafe health care practices stemming from AI. The Department of Labor must develop best practices to mitigate the harms AI may cause workers and maximize its benefits, and must write guidelines for federal contractors to prevent discrimination in AI-driven hiring systems. The Order requires certain agencies to address bias caused by using AI in the sale of financial products and encourages the Federal Communications Commission to consider using AI to block unwanted robocalls and texts.

Transparency

The Order directs the Department of Commerce to develop guidance for authenticating and labeling AI-generated content to avert fraud and deception.

Innovation and Competition

The Order encourages responsible innovation and competition in AI, with a focus on investments in AI education, training and research, while addressing intellectual property challenges. The Order seeks to spur a “government-wide AI talent surge,” encourages AI deployment that respects workers’ rights and benefits workers, and streamlines visa requirements for foreign workers with AI expertise.

Congressional and Federal Agency Action on AI — Congressional committees in both the House and the Senate have held nearly three dozen hearings on AI-related topics since January 2023. Multiple lawmakers have introduced separate frameworks for comprehensive AI legislation, and several AI bills have been voted out of committee. Although it remains to be seen what legislation will ultimately pass, AI regulation is clearly a priority for the U.S. Congress.

The Securities and Exchange Commission also proposed rules to address the use of AI by broker-dealers and investment advisers. The rules identify five specific risks — bias, conflicts of interest, financial fraud, privacy and intellectual property concerns — and would require broker-dealers and advisers to eliminate the effect of certain conflicts of interest associated with their use of AI and other technologies that optimize, predict, guide, forecast or direct investment-related behaviors or outcomes.

Securities and Exchange Commission Chair Gary Gensler recently warned businesses against misleading investors regarding their AI use or capabilities. Gensler reinforced that public disclosures regarding AI are governed by the same securities laws that govern all public disclosures. Gensler referred to misleading AI disclosures as “AI-washing,” a reference to the practice of greenwashing, in which a business misrepresents how environmentally friendly its operations are. Gensler noted gaps between the AI capabilities businesses promise and what they can deliver, and between investors’ knowledge of AI and the latest AI technologies, stating that businesses must “fairly and accurately describe the material risks” of any AI offering or use.

State and Local Action on AI — California’s governor signed an executive order similar to President Biden’s that encourages AI innovation while setting standards for responsible development of AI tools and their adoption by state agencies. At least 12 state legislatures considered bills in 2023 to govern AI development and use, although none passed.

Illinois remains the only state to have passed a law, in 2019, specifically governing the use of AI in job recruiting. The Illinois Artificial Intelligence Video Interview Act requires employers who use AI to analyze video interviews (which may include measuring an applicant’s facial expression, word choice, body language and vocal tone, among other metrics) to provide notice to applicants, explain how AI is used, obtain consent before using AI on the video and destroy copies of the video within 30 days of an applicant’s request. A New York City law that took effect in 2023 bans the use of AI for certain employment purposes unless notice, bias audit and reporting requirements are met.

Most comprehensive state privacy laws restrict or prohibit the use of AI to make decisions that significantly impact consumers, including decisions in financial and lending services, housing, insurance, education, criminal justice, employment, health care services or access to basic necessities. In November 2023, the California Privacy Protection Agency, which is charged with enforcing California’s state privacy law, released proposed regulations on the use of AI in these sectors. The proposed regulations would govern notice to consumers of AI use, when and how consumers can opt out of such use, and how consumers can access information about a business’s use of AI. The proposed regulations also address how consumer data may be used to train AI tools and whether such tools may be used on children under the age of 16.

Europe

Compromise Reached on the EU AI Act — On Dec. 8, 2023, negotiators for the European Parliament and the Council of the EU reached a provisional compromise on legislation to govern AI in the European Union. The European Commission first proposed the EU AI Act (the Act) in April 2021, focusing on data quality, transparency, human oversight, accountability and ethical questions surrounding AI. The original draft of the Act took a risk-based approach, imposing greater requirements on AI systems or applications whose uses present higher risks to the health, safety and fundamental rights of individuals.

In November 2023, negotiations stalled over how to regulate foundational AI models, which are large deep-learning networks trained on a broad range of unlabeled data and capable of performing a wide variety of general tasks. AI systems and applications often rely on foundational models as a starting point to build more specialized platforms tailored to specific uses. EU member states with prominent foundational model companies, including France, Italy and Germany, proposed that the Act not regulate the underlying foundational models but instead focus on AI systems and applications that rely on those models, leaving the models to self-regulate through company pledges and codes of conduct. After three days of negotiations in early December, EU legislators settled on a provisional agreement under which all AI systems, including foundational models, will be regulated based on risk, with greater requirements for “high impact” foundational models. Some EU member states, such as France, have criticized the compromise and requested adjustments.

Applying the EU AI Act — European legislators are expected to finalize the Act in 2024, meaning it likely will not take effect until 2026.

Who must comply?

The Act has global reach: it will apply to all providers, manufacturers, importers, distributors and deployers of AI systems that offer those systems in the EU or whose output is intended to be used in the EU. The Act applies across industries and reaches companies not based in the EU. It defines AI systems broadly to include any machine-based system acting with a certain level of autonomy that generates outputs, such as predictions or decisions, in response to inputs. The Act does not apply to areas outside the scope of EU law and does not restrict member states’ competence in national security. Nor does it apply to systems used exclusively for military or defense purposes, systems used for the sole purpose of research and innovation, or people using AI for nonprofessional reasons.

A risk-based approach

The Act creates four tiers of risk to regulate AI based on its use cases (summarized schematically after this list):

  • Unacceptable Risk — AI systems that pose an unacceptable risk to the health, safety and rights of individuals are prohibited. These include systems used for behavioral manipulation, emotion recognition in workplaces and educational institutions, social scoring, inferring sensitive data such as sexual orientation or religious beliefs, or classifying people based on behavior, status or personal characteristics. Untargeted scraping of facial images from the internet is also prohibited, as is remote biometric identification in public spaces unless performed by law enforcement for specific purposes.

  • High Risk — AI systems that pose a high risk to the health, safety and rights of individuals may enter the EU market only if they meet certain requirements. High-risk systems include those used for education scoring, employee training, access to employment, access to essential services (such as determining creditworthiness) and management of critical infrastructure. The high-risk category also includes certain products, regardless of their use, including radio equipment, medical devices, and products used in the civil aviation and automotive industries. Providers and deployers of high-risk systems must conduct mandatory impact assessments and implement data governance and quality standards, anti-discrimination and anti-bias measures, recordkeeping and incident-reporting standards, human oversight programs, and cybersecurity measures. AI systems that perform narrow functions and do not influence human decisions may be exempt from high-risk classification, but AI systems that profile individuals will always be considered high risk.
  • Limited Risk — AI systems that do not qualify as unacceptable or high risk but that still interact with individuals (such as generative language or image systems) will be subject to transparency obligations that include informing users that the content is AI-generated.
  • Minimal Risk — AI systems that do not fall within one of the three risk categories above will not be regulated by the Act.
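
For readers who want a schematic view, the sketch below restates the four-tier cascade as a simple decision function. This is an informal paraphrase for illustration only; the example use cases and the risk_tier function are our own shorthand, not the Act’s legal tests, which turn on detailed definitions rather than keywords.

    # Informal paraphrase of the four risk tiers described above. The Act's
    # actual classification turns on detailed legal definitions, not keywords.
    PROHIBITED_USES = {"behavioral manipulation", "social scoring"}
    HIGH_RISK_USES = {"education scoring", "determining creditworthiness",
                      "managing critical infrastructure"}

    def risk_tier(use_case: str, interacts_with_individuals: bool) -> str:
        if use_case in PROHIBITED_USES:
            return "unacceptable risk: banned from the EU market"
        if use_case in HIGH_RISK_USES:
            return "high risk: market entry conditioned on mandatory requirements"
        if interacts_with_individuals:
            # e.g., generative language or image systems
            return "limited risk: transparency obligations apply"
        return "minimal risk: not regulated by the Act"

    print(risk_tier("determining creditworthiness", interacts_with_individuals=True))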

Foundational models

The compromise reached in December 2023 added new requirements for foundational models, which must comply with certain transparency obligations before entering the market. Additionally, “high impact” foundational models, including those with above-average complexity and capabilities that are trained on large amounts of data, must meet heightened requirements on the premise that they can propagate systemic risks along the AI supply chain. These requirements are similar to those applied to high-risk use cases and include obligations to assess and mitigate systemic risks, report serious incidents, ensure cybersecurity, and disclose the model’s energy consumption.

Penalties for noncompliance

Noncompliance with the Act may result in regulatory fines imposed by EU member states or civil actions by individuals harmed by an AI system. Fines start at the greater of 7.5 million euros or 1.5% of global annual revenue and rise, depending on the violation, to a maximum of the greater of 35 million euros or 7% of global annual revenue.
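
As a purely arithmetic illustration, the sketch below shows how the fixed and percentage-based caps interact at a given revenue level. The fine_cap_eur function and the severe flag are hypothetical simplifications for this newsletter; the Act assigns fine tiers by violation type under detailed legal tests.

    # Hypothetical illustration of the fine caps quoted above; "severe" is a
    # simplifying stand-in for the Act's violation-type tiers.
    def fine_cap_eur(global_annual_revenue_eur: float, severe: bool) -> float:
        """Return the greater of the fixed amount or the revenue percentage."""
        if severe:
            return max(35_000_000, 0.07 * global_annual_revenue_eur)
        return max(7_500_000, 0.015 * global_annual_revenue_eur)

    # For a company with 2 billion euros in global annual revenue:
    print(fine_cap_eur(2e9, severe=True))   # 140000000.0 (7% exceeds 35 million euros)
    print(fine_cap_eur(2e9, severe=False))  # 30000000.0 (1.5% exceeds 7.5 million euros)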

China

Management of Generative AI — China’s Interim Measures for the Management of Generative Artificial Intelligence Services (the Measures) took effect in August 2023. The Measures apply to the use of generative AI technologies to provide services to the public in China. “Generative AI technology” is broadly defined as models and technologies capable of generating content such as text, images, audio or video. However, industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, and relevant professional institutions that develop and apply generative AI technology but do not provide generative AI services to the domestic public are not subject to the Measures.

Key aspects of the Measures include:

  • Development and Governance — The Measures seek to foster innovation and encourage the development of AI while ensuring coordinated efforts across industries in data resource management and risk mitigation.
  • Service Provider Responsibilities — AI service providers are required to meet online content standards and protect personal information. The Measures emphasize transparent service agreements and guiding lawful and appropriate use, especially for minors.
  • Government Oversight — The Measures detail the role of various government departments in overseeing AI services, with special scrutiny of providers with significant public influence. The Measures require government registration and security review of covered AI services before public release, as well as labeling of AI-generated content.
  • Legal and Ethical Framework — The Measures represent a balancing act between technological advancement and ethical considerations, emphasizing the importance of security, legal compliance and responsible AI usage. The Measures also contain prohibitions against discriminatory use of AI.

As of December 2023, many news outlets report that China has taken a lax approach to enforcing the Measures, preferring instead to foster domestic technology innovation.

Other Countries

Canada, Israel and Brazil are currently contemplating draft legislation specifically aimed at AI. Others, including Australia, Japan and New Zealand, have clarified how existing data, privacy and intellectual property laws apply to AI use while considering “soft law” approaches such as nonbinding policies and general guidance. By contrast, the United Arab Emirates released a 46-page “National Strategy for Artificial Intelligence” that focuses on encouraging AI development and attracting AI talent, with almost no regulation.

Should you have any questions about this article or AI issues in general, we invite you to reach out to Kramer Levin’s Artificial Intelligence Group for assistance.
