Since the release of ChatGPT in late 2022, popular use of artificial intelligence (AI) has exploded.[1] One survey reported that over 56% of employees already use AI at work, with 1 in 10 using it daily.[2] However, only 26% of respondents said their company has an AI policy, and only 46% of those who reported using AI at work said their management was aware.[3]

Legislators around the world are scrambling to regulate AI. For example, President Joe Biden signed an executive order addressing security, civil rights and government use of AI tools. Draft AI regulations have been proposed by Senate subcommittees, the Securities and Exchange Commission, and at least a dozen states. New York City and the state of Illinois have passed laws regulating the use of AI in certain employment contexts. The European Union, Israel and Brazil are finalizing their own AI regulations, while China’s AI law took effect in August.

With employee use of AI prevalent and the legal landscape in flux, companies should consider whether to adopt a policy governing their employees’ use of AI and, if so, how extensive and prohibitive that policy should be. Depending on a company’s business profile, failure to do so could lead to violations of laws and regulations.

What Your AI Policy Should Address

A threshold question for any AI policy is determining whether your company seeks to encourage or limit AI use on the job. Companies in highly regulated or high-risk industries, such as health care, financial services or those with heightened duties of confidentiality or care, may wish to consider restricting employee use of AI to certain permitted tasks. The same is true for companies that make decisions significantly affecting individuals, such as those in the insurance, housing, lending, employment, staffing or headhunting industries, or any business that affects public access to essential goods and services.

Companies in lower-risk industries, such as entertainment or fashion, may wish to encourage AI use while still educating their employees on related legal, ethical or other obligations. Whatever your company’s risk profile, we have identified below some key considerations for an AI policy.

Inaccurate, Unreliable, and Biased or Harmful Results

An AI policy should educate employees on the limits of AI accuracy and its potentially harmful impacts. Every AI tool is trained on limited datasets, the quality and sources of which are often unknown. AI often generates false responses, called hallucinations, that appear plausible but are untrue.

AI is also known to generate biased or discriminatory responses. For example, computer-aided diagnosis systems in health care have been found to return lower accuracy results for people of color.[4] Amazon discontinued a hiring algorithm after finding it favored men’s resumes.[5] Image-based AI and facial recognition tools have a well-documented history of offensive outputs.[6]

Companies should enforce a zero-tolerance policy for the use of any harmful, biased or discriminatory content and consider requiring employees to verify the accuracy, truthfulness and appropriateness of AI outputs. Companies may also consider shortlisting specific AI platforms that are approved for employee use.

Laws and Regulations

An AI policy should make employees aware of the laws and regulations that already restrict AI use. For example, New York City bans the use of AI for evaluating job applicants without certain controls in place. Companies should consider reminding employees to comply with any applicable laws and regulations, including intellectual property laws and industry-specific regulations. To ensure compliance, company personnel may wish to direct questions regarding the use of AI to the legal department.

Confidentiality of Company, Client and Customer Information

Many AI tools retain the prompts and information they are given and the resulting outputs, and then analyze and store that data to further train the AI engine. There is no expectation of confidentiality for information entered into AI platforms and no practical way to limit its dissemination. In other words, once the information is entered, it is out there for public consumption. Companies should remind employees of the severe consequences — including losses of trade secret protection, reputational damage and adverse legal action or penalties — that may arise from disclosing confidential or sensitive information. Companies should expressly prohibit employees from inputting confidential or sensitive information into an AI platform.

Intellectual Property Rights, Plagiarism, and Terms and Conditions

AI raises complex issues relating to ownership and potential infringement of intellectual property rights. To help avoid liability or legal action, an AI policy should remind employees to consider these intellectual property concerns, including reviewing the terms and conditions applicable to the relevant AI platform, before using AI in the workplace.

Data Privacy Risks

Data privacy laws in the United States and abroad require companies to disclose (i) the sources of certain types of data used in their business (including personal information), (ii) the purposes for such use and (iii) how that data is shared. When employees use AI for work, it can be difficult to comply with these disclosure requirements, as the company may not know the sources or content of the data used by the AI tool. Entering personal information into an AI tool may also violate the data privacy rights of the individuals involved. An AI policy should ensure that employees comply with data privacy laws, the company’s internal privacy policies and procedures, and its online privacy notice when using AI for work.

Prohibited Uses of AI

If a company chooses to allow AI in the workplace, certain uses should generally be prohibited. Additionally, risk-averse companies may choose to identify permitted uses of AI and prohibit its use for any other purpose.

A non-exhaustive list of uses that companies may wish to prohibit includes:

  • Employment-related decisions, including, for example, hiring, firing or promotion
  • Automated screening or profiling of employees, candidates or resumes
  • Performance evaluations or employee assessments
  • Use that results in harmful, biased, discriminatory, insensitive, fraudulent or deceptive acts or communications
  • Use that violates applicable law or exposes the company to legal liability
  • Use classified as “Unacceptable Risk” or “High Risk” under the European Union’s Artificial Intelligence Act
    • Unacceptable Risk includes, for example, social scoring; classifying people based on behavior, status or personal characteristics; behavioral manipulation; or biometric sorting
    • High Risk includes any use that will negatively affect the safety or fundamental rights of individuals, including, for example, education scoring, employee training, access to employment, or access to essential goods and services
  • Other use that violates the company’s information security program or employee handbook, including any acceptable use policy

Some companies should also consider prohibiting employees from inputting certain types of information into an AI platform, no matter the purpose, including the following:

  • Confidential or proprietary client, customer or company information
  • The names of customers or clients, or any other information that could identify a customer or client
  • Personal information that may be used to identify any individual (including, for example, names, contact information, personal identifiers, employment history or personal characteristics such as age, race, religion, health, sex or sexuality, or personal financial information)

Reporting and Prior Authorization

An AI policy should inform employees whether they are required to seek approval before using AI on the job. Companies may also consider requiring employees to report to their supervisor anytime they use AI for a new purpose or for a new client or customer. Finally, companies may wish to restrict employees from sharing AI-generated content outside the company without prior authorization.

Recordkeeping

Regardless of the company’s authorization or reporting requirements, it is prudent to require some form of recordkeeping for AI use. For example, companies may consider requiring employees to maintain a log that includes the dates or date ranges of such use, the purposes for such use, the AI platform or tool used, and a description of the results of such use, including how it was used and whether the resulting output was stored, reused, shared or destroyed. These records may help the company monitor AI use and comply with any subsequent records requests, audits or other requirements imposed by impending regulations.

Policy Review

Because AI is an emerging technology governed by a quickly developing area of law, any AI policy should be reviewed and updated regularly. In particular, companies should review their AI policy whenever there are major changes in employee use of AI or significant developments in related regulations.

AI is here to stay, and its use by employees continues to grow. Companies that move quickly to protect themselves with relatively simple measures, starting with an AI policy, can reap huge benefits and reduce risk exposure. Should you have any questions about this article or AI issues in general, we invite you to reach out to Kramer Levin’s Artificial Intelligence Group for assistance.


[1] Bloomberg Intelligence reports that the generative AI market will grow to $1.3 trillion over the next 10 years, with a compound annual growth rate of more than 42%. See https://www.bloomberg.com/company/press/generative-ai-to-become-a-1-3-trillion-market-by-2032-research-finds/

[2] https://www.conference-board.org/press/us-workers-and-generative-ai

[3] Id.

[4] https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/

[5] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/

[6] https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html; https://www.washingtonpost.com/news/the-intersect/wp/2015/05/20/google-maps-white-house-glitch-flickr-auto-tag-and-the-case-of-the-racist-algorithm.