As artificial intelligence permeates HR, regulation increases

07/30/2019

By Jennifer G. Betts, Esq., Ogletree Deakins

More and more organizations are beginning to use artificial intelligence tools in their workplaces—and in their HR functions. AI raises a host of issues that have prompted an array of federal and state regulations.

What is AI?

Essentially, artificial intelligence uses software to enable machines to “think” and “learn” much as people do in order to solve problems.

There are several kinds of AI. With machine learning, computers use data to make predictions. Natural-language processing enables computers to understand spoken words and respond appropriately. Computer vision, or image recognition, allows computers to process, identify and categorize images based on their content.

AI is becoming increasingly prevalent in talent acquisition and recruiting.

Many organizations have adopted machine-learning software to facilitate screening of job candidates. Additionally, AI tools power many of the increasingly common employee self-service tools that enable quicker, more efficient answers to common employee relations questions.

AI improves efficiency, lowers costs of products and services, improves quality and reduces errors.

It’s not perfect

But AI is not perfect. Indeed, government and media attention has centered on the potential for AI-driven tools to be biased or discriminatory. For example, during the Obama administration, the White House issued a detailed report on potential civil rights issues that highlighted “the potential of encoding discrimination in automated decisions” made on the basis of artificial intelligence. The EEOC has also voiced concerns about how AI might compromise employment discrimination protections.

Regulatory response to AI

A number of city, state and federal regulations have been proposed or enacted with a goal of eliminating potential discrimination and increasing transparency related to AI. For example:

Facial recognition software ban. Technology-friendly San Francisco passed a ban in mid-May 2019 on the use of facial recognition software by police and other government agencies. The ban, which does not apply to the use of facial recognition software by private entities, makes San Francisco the first major city to legislatively ban the use of this technology. Similar bans are under consideration in other jurisdictions.

AI in hiring in Illinois. One popular use of AI in the hiring process is through AI “interview bots,” which evaluate personal characteristics such as an applicant’s facial expression, body language, word choice and tone of voice. The software then provides feedback that employers can use to evaluate whether to hire a candidate. In May 2019, the Illinois General Assembly passed a first-of-its-kind measure that would restrict employers’ use of this kind of artificial intelligence in hiring. It will likely become law.

The law, known as the Artificial Intelligence Video Interview Act, is a disclosure-and-informed-consent rule that would require employers to take the following steps before asking applicants to submit to video interviews:

  1. Notify applicants for Illinois-based positions of plans to have their video interviews analyzed electronically.
  2. Explain to applicants how the artificial intelligence analysis technology works and what characteristics will be used to evaluate them.
  3. Obtain the applicants’ consent to the use of the technology.

Illinois has become something of an incubator for workplace-technology legislation. It was the first state to pass legislation regulating employers’ use of employee biometric information such as retinal scans, fingerprint scans and facial recognition software.

Algorithmic Accountability Act. In April 2019, congressional Democrats introduced the Algorithmic Accountability Act of 2019, which seeks to enhance federal oversight of artificial intelligence and data privacy.

For processes that fit the proposed statute’s definition, organizations would be required to audit for bias and discrimination and take appropriate corrective action to resolve any identified issues. The bill would give oversight responsibility to the Federal Trade Commission.

The Algorithmic Accountability Act probably will not pass Congress. However, it could be a harbinger of things to come. This kind of legislation may gain momentum at the federal level depending on how the 2020 elections play out.

And, regardless, proposed federal legislation often catches the attention of legislators in one or more states and spurs similar proposals at the state level.

California, for example, has passed the California Consumer Privacy Act, a sweeping data privacy law that takes effect Jan. 1, 2020, and whose scope and application to workplaces remain unclear.

The pace of legislation and regulatory activity relating to AI looks to be increasing. In February 2019 President Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence. The Office of Management and Budget is expected to issue draft guidelines for the AI sector this summer. The White House recently launched www.AI.gov, a website designed as a platform for government agencies to share AI initiatives.


Jenn Betts is an Ogletree Deakins shareholder based in the firm’s Pittsburgh office.