A New York City law that comes into effect in January 2023 requires companies to audit the AI systems they use in hiring for biases along the lines of race and gender. Under the law, the hiring company could be held liable for violations.
Unlike financial audits, the audit process for AI is new, with no established guidelines.
‘There is a major concern, which is it’s not clear exactly what constitutes an AI audit,’ said Andrew Burt, managing partner at AI-specialized law firm BNH. ‘If you are an organization that’s using some type of these tools… it can be pretty confusing.’
According to the New York State Department of Labor, there are about 200,000 businesses operating in New York City.
A New York City spokesman stated that the Department of Consumer and Worker Protection has been working on guidelines for the implementation of the law, but that there was no timeline yet for their publication.
Kevin White, co-chair of the labor and employment team at Hunton Andrews Kurth LLP, stated that employers expect audit requirements to be adopted in more jurisdictions.
AI has been slowly making its way into companies' human resources departments. According to the Society for Human Resource Management, about 1 in 4 companies use AI, automation, or both to support HR activities. That figure rises to 42% among companies with over 5,000 employees.
Emily Dickens, SHRM’s head of government affairs, stated that AI technology can assist businesses in hiring candidates amid a ‘war for talent.’
Advocates of the technology have argued that it could stop biases that creep into hiring decisions. For example, a person may unconsciously favor a candidate who attended the same college they did, but computers don't have this problem.
A human mind is ‘the ultimate black box’, unlike an algorithm, whose responses to different inputs can be investigated, said Lindsey Zuloaga, chief data scientist at HireVue Inc., which offers software that automates interviews.
Zuloaga added that AI can ‘be very biased at scale, which is scary.’ She stated that she supported scrutiny of AI systems because it is important for customers to feel comfortable with the tools.
In 2020, a published audit of HireVue’s algorithms found that minority candidates were more likely to give brief answers like ‘I don’t know’ to interview questions, which resulted in their being flagged for human review. The software has since been changed to handle short answers appropriately.
According to the U.S. Chamber of Commerce, businesses have concerns about the ‘opaqueness and lack of standardization’ of AI auditing. Jordan Crenshaw, vice president of the Chamber’s Technology Engagement Center, expressed concern about the law’s possible impact on small businesses.
Mr. White explained that numerous companies have had to rush to determine the extent to which they use AI systems in the hiring process. In some companies, human resources drives the AI process, while in others, it is the IT department or the chief privacy officer.
‘They pretty quickly realize that they have to put together a committee across the company to figure out where all the AI might be sitting,’ he said.
White expects that audits will take different approaches, but he doesn’t believe that the difficulty of complying will force companies to stop using AI.
‘It’s too useful to put back on the shelf,’ he said.
The New York Civil Liberties Union, the Surveillance Technology Oversight Project, and other organizations have pushed for harsher penalties for violations of the law. They argued that companies selling biased tools should themselves face punishment.
‘The good faith effort is really what the regulators are looking for,’ said Liz Grennan, co-leader of digital trust at McKinsey & Co. ‘Frankly, the regulators are going to learn as they go.’ She stated that some companies have begun to act.
Companies are partly motivated by the potential risk to their reputation. Concerns about social and environmental impact outweigh concerns about being ‘slapped by a regulator,’ said Anthony Habayeb, chief executive of AI governance software company Monitaur Inc.
‘If I’m a larger enterprise… I want to be able to demonstrate that I know AI might have issues,’ he said. ‘And instead of waiting for someone to tell me what to do… I built controls around these applications because I know like with any software, things can and do go wrong.’
By Marvellous Iwendi.
Source: WSJ