Opinion | We Need Laws to Take On Racism and Sexism in Hiring Technology

American democracy depends on everyone having equal access to work. In reality, people of color, women, people with disabilities and other marginalized groups suffer disproportionately high rates of unemployment and underemployment, especially given the economic impact of the Covid-19 pandemic. The growing use of artificial intelligence in hiring threatens to exacerbate these problems and to embed bias further into the hiring process.

The New York City Council is currently debating a bill that would regulate automated tools used to evaluate job applicants and employees. If done right, the law could make a real difference in the city and have an outsize national impact: in the absence of federal regulation, states and cities often look to laws passed elsewhere as models for governing emerging technologies.

In recent years, more and more employers have turned to artificial intelligence and other automated tools to speed up hiring, save money and screen applicants without face-to-face interaction, all features that have become even more attractive during the pandemic. These technologies include screeners that scan résumés for keywords, games that claim to score attributes like generosity and appetite for risk, and even emotion analyzers that claim to read facial expressions and vocal cues to predict whether candidates are engaged team players.

In most cases, vendors train these tools by analyzing the employees an employer rates as successful and measuring whether applicants share similar characteristics. This approach can replicate and worsen existing underrepresentation and disparities if, for example, Latino men or black women are poorly represented in the pool of current employees. A résumé-screening tool might likewise learn to favor the Ivy League schools that appear on successful employees' résumés and then downgrade résumés from historically black colleges or women's colleges.

In its current form, the Council bill would require vendors of automated assessment tools to audit them for bias and discrimination, checking, for example, whether a tool selects male candidates at a higher rate than female candidates. Vendors would also have to tell applicants which characteristics a test claims to measure. This approach could be helpful: it would shed light on how applicants are screened and force vendors to think critically about possible discriminatory effects. But for the law to have teeth, we recommend several important additional protections.

The measure must require companies to publicly disclose what they find when they audit their technology for bias. Despite pressure to narrow its scope, the City Council must also ensure that the bill addresses discrimination in all its forms: not just on the basis of race or gender, but also on the basis of disability, sexual orientation and other protected characteristics.

These audits should take into account the circumstances of people who face multiple, intersecting forms of discrimination, such as black women, who can be discriminated against both because they are black and because they are women. Bias audits conducted by companies typically do not capture this.

The bill should also require validation to ensure that the tools actually measure what they claim to measure and that the characteristics they measure are relevant to the job. Such testing would ask, for example, whether a candidate's effort to inflate a balloon in an online game really indicates a willingness to take risks in the real world, and whether the job even calls for risk taking. Mandatory validity testing would also weed out bad actors whose hiring tools do arbitrary things, like changing their judgments about applicants' personalities based on subtle shifts in the background of their video interviews.

The City Council must additionally require vendors to tell candidates how they will be screened by an automated tool before the screening takes place, so candidates know what to expect. A blind applicant, for example, has no way of knowing that a video-interview tool will score them poorly if they do not make eye contact with the camera. Once candidates know what is being tested, they can ask the employer for a fairer alternative. Under the legislation currently before the City Council, companies would have to notify candidates within 30 days that they were assessed by an automated tool, but only after the assessment has already taken place.

Finally, the bill must cover not only the sale of automated hiring tools in New York City but also their use. Without this provision, vendors could escape the bill's obligations simply by moving their sales operations outside the city. The Council should close this loophole.

With this bill, the city has an opportunity to confront new forms of workplace discrimination and move closer to the ideal America stands for: making access to opportunity more equitable for all. Unemployed New Yorkers are watching.

Alexandra Reeve Givens is the executive director of the Center for Democracy and Technology. Hilke Schellmann is an artificial intelligence reporter and an assistant professor of journalism at New York University. Julia Stoyanovich is an assistant professor of computer science and engineering and of data science, and the director of the Center for Responsible AI, at New York University.
