The lack of racial and gender diversity among creators of the artificial intelligence that’s shaping modern life has reached crisis levels, according to a new report from New York University’s AI Now Institute.
The study aggregates research showing that women are vastly underrepresented in the power centers of AI research and development. Only 18% of authors at leading AI conferences are women, while 80% of AI professors are men. Women also make up only 15% of the AI research staff at Facebook and 10% at Google.
Researchers also lay out a list of suggestions for improving the situation, including more transparency around hiring and payroll practices and better-established advancement paths for temps and contract workers.
“Both within the spaces where AI is being created and in the logic of how AI systems are designed, the costs of bias, harassment and discrimination are borne by the same people: gender minorities, people of color and other underrepresented groups,” the report’s authors wrote.
Despite years of criticism, the proportions of women and racial minorities working in Silicon Valley still lag far behind their shares of the overall population, and the divide is particularly stark in technical roles like AI. Much of the AI technology that plays a quiet but ever-expanding role in people’s everyday lives originates in the research departments of tech giants like Facebook, Google and Microsoft and a small number of university labs, all of which tend to be white and male-dominated, according to the report.
Researchers say those gaps lead to a blinkered perspective that manifests in the technology the industry creates, pointing to high-profile examples like image recognition systems that can’t recognize black faces, criminal sentencing algorithms that discriminate against black defendants and chatbots that adopt racist or misogynistic language.
Addressing such biases will require a more comprehensive approach than the technical solutions that have been offered thus far, according to the report.
“Our research points to the need for a more careful analysis of the ways in which AI constructs and amplifies systems of classification, which themselves often support and naturalize existing power structures, along with an examination of how these systems are being integrated into our institutions and how they may be experienced differently on the basis of one’s identity,” the report says.
The report is the result of a year-long pilot study undertaken by AI Now, an interdisciplinary center focused on the social implications of the technology. The study draws together computer science, social science and humanities literature to examine the scale of the diversity and bias problem within AI. The institute says it’s the first stage in a multi-year project that will focus research efforts on various facets of the issue going forward.
AI Now isn’t the only academic institution dedicated to these sorts of problems. As a series of headline-grabbing incidents have shone a light on the dangers of unchecked algorithms, some of the country’s top universities have opened similar research arms aimed at a more cross-disciplinary study of AI, including Stanford’s Human-Centered AI Institute and MIT’s Stephen A. Schwarzman College of Computing.