Study: People mirror AI systems’ hiring biases


Thinking about letting artificial intelligence play a role in your hiring decisions? A recent University of Washington study indicates that doing so could lead to missing out on qualified candidates due to racial and other biases built into many AI models. 

In findings recently presented at a conference on artificial intelligence, ethics, and society in Madrid, Spain, UW researchers described how more than 500 study participants picked candidates for jobs ranging from nurse practitioner to housekeeper.

The results were stark: When picking candidates without AI, or with a neutral AI, participants selected white and non-white applicants at equal rates. But when they worked with a moderately biased AI, they followed its lead: if the AI preferred non-white candidates, so did participants, and if it preferred white candidates, participants mirrored that too. Even when the AI was severely biased, people's decisions were only slightly less biased than its recommendations.

Previous research had found that hiring bias – against people with disabilities, and against certain races and genders – permeates large language models, or LLMs, such as ChatGPT and Gemini. The new study shows that keeping humans involved in hiring decisions is not a reliable safeguard: their judgments can still be skewed when AI is in the loop.

“People have agency, and that has huge impact and consequences, and we shouldn’t lose our critical thinking abilities when interacting with AI,” said senior author Aylin Caliskan, a UW associate professor in the Information School, in a statement. “But I don’t want to place all the responsibility on people using AI. The scientists building these systems know the risks and need to work to reduce systems’ biases. And we need policy, obviously, so that models can be aligned with societal and organizational values.” 


