Stories like these call attention to serious problems with society's application of artificial intelligence, but to understand racism in AI - and form a business strategy for dealing with it - enterprise leaders must get beneath the surface of the news and beyond the algorithm.

"I think that racism and bias are rampant in AI and data science from inception," said Desmond Upton Patton, associate professor of sociology at Columbia University. "It starts with how we conceive a problem. The people involved in defining the problem approach it from a biased lens. It also reaches down into how we categorize the data, and how the AI tools are created. What is missing is racial inclusivity into who gets to develop AI tools."

A confluence of events has laid the groundwork for a meaningful examination of how AI systems are developed and applied. The killing of George Floyd and the Black Lives Matter movement have shined a light on how racial biases are woven into the fabric of the modern power structures that define communities, governments and businesses. In response to the growing public awareness of how racism gets institutionalized, corporations have started to make changes - for example, stopping the sale of facial recognition technologies to governments and removing terms like slave and blacklist from IT systems. Understanding how AI's many techniques are used by businesses and governments in ways that perpetuate racism will be a longer and harder effort.

Bias in technology long predates the enterprise's current rush to adopt AI techniques. As Princeton Associate Professor of African American Studies Ruha Benjamin laid out in her 2019 book Race After Technology: Abolitionist Tools for the New Jim Code, racism was baked into camera film and processing, web cams, medical diagnostic equipment and even soap dispensers. But AI bias presents a singular problem, experts interviewed for this article said, because of the speed and scale with which AI can alter business processes. As businesses increase their use of AI to streamline business processes, the fear is they may also be streamlining racial inequality.

Chuck Davis, CTO of Bayesiant, an analytics software vendor that is using AI to quantify COVID-19 risks in specific areas and populations, argued that there's not a moment to lose in effecting changes to how AI is developed and applied. "There is not a facet of life that will not be touched by AI and machine learning over the next five to 10 years," he said. "Practically every decision that can be made about access to capital to start or expand a business, buy a house, a car, and even the type of medical care you receive, will be based on AI to drive that assessment. If you don't have enough people of color working in this domain, you will perpetuate systemic bias."

Davis said he is hopeful that the coronavirus pandemic has moved people to be more empathetic to the ways in which certain populations are constrained due to forces outside their control, making this the right time to delve into how racism is baked into technology. In addition, the lessons we learn from unwinding systemic racism in AI technologies, according to Benjamin, could give us insight into how other forms of institutionalized oppression get inadvertently baked into algorithms in the name of efficiency, progress or profit.