Settlement Sparks Change
Rite Aid has reached a settlement with the Federal Trade Commission (FTC) that requires it to stop using its AI facial recognition technology. Deployed since 2012 to deter shoplifting, the system drew criticism for racial profiling: it reportedly flagged Black, Latino, and Asian shoppers more often than white shoppers.
Bias in AI: A Retail Racism Case Study
Allegations against Rite Aid's system center on racial profiling of customers of color, who were disproportionately flagged as potential shoplifters. The practice fits a broader pattern of retail racism, in which discriminatory store policies have historically and unfairly targeted Black shoppers.
The Technology's Troublesome Track Record
Rite Aid's deployment illustrates a disturbing trend in retail AI: the perpetuation of racial bias. According to Rashawn Ray of the Brookings Institution, these biases reflect societal stereotypes; AI systems designed in non-diverse environments can encode and amplify them.
Misguided Measures and Their Impacts
Rite Aid's AI system carried costs beyond the profiling itself. Wrongfully flagged shoppers, predominantly from minority groups, faced public humiliation, and the resulting loss of trust cut into store revenues. In some areas, these shoppers had no other retail options and were forced to keep enduring a discriminatory environment.
Beyond Retail: Wider AI Concerns
The problems with biased AI are not confined to retail. In policing, facial recognition errors have led to wrongful arrests, and data from the Pew Research Center show a significant lack of trust in AI among Black Americans, particularly regarding its use in law enforcement.
Calls for Responsibility and Testing
In response to these concerns, some U.S. senators have challenged the use of facial recognition at airports, citing its higher error rates for people of color. Under its settlement with the FTC, Rite Aid is barred from using facial recognition surveillance for five years. Experts urge other retailers to test their AI systems thoroughly to identify and correct biases before deployment.
The discontinuation of Rite Aid's AI facial recognition technology raises essential questions about the ethical use of AI in retail and beyond. While the settlement marks a step toward addressing racial bias in technology, it also highlights the ongoing challenge of ensuring AI systems are fair and just for all.