AI in Home Surveillance: A Study Highlights Inconsistent and Biased Decision-Making
A new study from MIT and Penn State University finds that large language models (LLMs), such as those used in home surveillance, can produce inconsistent and biased decisions about whether to recommend police intervention. The models often disagreed with one another on the same footage, and their decisions varied across neighborhoods in ways that suggest bias, raising concerns about deploying AI in high-stakes applications.