One
Research
What if the biases embedded in machine learning systems continue unchecked, concealed beneath a façade of objectivity?
To explore this, I delved into the real-world applications of AI—policing, hiring, and healthcare—where machine learning models are integral to decision-making processes. This investigation revealed not merely the existence of bias but its systemic reinforcement:
Predictive policing disproportionately targeting marginalized communities.
Hiring algorithms excluding candidates from underrepresented backgrounds before human review.
AI-driven healthcare models allocating fewer resources to economically disadvantaged patients.
These issues are not anomalies; they are the consequences of training AI on historical data steeped in inequality. The opacity of AI decision-making processes exacerbates these biases, rendering them difficult to detect and address until significant harm has occurred.
At Imperial College London, researchers in Explainable AI are striving to unravel AI decision-making processes, aiming to understand the rationale behind model predictions. However, these efforts are still nascent, and biases often become apparent only after they have adversely affected entire demographics.
From voice assistants exhibiting gender biases to predictive policing disproportionately affecting African American neighborhoods, these challenges are current and widespread.
The primary inference was that any tool that undermines human agency or exerts micro-control over our actions is one we would never want.
For a detailed overview, refer here.
Build a Bot workshop, Royal College of Art, White City. February 2024.