< Signals >

Why is investigating and designing in response to AI bias crucial?

AI systems increasingly mediate the ways we work, interact, and access essential resources—governing everything from hiring and predictive policing to medical diagnostics and risk assessments. These technologies, often framed as objective and neutral, instead reflect the biases of the data they are trained on and the sociopolitical structures that shape their development.

Becoming (more)human responds to this reality by reimagining sociotechnical infrastructures and questioning the methodologies that sustain them:

What if the sociotechnical biases embedded in AI systems today define the institutionalised hegemonies of tomorrow?


The project is not just about identifying bias in AI, but about exploring how these biases, when encoded into the systems we rely on, create new forms of classification, inclusion, and exclusion.

Rather than framing these concerns through fear-based narratives, which can often feel paralyzing, this project takes a speculative, world-building approach—one that invites participation and reflection. Fear can sometimes limit engagement with emerging technologies, making it harder to ask the right questions. Instead of presenting a rigid critique, Becoming (more)human constructs an immersive "what-if" future where audiences can actively navigate the implications of AI-driven governance. This fictional world is not meant to dictate a singular viewpoint, but to open up an experiential space for curiosity, questioning, and conversation.

The narrative is built from real-world signals—bias in hiring algorithms, predictive policing, emotion AI, and behavioral tracking—where classification systems, once intended as tools for efficiency, begin to reshape the fundamental ways we understand human value and functionality.

Becoming (more)human raises questions such as:

Who defines intelligence, stability, and success in AI-driven systems?

What cultural assumptions are embedded in these measurements?

How do we intervene in the logics that shape technological development?


Signal - Encoding Discriminatory Stereotypes in Healthcare
1. Algorithmic bias perpetuates harmful practices in healthcare technology. Although new diagnostic tools are continually being developed to mitigate bias, they also bring significant problems associated with biased training data.

Refer here, here, and here.  


Signal - Encoding Discriminatory Stereotypes in Hiring Practice


2. AI-driven software is employed to screen potential employees. Many reports find this screening to be biased, leading to unfair hiring practices.

Refer here and here.


Signal - AI for Defence
3. A report by The Washington Post details the use of AI-driven defence systems by Israel.

Refer here.







Machine Yearning, Digital Design Weekend, London Design Week, Victoria and Albert Museum, September 2024.