What if the sociotechnical biases embedded in AI systems today define the institutionalised hegemonies of tomorrow?


TOBRO, a speculative product from 2029, part of the AI Bias Research Project















The integration of emerging AI technologies into the everyday functions of being human represents a paradigm shift in the ways we make sense of the world by encoding bias into future realities.

Becoming (more)human is a speculative world-building narrative that illustrates a “what-if” future, wherein these biases, reinforced and amplified by the technocratic developments of AI, create new systems of prejudice.

Through a multimedia installation - a speculative poetry book, a diptych film, and an AI-driven wearable device (TOBRO) - the project immerses audiences in a 2029 reality where AI scoring systems dictate human value and function in society.















Methodologies

Speculative Design Framework by Dunne and Raby | World Building | Product Design | Participatory Design | System and Design Research


Collaborators

3D Printing - RCA 3D Print Lab
Film Support - Sauranav Deka


Tools and Mediums

Blender - 3D Modeling and Prototyping
Writing & Poetry - Narrative Construction
Film & Artefacts - Storytelling and Speculative Documentation
Product Design - AI-Driven Wearable (TOBRO)
Adobe Premiere - Film Editing
Adobe InDesign - Speculative Poetry Book Design










< E  x  p  e  r  i  e  n  c  e >
What constitutes Becoming (more)human?

In the year 2029, the world operates under MESIF—the Marker of Emotional Stability, Intelligence, and Functionality, a universal AI-driven scoring system that dictates an individual’s worth and role in society. MESIF quantifies human existence, determining access to education, employment, healthcare, and social opportunities based on biometric data, behavioral patterns, and AI-driven assessments.

This speculative future is brought to life through an immersive multimedia installation, allowing participants to step inside the world of MESIF through three diegetic artefacts -

Diptych Film: Two screens showing how daily human routines shift, told through a performance video from the perspective of an artist residing in this world.

TOBRO—The AI Wearable: At the heart of the installation sits TOBRO, a speculative AI companion device designed to “correct” individuals with low MESIF scores by analyzing brain-wave patterns and emotional responses, subtly guiding them to behave more like high-scoring individuals.

Poems from 2031: Speculative prose writings detailing the shifts experienced before and after the establishment of the MESIF system, drawn from an underground publication in 2031.












< S  i  g  n  
a  l  s >

Why is investigating and designing in response to AI bias crucial?










AI systems increasingly mediate the ways we work, interact, and access essential resources—governing everything from hiring and predictive policing to medical diagnostics and risk assessments. These technologies, often framed as objective and neutral, instead reflect the biases of the data they are trained on and the sociopolitical structures that shape their development.

Becoming (more)human responds to this reality by reimagining socio-technical infrastructures and questioning the methodologies that sustain them -

What if the sociotechnical biases embedded in AI systems today define the institutionalised hegemonies of tomorrow?


The project is not just about identifying bias in AI, but about exploring how these biases, when encoded into the systems we rely on, create new forms of classification, inclusion, and exclusion.

Rather than framing these concerns through fear-based narratives, which can often feel paralyzing, this project takes a speculative, world-building approach—one that invites participation and reflection. Fear can sometimes limit engagement with emerging technologies, making it harder to ask the right questions. Instead of presenting a rigid critique, Becoming (more)human constructs an immersive "what-if" future where audiences can actively navigate the implications of AI-driven governance. This fictional world is not meant to dictate a singular viewpoint, but to open up an experiential space for curiosity, questioning, and conversation.

The narrative is built from real-world signals—bias in hiring algorithms, predictive policing, emotion AI, and behavioral tracking—where classification systems, once intended as tools for efficiency, begin to reshape the fundamental ways we understand human value and functionality.

Becoming (more)human raises questions like:

Who defines intelligence, stability, and success in AI-driven systems?

What cultural assumptions are embedded in these measurements?

How do we intervene in the logics that shape technological development?









































Signal - Encoding Discriminatory Stereotypes in Healthcare
1. Algorithmic bias perpetuates harmful practices in healthcare technology. While consistently being developed to mitigate biases, promising new diagnostic technology also brings significant problems associated with training bias.

Refer here, here, and here.  


Signal - Encoding Discriminatory Stereotypes in Hiring Practices


2. AI-driven software is employed to screen potential employees. Many reports show this screening to be biased, resulting in unfair hiring practices.

Refer here, and here.


Signal - AI for Defence
3. A report by The Washington Post detailing the use of AI-driven defence systems by Israel.

Refer here.







Machine Yearning, Digital Design Weekend, London Design Week, Victoria and Albert Museum, September 2024.

















< M  e  t  h  o  d
o  l  o  g  i  e  s  +  
P  r  o  c  e  s  s >



How do we build the what-if world of AI systems through designing future artefacts?































Chapter
One




<  R e s e a r c h   +










It began with a critical inquiry:

What if the biases embedded in machine learning systems continue unchecked, concealed beneath a façade of objectivity?
To explore this, I delved into the real-world applications of AI—policing, hiring, and healthcare—where machine learning models are integral to decision-making processes. This investigation revealed not merely the existence of bias but its systemic reinforcement:

Predictive policing disproportionately targeting marginalized communities.

Hiring algorithms excluding candidates from underrepresented backgrounds before human review.

AI-driven healthcare models allocating fewer resources to economically disadvantaged patients.


These issues are not anomalies; they are the consequences of training AI on historical data steeped in inequality. The opacity of AI decision-making processes exacerbates these biases, rendering them difficult to detect and address until significant harm has occurred.

At Imperial College London, researchers in Explainable AI are striving to unravel AI decision-making processes, aiming to understand the rationale behind model predictions. However, these efforts are nascent, and often, biases become apparent only after adversely affecting entire demographics.

From voice assistants exhibiting gender biases to predictive policing disproportionately affecting African American neighborhoods, these challenges are current and widespread.








W o r k s h o p p i n g  >
Collaborating with my colleagues - designers and researchers exploring ethical AI futures - I designed and organised a test workshop on what kind of AI bot or tool we would or would not want, in order to study the individual and collective biases we unknowingly embody within our custom AI systems.

The primary inference drawn was that any tool that hinders human agency or exerts micro-control over our actions is a tool that we would never desire.

For a detailed overview, refer here.



Build a Bot workshop, Royal College of Art, White City. February 2024.


Chapter
Two




<  W o r l d B u i l d i n g     &  
N a r r a t i v e  D e s i g n  >











The year is 2029. Your worth isn’t just felt—it’s calculated.

Every interaction, every emotional fluctuation, every social exchange feeds into a single AI-driven system that determines your access to opportunity, stability, and survival.

Welcome to MESIF.
MESIF (Marker of Emotional Stability, Intelligence, and Functionality) is a government-adopted classification framework—a system designed to measure human potential through AI-driven assessments. It operates under the guise of neutrality and optimization, but in practice, it is an enforcement mechanism, deciding who thrives, who struggles, and who disappears.

What was once implicit bias in hiring, healthcare, and social systems is now explicit, standardized, and automated—packaged as an objective measure of human capability.



Constructing MESIF’s Inner Workings


To build a world where AI-driven classification is the foundation of governance, I had to define its logic, parameters, and consequences.

What is MESIF tracking?

Biometric stress markers
Your emotional stability under pressure.


Social interaction patterns
Your perceived trustworthiness and adaptability.


Cognitive efficiency scores
Your potential for success based on AI predictions.


Together, these metrics form a composite score, dictating everything from employment eligibility to medical care access, from social mobility to basic survival.

The system is presented as an optimization tool—designed to "enhance" societal efficiency. But in reality, MESIF is an algorithmic sorting machine, reinforcing historical inequalities under the guise of progress.
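
To make MESIF’s classification logic tangible while constructing the world, I sketched how such a composite score might collapse a person into a single number. This is a fiction prop rather than a working system: the metric names, weights, and access tiers below are invented for the narrative.

```python
# Speculative prop: a minimal sketch of how a MESIF-style composite score
# might be computed. All metrics, weights, and thresholds are fictional,
# invented for the world-building narrative.
from dataclasses import dataclass


@dataclass
class CitizenReading:
    biometric_stress: float      # 0.0 (volatile) to 1.0 (stable under pressure)
    social_adaptability: float   # 0.0 to 1.0, "perceived trustworthiness"
    cognitive_efficiency: float  # 0.0 to 1.0, AI-predicted "potential"


# The weights encode the system's value judgements - this is where the bias lives.
WEIGHTS = {
    "biometric_stress": 0.3,
    "social_adaptability": 0.3,
    "cognitive_efficiency": 0.4,
}


def mesif_score(reading: CitizenReading) -> float:
    """Collapse a person into a single number between 0 and 100."""
    raw = (
        WEIGHTS["biometric_stress"] * reading.biometric_stress
        + WEIGHTS["social_adaptability"] * reading.social_adaptability
        + WEIGHTS["cognitive_efficiency"] * reading.cognitive_efficiency
    )
    return round(raw * 100, 1)


def access_tier(score: float) -> str:
    """Arbitrary cut-offs deciding who thrives, who struggles, who disappears."""
    if score >= 75:
        return "full access"
    if score >= 50:
        return "conditional access"
    return "flagged for correction"  # TOBRO is issued at this tier


if __name__ == "__main__":
    reading = CitizenReading(
        biometric_stress=0.4, social_adaptability=0.6, cognitive_efficiency=0.5
    )
    score = mesif_score(reading)
    print(score, access_tier(score))
```

The point of the sketch is that the prejudice does not sit in the arithmetic itself, but in what is measured, how it is weighted, and where the cut-offs fall.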







Who Benefits? Who Gets Left Behind?
Some people game the system—curating their behavior to maximize their MESIF scores, carefully optimizing every interaction, expression, and movement.

Others, no matter what they do, remain trapped in low-scoring limbo, deemed unfit by AI's rigid classifications.

MESIF does not just measure human worth—it creates it.

The fiction of meritocracy is now fully mechanized, standardized, and enforced.







< D e s i g n i n g
t h e
E x p e r i e n c e >












MESIF wasn’t just a system. It was a world.


And to make that world tangible, it needed presence. It needed artifacts that would make the weight of its logic felt.

Rather than a single linear narrative, the project materialized through three speculative artifacts, each offering a different lens into a MESIF-governed future.





Chapter
Three




< D e s i g n i n g T O B R O >

















A tool for self-improvement—or submission?
TOBRO was designed as an AI-driven wearable device that tracks, monitors, and corrects human behavior in real time. Marketed as an assistant but functioning as a compliance mechanism, TOBRO serves as the physical manifestation of MESIF’s invisible governance. Built using data from high-scoring individuals, TOBRO conditions low-scoring individuals’ behaviour to match that of their counterparts and makes them more compliant with the scoring system, while acting as a visible marker of their inadequacy.


TOBRO is presented as a 3D-printed enhancement wearable device for better living.
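
As a companion prop to the physical prototype, I also sketched the kind of correction loop TOBRO is imagined to run: read a signal, compare it against a “high-scorer” baseline, and nudge the wearer back toward it. The signal names, baseline values, and tolerance are invented for the fiction; this is a speculative sketch, not a real biofeedback implementation.

```python
# Speculative sketch of TOBRO's imagined correction loop. Sensor readings are
# faked with random values; every name and number here is a fictional prop.
import random
import time

# Fictional baseline built from "high-scoring" individuals.
HIGH_SCORER_BASELINE = {"calm_index": 0.8, "expressiveness": 0.5}


def read_wearer_state() -> dict:
    """Stand-in for TOBRO's brain-wave and emotional-response sensors."""
    return {
        "calm_index": random.uniform(0.2, 1.0),
        "expressiveness": random.uniform(0.0, 1.0),
    }


def nudge(signal: str, deviation: float) -> None:
    """A 'subtle vibration' - the physical reminder that you are being watched."""
    print(f"[TOBRO] vibrate: adjust {signal} (deviation {deviation:+.2f})")


def correction_loop(cycles: int = 3, tolerance: float = 0.15) -> None:
    """Continuously compare the wearer against the baseline and nudge deviations."""
    for _ in range(cycles):
        state = read_wearer_state()
        for signal, target in HIGH_SCORER_BASELINE.items():
            deviation = state[signal] - target
            if abs(deviation) > tolerance:
                nudge(signal, deviation)
        time.sleep(1)  # low-level, always-on monitoring


if __name__ == "__main__":
    correction_loop()
```

“Correction” here simply means convergence toward someone else’s baseline - the compliance mechanism the wearable makes visible.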


Designing TOBRO to Feel... Wrong

Cold-to-the-touch materials → Unsettling, clinical, impersonal.

Constrictive ergonomic fit → Mimicking the weight of constant surveillance.

Subtle vibrations & biofeedback responses → A physical reminder: You are being watched.






Chapter
Four




< W r i t i n g  P o e t r y + P r i n t i n g >





















How do you write poetry in a world where human value is machine-generated?
The speculative poetry book exists as a relic of defiance, a fragmented account of a society shifting from pre-MESIF to post-MESIF. Written for an underground zine, it captures the emotional residue of a world reduced to metrics.


Designing the Book

Narrative structure → Inspired by Olga Tokarczuk’s nonlinear speculative storytelling, weaving personal reflections through a first-hand speculative lens.

Print format → Simple, utilitarian laser-printed pages, reflecting the urgency of underground publishing.

Content themes → Fragmented testimonies, MESIF propaganda, glitch poetry—interwoven to blur the lines between fact, fiction, and machine logic.

The book is not an instruction manual—it is a rupture, a refusal, a question.




