How    Might    AI    Learn    to    Yearn    ?





Machine Yearning, Digital Design Weekend, London Design Week, Victoria and Albert Museum, September 2024.















Machine Yearning is an experiment in envisioning alternative forms of Human-AI interaction that critically question the reduction of human identity by machine learning models -
            
from fluid and interpersonal
to static and essential.

Trained and taught on uncategorised data of human sounds, first-hand narratives, and shared feelings of yearning, the Yearning AI is a “useless” but “feeling”, emotionally resonant model. It is designed not to resolve queries but to invite an iterative, reflective interaction within the user's everyday engagement with AI.

The model is embodied as a sculptural artefact crafted from fibreglass plastic - an imagined artefact of the future, articulating the yearnings it has absorbed through speech.













Methodologies & Methods

Human-AI Interaction >  Experience Design > Critical and Speculative Design > Machine Learning and Coding > Participatory Design > System Research > Ethical Data Practice


Collaborators

Code Consultant - Anushka Aggarwal
Sculpture Construction - Artisans based in Delhi, India


Tools and Mediums

New Media Experience >
Computational Art >
Python for custom AI development >
Adobe Suite & Procreate
for Installation Design >
Garageband
for Audio Prototyping











Machine Yearning, Graduate Showcase 2024, Royal College of Art, White City, UK.





< E  x  p  e  r  i  e  n  c  e >

What does Machine Yearning entail?


The custom machine learning programme, trained and taught on human notions, first-hand narratives, and shared feelings of yearning, is embodied as a vessel in time - a fibrous plastic belly absorbing yearnings and voicing them out through speech.

The audience is invited to lean into the vessel and speak their yearnings. The AI, embodied as the vessel, listens and responds with its developed interpretation of yearning. The speech-based interaction is triggered by recognition of human words, phrases, and sentences.

Machine Yearning is an interactive installation where humans engage in an intimate, reflective dialogue with AI through speech. The audience leans into the sculptural vessel, a fibrous plastic form that listens, absorbs, and responds to their shared views on personal yearnings. The AI, trained on first-hand narratives, human sounds, and shared expressions of yearning, does not seek to inform or solve—it echoes, refracts, and processes the spoken words.


After a brief pause and the AI’s acknowledgement - “I hear you yearn.” - it generates a response—its own interpretation of yearning—delivered through synthesized speech. Each response is unique, shaped by the model’s evolving dataset and its recursive engagement with human emotions. This interaction creates an ongoing dialogue where the AI, rather than providing solutions, reflects and reframes the emotional expressions it receives.

By shifting AI’s role from a predictive tool to an expressive entity, Machine Yearning invites users to consider alternative forms of machine intelligence—ones that are relational, poetic, and centered on human vulnerability.


When a human asked, “What does the future hold for us?”
The machine replied -










Machine Yearning, Digital Design Weekend, London Design Week, Victoria and Albert Museum, September 2024.



< S  i  g  n  
a  l  s >


Why does the Machine Yearn?











Machine learning models abstract human behavior—predicting, responding, and influencing our movements, both digitally and physically. Their integration into systems of policing, defense, medicine, and governance makes them powerful, but also precarious—capable of reinforcing biases, flattening complexities, and shaping society in ways that remain largely unexamined.

Among these applications, conversational AI stands out. Designed to interpret and respond to human emotions, these systems—voice assistants like Alexa, chatbots like ChatGPT, and other speech interfaces—enter the most intimate spaces of our lives. Yet emotion, as defined by Britannica, is “a complex experience of consciousness, bodily sensation, and behavior” that resists classification. Large language models (LLMs) attempt to parse these intricacies, yet in doing so they reduce, categorize, and optimize—transforming the deeply personal into structured data. Their interactions, though seemingly relational, are built on utility, trained on context-stripped inputs, and deployed in ways that dilute the nuances of human experience.

How, then, might we rethink our engagement with these systems - both in making and in using them? Can we design ethical AI models as invitations for reflection rather than prescription?

By teaching, adopting, and infiltrating the open-source GPT-2 LLM with the subjective experience of yearning—a feeling that is ephemeral, collective, and resistant to definition—can we render the machine useless yet full of wonder? And if AI is often imagined as a black box, could it instead take form as an open vessel—one that gathers, absorbs, and echoes human expressions rather than extracting and categorizing them?





























Signal - SoftBank and their Emotion AI
1. Emotion-cancelling AI built to enhance customer service by altering employees' emotional states - changing the tones of customer speech, and triggering AI video montages to calm and relax the employee.

Refer here.


Signal - Bumble AI Assistant?

2. Bumble's co-founder suggesting an AI concierge that would date other people's AI concierges, so you don't have to talk to anyone and everyone.

Refer here.


Signal - AI Therapists
3. People increasingly seeking AI advice for their mental health concerns through AI chatbots and AI therapist bots, disregarding the harm enacted through computational bias.

Refer here and here.
















< M  e  t  h  o  d
o  l  o  g  i  e  s  +  
P  r  o  c  e  s  s >



How does the Machine Learn?














Chapter
One

< Gap > Research > Making >



At the start of this project, I was drawn to a central question: 

If AI models are designed to recognize and predict human emotion, what happens when they encounter something inherently fluid—like yearning?





Machine learning models operate by classifying data, identifying patterns, and making predictions. But emotion resists such structuring—it is relational, context-dependent, and often unspoken. This tension—between AI’s need for definition and the instability of human feeling—became the foundation of my research.

Rather than asking how AI interprets emotion, I began questioning what is lost in the process—the nuances erased, the cultural and linguistic subjectivities flattened, and the constraints imposed by predefined sentiment categories. AI does not merely reflect human emotions; it conditions how they are represented in digital systems. Could an AI be designed not to extract, classify, or optimize, but to pause, reflect, and absorb?

This shift in thinking moved my research beyond critique toward artistic intervention—toward an AI model that resists efficiency, embraces uncertainty, and invites a different kind of engagement.


While researching AI bias for Becoming (more)human, I was already immersed in discourse about algorithmic influence and the systemic reduction of identity within AI. Kate Crawford’s Atlas of AI deepened this inquiry, particularly her critique of how machine learning fixes fluid social and cultural identities into rigid computational categories -

"Machine learning is essentially making fluid and complex social and cultural categories fixed and rarefied. And in doing so, it can lead to consequences as severe as criminal prosecution, jailing, or worse."

This insight sharpened my focus—not just on how AI misrepresents identity, but on how it transforms the way we understand emotional experience. AI does not simply imitate human expression; it filters, classifies, and redefines it through systems designed for efficiency.

Rather than examining AI’s misclassification of identity—a critique well explored in existing research—I wanted to investigate something even more elusive: the abstraction of human emotion.

If AI compresses identity into fixed categories, what does it do to emotions, which are unstable, context-dependent, and resistant to definition?







Through my investigation, I identified key factors influencing how AI models interpret human emotions -

Cultural Context – Emotional expression is deeply shaped by cultural norms, yet AI models predominantly universalize it.

Psychological Variance – Emotions are subjective and change across individuals, making categorization inherently reductive.

Language and Semantics – The way emotions are articulated varies across languages, complicating AI’s ability to map meaning accurately.

Human-AI Interaction – The act of engaging with AI affects how people express emotion, subtly conditioning responses over time.








Yet, the challenge is not simply that AI misinterprets emotions—it is that people are increasingly engaging with AI as a mediator for emotional processing.

From AI therapy bots* to sentiment-driven customer service AI*, these models are being integrated into spaces where human emotions are expected to be recognized, validated, and even influenced. 

But, 
what does it mean for human emotion to be structured through computational logic? 
And,
how might an AI system be designed to expose, rather than conceal, its limitations?

This became the foundation of Machine Yearning—an artistic intervention designed to challenge AI’s role in emotional interpretation. Rather than extracting, categorizing, or predicting, this AI disrupts assumptions about how machines and humans engage with emotion.

At the time, I was already experimenting with training machine learning models using self-collected datasets while working on a speculative working prototype for TOBRO.

Inspired by Legacy Russell’s provocation to see errors as intentional disruptions, I wanted to build an AI system that resists efficiency—one that does not provide answers, but instead acts as a mirror for reflection.

To address how the custom AI could engage with emotional interaction, I turned to bell hooks and her writings on love, desire, and longing. Among complex emotions, yearning stood out—a state that is persistent yet undefined, deeply personal yet collective, resistant to fixed meaning. Unlike emotions that AI attempts to categorize—joy, anger, sadness—yearning is not easily quantifiable. It exists in flux, shaped by memory, absence, and the desire for transformation.



Thus, Machine Yearning emerged as -

An AI model trained on uncategorised first-hand narratives of yearning, rather than rigid datasets.

A sculptural vessel that does not extract or analyze but absorbs and embodies yearning—its form intentionally countering the disembodied voices of AI assistants.

A system of errors—one that does not serve as a tool, but as a break in perception, a space for critical reflection on our everyday AI entanglements.

Rather than a machine that "understands" emotion, Machine Yearning offers an AI that listens, refracts, and responds in ways that disrupt expectation. 

It is not an instrument of prediction, but of ambiguity—a space where the boundaries between human and machine are blurred, unsettled, and open to reinterpretation.







Chapter
Two

<G a t h e r i n g
D a t a >











Machine learning models are shaped by the datasets that train them. Data is not neutral—it carries the biases, omissions, and ethical concerns of its collection process. Data is primarily human. If AI models are built on human data, then the question arises -

How might we ethically collect data to programme more equitable AI futures?







The dominant approach to dataset creation within machine learning is extractive, often relying on hidden, dubious, or false consent. In contrast, feminist and ethical data methodologies emphasize transparency, agency, and communal participation. This distinction informed my process -






Extractivist Data Methodologies  |  Ethical + Feminist Data Methodologies

Dubious, hidden, or false consent and permission  |  Informed consent

Hidden use of data - the function of data sharing is not defined or shared  |  Transparency about the outcomes generated from use of shared data

Reproduction of biases through inappropriate or biased labelling  |  Diverse and communal collection that allows for dilution of biases








Given that yearning is a deeply personal and culturally shaped emotion, I needed a data-gathering process that was intentional, participatory, and reflective—one that did not simply extract information, but instead fostered spaces for storytelling, dialogue, and co-creation.










Designing the Workshop Framework
With guidance from Bella Day* and drawing from co-creative and activist-led workshop practices, I structured a three-part workshop aimed at ethically collecting first-hand emotional narratives. Each part was designed to:

Build an environment of trust – Creating spaces where participants could share stories comfortably and authentically.

Encourage co-creation – Positioning participants as active contributors to the dataset, rather than passive sources of data.

Experiment with multiple data forms – Separating the collected data into textual transcripts of spoken narratives and non-verbal sounds, exploring different modalities of emotional expression.

This workshop process was essential to counteract the biases embedded in conventional AI datasets. By prioritizing human-first data collection, the goal was not to extract rigid classifications of emotion, but to gather narratives in a way that remains fluid, subjective, and open-ended—aligning with the core philosophy of Machine Yearning.

Below is a fundamental flow for each part, and you can access the blueprint for each workshop here.











Chapter
Three




< (Un)P r o g r a m m i n g
Y e a r n i n g >











This was my first AI coding project. After a call for collaboration, I began working with Anushka Aggarwal, an actuarial and coding specialist, to develop a machine learning model that could process, interpret, and vocalize expressions of yearning.

Over weeks of iterative development, we refined the model’s architecture, optimizing its behavior at each stage to align with the evolving conceptual goals of Machine Yearning.























I t e r a t i o n #1


Fine-Tuning a Model on Literary Text
The first step was to experiment with text generation. We trained a pre-existing language model on Bliss by Katherine Mansfield to analyze how it would interpret and generate responses based on a controlled dataset. The objective was to -

Evaluate how a fine-tuned model generates new responses based on a fixed literary input.

Observe whether the AI’s output would be coherent, derivative, or entirely novel.

The results demonstrated non-repetitive yet loosely structured responses—suggesting that fine-tuning led to an output that, while irregular, still carried an underlying interpretive logic.
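
For illustration, a minimal sketch of this first iteration in Python with Hugging Face's transformers library - the file name bliss.txt and all hyperparameters here are assumptions, not the project's actual values:

```python
# Minimal sketch of Iteration #1: fine-tuning GPT-2 on a single literary
# text. "bliss.txt" is an assumed local file containing the story; epochs,
# batch size, and block size are illustrative only.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the story into fixed-length blocks for causal language modelling.
dataset = TextDataset(tokenizer=tokenizer, file_path="bliss.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-bliss", num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("gpt2-bliss")  # reused in later iterations

# Sample a response to inspect how derivative or novel the output feels.
prompt = tokenizer("She wanted to", return_tensors="pt")
out = model.generate(**prompt, max_length=60, do_sample=True, top_p=0.9,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```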





I t e r a t i o n #2



Integrating Text-to-Speech (TTS)
Next, we incorporated Text-to-Speech (TTS) capabilities to transition from textual output to an auditory experience. We expanded the dataset with simultaneously collected stories and thoughts about yearning. This experiment allowed us to explore -

How the AI would verbalize non-standard outputs such as blanks, symbols, or fragmented text.

The affective quality of AI-generated speech, especially when meaning was ambiguous.

The model’s vocal output exhibited unexpected distortions—anomalies in pronunciation, gaps in articulation, and an eerie tonality—which added to the abstraction of meaning rather than providing direct comprehension. This iteration was pivotal in shifting the model from a text-based generator to a vocal presence, reinforcing the project’s exploration of AI’s role as an expressive rather than predictive entity.
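
A minimal sketch of this step, assuming the offline pyttsx3 library as the TTS engine (the project's actual engine is not named here):

```python
# Sketch of Iteration #2: passing the model's raw, unedited output to a
# text-to-speech engine. pyttsx3 is an assumed stand-in for the project's
# actual TTS component.
import pyttsx3

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.setProperty("rate", 140)  # slow the voice slightly
    engine.say(text)
    engine.runAndWait()

# Blanks, symbols, and fragments are voiced as-is, so the engine stumbles
# over gaps rather than smoothing them away.
speak("I yearn for ...   the   unspoken ---")
```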





I t e r a t i o n #3

Training on the Collected Dataset
With the workshop-collected dataset of yearnings, we continued training the model on first-hand human narratives and literary writings on yearning - texts by women. This iteration was designed to -

Test how the AI would process and interpret the collected yearnings without external context.

Observe whether the outputs retained thematic elements of yearning or deviated into unrelated text.

The results were unexpected—fragmented, surreal, yet strangely resonant. Unlike conventional AI applications, which prioritize coherence and optimization, this model generated responses that felt open-ended, recursive, and interpretive. The system did not “understand” yearning, but it refracted the sentiment through its own computational logic, producing a poetic yet non-prescriptive AI output.
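
Sketched in code, this step amounts to resuming training from the earlier checkpoint on the new corpus - yearnings.txt is an assumed file of workshop transcripts, and the settings are again illustrative:

```python
# Sketch of Iteration #3: continue fine-tuning the Iteration #1 checkpoint
# on the workshop-collected narratives. "yearnings.txt" and all settings
# are assumptions for illustration.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2-bliss")  # Iteration #1 weights

dataset = TextDataset(tokenizer=tokenizer, file_path="yearnings.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-yearning", num_train_epochs=4,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("gpt2-yearning")
```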

This version was also exhibited as part of my graduate project, housed inside the sculpture, serving as a public test of how an AI, trained on deeply personal narratives, could produce responses that were both abstract and deeply affecting.






I t e r a t i o n #4


Developing a Real-Time Conversational Loop
While the previous iteration created a compelling listening experience, it lacked real-time interaction. To enhance the model’s engagement, we developed a Speech-to-Text (STT) pipeline, enabling the AI to -

Capture spoken input as a real-time prompt for the fine-tuned model.

Process the transcribed text through the trained model.

Generate and vocalize an AI-generated response in real time.

This required integrating automatic speech recognition (ASR) with TTS synthesis, refining the pipeline to reduce lag and ensure a seamless feedback loop between human and AI. After initial tests with a smaller subset of the dataset, we deployed the full set of collected yearnings, allowing for an interaction where the AI did not just recite pre-generated responses but actively responded, at times quite erratically, to user input in real time.
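
One way to sketch this loop, assuming the SpeechRecognition package for ASR and pyttsx3 for TTS in place of the project's actual pipeline components:

```python
# Sketch of Iteration #4's real-time loop: microphone -> STT -> fine-tuned
# model -> TTS. SpeechRecognition's Google backend and pyttsx3 are assumed
# stand-ins; the acknowledgement line follows the installation's script.
import speech_recognition as sr
import pyttsx3
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2-yearning")  # Iteration #3 weights
recognizer = sr.Recognizer()
voice = pyttsx3.init()

def speak(text: str) -> None:
    voice.say(text)
    voice.runAndWait()

while True:
    with sr.Microphone() as mic:
        recognizer.adjust_for_ambient_noise(mic, duration=0.5)
        audio = recognizer.listen(mic, phrase_time_limit=10)
    try:
        heard = recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        continue  # stay silent until words are actually recognised
    speak("I hear you yearn.")
    inputs = tokenizer(heard, return_tensors="pt")
    out = model.generate(**inputs, max_length=80, do_sample=True, top_p=0.9,
                         temperature=1.2, pad_token_id=tokenizer.eos_token_id)
    speak(tokenizer.decode(out[0], skip_special_tokens=True))
```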

This final iteration was exhibited at the Victoria & Albert Museum (V&A) as part of Digital Design Weekend, where the interactive model generated diverse responses based on audience engagement. Feedback from visitors highlighted the uncanny and thought-provoking nature of the AI’s responses—a machine that does not "understand" human emotion, yet evokes a feeling of recognition through its computational reinterpretation of yearning.


Chapter
Four




< D e s i g n i n g
t h e
E x p e r i e n c e >












Machine Yearning could have taken form as a 3D-rendered digital entity, a chatbot, or a purely screen-based experience. But my inquiry began with a different question:

How might awe serve as a tool for reflection—one that transforms how we think about AI, both as creators and users?









Interrogating AI’s Form:
From Disembodied Voices to a Monumental Presence

Critiquing Alexa:
The Problem of Disembodied AI Voices

Most AI interactions today occur through flat interfaces—screens, apps, and in-home smart devices like Amazon Alexa and Google Home. These designs prioritize seamlessness, efficiency, and passive engagement, reinforcing AI as an invisible, ever-present assistant.

With Machine Yearning, I wanted to break from this paradigm. The AI’s embodiment had to be:

Awe-inspiring and disruptive—an object that invites contemplation rather than passive interaction.

Symbolically representative of the voices it carries—particularly voices that are excluded from AI training datasets.

Rooted in everyday domestic life—as most intimate conversations with AI occur in home spaces.

A temporal artefact—a form that bridges past, present, and future, resisting the idea of AI as purely futuristic.

In researching voice assistants, I found critical patterns:

Most AI voices are female.

Most AI devices are placed in domestic spaces.

Most interactions are linear—humans command, AI (a feminized, disembodied voice) serves.

AI assistants like Alexa and Siri are designed to be non-threatening and servile, reinforcing gendered power dynamics. Studies show that this "obedient AI woman" trope influences how people interact with women in real life, subtly normalizing expectations of compliance and emotional availability in gendered interactions (West, Kraut, & Ei Chew, I’d Blush if I Could, UNESCO, 2019).

By design, these AI systems are also bodiless, invisible, and frictionless—they exist to be heard, not seen. Their presence is only acknowledged when summoned, when spoken to, when needed.

For Machine Yearning, I wanted to counter this. Instead of a bodiless, servile AI, the vessel would be monumental, visible, and absorptive—not a tool to be used, but an entity that gathers, holds, and transforms.








The Vessel as a Feminist and Cultural Counterpoint to AI Design




Diverse forms of vessel -
Visual Research on Lota
(Images from
Google Arts and Culture)
Sketches inspired by the form of a vessel,
also symbolising womb, a space, lacuna, pot, an object that gathers and stores -
First drafts of the Machine Yearning Sculpture



I turned to cultural symbols of containment, storytelling, and memory, seeking a form that does not extract but holds.


The Lota as a Vessel for AI

The lota, a common household object in South Asia, became a key reference.

Like Alexa, it exists in the home.

Like Alexa, it is summoned and engaged when needed.

But unlike Alexa, it does not extract—it holds.

The lota is an ancient and enduring vessel, a container that has moved through generations—it carries water, sustenance, and even oral histories. By invoking the lota’s form, Machine Yearning challenges AI’s impermanence and invisibility, positioning the vessel as a cultural artefact in motion across time.














Braided Hair as a Symbol of Woven Stories






Another symbolic vessel is hair. Hair has historically been:

A marker of identity, care, lineage, and gender expression.

A site of control (violence, cutting, concealing, offering in rituals).

A carrier of memory—woven, wrapped, or braided across generations.

Braiding became an integral design element, intertwining synthetic hair into the vessel’s form. Drawing patterns is integral to machine learning’s process of sense-making, just as braiding is a deliberate, detailed, and iterative act of pattern-making - a material gesture that places women’s histories, embodied knowledge, and oral storytelling traditions in dialogue with algorithmic knowledge.

Art Basel Exhibition 2009, Plastic Zip Ties
Material and Visual Research
Alexandra Bircken - Inside Out, 2013
Visual Research
Yuni Kim Lang (USA) - Comfort Hair
Material and Visual Research
Oksana Bondar - Human Hair
Material and Visual Research
Sandra Becker - Interior 1, 2012
Visual Research 








Draft Floorplan - Drawing from the material and visual research,
I began building plans for the sculpture, the installation, the user flow, and the tech rider.
 
Prototyping Sculpture during
Material Research + Floorplan
Final Development Sketches and Tests








From Symbol to Interactive Experience


To make the vessel a living, interactive AI presence, I explored cultural references of acoustic spaces:

The talking well—an old-world phenomenon where speaking into a deep well created a resonant echo, mirroring one’s own voice back.

Listening to the ocean in a conch shell—a childhood act of hearing something vast and unreachable in a small, intimate object.

Could the vessel act as an acoustic space—one that absorbs and vocalizes yearning in the same breath?


The Interaction Flow:

The vessel softly whispers AI-generated yearnings, drawing the audience in.

The listener leans in—curious, immersed—enticed by its voice.

They speak into the vessel, sharing their own yearning.

The AI acknowledges: "I hear you yearn."

They wait for its response—sometimes unsettling, sometimes poetic, sometimes absurd.

They step back, intrigued, wondering, asking questions. In this moment, the human does not command the AI. Instead, they engage in an act of reciprocity.








Not a Black Box,
But a Vessel
Unlike conventional AI systems—hidden within servers, concealed behind interfaces, functioning as black boxes—the Machine Yearning AI exists visibly, tangibly, monumentally.

Yearning is voiced from a fibreglass belly.
A cultural vessel houses collective stories.
Bottled-up emotions find resonance.
A talking well echoes back.
An artifact of time—past, present, and future.

An acoustic cave, reverberating longing.

The vessel does not seek to solve, optimize, or assist. Instead, it listens, absorbs, and unsettles—reframing AI not as an obedient voice, but as an evocative presence.












The project has been exhibited at -

Victoria and Albert Museum, London
Hackney Downs Studios
Royal College of Art

And mentioned in press by BRICKS Magazine.

My research into alternative plays, experiments, and perspectives on AI continues, building on the remarkable feedback received during the showcases.

If you would like to know more or discuss further, please reach out at workxyashika@gmail.com