Chapter One

<Gap > Research > Making >



At the start of this project, I was drawn to a central question: 

If AI models are designed to recognize and predict human emotion, what happens when they encounter something inherently fluid—like yearning?





Machine learning models operate by classifying data, identifying patterns, and making predictions. But emotion resists such structuring—it is relational, context-dependent, and often unspoken. This tension—between AI’s need for definition and the instability of human feeling—became the foundation of my research.
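To ground this tension, the sketch below is a deliberately crude illustration and not part of Machine Yearning's actual model: a toy sentiment classifier that must collapse a statement of yearning into one of three predefined labels. The lexicon, thresholds, and labels are invented for demonstration.

```python
# Toy sketch: how a conventional sentiment model flattens feeling into fixed labels.
# Purely illustrative; the lexicon, labels, and scoring are invented for this example.

LEXICON = {
    "miss": -0.6, "lost": -0.7, "ache": -0.5,   # words treated as "negative"
    "hope": 0.6, "home": 0.4, "light": 0.5,     # words treated as "positive"
}

def classify(text: str) -> str:
    """Sum word scores and force the result into one of three fixed categories."""
    score = sum(LEXICON.get(word.strip(".,"), 0.0) for word in text.lower().split())
    if score > 0.2:
        return "POSITIVE"
    if score < -0.2:
        return "NEGATIVE"
    return "NEUTRAL"

# A statement of yearning: it is neither positive nor negative, yet the model must choose.
print(classify("I miss a home I have never seen, and I hope it still waits for me."))
```

Whatever ambivalence the sentence carries, the system returns a single label; nothing in the output structure can hold what falls between the categories.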

Rather than asking how AI interprets emotion, I began questioning what is lost in the process—the nuances erased, the cultural and linguistic subjectivities flattened, and the constraints imposed by predefined sentiment categories. AI does not merely reflect human emotions; it conditions how they are represented in digital systems. Could an AI be designed not to extract, classify, or optimize, but to pause, reflect, and absorb?

This shift in thinking moved my research beyond critique toward artistic intervention—toward an AI model that resists efficiency, embraces uncertainty, and invites a different kind of engagement.


While researching AI bias for Becoming (more)human, I was already immersed in discourse about algorithmic influence and the systemic reduction of identity within AI. Kate Crawford’s Atlas of AI deepened this inquiry, particularly her critique of how machine learning fixes fluid social and cultural identities into rigid computational categories:

"Machine learning is essentially making fluid and complex social and cultural categories fixed and rarefied. And in doing so, it can lead to consequences as severe as criminal prosecution, jailing, or worse."

This insight sharpened my focus—not just on how AI misrepresents identity, but on how it transforms the way we understand emotional experience. AI does not simply imitate human expression; it filters, classifies, and redefines it through systems designed for efficiency.

Rather than examining AI’s misclassification of identity—a critique well explored in existing research—I wanted to investigate something even more elusive: the abstraction of human emotion.

If AI compresses identity into fixed categories, what does it do to emotions, which are unstable, context-dependent, and resistant to definition?







Through my investigation, I identified key factors influencing how AI models interpret human emotions:

Cultural Context – Emotional expression is deeply shaped by cultural norms, yet AI models predominantly universalize it.

Psychological Variance – Emotions are subjective and change across individuals, making categorization inherently reductive.

Language and Semantics – The way emotions are articulated varies across languages, complicating AI’s ability to map meaning accurately.

Human-AI Interaction – The act of engaging with AI affects how people express emotion, subtly conditioning responses over time.








Yet the challenge is not simply that AI misinterprets emotions—it is that people are increasingly engaging with AI as a mediator for emotional processing.

From AI therapy bots* to sentiment-driven customer service AI*, these models are being integrated into spaces where human emotions are expected to be recognized, validated, and even influenced. 

But, 
what does it mean for human emotion to be structured through computational logic? 
And,
how might an AI system be designed to expose, rather than conceal, its limitations?

This became the foundation of Machine Yearning—an artistic intervention designed to challenge AI’s role in emotional interpretation. Rather than extracting, categorizing, or predicting, this AI disrupts assumptions about how machines and humans engage with emotion.

At the time, I was already experimenting with training machine learning models using self-collected datasets while working on a speculative working prototype for TOBRO.

Inspired by Legacy Russell’s provocation to see errors as intentional disruptions, I wanted to build an AI system that resists efficiency—one that does not provide answers, but instead acts as a mirror for reflection.

To consider how the custom AI could engage with emotion, I turned to bell hooks and her writings on love, desire, and longing. Among complex emotions, yearning stood out—a state that is persistent yet undefined, deeply personal yet collective, resistant to fixed meaning. Unlike the emotions AI attempts to categorize—joy, anger, sadness—yearning is not easily quantifiable. It exists in flux, shaped by memory, absence, and the desire for transformation.



Thus, Machine Yearning emerged as:

An AI model trained on uncategorized first-hand narratives of yearning, rather than rigid datasets.

A sculptural vessel that does not extract or analyze but absorbs and embodies yearning—its form intentionally countering the disembodied voices of AI assistants.

A system of errors—one that does not serve as a tool, but as a break in perception, a space for critical reflection on our everyday AI entanglements.

Rather than a machine that "understands" emotion, Machine Yearning offers an AI that listens, refracts, and responds in ways that disrupt expectation. 

It is not an instrument of prediction, but of ambiguity—a space where the boundaries between human and machine are blurred, unsettled, and open to reinterpretation.




