Chapter
Two




< (Un)P r o g r a m m i n g
Y e a r n i n g >











This was my first AI coding project. After a call for collaboration, I began working with Anushka Aggarwal, an actuarial and coding specialist, to develop a machine learning model that could process, interpret, and vocalize expressions of yearning.

Over weeks of iterative development, we refined the model’s architecture, optimizing its behavior at each stage to align with the evolving conceptual goals of Machine Yearning.























I t e r a t i o n #1


Fine-Tuning a Model on Literary Text
The first step was to experiment with text generation. We trained a pre-existing language model on Bliss by Katherine Mansfield to analyze how it would interpret and generate responses based on a controlled dataset. The objective was to -

Evaluate how a fine-tuned model generates new responses based on a fixed literary input.

Observe whether the AI’s output would be coherent, derivative, or entirely novel.

The results demonstrated non-repetitive yet loosely structured responses—suggesting that fine-tuning led to an output that, while irregular, still carried an underlying interpretive logic.
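A minimal sketch of this stage, assuming the fine-tuning was done with Hugging Face Transformers on a small causal model such as GPT-2 — the project notes do not name the model, library, or hyperparameters, and "bliss.txt" is a hypothetical plain-text file holding the Mansfield story:

```python
# Sketch only: model choice, file names, and hyperparameters are assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "bliss.txt" stands in for a plain-text copy of the story.
dataset = TextDataset(tokenizer=tokenizer, file_path="bliss.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="yearning-model", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
trainer.save_model("yearning-model")       # reused in later iterations
tokenizer.save_pretrained("yearning-model")

# Sample a response from the fine-tuned model.
inputs = tokenizer("What do you long for?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```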





I t e r a t i o n #2



Integrating Text-to-Speech (TTS)
Next, we incorporated Text-to-Speech (TTS) capabilities to transition from textual output to an auditory experience. We expanded the dataset with stories and thoughts about yearning collected in parallel. This experiment allowed us to explore -

How the AI would verbalize non-standard outputs such as blanks, symbols, or fragmented text.

The affective quality of AI-generated speech, especially when meaning was ambiguous.

The model’s vocal output exhibited unexpected distortions—anomalies in pronunciation, gaps in articulation, and an eerie tonality—which added to the abstraction of meaning rather than providing direct comprehension. This iteration was pivotal in shifting the model from a text-based generator to a vocal presence, reinforcing the project’s exploration of AI’s role as an expressive rather than predictive entity.
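A minimal sketch of the synthesis step, assuming an off-the-shelf offline engine such as pyttsx3 — the project notes do not name the TTS system actually used:

```python
# Sketch only: pyttsx3 is an assumed stand-in for the unnamed TTS engine.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 140)  # illustrative: slow the delivery slightly

# Generated text is passed through verbatim, blanks and symbols included;
# it is these non-standard tokens that produced the distorted, gap-ridden
# articulation described above.
generated = "she wanted to --    she wanted ...    bliss"
engine.say(generated)
engine.runAndWait()
```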





I t e r a t i o n #3

Training on the Collected Dataset
With the workshop-collected dataset of yearnings, we continued training the model on first-hand human narratives and literary texts related to yearning, all written by women. This iteration was designed to -

Test how the AI would process and interpret the collected yearnings without external context.

Observe whether the outputs retained thematic elements of yearning or deviated into unrelated text.

The results were unexpected—fragmented, surreal, yet strangely resonant. Unlike conventional AI applications, which prioritize coherence and optimization, this model generated responses that felt open-ended, recursive, and interpretive. The system did not “understand” yearning, but it refracted the sentiment through its own computational logic, producing a poetic yet non-prescriptive AI output.
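Continuing the fine-tune from Iteration #1 might look like the following sketch; the checkpoint path "yearning-model" and the file "collected_yearnings.txt" are hypothetical names standing in for the project's own artifacts:

```python
# Sketch only: paths and hyperparameters are assumptions, not the project's.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

# Resume from the Iteration #1 checkpoint rather than the base model.
tokenizer = AutoTokenizer.from_pretrained("yearning-model")
model = AutoModelForCausalLM.from_pretrained("yearning-model")

dataset = TextDataset(tokenizer=tokenizer,
                      file_path="collected_yearnings.txt", block_size=128)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="yearning-model-v2", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("yearning-model-v2")
```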

This version was also exhibited as part of my graduate project, housed inside the sculpture, serving as a public test of how an AI, trained on deeply personal narratives, could produce responses that were both abstract and deeply affecting.






I t e r a t i o n #4


Developing a Real-Time Conversational Loop
While the previous iteration created a compelling listening experience, it lacked real-time interaction. To enhance the model’s engagement, we developed a Speech-to-Text (STT) pipeline, enabling the AI to -

Capture spoken input in real time as a prompt for the fine-tuned model.

Process the transcribed text through the trained model.

Generate and vocalize an AI-generated response in real time.

This required integrating automatic speech recognition (ASR) with TTS synthesis, refining the pipeline to reduce lag and ensure a seamless feedback loop between human and AI. After initial tests with a smaller subset of the dataset, we deployed the full set of collected yearnings, allowing for an interaction where the AI did not just recite pre-generated responses but actively responded, sometimes quite erratically, to user input in real time.
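A sketch of the loop under stated assumptions: the SpeechRecognition package for ASR, pyttsx3 for TTS, and the hypothetical "yearning-model-v2" checkpoint from Iteration #3 — the project's actual components are not named in the notes:

```python
# Sketch only: library choices and the checkpoint path are assumptions.
import pyttsx3
import speech_recognition as sr
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yearning-model-v2")
model = AutoModelForCausalLM.from_pretrained("yearning-model-v2")
recognizer = sr.Recognizer()
voice = pyttsx3.init()

while True:
    # 1. Capture spoken input from the microphone.
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source, phrase_time_limit=10)
    try:
        heard = recognizer.recognize_google(audio)  # 2. Transcribe (STT).
    except sr.UnknownValueError:
        continue  # nothing intelligible; keep listening

    # 3. Generate a response from the fine-tuned model.
    inputs = tokenizer(heard, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=50,
                            do_sample=True, temperature=1.1)
    reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)

    # 4. Vocalize the response (TTS).
    voice.say(reply)
    voice.runAndWait()
```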

This final iteration was exhibited at the Victoria & Albert Museum (V&A) as part of Digital Design Weekend, where the interactive model generated diverse responses based on audience engagement. Feedback from visitors highlighted the uncanny and thought-provoking nature of the AI’s responses—a machine that does not "understand" human emotion, yet evokes a feeling of recognition through its computational reinterpretation of yearning.