Can imagining alternative fictional realities prompt us to situate ourselves and our communities on the internet differently?




Will You Eat Me?, Dutch Design Week, Eindhoven, October 2024.

















What? - The Experience










The frameworks that inform the innovation and development of deep learning AI applications are embedded with layers of the status quo’s biases. In Atlas of AI, Kate Crawford argues that biased technological systems enact far-reaching harms. In practice, these algorithms not only perpetuate the bias they are built on but also amplify existing systems of discrimination.

            
Engaging with Dunne and Raby’s what-if methodology of speculative design fiction, ‘Becoming (More)Human’ constructs a probable future world in 2029, where

MESIF - Marker of Emotional Stability, Intelligence, and Functionality

has become a universalised scoring system that demarcates an individual’s function in society. The system is an aggregate of five AI-driven sub-scores - Happiness Meter, Smile Scale, Resting Factor, Labour Rate, and Social Life Expectancy. These sub-systems are based on the AI research and experiments of 2024 - Mental-Health Biomarkers, Computer Vision, Movement Analysis, Office Monitoring, and Social Media AI tools, respectively.
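As a thought experiment, the fiction’s aggregation can be put into code. In the sketch below the sub-score names come from the project, while the equal weighting and the 0-100 scale are purely illustrative assumptions, not a specification.

```python
# Hypothetical sketch of the fictional MESIF aggregate. The five sub-scores
# are from the project's fiction; the equal weighting and the 0-100 scale
# are my own illustrative assumptions.
SUB_SCORES = ["happiness_meter", "smile_scale", "resting_factor",
              "labour_rate", "social_life_expectancy"]

def mesif(scores: dict[str, float]) -> float:
    """Aggregate the five 0-100 sub-scores into a single MESIF score."""
    return sum(scores[name] for name in SUB_SCORES) / len(SUB_SCORES)

print(mesif({"happiness_meter": 62, "smile_scale": 71, "resting_factor": 55,
             "labour_rate": 80, "social_life_expectancy": 47}))  # 63.0
```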















Methodologies

Thinking through Making | Narrative Building | Participatory Design | Research through VR | Speculative Fictions


Collaborators

Team Members
(Collective Research + Narrative Building + Execution)

Unity Designer - Yashika Goel
3D Maker - Emerald Chen
Audio Narrative - Boluwatife Konuwale


Tools and Mediums

Virtual Reality > Immersive Experience > Unity for VR Development

















Still from ‘Will You Eat Me?’


< E  x  p  e  r  i  e  n  c  e >

What do you eat in VR?






Will You Eat Me? is an interactive speculative storytelling experience wherein the audience is invited to build their own cheeseburgers from speculative digital food ingredients, while listening to a narration encapsulating the journey of food from today to tomorrow.


















Why? - The Signals















Machine Yearning, Digital Design Weekend, London Design Festival, Victoria and Albert Museum, September 2024.



















< S  i  g  n  a  l  s >


Why speculate VR Food Futures?

















As climate change fundamentally alters our food production capabilities, there is an urgent need to reimagine our relationship with food consumption. While this future might seem distant, the decisions we make today directly shape tomorrow's dining table.

"Will You Eat Me?" emerged as a response to this critical juncture, using virtual reality not just as a technological tool, but as a medium for embodied storytelling that bridges the gap between current food practices and potential future realities. The project deliberately employs familiar actions - like building a burger - to make abstract concepts about food sustainability tangible and personally relevant. By contrasting comfortable food memories with unfamiliar future alternatives, the experience creates space for meaningful discourse while remaining accessible to diverse audiences.


Signal - SoftBank and their Emotion AI
1. Emotion-cancelling AI built for better customer service, altering employees’ emotional states by softening the tone of customer speech and playing AI-triggered video montages to calm and relax the employee.

Refer here.




Signal - Bumble AI Assistant?

2. Bumble’s co-founder suggesting an AI concierge that would date other people’s AI concierges, so you don’t have to talk to anyone and everyone.

Refer here.




Signal - AI Therapists
3. People increasingly seeking advice for their mental-health concerns from AI chatbots and AI-therapist bots, disregarding the harm enacted through computational bias.

Refer here and here.














How? - The Methodologies and Research

















Machine Yearning, Digital Design Weekend, London Design Festival, Victoria and Albert Museum, September 2024.


< M  e  t  h  o  d  o  l  o  g  i  e  s  +  P  r  o  c  e  s  s >



How does the Machine Learn?


























Chapter One




< G a t h e r i n g  D a t a >




















Datasets are active participants in defining the function of machine learning models. Data is primarily human. So,

How might we ethically collect data to programme equitable AI models and systems?
















Extractivist Data Methodologies → Ethical + Feminist Data Methodologies


Dubious, hidden, or false consent and permission → Informed consent


Hidden use of data (the function of data sharing is not defined or shared) → Transparency about the outcomes generated from use of shared data


Reproduction of biases through inappropriate or biased labelling → Diverse and communal collection allows for dilution of biases






















Adopting, learning, and practising human-first data methods, I designed a three-part workshop focusing on co-creating narratives, sharing, and collecting. Below is the fundamental flow for each part; you can access the blueprint for each workshop here.

















Chapter Two










< (Un)P r o g r a m m i n g  Y e a r n i n g >











This was my first AI coding project. Once my call for collaboration was answered by a brilliant maths and coding whiz, Anushka Aggarwal, we began what would become weeks of back and forth, developing iteration after iteration, each time optimising towards our evolving purpose.


I t e r a t i o n #1
The first draft of the code was programmed to generate interpretive textual responses, using the short story Bliss by Katherine Mansfield as the input data. I wanted to see how similar or different the responses would be - given a fixed dataset, what does the model produce? The responses were varied, not repetitive, and rather weirdly interpretive. Conclusion: fine-tuning a model generates legible but irregular responses; there is some sense to be derived from them.
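For context, here is a minimal sketch of what such a first iteration can look like with the Hugging Face transformers and datasets libraries; GPT-2, the chunking, and all hyperparameters are illustrative assumptions, not our exact setup.

```python
# Minimal sketch: fine-tune a small causal language model on a single text
# (e.g. "Bliss") and sample an interpretive response. GPT-2 and the
# hyperparameters are illustrative assumptions, not our exact setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Split the story into chunks and tokenise them as the training set.
text = open("bliss.txt").read()  # assumed local copy of the story
chunks = [text[i:i + 500] for i in range(0, len(text), 500)]
dataset = Dataset.from_dict({"text": chunks}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bliss-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()

# Sample one interpretive response from the fine-tuned model.
inputs = tokenizer("She was", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                        top_p=0.9, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```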






I t e r a t i o n #2
The next development was to add text-to-speech, a simple experiment in building the audio interaction. Especially for the phrases where blanks or symbols were generated instead of words, I wanted to learn how the model would convert them into speech, and how that interaction would feel for us.
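A sketch of that step using an off-the-shelf offline engine; pyttsx3 is an illustrative stand-in, not necessarily the library we used, and the broken phrases are invented examples.

```python
# Sketch: speak the model's raw generations aloud - including the broken
# phrases full of blanks and symbols - to hear how a TTS engine interprets
# them. pyttsx3 and the example phrases are illustrative stand-ins.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the voice down slightly

for phrase in ["I yearn for the ______ of mornings",
               "??? the sea, the sea ???"]:
    engine.say(phrase)
engine.runAndWait()  # blocks until every queued phrase has been spoken
```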






I t e r a t i o n #3
The next test used the developed dataset as test data to review the generated responses. Without prompting, this would be the first raw generation of the model’s understanding of the collected yearnings. Needless to say, all of it was fascinating: a repository of stories from the AI’s perspective, quite useless, and very much invoking the audience’s curiosity. This version was also exhibited as my graduate project.
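Unconditional generation of this kind can be sketched as below; the key move is passing only a beginning-of-sequence token instead of a prompt (GPT-2 again stands in for the fine-tuned model).

```python
# Sketch: unconditional generation - no prompt, only a beginning-of-sequence
# token - sampled a few times to surface the model's "raw" yearnings.
# GPT-2 stands in for the fine-tuned model from the earlier sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

bos = torch.tensor([[tokenizer.bos_token_id]])
for _ in range(3):
    output = model.generate(bos, max_new_tokens=50, do_sample=True,
                            temperature=1.1,
                            pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True), "\n---")
```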






I t e r a t i o n #4
Albeit a meaningful listening experience, it did not create enough of a feedback loop in the human-AI interaction for people to link it to AI and initiate further conversation. So the next natural step was to build a speech-to-text setup, wherein the model takes the spoken word as prompt and input data, and generates speech in response. I tested this model with a smaller dataset, and then with the collected dataset.
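A simplified version of that listen-transcribe-generate-speak loop might look like the sketch below; the speech_recognition library with Google’s free recogniser, pyttsx3, and the GPT-2 pipeline are stand-ins for the actual setup.

```python
# Sketch of the iteration-4 loop: listen -> transcribe -> generate -> speak.
# speech_recognition with Google's free recogniser, pyttsx3, and the GPT-2
# pipeline are illustrative stand-ins for the actual setup.
import pyttsx3
import speech_recognition as sr
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # assumed model
engine = pyttsx3.init()
recognizer = sr.Recognizer()

def generate_reply(prompt: str) -> str:
    result = generator(prompt, max_new_tokens=50, do_sample=True)
    return result[0]["generated_text"]

while True:
    with sr.Microphone() as source:  # requires PyAudio
        print("Listening...")
        audio = recognizer.listen(source, phrase_time_limit=8)
    try:
        spoken = recognizer.recognize_google(audio)  # speech -> text prompt
    except sr.UnknownValueError:
        continue  # could not transcribe; listen again
    engine.say(generate_reply(spoken))  # model's text -> spoken response
    engine.runAndWait()
```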


































Chapter Three

< D e s i g n i n g  t h e  E x p e r i e n c e >




















Interacting with the custom yearning AI could have taken the form of a 3D-visualised digital character embodying the voices of yearning, but my primary enquiry was -
How might we utilise the feeling of awe as a transformative tool for cultivating critical reflection among the masses who make and use AI?










So, I designed a vessel.


Yearning voiced from fibreplastic belly.
A cultural vessel housing stories.
Bottled-up emotions.
The talking well.
An artefact in time; a future artefact.
The echoing cave.
and more.





1.

What are the possible forms that the custom AI could embody?






























