Build a Bot - A Participatory Design Workshop
Feb 2024
What do we want from AI - and what don't we?
If AI is shaping our interactions, decisions, and access to resources, who gets to decide what it should do? The Build a Bot workshop was designed as an interactive experiment in speculative design, allowing participants to engage in critical AI-making rather than just consuming pre-built systems.
Instead of merely critiquing existing AI systems, the workshop asked participants to design their own bots—grappling with the choices, assumptions, and unintended consequences embedded in AI development.
The Structure - Designing AI from the Ground Up
Step 1
Conceptualizing the Bot
"A Bot for Me" → A bot tailored for personal use.
"A Bot for All" → A bot designed for society.
What is its purpose? Who benefits from it? Who might be excluded?
Step 2
Defining the Dataset
What kind of data does the bot need to function?
In 10 minutes, participants wrote 5 key data points their bot would rely on.
In 2 minutes, they were asked to revise or add more information.
In 10 minutes, they browsed the internet to "optimize" their bot with additional datasets.
Step 3
Testing & Peer Review
Participants shared their bots and discussed:
Would you use this bot?
What makes it helpful—or concerning?
Does it reinforce biases? Who might be affected?
Would you release it into the world as it is? Why or why not?
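Purely as an illustration of the kind of artifact Steps 1-3 produce, a participant's worksheet could be written down as a small structure: the bot's name, its audience ("a bot for me" or "a bot for all"), its purpose, and the five data points it relies on, alongside the peer-review prompts above. The sketch below is hypothetical; BotSpec, CommuteBot, and all of its fields are invented for illustration, and only the review questions come from the workshop itself.

```python
# Hypothetical sketch: one way a Build a Bot worksheet could be captured in code.
# BotSpec and the CommuteBot example are illustrative, not workshop materials.
from dataclasses import dataclass, field


@dataclass
class BotSpec:
    name: str
    audience: str                     # "a bot for me" or "a bot for all"
    purpose: str
    data_points: list[str] = field(default_factory=list)  # the 5 key data points

    def review_questions(self) -> list[str]:
        """The Step 3 peer-review prompts, applied to this spec."""
        return [
            f"Would you use {self.name}?",
            "What makes it helpful, or concerning?",
            "Does it reinforce biases? Who might be affected?",
            "Would you release it into the world as it is? Why or why not?",
        ]


# Illustrative example, not drawn from any participant's actual bot.
commute_bot = BotSpec(
    name="CommuteBot",
    audience="a bot for me",
    purpose="Suggest the least stressful route to work",
    data_points=[
        "home and work locations",
        "calendar events",
        "live transit delays",
        "weather forecast",
        "self-reported stress ratings",
    ],
)
print(commute_bot.review_questions())
```

Even in this toy form, the dataset list makes the workshop's central tension visible: every data point the bot "needs" is also a decision about what it collects and about whom.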
Step 4
Reflection Round
Key takeaways -
AI as a system of choices
Even when participants designed their bots with good intentions, many realized their systems replicated existing biases—often unintentionally.
Participants were comfortable with AI designed for their own needs, but when considering a bot for society, they hesitated—raising questions about power, responsibility, and ethics.
Even after reflection and revision, most bots still carried ethical dilemmas—revealing that AI’s problems aren’t just about technical improvement but about systemic frameworks.
Bridging the Workshop to Becoming (more)Human
The insights from Build a Bot directly informed Becoming (more)Human, shaping MESIF and TOBRO’s speculative design:
Scoring & Classification → How MESIF turns human identity into ranked data points.
Optimization as Surveillance → How AI systems, like TOBRO, quietly enforce compliance.
The Power of Data → The realization that AI doesn’t just reflect society—it actively shapes it.
If we were given the power to build AI from scratch, would we actually build something different? Or would we end up replicating the same biases, assumptions, and exclusions that exist today?