< Workshop
Build a Bot -
A Participatory Design Workshop >

Feb 2024











What do we want from AI -
and what don't we?

If AI is shaping our interactions, decisions, and access to resources, who gets to decide what it should do? The Build a Bot workshop was designed as an interactive experiment in speculative design, allowing participants to engage in critical AI-making rather than just consuming pre-built systems.

Instead of merely critiquing existing AI systems, the workshop asked participants to design their own bots—grappling with the choices, assumptions, and unintended consequences embedded in AI development.










The Structure - Designing AI from the Ground Up
Step 1
Conceptualizing the Bot


"A Bot for Me" → A bot tailored for personal use.

"A Bot for All" → A bot designed for society.

What is its purpose? Who benefits from it? Who might be excluded?


Step 2
Defining the Dataset


What kind of data does the bot need to function?

In 10 minutes, participants listed five key data points their bot would rely on.

In 2 minutes, they revised their list or added more information.

In 10 minutes, they browsed the internet to "optimize" their bot with additional datasets.


Step 3
Testing & Peer Review


Participants shared their bots and discussed:

Would you use this bot?

What makes it helpful—or concerning?

Does it reinforce biases? Who might be affected?

Would you release it into the world as it is? Why or why not?


Step 4
Reflection Round


How was it to build your own AI?

What do you think AI should—and shouldn’t—do?

What concerns or insights emerged from this process?











Key takeaways -

AI as a system of choices
Bias is not accidental—it is embedded in design choices.

Even when participants designed their bots with good intentions, many realized their systems replicated existing biases—often unintentionally.

A bot that serves you personally may harm others at scale.

Participants were comfortable with AI designed for their own needs, but when considering a bot for society, they hesitated—raising questions about power, responsibility, and ethics.

Fixing AI is harder than expected.

Even after reflection and revision, most bots still carried ethical dilemmas, revealing that AI's problems aren't just a matter of technical improvement but of the systemic frameworks in which it is designed and deployed.










Bridging the Workshop to Becoming (more)Human

The insights from Build a Bot directly informed Becoming (more)human, shaping MESIF and TOBRO’s speculative design:

Scoring & Classification → How MESIF turns human identity into ranked data points.

Optimization as Surveillance → How AI systems, like TOBRO, quietly enforce compliance.

The Power of Data → The realization that AI doesn’t just reflect society—it actively shapes it.



Ultimately, this workshop reinforced a central question of the project:

If we were given the power to build AI from scratch, would we actually build something different? Or would we end up replicating the same biases, assumptions, and exclusions that exist today?
