Show HN: Defragged – a hack-game about fixing calendars and building AI empathy (getclockwise.com)
21 points by raphael_damico on March 1, 2021 | 2 comments



CEO of Clockwise here, so I'm biased! This game was built by our designers on a Hack Day. So, so fun and such a cool demonstration of how difficult internal scheduling can be.


Hey HN! I wanted to share a project I built at Clockwise with a huge assist from my colleague Amy He. I'm a UX designer by day and a game designer at most other times, so I figured y'all might enjoy some notes on what I learned making it!

WHAT WAS THIS GAME TRYING TO SIMULATE?

Clockwise automatically rearranges team calendars to create more Focus Time, time for Lunch, and soon other goals like helping you avoid long, uninterrupted nightmare blocks of meetings. It's an agentive system you have to be able to trust with your time, which creates a huge design challenge: deciding which preferences to expose to users (balancing complexity vs. power) and then making a million small decisions about how the AI should behave. This game puts you in the shoes of that AI.
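To make that concrete, here's a toy sketch of what a multi-goal day score could look like. The goal names mirror the ones above, but the data model, weights, and scoring rules are invented for illustration; this is not Clockwise's actual algorithm.

    from dataclasses import dataclass

    # One person's workday as half-hour slots, 9:00-17:00 = 16 slots.
    # True means a meeting occupies the slot. Everything else a real calendar
    # carries (titles, attendees, norms) is deliberately missing here.
    SLOTS_PER_DAY = 16
    LUNCH_SLOTS = (6, 7)  # 12:00-13:00 on this toy grid

    @dataclass
    class Goals:
        # Illustrative weights only; picking the real trade-offs is the hard part.
        focus_block: float = 3.0       # reward per free stretch of 2h or more
        free_lunch: float = 2.0        # reward if both lunch slots are empty
        long_run_penalty: float = 4.0  # penalty per run of 3+ back-to-back meetings

    def runs(busy: list[bool], value: bool) -> list[int]:
        """Lengths of consecutive runs of `value` across the day."""
        out, count = [], 0
        for slot in busy + [not value]:   # sentinel flushes the final run
            if slot == value:
                count += 1
            elif count:
                out.append(count)
                count = 0
        return out

    def score_day(busy: list[bool], g: Goals = Goals()) -> float:
        score = g.focus_block * sum(1 for r in runs(busy, False) if r >= 4)
        if not any(busy[s] for s in LUNCH_SLOTS):
            score += g.free_lunch
        score -= g.long_run_penalty * sum(1 for r in runs(busy, True) if r >= 3)
        return score

    # Six meetings scattered across the day vs. the same six consolidated:
    scattered    = [i in (1, 4, 6, 9, 12, 14) for i in range(SLOTS_PER_DAY)]
    consolidated = [i in (0, 1, 2, 3, 4, 14) for i in range(SLOTS_PER_DAY)]
    print(score_day(scattered), score_day(consolidated))   # 0.0 vs 1.0

Even in a toy like this the goals immediately start fighting each other (consolidating meetings earns Focus Time but risks the back-to-back penalty), which is exactly where those million small decisions live.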

(1) WRITING GOOD GOALS

The biggest thing I learned was how valuable it was to boil the overall Clockwise algorithm down to its most important goals.

For a game to be legible, the player must understand what it means to "win". Too many goals (hey there, Twilight Imperium) create a game where strategy is impossible, onboarding takes hours, and player agency is removed.

The interesting bit is that designers of AI/algorithm-powered systems also need legible goals, in this case the goals of the system. This is nothing new in the AI ethics & alignment literature, but I found it particularly clarifying to look at this problem through the lens of trying to design goals that could be understood at face value by a player with no other context.

I certainly didn't invent this method; I first encountered it in my agent-based modelling class in graduate design school with the polymathic Michael J. North. Before writing the first line of code for our simulations, we had to design simple games that we live-action roleplayed, forcing us into the role of the turtles & patches. You can get pretty far simulating complex systems with simple, rules-based improv games.
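The canonical illustration of that idea (not one from our class, just the textbook example) is Schelling's segregation model: each agent follows a single rule simple enough to roleplay, and a striking macro-level pattern emerges anyway. A minimal sketch:

    import random

    # Schelling's segregation model: two kinds of agents on a grid, and each
    # one follows a single rule ("move if fewer than 30% of my neighbours are
    # like me"). The rule is trivial to act out, yet strong clustering emerges.
    SIZE, EMPTY_SHARE, THRESHOLD = 20, 0.1, 0.3
    random.seed(0)

    def new_grid():
        n_empty = int(SIZE * SIZE * EMPTY_SHARE)
        n_each = (SIZE * SIZE - n_empty) // 2
        cells = [None] * n_empty + ["A"] * n_each + ["B"] * n_each
        cells += [None] * (SIZE * SIZE - len(cells))   # pad any rounding remainder
        random.shuffle(cells)
        return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

    def unhappy(grid, r, c):
        me = grid[r][c]
        if me is None:
            return False
        neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        same = sum(1 for n in neighbours if n == me)
        occupied = sum(1 for n in neighbours if n is not None)
        return occupied > 0 and same / occupied < THRESHOLD

    def step(grid):
        empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
        movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
        for r, c in movers:
            er, ec = empties.pop(random.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], None
            empties.append((r, c))
        return len(movers)

    grid = new_grid()
    for _ in range(50):
        if step(grid) == 0:
            break
    print("\n".join("".join(cell or "." for cell in row) for row in grid))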

The biggest thing we noticed while playtesting Defragged at Clockwise was how, in the simpler early versions, the natural winning strategies created exactly the outcomes we'd received the most user feedback about in the early days of Clockwise (for example, meetings piling up at the start or end of the day). A great example of spotting specific second-order consequences through a playful simulation.

In the magic circle of a game, people suddenly latch on to incentives the way a traditional neoclassical economist could only dream of!
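For instance, here's a toy rearranger (invented for illustration, not our production logic) whose only goal is to maximize uninterrupted free time for a single person. The winning move it finds is to pile every meeting into one corner of the day, the same second-order outcome the playtests surfaced:

    from itertools import combinations

    SLOTS_PER_DAY = 16  # half-hour slots, 9:00-17:00

    def free_time_score(busy_slots: frozenset[int]) -> int:
        """Reward long uninterrupted free stretches (square of each gap's length)."""
        score, run = 0, 0
        for i in range(SLOTS_PER_DAY):
            if i in busy_slots:
                score, run = score + run * run, 0
            else:
                run += 1
        return score + run * run

    def rearrange(n_meetings: int) -> tuple[int, ...]:
        """Brute-force the placement of n one-slot meetings that maximizes the score."""
        return max(
            combinations(range(SLOTS_PER_DAY), n_meetings),
            key=lambda placement: free_time_score(frozenset(placement)),
        )

    # With "maximize focus time" as the only goal, the optimum stacks all four
    # meetings back-to-back at 9:00: technically a win, practically a complaint.
    print(rearrange(4))   # (0, 1, 2, 3)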

(2) INTERNALIZING THE EXTENT OF ABSTRACTION

There are many ways to specify the goals of an algorithm, and limiting yourself to things a player might understand is a good way to force yourself to stay within what the designers of the algorithm are likely to be able to understand too.

Doing it in the context of a game also forces you to specify the goals in the frame of the simplified abstraction of the world that the AI will actually be operating on. For example:

* the ways in which the data itself is a proxy (e.g. calendars are a map of your meetings, but not a completely accurate map of how you actually use your time)

* the missing social nuance around the data that you can't know (e.g. a meeting called "DNS" embeds vastly different norms and expectations depending on the culture of the workplace, the society around it, and the relationships of the individuals)

* the simplifications that are forced by the limitations of technology (e.g. our algorithm doesn't understand the names of meetings)

... and many more

It's easy to know in theory that those are simplifications, but being faced with ONLY what the machine can see really forces you to confront what is actually possible for the system. There can be plenty of wishful and optimistic thinking in product design (and there should be!), but there's nothing more brutal than a playtest for creating the right conversations about which features to go after and which to leave by the wayside. A half-right (or even 90% right) algorithm can do a tremendous amount of harm.
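To make the "only what the machine can see" point concrete, here's a made-up sketch of the kind of stripped-down record an algorithm like ours might operate on (illustrative fields, not our actual data model):

    from dataclasses import dataclass
    from datetime import datetime

    # Roughly what the algorithm can "see": a made-up, simplified record, not
    # Clockwise's actual data model. Note what's absent: no title (the
    # algorithm doesn't understand meeting names), no sense of which meetings
    # are sacred, no idea how the time is really used.
    @dataclass
    class EventAsSeenByTheMachine:
        start: datetime
        end: datetime
        attendee_ids: tuple[str, ...]
        is_flexible: bool   # one of the few preferences a user states explicitly

    # Everything the calendar's owner knows that never reaches the machine:
    # that "DNS" carries very different norms in different workplaces, that
    # one 1:1 can slide by half an hour while another absolutely cannot, and
    # that the calendar is a map of meetings, not of how time is really spent.

    example = EventAsSeenByTheMachine(
        start=datetime(2021, 3, 1, 10, 0),
        end=datetime(2021, 3, 1, 10, 30),
        attendee_ids=("u_123", "u_456"),
        is_flexible=True,
    )
    print(example)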

But you can't have it be too brutal. One of the surprising insights was how much the tone of the game shaped the conversations it sparked. My co-designer Amy developed four monster characters to anchor the game, which layered a playful abstraction on top of the problem and created appropriate distance. That distance is important! It helps put us in a more generative space when faced with the implications of the game for our algorithm design decisions.

(3) "FEELING" THE MEDIUM YOU'RE WORKING WITH

Finally, I was excited to explore and feel what it means to optimize multiple calendars at the same time. It's the core thing that Clockwise does, but it doesn't have a clear analogue as an everyday task. What we know from our users is that people don't tend to think "Wow, that's hard"; they mostly think "That's just not something that is possible."

Kind of similar to how in the age of the horse you probably wouldn't go on a quick day trip 100 miles away. You don't go "That's hard, I hope someone will fix it". It just doesn't cross your mind.

As a designer of Clockwise, I could try to mess with the team's calendars, but creating a toy environment where I could see what "shape" the problem has, and get a personal, visceral feel for how it gets harder as you add more calendars, was incredibly valuable to me.
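To give a rough sense of that shape (my own back-of-the-envelope numbers, not anything measured): even in a toy model where each person has just a few movable meetings, the joint search space multiplies with every calendar you add.

    from math import comb, prod

    # Toy model: each person has `meetings` movable one-slot meetings and
    # `slots` half-hour slots to put them in. Ignoring every constraint,
    # the independent per-person choices multiply across calendars.
    def joint_arrangements(people: int, meetings: int = 4, slots: int = 16) -> int:
        per_person = comb(slots, meetings)   # ways to lay out one calendar
        return prod([per_person] * people)

    for people in (1, 2, 5, 10):
        print(f"{people:>2} calendars: {joint_arrangements(people):.2e} possible arrangements")

    # And this still ignores the coupling from shared meetings, which is what
    # makes the real problem hard rather than merely large.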

I like to think that any maker should be thoroughly conscious of the material they're working with. A woodworker should know how the object they craft will warp, expand, and contract with the seasons. A chef is constantly tasting their creations.

How might we build that same intuition with complex systems?



