Haunted Dollhouse Planning

At this point I have gotten the basic Google Home API working on my Raspberry Pi, started manipulating the methods of interaction (changing the trigger word, changing the voices), and started building a specific Google Home Action. Still to do is building the haunted dollhouse enclosure. Since the first interaction I want users to have with the haunted doll is knocking on the dollhouse door, I wanted to start by setting up a basic knock-and-open circuit.

The piezo would be attached to the inside of the door, with the motor above or below the door rotating an axle that controls the door hinges. I am not sure whether a servo is best for this or whether a different motor would suit the application better. The piezo would detect the knock (any kind of knock should do) and trigger the motor to open the door.
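To make the plan concrete, here is a rough sketch of the knock-then-open decision logic in plain C++, kept separate from the Arduino I/O. The threshold and the open angle are placeholder assumptions I would tune against the real piezo and door, not measured values.

```cpp
#include <cassert>

// A piezo knock shows up as a brief voltage spike on an analog pin.
// Treating any reading above a noise floor as a knock keeps the
// "any kind of knock should do" behavior.
const int KNOCK_THRESHOLD = 100;  // analogRead units (0-1023), placeholder

bool isKnock(int piezoReading) {
  return piezoReading > KNOCK_THRESHOLD;
}

// Given the current door angle, return the target servo angle after a
// possible knock: swing open on a knock, otherwise stay put.
int doorAngleAfter(int currentAngle, int piezoReading) {
  if (isKnock(piezoReading)) {
    return 90;  // placeholder "open" angle
  }
  return currentAngle;
}
```

On the device this would run in `loop()`, with the returned angle written to the door servo.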

Beyond this very basic interaction I have a few different ideas. The aesthetics and interaction in my mind are beginning to push into Zoltar territory, so I am thinking of adding a thermal printer that prints out whatever fortune or advice the doll gives. I would also like to replace the trigger word with the knock, i.e. connect the Arduino to the Pi so that a knock on the door triggers the Pi to start listening for a question. Finally, I would like to incorporate some movement in the doll, which was the biggest piece of feedback I received at user testing. An easy start would be to put two LEDs in the eyes so that they light up; from there, with time and resources, I could add further movements such as a head or arm gesture.

Haunted Dolls and Things

Recently I’ve been thinking mostly – but somewhat disjointedly – about haunted dolls in relation to AI and various societal-level fears. The doll as feminine artifact rendered creepy by autonomous movement and malicious intent draws parallels for me to visions of consumer-level AI – which is frequently embodied, feminized labor – disrupting human civilization, both on the level of job displacement and as genocidal overlords. Many questions arise. At what depth of the uncanny valley does misogyny lie? On what level are we as a society afraid that factory workers and truck drivers are being emasculated by machines, and at what level are we genuinely concerned about joblessness and the negative externalities of creative destruction? How much are baseless fears of AI – akin to fear of the paranormal and unknown – tangled in with worthwhile critiques and concerns, and how should AI policy and practice be formulated with this intermingling in mind?

This is a puzzle I want to play with, so I’ve been wanting to make some kind of “haunted doll”: put inside it a camera with face tracking and servos for head movement to do the kind of “Scooby Doo” effect, then add a microphone and speaker connected to a Raspberry Pi running the Amazon Alexa API. Voila, a creepy Amazon Echo.

Hi Five Machine

For our midterm Ridwan and I thought it would be really funny to make a machine that claims to give a high five but actually fakes you out every time.

Aesthetically it reminds me a bit of the toilet hand from Zelda.

Inside it has an Arduino, two metal-gear servos on a pan/tilt bracket (meant for camera tracking) controlling the hand, an HC-SR04 distance sensor, and an LCD screen. Ridwan found a box template for a laser cutter and we laser cut the enclosure out of cardboard, making the necessary modifications for our sensor, hand, and screen.

We had quite a bit of trouble with powering servos and actually destroyed two servos by putting too high a voltage across them.
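The fake-out behavior itself is simple enough to sketch as plain C++ logic. The distance threshold and servo angles here are placeholders, not the values we actually used: as a hand approaches, the machine yanks its own hand out of reach instead of meeting the high five.

```cpp
#include <cassert>

const int APPROACH_CM = 20;  // placeholder distance at which we bail

// Returns the bracket's target angle for a given distance reading:
// hold an inviting pose until the person's hand is close, then dodge.
int handAngleFor(int distanceCm) {
  if (distanceCm < APPROACH_CM) {
    return 150;  // swing away: the fake-out
  }
  return 90;     // neutral "ready to high five" pose
}
```

On the real build this ran against the HC-SR04 reading each loop, with the angle written to the pan servo.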


Making a Controller for My P5 Game

(Note: all the code for these projects can be viewed on my GitHub here, though I will make specific references to scripts below.)

One of my P5 projects that I am happiest with is an interactive poetry/anti-game game that involves corralling a ball with the cursor to reveal a narrative. I decided I wanted to replace the mouse and cursor with a joystick, and to use Node, Express, and a WebSocket to interface between P5 and the Arduino.

I started out by adapting the SerialInOut example code for Arduino to work with a joystick, since the example uses a potentiometer and a joystick is just two potentiometers and a switch. Making those changes, I wrote this code for outputting the joystick readings to serial. I had some trouble wiring the joystick initially, but Aiden helped me out. I also read on a now-forgotten Stack Exchange post that INPUT_PULLUP is necessary when setting the pinMode on digital inputs that are going over serial, but I didn’t understand why that is the case.
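A likely explanation for the INPUT_PULLUP advice: it enables the chip's internal pull-up resistor, so the switch pin reads a steady HIGH until the button pulls it to ground; without it an unconnected pin floats and sends noise over serial. As for the serial message itself, here is a small plain-C++ sketch of a joystick message formatter. The comma-separated format is my assumption for illustration, not necessarily the one in the linked script.

```cpp
#include <cassert>
#include <string>
#include <sstream>

// Format one joystick sample as a serial line: "x,y,pressed".
// x and y are raw analogRead values (0-1023); pressed is the switch
// state (note that with INPUT_PULLUP the raw pin reads LOW when pressed,
// so the Arduino side inverts it before calling this).
std::string joystickMessage(int x, int y, bool pressed) {
  std::ostringstream out;
  out << x << "," << y << "," << (pressed ? 1 : 0);
  return out.str();
}
```

The browser side then just splits on commas to recover the two axes and the button.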

Before trying to figure out how to integrate the serial readings into my game/poem script, I connected it to a very quick example here, which you can see in the first gif above. I’d used Node and WebSockets before but didn’t remember them very well, so it took me a pretty long time to get off the ground. I made use of some code from Mark Hellar from my time at Grey Area, which is why I ended up using Express.

Once I confirmed the example worked, I went about redoing my P5 project to take the serial input. It didn’t take very long; I just replaced the mouseX and mousePressed variables with the respective inputs from the joystick. I intentionally made it so that you can’t see the player’s “cursor”: I wanted the relationship between what the player does and the motion of the ball to be discovered rather than immediately clear, so that you have to play around with the joystick before you figure out what is going on.

Breaking A Servo?

I wanted to make a project that would display a dubiously motivational message each time I sat down at my home office desk. To start, I used an HC-SR04 ultrasonic distance sensor and attached a pin with a message (in this case a pin I got at the art book fair) to a little SG90 micro servo. I initially tried to map the distance readings from the HC-SR04 to the angle of the servo, but the readings were too noisy and the servo jittered around (you can see that code commented out below).
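For reference, Arduino's map() is plain linear interpolation, and one common fix for this kind of jitter (offered here as an assumption, not something I ran) is averaging the last few readings before mapping, so a single noisy sample can't yank the servo around:

```cpp
#include <cassert>

// Arduino's map(): linear interpolation with integer math.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Average the last n readings, then map the average from 0-30 cm to
// 0-179 degrees (the same ranges as the commented-out map() call).
int smoothedAngle(const int readings[], int n) {
  long sum = 0;
  for (int i = 0; i < n; i++) sum += readings[i];
  long avg = sum / n;
  return (int)mapRange(avg, 0, 30, 0, 179);
}
```

On the Arduino this would mean keeping a small circular buffer of recent HC-SR04 readings and writing the smoothed angle each loop.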

After that I tried an if/then statement, where the servo would change position whenever the distance sensor recorded a short distance. However, the servo didn’t change position even as I watched the distance drop below the threshold on the serial monitor. Instead it stayed in one place, making a really awful noise and getting hotter until it stopped moving at all. Afterward it wouldn’t move even for simpler sketches like the servo sweep example code, leading me to think it was broken. But when I tried it again after it had cooled down it worked fine, just still not for the application I wanted.

You can see the code here:

#include <Servo.h>

Servo myServo;

const int trigPin = 13;
const int echoPin = 12;
const int servoPin = 6;

long duration;
int distance;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  myServo.attach(servoPin);
}

void loop() {
  // Send a 10-microsecond trigger pulse to the HC-SR04
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Time the echo and convert to centimeters
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2;
  Serial.print("Distance: ");
  Serial.println(distance);

  // Swing the pin around when something is close
  if (distance < 10) {
    myServo.write(179);
  } else {
    myServo.write(0);
  }

  // int servoAngle = map(distance, 0, 30, 0, 179);
  // myServo.write(servoAngle);

  delay(100);
}


Subway Turnstile

Since moving to New York I’ve done it again and again: swiped my MetroCard through the reader only to slam at full walking speed into an unmoving turnstile. At first I thought the problem was just me; I was new to the subway and hadn’t gotten the hang of swiping smoothly enough to avoid “PLEASE SWIPE AGAIN”. But then I noticed that I was not alone in my turnstile mistakes. Not only were other people getting the “PLEASE SWIPE AGAIN” message, they were walking straight into the turnstile after doing it. This led me to two conclusions:

a) Swiping is unreliable

b) There is no feedback when the swipe fails; the turnstile makes the same ping for a successful and an unsuccessful swipe

The clear advantage of swiping is that it is a fluid forward motion that can be combined with walking to speed the flow of foot traffic through the turnstile. Obviously this fluidity is interrupted by mis-swipes, completely reversing the advantage some small percentage of the time and making subway riders more cautious, eroding a bit of the advantage overall. Other metro systems I’ve ridden (notably BART and the London Tube) use tapping a preloaded card as the gateway mechanism, which is more reliable but less fluid: tapping usually requires a quick pause in forward movement as the card hovers over the tap area. I’ve heard that the MTA plans to move to an all-tapping system in the next five years, with the intention of including a ticket app and smartphone tapping (furthering the smartphone’s integration of all personal items into its being). The idea of smartphone tapping puts me off; there is something democratic and status-leveling about the MetroCard that is lost once tappable cards are used alongside smartphones. Beyond that, smartphone tapping for metro access contributes to the power creep of the smartphone toward being the sole tool of world navigation. I don’t want to live in a world in which having a smartphone is a necessity for mundane tasks like commuting.

As for the feedback, how hard could it be for the turnstile to make a different kind of beep when the swipe is unsuccessful? The mini LCD screen displays the “PLEASE SWIPE AGAIN” message, but by the time it does I am already half a stride beyond view of the screen.

Beyond improving the reliability and fluidity of the turnstile – improvements which mainly allow travelers to notice its existence less – how can the experience of the turnstile be changed to bring delight and joy into our daily lives? Is this possible, or are our lives only made better by the turnstile’s cognitive evanescence? In researching a better turnstile experience I came across James Murphy’s Subway Symphony project, which feels like a step in the right direction but misplaced in terms of effort. A difficulty is that any variation between turnstiles can decrease usability by sowing confusion among riders who are not in the know. Something like the Masstransiscope, or a spur-of-the-moment melody driven by different notes attached to the turnstiles in a single station, would be disorienting at first and wearisome over time. I can’t think of anything that really fits with the turnstile interaction. Can you?

Servo Switch

Pen Switch

My first switch was very simple. I wanted to incorporate elements outside of the traditional breadboard circuit, so I used the metal part of a pen to connect two circuit elements, completing the circuit and lighting up an LED.

Servo Switch

I really wanted to get my hands into programming the Arduino, so next I made a switch that involves stringing a resistor onto the arm of a small servo motor and using the Arduino to control the motor so that it completes a circuit and lights up an LED. It’s a really bad switch on its own because it just cycles on and off, so I added a button that stops the motion of the servo while it is pressed (kind of jankily, by mixing up the ground and the communication signals the Arduino sends to the servo). It starts moving again as soon as you release the button, so if you want the LED to stay lit you have to keep your finger on it. To move the servo I just used the Servo Sweep example code in the Arduino IDE.
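The sweep-and-hold behavior can be sketched as plain C++ logic (the one-degree step and the 0–179 limits are placeholders): each tick the servo advances one step of its sweep unless the button is held, in which case the angle freezes so the resistor keeps bridging the circuit and the LED stays lit.

```cpp
#include <cassert>

struct SweepState {
  int angle;      // current servo angle, 0-179
  int direction;  // +1 sweeping up, -1 sweeping down
};

// Advance the sweep by one step, reversing at the ends; holding the
// button freezes the servo in place.
SweepState nextTick(SweepState s, bool buttonHeld) {
  if (buttonHeld) return s;  // hold position while pressed
  s.angle += s.direction;
  if (s.angle >= 179) {
    s.angle = 179;
    s.direction = -1;
  } else if (s.angle <= 0) {
    s.angle = 0;
    s.direction = +1;
  }
  return s;
}
```

A cleaner version of my button hack would be exactly this: read the button each loop and simply skip the `myServo.write()` while it is held, rather than interrupting the servo's signal wiring.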

What Is Interaction?

Within ten minutes of using any screened device my left eye begins to strain and ache. My wrists hurt almost constantly, and I frequently need to wear wrist splints when typing to avoid the pain. When I go to sleep at night I have a dull ache in my neck and between my eyes, and when I wake up my first thought is to check my phone. Before thinking about coffee, before thinking about food, or work, or my day ahead, or my partner lying beside me, a light goes off in my brain, a synapse fires, indicating that it is seeking a dopamine reward for checking Twitter. I have calluses on the fingers of my right hand from holding my phone.

Is this interaction? Bad interaction? Parasitic assimilation? These interaction stress points within my body – eyes, wrist, neck, brain – appear to be failing, or at least warping to accommodate the devices I use every day. My phone and my eyes might be in daily conversation, but their relationship is an unhealthy one, with an uneven power structure in which my phone does all the talking. Even with my Alexa robot I feel on unequal footing. While the physical stress points I experience with my phone or computer are absent when interacting with it, I find that regardless of where I am, thoughts now occasionally come to me in the form of “Alexa, what is ___”.

Interaction conceptually stretches across quite a bit of ground beyond the relationship between a human and its electrically powered, glass-encased computational tools. In hewing that ground down to something precise, a natural question arises: is some level of consciousness necessary for interaction? Could two comets colliding in deep space be said to be participating in an interaction? I would tend to say no, and my inclination in saying no is that there is no information meaningful to either party being exchanged. This is Chris Crawford’s conversation metaphor, essentialized to information exchange. Two cats fighting in an alley: definite interaction, with meaningful information being exchanged back and forth. Me staring at a painting and spurred to contemplate all the bad art in the world: there is information being consumed and processed but not exchanged, so I would incline to say no, not interactive. Two computers pinging each other over a network exists in a debatable grey area for me. Perhaps I’ve anthropomorphized the computer a bit too much out of familiarity; it seems more meaningful for a computer to receive a ping that instructs it to emit a ping than it is for a comet to lose some of its mass in a collision with another comet. Perhaps it is that computers are human-made tools, so any communication between computers is more grandly thought of as their human creators communicating across time. Perhaps it is the reciprocity: the computer takes in information and then interprets that information as a call to emit different information. That is little different from a human and a computer interacting, though the human adds an element of unanticipatable information to the exchange.

So then interaction is the exchange of information between two entities aware of the exchange, and some definition of bad interaction could be one that causes friction and damage to one or both of those entities. What then is good interaction? Just the opposite of bad, pain-free interaction? I would add another condition, that of “mutually desirable results”. Rather than try to define that concept generally, let’s just assume that it will make itself apparent in any given interactive situation. Moreover, the interaction should be didactically simple: the method of information exchange should be easy to learn if it is not already known. So “good” interaction meets these three criteria: it causes no harm in the long or short run to either entity, it achieves mutually desirable results, and it is didactically simple. Easy!

To end I have one fairly simple digital work that I thought of that I saw in San Francisco at Grey Area that is intentionally non-interactive: http://grayarea.org/initiative/cache-money-teal-back-wage-gap/