How Thrilling: Extending The Body

‘How Thrilling’ uses the familiar song and dance of Michael Jackson’s “Thriller” to explore how technology can extend the body into many disparate spaces, through many representations, and for many audiences. Through this lens, the project examines how technology standardizes the body.

The project is composed of four feeds illustrated in the image above.

The performing body is presented through two primary representations, in feeds 1 and 2 respectively: an abstracted, stick-figure-like skeleton normalized to a single set of proportions, and an unmodified in-situ RGB image feed. The abstraction encourages unselfconsciousness in the performer while highlighting the irregularity of their motion against the precise repetition of Michael Jackson’s looping skeleton. In juxtaposition, the RGB feed, seen only by an audience in an entirely separate space without the accompanying music, highlights the nonconformity of bodies to any form of standardization.
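The normalization of the skeleton to a single set of proportions might be sketched as follows. This is a hypothetical illustration, not the project’s actual tracking pipeline: the joint names, bone list, and reference lengths are all assumptions.

```python
# Sketch: rescale each bone of a tracked 2D skeleton to a fixed reference
# length, walking outward from the root joint. Joint names and reference
# lengths are illustrative, not the project's actual data.

def normalize_skeleton(joints, bones, reference_lengths):
    root = bones[0][0]
    normalized = {root: joints[root]}  # keep the root joint fixed
    for parent, child in bones:        # bones ordered root-outward
        px, py = normalized[parent]
        ox, oy = joints[parent]
        jx, jy = joints[child]
        dx, dy = jx - ox, jy - oy
        length = (dx * dx + dy * dy) ** 0.5 or 1.0
        scale = reference_lengths[(parent, child)] / length
        normalized[child] = (px + dx * scale, py + dy * scale)
    return normalized

joints = {"hip": (0.0, 0.0), "neck": (0.1, 2.2), "head": (0.1, 3.0)}
bones = [("hip", "neck"), ("neck", "head")]
reference = {("hip", "neck"): 2.0, ("neck", "head"): 0.5}
normalized = normalize_skeleton(joints, bones, reference)
```

However the proportions are computed in practice, the effect is the same: every performer’s skeleton is redrawn with identical limb lengths, which is exactly the standardization the project interrogates.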

If the performing body closely matches Michael Jackson’s moves, or a set time period expires (whichever happens first), the front projection for the performer switches to reveal a live RGB image feed of the audience watching the performer’s RGB image feed. For a brief moment, the two can communicate across these displays (much like Skype or FaceTime), and the audience realizes they are watching not a recording but a live performance. Then, without warning, the projection for the performer reverts to the abstracted skeletons.

Two additional feeds provide context within the project. First, a constant silent loop of the original Thriller video excerpt gives visual context to the audience watching the RGB image of the performer: they might recognize the actions of the performer in the Michael Jackson video and vice versa. The last feed visualizes the motion trails of the performing body. Without the skeleton, it draws attention to the imprecision of our actions despite our attempts at repetition.

Desired locations:

  • Feed 1 (Performer + skeleton projection + audio of Thriller song): first floor lobby
  • Feed 2 (RGB image of performer): somewhere on the fourth floor, not too close to the elevators
  • Feed 3 (Original Thriller video loop): somewhere closer to the elevators
  • Feed 4 (Action trails of performer): somewhere on the fourth floor, proximate but not adjacent to the other screens

Making ‘Making Legible’ Legible: Part 3

Since processing the text documents, I’ve been refining the goal of “finding latent (content and contextual) relationships within a large corpus of texts”. As the text remains a work in progress, I want to focus on how it has evolved and continues to evolve. A genealogical approach to text-relationships can be used to identify what pieces have been disregarded or ignored (and thus require further inspection) or to identify the dominant tendencies and trains of thought.

An interesting writing tool for collaboration and version control: http://docs.withdraft.com

Beyond looking at the past, I think this project can provide a foundation for developing a writing tool that moves beyond version control or collaborative commenting. Version control tends to provide a fine-grained, binary approach: it compares two things and extracts the insertions or deletions. While this is helpful in an isolated scenario, I’m interested in broader developments across multiple objects over many time periods. Alternatively, version control also provides a high-level view indicating change-points over a long time, but those points of change are overly simplified, often represented by just a single dot. Without context, or without knowing at what specific time a change was made, this larger overview provides little information beyond the quantity and frequency of changes. Through a genealogical and contextual approach to analyzing an existing body of text, I’m hoping to identify what sort of relationships could inform the writing and editing process.
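The fine-grained, binary comparison I’m describing is easy to demonstrate with Python’s standard difflib. This is just an illustration of the limitation, not part of the project’s own code: two versions go in, a flat list of insertions and deletions comes out, with no sense of how the text evolved across many versions.

```python
# Illustrating the fine-grained, binary diff: compare exactly two
# versions and extract insertions/deletions, nothing more.
import difflib

old = "The structure of data has consequences for algorithms.".split()
new = ("The structure of data has profound consequences "
      "for the design of algorithms.").split()

for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
    if op == "insert":
        print("inserted:", " ".join(new[j1:j2]))
    elif op == "delete":
        print("deleted:", " ".join(old[i1:i2]))
    elif op == "replace":
        print("replaced:", " ".join(old[i1:i2]), "->", " ".join(new[j1:j2]))
```

Useful in isolation, but the output is a pairwise delta with no lineage, which is precisely what a genealogical view would add.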

With all the data now added to the database, I’ve been exploring sentence similarity. The diagram below shows the process I’ve gone through up to this point.

Once I’ve computed a two-dimensional array mapping the similarity of all sentences to each other, I plan on using that information to create a visual interface for exploring those relationships. The wireframes below are a rough sketch of what form this might take.
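The shape of that two-dimensional array can be sketched with a minimal pairwise comparison. This is only an illustration of the structure: the bag-of-words cosine measure here is an assumption, and the actual similarity metric for the project may differ.

```python
# Minimal sketch of an all-pairs sentence-similarity matrix using
# bag-of-words cosine similarity. The metric is illustrative; the
# point is the N x N structure of the result.
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

sentences = [
    "The structure of data has profound consequences.",
    "Data structure shapes the design of algorithms.",
    "Thriller is a song by Michael Jackson.",
]
# matrix[i][j] = similarity of sentence i to sentence j
matrix = [[cosine(s, t) for t in sentences] for s in sentences]
```

The diagonal is always 1.0 (every sentence matches itself), and the matrix is symmetric, which the interface could exploit by rendering only one triangle.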

More Controllers for Pong

The latest controller for Pong uses a cardboard plane attached to a potentiometer to control both the speed and direction of the virtual paddle. The rotation of the potentiometer is divided in half to control direction, and within each half, speed is modulated.
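The split-range mapping can be sketched as follows. The 10-bit ADC range (0–1023) and the maximum speed are assumptions for illustration, not the project’s actual values; one half of the pot’s travel drives the paddle one way, the other half the other way, with speed growing toward each extreme.

```python
# Sketch of the split-range mapping: readings below center move the
# paddle one direction, readings above center the other, with speed
# proportional to distance from center. ADC range and max speed are
# assumed values, not taken from the project.
ADC_MAX = 1023
MAX_SPEED = 5.0

def paddle_velocity(reading):
    center = ADC_MAX / 2
    direction = -1 if reading < center else 1
    speed = abs(reading - center) / center * MAX_SPEED
    return direction * speed
```

At either end of the pot’s travel the paddle moves at full speed, and near the center it slows to a stop, which gives a single knob control over both direction and speed.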

Process

Because the ESP chip is somewhat expensive (relatively), I invested in prototyping my circuit on breadboards and milled boards onto which the chip could plug in temporarily.

Breadboard Prototype
Milled prototype with header pins for the ESP chip

In order to prototype directly with the chip rather than a breakout board, I needed a programming jig for connecting via USB and closing certain routes on particular pins. When programming…

  • GPIO0 needs to be connected to GND. A button was held while uploading code.
  • Reset needs to be pulsed low. A button was pressed initially before programming.
  • Tx and Rx connections between the ESP chip and the FTDI cable were accommodated with header pins.
Completed board
Schematic

The Board

The ESP chip draws a significant amount of power; however, conflicting advice online made it difficult to size capacitors. That said, I found these tips to be the most helpful. While they recommend a very large capacitor (470 uF) across Vcc to Gnd, the Adafruit breakout uses only 10 uF. I included two 470 uF capacitors, but my next iteration would explore smaller sizes. Regardless, a 0.1 uF decoupling capacitor across the ESP8266 Vcc and Gnd pins, placed very close to the chip, was a critical addition.

Making ‘Making Legible’ Legible: Part 2

The structure of data has profound consequences for the design of algorithms.
– “Beyond the Mirror World: Privacy and the Representational Practices of Computing”, Philip E. Agre

To atomize the entire corpus of text, the server processes each document upload to create derivative objects: an upload object, a document object, sentence objects, and word objects. By disassociating the constituent parts of the document, they can then be analyzed and form relationships outside that original container. I’ll discuss those methods of analysis in a later blog post. The focus of this post is how the text is atomized and stored because, as Agre pointed out, the organization of data fundamentally underpins the possibility of subsequent analysis.

The individual objects are constructed through a series of callback functions which assign properties. These functions alternate between creating an object with its individual or inherited properties (e.g., initializing a document object with a unique ID, shared timestamp, and content string) and updating said object with relational properties (e.g., an array of the IDs of all words contained within a document). By necessity, some of these properties can only be added once other objects are processed. The spreadsheet below shows the list of properties and how they are derived.

Properties for each object type
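The atomization step can be sketched roughly as below. This is a hypothetical reconstruction, not the project’s actual code (which is linked at the end of the post): the property names and ID scheme are assumptions, and the naive `". "` sentence split stands in for real sentence segmentation.

```python
# Hypothetical sketch of atomization: one upload becomes a document
# object plus sentence and word objects, each with a unique ID and
# back-references to its container. Property names are illustrative.
import time
import uuid

def atomize(content):
    timestamp = time.time()
    doc_id = str(uuid.uuid4())
    sentences, words = [], []
    for s in content.split(". "):          # naive segmentation for the sketch
        s_id = str(uuid.uuid4())
        word_ids = []
        for w in s.split():
            w_id = str(uuid.uuid4())
            words.append({"id": w_id, "text": w, "sentence": s_id})
            word_ids.append(w_id)
        sentences.append({"id": s_id, "text": s,
                          "document": doc_id, "words": word_ids})
    # Relational properties on the document are filled in only after
    # the child objects exist, mirroring the two-phase callbacks above.
    document = {"id": doc_id, "timestamp": timestamp, "content": content,
                "sentences": [s["id"] for s in sentences],
                "words": [w["id"] for w in words]}
    return document, sentences, words
```

The two-phase shape is the important part: objects are created first with their own properties, and the relational arrays are attached afterward, once the IDs they refer to exist.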

Additionally, as discussed in the previous post, adjacency (or context) is a significant relationship. After the words or sentences are initialized with their unique IDs, the callback function reiterates over them to add a property holding the ID of the adjacent object.
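That second pass can be sketched as a single zip over neighboring objects. The `next`/`previous` property names are assumptions for illustration, not the project’s actual schema.

```python
# Sketch of the adjacency pass: once every object has an ID, a second
# iteration links each object to its neighbors. Property names
# ("next"/"previous") are illustrative.
def link_adjacent(objects):
    for prev, nxt in zip(objects, objects[1:]):
        prev["next"] = nxt["id"]
        nxt["previous"] = prev["id"]
    return objects

sentences = [{"id": "s1"}, {"id": "s2"}, {"id": "s3"}]
link_adjacent(sentences)
```

The pass necessarily runs after initialization, since the adjacent object’s ID doesn’t exist until that object has been created.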

At the sentence level, because the original documents were written in markdown, special characters had to be identified, stored as properties, and then stripped from the string. While the “meaning” and usage of these characters is not consistent over time or across documents, they can later be used to identify and extract chunks from a document.
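A minimal sketch of that identify-store-strip step is below. Only a few markers are handled, and the marker names are my own; the real set would depend on what actually appears in the corpus.

```python
# Sketch: detect leading markdown markers on a line, record them as
# properties, and strip them from the stored string. The marker set
# and property names are illustrative assumptions.
import re

MARKERS = {
    "heading": r"^#{1,6}\s+",
    "bullet": r"^[-*]\s+",
    "quote": r"^>\s+",
}

def strip_markdown(line):
    props = {}
    for name, pattern in MARKERS.items():
        if re.match(pattern, line):
            props[name] = True
            line = re.sub(pattern, "", line)
    return line, props

text, props = strip_markdown("## A heading in markdown")
```

Because the markers survive as properties, a later pass can still reassemble chunks (all lines under a heading, say) even though the stored sentence strings are clean.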

Below is an example excerpt of a processed output, from which the individual objects are added to the database. The full code for processing the document upload can be found here.