Stacked Still Lifes (068)

Stacked Still Lifes places images at the locations where they were taken. As more snapshots are captured and placed in the scene, previous images become new again as they are incorporated into each subsequent photo.

Similar to Tile Swap (005), the map is defined by edges: the edges of the snapshots within the scene distinguish them from the environment, but also compound within each subsequent photo. Thus, it is the background movement that distinguishes what is “live” from what was previously recorded.

In using the phone’s position to place an image in space, the map focuses on recording artifacts rather than manipulating them. Similar to Routine Tracking (060), a history of place is made evident by tracing people’s actions over time.

Technicals

Working with ARKit in Swift, the map is based on a demo from Apple’s 2017 developer conference.

On tap, a new plane is added to the scene slightly in front of the camera’s position. Using snapshot(), the current scene, AR content included, is rendered as a new image, which is applied as a material to the plane.
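A minimal sketch of that tap handler, assuming an ARSCNView named sceneView; the plane dimensions and camera offset are placeholders, not the demo’s values:

```swift
import ARKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    @objc func handleTap(_ sender: UITapGestureRecognizer) {
        guard let frame = sceneView.session.currentFrame else { return }

        // snapshot() renders the current view as a UIImage, AR content included.
        let image = sceneView.snapshot()

        // A plane textured with the snapshot (size is a placeholder).
        let plane = SCNPlane(width: 0.3, height: 0.4)
        plane.firstMaterial?.diffuse.contents = image
        let node = SCNNode(geometry: plane)

        // Position the plane slightly in front of the camera.
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.2
        node.simdTransform = frame.camera.transform * translation

        sceneView.scene.rootNode.addChildNode(node)
    }
}
```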

Subway Stutter Cutting (067)

Subway Stutter Cutting explores movement and repetition within the context of a subway station. Every second, the video loops back half a second in time, stuttering through the footage.
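The timing amounts to a simple cut list. A minimal sketch of that arithmetic (illustrative only; the actual edit was assembled by hand, as noted in the Technicals below):

```swift
// Each output segment plays one second of source footage; the next
// segment rewinds half a second before continuing.
struct Segment {
    let sourceStart: Double
    let sourceEnd: Double
}

func stutterCuts(duration: Double,
                 segment: Double = 1.0,
                 rewind: Double = 0.5) -> [Segment] {
    var cuts: [Segment] = []
    var start = 0.0
    while start + segment <= duration {
        cuts.append(Segment(sourceStart: start, sourceEnd: start + segment))
        start += segment - rewind  // advance, minus the loop-back
    }
    return cuts
}

// A 10-second clip yields 19 one-second segments, so the output runs
// roughly twice the length of the source.
print(stutterCuts(duration: 10).count)  // 19
```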

The repetition highlights unnoticed subtleties. The slight motions of shifting from one foot to another, glancing down at the platform, or crossing one’s hands are interrupted and then repeated.

The movement of the train in the background is registered against the stationary columns in the foreground. Without the movement of people against the whizzing train, would stutter-cutting even be evident?

Looping and repetition make the bygone quality of the footage obvious. In seeing things twice, we know that at least part of the video stream isn’t live. The stutter creates a new time context, in which the scene plays out over twice the length of the original recording.

Technicals

The video was filmed on a Nikon D610 at the W 4th Subway Station. It was cut together using Adobe Premiere.

Compare to:
  • Routine Tracking (060): if only the people who barely moved were tracked, would we even register time or change?

Typical Size, Typical Shape (066)

Typical Size, Typical Shape compares the clusters that result when the same dataset is processed by three different algorithms.

Shown in three rows, all the blocks in the Bronx are initially sorted by size. On clicking any block, each row is re-sorted by cluster and centered on the selected block. Across rows, blocks in the “same” clusters can be compared. In one algorithm, a block may be part of a cluster with only five other blocks, while in another, additional blocks may be deemed similar.

Technicals

The differing clusters result from K-Means Clustering, Gaussian Mixture Modeling, and Agglomerative Clustering. Refer to Comparing Clusters (061) for a description of each algorithm.
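An illustrative sketch of the cross-row comparison (not the map’s d3 code; the cluster labels here are hypothetical): given each algorithm’s labels for the same blocks, find the blocks that share a cluster with a selected block.

```swift
// Cluster label per block, one array per algorithm (hypothetical values).
let kMeans        = [0, 0, 1, 1, 2, 0]
let gaussian      = [1, 0, 1, 1, 2, 2]
let agglomerative = [0, 0, 0, 1, 2, 0]

// Blocks that fall in the same cluster as the selected block.
func clusterMates(of selected: Int, in labels: [Int]) -> [Int] {
    labels.indices.filter { labels[$0] == labels[selected] && $0 != selected }
}

for labels in [kMeans, gaussian, agglomerative] {
    print(clusterMates(of: 0, in: labels))
}
// [1, 5], [2, 3], [1, 2, 5]: the same block belongs to differently
// sized clusters under each algorithm.
```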

The map is built using d3 on three separate canvases, one per algorithm. This approach has inherent limitations, such as not being able to draw elements across rows, but it was the quickest way to sort each row’s clusters independently and scroll to the appropriate location on click.

Next Steps
  • Build the map on a single canvas; when focused on a single cluster, draw connecting lines across the rows for each block in the selected cluster

The ‘Oriented’ Self (065)

The ‘Oriented’ Self engages a familiar context through an unexpected orientation. Detected bodies are turned upside down and projected onto the ceiling.

In changing one’s orientation within an environment, movement slows and becomes more deliberate. The mental mapping of action to representation requires an unexpected effort: “If I put my hand here, it corresponds to this. Do I need to move it up? Down? Left? Okay, now it’s here.”

By changing the relationship of body and context, users explore the space in new ways: hanging from the lights, moonwalking, trying to touch elements they wouldn’t typically be able to reach.

Technicals

The map uses a Kinect camera to detect a body in space. When initialized, the camera captures a static image of the context and places it as the background of a website. When a body is detected, it is isolated, rotated, and updated in real time.
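A minimal sketch of the rotate-and-composite step, with CGImage standing in for the Kinect feed and the web page (the original pipeline isn’t reproduced here):

```swift
import CoreGraphics

// Draw the static context capture, then the isolated body rotated 180°
// about its own center so it reads as hanging from the ceiling.
func composite(background: CGImage, body: CGImage, in rect: CGRect) -> CGImage? {
    guard let ctx = CGContext(data: nil,
                              width: background.width,
                              height: background.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    ctx.draw(background, in: CGRect(x: 0, y: 0,
                                    width: background.width,
                                    height: background.height))

    ctx.translateBy(x: rect.midX, y: rect.midY)
    ctx.rotate(by: .pi)
    ctx.translateBy(x: -rect.midX, y: -rect.midY)
    ctx.draw(body, in: rect)

    return ctx.makeImage()
}
```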

Next Steps
  • Rather than orienting the body to the ceiling, how would the interaction be different if the digital representation of the environment were flipped?
  • Use a live background that shows the detected body twice: oriented to the ceiling and ‘normally’.

Body Swappers (064)

Body Swappers superimposes the representation of one person onto the representation of another. Only one person sees the resulting composition and can control the on-screen position of the elsewhere-body.

The bounding box of the elsewhere-body is identified from a live video feed using object detection. The two distinct contexts for the experience thus collide and are made explicit through the box’s edges. The contexts also create different performances: the controlling body is in a classroom, responding, while the other is waiting for an elevator.

In positioning the elsewhere-body, the “controlling” body obscures its own actions. However, in re-positioning the elsewhere-body’s actions, the controlling body gives them new meaning and context.

Technicals

Through object detection on a live camera feed, as detailed in Live Body Context (063), a body in one space is identified and superimposed on a mirror image of another space.
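An illustrative sketch of that superimposition (function and parameter names are assumptions; CGImage stands in for the two live feeds):

```swift
import CoreGraphics

// Crop the detected bounding box from the elsewhere feed and draw it over
// a horizontally mirrored frame of the local space, at the position the
// controlling person has chosen.
func superimpose(elsewhereFrame: CGImage, box: CGRect,
                 onto localFrame: CGImage, at position: CGPoint) -> CGImage? {
    guard let body = elsewhereFrame.cropping(to: box),
          let ctx = CGContext(data: nil,
                              width: localFrame.width,
                              height: localFrame.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Mirror the local space.
    ctx.saveGState()
    ctx.translateBy(x: CGFloat(localFrame.width), y: 0)
    ctx.scaleBy(x: -1, y: 1)
    ctx.draw(localFrame, in: CGRect(x: 0, y: 0,
                                    width: localFrame.width,
                                    height: localFrame.height))
    ctx.restoreGState()

    // The crop's edges stay visible as the bounding box.
    ctx.draw(body, in: CGRect(origin: position, size: box.size))
    return ctx.makeImage()
}
```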

Next Steps
  • Offer prompts for performance in each space: what is the detected body responding to?