What are You Looking At, or Second Glances (073)

What are You Looking At, or Second Glances (073) uses repetition to examine movement along a path in Prospect Park. The video follows a pattern: it plays for two seconds, jumps back one second, plays for two seconds, jumps back one second, and so on. As such, each part of the video is seen twice, but bookended by different sequences each time.
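The edit itself was made by hand (see Technicals below), but the pattern reduces to a simple list of in and out points. Here is a minimal sketch, in Swift for consistency with the other maps, of how those segments could be generated; the function name and parameters are illustrative, not part of the original edit:

```swift
// Illustrative sketch of the cut pattern: play two seconds, jump back one
// second, repeat. Generates (start, end) pairs for a clip of a given length.
func secondGlanceSegments(clipLength: Double,
                          playFor: Double = 2.0,
                          jumpBack: Double = 1.0) -> [(start: Double, end: Double)] {
    var segments: [(start: Double, end: Double)] = []
    var cursor = 0.0
    while cursor < clipLength {
        let end = min(cursor + playFor, clipLength)
        segments.append((start: cursor, end: end))
        if end == clipLength { break }  // stop once the clip's end is reached
        cursor = end - jumpBack         // jump back before playing on
    }
    return segments
}

// For a ten-second clip this yields (0,2), (1,3), (2,4), ..., (8,10):
// most of the footage appears twice, bookended differently each time.
```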

The map is a counterpoint to a previous map, Stutter Cutting (067), which cuts on a smaller time interval. Because the action here extends over one- and two-second stretches, the interruptions are much more identifiable. And because the sequences repeat, there’s anticipation as to whether the movement will continue or be broken.

Technicals

The video was filmed on an iPhone X on Saturday, March 17th, 2018. It was cut together using Adobe Premiere.

Next Steps
  • Cut the video around people’s movement

Moving Around (072)

Moving Around (072) creates new scenes by allowing a user to drag parts of an image to new locations in space.

As with previous maps, Moving Around captures images in an AR environment, but adds dragging functionality to create new compositions within that scene.

Rather than focusing on an in-situ comparison of fixed image and context, “Moving Around” allows the user, by re-positioning images, to deliberately obscure parts of the environment as well as duplicate parts of the scene.

Technicals

Moving Around was written in Swift using ARKit. In order to drag images, a hit test is performed when a user touches the screen to identify the image to move. As the user drags their finger across the screen, the x-y coordinates of the gesture are projected to the AR scene and used to update the image node’s position.
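A minimal sketch of that interaction, assuming an ARSCNView whose image planes are direct children of the root node; the class and method names are illustrative rather than the original implementation:

```swift
import ARKit
import SceneKit
import UIKit

// Handles a UIPanGestureRecognizer attached to the AR view.
class DragHandler {
    weak var sceneView: ARSCNView?
    private var draggedNode: SCNNode?

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let sceneView = sceneView else { return }
        let location = gesture.location(in: sceneView)

        switch gesture.state {
        case .began:
            // Hit test to identify which image node the finger landed on.
            draggedNode = sceneView.hitTest(location, options: nil).first?.node
        case .changed:
            guard let node = draggedNode else { return }
            // Keep the node at its current depth: project its position to get
            // the screen-space z, then unproject the new finger location.
            let projected = sceneView.projectPoint(node.position)
            node.position = sceneView.unprojectPoint(
                SCNVector3(Float(location.x), Float(location.y), projected.z))
        default:
            draggedNode = nil
        }
    }
}
```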

Not Quite Still Lifes (071)

Not Quite Still Lifes captures short videos, locates them in-situ, and plays them on loop. The videos replace any new activity happening behind them, but allow for a comparison of the recent past to the current present.

Where previous maps in the “Still Life” series used the movement in the “live” background to distinguish the static images, here the looping and the edges of the videos provide a subtle distinction between the realtime and the recorded.

Technicals

The map is built in Swift using ARKit. For the duration of a long press, successive pixel buffers are extracted from the camera frames via ARFrame’s capturedImage property and piped into a video writer object. When the press is released, the video finishes writing to disk and appears in the AR scene.
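The writer object isn’t named above; AVFoundation’s AVAssetWriter is a natural fit, so here is a rough sketch under that assumption. The class name, codec settings, and timing details are guesses for illustration, with frames assumed to arrive from ARSessionDelegate’s session(_:didUpdate:) while the press is held:

```swift
import ARKit
import AVFoundation

class LoopRecorder {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var adaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: TimeInterval?

    // Called when the long press begins.
    func start(outputURL: URL, width: Int, height: Int) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)
        (self.writer, self.input, self.adaptor) = (writer, input, adaptor)
        startTime = nil
    }

    // Called for each ARFrame while the press is held.
    func append(_ frame: ARFrame) {
        guard let adaptor = adaptor,
              adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
        if startTime == nil { startTime = frame.timestamp }
        let time = CMTime(seconds: frame.timestamp - (startTime ?? frame.timestamp),
                          preferredTimescale: 600)
        _ = adaptor.append(frame.capturedImage, withPresentationTime: time)
    }

    // Called when the press is released; the finished file can then be looped
    // on a plane in the scene (e.g. as an AVPlayer material).
    func finish(completion: @escaping (URL?) -> Void) {
        input?.markAsFinished()
        writer?.finishWriting { [weak self] in
            completion(self?.writer?.outputURL)
        }
    }
}
```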

Double Sided Stills (070)

Double Sided Stills builds on the previous maps by duplicating the captured image on both sides of a plane and locating the photo where it was taken. In walking through the captured images, things are seen in reverse: cars face the opposite direction; stairs obscured by a wall are visible in the photo. In a sense, the double-sided image acts as a historical rear-view mirror.

Technicals

The map is built in Swift using the ARKit framework. On tap, a new plane is added to the scene, slightly in front of the camera’s position. A pixel buffer is extracted from the camera frame via its capturedImage property and converted into an image using VTCreateCGImageFromCVPixelBuffer(). The image is applied as a material to the double-sided plane.
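A sketch of what that tap handler could look like; the plane dimensions and the half-meter camera offset are assumptions, while the capturedImage / VTCreateCGImageFromCVPixelBuffer / material path follows the description above:

```swift
import ARKit
import SceneKit
import VideoToolbox

func addDoubleSidedStill(to sceneView: ARSCNView) {
    guard let frame = sceneView.session.currentFrame else { return }

    // Convert the camera's pixel buffer into a CGImage.
    var cgImage: CGImage?
    let status = VTCreateCGImageFromCVPixelBuffer(frame.capturedImage,
                                                  options: nil,
                                                  imageOut: &cgImage)
    guard status == 0, let image = cgImage else { return }

    // Texture both sides of a plane with the capture.
    let plane = SCNPlane(width: 0.4, height: 0.3)
    plane.firstMaterial?.diffuse.contents = image
    plane.firstMaterial?.isDoubleSided = true

    // Place the plane slightly in front of the camera by offsetting along
    // the camera's -z axis.
    let node = SCNNode(geometry: plane)
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    node.simdTransform = frame.camera.transform * translation
    sceneView.scene.rootNode.addChildNode(node)
}
```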

Still Still Lifes (069)

Still Still Lifes places photo-captures of a scene within the scene itself.

Unlike the previous map, the photos do not compound. Rather, taking a new capture brings the background, obscured by a previous capture, to the fore. Instead of an aggregation, each capture seems to replace the previous.

The captures duplicate the context, but also change the environment in which they are situated. Each capture’s plane is distinguished from the context by a slightly misaligned edge. The unsteady movement of the photographer is registered against the static captures and the steady progression of the train.

Technicals

As with yesterday, the map is built using Swift and the ARKit framework. On tap, a new plane is added to the scene, slightly in front of the camera’s position. Rather than using snapshot(), a pixel buffer is extracted from the camera frame via its capturedImage property. This is then converted into an image using VTCreateCGImageFromCVPixelBuffer() and applied as a material to the plane.
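For illustration, here are the two capture paths side by side, assuming an ARSCNView named sceneView; the helper name is hypothetical:

```swift
import ARKit
import VideoToolbox

func captureStill(from sceneView: ARSCNView) -> CGImage? {
    // sceneView.snapshot() returns the rendered view, including any planes
    // already added to the scene, which is presumably why the previous map's
    // captures compounded.

    // capturedImage comes straight from the camera feed, before SceneKit draws
    // the virtual planes, so each new capture shows only the real background.
    guard let buffer = sceneView.session.currentFrame?.capturedImage else { return nil }
    var cgImage: CGImage?
    let status = VTCreateCGImageFromCVPixelBuffer(buffer, options: nil, imageOut: &cgImage)
    return status == 0 ? cgImage : nil
}
```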

Both experiments (068 and 069) also demonstrate ARKit’s overall reliance on distinct features or static contexts to keep the photos reliably situated. As shown in the video below, when the maps were explored outside at an intersection, the scene had difficulty maintaining the photos in a fixed location.

Next Steps
  • Compare how the app responds to a changing environment versus a static one.