The Subway and The Station

My final project for Alt-Docs proposes a web-doc about the subway as a place and space, as a character.

A stasis when moving between stations, then an eruption of activity at the stations.

A subway train often exists between contexts (the stations) and acts as a context itself for the activity that takes place within it. When between stations, there’s a calmness within the subway car, but once in the station with its doors open, the subway car undergoes a reshuffling.

In relation to the stations, the train connects these fixed and disparate worlds.

The movement between these spaces is largely devoid of context, often travelling in tunnels without lights or the changing scenery that typically orients people during travel. In this way, the train car and its people are simultaneously the context and the characters.

This external movement of the train between stations is juxtaposed with the internal stasis and consistency. The people are a step removed from movement: the train is moving but they are stationary. They’re also isolated from the external city as a context. In alternative modes of transportation such as riding the bus or walking, people physically see the change in location.

Manifestation

The web-doc will present a series of video pairings, juxtaposing the train and the station. The user interaction will focus on switching between, or controlling the presence of, the two videos.

Potential Shots


One camera is stationed in the train, and rides the line from one end to the other. This would be paired with fixed shots from several stations along the same line. These shots would have fleeting moments of shared movement, but the pairing would also highlight the disconnect and isolation within the train: the station shots pick up other trains (purple), while the “main character” train only “knows about” its own existence.

Two simultaneous videos of the train are shot from either end of the line. They overlap for a moment in the middle. But how does this relate to the station?

Audio

The audio track for each video pairing will provide an imagined narrative. It will be constructed from overheard conversations recorded on the train, audiobook recordings of what people are reading, clips from songs the subway riders are listening to, etc. The composited audio aims to bring the paired videos together into a single narrative.

Controllers for Pong, Part 4: Post-Playtesting

“Same” Controllers

I see games as having two forms: the idea of the game and the practiced game. In the idea of the game, all rules and tools are the same and players are differentiated only by their skills. Having each side use the same tools is an attempt to equalize play. However, this is a falsehood, as it presumes that all bodies are the “same” and that competition is determined by skill alone. In practice, we have different bodies with different capabilities. In the practiced game, players may use identical tools, or controllers, but the tools aren’t the equalizer their standardization suggests. Controllers inherently cater to a type of body and a type of capability.

Players and Controllers

The project is driven by an attempt to understand controllers:

  • How can the same goal of hitting a ball be achieved by different interactions?
  • How do different interactions change the relationship between the user and the game?
  • Do different interactions make different games?

While these questions are interesting to me, play testing made it evident that players are not motivated by these same curiosities. They asked, “How do I play and how do I win?” My assertion is that the answer to “How do I win?” varies between players. By offering a multitude of controllers, will players seek out the one that allows them to play their “best”?

“Different” Controllers

Players may decide to use the same controllers or different controllers. If two players choose different controllers, is it still the same game as when players are using the same controllers? That is, do the controllers themselves define the game?

Fundamentally, the behaviour for playing pong is moving a paddle to intercept a ball. When different controllers adjust the same paddle parameters—speed, direction, and/or position—through different interactions, the question becomes how those parameters are controlled. By giving players a choice in how they control the paddle’s behaviour, the game rejects the idea that a singular form of interaction is required to constitute a game.
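To make this concrete, here is a minimal sketch—in Python, with hypothetical names and ranges rather than the project’s actual code—of two different interactions driving the same paddle parameters:

```python
# Two hypothetical controller interactions driving one shared paddle model.
# Names, ranges, and step sizes are assumptions for illustration only.

class Paddle:
    def __init__(self, y=0.5, speed=0.0):
        self.y = y          # normalized position: 0.0 (top) to 1.0 (bottom)
        self.speed = speed  # normalized movement per update

    def update(self):
        # Apply current speed, clamped to the play area.
        self.y = min(1.0, max(0.0, self.y + self.speed))

# Interaction A: a binary toggle sets direction only; speed is fixed.
def binary_controller(paddle, button_up_pressed):
    paddle.speed = -0.05 if button_up_pressed else 0.05

# Interaction B: an absolute input (slider or button row) sets position directly.
def absolute_controller(paddle, slider_value):
    paddle.y = slider_value  # 0.0–1.0, paddle jumps to the chosen spot
    paddle.speed = 0.0

p = Paddle()
binary_controller(p, button_up_pressed=True)
p.update()
print(round(p.y, 2))  # paddle stepped upward from centre

absolute_controller(p, slider_value=0.8)
p.update()
print(p.y)
```

Both controllers produce legal paddle states; only the interaction differs, which is the distinction the project is probing.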

Providing Different Levels of Control

Many users had difficulty using controllers which modified only one aspect of the paddle (e.g. its speed or direction) through a binary condition (e.g. fast versus slow, or up versus down). They had to anticipate both the location of the constantly moving paddle and the location of the ball.

In the context of play testing, they had very little time to learn and master these new controls. They quickly had to work against their expectations of how the paddle would behave. I wonder whether, over a longer period of play, once they could anticipate the behaviour of the paddle, the game would become boring, since toggling between two binary states is a somewhat passive physical interaction. Additionally, how would adding a button that toggled between “in motion” and “stationary” affect the gameplay?

Controller Orientation

For the controllers that moved the paddle to an absolute position on the game area, users implemented different orientations of these controls.

When using the multiple-button based controller, one user oriented the buttons horizontally and used multiple fingers to jump between locations. Alternatively, another user oriented the controller vertically—directly corresponding to the movement of the paddle on the screen—and used only a single finger to control the paddle’s position.

Next Steps in Testing

When play testing, I manually simulated the action of the paddle as the users interacted with cardboard controllers. When the controllers offered various parameters to adjust—speed, direction, position—I had a hard time accurately mapping their interaction to my manual adjustment of the paddle. In my next round of user testing, I plan on using a coded version. The controllers themselves will not be the final fabricated versions (enclosures versus breadboards), but they will use the actual sensors to directly control how the paddle behaves on screen. I also plan on coding a “debug” view for myself that visually shows the sensor output in order to map the user behaviour to what’s going on “behind the scenes”. This visualization will be particularly helpful when testing the tilt-based controllers, where speed and direction are controlled along two different axes.
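The tilt-based mapping and its debug view could look something like the following sketch. The sensor names, angle ranges, and the specific axis-to-parameter assignment are assumptions for illustration, not the actual hardware:

```python
# Hypothetical sketch of the planned "debug" mapping for a tilt controller:
# one axis chooses direction, the other sets speed magnitude.
# Angle ranges (-45..45 degrees) are assumed, not measured from real sensors.

def map_tilt(tilt_x, tilt_y):
    """Map tilt angles (degrees) to paddle direction and normalized speed."""
    direction = 1 if tilt_x > 0 else -1       # x-axis: up versus down
    speed = min(abs(tilt_y) / 45.0, 1.0)      # y-axis: magnitude, 0..1
    return direction, speed

def debug_view(tilt_x, tilt_y):
    """One line of the 'behind the scenes' view shown during testing."""
    direction, speed = map_tilt(tilt_x, tilt_y)
    return f"tilt=({tilt_x}, {tilt_y}) -> dir={direction} speed={speed:.2f}"

print(debug_view(10, -22.5))  # prints: tilt=(10, -22.5) -> dir=1 speed=0.50
```

Printing the mapped values alongside the raw angles makes it possible to see whether a confusing paddle movement comes from the sensor, the mapping, or the player’s expectation.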

Controllers for Pong, Part 3: Planning

A while ago, I tested three early ideas for different controllers. Since then, I’ve broken down the paddle parameters: speed, direction, position, and motion. These will be combined or isolated in a variety of ways. Some parameters will be controllable by the user, while others will be products of the system. I hope to create controllers based on three categories:

  • object: akin to a typical controller, with buttons, switches, etc
  • environment: the controller is affected by its context (light, temperature, orientation, sound)
  • body: the movement and position of the user’s body itself is the input

Mapping the System: Various Object-Based Controllers

Bill of Materials

Sound Notation

Reading and Transcribing Sounds

Musical scores are a precise set of actions: play this note, at this speed, at this volume, for this duration, with these other notes. They are read and then played. They are rules for the future, outlining the sounds to be played.

Sonograms (also known as spectrograms) and waveforms exist after the sounds have been played. They are visual artifacts of the sound. Sonograms show the spectrum of frequencies in a sound selection with time along the horizontal axis and frequency along the vertical axis. The colour intensity often represents the amplitude of the individual frequencies. When frequency is shown logarithmically, rather than linearly, the musical and tonal relationships are emphasized.
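The logarithmic point can be made concrete: each octave doubles the frequency, so equal musical intervals only become equal visual distances on a log axis. The same relationship underlies mapping a frequency picked off a sonogram to the nearest note—a small Python sketch (using the standard MIDI note-number convention, A4 = 440 Hz = note 69):

```python
import math

# Why a log frequency axis emphasizes tonal relationships: pitch is
# logarithmic in frequency. The nearest MIDI note for a frequency read
# off a sonogram follows directly from that relationship.

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi(261.63))  # middle C -> 60
print(freq_to_midi(523.25))  # C one octave up -> 72
```

An octave apart in pitch is exactly twelve note numbers apart, however far up the frequency axis it sits—which is what a linear axis obscures and a logarithmic one reveals.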

Scores and sonograms bracket the event of music being produced.

  • A sonogram is a visual analysis of a recording, of sound already played
  • By parsing out notes from the sonogram, I’m going through a translation of played > recorded > parsed > replayed in a new context
  • Using the sonogram to approximate a note and then building it into the composition
  • Bird songs are complex and polyphonic, yet the score (“In C”) is composed of single notes with no chords – monophonic

Experimental Notation Precedents

Robert Moran
Louis Andriessen, Blokken from “Souvenirs d’enfance”
Brian Eno’s graphic notation for Music for Airports, published on the back of the album sleeve
Artist John De Cesare’s rendition of Wagner’s “Ride of the Valkyries”
Gyorgy Ligeti
Konstellationen by Roman Haubenstock-Ramati

Bird Sounds in C

Conducting Music

Aleatoric music is a form of composition in which a significant element is left to the determination of its performers. Terry Riley’s “In C” is a composition of 53 phrases to be played in sequence, but each performer individually determines the number of times they play each phrase. Together the performers form a collective conductor, yet they also act as many conductors, as the role is distributed among each of them.
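This distributed-conductor structure can be sketched in a few lines of Python. The phrase labels are placeholders, and the repeat range is an assumption (Riley’s score leaves it open), but the shape of the piece—fixed order, freely chosen repetitions—is what the code shows:

```python
import random

# Sketch of "In C"'s distributed conducting: every performer steps through
# the same 53 phrases in order, but chooses independently how many times
# to repeat each one. Labels and the 1..4 repeat range are placeholders.

phrases = [f"phrase-{i}" for i in range(1, 54)]  # "In C" has 53 phrases

def performer_timeline(seed):
    """One performer's playthrough: phrase order fixed, repeats chosen freely."""
    rng = random.Random(seed)
    timeline = []
    for phrase in phrases:
        timeline.extend([phrase] * rng.randint(1, 4))
    return timeline

# Three performers: same sequence of phrases, different lengths,
# so they drift in and out of alignment as the piece unfolds.
ensemble = [performer_timeline(seed) for seed in range(3)]
print([len(t) for t in ensemble])
```

Because every timeline visits the phrases in the same order but at its own pace, the ensemble stays recognizably “together” while never being synchronized—the texture the post describes.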

Terry Riley’s “In C”

Bird Sounds in C proposes to create an orchestra of birds to play “In C”. For each bird, the phrases of “In C” are recomposed by altering and arranging an individual “C note” extracted from its bird song. These phrases are then further composed by a single user who chooses when to advance each bird through the composition.

Constructing Sounds

Rather than starting with notation and then playing the written composition as a “live” performance, Bird Sounds in C instead starts with recorded sounds and reorders extracted elements to form the phrases of Riley’s “In C”. By using recorded bird songs—sounds with specific structure and meaning within their species—and recomposing them into a new composition, the sounds are given new meaning and recontextualized outside of “wildlife”.

The written score for “In C” appears monophonic, consisting of a single melody without chords or harmonies. However, through Riley’s intended method of performance, the various phrases and patterns overlap to create a densely layered and heterophonic texture.

Reinterpreting this dense texturing, Bird Sounds in C instead uses the complex polyphonic sounds of bird songs. When visualized as sonograms, the layered frequencies of recorded bird songs are striking.

A Fox Sparrow’s Breeding Song (Source: The Bird Guide)

Using the sonogram as a visual artifact of the bird song, I identified a snippet which approximated “C” which I then transposed and arranged into Riley’s phrases. In this way, the translation of recorded sound to extracted and reordered elements attempts to use polyphonic sounds as monophonic approximations of single notes.

Bird Sounds Iteration-01

Click the circle to start and advance the bird sounds

Midi Sounds Iteration-01

Prior to working with bird sounds, I used midi to create the tones. One user controlled two “musicians”, using the keyboard to advance each through the piece. With midi alone, the resulting audio was difficult to distinguish into two “musicians”.

Questions of Playability

  • What is the allowable degree of asynchronization?
  • Does the computational aspect create a system which removes human variability? (i.e. does the code make sure that when a phrase is advanced, it comes in on the first beat of a bar – or does the musician need to keep track of this?)
  • How much content from the original notation of the piece does the user need to know? What visual indicators are needed to make choices about when to advance one bird or another?