So I made it to 7 controllers. Which is close to 24! (Sort of.)
Each controller is fabricated with a top plate and bottom plate, offset with metal standoffs. While I typically like to prototype the physical fabrication for each project, these are first go-arounds. A number of things are not ideal, including the fit of the buttons, the pointy corners, and the awkward height. Fortunately some controllers aren’t necessarily meant to be held directly, but the overall dimensioning needs to be refined.
I was able to add an additional shared controller for adjusting ball speed and advancing between the practice and game play screens. The ball speed adjustment is particularly helpful once players have learned their unique controller and are looking for a challenge. It also introduces a level of collaboration versus playful sabotage: both players can adjust the ball speed, yet doing so requires them to temporarily abandon their own controller, leaving them vulnerable.
During user testing, the majority of controllers didn’t work, nor was switching between them easy, so I hope tomorrow’s presentation can be “user testing 2.0” and continue to inform the development of more controllers (on my journey to 100!). There’s a lot left to develop in this project; it might be interesting to build a set of controllers based on each class next semester. How could geospatial mapping inform a controller for pong? Or could existing objects be converted into controllers? I have no doubt this will continue to be another side project…
Even though my five functional controllers are barely interesting in operation, I’ve temporarily put the more adventurous ones aside. Instead, I’ve focused on the initial sequencing of the game and ensuring that the physical switching between controllers happens smoothly for the user.
Below is a screen capture of the game initialization. Additionally, the console of the Processing sketch on the right shows when a new controller is identified (around 1:08 into the video).
Rather than use a keyboard to advance to different modes, I’d like to have a shared controller with a button for triggering game play and a knob to control ball speed.
The handshaking code to get the name of the controller and then set its behaviour is as follows:
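A minimal sketch of that handshake in Processing, assuming each controller announces its name over serial when prompted; the “ID?” prompt, baud rate, and controller names below are placeholders rather than the actual protocol:

```java
import processing.serial.*;

Serial port;
String controllerName = "";

void setup() {
  size(640, 360);
  // Assumes the controller is on the first available serial port.
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
  // Ask the newly connected controller to identify itself (placeholder prompt).
  port.write("ID?\n");
}

void serialEvent(Serial p) {
  String incoming = trim(p.readStringUntil('\n'));
  if (incoming == null || incoming.length() == 0) return;

  if (controllerName.equals("")) {
    // The first reply is the controller's name; use it to set the behaviour.
    controllerName = incoming;
    println("New controller identified: " + controllerName);
    if (controllerName.equals("dial")) {
      // e.g. map incoming values to paddle position
    } else if (controllerName.equals("tilt")) {
      // e.g. map incoming values to paddle velocity
    }
  } else {
    // Subsequent messages are control readings from the named controller.
    float value = float(incoming);
    // ... update the paddle or ball state here
  }
}
```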
My focus this week has been culling the footage and pairing shots within the same geographies – i.e. two videos from the station perspective or two videos from the train perspective. Below are a series of screencasts that show the mouse-movement interaction of switching between one side and the other. Each video within the pair is continuously looping, so while watching one, the user is missing out on what’s happening in the other. Some footage is used in different sets of pairs, but through the juxtaposition, different aspects of the content are emphasized.
The subway project continues to develop around three central geographies: the train, the station, and the neighbourhood. I’m examining each as an individual context, illustrated from within, but also as connected contexts, illustrated by moving between them.
The train is a context where many narratives converge and diverge; a continuous space of renegotiation and shuffling. It oscillates between moments of intense speed coupled with minimal but constant noise (the whirring of wind, the regular clicking of the track) and complete motionlessness, with an outburst of excuse me’s, door dings, and alerts to “stand clear of the closing door”.
The station is an island yet functions between the spaces of the train and the neighbourhood; it is the bridge and an interstitial space. It is slow and quiet until suddenly it is not: a train whizzes by!
The neighbourhood lives above ground, where the people provide the connection to the station, and secondarily to the train. However, sound seeps up from below through air grates with the rumble of a passing train, or up the stairs with the people.
With Empire as a strong reference, I’m interested in what happens when its methods and juxtaposition techniques are applied to alternative content. In Empire’s case, the narrative is driven by the personal, human stories. But if the subject of my piece is an object (the train) and a place (the station or neighbourhood) rather than a person, are the techniques still appropriate? Can they be used to highlight the people within the very structured environment?
In order to test these questions, I’m considering using the same footage from each of the geographies (the train, the station, the neighbourhood) but reconstituting it through different frames of juxtaposition.
Pairs, but Separate
The first proposition is a series of diptychs, one for each geography. Each diptych is composed of two fixed-camera perspectives within the geography, but the user can only see one perspective at a time depending on their mouse position. Both sides of the diptych run simultaneously, even while the user is watching the other. I’ve created a couple of prototypes experimenting with the relationship between different shots. (Static images to be replaced with embedded videos…)
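As a rough sketch of that interaction in Processing, using the video library with placeholder filenames for one of the station pairs, both clips keep looping while only the side under the mouse is drawn:

```java
import processing.video.*;

Movie left, right;

void setup() {
  size(1280, 720);
  // Placeholder filenames; each diptych would load its own pair of clips.
  left  = new Movie(this, "station_a.mp4");
  right = new Movie(this, "station_b.mp4");
  left.loop();
  right.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  // Both clips keep playing; only the side under the mouse is drawn,
  // so the viewer is always missing what's happening in the other.
  if (mouseX < width / 2) {
    image(left, 0, 0, width, height);
  } else {
    image(right, 0, 0, width, height);
  }
}
```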
Note: I need to shoot more Neighbourhood footage to build out its geography.
A Triptych
Another proposition is a triptych in which the user has no control over what is playing and what is being watched. Footage from the train, station, and neighbourhood is shown simultaneously. The train footage plays continually, but the piece alternates between the station footage and the neighbourhood footage when the train is stationary or in motion, respectively. Note: The neighbourhood footage doesn’t feel appropriate for this at the moment. The emphasis is on the gateway down to the underground rather than the neighbourhood as a context. Perhaps reframing the shot would change the focus and make the relationship clearer. Additionally, maybe there needs to be more alignment in movement, like there is between the train and station shots. People exiting the stairs could correspond to when the train begins moving in the middle shot.
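One way to sketch that alternation in Processing, assuming hand-marked spans (in seconds) for when the train clip is in motion; the spans and filenames below are hypothetical:

```java
import processing.video.*;

Movie train, station, neighbourhood;

// Hypothetical hand-marked spans (in seconds) where the train clip is in motion.
float[][] motionSpans = { {12, 47}, {75, 110} };

void setup() {
  size(1920, 360);
  train = new Movie(this, "train.mp4");
  station = new Movie(this, "station.mp4");
  neighbourhood = new Movie(this, "neighbourhood.mp4");
  train.loop();
  station.loop();
  neighbourhood.loop();
}

void movieEvent(Movie m) {
  m.read();
}

boolean trainInMotion() {
  float t = train.time();
  for (float[] span : motionSpans) {
    if (t >= span[0] && t <= span[1]) return true;
  }
  return false;
}

void draw() {
  background(0);
  float w = width / 3.0;
  // The station plays while the train is stationary; the neighbourhood while it moves.
  if (trainInMotion()) {
    station.pause();
    neighbourhood.play();
  } else {
    neighbourhood.pause();
    station.play();
  }
  image(station, 0, 0, w, height);
  image(train, w, 0, w, height);
  image(neighbourhood, 2 * w, 0, w, height);
}
```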