Controllers for Pong, Part 1

Games are identified by a ruleset, in which participants voluntarily abide by a common framework of play. While the ruleset governs the behaviour of the players, it also determines the properties of tools and how players may use them. Rules also outline the goal of the game, which may include conditions for winning and losing, or for when a game ends.

So, are games differentiated by their rulesets or by their tools? Or by a combination of both? When one component of a game (the ruleset or the tools) is changed while the other is maintained, is a new game created? My intent is to explore these questions by experimenting with various sensors to create 24 iterations of new controls for a single-player version of Pong.

Digital Games and Non-Digital (Social) Games

In comparing digitally-mediated games and non-digital games, perhaps it becomes a question of how the ruleset is interpreted and enacted? My use of “digital games” refers to games in which the ruleset is regulated by digital technology. The ruleset takes the shape of literal code: a series of variables, if-statements and for-loops. The ruleset is fixed and constantly engaged. The execution of the code is a process of continually comparing actions of a player with the ruleset and adjusting the context (the digital environment created by the game) in response to the player’s actions, as dictated by the rules.
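
For instance, a single rule might take a shape like this minimal JavaScript sketch (all names and numbers here are hypothetical, purely for illustration):

  // One Pong-like rule expressed as literal code: variables,
  // if-statements, and comparisons.
  let ball = { x: 50, y: 40, speedX: -2, speedY: 1 };
  let paddle = { x: 10, y: 30, w: 4, h: 20 };
  let gameOver = false;

  function update() {
    ball.x += ball.speedX;
    ball.y += ball.speedY;

    // Rule: the ball bounces when it meets the paddle.
    if (ball.x <= paddle.x + paddle.w &&
        ball.y >= paddle.y && ball.y <= paddle.y + paddle.h) {
      ball.speedX = -ball.speedX;
    }

    // Rule: the round is lost when the ball passes the paddle.
    if (ball.x < 0) {
      gameOver = true;
    }
  }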

A social game, or non-digital game, is one in which people interpret how the actions of players correspond to the ruleset. When a game is not mediated by digital technology, enacting the rules relies on interpretation by the player(s). In multi-player games, the expectation is that all involved concede to the same interpretation of the rules; otherwise players would be playing “different” games. Yet human interpretation is variable, and thus allows for invention and the opportunity to “manipulate” the rules or operate within a deviation of the ruleset.

Without social interpretation, can digital games accommodate invention or cheating? Initially, my assumption was no. And perhaps this is true in that one cannot change the ruleset executed by the digital technology. But through interpretation, one can still “exploit” the ruleset. For example, in Candy Crush, when a player loses a life, they must wait for a period of time to regain it. Knowing this, the player can set the clock on their smartphone past the required time period and regain the life immediately. While the ruleset itself was not changed, and the computer correctly enforced the rule by comparing the duration of time since “death” against the computer’s clock, the player was able to interpret the rule, manipulate the controller (the computer), and thus exploit the constraint. Since the ruleset in a digital game is rigidly adhered to, it is easier to manipulate because it will not deviate; the player does not have to “guess” how a rule may be interpreted. But here, the manipulation by the player must occur outside the game. If the game itself tracked time, it would not be “tricked” in the same way.
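
To sketch the distinction (this is not Candy Crush's actual code, just a minimal illustration of the two timing strategies):

  // Strategy 1: trust the device clock. Setting the smartphone's
  // clock forward makes the wait appear to have elapsed.
  const WAIT_MS = 30 * 60 * 1000; // an assumed 30-minute wait
  let lostLifeAt = Date.now();

  function lifeRestoredByClock() {
    return Date.now() - lostLifeAt >= WAIT_MS; // exploitable
  }

  // Strategy 2: count elapsed time inside the game loop. The device
  // clock is never consulted, so changing it has no effect (though
  // the counter only advances while the game is running).
  let elapsedMs = 0;

  function tick(deltaMs) {
    elapsedMs += deltaMs; // called once per frame
  }

  function lifeRestoredInternally() {
    return elapsedMs >= WAIT_MS;
  }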

Controllers as Extensions

In the case of games, “mapping” refers to the relationship between controls and their effects. What effect does the action of a control have on the play of the game and the digital representation of the tools?

The original version of Pong used rotary knobs to control the paddles on screen. Turning the knob to the left moved the paddle up, while turning it to the right moved the paddle down. Here a physical action that operates in one direction (horizontal rotation) is mapped to movement in a different direction (the vertical axis). Furthermore, the physical extent of knob rotation corresponds to the visual extent of the digital screen on which the game is played. Although the physical direction of the mapping is not identical, players understand the digital effects of their physical actions.
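
In p5.js terms, this kind of mapping might be sketched as follows, where knobValue stands in for a knob reading between 0 and 1023 (an assumed range):

  let knobValue = 512; // hypothetical reading from a rotary knob, 0-1023

  function setup() {
    createCanvas(400, 300);
  }

  function draw() {
    background(0);
    // The knob's full physical range spans the screen's full height:
    // rotation in one direction becomes movement along the vertical axis.
    let paddleY = map(knobValue, 0, 1023, 0, height - 50);
    fill(255);
    rect(10, paddleY, 10, 50); // the on-screen paddle
  }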

In Pong, there are two sets of tools: the controls physically manipulated by the players (the knobs), as well as the paddles within the digital representation of the game that hit the ball. These two tools are inextricably linked, as each digital paddle is controlled by a physical knob.

When a familiar game such as Pong is played with new physical controllers that have alternative mappings to the digital representation of the paddle, does this create a new game?

On Context and Frameworks

Reading Buxton’s preface to “Sketching User Experiences” felt like someone had crawled into my head and clearly articulated the messy thoughts that have been accumulating. He writes:

Some academics, such as Hummels, Djajadiningrat, and Overbeeke (2001), go so far as to say that what we are creating is less a product than a “context for experience.” Another way of saying this is that it is not the physical entity or what is in the box (the material product) that is the true outcome of the design. Rather, it is the behavioural, experiential, and emotional responses that come about as a result of its existence and its use in the real world.

Before arriving at ITP, I wrote in my statement: “To design an experience—whether it be architectural, social, digital—establishes a context for relationships to occur between people, between people and space, and between people and technology. Opportunity for interactive technologies resides in amplifying how individuals position themselves in this context.”

Fundamentally, I think this interest in context and frameworks—the establishment of possibilities or constraints—is constantly being questioned through my work here at ITP.

For example, this (work-in-progress / yet-to-be-realized) idea of an “endless” loom provides a framework for users to alter the ruleset which creates a pattern. They are actively engaged in generating the content while simultaneously adjusting the framework. These adjustments are recorded into the system for future (and past) users to examine, creating a visual artifact of the various interactions.

The question that keeps coming up when I consider these frameworks or contexts for experiences is: how much control does the user have to alter the framework itself? On the sliding scale of reciprocity between action and reaction, where does the framework fall?

Synchronicity: Coupling Sound and Visuals

Synchrony occurs when events operate or occur in unison. At the core of synchrony is the passage of time and a change (the event) occurring within it. “Events” refer to a change in something—sound, motion, visuals, etc. Something was on, and then it wasn’t. Something was here, and then it was there. The other component of synchrony is the relationship between multiple events; synchrony cannot be identified in isolation. When exploring synchrony, the questions to ask are:

  • What are the events, or what things are changing?
  • What is the duration of the relationship?

In the physical world around us, events are rarely synchronized. We walk at a different pace than those around us; cars accelerate at different speeds when the light switches from red to green. But this is why our attention can be captured by synchronicity: since it is so uncommon, we often take note when objects, people, actions or sounds around us are in sync. Sometimes this synchronicity is planned and part of a performance, such as an orchestra playing to the time kept by a conductor.

Sometimes it happens by chance. There’s a goofy scene in Scrubs in which the sounds playing in JD’s headphones seemingly align with the actions and movements of those around him.

I’m interested in exploring how changes in synchrony and asynchrony may affect the attention of a user or the focal point of a piece of work. What interactions can be used to adjust the synchrony of events? Can users only participate in a piece if they perform synchronously? How does synchrony or asynchrony affect their behaviour?

Complete Audio and Visual Synchrony

Jono Brandel created Patatap, a keyboard-controlled sound and visual animation platform. Individual sounds and short animated elements are mapped to each key. Pressing the spacebar produces a new colour scheme as well as a new set of sounds. When keys are played systematically, the user can generate melodies and rhythms. At an event in San Francisco, Brandel himself demonstrated this with the help of a looper (I think?).

What’s particularly evident in the performance video is how intertwined and inseparable the sounds and visuals are. Each time the background colour swipes across the screen, it reinforces the underlying beat. Perhaps this level of synchronicity is particularly suited to the type of electronic sounds available in the platform. Like each animation, each sound is triggered in isolation and presented precisely. Sounds and visuals overlap, but each trigger and its presentation to the audience occurs individually.
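
As a rough illustration of this single-trigger coupling (not Brandel's actual implementation), a p5.js sketch using the p5.sound library could fire a short tone and a visual from the same key press:

  let osc;

  function setup() {
    createCanvas(400, 400);
    osc = new p5.Oscillator();
    osc.setType('sine');
    background(0);
  }

  function keyPressed() {
    // One trigger drives both channels: the key picks a pitch
    // and a circle size, so sound and visual land together.
    let n = key.toLowerCase().charCodeAt(0) % 12;
    osc.freq(220 * pow(2, n / 12));
    osc.start();
    osc.stop(0.2); // a short, isolated tone
    background(0);
    fill(255);
    circle(width / 2, height / 2, 50 + n * 20);
  }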

Variable Synchrony and Asynchrony

In Bon Iver’s recent music video for 29 #Strafford APTS, the audio is accompanied by distorted visuals akin to seeing through a kaleidoscope. The visuals at the beginning of the song seem to take inspiration from Josef Albers, composed of colour blocks with stark edges. Yet this clarity and precision is disrupted by the effects of the kaleidoscope—the blocks become layered, multiplied and seemingly of a dream state. The transformation of the visuals and the effect of the kaleidoscope do not seem to be tied to the audio; changes happen without regularity. The computer-generated graphic compositions switch to recorded footage of birds, a bedroom, nature. Yet there is one striking moment of alignment between the music and visuals. After zooming into a psychedelic sunburst graphic above the bed, and seeing the pixel grain of the screen displaying this digital image, at 2:53 both the music and visuals “snap out of it,” almost like waking up from a dream. With the reappearance of words layered on the visuals, the viewer/listener is reminded that a story is being told. The dream state they were lulled into through asynchrony and blended changes is disrupted by the sudden alignment of sound and visual.

Work in Progress

To explore this idea of visual and auditory synchronization, I want to create a potentially interactive animation to be coupled with a previous sound piece I worked on. In early prototyping, I’ve started looking at how to get objects moving independently.

I imagine building out a larger prototype in which multiple objects are synchronized to different aspects of an audio clip. Maybe changes in volume result in something changing size, or an object appears and disappears in line with the beat. Are all objects simultaneously synchronized with the audio or do they each come in and out of sync independently?
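
As a first pass, a sketch might look something like this, assuming the p5.sound library and a placeholder audio file named track.mp3: one object follows the clip's volume while a second blinks on its own clock, independent of the audio.

  let sound;
  let amplitude;

  function preload() {
    sound = loadSound('track.mp3'); // placeholder file name
  }

  function setup() {
    createCanvas(400, 400);
    amplitude = new p5.Amplitude();
    sound.loop();
  }

  function draw() {
    background(0);
    // Synchronized: the circle's size follows the clip's volume.
    let level = amplitude.getLevel(); // 0.0 to 1.0
    fill(255);
    circle(width / 2, height / 2, map(level, 0, 1, 20, 300));
    // Independent: a square blinking on its own 60-frame cycle,
    // drifting in and out of sync with the audio.
    if (frameCount % 60 < 30) square(20, 20, 40);
  }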

A Week of Making: Day 4

We recently learned how to transmit signals using serial communication from an Arduino to the browser and produce an “outcome” with p5.js. Physical interactions, such as turning a potentiometer or pressing a switch, resulted in signals being passed from the Arduino using Serial.print().
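
A minimal version of the browser side might look like the following, assuming the p5.serialport library and a hypothetical port name; the Arduino would send each reading with Serial.println() (the newline-terminated variant of Serial.print()) so the line breaks can be used to split values:

  let serial;
  let sensorValue = 0;

  function setup() {
    createCanvas(400, 400);
    serial = new p5.SerialPort();
    serial.open('/dev/tty.usbmodem1411'); // hypothetical port name
    serial.on('data', serialEvent);
  }

  function serialEvent() {
    // Each line holds one reading sent by the Arduino.
    let inString = serial.readLine();
    if (inString.length > 0) {
      sensorValue = Number(inString);
    }
  }

  function draw() {
    background(0);
    fill(255);
    text(sensorValue, width / 2, height / 2);
  }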

As a prototype to explore this functionality, my partner, Bryan, and I built a rain simulator that was controlled with a force sensor. More force produced more rain drops. (link to code)

An early iteration of the code used three bands of values from the force sensor to trigger varying amounts of rain. A sensor reading between 0 and 300 created a light drizzle, 301 to 800 a moderate rainfall, and 801 to 1023 a monsoon. However, we wanted a more nuanced gradient of falling raindrops. Using a “greater than” condition with the modulo operation produced the desired effect.
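
A rough reconstruction of the two approaches (the actual code is linked above; the drop counts and the 20-frame cycle here are assumptions):

  // Banded version: three fixed intensities.
  function dropsPerFrame(reading) {
    if (reading <= 300) return 1;       // light drizzle
    else if (reading <= 800) return 5;  // moderate rainfall
    else return 20;                     // monsoon
  }

  // Gradient version: a modulo cycle with a "greater than" condition.
  // Harder presses clear the threshold on more frames of each
  // 20-frame cycle, so rainfall scales smoothly with the force reading.
  function shouldSpawnDrop(reading) {
    return frameCount % 20 > map(reading, 0, 1023, 19, 0);
  }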

Interacting with the computer using a device other than the traditional mouse, trackpad, or keyboard was thrilling. Using an unexpected mechanism can inspire interactions originally overlooked or not considered. After building the rain-maker, I’ve been considering what other mechanisms I could use to control a game and how unexpected sensors may challenge a user’s preconception of a familiar game. For example, what if:

  • interacting with a photoresistor moved the paddle for Pong?
  • two force sensors controlled left-to-right movement and up-and-down movement respectively?
  • a potentiometer was used to control an object balancing on something else?

Sketch 04: Refactoring an Early Project

This week I focused on revising my first sketch to make better use of objects and functions. I maintained the original premise of drawing a number of different shapes with different colour and size properties, but changed how variation was determined and how the shapes were organized on the canvas.

Two simple functions make up the code: one that determines the shape’s visual attributes and a second that actually draws the shape. The first function (configureShape) uses a series of constrained random calls to set the properties and assigns these values to the shape object. The second function (drawShape) plugs these values into the ellipse, rect, or line functions and draws them to the screen.
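
A condensed sketch of the two functions might look like this; the attributes and ranges are placeholders, and the full code is linked below:

  function configureShape(shape) {
    // Constrained random calls choose the visual attributes.
    shape.kind = random(['ellipse', 'rect', 'line']);
    shape.x = random(width);
    shape.y = random(height);
    shape.size = random(5, 30);
    shape.colour = color(random(255), random(255), random(255));
  }

  function drawShape(shape) {
    stroke(shape.colour);
    fill(shape.colour);
    if (shape.kind === 'ellipse') {
      ellipse(shape.x, shape.y, shape.size, shape.size);
    } else if (shape.kind === 'rect') {
      rect(shape.x, shape.y, shape.size, shape.size);
    } else {
      line(shape.x, shape.y, shape.x + shape.size, shape.y + shape.size);
    }
  }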


Press “R” to restart the sketch and produce new confetti

code

Beyond this, I was interested in storing the shapes and then revealing them in different ways. Although not ideal, each shape is drawn individually, sequentially, and then added to an array. Pressing “c” shows all the circles, “s” reveals the squares, “a” shows all shapes, “r” resets what’s shown as well as the background colour. Lastly, pressing and dragging the mouse up and down on the canvas reveals individual rows.
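
Building on the sketch above, the reveal logic might look something like this (again condensed; the full code is linked below):

  let shapes = []; // every drawn shape is stored here

  function keyPressed() {
    if (key === 'c') redrawShapes('ellipse');      // circles only
    else if (key === 's') redrawShapes('rect');    // squares only
    else if (key === 'a') redrawShapes(null);      // everything
    else if (key === 'r') background(random(255)); // reset
  }

  function redrawShapes(kind) {
    background(255);
    for (let s of shapes) {
      if (kind === null || s.kind === kind) drawShape(s);
    }
  }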


Press “c”, “s”, “a”, “r” or click and drag up and down

code

With this second iteration, I couldn’t get it to work with the crosses. I’m not quite sure why they were behaving differently, but I hope to dig into it.