A Music Controller Not To Be Seen

Legibility and Feedback

When a controller is to be operated while not being seen, legibility of sensors/inputs and feedback of action are important considerations.

Legibility refers to how you know which control does what. This may be indicated through a control's shape, texture, and relational configuration or orientation. Regarding configuration: How are the controls oriented with respect to each other? How are the controls oriented with respect to the body? How is the entire controller oriented with respect to the body?

Feedback refers to how you know you’ve done something. Without sight, this could be communicated through sound, touch, and haptics. Haptics may be additional feedback beyond any physical alteration of the sensor itself, such as a vibration, but it may also be a result of the sensor itself. For example, a maintained push button will feel different relative to the enclosure when depressed versus released.

The Scale of Physical Interaction with an Object

As the controller is not seen while being operated, does the scale of the interaction and of the controller increase? When controllers are considered as objects, interaction can occur at the scale of the whole hand rather than individual fingers. Some initial thoughts I considered:

  • Is it a glove interface between two hands? Touching different fingers together triggers different functions.
  • Are objects held and placed down on a surface to trigger different events?
  • Does an object have different faces which are touched to control different functions?

Sustained versus Momentary Interactions

How are interactions different when they are sporadic events, such as momentarily pushing a button or flipping a switch, versus sustained interactions, such as steadily holding down a button or shaking an object? What is the difference between an event-based interaction and an interaction of duration? Does each of these respond differently to the “directionality” of a controller or interaction?


Controllers

When considering the music controller at the scale of an object interacted with by the whole hand, I considered three scenarios.

 

All Functions Concentrated into a Single Object, Functions are Fixed to Faces (Absolute Faces)

Different sides of an object trigger different commands. When all functions are combined into a single object, their configuration with respect to each other is important as the user must understand the orientation of the object itself. The tactile difference between each side is important for this orientation.
Further Development: Rather than a cube, does the object have two rounded sides for “previous” and “next,” as they are momentary actions rather than sustained states? How is each face differentiated: different materials, texturing of the same material, shapes, etc.?

Single Object

Each Function is a Different Object (Absolute Objects)

Moving any object along any axis triggers the associated function. In this instance, orientation and configuration between objects do not matter, which allows individual users to arrange the objects as desired. However, the legibility of each individual object is incredibly important, as shape, size, and texture allow the user to discern one object from another.
Further Development: Explore different sets of objects: all the same material yet different forms, versus all different textures but the same form.

All Functions Concentrated into a Single Object, Functions are Associated with Gesture and Direction (Relative Faces)

Similar to the first consideration, all functions exist within one object. However, the functions are not fixed to faces but rather to spatial direction or gesture. The orientation of the object is remapped after each gesture to allow the same gesture to occur in succession.
Currently in development


Schematic including a socket for the ATtiny84 and a connection for the 9V battery supply. Pins 5 and 4 on the Bluetooth connector are Rx and Tx, respectively.

Circuit Boards

Keen to develop the enclosures further after this week, I chose to mill the circuit boards and use ATtiny84s as the microcontrollers. As the controller is based around interacting with an object, I prioritized making the boards wireless and configuring different setups to allow fabrication with a number of different enclosures.


Pathways, Post-Presentation

(Project created and post written in collaboration with Chloe and Ji Young)

PATHWAYS

We witnessed some circular movements when users were trying to identify themselves on the pathway, and some linear movements when they were trying to make connections to each other. There were also some noisy pathways when the users were getting closer to each other to create a line together, as well as when moving to and from the projection on the front screen. Moreover, they used outward and expansive movement; their limbs moved away from their centers.

In the meantime, the users always faced the projection on the front screen. They bumped into each other because they didn’t look side to side but used the dots on the screen to figure out where their partner was. In terms of space, the users didn’t explore the whole area of the classroom, yet created a space between bodies and within their own bodies. At the beginning, they stood and moved parallel to the screen. After realizing they could make connections with a line, the users started using the depth of the space.

Additionally, there was a rhythm to the users’ movements. They moved quickly to identify themselves at first, then slowed down to find a pose and make a connection. Once the connection was achieved with a line, they moved quickly again to make new lines.

EXPECTATIONS

We expected the users to do more exploration of depth and space and some degree of unison between the partners, but that wasn’t always the case. Therefore, when the first group naturally tried to connect with each other at different depths, we were very excited. However, some unexpected things happened during the test. We expected users to also play with the screen when not making connections, but the activity became purely about making the lines between each other; they didn’t really move around the space, instead moving their bodies from a stationary position. We also did not expect the users to avoid looking at each other while making the bezier connection, and no one realized that the control points of the bezier curves were their other hands and feet. We played a song during the second user test, and the users did not respond to the music; they purely focused on the screen.

DESIGN DECISIONS

To encourage our users to move their hands and feet, we tracked pulsing circles with these joints. When the users were within a certain distance of each other, the pulsing stopped and was replaced by a bezier curve connected between the users’ hands and feet. We made this distance quite small, as we wanted people to move closer and closer together, hopefully eventually treating themselves as a single, combined body. Since we imagined our experience to be engaged by two bodies, we didn’t account for – or create a design scenario for – when there were either more or fewer than two skeletons. Additionally, we specified different colors for the skeletons but used random() for each of the RGB values, so occasionally the two skeletons were close in color and it became difficult to distinguish between them. Furthermore, we did not thoroughly design the quality of the line between people; we simply went with our first instinct of a thin white curve. Lastly, we didn’t thoroughly explore connecting different joints beyond the hands and feet. Fortunately, connecting the feet caused people to experiment with their balance and center of gravity!
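The color problem above suggests a simple alternative: resample until the two skeleton colors are far enough apart in RGB space. This is a hedged sketch of that fix, not what our project actually did; the function names and the minimum-distance value are illustrative.

```javascript
// Pick a random RGB color as a [r, g, b] triple.
function randomColor() {
  return [0, 0, 0].map(() => Math.floor(Math.random() * 256));
}

// Euclidean distance between two colors in RGB space.
function colorDistance(c1, c2) {
  return Math.hypot(c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]);
}

// Resample the second color until the pair is visually distinguishable.
// The threshold (150 of a possible ~441) is an illustrative assumption.
function distinctColors(minDistance = 150) {
  const first = randomColor();
  let second = randomColor();
  while (colorDistance(first, second) < minDistance) {
    second = randomColor();
  }
  return [first, second];
}

const [skeletonA, skeletonB] = distinctColors();
console.log(colorDistance(skeletonA, skeletonB) >= 150); // true
```

Resampling keeps the playful randomness of random() while guaranteeing the two skeletons never converge on nearly identical colors.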

Link to code

“Before” Diagram
“After” Diagram – the users didn’t act in unison as we expected

Retaining Context

What if clicking links didn’t take you away from a page but layered on new information, creating a sense of trails or context?

Horizontal sections increase as more traces are added

 

In a very rudimentary version of this idea, a slider is used to toggle between a single article and a number of articles linked from the first post. A series of iframes load content from Wikipedia.

In another iteration, static data is used to test the concept. Two arrays maintain the content: one for ‘viewed content’ (the links clicked) and one for ‘potential content’ (the links that can be clicked). Ideally, this content would be pulled dynamically (i.e., from a database of blog posts, etc.), but for now Wikipedia is sufficient. When a user clicks a link within an article, the corresponding content from the potential array is added to the viewed array, which populates the HTML page seen by the user. Rather than navigating away from the current block of content, the additional content is added horizontally in a set of increasingly narrow columns.
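The viewed/potential mechanic can be sketched in a few lines. This is an illustrative reconstruction, not the actual code; the array names, item shape, and the equal-split column width are assumptions.

```javascript
// Content the user has already opened, rendered as horizontal columns.
const viewedContent = [{ id: "article-1", title: "First Article" }];

// Content reachable from links within the viewed articles.
const potentialContent = [
  { id: "article-2", title: "Linked Article A" },
  { id: "article-3", title: "Linked Article B" },
];

// When a link is clicked, move the target from potential to viewed,
// keeping the earlier columns (the trail) in place.
function followLink(id) {
  const index = potentialContent.findIndex((item) => item.id === id);
  if (index === -1) return viewedContent; // unknown link: no change
  viewedContent.push(potentialContent.splice(index, 1)[0]);
  return viewedContent;
}

// Each column narrows as more traces are added; a simple equal split:
function columnWidthPercent() {
  return 100 / viewedContent.length;
}

followLink("article-2");
console.log(viewedContent.length);  // 2
console.log(columnWidthPercent()); // 50
```

An equal split is the simplest width rule; weighting newer columns wider would be a natural refinement.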

(code)

Further Questions:
  • Is there a hierarchy of related content? Are certain links of a primary relation, secondary relation, and so on?
  • How extensive could the related content be before it becomes illegible? At what point are related links or content entire articles vs excerpts vs images and media?
  • How can a website network be zoomable and spatialized to show the connections? (See Joni Korpi’s Zoomable UI.)

Sensing You and Me, Pre-Presentation

When interacting with technology, who initiates the conversation – the machine or the person? Our project hoped to explore this question and present a study of technologically-mediated interaction between two people over time.

 

Interaction graph (diagram) showing the pair’s relational movement over time.

We hoped that two people would transition from individual and circular movement (each waving their limbs around to identify themselves on the screen) to recognizing that their individual movements were actually tied together, creating shared pathways.

Individual dots are drawn at the hands and feet of each person. Two independent bezier curves are contingently drawn between the inside hands and feet of the pair when they are within a certain distance of each other. When connected, each individual’s other hand and foot act as control handles for the bezier. Once the pair realizes that they can act together, their movements become much slower and more deliberate in an attempt to manipulate the curves together. While neither in perfect unison nor completely random, the pair somewhat steer each other as they are digitally tied together.
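The connection rule described above can be sketched as a small function: a bezier is returned only when the pair’s inside hands are close enough, with each person’s other hand acting as a control handle. This is a minimal sketch; the joint names, coordinate shape, and 80-pixel threshold are assumptions, not the project’s actual values.

```javascript
// Illustrative threshold; the actual distance was kept quite small.
const CONNECT_DISTANCE = 80;

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Returns the curve's anchor and control points when the pair is
// connected, or null when the pulsing dots should be drawn instead.
function handCurve(personA, personB) {
  if (distance(personA.insideHand, personB.insideHand) > CONNECT_DISTANCE) {
    return null; // too far apart: no bezier yet
  }
  return {
    start: personA.insideHand,     // anchor on person A
    control1: personA.otherHand,   // A's other hand steers the curve
    control2: personB.otherHand,   // B's other hand steers the curve
    end: personB.insideHand,       // anchor on person B
  };
}

const a = { insideHand: { x: 100, y: 100 }, otherHand: { x: 40, y: 160 } };
const b = { insideHand: { x: 150, y: 110 }, otherHand: { x: 220, y: 60 } };
console.log(handCurve(a, b) !== null); // true: hands are ~51px apart
```

The same function would be called once for hands and once for feet, giving the two independent curves.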

On McLuhan’s ‘The Medium is the Message’

Originally published in 1964, Marshall McLuhan’s sentiment that ‘the medium is the message’ is widely misunderstood, with ‘message’ mistaken for content. Note: perhaps this misreading is inevitable, as McLuhan’s writing is laced with personalized jargon and somewhat lacks an easy-to-follow structure. As pointed out by W. Terrence Gordon in the 2003 critical edition, McLuhan in fact dictated his text rather than wrote it.

McLuhan argues that the “message” of a technological extension is actually the resulting change in human relationships, not how that technology is used. He distinguishes the “message”, or change associated with a technology, from its “content”, or use, by describing the transition from the lineal connections of the mechanical age to the instant configurations of the electronic. This distinction, however, has been mistaken because our literacy in a particular mechanical technology — typography — has been conflated with rationality; and thus, the “message” has been conflated with “content”. Instead, McLuhan asserts, the “content” of any technology is actually another technology.

Recognizing the Message

McLuhan argued that de Tocqueville was able to understand the grammar of typography (the medium and the message) because he stood outside both the structures being dismantled and the technology (print and typography) by “which their undoing occurred and could then see the ‘lines of force’ being discerned.” (McLuhan 2003:##) But, knowing that we are living within the constraints and associations introduced by a previous medium, is it possible to recognize the message of new media and the change a technology introduces into society? Or can it only be understood through an examination of the past?

If print and typography resulted in conflating reason with the sequential and uniform – and thus inhibited an understanding of simultaneous configurations with obscured sequence – what is our current hindrance? Perhaps this is the challenge of discussing VR/AR/MR. Our understanding of its message (if it even has a message?) is frustrated by our current real-time-communication-informational(?) cultural bias. We are likely not yet far enough outside of current media to understand its change to the scale of relationships, let alone decipher the message of a potential future media.

Key Citations

“In terms of the ways in which the machine altered our relations to one another and to ourselves, it mattered not in the least whether it turned out cornflakes or Cadillacs.” (McLuhan 2003:19)

“The American stake in literacy as a technology or uniformity applied to every level of education, government, industry, and social life is totally threatened by the electric technology.” (McLuhan 2003:20)

“Cotton and oil, like radio and TV, become “fixed charges” on the entire psychic life of the community. And this pervasive fact creates the unique cultural flavor of any society. It pays through the nose and all its other senses for each staple that shapes its life.” (McLuhan 2003:35)


  1. McLuhan, Marshall. “The Medium Is the Message.” Understanding Media: The Extensions of Man. Ed. W. Terrence Gordon. Berkeley: Gingko, 2003. 17-35. Print.