Retaining Context

What if clicking links didn’t take you away from a page but instead layered on new information, creating a sense of trails or context?

Horizontal sections increase as more traces are added


In a very rudimentary version of this idea, a slider toggles between a single article and a number of articles linked from the first post. A series of iframes loads content from Wikipedia.

In another iteration, static data is used to test the concept. Two arrays maintain the content: one for ‘viewed content’ (the links clicked) and one for ‘potential content’ (the links that can be clicked). Ideally, this content would be pulled dynamically (e.g. from a database of blog posts), but for now Wikipedia is sufficient. When a user clicks a link within an article, the corresponding content is moved from the potential array to the viewed array, which populates the HTML page seen by the user. Rather than navigating away from the current block of content, the additional content is added horizontally in a set of increasingly narrow columns.

(code)
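The array mechanics described above can be sketched as follows. This is a minimal, hypothetical reduction: the names (`viewedData`, `potentialData`, `onLinkClick`, `render`) are illustrative, and `render` returns markup as a string rather than writing to the DOM so the sketch stays self-contained.

```javascript
// Hypothetical sketch of the viewed/potential content model.
const potentialData = [
  { id: 'hypertext', html: '<p>Hypertext article…</p>' },
  { id: 'memex', html: '<p>Memex article…</p>' },
];
const viewedData = [
  { id: 'origin', html: '<p>The first article…</p>' },
];

// When a link inside an article is clicked, move the matching
// entry from the potential array to the viewed array, then re-render.
function onLinkClick(id) {
  const i = potentialData.findIndex((item) => item.id === id);
  if (i === -1) return; // not a tracked link
  viewedData.push(potentialData.splice(i, 1)[0]);
}

// Each viewed article becomes one column; columns narrow as more
// traces are added so they all share the viewport width.
function render() {
  const width = (100 / viewedData.length).toFixed(2) + '%';
  return viewedData
    .map((item) => `<article style="width:${width}">${item.html}</article>`)
    .join('');
}
```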

Further Questions:
  • Is there a hierarchy of related content? Are certain links of a primary relation, secondary relation, and so on?
  • How extensive could the related content be before it becomes illegible? At what point are related links or content entire articles vs. excerpts vs. images and media?
  • How can a website network be zoomable and spatialized to show the connections? (See Joni Korpi’s Zoomable UI.)

Sensing You and Me, Pre-Presentation

When interacting with technology, who initiates the conversation – the machine or the person? Our project hoped to explore this question and present a study of technologically-mediated interaction between two people over time.


Interaction graph (diagram) showing the pair’s relational movement over time.

We hoped that two people would transition from individual and circular movement (each waving their limbs around to identify themselves on the screen) to recognizing that their individual movements were actually tied together, creating shared pathways.

Individual dots are drawn at the hands and feet of each person. Two independent bezier curves are drawn between the inside hands and feet of the pair when they come within a certain distance of each other. When connected, each individual’s other hand and foot act as control handles for the bezier. Once the pair realizes that they can act together, their movements become much slower and more deliberate in an attempt to manipulate the curves together. While neither in perfect unison nor completely random, the pair somewhat steer each other as they are digitally tied together.
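The curve construction described above can be sketched in code. This is a hypothetical reduction to one cubic bezier between the two inside hands, with each person’s outside hand as a control handle; the `THRESHOLD` value and point structure are assumptions, not the project’s actual tracking code.

```javascript
// Hypothetical sketch: one cubic bezier tying two people together.
// Each person is { inside, outside }: inside hand as an endpoint,
// outside hand as its control handle.
const THRESHOLD = 200; // assumed max distance (px) before the curve connects

function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Returns a point on the cubic bezier at t in [0, 1], or null if
// the pair's inside hands are too far apart to connect.
function pairCurve(p1, p2, t) {
  if (dist(p1.inside, p2.inside) > THRESHOLD) return null;
  const [a, c1, c2, b] = [p1.inside, p1.outside, p2.outside, p2.inside];
  const u = 1 - t;
  return {
    x: u * u * u * a.x + 3 * u * u * t * c1.x + 3 * u * t * t * c2.x + t * t * t * b.x,
    y: u * u * u * a.y + 3 * u * u * t * c1.y + 3 * u * t * t * c2.y + t * t * t * b.y,
  };
}
```

Because each endpoint and its control handle belong to different limbs of the same person, either person moving any tracked limb reshapes the shared curve.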

On McLuhan’s ‘The Medium is the Message’

Originally published in 1964, Marshall McLuhan’s sentiment that ‘the medium is the message’ is widely misunderstood as equating the message with content. Note: perhaps this misreading is inevitable, as McLuhan’s writing is laced with personalized jargon and somewhat lacks an easy-to-follow structure. As pointed out by W. Terrence Gordon in the 2003 critical edition imprint, McLuhan in fact dictated his text rather than wrote it.1

McLuhan argues that the “message” of a technological extension is actually the resulting change in human relationships, not how that technology is used. He distinguishes the “message”, or change associated with technology, from its “content”, or use, by describing the transition from the lineal connections of the mechanical age to the instant configurations of the electronic. This distinction, however, has been mistaken because our literacy of a particular mechanical technology — typography — has been conflated with rationality; and thus, the “message” has been conflated with “content”. Instead, McLuhan asserts, the “content” of any technology is actually another technology.

Recognizing the Message

McLuhan argued that De Tocqueville was able to understand the grammar of typography (the medium and the message) because he stood outside of the structures being dismantled and the technology (print and typography) by “which their undoing occurred and could then see the ‘lines of force’ being discerned.” (McLuhan 2003:##) But, knowing that we are living within the constraints and associations introduced by a previous medium, is it possible to recognize the message of new media, the change a technology introduces into society? Or can it only be understood through an examination of the past?

If print and typography resulted in conflating reason with the sequential and uniform, and thus inhibited an understanding of simultaneous configurations with obscured sequence, what is our current hindrance? Perhaps this is the challenge of discussing VR/AR/MR. Our understanding of its message (if it even has a message?) is frustrated by our current real-time, communication-informational(?) cultural bias. We are likely not yet far enough outside of current media to understand its change of scale of relationships, let alone decipher the message of a potential future media.

Key Citations

“In terms of the ways in which the machine altered our relations to one another and to ourselves, it mattered not in the least whether it turned out cornflakes or Cadillacs.” (McLuhan 2003:19)

“The American stake in literacy as a technology or uniformity applied to every level of education, government, industry, and social life is totally threatened by the electric technology.” (McLuhan 2003:20)

“Cotton and oil, like radio and TV, become “fixed charges” on the entire psychic life of the community. And this pervasive fact creates the unique cultural flavor of any society. It pays through the nose and all its other senses for each staple that shapes its life.” (McLuhan 2003:35)


  1. McLuhan, Marshall. “The Medium Is the Message.” Understanding Media: The Extensions of Man. Ed. W. Terrence Gordon. Berkeley: Gingko, 2003. 17-35. Print.

The Most Verbose, The Most Concise

Verbose v. Concise
What does it mean to have many functions distributed over many controls as opposed to many functions concentrated into a single control?

The required functions are as follows:

  • An on and off control
  • When turned on, the previous brightness level is restored.
  • Each color channel can be faded from off to full brightness.
  • Overall brightness can be faded.
  • Brightness is maintained when fade control is released.
  • Fade is interruptible by other controllers.

Below are descriptions of one built controller (as pictured above) and one proposed controller (to be completed shortly, when my millions of switches arrive…).

  • CONCISE (one momentary connection): 1 push button that turns an LED on and off, fades each color channel from off to full brightness, and fades overall brightness.
  • VERBOSE (many maintained connections): 1 toggle switch that turns an LED on and off and, when turned on, restores the previous brightness level; 255 toggle switches per color channel (765 total) fade each channel from off to full brightness; another 255 toggle switches fade overall brightness.

In building out the Concise Controller with a single button, the LED functioned not only as the ‘lamp’ itself, but also as a visual indicator for each of the functional conditions. Blinking, fading, and solid levels in combination with timed intervals are used to indicate, and then ‘lock in’, changes by the user.

Breadboard Diagram

However, how the light is turned off remains a particularly inelegant solution. As an option, it sits within the same set of selections as the individual color channels, and while the blinking rhythm is different, I anticipate this will be a point of confusion in practice. Alternatively, I’d like to implement an “off” switch by holding down the button for a set period of time, which could occur at any point. I tried to get this functional in code, but it needs more work.
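One way to sketch the press-and-hold “off” gesture described above (a hypothetical sketch, not the controller’s actual code: the `HOLD_MS` threshold is assumed, and timestamps are passed in rather than read from a clock so the logic stays testable):

```javascript
// Hypothetical hold-to-turn-off logic. Timestamps are arguments
// rather than calls to millis()/Date.now() so it can be tested.
const HOLD_MS = 1500; // assumed hold duration required to switch off

let pressedAt = null;

function onPress(now) {
  pressedAt = now; // remember when the button went down
}

// Returns 'off' when the button was held long enough, or 'tap'
// for a short press (which would cycle selections as before).
function onRelease(now) {
  const held = now - pressedAt;
  pressedAt = null;
  return held >= HOLD_MS ? 'off' : 'tap';
}
```

Because the decision is made on release, a hold can begin at any point in the selection sequence without colliding with the tap-to-cycle behavior.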

Link to Github Gist


At Scale of the Controller v. Transistor
What prompted this idea of verbose and concise controllers was how states are tracked by sensors themselves.

Change is either ‘remembered’ as a constant state, as with a toggle switch, or as a change in state, as with a push button. (Maintained connection versus momentary connection.) Interestingly, sensors like push buttons or force sensors, which indicate a change in state from the base condition, do not themselves have visual indicators that show the change. (As opposed to a potentiometer or toggle switch, in which the state or reading is reliant on the actual physical position, and thus a visual indicator.)

How can either of these techniques promote a verbose or concise set of controls?

  • Brightness levels and states can either be ‘remembered’ by provisions in the code or ‘remembered’ by the hardware constitution/wiring of the sensor;
  • A toggle switch sets one of two physical states, determining an electrical connection; it is still coded, though, as one position is identified as On while the other is identified as Off.
  • Similarly, knob and linear potentiometers: the physical connection is maintained at the adjusted position. (A rotary encoder knob, by contrast, requires storage, but differently.)

Conversely, a push button denotes a change in state but isn’t physically changed once it is released; thus it relies more directly on code (physicality at another scale) to toggle between On and Off. A push button is a change, whereas a toggle switch is either on or off.
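The contrast can be sketched in code: the toggle switch’s position directly is the state, while the push button’s state must be latched in software on each edge. This is an illustrative sketch (the function names are hypothetical, and `reading` stands in for a raw digital read):

```javascript
// Hypothetical sketch: a momentary button needs software memory,
// while a toggle switch's position *is* its memory.
let ledOn = false;       // latched state for the push button
let lastReading = false; // previous raw reading, for edge detection

// Push button: flip the latched state on each rising edge.
// The change in reading, not the reading itself, carries meaning.
function updateFromButton(reading) {
  if (reading && !lastReading) ledOn = !ledOn;
  lastReading = reading;
  return ledOn;
}

// Toggle switch: the physical position is the state; no stored
// variable is needed beyond the reading itself.
function updateFromToggle(reading) {
  return reading;
}
```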

Similarly, analog sensors:

  • Force sensors/soft potentiometers output a default reading unless a change/interaction is sensed (depending on wiring);
  • Like the push button, for the value to persist, it must be stored in the code.
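The analog case above can be sketched the same way (hypothetical names; `BASELINE` and `NOISE` are assumed values for a sensor’s resting reading and noise band):

```javascript
// Hypothetical sketch: a force sensor returns to its baseline when
// released, so the last meaningful value must persist in code.
const BASELINE = 0; // assumed resting reading when untouched
const NOISE = 5;    // assumed band around baseline to ignore

let storedValue = BASELINE; // software memory for the sensor

function updateFromForceSensor(reading) {
  // Only a reading that departs from the baseline updates the
  // stored value; releasing the sensor leaves it unchanged.
  if (Math.abs(reading - BASELINE) > NOISE) storedValue = reading;
  return storedValue;
}
```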

Early Sketches