Synchronicity: Coupling Sound and Visuals

Synchrony occurs when events operate or occur in unison. At the core of synchrony is the passage of time and a change (the event) occurring within it. “Events” refer to a change in something—sound, motion, visuals, etc. Something was on, and then it wasn’t. Something was here, and then it was there. The other component of synchrony is the relationship between multiple events; synchrony cannot be identified in isolation. When exploring synchrony, the questions to ask are:

  • What are the events, or what things are changing?
  • What is the duration of the relationship?

In the physical world around us, events are rarely synchronized. We walk at a different pace than those around us; cars accelerate at different speeds when the light switches from red to green. But this is precisely why synchronicity can capture our attention. Because it’s so uncommon, we often take note when objects, people, actions or sounds around us are in sync. Sometimes this synchronicity is planned and part of a performance, such as an orchestra playing to the time kept by a conductor.

Sometimes it happens by chance. There’s a goofy scene in Scrubs in which the sounds playing in JD’s headphones seemingly align with the actions and movements of those around him.

I’m interested in exploring how changes in synchrony and asynchrony may affect the attention of a user or focal point of a piece of work. What interactions can be used to adjust the synchrony of events? Can users only participate in a piece if they perform synchronously? How does synchrony or asynchrony affect their behaviour?

Complete Audio and Visual Synchrony

Jono Brandel created Patatap, a keyboard-controlled sound and visual animation platform. Individual sounds and short animated elements are mapped to each key. Pressing the spacebar produces a new colour scheme as well as a new set of sounds. When keys are played systematically, the user can generate melodies and rhythm. At an event in San Francisco, Brandel himself demonstrated this with the help of what I believe was a looper.

What’s particularly evident in the performance video is how intertwined and inseparable the sounds and visuals are. Each time the coloured background swipes across the screen, it reinforces the underlying beat. Perhaps this level of synchronicity is particularly suited to the type of electronic sounds available in the platform. Like each animation, each sound is triggered in isolation and presented precisely. Sounds and visuals overlap, but each trigger and its presentation to the audience occurs individually.
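Patatap’s core mechanic can be imagined as a simple lookup: each key resolves to a paired sound and animation, and the spacebar swaps out the whole palette at once. This is only a hypothetical sketch of that idea—the names and structure are illustrative, not Patatap’s actual implementation.

```python
# Illustrative sketch of a Patatap-style key mapping.
# Each palette pairs keys with a (sound, animation) tuple;
# the spacebar cycles to the next palette. All names are made up.

PALETTES = [
    {"a": ("kick.wav", "circle_burst"), "s": ("snare.wav", "wipe_left")},
    {"a": ("tone_low.wav", "square_spin"), "s": ("tone_hi.wav", "wipe_up")},
]

class KeyPad:
    def __init__(self):
        self.palette_index = 0

    def press(self, key):
        """Return the (sound, animation) pair for a key,
        or cycle palettes (returning None) when space is pressed."""
        if key == " ":
            self.palette_index = (self.palette_index + 1) % len(PALETTES)
            return None
        return PALETTES[self.palette_index].get(key)
```

Because sound and animation are bound in a single tuple, they are triggered together by construction—the inseparability of audio and visual is baked into the data structure rather than coordinated after the fact.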

Variable Synchrony and Asynchrony

In Bon Iver’s recent music video for 29 #Strafford APTS, the audio is accompanied by distorted visuals akin to seeing through a kaleidoscope. The visuals at the beginning of the song seem to take inspiration from Josef Albers, composed of colour blocks with stark edges. Yet this clarity and precision is disrupted by the effects of the kaleidoscope—they become layered, multiplied and seemingly of a dream state. The transformation of the visuals and the effect of the kaleidoscope do not seem to be tied to the audio. Changes happen without regularity. The computer-generated graphic compositions switch to recorded footage of birds, a bedroom, nature. Yet there is one striking moment of alignment between the music and visuals. After zooming into a psychedelic sunburst graphic above the bed, and seeing the pixel grain of the screen displaying this digital image, at 2:53 both the music and visuals “snap out of it”—almost like waking from the dream. With the reappearance of words layered on the visuals, the viewer/listener is reminded that a story is being told. The dream state they were lulled into through the use of asynchrony and blended changes is disrupted by the sudden alignment of sound and visual.

Work in Progress

To explore this idea of visual and auditory synchronization, I want to create a potentially-interactive animation to be coupled with a previous sound piece I worked on. In early prototyping, I’ve started looking at how to get objects moving independently.

I imagine building out a larger prototype in which multiple objects are synchronized to different aspects of an audio clip. Maybe changes in volume result in something changing size, or an object appears and disappears in line with the beat. Are all objects simultaneously synchronized with the audio or do they each come in and out of sync independently?
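As a first prototyping step, the volume-to-size idea above can be sketched as a mapping from per-frame loudness (RMS) to a circle’s radius. This is a minimal sketch under my own assumptions—the frame size, radius range and function names are all placeholders for experimentation, not a finished design.

```python
import math

def rms_per_frame(samples, frame_size):
    """Split a list of samples into frames and return each frame's RMS loudness."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def volume_to_radius(rms, min_r=10, max_r=100):
    """Linearly map an RMS value in [0, 1] to a circle radius in pixels."""
    return min_r + (max_r - min_r) * min(rms, 1.0)

# A fading 440 Hz tone at 8 kHz: loud at the start, quiet at the end.
signal = [math.sin(2 * math.pi * 440 * t / 8000) * (1 - t / 8000)
          for t in range(8000)]
radii = [volume_to_radius(r) for r in rms_per_frame(signal, 1000)]
```

Each entry in `radii` would drive one animation frame, so the circle shrinks as the tone fades. The same per-frame analysis could feed other mappings—an onset detector for the beat-synchronized appear/disappear idea, for instance—with each object subscribing to a different audio feature.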

The Memory Pill, A Design Fiction

In collaboration with Yuan Chen and Jingfei Lin

What if you could buy someone’s memory? Or record every emotion you felt and play it back exactly as it was? Would you want to?

The Memory Pill temporarily alters the biological formation and experience of memories. It operates in two states: record or experience. When taking a blank pill, users record all that they experience for 24 hours. Emerging from their belly button, the 24 hours are physicalized into a memory growth. Upon removal, the physical growth is uploaded to the Universal Memory Bank and reconstituted into an experience pill. Browsing the Universal Memory Bank, individuals can download, print and then ingest the experience pill of someone else, or one from their own catalogue of uploaded memories. But while the potential experiences of an individual have now exploded, what does it mean to share memories? How can multiple people experience a moment from a single point of view? How is the “memory” reinterpreted? Would this enable ‘true’ empathy? Is empathy ever possible—or do we only attempt to empathize and are never completely able? Swallow and find out.

Storyboard

The design fiction is illustrated through two perspectives: recording and living/re-living. A set of locked-down shots at the beginning (Act One) shows “a day in the life” of our main character, River. The camera is mostly fixed, and viewers see the action move in and out of the frame. It doesn’t capture every detail, unlike the pill! In Act Two, the day is retold through the perspective of those who have taken the experience pill at different points in time. Viewers watch the day unfold again, but through POV shots from River’s perspective. The characters provide multiple interpretations of the recorded event. They provide voice-over commentary and are also seen in fixed, stationary medium shots—filmed like an interview. The mixture of these techniques seeks to express the questions posed at the beginning of the post. What constitutes a memory for multiple people?

Sound and Data

Sonification

mapping information or data into non-speech sound
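The simplest form of this mapping is parameter mapping: scale each data point onto an audible range, such as pitch. The sketch below is a minimal, hypothetical illustration of that idea—the frequency range and sample data are my own assumptions, not drawn from any of the tutorials linked below.

```python
def sonify(data, low_hz=220.0, high_hz=880.0):
    """Linearly map each data point onto a pitch range (in Hz):
    the minimum value becomes low_hz, the maximum becomes high_hz."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (x - lo) / span * (high_hz - low_hz) for x in data]

temps = [12, 18, 25, 9]  # e.g. daily temperatures in Celsius (made-up values)
pitches = sonify(temps)  # coldest day -> 220 Hz, hottest -> 880 Hz
```

The resulting frequencies could then be rendered as tones with any synthesis tool; the choice of range (here, two octaves from A3 to A5) is itself a design decision that shapes how the data is heard.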

Further Reading

stAllio!’s Editing Images with Sound Software Tutorial

stAllio! result from cutting and pasting the data from a bitmap file within a sound editing program

Shawn Graham, Tutorial on Sonification

Sounds of Science, the Mystique of Sonification

Mark Sample, Notes toward a Deformed Humanities

The deformed work is the end, not the means to the end. The Deformed Humanities is all around us. I’m only giving it a name. Mashups, remixes, fan fiction, they are all made by breaking things, with little regard for preserving the original whole.

Daniel Temkin, Glitch && Human/Computer Interaction

Precedents

Andy Baio’s experiments in converting original MP3s to MIDI, and back again to MP3s

Jordan Hochenbaum and Owen Vallis’s “Weather Report” which translates weather data into sound

Making Sounds, Part 2: The Physical Space of Sound

Although my recent sound piece with Oriana is intended to be heard individually through headphones, I’m inspired to explore the physicality of sound from both device and spatial perspectives.

  • Can sounds be choreographed to move like wind through a space? What effect does this have if people are moving versus stationary?
  • What effect does seeing the sound-emitting device have on an installation? Why would it be important to hide the source of sounds or reveal them?
  • How can I create my own speakers to play sounds? Or a device that emits sound it generates itself? How Speakers Work, How to Make a Speaker

A few months ago, I heard Julianne Swartz’s tunnel installation piece at Mass MoCA. Within the tunnel connecting Sol LeWitt’s gallery to the rest of the museum, tiny speakers were carefully placed behind beams and steel rods. As you walk down the corridor, sounds are heard from behind you, then to the left of you, or far ahead of you, or just below you. It is as if sound is physically jumping from speaker to speaker. The spatial dimension is most evident when the speakers perform “in conversation”, each taking a turn. Often the sounds are single tones, sung individually but occasionally merging together. The metal tunnel itself is an instrument, layering in reverberation and echo.

Q2 Music fortunately recorded a brief snippet on Instagram, but the piece’s physical dimension is inevitably lost in the “two-dimensional” recording.

In a separate piece from 2012, Swartz installed meandering PVC and plastic tubing through the deCordova Museum, tracing its way from the utility room, up three floors, to the gallery. At points along the way, openings in the tubing allow for “listening leaks”.

Julianne Swartz, How Deep Is Your, 2012

What’s fascinating in both these installations is the physical dimension of sound. At Mass MoCA, the sound has a spatial presence through movement, jumping around within a contained space. At the deCordova Museum, visitors visually trace the sound throughout the museum and seek out opportunities in different spaces to hear the hidden audio. But at each leak location—along the stairwell, in the lobby, at the final funnel—the audio takes on a different form as it has lost bits of itself along the way.

Further Reading/Listening:

Sampo Syreeni, Sound as a Physical Phenomenon

Robert Curgenven, Climata and Sound, Landscape and the Bastard Child Soundscape

Tim Ingold, Against Soundscape

Making Sounds, Part 1: Emotional Earfuls

Octavia E. Butler’s “Bloodchild” is a love story. Science fiction is often able to confront very real and tangible issues through the disguise of unbelievability or absurdity. In the afterword to “Bloodchild”, Octavia E. Butler explicitly states that the short story is concerned with three things: love between two different beings, having to make a difficult decision when faced with disturbing consequences, and men facing pregnancy. Fundamentally, this can be understood as a question of who we love, how we demonstrate love and what we are willing to do for those we love. These questions of love are universal, not limited to the realm of aliens or science fiction.

With this in mind, I collaborated with Oriana Neidecker to create an emotional soundscape inspired by Butler’s tale of love. How could sound communicate the emotional rollercoaster experienced by Gan, the main character? He loves T’Gatoi, perhaps more than his mother. Yet after watching and participating in the violent and gruesome birth of a Tlic by a fellow human male, he confronts whether he is willing to endure the same harrowing experience. Does he love T’Gatoi enough? His sister? His mother? Do we love anyone enough?

In structuring the piece, we identified four sections of emotion and described them in terms of colours. Discussing our intention through emotion and colour allowed us to broadly give definition to the written narrative without feeling constrained by the individual details. For each section, we established tempo and pacing, volume and legibility, degree of repetition, and how clips were cut together (blended or harshly).

  1. Grey-purple: calm, relaxed, placid, layered and blended sounds, yet on edge
  2. Deep red: chaos, anticipation, horrified, frenetic, abrupt cuts, overwhelmed
  3. Blue gradient: oscillating and alternating, waves moving in and out, thinking out loud, conflict
  4. Green: acceptance, reconciliation, growth

The completed piece is meant to be heard individually through headphones—as opposed to earbuds—in the dark and with the listener’s eyes closed. The sound is envisioned as emanating from within each of us, deep in our ribcage. The sounds are in conflict, fighting to be heard above one another and rising in volume. They jump from right-to-left-to-right-to-left, unsure of where to land. Yet they find resolution and acceptance. Love is a series of questions.