Routine Tracking (060)

Routine Tracking uses object detection to track people in a video of Times Square.

The first of two representations isolates tracked bodies from RGB video frames. As time passes and people move across each frame, the tracked images persist and blur together. The resulting trails of movement remain only until another figure crosses the path, effectively overwriting that history. Moving figures consume much of the scene, but people who are sitting still or standing remain in front of the paths. As more people are tracked, the context of Times Square is slowly revealed.

In the second representation, people are shown as blocks of color, without a trail of their past positions. They flicker in and out, changing color whenever the tracking algorithm identifies them as a ‘new’ person. Having seen the RGB representation first, and understanding the movement as walking, we recognize the discrepancy between our sense of the “same” person and the algorithm’s. Yet the abstract representation also reveals information: although the algorithm is tracking a single person, the rectangle wiggles and changes shape. Even figures that are seemingly stationary are shown as vibrating rectangles. Form stretches to encompass the changing stride and gait of the tracked body.

Technicals

Object tracking is achieved with darkflow, which builds on YOLO’s object detection algorithm. Tracking was executed in Python to produce a CSV file with tracking IDs and bounding-box coordinates for each frame. Using p5.js, the data was rendered frame by frame: each tracked box was either drawn as a colored rectangle or used to extract the figure from the corresponding video frame. Rather than play the original video in conjunction with p5.js, each frame was saved as an image that could be loaded independently.
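
Below is a minimal sketch of the tracking-to-CSV step. It uses darkflow’s documented TFNet interface, but the ID-assignment heuristic (greedy nearest-centroid matching), the distance threshold, and all file names are illustrative assumptions; the project does not describe its exact association logic, only that each detection receives a tracking ID per frame.

```python
import csv
import math
import os

import cv2
from darkflow.net.build import TFNet

# Model paths and confidence threshold are placeholders.
tfnet = TFNet({
    "model": "cfg/yolo.cfg",
    "load": "bin/yolo.weights",
    "threshold": 0.4,
})

MAX_MATCH_DIST = 60   # px; assumed cutoff for "same person" between frames
next_id = 0
prev = {}             # tracking ID -> centroid from the previous frame

cap = cv2.VideoCapture("times_square.mp4")  # hypothetical input file
os.makedirs("frames", exist_ok=True)
frame_no = 0

with open("tracks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "id", "x", "y", "w", "h"])

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Save the frame as a standalone image so the p5.js sketch can
        # load frames independently of the original video.
        cv2.imwrite("frames/%05d.png" % frame_no, frame)

        curr = {}
        for det in tfnet.return_predict(frame):
            if det["label"] != "person":
                continue
            x, y = det["topleft"]["x"], det["topleft"]["y"]
            w = det["bottomright"]["x"] - x
            h = det["bottomright"]["y"] - y
            cx, cy = x + w / 2, y + h / 2

            # Greedily reuse the nearest previous ID; otherwise mint a
            # new one. A missed match here is what makes a figure flicker
            # and change color as a "new" person in the rendering.
            best_id, best_dist = None, MAX_MATCH_DIST
            for tid, (px, py) in prev.items():
                d = math.hypot(cx - px, cy - py)
                if d < best_dist and tid not in curr:
                    best_id, best_dist = tid, d
            if best_id is None:
                best_id, next_id = next_id, next_id + 1

            curr[best_id] = (cx, cy)
            writer.writerow([frame_no, best_id, x, y, w, h])

        prev = curr
        frame_no += 1

cap.release()
```

With frames on disk and one CSV row per detection, the p5.js side presumably only needs to load the image for the current frame number and either draw each row as a colored rectangle keyed to its ID or copy the boxed region out of the frame image, keeping the rendering in lockstep with the tracking data.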