Live Body-Context (063)

Live Body-Context uses machine learning to identify and isolate people from a live video stream.

This map is a technical development of Body-Context (047): it uses a computer vision algorithm, the Single Shot MultiBox Detector (SSD) with MobileNet, for object recognition on live video. As in Body-Context (047), the isolation of the figure from its context, and vice versa, illustrates how each gives meaning to the other.
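As a concrete illustration of that detection step, here is a minimal sketch of finding people in a single frame with an SSD + MobileNet model. The TensorFlow Hub model URL and the detect_people helper are assumptions for this example, not the project's actual code.

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Assumed stand-in: a pre-trained SSD MobileNet detector from TensorFlow Hub
# (same architecture as named above, not necessarily the same weights).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

PERSON_CLASS_ID = 1  # "person" in the COCO label map

def detect_people(frame_bgr, score_threshold=0.5):
    """Return pixel bounding boxes (x1, y1, x2, y2) for people in a BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    result = detector(tensor)

    boxes = result["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()

    h, w = frame_bgr.shape[:2]
    people = []
    for box, cls, score in zip(boxes, classes, scores):
        if cls == PERSON_CLASS_ID and score >= score_threshold:
            ymin, xmin, ymax, xmax = box
            people.append((int(xmin * w), int(ymin * h), int(xmax * w), int(ymax * h)))
    return people
```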

Here, the figures do not respond to the changing representation. Whether they are isolated or removed from the context, their performance continues unchanged.

Unlike in Body-Context (047), the figures are not silhouetted but cropped with rectangular bounding boxes. When they are isolated, the immediate context still surrounds them, offering clues to their actions. When they are removed from the context, however, they are not recognizable: the rectangle could be obscuring anything.
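Both treatments amount to masking the frame with the detected boxes. A rough sketch, assuming boxes arrive as (x1, y1, x2, y2) pixel coordinates as in the detection sketch above:

```python
import numpy as np

def isolate_figures(frame, boxes):
    """Keep only the pixels inside each bounding box; black out the context."""
    out = np.zeros_like(frame)
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = frame[y1:y2, x1:x2]
    return out

def remove_figures(frame, boxes, fill=255):
    """Cover each bounding box with a flat rectangle, leaving only the context."""
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = fill  # the rectangle could be obscuring anything
    return out
```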

Technicals

The map was built with a Python server that runs the TensorFlow model, exposes it as an internal API, and processes the WebRTC webcam stream. It builds on a tutorial by Chad Hart on webrtcH4cKS, “Computer Vision on the Web with WebRTC and TensorFlow”, which walks through the TensorFlow Object Detection API and how to connect it to a webcam stream and a server.
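As a sketch of what that internal API could look like: a small Flask endpoint that receives a frame captured from the WebRTC stream and returns bounding boxes as JSON. The /detect route, the JPEG payload, and the response shape are assumptions for illustration; the tutorial covers the actual WebRTC plumbing.

```python
import io

import numpy as np
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

@app.route("/detect", methods=["POST"])
def detect():
    # The browser grabs a frame from the webcam stream and POSTs it as raw JPEG bytes.
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    frame_bgr = np.ascontiguousarray(np.array(image)[:, :, ::-1])  # RGB -> BGR
    boxes = detect_people(frame_bgr)  # helper from the detection sketch above
    return jsonify({"people": [list(box) for box in boxes]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```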

Next Steps
  • Explore how different levels of awareness change the performance: how does someone act when they are represented only as a white rectangle? How do they act without a context to act within?
  • Explore using the segmentation addition to the API (see the sketch after this list)
  • Use a peer server provided by PeerJS to connect many webcams
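On the segmentation bullet: the Object Detection API also offers Mask R-CNN models whose outputs include per-detection masks, which would allow silhouettes (as in Body-Context 047) rather than rectangles. A hedged sketch, assuming a TensorFlow Hub Mask R-CNN model and the reframing utility from the object_detection package; neither is part of the current map:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from object_detection.utils import ops as utils_ops  # TF Object Detection API utilities

# Assumed model: a Mask R-CNN SavedModel that returns "detection_masks".
segmenter = hub.load(
    "https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1"
)

def person_masks(frame_rgb, score_threshold=0.5):
    """Return one full-frame boolean silhouette mask per detected person."""
    h, w = frame_rgb.shape[:2]
    tensor = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    result = segmenter(tensor)

    boxes = result["detection_boxes"][0]
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()
    box_masks = result["detection_masks"][0]  # masks are relative to each box

    # Reproject the per-box masks into full-image coordinates.
    image_masks = utils_ops.reframe_box_masks_to_image_masks(box_masks, boxes, h, w).numpy()

    return [mask > 0.5
            for mask, cls, score in zip(image_masks, classes, scores)
            if cls == 1 and score >= score_threshold]  # class 1 = "person"
```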
