Color Depth uses a depth image and corresponding RGB data to reconstruct a view from above.
As the pixels are transposed from a frontal view to a view from above, only the front edges of objects are visible, but their relative positions to each other are maintained. The objects themselves are difficult to discern, seemingly caught between elevational and planimetric perspectives.
Technicals
The depth image was captured using a Kinect’s infrared array to measure distance from the camera. Depth is represented as grayscale values — closest is black, farthest is white. The depth image pixels were read row by row and transposed to the Y dimension. Each transposed pixel was then colored from the corresponding pixel in the RGB image.
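The transposition described above can be sketched roughly as follows. This is an assumed re-implementation in Python, not the project's original code: each frontal pixel at (x, y) with grayscale depth d is re-plotted at (x, d) in a top-down canvas and colored from the matching RGB pixel, so each column keeps only whichever pixels land at each distance.

```python
def depth_to_top_view(depth, rgb, depth_levels=256):
    """Re-project a frontal depth image into a top-down view.

    depth: 2D list of grayscale values (0 = closest, 255 = farthest).
    rgb:   2D list of (r, g, b) tuples, same shape as depth.
    Returns a top-down image: rows indexed by distance, columns by x.
    """
    height = len(depth)
    width = len(depth[0])
    # Start with a white canvas: one row per possible depth value.
    top = [[(255, 255, 255)] * width for _ in range(depth_levels)]
    # Read the frontal image row by row; transpose depth into the
    # Y dimension and color the pixel from the RGB image.
    for y in range(height):
        for x in range(width):
            d = depth[y][x]
            top[d][x] = rgb[y][x]
    return top
```

Because several frontal pixels in a column can share one depth value, later rows overwrite earlier ones — which is why only a thin front edge of each object survives in the result.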
Next Steps
- Explore extending the color from an object’s front edge until it is interrupted by the next object’s front edge, filling in the white space.
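One way this fill could work — a hypothetical sketch, not part of the project yet — is a per-column forward fill over the top-down image: walk each column from near to far, carrying the most recent front-edge color into the white space until a new front edge interrupts it.

```python
WHITE = (255, 255, 255)  # assumed background value in the top-down image

def fill_behind_edges(top):
    """Extend each front-edge color away from the camera (down its
    column) until the next front edge interrupts it."""
    filled = [row[:] for row in top]
    width = len(top[0])
    for x in range(width):
        current = None  # color of the most recent front edge seen
        for d in range(len(top)):  # near (d=0) to far
            if top[d][x] != WHITE:
                current = top[d][x]      # a new front edge starts here
            elif current is not None:
                filled[d][x] = current   # carry its color into the gap
    return filled
```

This treats every colored pixel as the start of a new object, so noise in the depth image would cut fills short; some thresholding or smoothing would likely be needed in practice.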