When you share a location, why is it represented as a point on a map?
I See What You See — But Not Really (051) explores how we identify place from an image. When two people are present on the website, each sees the Google Streetview corresponding to the other’s location. It’s like FaceTime or Google Maps, but represents your location as an image instead of your face or a point on a map.
How is seeing an image of a place different from a point on a map or a live video stream? How does our understanding of place change when seen only through an image?
Only the Streetview is shown, creating ambiguity about what the person on the other end actually sees. Is it a recent photo? Does the time of day correspond? Is this what they are looking at, or are they inside a building? When you see someone else’s location, are you aware that your location is also being shared?
The language on the site shifts from that of a third party (“Give yourself a name / Who are you looking for? / Waiting for Patrick.”) to that of the person on the other end (“Here I am!”).
Technicals
Sharing location data and checking whether both people have connected happens server-side, in a small Express application that stores each user’s coordinates and answers the client’s polling requests. After a user creates a name and indicates who they’re looking for, their location coordinates are sent to the server automatically, using the browser’s navigator.geolocation.watchPosition() method. Until the other user connects, the client polls the server once a second to check whether the second user has joined. Once both users are connected, each receives the other’s location data, which is fed into the Google Streetview API to show the corresponding image.
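A rough sketch of that flow is below. The route names (/location and /partner/:name), the in-memory users object, and the #pano container element are assumptions for illustration, and it presumes the Google Maps JavaScript API is loaded with a valid key; the project’s actual code will differ, but the shape of the exchange is the one described above.

```js
// --- server.js: Express app that stores locations and answers polling ---
const express = require('express');
const app = express();
app.use(express.json());

// name -> { lookingFor, coords } (hypothetical in-memory store)
const users = {};

// The client posts its name, who it is looking for, and its latest coordinates.
app.post('/location', (req, res) => {
  const { name, lookingFor, lat, lng } = req.body;
  users[name] = { lookingFor, coords: { lat, lng } };
  res.sendStatus(204);
});

// Polled once a second: say whether the partner has joined and, if so,
// hand back their most recent coordinates.
app.get('/partner/:name', (req, res) => {
  const me = users[req.params.name];
  const partner = me && users[me.lookingFor];
  if (partner) {
    res.json({ connected: true, coords: partner.coords });
  } else {
    res.json({ connected: false });
  }
});

app.listen(3000);

// --- client.js: watch my position, poll for the other person, show Streetview ---
const name = 'amalia';        // from the form
const lookingFor = 'patrick'; // from the form

// Continuously send this user's coordinates to the server.
navigator.geolocation.watchPosition((pos) => {
  fetch('/location', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name,
      lookingFor,
      lat: pos.coords.latitude,
      lng: pos.coords.longitude,
    }),
  });
});

// Poll until the other user joins, then show their location as a Streetview
// panorama (and follow them if their coordinates keep changing).
let panorama = null;
setInterval(async () => {
  const res = await fetch(`/partner/${name}`);
  const data = await res.json();
  if (!data.connected) return; // still waiting
  if (!panorama) {
    panorama = new google.maps.StreetViewPanorama(
      document.getElementById('pano'),
      { position: data.coords }
    );
  } else {
    panorama.setPosition(data.coords);
  }
}, 1000);
```

Polling keeps the exchange simple, and for a piece built around two people at a time a once-a-second check is plenty.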
Next Steps
- Fix styling of the form: replace default fonts and sizes, especially on mobile.
- Consider showing the date and time at which the Streetview image was captured (see the sketch after this list).
- Make a recording in which one person is changing locations / walking around.
- Capture a “context” view of the current street-level conditions (for documentation purposes).
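For the capture-date item above, the Street View Image Metadata endpoint is one possible source; as far as I know it reports a capture date (typically year and month) rather than a time of day. The helper below is hypothetical, and YOUR_API_KEY and captionEl are placeholders.

```js
// Sketch: look up when the Streetview image nearest a point was captured.
async function streetviewCaptureDate(lat, lng, apiKey) {
  const url =
    'https://maps.googleapis.com/maps/api/streetview/metadata' +
    `?location=${lat},${lng}&key=${apiKey}`;
  const data = await (await fetch(url)).json();
  return data.status === 'OK' ? data.date : null; // e.g. "2019-07"
}

// Usage: caption the panorama once the partner's coordinates arrive.
// streetviewCaptureDate(coords.lat, coords.lng, 'YOUR_API_KEY')
//   .then((date) => { if (date) captionEl.textContent = `Captured ${date}`; });
```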