Looking Around (021)

Looking Around is a photo series of the intersections along 72nd Street. How distinct is the north of each avenue from the south? Is everyone on 72nd St coming from or going to Central Park? How close to the shore can one get at either end?

The photos have two formats: a grid, with each intersection as a column and each direction as a row; and clusters, where images are radially positioned around each intersection. Pressing the spacebar toggles between formats. Clicking any image presents a rapid succession of full-size images, either all photos within an intersection or all photos in a particular direction across intersections.

The images were collected on the afternoon of Saturday, January 27th, 2018 using an iPhone X.

The images were positioned with D3.js and plotted horizontally by intersection.
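A minimal sketch of that positioning, assuming a photos.json whose entries carry hypothetical intersection, direction, and src fields, and a 120px thumbnail cell; the project's actual data structure and dimensions may differ.

  // Grid format: intersections as columns, directions as rows (D3 v5+ for the promise-based d3.json).
  d3.json('photos.json').then(photos => {
    const intersections = [...new Set(photos.map(d => d.intersection))];
    const directions = ['N', 'E', 'S', 'W'];
    const cell = 120; // thumbnail size in px (assumed)

    d3.select('#grid')
      .selectAll('img')
      .data(photos)
      .join('img')
      .attr('src', d => d.src)
      .style('position', 'absolute')
      .style('width', cell + 'px')
      .style('left', d => intersections.indexOf(d.intersection) * cell + 'px')
      .style('top', d => directions.indexOf(d.direction) * cell + 'px');
  });

The cluster format would swap the left/top offsets for small radial offsets around each intersection's center point.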

Next Steps:
  • Crop photos to squares and export two different file sizes: a downsized set for the thumbnail views, and a high-resolution set for full-size view.
  • Constrain scrolling to a container element rather than the whole window, so the image sequence doesn’t need to be repositioned.

Crossing 72nd (020)

Crossing 72nd captures the sound, altitude, and walked path along the length of 72nd Street in Manhattan. Each is represented as a line, and the correspondence between them isn’t always evident. Described together, the sounds of a passing bike, children’s laughter and car horns animate the turns of the path.

Clicking at any point on the walked path jumps to the corresponding time in the audio. A vertical playhead indicates the position across all three representations.
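A minimal sketch of that interaction, assuming an array of path points with a hypothetical seconds field, an audio element holding the recording, and a timeScale linear scale from seconds to horizontal position; none of these names come from the project itself.

  // Clicking a path point jumps the recording to that moment (D3 v6+ listener signature).
  const audio = document.querySelector('audio');

  d3.select('#path')
    .selectAll('circle')
    .data(points) // points: [{ x, y, seconds }, ...] (assumed shape)
    .join('circle')
    .attr('cx', d => d.x)
    .attr('cy', d => d.y)
    .attr('r', 4)
    .on('click', (event, d) => {
      audio.currentTime = d.seconds;
      audio.play();
    });

  // Move a vertical playhead across all three representations as the audio plays.
  audio.addEventListener('timeupdate', () => {
    const x = timeScale(audio.currentTime); // timeScale: a d3.scaleLinear from seconds to px (assumed)
    d3.selectAll('.playhead').attr('x1', x).attr('x2', x);
  });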

Data was collected on the afternoon of Saturday, January 27th, 2018. GPS position and altitude were recorded using a Bad Elf GPS Logger, automatically logging data at 12 second intervals. Audio was recorded using an iPhone 6S, held while walking. Additional data included a series of photos at each intersection, one in each street direction.

In QGIS, the qProf plugin was used to read the GPX data, including timestamps, altitude, and position. The captured points were then culled to smooth the line, hence the irregular spacing of points.

Next Steps:
  • Replace the recorded path points with ones taken at regular intervals along the path, using the QChainage plugin.
  • Reduce noise in the audio track, particularly in windy sections.
  • Fix scaling issue with the browser width.

Dynamic Halftone (019)

Dynamic Halftone is a web interface for transforming aerial imagery into halftone representations of shadow, vegetation, and terrain.

Each layer of the halftone uses a different data source: RGB, a “natural color” image; near-infrared, a “false color” image used to identify vegetation; and a Digital Elevation Model (DEM), an elevation rendering in greyscale pixels (USGS 1m). From each source, only one parameter was analyzed and represented: the red hue in the near-infrared image, and brightness in both the RGB and DEM images. The magnitude of each is represented by the radius of a circle, while the spacing within each layer is determined by the sampling area.
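A minimal sketch of one layer, assuming the source imagery has already been drawn to an offscreen canvas; the 20px sampling cell, the function name, and the brightness parameter are illustrative (the near-infrared layer would sample only the red channel instead).

  // Sample one parameter per cell of the source canvas and draw a circle whose radius encodes it.
  function drawHalftoneLayer(src, out, cell = 20) {
    const sctx = src.getContext('2d');
    const octx = out.getContext('2d');
    const { width, height } = src;

    for (let y = 0; y < height; y += cell) {
      for (let x = 0; x < width; x += cell) {
        // Average brightness over the sampling area (for the NIR layer, use data[i] alone).
        const { data } = sctx.getImageData(x, y, cell, cell);
        let sum = 0;
        for (let i = 0; i < data.length; i += 4) {
          sum += (data[i] + data[i + 1] + data[i + 2]) / 3;
        }
        const brightness = sum / (data.length / 4) / 255; // normalized 0 to 1

        // Magnitude sets the radius; the spacing comes from the cell size.
        octx.beginPath();
        octx.arc(x + cell / 2, y + cell / 2, (brightness * cell) / 2, 0, Math.PI * 2);
        octx.fill();
      }
    }
  }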

In reducing each base image to a single factor and overlaying them, new patterns and forms are made evident. Where do shadow, vegetation, and terrain correspond?

Interaction

Through a set of sliders, users can modify the size of the subdivision and the relative size and location of symbols. While the parameters of each layer are configurable, does this interaction constitute map-making or simply an adjustment to the representation of the same map?

The underlying data does not change, nor is there an ability to include or exclude other data. Yet the imagery itself is still just another representation of the physical world. As with any map, representation is inherently constrained by processing methods, data collection techniques, the data source, and the medium of presentation. In “Dynamic Halftone”, the end user, though working within a set of constraints, still engages in the curation and culling of information.
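A rough sketch of the slider interaction described above, reusing the hypothetical drawHalftoneLayer() and canvases from the previous sketch and assuming a range input with id "cell"; the actual controls and parameter names are not documented here.

  // Redraw the layer whenever the subdivision-size slider changes.
  document.getElementById('cell').addEventListener('input', event => {
    const cell = +event.target.value; // subdivision size in px
    out.getContext('2d').clearRect(0, 0, out.width, out.height); // out/src: canvases from the sketch above
    drawHalftoneLayer(src, out, cell);
  });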

Technical

While previous maps in this series have used Mapbox for tiling and geolocation, “Dynamic Halftone” uses Leaflet, an open-source library for interactive maps, to load and position the appropriate tiles.

Attempts to integrate custom raster tiles from QGIS into Mapbox were a headache for many reasons: the service only supports GeoTIFFs, requires uploading tile sets through a separate API, and has limited documentation. But more importantly, we were starting to feel constrained by Mapbox as a platform. Leaflet, as a library, instead accepts various formats easily, without the restrictions involved in uploading to a proprietary service. From QGIS, tile generation for each desired zoom level (15 through 17) was automated as PNGs using the QTiles plugin.
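A minimal sketch of loading those tiles, assuming the QTiles export lives at tiles/<layer>/{z}/{x}/{y}.png; the paths, layer names, and center coordinates are placeholders, not the project's actual values.

  const map = L.map('map', {
    center: [40.78, -73.96], // placeholder coordinates
    zoom: 16,
    minZoom: 15,
    maxZoom: 17,
  });

  // One tile layer per data source, exported from QGIS via QTiles at zooms 15 through 17.
  ['rgb', 'nir', 'dem'].forEach(name => {
    L.tileLayer(`tiles/${name}/{z}/{x}/{y}.png`, { minZoom: 15, maxZoom: 17 }).addTo(map);
  });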

The technique of reading a loaded web map, parsing its pixel data, and redrawing to a new canvas element was initially developed in Tile Swap (005). But as Leaflet builds maps from plain DIV elements rather than a canvas, the multiple image tiles in each map were first merged into a single canvas using html2canvas.
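A minimal sketch of that step, assuming the Leaflet map lives in #map; the useCORS option only matters if the tiles are served from another origin.

  // Composite the map's tile DIVs into one canvas, then sample its pixels for the halftone layers.
  html2canvas(document.getElementById('map'), { useCORS: true }).then(merged => {
    const ctx = merged.getContext('2d');
    const pixels = ctx.getImageData(0, 0, merged.width, merged.height).data;
    // ...sample `pixels` cell by cell and redraw circles, as in the halftone sketch above
  });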

Above, an early study shows brightness halftones changing as the map pans. That study was built with Mapbox and its own satellite tiles.

Next Steps
  • Add functionality to toggle between the RGB base image and the halftone representation.
  • Add functionality to pan and zoom to different areas of New York City.
  • “Hide” the base maps in a better way, but still make them accessible to drawing.
  • Resolve performance issues — toggling the sliders takes a while to update the image.
  • Use alphanumeric characters as symbols, either drawn from the data (numeric representation of elevation) or arbitrary, e.g. does the graphic form of “7” relate to its semantic meaning?
  • Add functionality for X-Y translation, saturation, and brightness, to be determined by magnitude.
  • Add functionality for variable subdivision, e.g. recursively subdividing by magnitude and threshold, which would alter density.

 

Connected Devices: 001

In preparation for one of my planned 100-day exercises, I set up a server to store and provide environmental data: light, sound, and temperature recorded on my roof. All of the data is stored in a simple JSON object maintained on the server itself. I plan on moving the data storage to MongoDB so that the data is preserved in the event that my server goes down. The full code is available on GitHub, while snippets are provided below.

Server:

Get Requests:

  • /latest returns the latest environmental reading.
  • /all returns all the readings.
  • /sensor/:sensor (light, sound, temperature) returns all the readings from a particular sensor.

Post Requests:

  • /addReading creates a new timestamped reading with a data point for each sensor.
  • /resetReadings erases all existing readings.
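A minimal Express sketch covering the routes above, with readings held in an in-memory array standing in for the JSON object described earlier; the field names and port are illustrative, not the actual implementation.

  const express = require('express');
  const app = express();
  app.use(express.json());

  let readings = []; // e.g. { timestamp: 1517079600000, light: 512, sound: 38, temperature: 4.2 }

  app.get('/latest', (req, res) => res.json(readings[readings.length - 1] || {}));
  app.get('/all', (req, res) => res.json(readings));
  app.get('/sensor/:sensor', (req, res) =>
    res.json(readings.map(r => ({ timestamp: r.timestamp, value: r[req.params.sensor] })))
  );

  app.post('/addReading', (req, res) => {
    const reading = { timestamp: Date.now(), ...req.body }; // one data point per sensor
    readings.push(reading);
    res.json(reading);
  });

  app.post('/resetReadings', (req, res) => {
    readings = [];
    res.sendStatus(200);
  });

  app.listen(3000);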

Every Borough Has a Broadway (018)

Every Borough Has a Broadway (live link) juxtaposes street-level images of the Broadways throughout New York City. Is there anything particularly distinct about each Broadway? How does each Broadway change along its length? What makes the Broadway in Brooklyn different from that in Queens?

The images are randomly paired, so the distinction between them is sometimes obvious and sometimes more ambiguous.

Each borough has a street named Broadway. Each is a different length, with Manhattan’s being the longest. As a result, Manhattan appears more frequently, though that frequency isn’t apparent because the streetscape varies greatly along its length.

QGIS was used to generate a set of points along all the Broadways in New York, from which two could be randomly selected. The NYCOD street centerline data set was filtered to include only streets named Broadway. Although these streets are quite long, they are constructed from many small line segments, broken at each intersection. Using these individual segments, the direction of each could be calculated and added as a new property field. The formula for determining the angle of a line segment was found in a post on GIS Overflow, a discussion board for technical GIS questions. With the angle assigned, nodes were extracted from each segment to generate individual points carrying the same property information. These points were finally exported as a GeoJSON object and used for the random pairing of different Broadways. The Google Street View API took the coordinates and oriented the camera based on the heading direction.
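A minimal sketch of the pairing, assuming the exported points live in broadways.geojson with a hypothetical heading property, and using the Street View Static API (the project may use a different Street View endpoint); YOUR_KEY is a placeholder.

  // Pick two random Broadway points and request a Street View image oriented along each segment.
  d3.json('broadways.geojson').then(geo => {
    const picks = d3.shuffle(geo.features.slice()).slice(0, 2);

    d3.select('#pair')
      .selectAll('img')
      .data(picks)
      .join('img')
      .attr('src', d => {
        const [lng, lat] = d.geometry.coordinates; // GeoJSON order: [lng, lat]
        return 'https://maps.googleapis.com/maps/api/streetview' +
          `?size=640x640&location=${lat},${lng}` +
          `&heading=${d.properties.heading}&key=YOUR_KEY`;
      });
  });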

Next Steps:

  • Add specific address information using Google’s Geocoding API.
  • Automatically refresh the pair after a set interval.