Dynamic Halftone (019)

Dynamic Halftone is a web interface for transforming aerial imagery into halftone representations of shadow, vegetation, and terrain.

Each layer of the halftone uses a different data source: RGB, a “natural color” image; near-infrared, a “false color” image used to identify vegetation; and a Digital Elevation Model (DEM), a greyscale rendering of elevation (USGS 1m). From each source, only one parameter is analyzed and represented: the red channel for near-infrared, and brightness for both the RGB and DEM images. The magnitude of each is represented by the radius of a circle, while the spacing within each layer is determined by the sampling area.
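The reduction to a single parameter can be sketched as two small functions. This is a minimal sketch, not the project's actual code; the function names and the luma weighting are assumptions:

```javascript
// Collapse an RGB sample to a single brightness value (Rec. 709 luma weights).
// For the near-infrared layer, the red channel alone would be used instead.
function luminance(r, g, b) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Map a 0-255 magnitude to a circle radius; at full magnitude the circle
// just fills its sampling cell.
function radiusFor(magnitude, cellSize) {
  return (magnitude / 255) * (cellSize / 2);
}
```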

In reducing each base image to a single factor and overlaying them, new patterns and forms are made evident. Where do shadow, vegetation, and terrain correspond?

Interaction

Through a set of sliders, users can modify the size of the subdivision and the relative size and location of symbols. While the parameters of each layer are configurable, does this interaction constitute map-making or simply an adjustment to the representation of the same map?

The underlying data does not change, nor is there an ability to exclude or include other data. Yet the imagery data itself is still just another representation of the physical world. As with any map, representation is inherently constrained by the data source, the collection techniques, the processing methods, and the medium of presentation. In “Dynamic Halftone”, the end user, though working within a set of constraints, still engages in the curation and culling of information.

Technical

While previous maps in this series have used Mapbox for tiling and geolocation, “Dynamic Halftone” uses Leaflet, an open-source library for interactive maps, to load and position the appropriate tiles.

Attempts to integrate custom raster tiles from QGIS into Mapbox were a headache for many reasons: the service only supports GeoTIFFs, requires uploading tile sets through a separate API, and has limited documentation. But more importantly, we were starting to feel constrained by Mapbox as a platform. Leaflet, by contrast, accepts various formats without the restrictions of uploading to a proprietary service. From QGIS, tile generation for each desired zoom level (15 through 17) was automated with the QTiles plugin, which exports PNGs.
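The Leaflet side of this setup is small. A minimal sketch, assuming the QTiles output sits in a local `tiles/` directory (the path, div id, and view coordinates below are hypothetical):

```javascript
// URL template pointing at the locally generated QTiles output (path is hypothetical).
const tileUrl = './tiles/{z}/{x}/{y}.png';

// In the browser, Leaflet consumes the template directly, constrained to the
// zoom levels that were actually exported (15 through 17):
// const map = L.map('map', { minZoom: 15, maxZoom: 17 }).setView([40.7, -74.0], 16);
// L.tileLayer(tileUrl, { minZoom: 15, maxZoom: 17 }).addTo(map);
```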

The technique of reading a loaded web map, parsing through the pixel data, and redrawing to a new canvas element was initially developed in Tile Swap (005). But because Leaflet builds maps from basic DIV elements rather than a canvas, the multiple image tiles in each map were first unioned onto a single canvas using html2canvas.
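A minimal sketch of that flow, assuming html2canvas is loaded in the page and the Leaflet map lives in a div with id `map` (both names hypothetical). The pure helper indexes the flat RGBA array returned by `getImageData`:

```javascript
// Index a flat RGBA array (as returned by getImageData) as a 2D image.
function pixelAt(data, width, x, y) {
  const i = (y * width + x) * 4;  // 4 bytes per pixel: R, G, B, A
  return { r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] };
}

// In the browser, html2canvas rasterizes the tiled DIVs into one canvas
// so its pixels can be read and redrawn as halftone symbols:
// html2canvas(document.getElementById('map')).then(canvas => {
//   const { data } = canvas.getContext('2d')
//     .getImageData(0, 0, canvas.width, canvas.height);
//   console.log(pixelAt(data, canvas.width, 10, 10));
// });
```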

Above, an early study shows brightness halftones changing as the map pans. The study was built earlier using Mapbox with their own satellite tiles.

Next Steps
  • Add functionality to toggle between the RGB base image and the halftone representation.
  • Add functionality to pan and zoom to different areas of New York City.
  • “Hide” the base maps in a better way, but still make them accessible to drawing.
  • Resolve performance issues — toggling the sliders takes a while to update the image.
  • Use alphanumeric characters as symbols, either drawn from the data (numeric representation of elevation) or arbitrary, e.g. does the graphic form of “7” relate to its semantic meaning?
  • Add functionality for X-Y translation, saturation, and brightness, to be determined by magnitude.
  • Add functionality for variable subdivision, e.g. recursively subdivide by magnitude and threshold, which would alter density.


Connected Devices: 001

In preparation for one of my planned 100-day exercises, I set up a server to store and provide environmental data: light, sound, and temperature recorded on my roof. All of the data is stored in a simple JSON object maintained on the server itself. I plan on moving the data storage to MongoDB so that the data is preserved in the event that my server goes down. The full code is available on GitHub, while snippets are provided below.

Server:

GET Requests:

  • /latest returns the latest environmental reading.
  • /all returns all the readings.
  • /sensor/:sensor (light, sound, temperature) returns all the readings from a particular sensor.

POST Requests:

  • /addReading creates a new timestamped reading with a data point for each sensor.
  • /resetReadings erases all existing readings.
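A minimal sketch of the in-memory store behind those routes (function names are assumptions; the actual server code is on GitHub). The Express wiring is indicated in comments:

```javascript
// In-memory store: a simple array of timestamped readings.
let readings = [];

// POST /addReading — append a reading with one data point per sensor.
function addReading(sensors) {
  const reading = { timestamp: Date.now(), ...sensors };
  readings.push(reading);
  return reading;
}

const latest = () => readings[readings.length - 1];  // GET /latest
const all = () => readings;                          // GET /all

// GET /sensor/:sensor — all readings from one sensor (light, sound, temperature).
const bySensor = name =>
  readings.map(r => ({ timestamp: r.timestamp, value: r[name] }));

const resetReadings = () => { readings = []; };      // POST /resetReadings

// Express wiring would look roughly like:
// const app = require('express')();
// app.get('/latest', (req, res) => res.json(latest()));
// app.get('/sensor/:sensor', (req, res) => res.json(bySensor(req.params.sensor)));
```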

Every Borough Has a Broadway (018)

Every Borough Has a Broadway (live link) juxtaposes street-level images of the Broadways throughout New York City. Is there anything particularly distinct about each Broadway? How does each Broadway change along its length? What makes the Broadway in Brooklyn different from that in Queens?

The images are randomly paired; the distinction between them is therefore sometimes obvious, but can also be more ambiguous.

Each borough has a street named Broadway. Each is a different length, with Manhattan being the longest. As such, Manhattan appears more frequently, but that frequency isn’t apparent as the streetscape varies greatly along its length.

QGIS was used to generate a set of points along all Broadways in New York, from which two could be randomly selected. The NYC Open Data street centerline data set was filtered to include only streets named Broadway. Although these streets are quite long, they are constructed from many small line segments, broken at each intersection. Using these individual line segments, the direction could be calculated and added as a new property field. The formula for determining the angle of a line segment was found in a post on GIS Stack Exchange, a discussion board for technical GIS questions. With the angle assigned, nodes were extracted from each segment to generate individual points containing the same property information. These points were then exported as a GeoJSON object and used in the random pairing of different Broadways. The Google Street View API takes the coordinates and orients the camera based on the heading direction.
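The angle step can be sketched as the standard forward-bearing formula between two lon/lat points, which is the form Street View's heading parameter expects (0 = north, increasing clockwise). This is a generic formula, not necessarily the exact one from the referenced post:

```javascript
// Forward bearing from point 1 to point 2, in degrees clockwise from north.
function bearing(lon1, lat1, lon2, lat2) {
  const toRad = d => d * Math.PI / 180;
  const toDeg = r => r * 180 / Math.PI;
  const dLon = toRad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(toRad(lat2));
  const x = Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
            Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;  // normalize to [0, 360)
}
```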

Next Steps:

  • Add specific address information using Google’s Geocoding API.
  • Automatically refresh the pair after a set interval.

Dominant Forms: Height (017)

Dominant Forms: Height (live link) uses commonly represented data — elevation — to create islands within New York City. On hover, contours of the same elevation are filtered and used to mask an aerial image.

Contours are vertical data represented in the horizontal plane; lines from which everything else is higher or lower. When combined with aerial imagery, contours cut through streets, buildings, parks, and other dominant boundaries. Steepness, and its counterpart, flatness, identify neighborhoods like Washington Heights and Red Hook. Yet, this indiscriminate dissection of the city goes unnoticed in the abstraction of orthographic representation.

The interaction is constrained to three scales, each with contours taken at different intervals. At the smaller scale, cars and buildings are bisected, whereas the larger scale identifies neighborhoods and infrastructure. Panning is restricted to focus the interaction on contour selection. A prototyping mode allows panning to identify areas of interest.

Using “Raster to Polygons” in QGIS, raster DEM data were transformed into polygons of the same elevation. Because the DEM data are tiled, an important next step is to dissolve boundaries between adjacent polygons of the same elevation.

Next Steps:

  • Replace current contours with dissolved contours in which each elevation forms a group of polygons.
  • Avoid intersections?
  • Can the angle be more precise so that it’s looking down, not to the side? Average direction of next 5-10 points rather than just the next point? How is bearing determined? (QGIS)

Compare To:

  • Sampling Prospect Park (016)
  • Dominant Forms: Areas (011)
  • Dominant Forms: Streets (forthcoming)

Sampling Prospect Park, Describing Prospect Park (016)

Sampling Prospect Park, Describing Prospect Park (live link) represents four ways of sampling elevation and two ways of describing it. Points are first differentiated by sampling method. Then, on hover, points are culled and associated in a different way: by corresponding height.

Continuous paths become fragmented, while neighbors in plan are found to have different neighbors in section. Points that were once similar are now different.

Points within a +/- 2ft range from all sampling groups are isolated. The following data sources were used: points sampled from a raster DEM (USGS 1m) at set intervals; points recorded with a GPS data logger at approximately set intervals; points recorded along a path every 12 seconds with a GPS data logger; and points sampled from spot elevations, at the highest part of a building or feature, captured by New York City. All points are located by satellite — in one way or another — and are distinct only in what was chosen to be identified by elevation.
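The hover-time culling can be sketched as a simple elevation filter over the combined point set (field and function names are assumptions):

```javascript
// Isolate points within +/- 2 ft of a target elevation, across all sampling groups.
function withinRange(points, elevation, tolerance = 2) {
  return points.filter(p => Math.abs(p.elevation - elevation) <= tolerance);
}
```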

Additional information on the data collection process is detailed in the post, “Field Data Collection (001): Prospect Park Walk”.

Next Steps:

  • Explore whether all sampling methods should have the same representation on hover, or maintain their previous shape and color.

Compare To:

  • Light In The Apartment (010)
  • Center Points vs Boundaries (forthcoming)