State of the Map US 2015 and the role of Governments in OpenStreetMap

A little over two weeks have passed since the closing of the State of the Map US (SOTMUS) conference in NYC. For those not familiar, SOTMUS is the yearly conference for the US chapter of OpenStreetMap (OSM). This period offered some much-needed time to reflect on the conference as a whole: setting, presentations and sessions, exhibitors, organization, and execution. On all points, I felt SOTMUS hit the mark and was a resounding success.

United Nations

Yes, I’m sure there were some minor shortcomings, as evidenced by some of the tweets I saw (#SOTMUS). Nonetheless, for a conference organized and executed by volunteers, and judging by the comments I heard, it was clearly a success. I have a newfound appreciation for the hard work that goes into organizing such a large event, having assisted the organizers in securing the Surrogate’s Court space for the opening night (NYC DoITT sponsored it). Kudos to the organizers! But alas, I digress.

This post is not intended to be a review of the event. Many others, I’m sure, have already covered that and are better positioned to do so. My objective is to dive further into the role local government could play in OpenStreetMap. This post can be seen as an extension of the panel I was on at SOTMUS, which, as a demonstration of interest in the topic, was the second of two panels on OSM and government. I’m sure that many would even question whether government has a role at all. To that I would say that duplicating or recreating what has already been mapped, and is increasingly available on open data sites, is time-consuming and wasteful. On the international landscape that is often not the case, but here in the US it is.

Consider the NYC building footprint (with height) and address import. Manually digitizing approximately one million buildings would have been a labor-intensive and lengthy process. On-screen digitizing over aerial photography of a lower resolution than NYC possesses would also have resulted in lower-quality and less consistent data. Contrast that with a careful import of high-quality, preexisting *authoritative* data that resulted in nearly complete and consistent coverage of NYC; in my opinion, that is hard to argue against. A bulk import then frees up the community to focus on keeping OSM current and filling in the gaps where needed. Certainly a less daunting task than starting – with respect to buildings – from a nearly blank canvas.

The NYC buildings and address import was largely undertaken by Mapbox. NYC DoITT assisted with planning and answering questions throughout the effort (NYC addressing is a challenge) and, of course, provided the data. Part of the effort included a change notification email that gets sent out each night, showing the changesets from the previous day. Since a changeset can comprise multiple edits, wading through numerous unrelated edits (the primary focus is on buildings and addresses) can be time-consuming; however, the change notification has proven useful and has resulted in hundreds of edits to NYC data.
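
As a rough sketch of how such a nightly digest could be assembled, the snippet below asks the public OSM API for changesets that touched an NYC bounding box during the previous day. The bounding box and the script itself are illustrative assumptions on my part, not the actual notification pipeline.

```python
# Sketch: list yesterday's changesets over a rough NYC bounding box.
# The bbox, time window, and output are illustrative; this is not the
# actual notification script described above.
import datetime
import xml.etree.ElementTree as ET

import requests

BBOX = "-74.26,40.49,-73.69,40.92"  # min_lon,min_lat,max_lon,max_lat (approximate NYC extent)

today = datetime.date.today()
yesterday = today - datetime.timedelta(days=1)

# OSM API 0.6: changesets that intersect the bbox and fall within the time range.
resp = requests.get(
    "https://api.openstreetmap.org/api/0.6/changesets",
    params={"bbox": BBOX, "time": "{0}T00:00:00Z,{1}T00:00:00Z".format(yesterday, today)},
    timeout=30,
)
resp.raise_for_status()

for cs in ET.fromstring(resp.content).findall("changeset"):
    print(cs.get("id"), cs.get("user"), cs.get("created_at"))
```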

Each changeset comes with a map (see example below) to guide the reviewer to the specific location of the edit. NYC DoITT staff review the changeset and apply any valid changes to the internal repository. Due to schema differences and ODbL license restrictions, the OSM data is not imported into the internal repository. The changesets are used as a guide.

OSM Change Set
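
For the review step itself, a changeset’s individual edits can be pulled down as osmChange XML from the same API. The changeset ID below is a placeholder, and the building/address filter is only an assumption about what a reviewer might scan for, not a description of DoITT’s internal tooling.

```python
# Sketch: fetch a single changeset's edits (osmChange XML) and list the
# elements that carry building or address tags. The changeset ID is a
# placeholder and the filter is illustrative only.
import xml.etree.ElementTree as ET

import requests

CHANGESET_ID = 12345678  # placeholder

resp = requests.get(
    "https://api.openstreetmap.org/api/0.6/changeset/{0}/download".format(CHANGESET_ID),
    timeout=30,
)
resp.raise_for_status()

root = ET.fromstring(resp.content)  # <osmChange> with <create>/<modify>/<delete> children
for action in root:
    for element in action:  # node / way / relation
        tags = {t.get("k"): t.get("v") for t in element.findall("tag")}
        if "building" in tags or any(k.startswith("addr:") for k in tags):
            print(action.tag, element.tag, element.get("id"), tags)
```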

Tools such as MapRoulette can also be used to bring in changes made to *authoritative* data sets. This is the method being used by the local NYC OSM community to incorporate missing bike lane data into OSM (see Eric Brelsford’s lightning talk here).
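
A MapRoulette challenge is usually seeded with a list of candidate features, and one way to build such a list is to query Overpass for streets that lack cycleway tagging and then compare them against the authoritative bike lane data. The query below is a hypothetical illustration of that first step (the bounding box, highway filter, and use of the plain cycleway key are all assumptions), not the actual workflow from the lightning talk.

```python
# Sketch: find streets without any cycleway tag in a small Manhattan
# bounding box via the Overpass API. These candidates could then be
# compared against an authoritative bike lane dataset and loaded into a
# MapRoulette challenge. Bounding box and filters are illustrative.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

query = """
[out:json][timeout:60];
way["highway"~"^(primary|secondary|tertiary|residential)$"]
   [!"cycleway"]
   (40.70,-74.02,40.75,-73.97);
out center 50;
"""

resp = requests.post(OVERPASS_URL, data={"data": query}, timeout=90)
resp.raise_for_status()

for way in resp.json()["elements"]:
    print(way["id"], way.get("tags", {}).get("name", "(unnamed)"))
```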

I think it is undeniable that *authoritative* government data can further enrich OSM to the benefit of many. You may then be asking yourself, what is the benefit to a local government? To me there are both direct and indirect benefits.

From a strategic perspective, it is important to have options when making decisions. In the case of data, not all local governments can afford, or have the technical capability, to manage their own geospatial data. And even when they do, there are cases where governments use external data sources for routing and logistics. Having only a couple of proprietary commercial options limits choice and drives up cost. A robust and complete open data set gives governments alternatives. And the benefit is twofold: the direct benefit of cost savings and the indirect benefit of having alternatives.

OSM can also benefit a much wider audience. Open data is great, and the movement toward more open data is fantastic. What is often not discussed is the barrier to entry into the open data space. This is not specific to geospatial data: a person needs a variety of skills and software (there are open-source options in geo, such as QGIS) to work with and analyze the data. This is not an intended barrier but a result of the complexity within the current geospatial technology space, and it greatly reduces the number of people downloading and working with open data.

Conversely, with OSM there is a platform and an ecosystem of tools already in place: tools for viewing, editing, analyzing, rendering, and even downloading OSM data. This allows people to focus on what they want to do with the data (e.g., make or view a map) and less on the intricacies of setting up the data to work with it. And there is an amazing set of tools from independent open-source developers to commercial entities – from the elegant and simple iD editor to the Tangram map renderer. OSM can open up a wealth of possibilities and can be a viable alternative.
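
As a small, concrete example of that low barrier, the snippet below geocodes an address against Nominatim, OSM’s hosted search service, with nothing more than an HTTP request; the address and User-Agent string are placeholder values chosen for illustration.

```python
# Sketch: geocode an address with Nominatim (OSM's search service) using a
# plain HTTP request -- no local data setup required. The query and
# User-Agent values are placeholders.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "31 Chambers Street, New York, NY", "format": "json", "limit": 1},
    headers={"User-Agent": "osm-blog-example/0.1"},  # Nominatim's usage policy asks for an identifying UA
    timeout=30,
)
resp.raise_for_status()

match = resp.json()[0]
print(match["display_name"], match["lat"], match["lon"])
```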
