Parking

The subject of parking isn’t exactly the sexiest, but it is possibly one of the most important issues in urban planning. How much parking is available, and what it’s worth, is of huge concern to cities and their planners. Abundant, free parking, it is argued, has led to the congestion we see in most American cities. Congestion directly contributes to increased levels of pollution and smog, and areas with pollution and smog track higher rates of disease and early death than areas without. Parking, whether you realize it or not, is a deadly serious issue.

We recently completed a project with the Portland Bureau of Transportation (PBOT) to help the City of Portland develop a methodology for estimating the parkable “curbspace” throughout Portland. The City has an excellent understanding of how much metered capacity it has, but a much weaker grasp of how much “free” parking is available. Much of this is due to growth at a time when data standards were still being worked out. As a result, there are multiple datasets that need to be merged, polished, and refined before a complete analysis is possible.

Our role in the project was to establish the methodology, provide an initial analysis, and then leave the City a “plug and play” data model that could be adapted, refined, and continually re-run for select areas of town or for the whole City.

Task One was to identify appropriate data inputs. Our GIS team spent some time reviewing datasets from the City as well as Oregon Metro and other sources. For now, the best way to determine the amount of parkable “curbspace” is to work with a “curbs” dataset we obtained from Metro. Another dataset we evaluated was “blockfaces,” which represents “the area between the line separating a public right-of-way from private property and the center line of a street or highway, and between the midpoint of two intersections.” The problem with the blockface data is that driveways are not really taken into account. The “curbs” dataset gives the most accurate representation of the space actually available, albeit with some degree of data cleanup required. So while we decided to use curbs for estimating the quantity of parking within the city, the blockface dataset is still necessary in order to conflate pavement attributes (road width, surface type) onto the curbs.
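To make that conflation step concrete, here is a minimal sketch in Python with geopandas, using a nearest-neighbor spatial join. The file paths, column names (road_width, surface_type), and the 15-meter search tolerance are illustrative assumptions, not the City’s actual schema.

```python
# Minimal sketch of conflating blockface pavement attributes onto curbs.
# Paths, column names, and the tolerance are assumptions for illustration.
import geopandas as gpd

curbs = gpd.read_file("curbs.gpkg")            # hypothetical curbs dataset
blockfaces = gpd.read_file("blockfaces.gpkg")  # hypothetical blockfaces dataset

# Reproject to a planar CRS so join distances are in meters;
# EPSG:26910 (NAD83 / UTM zone 10N) covers Portland.
curbs = curbs.to_crs(epsg=26910)
blockfaces = blockfaces.to_crs(epsg=26910)

# Attach each curb segment's nearest blockface attributes, skipping
# matches farther than 15 m (a tunable tolerance, not a PBOT figure).
curbs = gpd.sjoin_nearest(
    curbs,
    blockfaces[["road_width", "surface_type", "geometry"]],
    how="left",
    max_distance=15,
    distance_col="conflation_dist_m",
)
```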

Task Two involved creating an initial analysis. Before we could perform an analysis that would work across all areas of the City, we first had to perform some substantive data cleansing and enrichment to ensure baseline consistency in our key reference datasets. Once we had the key data ready for action, we implemented an analysis model and progressively checked its outputs for different areas, performing selective field verification and successive rounds of model tuning until we (and, more importantly, PBOT staff) were pleased with the accuracy of the calculations the model was kicking out.
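As a toy illustration of the kind of arithmetic at the heart of such a model (and emphatically not the delivered PBOT model, whose rules are more involved), one can estimate stalls per curb segment by subtracting restricted spans such as driveway cuts from the segment length and dividing by an assumed parallel-stall length:

```python
# Toy stall estimate for a single curb segment; the 6 m stall length
# (~20 ft) is an assumption for illustration, not a PBOT parameter.
STALL_LENGTH_M = 6.0

def estimate_stalls(curb_length_m: float, restricted_m: float) -> int:
    """Estimate parallel stalls on a segment after removing restricted spans."""
    usable_m = max(curb_length_m - restricted_m, 0.0)
    return int(usable_m // STALL_LENGTH_M)

# A 60 m curb with a 9 m driveway cut yields an estimate of 8 stalls.
print(estimate_stalls(60.0, 9.0))  # -> 8
```

Run a calculation like this over every cleaned curb segment and you get a city-wide estimate that can be re-run as the underlying data improves.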

Task Three involved handing the data model over to the City, which we did by producing fairly extensive user and maintainer documentation and sharing it over the course of several mentoring sessions with PBOT staff. The delivered model is designed for easy integration of new and improved datasets, so that as they become available, the model can be adapted to reference them and provide the best accuracy that ever-improving data will support.

I'm sure I may have lost some of you along the way with this one. But, honestly, this kind of geospatial data science, with its subtleties and complexities and the requirement to determine and deliver the best accuracy the data will support...we love this stuff. If you're still reading, I'm guessing maybe you do too. If you're interested in learning more about how we approach this kind of work or the specialized methods and tools we applied, if you'd like to share your own adventurous tales from similar work, or if you might like some help from fellow fanatics in the field...please get in touch!