A.R.G.O.

Advanced Research in Government Operations

[A]dvanced [R]esearch in [G]overnment [O]perations (“ARGO”) is a startup nonprofit that builds, operates, and maintains pioneering data infrastructure to transform how water reliability, street quality, and other basic public services are delivered.

The next frontier of government operations

by Patrick Atwater

New York -- December 2014

What does the digital revolution mean for government operations?  The digital revolution has transformed how we live, work and play, yet most government operations would look eerily familiar to a time traveler from the early to mid-twentieth century – even with Code for America, US Digital Services, Presidential Innovation Fellowships and all the heroic work that’s happened over the past few years.

This essay argues that early efforts to pioneer digitally native government operations in America have only scratched the surface of what is possible and explores the epochal shift from analog to digital government operations through the deep connection between data and how public services are delivered.

We begin by contextualizing the digital revolution within the past few millennia of government operations as a reminder of the pace at which public institutions evolve.  This illustrates both the difficulty in changing how government operates and the fact that government operations do change.  Neither easy nor impossible, this is a hard problem.

 

A brief history of the role of public data in government operations

“The pre-modern state was, in many crucial respects, particularly blind; it knew precious little about its subjects, their wealth, their landholdings and yields, their location, their very identity. It lacked anything like a detailed “map” of its terrain and its people.”

            – James C. Scott, “Seeing Like a State”

Governance, at its most basic level, is the process by which we make collective decisions.  Governments collect data about the people and places they administer to inform and implement those decisions.  Ever since writing was first used to tabulate grain in ancient Sumer, developments in data collection and manipulation have shaped these foundational aspects of how government delivers basic public services.

During the Roman Republic, the city’s central government found its increasingly large and diverse provinces difficult to tax.  Conducting an accurate census across millions of people over thousands of miles proved difficult, to say the least.  So Rome taxed entire communities rather than individuals.  Publicani, or tax farmers, bought the rights to tax specific provinces from the Roman government and then profited from whatever taxes they collected in excess of their bids.[1]

This system led to widespread corruption, and Rome’s first emperor, Augustus, abolished the practice.  Augustus replaced the old informal system with a salaried civil service that conducted regular population censuses of the provinces and collected centrally administered taxes.  For nearly two millennia afterward, this combination of formal organizational hierarchies and written communication would remain the state of the art in counting people and property.

The Industrial Revolution provided new tools to accomplish this old task.  In an often told tale, the US Census Bureau needed mechanical computers to administer the 1890 census after the 1880 census had taken nearly the entire decade to tabulate.  The man hired to run the 1890 census, Herman Hollerith, pioneered the use of data tabulating machines and would go on to found the antecedent of IBM.

The twentieth century saw the development of large databases and the maturation of statistics, without which the complexity of the modern bureaucratic apparatus would not be possible.[2]  Compare, for instance, the sophistication of the 1790 census, which aggregated little more than population counts at the county level, to the 2010 census, which offers hundreds of tables at neighborhood-level granularity.[3]  In addition, statistical sampling methodology underwent radical advances in the intervening years, increasing the accuracy of measurement.[4]

The World Wide Web introduced a new paradigm into this landscape.  For the first time in human history, citizens no longer had to ask public officials for access to key government data used in public administration.  Anyone with an internet connection already had it.  The Census Bureau was an early adopter in posting data online.

Governments around the world posted data of varying quality in what were the first instances of open data portals.  City government websites began as static pages with public announcements and the provision of information comparable to public access television.[5]  Clearly, though, much more was possible, and it didn’t take long for the fever of the “Internet Gold Rush” to make its way to government.

The excitement of the dot com bubble fueled multimillion-dollar efforts like the startup GovWorks, whose modest vision was to be the portal for every payment any citizen made to any government anywhere.  Starting with a founder’s annoyance at paying a parking ticket, in true dot com fashion GovWorks’ offices soon sported banners proclaiming their dedication to "life, liberty and the pursuit of more efficient government."  Here was nothing if not a bold effort to use new data collection tools to reinvent the administration of fee and tax payments.[6]

In prototypical fashion, however, GovWorks soon went bankrupt.  Despite raising nine-figure sums in venture capital, the company’s ambitious plans and slower-than-expected sales cycles made the startup’s viability contingent on further rounds of funding.  Yet the bursting of the dot com bubble dried up capital for speculative enterprises aiming to revolutionize basic government functions.

More quietly, however, several technology contractors solved many of the more obvious government payments problems.  You might not encounter a pretty webpage, but you generally can pay your parking tickets online today.[7]  And now millions of Americans pay their taxes using TurboTax. 

In addition, government websites increased in sophistication and in the amount of data made public.  Quality varied dramatically, and acquiring raw data files often required interfacing directly with a government employee.  Although not always implemented effectively, the web’s underlying paradigmatic shift from centralized to decentralized access to government data nurtured a cottage industry of consultants, academics, technologists and media promoters proclaiming the birth of everything from “e-government” to “shared governance” to “Gov 2.0.”

Pivotally, these nebulous new ideologies agreed unequivocally on a single point: data should be open by default.  More than just putting information on the web, openness demanded that data be in the right formats and shareable in the right way so that it could actually be accessed.  Arcane issues like machine readability became the rallying cry for a disparate movement of technologists, progressive planners, academic activists and other “civic hackers.”[8]

In the past few years, this movement has gained critical mass: it has spawned new organizations like Code for America, created new city positions like Chief Data Officer, and worked to build a new ecosystem of civic technology.  These developments raise several interrelated questions.

What are the goals of the open data movement?  Where has it succeeded and where has it fallen short?  And most importantly: how might this movement ultimately impact how government delivers basic public services?    

 

The ideology of open

 “The magic of open data is that the same openness that enables transparency also enables innovation, as developers build applications that reuse government data in unexpected ways.”

            —Tim O’Reilly, “Government as a Platform”

The goals of this growing movement are best articulated by one of its singular accomplishments: Barack Obama's 2013 open data executive order.  Note that thousands of searchable, machine-readable datasets already existed on data.gov.

The executive order, however, changed the relationship between government and users of its data by "making open and machine readable the new default."  And perhaps most revealingly, the executive order based its argument on the following claim:

"Entrepreneurs and innovators have continued to develop a vast range of useful new products and businesses using these public information resources, creating good jobs in the process."[9]

Making data open by default is supposed to nurture this ecosystem of civic technologies.  Aneesh Chopra, then Chief Technology Officer of the United States, described explicitly how this process would work:

“within this ecosystem, entrepreneurs and innovators who have an idea on how they can create a service -- [for example], how to feed your kids healthy food -- might envision a set of data that's currently held by the government or could be organized better by the government as an important element to support a service. So you need a vibrant ecosystem to make requests from the government, to invoke its authority, to collect data and do it in better ways as much as you need the government to proactively look at its data sets to find ways to make information open.”[10]

There's one small problem: outside of transit and weather, web applications built on open data mostly consist of one-off affairs and niche customer service improvements.  Where's the killer app for government?

The dominant assumption has been that if you build open data portals, the startups will come.[11]  Yet only 16.5% of the companies that have flocked to open data are technology companies.  Instead, open data has attracted mainly large financial institutions, consulting firms and domain-specific companies.[12]  The Climate Corporation’s recent nearly billion-dollar sale to Monsanto alone dwarfs the valuation of every startup affiliated with civic technology leader Code for America.

Faced with these facts one might wonder: is the open data movement really just about pushing government to hand over valuable public data for large public corporations to profit from?[13]  Is open data just a 21st century version of the land grants used to subsidize railroad expansion into the West?  Regardless, when all is said and done, will we have anything comparable to the transcontinental railroad to show for our efforts?

Put differently, what besides advances in tools has changed in the decade since GovWorks’ spectacular flameout?  The open data movement points to procurement reform in places like Philadelphia and the growth of internal government units like Boston’s Office of New Urban Mechanics as evidence that the challenges of bureaucracy are being effectively tackled.

Yet essentially every twentieth century presidential administration initiated an effort to reorganize and improve the operations of the Federal bureaucracy.[14]  And clearly challenges remain.  Why then should we be optimistic that merely creating new organizational units and processes – something each of those initiatives enacted – will be enough?

The reemergence of venture capital interest in this space after the scars left by flameouts like GovWorks may create conditions conducive to disrupting legacy government technology contractors.[15]  Instead of clunky ’90s-era tools we might see greater use of more modern, Instagram-esque designs.  But is what government needs really just a better user interface?

The open data movement often invokes a government “delivery crisis” to underscore the importance of its work.  The facts bear this out.  NYU’s GovLab estimates that only $1 out of every $100 in government spending is backed by evidence that the money is spent wisely.[16]

More than that, a confluence of trends challenges governments around the world to improve delivery: aging physical infrastructure, demographic pressures on pensions, the uncertain impact of climate change, educating the next generation in a rapidly changing world, and the economic reverberations of globalization and increased automation.[17]

What apps are going to “solve” that?  To some in the open data movement, that may seem like too harsh a question.  Yet the phrase “delivery crisis” gets invoked frequently at open data events, and in fact the idea sparked the start of Code for America:

“You need to pay attention to the local level because cities are in major crisis. Revenues are down, costs are up -- if we don't change how cities work, they're going to fail."[18]

If that’s the mountain we face, shouldn’t we ask what it will take to really meet it?  Code for America’s stated path to improving delivery has been to “show what’s possible.”  And they’ve succeeded.  They’ve clearly demonstrated that there is no magical barrier preventing governments from using cutting edge web tools. 

Yet as the delivery crisis illustrates, much more needs to be done.  How might this movement mature from pioneering novel customer service improvements to the normal everyday work of government?  How might the web enable new uses of public data that can drive the government delivery improvements that we need?

We will now turn to concrete public data case studies to examine operational challenges and opportunities for improvement in greater detail.

 

New York's model of data-driven delivery

“The next step is to use these predictions to inform policymaking. New York is already doing this, for example by deciding where to send its cigarette-tax inspectors.”

             – The Economist "By the Numbers" April 25, 2013 [19]

NYC was an early leader in developing an open data portal and in fact many other cities copied and pasted from the City’s open data policy.[20]  In addition, the City has been developing an internal data sharing system called Data Bridge to allow analysts in different departments to access each other’s data. 

The Mayor’s Office of Data and Analytics (MODA) led that development and has achieved impressive results in improving delivery by building predictive models on that data.  One high-profile example has been the use of such models to better target property inspections, yielding a fivefold return on inspector man-hours.  Before the MODA predictive model, Department of Buildings inspectors were issuing vacate orders, which indicate seriously high-risk conditions, thirteen percent of the time.  The MODA model improved that rate to seventy or eighty percent.

This sort of model allows the city to better target scarce resources and clearly improve the delivery of public services.  Former NYC Chief Analytics Officer and head of MODA Mike Flowers articulates the broader takeaway:

“You can take out of 900,000 buildings or nine million permanent residents or 65,000 miles of roads or whatever and zero-in on the one to five percent that really pose a problem for the city from a regulatory – or even a law enforcement – standpoint and allocate our limited resources towards remediating, bringing them to bear where the real problems are. We take all of the information we know, as a city, about persons and locations and businesses, consistent with our statutory privacy obligations, and then cross-tab that with whatever outcome a specific agency is tasked with addressing. That’s the basic methodology. And we do share that.”[21]
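To make Flowers’ cross-tab methodology concrete, here is a minimal sketch of how such an inspection-targeting model might work: train a classifier on past inspection outcomes, then rank the open complaint backlog by predicted risk.  The feature names (tax delinquency, building age, prior complaints) and the toy data are hypothetical stand-ins, not MODA’s actual inputs or model.

```python
# A minimal sketch of inspection targeting: train a classifier on past
# inspection outcomes, then rank open complaints by predicted risk.
# Feature names and data are hypothetical, not MODA's actual inputs.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical complaints, each joined ("cross-tabbed") with what the city
# already knows about the building, plus the inspection outcome.
history = pd.DataFrame({
    "tax_delinquent":   [1, 0, 1, 0, 1, 0, 0, 1],
    "building_age":     [92, 15, 78, 40, 88, 22, 35, 67],
    "prior_complaints": [5, 0, 3, 1, 7, 0, 2, 4],
    "vacate_ordered":   [1, 0, 1, 0, 1, 0, 0, 1],  # outcome label
})

features = ["tax_delinquent", "building_age", "prior_complaints"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history[features], history["vacate_ordered"])

# Score the current backlog and send inspectors to the riskiest buildings.
backlog = pd.DataFrame({
    "tax_delinquent":   [1, 0, 0],
    "building_age":     [85, 30, 70],
    "prior_complaints": [6, 1, 0],
})
backlog["risk"] = model.predict_proba(backlog[features])[:, 1]
print(backlog.sort_values("risk", ascending=False))
```

The point of the sketch is how ordinary the machinery is: the hard work lies in assembling the cross-agency data, not in the modeling itself.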

More than these specific one-off analyses, Mayor Bloomberg and the MODA team have been able to ingrain a data-driven culture in New York City government.  This more nebulous outcome isn’t as clear cut as improved building inspections but is evident in the number of public servants and related workers who echo Mayor Bloomberg’s favorite maxim that “if you can’t measure it, you can’t manage it.”

This culture perhaps isn’t that surprising in the home of financial quants and a host of data and analytics firms.   The City lays claim to one of the world’s most prolific open data bloggers, Ben Wellington of “I Quant NY,” and also is home to DataKind, sort of a Code for America Brigade for data scientists. 

Yet what’s most impressive is how deeply the commitment to a data-driven government goes.  New York pioneered the nation’s first and the world’s largest social impact bond for a prison reentry program.  This innovative financial structure uses rigorous data on prison recidivism to measure the effectiveness of a nonprofit intervention provided through the Center for Economic Opportunity.  Funding for the nonprofit program is provided by private capital, which is repaid with a return determined by how well the program achieves its outcomes.[22]
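The repayment mechanics are worth a brief illustration.  The sketch below uses hypothetical thresholds and amounts (the actual bond’s terms differ): if the measured recidivism reduction misses the target, the private funder absorbs the loss; above the target, the payout scales with measured impact.

```python
# A sketch of social impact bond repayment mechanics. The 10% target,
# bonus schedule, and principal below are hypothetical illustrations,
# not the terms of New York's actual bond.
def sib_repayment(principal: float, recidivism_reduction: float) -> float:
    """Return the payout owed to the private funder.

    recidivism_reduction is the measured drop in the reoffense rate
    relative to a comparison group, e.g. 0.12 for a 12% reduction.
    """
    if recidivism_reduction < 0.10:   # outcome target missed:
        return 0.0                    # the funder absorbs the loss
    # Above the target the payout scales with measured impact, capped
    # so government never overpays for the outcome it bought.
    bonus_rate = min((recidivism_reduction - 0.10) * 2.0, 0.20)
    return principal * (1.0 + bonus_rate)

print(sib_repayment(9_600_000, 0.08))  # target missed -> 0.0
print(sib_repayment(9_600_000, 0.15))  # target beaten -> principal + 10% bonus
```

The structure puts the measurement burden where it belongs: government pays only for rigorously demonstrated outcomes.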

New York City itself also provides about $100 million in annual funding for research into evidence-based anti-poverty practices through its Center for Economic Opportunity.[23]  This commitment to data-driven practices is reflected across its civic leadership.

New York’s recent applied science initiative stands as an exemplar here.  Mayor Bloomberg saw in the data the City’s clear overreliance on financial services and assembled an impressive public-private partnership to build a new technology university, Cornell Tech, and a new public data research laboratory, the Center for Urban Science and Progress.

 

Southern California’s opportunity for transformation

“I think the next frontier is comparing data across cities. It is one thing to compare our data to past performance. It’s another thing to look at a much wider dataset from similar cities. When we begin to meaningfully assess how we’re doing in comparison to other cities, I think that’s going to be the revolution.”

—Los Angeles Deputy Mayor for Budget and Innovation Rick Cole[24]

The City of Los Angeles faces a host of nontrivial challenges.  Since the early ’90s, Los Angeles has been in the economic doldrums, adding essentially zero new jobs.[25]  Education outcomes vary hugely, and a child’s opportunity in life is far too much a function of the zip code they’re born into.  A severe drought – one of the most serious in recorded hydrologic history – challenges the entire region to be more water efficient.

The drought in particular clarifies the nature of the task at hand.  In an era of constrained supply, water demand management becomes key, which requires robust data to develop appropriate water rate structures that incentivize conservation.  Just as Mike Flowers’ methodology improved building inspections in New York, why wouldn’t we pull together data across disparate water utilities and other municipalities so we can better deal with the drought?  What reason is there for water utilities not to share usage, price and conservation program data with each other?  Aren’t we all in this together?

This challenge isn’t partisan.  Rather it’s a matter of figuring out how we as nineteen million humans are going to live together in a region with less water.  In tackling that challenge, why not integrate everything we know about households, businesses and other water consumers to better target our response to the drought (while adhering to all legal privacy safeguards)? 

What if the Los Angeles Department of Water and Power (LADWP) could compare the results of its turf rebate program with utilities across Southern California as easily as it monitors water usage in its service area?  How might that enable improvements not only in LADWP’s implementation but also catalyze collaborations in education and outreach? 

What if the small nonprofits implementing distributed stormwater capture infrastructure like permeable pavement shared design, cost, and runoff readings as a matter of normal business?  What if that was easily integrated with regional water quality data to inform project effectiveness?  How might that enable new strategies for managing distributed water projects?

What if Southern California could integrate monthly household level water consumption and retail price data across the entire region?  How might that enable a drought response that reflects the reality that rain does not adhere to the lines we draw on a map? 

That’s the opportunity of web-enabled data in a nutshell: the ability to deal with the challenges the world presents us with rather than the problems our administrative units were historically built to address.  Consider how this might enable new pathways to tackling intractable public challenges in Southern California.
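A minimal sketch suggests how little code the core of that regional integration might require once utilities agree to share.  The file layout and column names here are assumptions for illustration, not any utility’s actual export format.

```python
# A sketch of regional water data integration: stack each utility's
# monthly household consumption export into one region-wide table.
# File names and columns are hypothetical assumptions.
import glob
import pandas as pd

frames = []
for path in glob.glob("exports/*_monthly_usage.csv"):
    df = pd.read_csv(path, parse_dates=["month"])
    # Tag each record with its source utility so rate structures and
    # rebate programs can be compared across service-area boundaries.
    df["utility"] = path.split("/")[-1].split("_")[0]
    frames.append(df)

region = pd.concat(frames, ignore_index=True)

# With one table, region-wide questions become one-liners: e.g., how
# does average household use vary with each utility's marginal price?
summary = (region
           .groupby(["utility", "month"])
           .agg(avg_gallons=("gallons", "mean"),
                marginal_price=("price_per_gallon", "median")))
print(summary.head())
```

The hard part, of course, is not the code but the institutional agreement to share; the sketch simply shows how low the technical barrier is.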

What if CEQA environmental impact reports had their projections stored in a publicly accessible database?  These reports impose substantial red tape on business, yet nothing requires lead agencies to determine how accurate the projected environmental impacts actually were.  How might that enable us to better achieve both the goals of environmental quality and less red tape for business?

What if education nonprofits engaged in a spectrum of impact evaluation activities – everything from articulating a theory of change to statistically robust random controlled experiments – as a matter of normal business in serving Southern California’s children?  How might that enable us to better target government funding to scale successes? 

What if Southern California tracked extracurricular learning opportunities like internships, field trips, and other avenues for learning outside of school?  How might that help initiatives like Los Angeles’ recent Summer of Learning scale opportunity regardless of the zip code a child is born into?

How might we make this happen?  Many books have been written, task forces convened and careers dedicated to dealing with the even more numerous institutional barriers to the sort of possibilities described above.  Yet it’s important to remember that humans ultimately no different from ourselves created these institutions.  And as history shows, it may not be easy, but public institutions do change.

With that perspective in mind, consider one key roadblock that confronts governments globally: how might a municipal manager imbued with the pioneering spirit actually implement one of the ideas described above?  These challenges demand a unique skill set.  Much of this data remains trapped in legacy databases or dedicated spreadsheets.  Often such opportunities might require deploying unique data collection strategies.  And then of course there’s the matter of actually analyzing the data.

Many academics and consulting firms currently tackle public data challenges in specific isolated domains.  The government evidence gap – the $99 out of every $100 in spending that lacks evidence of program effectiveness per NYU’s GovLab – suggests, however, that new thinking might be needed. 

The enormous quantities of data generated by the web have given rise to a new job description: data scientist.  Hybrid researcher-programmer-statisticians, these generalists drive the analytics behind features like Amazon’s recommendation engine and LinkedIn’s “People You May Know.”

Organizations like DataKind and Bayes Impact, which provide structured volunteering opportunities and below-market-compensation fellowships, demonstrate the data science community’s interest in public problems.  And increasingly their skill set is being put to use through more formal mechanisms in places like New York, through institutions like the Center for Urban Science and Progress.

The discussion here could easily lapse into tribal disputes about the merits of different methodologies.  Yet why not simply take an all-of-the-above approach?  Why not welcome as many statisticians, anthropologists, management consultants, mathematicians, computer scientists, sociologists, social science researchers, data scientists and other people with data skills as possible?  The challenges we face in Southern California certainly leave enough room for all these approaches.

The bigger challenge will be scaling to meet the mountain that is the evidence gap.  So how might we incentivize the volume of data integration, data discovery and data analysis necessary to fill the nontrivial government evidence gap?

 

A marketplace for civic data science

"Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses."

— Steve Randy Waldman, writing at "Interfluidity" [26]

Why not create a marketplace that connects municipal managers with data science talent?  Web-based talent exchanges have transformed everything from catching a ride (Uber) to hiring freelance designers and coders (Elance-oDesk), as the January 2015 Economist special report on the subject makes clear:

"Using the now ubiquitous platform of the smartphone to deliver labour and services in a variety of new ways will challenge many of the fundamental assumptions of 20th-century capitalism, from the nature of the firm to the structure of careers."

By providing the flexibility and financial incentives that analytical talent demands, ARGO’s civic data science marketplace aims to make a big dent in the government evidence gap.  That hypothesis needs experimentation to explore how focused financial incentives and a simpler contracting process could create scalable improvements in how governments access data science.

What conditions need to be created to facilitate productive engagements between public managers and data scientists?  What mechanisms will allow both easy matching and ensure a high quality data analysis? 

What’s the model for applying data science to city challenges that scales to meet GovLab’s evidence gap?  How can public managers sustainably contract high quality data science talent to maximize the impact of scarce resources and improve delivery?

These questions illuminate just a small sample of the uncertainty inherent in trying to create a new institution using digital tools in a new way.  Thus, rather than offer detailed blueprints of what fundamentally is terra incognita, the proper course of action is to think deeply about what we don’t know.
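Even while resisting detailed blueprints, a toy sketch can make the matching question above tangible: score each available data scientist against a posted project by skill coverage blended with a track-record signal.  Every field and weight here is a hypothetical illustration, not a design decision.

```python
# A toy sketch of marketplace matching: rank data scientists against a
# posted project. All fields and weights are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Project:
    title: str
    skills_needed: set = field(default_factory=set)

@dataclass
class Scientist:
    name: str
    skills: set = field(default_factory=set)
    past_rating: float = 0.0  # quality signal from prior engagements

def match_score(project: Project, scientist: Scientist) -> float:
    """Blend skill coverage with a prior-quality signal."""
    if not project.skills_needed:
        return 0.0
    coverage = (len(project.skills_needed & scientist.skills)
                / len(project.skills_needed))
    return 0.7 * coverage + 0.3 * scientist.past_rating

project = Project("Water rate structure analysis",
                  {"sql", "statistics", "gis"})
candidates = [
    Scientist("A", {"sql", "statistics", "python"}, past_rating=0.9),
    Scientist("B", {"gis", "statistics", "sql"}, past_rating=0.6),
]
for s in sorted(candidates, key=lambda s: match_score(project, s),
                reverse=True):
    print(s.name, round(match_score(project, s), 2))
```

Quality assurance, the second half of the question, is the harder problem; a rating signal like the one above only becomes meaningful after enough engagements have been completed and reviewed.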

 

Pioneering the next frontier of governmental operations

“The world has changed far faster than government’s ability to keep pace, creating a huge space for good government reforms to better society. In William Mulholland’s era, Los Angeles could get its water through the work of a single agency acting essentially in isolation. Today, however, not only do you need coordination between multiple agencies at multiple levels of government that simply deal with water, but our world is fundamentally more connected, with profound institutional consequences.

Operating that water infrastructure is predicated on a vast array of telecommunications and electrical systems, involving several more sets of public and private actors. Even NASA and the military are involved. Refurbished predator drones are flown over the Bay Delta to gather environmental quality data. Today a dazzling array of interlocking parts work together to ensure Californians have a clean, secure, and sustainable supply of water.

The fundamental challenge California faces – getting water from where it falls to where it’s needed – hasn’t changed. But rather than having a set of institutions designed to solve that problem, we’ve settled for a byzantine structure that only exists because that’s the way things have always been. So why not unleash the famed creativity of the California people to systematically rethink how government can address the fundamental challenges – schools, prisons, water, public safety, etc. – we face as a people.”

            – "A New California Dream" by Patrick Atwater

What’s the future of delivering basic public services?  Looking deeply at the history of public data and government administration, we have strong reason to believe that today’s public administrative architecture can change.  It has in the past, after all.  And recognizing the large challenges we face – in education, public finances, infrastructure, climate change and technological transformation – we have a strong rationale for insisting that today’s bureaucratic structure does change.

So how do we make that happen?  Where might we look to understand how we might transform public service delivery?  Some governments have adopted innovative practices that show signs of promise but emphatically no one has figured out what a digitally native government looks like.  We might turn to science fiction or futurist thinking but those inform where we might go in several decades, not how we can meet the delivery imperative of next year. 

What if instead we took a good hard look at the world around us and asked ourselves what really is going on?  Look at existing digital technologies and think deeply about what they mean for the future of basic public administration.  Building off our earlier discussion of Roman and American censuses, consider a few open-ended questions.

If you were trying to get a simple count of the number of people living in the United States this past week, do you think the US Census Bureau or Facebook could provide a more accurate result?  And mind you, Facebook has access not only to its real-time, individual-level dataset covering half the US population but also to the entire public US Census dataset.

Another question worth pondering: whose demographic data is richer?  Does the designed data stemming from a statistical sample of Americans responding to the US Census Bureau’s questions offer the most insight into America’s demographics?  These data tables include parameters like race, education, income and other standard characteristics.

Or does the organic data that individual Facebook users decide to share, not only with their friends but with Facebook’s server farms, offer greater potential insight?  This unstructured data includes their favorite music, how people self-identify racially (if they even choose to), who they become “friends” with and everything else that’s included as part of their virtual identity.

Returning to the question of civic technology, this personally identifiable information is what enables Facebook to target advertisements effectively and personalize your user experience.  In many ways, this personally identifiable information is a key missing ingredient of civic applications.  Using this data, however, runs afoul of obvious privacy concerns.

So why not develop civic technology within government?  The United Kingdom’s Government Digital Service has done a tremendous job coalescing thousands of websites into a single user-friendly experience for the UK’s citizens.[27]  The federalist structure of United States government means that authority is far more fragmented.  Yet there are profoundly successful examples of technical collaboration among public utilities through the Electric Power Research Institute and the American Water Works Association.

Why not create something similar for this new digital frontier?  In the wake of Uber’s controversial actions, many cities have expressed interest in developing their own taxi pickup app.  Unfortunately for cities used to procuring “e-government logistics support systems,” developing a competitive and user-friendly natively mobile application that can scale to millions of users is a bit outside their comfort zone.  That doesn’t have to be the case. 

Governments historically have been no stranger to engineering excellence.  The Romans built aqueducts that still function today.[28]  California’s more recently built aqueducts are reputedly visible from space.  More than inspiring infrastructure, however, the greatest reason to be optimistic is the nature of the challenges we face.

As John Kennedy eloquently stated, “Our problems are man-made, therefore they can be solved by man.”  Public data offers us the opportunity to put our minds more deeply to work on the problems we face, to project ourselves beyond our narrow particular circumstances to see the world in a new light, to learn in the deepest sense of the word how we might better tackle civilization’s oldest challenge: figuring out how we can live together.[29]

It is thus that we imagine and then create a better future.


Endnotes

[1] http://www.unrv.com/economy/roman-taxes.php

[2] Imagine for instance trying to administer a complex contemporary zoning ordinance in the Roman era. 

[3] See here for the Census Bureau’s description of its data collection innovations: http://www.census.gov/history/www/innovations/data_collection/

[4] See also Stephen Stigler’s canonical book “The History of Statistics: The Measurement of Uncertainty before 1900” for an incredible dive into how advances in mathematical techniques have enabled us to better understand the world around us.

[5] See for instance the evolution of NYC’s webpage on the internet archive accessed here: http://web.archive.org/web/20000815213359/http://www.nyc.gov/

[6] See the movie Startup.com for further detail and an all-around great story.

[7] I should note here, if not already obvious, that I am speaking from a Californian perspective when discussing these issues.

[8] See Anthony Townsend’s book “Smart Cities” for an in-depth history and analysis of this nebulous movement.  Note the book in particular is a response to instrumentation and advanced infrastructure initiatives led by corporations like IBM, Siemens and Cisco.

[9] Barack Obama’s Open Data executive order accessed from: http://www.whitehouse.gov/the-press-office/2013/05/09/executive-order-making-open-and-machine-readable-new-default-government-

[10] An interview with Aneesh Chopra accessed from: http://www.govtech.com/data/Aneesh-Chopras-on-Open-Data-Innovation-in-Government.html

[11] See for instance the US Open Data Institute’s implicit critique of the past open data model: “Moving beyond the “Field of Dreams” model of releasing datasets…” accessible from https://usodi.org/

[12] See NYU GovLab’s opendata500.com and note one might quibble about the definitions of technology corporations. Archimedes Inc. is a healthcare analytics consultancy and is classified as healthcare for instance.  Note also that startups are sparse.  The only one that jumped out to me while looking through their list was Enigma.io. 

[13] Tom Slee has a blog post in this vein that while a bit of a rant, elevates this important question: http://whimsley.typepad.com/whimsley/2012/05/why-the-open-data-movement-is-a-joke.html

[14] See James Q. Wilson’s canonical book “Bureaucracy.”

[15] See for instance the recent formation of the Gov Tech fund.

[16] “Measuring Impact with Evidence,” accessed from: http://thegovlab.org/govlab-index-measuring-impact-with-evidence/

[17] This list is mostly applicable to developed countries like the United States, Germany, or Japan.  A developing country in Africa or Asia would have some of the same issues (for instance education or economic megatrends) but would likely have challenges relating to building basic infrastructure rather than replacing existing facilities for instance.

[18] Andrew Greenhill, then Chief of Staff to the Mayor of the City of Tucson, speaking to Jennifer Pahlka at the first Gov 2.0 conference.  This conversation sparked the initial idea for Code for America, an organization dedicated to placing talented technologists in city governments – a sort of Peace Corps for geeks.

[19] “By the numbers” accessed from: http://www.economist.com/news/united-states/21576694-cities-are-finding-useful-ways-handling-torrent-data-numbers

[20] Code for America Open Data Webinar accessed from: https://www.youtube.com/watch?v=XQJPBzdOrM4#t=479

[21] O’Reilly interview with Mike Flowers accessed from: http://radar.oreilly.com/2012/06/predictive-data-analytics-big-data-nyc.html

[22] See here for a fact sheet on the program: http://www.budget.ny.gov/contract/ICPFS/PFSFactSheet_0314.pdf

[23] NYU GovLab Measuring Impact with Evidence accessed from: http://thegovlab.org/govlab-index-measuring-impact-with-evidence/

[24] http://www.planningreport.com/2014/10/08/data-driven-governance-rick-cole-stephen-goldsmith-opine-citylab-2014s

[25] See the graph from the federal reserve on this Stag Hunt blog post I authored: http://staghuntenterprises.com/daily/2013/3/11/los-angeles-challenge-in-one-chart.html

[26] http://www.interfluidity.com/v2/5712.html

[27] See the following article from the BBC: http://www.bbc.com/news/uk-politics-22860849

[28] See the Roman aqueduct in Segovia: http://en.wikipedia.org/wiki/Aqueduct_of_Segovia

[29] Consider as a closing thought why, despite being as old as human civilization, mathematics eludes a consensus definition and etymologically derives most closely from the Greek word for learning.