ARGO @ NYC BIG APPS 2015

TLDR: I document our preparation and experiences at NYC Big Apps, where we presented SQUID as a finalist in the Connected Cities category and Learnr as a semi-finalist in the Civic Engagement category. This post is intended for anyone who may want to apply their creative energies to future BIG APPS competitions. We have provided links to our final pitch presentation and script, and to the code for the demo we created for the Big Apps finals.

The pitch booths for Learnr & SQUID at NYC Big Apps semi-finals

Wednesday, Dec 2nd was an intense day for ARGO. We did not win at NYC Big Apps, but it gave us an opportunity to prepare for a larger stage. Congratulations to Tommy Mitchell at Citycharge (an idea that took shape during the Occupy Wall Street movement) and to all the other winners, who were supremely deserving! The entire BIG APPS experience was intense in a good way. While I wish we could have leveraged more of the network that BIG APPS provides a space for, we met some pretty awesome people during the semis and finals and heard their stories. I'm sure BIG APPS created many happy accidents to fuel NYC's epic civic tech scene.

NYU CUSP and the awesome SONYC team were also part of the roller coaster ride. The gathering at BAM Café before the big final pitch will be remembered as a palpable moment of nerves and adrenaline with a nice balance of camaraderie and competition.

Graham and I had spent the previous week agonizing over a presentation that would last 180 seconds, followed by 120 seconds of Q&A. The judging panel was a collection of very accomplished people with gobs of experience in tech, policy, government and academia. Our final presentation needed to be a tight pitch, controlled and rehearsed down to the syllable while not sounding robotic. BIG APPS also allowed us to present a demo of our final product. Since SQUID relies on being outdoors, where it can get a GPS fix, the added challenge was to show something that worked indoors and gave the judges just enough of a peek at the idea for them to "get it" during the 1-2 minutes they had to evaluate our demo. We had a weekend and two evenings to put this together. Game ON!

Our eventual demo consisted of SQUID connected to a USB-powered LCD screen that I impulsively bought on Amazon as part of the Black Friday froth. The basic idea was to have an interactive demo with some real-time visual feedback of the accelerometer readings. We overlaid the video feed with a graph (generated in matplotlib) of the real-time accelerometer readings.

The LCD screen displaying a live video feed overlaid with a graph of accelerometer readings and annotations

Demo day rapid prototyping!

While this may look disjointed, it was a quick way of showing the sensors at work (camera and accelerometer) and giving someone with little prior understanding of SQUID the aha moment: that SQUID measures street quality using vibration data and imagery. (A supplemental document was provided just to be sure :) It also gave me a chance to get my hands dirty while coding up the demo.

The Raspberry Pi, in addition to being a full-fledged Linux computer that leverages the accomplishments of the open source community over the past 20 years, also has a thriving Python ecosystem. A great example of this is the picamera module, a Python interface to the Pi's camera module.

Before putting together this demo, I only had a vague idea of what we wanted to do, and there were no ready-made examples that we could quickly repurpose. The basic elements of this vision of a demo were:

  • Display some imagery superimposed with real-time accelerometer readings.
  • Package the entire thing into a self-contained unit that explained itself.

picamera allows you to easily annotate text or an image on a video feed. HOWEVER, overlaying anything more complicated quickly becomes beast mode. Short of some pretty dense and customized C++ implementations, the options I found that could be implemented fast were limited.
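One concrete example of that fiddliness, per the picamera documentation: overlay buffers handed to `add_overlay` must be padded to hardware-friendly dimensions, width up to a multiple of 32 and height up to a multiple of 16. A minimal sketch of just that size calculation (pure Python, no Pi required; function name is mine, not from the demo):

```python
def padded_overlay_size(width, height):
    """Round a frame size up to the padded dimensions picamera's
    add_overlay expects: width to a multiple of 32, height to a
    multiple of 16 (per the picamera documentation)."""
    def pad(n, multiple):
        return ((n + multiple - 1) // multiple) * multiple
    return pad(width, 32), pad(height, 16)

print(padded_overlay_size(640, 480))  # already aligned: (640, 480)
print(padded_overlay_size(100, 100))  # rounds up to (128, 112)
```

Get the padding wrong and the overlay silently shears or refuses to render, which is exactly the kind of thing that eats an evening.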

Displaying a real-time graph of sensor readings on top of the video feed was painful but eventually worked! In a nutshell:

I borrowed code from all over and repurposed it. That's it. The screen and other trappings worked out of the box. I want to belabor this point of repurposing code and reifying a vague concept into a prototype in a short time. I do not identify as a software developer, and I am not one. I find that I am way too restless and impatient to carefully implement beautiful complexity. Doing yoga does not fix this, I have learnt; it's innate, although the design patterns from more established software implementations are a great resource.
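To give a flavor of the kind of small, repurposable piece this involved, here is a sketch of a fixed-length rolling buffer of accelerometer samples that a live graph could redraw from. The class and method names are illustrative, not the actual demo code:

```python
from collections import deque

class RollingAccelBuffer:
    """Fixed-length window of recent (x, y, z) accelerometer samples.

    A live graph redrawn from z_axis() always shows the newest
    `size` readings; older ones fall off the back automatically.
    (Names are illustrative; this is not the actual demo code.)
    """
    def __init__(self, size=100):
        self.samples = deque(maxlen=size)

    def add(self, x, y, z):
        self.samples.append((x, y, z))

    def z_axis(self):
        # The vertical (z) axis carries most of the road-vibration signal.
        return [s[2] for s in self.samples]

buf = RollingAccelBuffer(size=3)
for i in range(5):
    buf.add(0.0, 0.0, float(i))
print(buf.z_axis())  # only the 3 newest z readings survive: [2.0, 3.0, 4.0]
```

`deque(maxlen=...)` does the bookkeeping for free, which is exactly the quick-and-dirty reuse I'm describing.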

I relate to the quick-and-dirty school of thought: get a whiff of an idea and then be persistent towards a minimal viable form of execution so that it "just works". This way of doing things is not comfortable either, but it is FUN when things come together.

This post is an attempt to document that experience and address it to a non-technical audience. I want to demonstrate the many (messy) ways of being able to program and make stuff, to think about programming in unconventional ways that are not part of some prescriptive cookbook (although those help tremendously :), and finally to eliminate self-doubt through blind optimism and persistence. This is primarily intended for the programmatically challenged, who I happily identify with and learn from.

Eric S. Raymond, one of the pioneering evangelists of Linux and the early open source movement and author of The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, said this of programmers:

 "Good programmers know what to write. Great ones know what to rewrite (and reuse)".

That is a loaded statement, and ESR is a provocative figure, but it gave me the mental currency to try stuff however zany, unintuitive and often of no practical use. It may not be the most "optimal" way of doing something, and that's OK.

Large swaths of the internet, I argue, were built this way. I am going to end this thought with yet another reference to Anthony Townsend's Smart Cities book, a source of so many great tech origin stories, which provides further evidence for this bottom-up approach to technology development (function vs. specification):

In the 1970s, telecommunications companies and academic computer scientists battled over the design of the future Internet. Industry engineers backed X.25, a complex scheme for routing data across computer networks. The computer scientists favored a simpler, collaborative, ad hoc approach. As Joi Ito, director of the MIT Media Lab, describes it: The battle between X.25 and the Internet was the battle between heavily funded, government backed experts and a loosely organized group of researchers and entrepreneurs. The X.25 people were trying to plan and anticipate every possible problem and application. They developed complex and extremely well-thought-out standards that the largest and most established research labs and companies would render into software and hardware. The Internet, on the other hand, was being designed and deployed by small groups of researchers following the credo “rough consensus and running code,” coined by one of its chief architects, David Clark. Instead of a large inter-governmental agency, the standards of the Internet were stewarded by small organizations, which didn’t require permission or authority. It functioned by issuing the humbly named “Request for Comment” or RFCs as the way to propose simple and light-weight standards against which small groups of developers could work on the elements that together became the Internet.

The above may ring true for some big breakthrough in the Internet of Things space as well, and most of that ad hoc energy exists today in nondescript DIY community forums. So, in the spirit of early internet innovation, we humbly issue an RFC on this post and the larger thinking behind SQUID and civic data science. Here is a video of everything coming together for the SQUID Big Apps demo.

Special thanks to Oklahomer for his contributions in the picamera space.

Our final pitch presentation & syllable-controlled script

The code for this demo is available here.

Varun


Decision making within the Civic Data Science framework

TLDR: My observations on why decision making needs the same "big" treatment as data or devices. I refer to past decisions that cost NYC dearly and can be used as learning opportunities. I also include some pedantic descriptions of decision making from academia and bring it back to relevant examples of how technology and decision making come together well. I added pictures to correct for the eye-glazing effect.

At ARGO we are building towards a sustainable framework to better scope, understand and eventually deliver solutions for urban problems. So far we have "not disagreed" on Data Discovery, Data Analysis and Data Integration as buckets that define distinct data tasks, which fit into the larger normative structure of the Device, Data & Decisions framework encompassing civic data science. Together with rapid prototyping, we offer a comprehensive and flexible lens towards a digitally native service delivery model.

Device, Data & Decisions. Image credits, from the Noun Project: Router by Yorlmar Campos; Export Database by Arthur Shlain; strategy by Gregor Črešnar; cube by Ates Evren Aydinel.

Our process is a result of many discussions amongst ourselves and others, ranging from the epistemological to the semantic. The overarching mission at ARGO is to partner with city agencies and local governments to help them make qualitatively better decisions about delivering services. "Better", however, is a loaded term.

In 2009, "Better" meant spending $549,000,000 to develop a citywide wifi network that turned out to be obsolete in 5 years. 

In 2012, when Hurricane Sandy hit, "Better" meant spending billions on disaster response that was sometimes dysfunctional.

These were well-intentioned, and understandably debatable, decisions that were not the best use of public $$$. But as we move head-first into a digital age where policy making relies on data more than ever, these errors of the past are also immense learning opportunities. While tools to "grab the damn data" are evolving at breakneck speed, we need to consider whether our abilities to make actionable decisions are evolving in step. This is often not the case, and decision making is also not part of the typical data science skill set.

The decision maker (often not data savvy) ends up swimming, or drowning, in data, left with inadequate tools to convert the <<<insert awesome predictive analysis using ridiculous amounts of data but woefully difficult to replicate or communicate>>> into decisions that move the proverbial needle on said policy intervention.

Created using wordle.net

Whenever I sit in a room with "data scientists" or "data-{dashes}", I often wonder how they define terms such as "Algorithm", "Big data" & "Urban Science". I can argue that, if asked, their definitions of these terms would reveal the inherent biases that could very well lead us down the path of the aforementioned billion-$$$ errors. I often question my own definitions of these terms, as they are heavily contextual. (Disclosure: I spent some time supporting algorithmic trading systems at a big bank.)

As a Master's student in Penn State's IST program, I researched decision making within crisis management. This included the study of Computer Supported Collaborative Work (CSCW), Human Computer Interaction (HCI) and Human Factors (ergonomic design). I ended up writing my thesis on a theory of team cognition called Transactive memory, which seeks to better understand group behavior based on the processes by which individual members of a group make sense of incoming information. Most of the work dealt with developing a theoretical model to better situate crisis responders to organize incoming information so that they can make effective decisions in the field.

The transactive memory command center is the application of Daniel Wegner's Transactive memory theory to an information environment where decisions are facilitated by individuals who have specific information roles to organize incoming data. This was presented along with a research colleague at a 2008 Department of Homeland Security University network summit focused on catastrophes and complex systems.

A big takeaway from this study was my affinity for the Common Operational Picture, a concept heavily used in the military for command & control in a distributed command structure. I find it immensely useful outside the military context, underutilized in data-intensive environments, and potentially valuable in the civic space. To self-plagiarize from my thesis:

Working groups solving problems together often need to achieve a common consensus on the important elements of the problem. This common understanding is necessary so that decision-making for an evolving and complex situation can be effectively enabled if knowledge about the situation is aggregated onto a common space for all the decision-makers to make use of collectively. The centralization of information that facilitates such convergent processes is referred to as the common operational picture (COP).
A COP is first and foremost a visual representation; it is a structurally emergent artifact that visually illustrates the relevant information characterizing the situation. (USJFCOM, 2008). A COP is most useful when multiple groups operating under a multi-level organizational structure require quickly accessible and actionable knowledge to rapidly make decisions.
Design and Development of a Transactive memory prototype for geo-collaborative crisis management, Adibhatla, V, Master's Thesis, 2008,  Penn State University

Anthony Townsend, in his book Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, refers to a similar concept: Topsight, as described by David Gelernter in his prophetic and seminal book Mirror Worlds: or the Day Software Puts the Universe in a Shoebox.

Some gems from Mirror Worlds on topsight:

Topsight is what comes from a far-overhead vantage point, from a bird's eye view that reveals the whole—the big picture; how the parts fit together.
It's easy to organize a data-gathering project, and you can count on a rush of neo-Victorian curatorial satisfaction as your collection grows. But analyzing data requires at least a measure of topsight, and topsight is a rare commodity.

The desire for the ultimate topsight.  (1) Rio Operations Center, 2012. [IBM] ; (2) Mission Control Center, Houston, 1965. [NASA]. These images were taken from Mission Control: A History of the Urban Dashboard (Mattern, Shannon. "History of the Urban Dashboard." Places Journal (2015)).

Townsend makes the argument that this need to gain ultimate "topsight" in a city is what drove Rio de Janeiro & IBM to build massive top-down surveillance systems using billions of $$$. These systems eventually yield something similar to an "informatics of domination", a situation originally articulated in Donna Haraway's Cyborg Manifesto and referred to in "Critiquing Big Data: Politics, Ethics, Epistemology". This is an unfortunate outcome, reminiscent of a Robert Moses approach to constructing digital infrastructures for civic applications.

The Common Operational Picture, although it originates in the military, where the domination narrative is not only implied but required, can, I argue, be effectively repurposed for a less grandiose, localized and practical approach to making day-to-day operational decisions in city agencies.

The Department of Sanitation's Bladerunner platform is a superb example of what a Common Operational Picture looks like in a city operations setting. The platform takes data from GPS devices on DSNY vehicles, transmitted over a "cellular network" (curious to know if NYCWiN is used here), and feeds it into a flexible UI (the Common Operational Picture) that DSNY managers can manipulate to locate and group DSNY vehicles in real time by distinct functions (Plowing, Salting, Collection, Supervision) and attain an innocuous yet extremely usable topsight. Bladerunner too cost several million $$ to implement, but I'd bet that without it DSNY managers would find themselves operationally crippled (feel free to call me out on this).

DSNY's Bladerunner platform, a Common Operation Picture for DSNY managers

Finally, we designed SQUID to follow the same principles of decision making. Providing a common operational picture of street quality, we hope, would optimize the $1,400,000,000 ($1.4 billion) budgeted for NYC street resurfacing over the next 10 years (Ten-Year Capital Strategy, Fiscal Years 2016-2025, The City of New York, pg. 22). A 1% savings as a result of better decision making around street resurfacing projects would more than pay for the SQUID program, not only in NYC but even more so in small to medium-sized cities where street paving dollars are limited.
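The back-of-the-envelope arithmetic behind that claim is simple enough to show in full:

```python
# Back-of-the-envelope: NYC's ten-year resurfacing budget vs. a 1% gain.
budget = 1_400_000_000   # dollars, FY 2016-2025 Ten-Year Capital Strategy
savings_rate = 0.01      # a 1% efficiency gain from better-targeted resurfacing
savings = budget * savings_rate
print(f"${savings:,.0f}")  # a 1% gain frees up $14,000,000 over the decade
```

$14 million is comfortably more than what a fleet of low-cost sensor units and the supporting analysis would cost.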

I leave you with some maps that we made for NYU's GIS Day 2015, which aggregate the sensor data from SQUID to the Neighborhood Tabulation Areas (NTAs) as provided by the Department of City Planning. We intend to follow design principles similar to Bladerunner's to develop an effective Common Operational Picture using SQUID data.

Thanks,

Varun Adibhatla, Argonaut


Towards a Vision Zero for NYC's Potholes

Elevator pitch: We built a low-cost sensor platform to passively measure street surface quality using an accelerometer, and added a camera to be able to literally see the "ground truth". This data and its visualization, we contend, will significantly improve decision making & resource allocation around existing pothole repair operations in New York City and attain "escape velocity" instead of maintaining orbital velocity, as Dr. Lucius Riccio elegantly describes it.

TL;DR

"290,210 potholes have been repaired this winter maintenance season"

reads the "The Daily Pothole" a Tumblr site maintained by the Department of Transportation. A comic strip illustrating the daily fill, spill and milling of the hard-working pothole crews provide comic relief to this age-old problem of cities. The Center on Municipal Government Performance describes street maintenance as the "most visible example of local government performance" and we agree. To that end, we describe the creation of SQUID - Street Quality Identification Device. Of course we needed an acronym!

The Daily Show's new host, Trevor Noah mocking NYC's potholes.

The motivation for the device came from initial forays into measuring bumpiness while riding the NYC Century. This semester at CUSP, we had an opportunity to take an excellent course by Arlene Ducao over at ITP titled "Quantified Self About Town", which delved into wearables and IoT in an urban environment. At the same time, Bob Richardson was teaching Civic Tech Management, and he mentioned in passing how cool it would be if the city could measure all the potholes. All we needed to do was put the two together in our heads and be a tad naive.

The concept was simple: a device that could passively record ride quality, but more importantly also take a picture, so that the readings could be backed up with some visual evidence.
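A minimal sketch of what such a paired record might look like; the field names and CSV layout here are illustrative, not SQUID's actual log format:

```python
import csv
import io
from datetime import datetime, timezone

def make_record(accel, gps, photo_path):
    """One log row pairing a vibration reading with the photo taken at
    the same instant, so every reading has visual evidence behind it.
    (Field names are illustrative, not SQUID's actual schema.)"""
    ts = datetime.now(timezone.utc).isoformat()
    return [ts, accel["x"], accel["y"], accel["z"],
            gps["lat"], gps["lon"], photo_path]

# Simulate appending one reading the way the device might.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["time", "ax", "ay", "az", "lat", "lon", "photo"])
writer.writerow(make_record({"x": 0.02, "y": -0.01, "z": 1.18},
                            {"lat": 40.7033, "lon": -73.9881},
                            "img_0001.jpg"))
print(out.getvalue())
```

The whole trick is in the pairing: a spike in the z-axis reading is just a number until you can pull up the photo taken at that same timestamp.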

We began with the Tessel, which I have described in detail while demo-ing a parking sensor, but we soon found out that we needed something faster and more robust to deliver our vision. Enter the Raspberry Pi. It took some familiarization with the Pi and some soldering help (Thanks, Justin!), but once we had the individual sensors emitting data, all we needed to do was work towards coherence.

That is easier said than done, as this proves...

The sunny morning of Monday, Apr 12, 2015 was our Mercury moment. After some advanced case-logic, SQUID v1 was born. Leveraging car-sharing, we jerry-rigged SQUID to the rear of the vehicle (we were still too skittish to expose it to the open). A group of Haitian protesters were campaigning outside 1 Pierrepont (CUSP Student HQ and Hillary Clinton Campaign HQ). At the end of an hour's drive, we had our first dataset. We then threw it into Tableau, where we had recently discovered the ability to show web pages on a Tableau dashboard. This is what emerged. A cobbled section of York Street in Dumbo served as a sanity check, as you will see in the video.

OK, so we were able to demonstrate the basic concept. This is our lump of clay taking shape. From here on, we start sculpting. The images, as you can see, are low-res and cover too large an area to discern anything.

On May 3rd, as we were braving finals, we went on another dry run, this time with SQUID outside the vehicle, the camera adjusted to take higher-res pictures and positioned towards the street. This time the Young Republican Party were outside to greet us, demanding Mrs. Clinton hand over her emails; at this point we took a Hillary protest as a good omen. We marched, nay drove, on to Cobble Hill, which, in hindsight, was the perfectly named neighborhood for SQUID. The results speak for themselves. Here, take a look...

So we are able to map out an entire neighborhood. What next? Can we do this for the entire city?

290,210 potholes

Back to that number on the Daily Pothole. It's a common metric that is often used to demonstrate the city's response to poor street quality. It's a fairly big number as well, but it's only a number: we do not know what it means objectively. It does convey the work done by the pothole crews, who tirelessly get the job done. Getting the job done "smartly" is what we are advocating here. This is a classic resource-allocation problem that is just waiting to be fixed. This is the current workflow of the life of a pothole complaint, based on what we know. This is what orbital velocity looks like; it's a glorified way of saying maintenance mode. At this rate, we will always be playing whack-a-mole with no end in sight.

...and this is how we attain escape velocity: towards a vision of eliminating potholes. Sure, it's a ridiculous moonshot of an idea, but even getting close means getting a lot better at fixing potholes. As per this WNYC story: "The American Society of Civil Engineers estimated in 2013 that motorists spend an estimated $500 dollars in extra costs driving on rough roads – in repairs, wear and tear and extra fuel."

As the recently released Internet of Things Manifesto states: "Design of the Win-Win-Win". We think this is possible here.

Moonshot time..

What does an accurate citywide pothole map enable?

Pothole crews (the link shows a cool video of them doing their day-to-day) probably spend a large chunk of their time driving to pothole complaints. Once they get to a complaint location, they fix all the potholes on that street section. What if an adjacent street or block has even more severely damaged pavement that no one bothered to report? There is little situational awareness for a problem that the city is committing $310.1 million to. Our approach offers a new way to tackle that enduring challenge. We built the tech, and we now want to focus on fixing the problem.

However much fun it may be for a citizen to report a pothole, we think that a more situated sensor platform, mounted on any agency's fleet, can collect this data at scale. Past and current efforts use expensive military-grade sensors and ominous-looking, bulky technology that overengineers the core issue: knowing where the potholes are! We built a "good enough" platform that conveys ground truth and can scale across the entire city. This is what ARGO aims to demonstrate and deliver.

Does New York City want to be the world's first city with all its potholes mapped? The only significant loss to the city: cool photo ops.

We would love to hear your feedback, ideas & opportunities to smooth your streets using SQUID.

Thanks!

PS: Here is the class blog for Quantified Self About Town, which offers a more in-depth look into this journey.


