ARGO @ NYC BIG APPS 2015
TLDR: I document our preparation and experiences at NYC Big Apps, where we presented SQUID as a finalist in the Connected Cities category and Learnr as a semi-finalist in the Civic Engagement category. This post is intended for anyone who may want to apply their creative energies to future BIG APPS competitions. We have provided links to our final pitch presentation and script, and to the code for the demo we created for the BIG APPS finals.
The pitch booths for Learnr & SQUID at NYC Big Apps semi-finals
Wednesday, Dec 2nd was an intense day for ARGO. We did not win at NYC Big Apps, but it gave us an opportunity to prepare for a larger stage. Congratulations to Tommy Mitchell at Citycharge, an idea that took shape during the Occupy Wall Street movement, and to all the other winners - supremely deserving! The entire BIG APPS experience was intense in a good way. While I wish we could have leveraged more of the network that BIG APPS provides a space for, we met some pretty awesome people during the semis and finals and heard their stories. I'm sure BIG APPS created many happy accidents to fuel NYC's epic civic tech scene.
NYU CUSP and the awesome SONYC team were also part of the roller coaster ride. The gathering at BAM CAFE before the big final pitch will be remembered as a palpable moment of nerves and adrenaline with a nice balance of camaraderie and competition.
Graham and I had spent the previous week agonizing over a presentation that would last 180 seconds, followed by 120 seconds of Q&A. The judging panel was a collection of very accomplished people with gobs of experience in tech, policy, government and academia. Our final presentation needed to be a tight pitch, controlled and rehearsed down to the syllable while not sounding robotic. BIG APPS also allowed us to present a demo of our final product. Since SQUID relies on being outside, where we could get a GPS fix, the added challenge was to show something that worked and gave the judges just enough of a peek at the idea for them to "get it" during the 1-2 minutes they had to evaluate our demo. We had a weekend and 2 evenings to put this together. Game ON!
Our eventual demo consisted of SQUID connected to a USB-powered LCD screen that I impulsively bought on Amazon as part of the Black Friday froth. The basic idea was an interactive demo with some real-time visual feedback of the accelerometer readings. We overlaid the video feed with a graph (generated in matplotlib) of the real-time accelerometer readings.
While this may look disjointed, it was a quick way of showing the sensors at work, i.e. the camera and accelerometer, and of giving someone with little prior understanding of SQUID the aha moment: that SQUID measures street quality using data from vibrations and imagery (a supplemental document was provided just to be sure :). It also gave me a chance to get my hands dirty coding up the demo.
The Raspberry Pi, in addition to being a full-fledged Linux computer that leverages the accomplishments of the open source community over the past 20 years, also has a thriving Python ecosystem. A great example of this is the picamera module, a Python interface to the Pi's camera module.
Before putting together this demo, I only had a vague idea of what we wanted to do, and there were no ready-made examples that we could quickly repurpose. The basic elements of the demo were:
- Display live imagery superimposed with real-time accelerometer readings.
- Package the entire thing into a self-contained unit that explains itself.
picamera allows you to easily annotate text or an image onto a video feed. HOWEVER, overlaying anything more complicated quickly becomes beast mode. Short of some pretty dense and customized C++ implementations, the options I found that could be implemented fast were limited.
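The "easy path" above looks roughly like this. This is a hedged sketch, not our exact demo code: the format_readings helper and the sample values are illustrative, and the import is guarded so the snippet can be read (and the string-formatting part run) off the Pi.

```python
# Sketch of picamera's built-in text annotation, the easy path described above.
# The guarded import means the camera portion only runs on a Raspberry Pi
# with picamera installed and the camera module attached.
try:
    import picamera
except ImportError:
    picamera = None

def format_readings(ax, ay, az):
    """Build an annotation string from three accelerometer readings (in g)."""
    return "ax={:+.2f}g ay={:+.2f}g az={:+.2f}g".format(ax, ay, az)

if picamera is not None:
    with picamera.PiCamera() as camera:
        camera.start_preview()
        # annotate_text draws the string directly onto the live video feed
        camera.annotate_text = format_readings(0.02, -0.01, 0.98)
```

This is fine for a line of text, but annotate_text cannot draw a graph - which is where add_overlay comes in.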
Displaying a real-time graph of sensor readings on top of the video feed was painful but eventually worked! In a nutshell:
- I generated a graph of the accelerometer readings and output it to an image
- I then overlaid this graph image on the video feed, refreshing it repeatedly via picamera's add_overlay function
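The two steps above can be sketched as follows. The render function runs anywhere (it uses matplotlib's off-screen Agg backend); the camera loop is shown in comments because it needs Pi hardware. The dimensions, layer/alpha values, and the read_accel() helper are illustrative assumptions, not our exact demo code. One real constraint from picamera's docs: overlay buffers must have a width that is a multiple of 32 and a height that is a multiple of 16, which 320x240 satisfies.

```python
# Render a graph of accelerometer readings to raw RGBA bytes suitable
# for picamera's add_overlay, then (on the Pi) refresh it in a loop.
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; no display required
import matplotlib.pyplot as plt

WIDTH, HEIGHT = 320, 240  # 320 % 32 == 0 and 240 % 16 == 0

def render_graph(readings):
    """Plot recent accelerometer readings and return raw RGBA bytes."""
    fig = plt.figure(figsize=(WIDTH / 100, HEIGHT / 100), dpi=100)
    ax = fig.add_subplot(111)
    ax.plot(readings)
    ax.set_ylim(-2, 2)  # assumed accelerometer range, in g
    fig.canvas.draw()
    buf = bytes(fig.canvas.buffer_rgba())
    plt.close(fig)
    return buf

# On the Pi, the loop would look roughly like this (read_accel is hypothetical):
# with picamera.PiCamera() as camera:
#     camera.start_preview()
#     overlay = camera.add_overlay(render_graph(read_accel()),
#                                  size=(WIDTH, HEIGHT), format="rgba",
#                                  layer=3, alpha=128)
#     while True:
#         overlay.update(render_graph(read_accel()))
```

Rendering to an in-memory buffer and updating the overlay in place avoids writing image files to the SD card on every frame, which matters on a Pi.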
I borrowed code from all over and repurposed it. That's it. The screen and other trappings worked out of the box. I want to belabor this point of repurposing code and reifying a vague concept into a prototype in a short time. I do not identify as a software developer, and I am not one. I find that I am way too restless and impatient to carefully implement beautiful complexity. Doing yoga does not fix this, I have learnt; it's innate - although the design patterns from more established software implementations are a great resource.
I relate to the quick and dirty school of thought - catch a whiff of an idea and then be persistent towards a minimal viable form of execution so that it "just works". This way of doing things is not comfortable, but it is FUN when things come together.
This post is an attempt to document that experience and address it to a non-technical audience. I want to demonstrate the many (messy) ways of programming and making stuff, to think about programming in unconventional ways that are not part of some prescriptive cookbook (although those help tremendously :), and finally to eliminate self-doubt through blind optimism and persistence. This is primarily intended for the programmatically challenged, who I happily identify with and learn from.
Eric S. Raymond - one of the pioneering evangelists of Linux and the early open source movement, and author of The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary - said this of programmers:
"Good programmers know what to write. Great ones know what to rewrite (and reuse)".
That is a loaded statement, and ESR is a provocative figure, but it gave me the mental currency to try stuff however zany, unintuitive and often of no practical use. It may not be the most "optimal" way of doing something, and that's ok.
Large swaths of the internet, I argue, were built this way. I am going to end this thought with yet another reference to Anthony Townsend's Smart Cities book, a source of so many great tech origin stories, which provides further evidence for this bottom-up approach to technology development (Function v Specification).
In the 1970s, telecommunications companies and academic computer scientists battled over the design of the future Internet. Industry engineers backed X.25, a complex scheme for routing data across computer networks. The computer scientists favored a simpler, collaborative, ad hoc approach. As Joi Ito, director of the MIT Media Lab, describes it: The battle between X.25 and the Internet was the battle between heavily funded, government backed experts and a loosely organized group of researchers and entrepreneurs. The X.25 people were trying to plan and anticipate every possible problem and application. They developed complex and extremely well-thought-out standards that the largest and most established research labs and companies would render into software and hardware. The Internet, on the other hand, was being designed and deployed by small groups of researchers following the credo “rough consensus and running code,” coined by one of its chief architects, David Clark. Instead of a large inter-governmental agency, the standards of the Internet were stewarded by small organizations, which didn’t require permission or authority. It functioned by issuing the humbly named “Request for Comment” or RFCs as the way to propose simple and light-weight standards against which small groups of developers could work on the elements that together became the Internet.
The above may ring true for some big breakthrough in the Internet of Things space as well and most of that ad-hoc energy exists today in nondescript DIY community forums. So, in the spirit of early internet innovation, we humbly issue an RFC to this post and the larger thinking behind SQUID and civic data science. Here is a video of everything coming together for the SQUID Big Apps demo.