Announcing Release of New Map Kibera Research: Open Mapping from the Ground Up


Earlier this year, I was fortunate to have the chance to research some of Map Kibera’s work over the past 8 years in the area of citizen accountability and transparency. I was especially interested in whether the maps made in Kibera had had an impact beyond the life of each project, since they were available online on OpenStreetMap and our own website, and distributed offline via paper maps and wall murals. I looked at three sectors we’d worked on — education, water and sanitation, and security — and interviewed those who had commissioned, been given, or discovered the related maps in some way. I looked particularly at whether citizens had been able to use the maps to improve accountability and governance. This research has now been published as part of the Making All Voices Count Practitioner Research and Learning series.

The results were surprising. I found that maps had been used for a wide variety of purposes which we hadn’t known about before, including arguing for policy changes and reallocating donor resources in Kibera.

The research process involved first trying to track the maps, which was not easy. The Map Kibera team, working with research assistant Adele Manassero, asked around in Kibera, looked on social media, combed through blog posts, and otherwise tried to find out who used our maps and why. Working with open data can be messy this way. We found a number of people who said they had indeed been working with our data. Some were from various NGOs; others were in government. We also followed up with those to whom we’d handed the map directly, like school heads and community leaders.

The schools map in particular, which had been by far the most widely distributed of our maps in paper form, had been used for many purposes. It had even been photocopied by the local education official to hand out to visitors. The local Member of Parliament had used it to reach out to the informal schools and create a WhatsApp group to communicate about key school information — and to petition Parliament for increased education resources for students in Kibera.

Part of the message here is that by sharing information widely, both digitally and on paper, AND with the backing and network of a trusted local team — Map Kibera — you can make room for impacts beyond a narrow project purpose usually associated with data collection. With all of the emphasis in ICT4D on designing directly for the end user, data collection is usually still done for the end user called the donor — or the organization’s HQ, or governmental higher-ups. Sometimes this data doesn’t even get used. In this case, I found that the better we targeted a variety of local potential “end users”, like leaders of local associations, teachers, and CBOs, the more we saw uptake and impact even without intensive chaperoning of the information. Maps wound up being repurposed. This is the way, I believe, open data is supposed to work.

I suppose then the question is: why did it work this way, when so often it doesn’t? I found that collection and sharing of information by trusted local residents helped lend validity to the information, improve trust and communications between government and citizens, and bring informal sectors into the light in a way that benefitted the community. If anyone wanted to know where the data came from, the folks collecting it were right there, visible, identifiable, and relatively unbiased. They could also be called upon to make corrections. Schools have indeed reached out, and the team recently made a number of updates to that data. The analog method of change — being able to call someone who lives in your neighborhood and refer to a paper visual — is key to local social impact, even if the information collected is digital.

I’ll be blogging about a number of other things I found interesting out of this research in the weeks to come. Stay tuned!


Looking Back with Making All Voices Count

I’ve recently had the opportunity to spend some time learning and thinking about Map Kibera’s work, thanks to a practitioner research grant from Making All Voices Count.

Those who work in the NGO and nonprofit sphere like me might not be surprised to learn that we’ve had precious little opportunity to evaluate our work over the past 8 years. So, I welcomed this chance to look into some of the impacts and possibly unintended directions that our work has taken during this time.

While I’ve written about Map Kibera a number of times from a reflective standpoint, I hadn’t had the chance to track down some of the specific results, especially who’s using our maps and data, and for what. We have a strong commitment to open data, meaning that you can easily access our information and maps either directly through our website or through OSM itself. That also means it’s hard for us to know who has used them (and if you have, but haven’t been in touch to let us know about it, please help me out by emailing me!).

I haven’t yet fully analyzed everything I’ve gathered through a series of interviews, focus group discussions, and by reviewing our social media and visitors logs over the years, but a few things have stood out so far. Here is just a sampling:

  1. We were able to track down a number of cases of data being used without our knowing about it. For open data, that’s a success, right? It means that there have been changes in actual programming for Kibera, especially in targeting interventions to specific geographical areas, finding local partners, and directing donor resources. Information by itself may not produce systemic changes, but it can redirect resources, even in an uncoordinated way, to the places that most need them. In other words, it can make aid more effective. With some coordination, this effect might be even stronger.

  2. With some support by intermediaries such as Map Kibera, information like maps can help produce larger systemic shifts. But the level of support required is a question – given that this isn’t typically well resourced. We had some impacts, particularly within the education system, that were large considering the amount of funding we had, but we couldn’t sustain intensive support. A larger question about the appropriate role of such an intermediary came up again and again.

  3. Trust among stakeholders is one major outcome, which has little to do with technology and a lot to do with relationship building. In this sense, information and maps are a kind of tool for getting on the same page, perhaps, or removing some of the bias on either side. There is a lot of mistrust in informal settlements (between citizens and government, citizens and NGOs or CBOs, schools and education officials, etc). This needs to be overcome for improvements in key sectors like education and water/sanitation, two areas I looked into.

  4. The value of “being recognized” or being made visible was something that came up repeatedly: a perceived legitimization through transparency. I think there are two things here: being able to speak out or have “voice” in the sense of self-representation; and becoming legible – that is, transparent – which may have more to do with knowing the facts than giving voice to opinions and perspectives.

  5. Keeping data up-to-date is a huge challenge. And it can hinder scaling up and expansion because of the effort required.

  6. Technology is still a challenge, in that most people don’t use the internet OR may have smartphones but still don’t use them to their full capacity. Offline outreach and printed materials are key, still.

  7. Those taking part in our projects over the years have seen a lot of personal benefit, and some of this has been unexpected (on my part). For instance, the value of contests and credentials – winning an online contest for a news story, even an obscure one without any prize, was a huge highlight, as was gaining press credentials or even a simple ID badge to be identified as a member of the organization.

Clearly there is a lot to unpack in each of the points above, and there are many more topics to explore as well! If this is of interest to you, stay tuned for the publication.


How to Improve Innovation Funding: lessons from the MakeSense project

I recently posted about the MakeSense pilot and our challenges trying to test the DustDuino air quality sensor in Brazil. The project brought up some of the limitations of the innovation funding landscape, and some potential ways that donors can better support technology projects to bring the greatest impact on the ground.

MakeSense was meant to test feedback loops from “citizen-led sensor monitoring of environmental factors” in the Brazilian Amazon, providing structured, accurate and reliable data to compare against government measurements and news stories in the Amazon basin. The project centered on developing, manufacturing and field testing DustDuino sensors already prototyped by Internews, and developing a dedicated site to display the results at OpenDustMap.

It may seem obvious that it was too ambitious to try to create a mass-produced hardware prototype with two types of connectivity and a documenting website, do actual community engagement and testing (in the Amazon), AND do further business development, all for $60,000, not to mention the coordination required. But it is also true that the funds typically available for innovation lend themselves to this kind of overreach.

Indeed, a more realistic proposal would have merely stated that the team would work out software and hardware bugs and establish key relationships and processes, clearly only a first step — though a critical one — toward a “feedback loop.” However, such a proposal may not be as exciting to donors. At the same time, for projects which have already come this far — which have a viable product and need to take the next several implementation and development steps — funding is not as easily available. Instead, funders may support a different team to start over from scratch with a similar concept rather than support the crucial yet less “exciting” growth phase of a project. If they do support a growth phase, they may expect the project to generate revenue prematurely.

Consortium projects are another trend that requires more consideration. Rather than simply expect a new team to know how to work well together, in spite of differences ranging from subject-area expertise and geographical base to business models and even basic assumptions about development, funders should instead consider direct support (financial and/or capacity) to consortium leadership alongside or as part of project funding. Our analysis of this project highlights the key role played by communication and teamwork, yet hardly ever does a funder request management plans or demonstrated experience in consortium leadership, or give special attention and resources to support the collaborative process. The more partners are included, the more difficult the process becomes, to the point where there may be a lack of buy-in and ownership of the project overall.

Good practice would be to support innovators throughout the process, including (reasonable) investment in team process (while still requiring real-life testing and results) and opportunities for further fundraising based on “lessons” and redesign from a first phase. It would also include an expectation that the team be reconfigured between stages, perhaps losing some members and gaining others, along with a clearly defined leadership process.

Supportive and intensive incubation, with honest assessment built in through funding for evaluations such as the one we published for this project, would go a long way toward better innovation results.

Funders should also require transparency and honest evaluation throughout. If a sponsored project or product cannot find any problems or obstacles to share publicly, they’re simply not being honest. Funders could go a long way toward making this kind of transparency the norm instead of the exception. In spite of an apparent “FailFaire”-influenced acculturation toward embracing failure and learning, the vast majority of projects still do not subject themselves to any public discussion that goes beyond salesmanship, often for fear of causing donors to abandon the project. Instead, donors could find ways to reward such honest self-evaluation and agile redirection.


Learning from the MakeSense DustDuino Air Quality Sensor Pilot in Brazil


Introducing new technology in international development is hard. And all too often, the key details of what actually happened in a project are hidden — especially when the project doesn’t quite go as planned. As part of the MakeSense project team, we are practicing transparency by sharing all the twists, turns and lessons of our work. We hope it is useful for others working with sensors and other technology, and inspires greater transparency overall in development practice.

Please have a look at GroundTruth’s complete narrative history of the MakeSense pilot here on Medium, or download a PDF of the full report here.

The MakeSense project was supported by Feedback Labs, and the project team included GroundTruth, Internews, InfoAmazonia, FrontlineSMS, SIMLab, and Development Seed. MakeSense was meant to test feedback loops from “citizen-led sensor monitoring of environmental factors” in the Brazilian Amazon, providing structured, accurate and reliable data to compare against government measurements and news stories in the Amazon basin. Over the course of the project, DustDuino air quality sensor devices were manufactured and sent to Brazil. However, the team made several detours from the initial plan, and ultimately we were not able to fulfill our ambitious goals. We did succeed in drawing some important lessons from the work.

Lessons Learned:

Technical Challenges

  • Technical Difficulties Are to Be Expected

Setting up new hardware is not like setting up software: when something goes wrong, the entire device may have to go back to the drawing board. Delays are common and costly. This should be expected and understood, and even built into the project design, with adequate developer time to work out bugs in the software as well as the hardware. At the same time, software problems also require attention and resources to work out; this became an issue for this project, which often relied upon volunteer backup technical assistance.

  • Simplify Technical Know-how Required for Your Device

The project demonstrated that it is important to aim for the everyday potential user as soon as possible. The prototype, while mass-produced, still required assembly and a slight learning curve for those not familiar with its components, and also needed some systems maintenance in each location. Internews plans for the DustDuino’s next stage to be more “plug-and-play” — most people don’t have the ability to build or troubleshoot a device themselves.

  • Consider Data Systems in Depth

This project suffered from a data and pipeline system that was not well thought out, and which required much more investment than initially anticipated. For instance, the sensor was intended to send signals over either Wi-Fi or GSM, but the required code for the device itself, and the destination of the data, shifted throughout the project. Getting a working data pipeline and online display running consumed a great deal of the project budget and ultimately stalled.
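To make the point concrete, here is a minimal sketch, in Python, of the kind of fixed payload contract that could have been agreed up front: a sensor posting readings to a single stable endpoint. The field names and endpoint URL are hypothetical illustrations, not taken from the actual DustDuino firmware or OpenDustMap API.

    import json
    import time
    import urllib.request

    # Hypothetical payload contract: if the schema and destination are fixed
    # early, Wi-Fi and GSM firmware variants can target the same endpoint
    # instead of chasing a moving target throughout the project.
    def build_reading(sensor_id, pm25, pm10):
        return {
            "sensor_id": sensor_id,         # stable device identifier
            "timestamp": int(time.time()),  # Unix time, UTC
            "pm25_ugm3": pm25,              # PM2.5 in micrograms per cubic meter
            "pm10_ugm3": pm10,              # PM10 in micrograms per cubic meter
            "schema_version": 1,            # bump only with coordinated changes
        }

    def post_reading(reading, endpoint):
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(reading).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example (placeholder endpoint, not a real OpenDustMap URL):
    # post_reading(build_reading("dd-001", 12.4, 30.1), "https://example.org/api/readings")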

  • Prioritize Data Quality

The production of reliable, scientifically valid data also needs to be well planned for. This pilot showed how challenging it can be to get enough data, and to correct issues in hardware that may interfere with readings. Without this very strong data, it is nearly impossible to successfully promote the prototype, much less provide journalists and the general public with a tool for accountability.
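As a rough illustration of what “planning for data quality” can mean in practice, here is a small Python sketch of a quality gate that rejects a batch of readings that is too small or physically implausible. The thresholds and field names are assumptions for the sketch, not calibrated values from the pilot.

    # Illustrative quality gate for a batch of particulate readings.
    # MIN_SAMPLES and PM_RANGE are assumptions, not calibrated values.
    MIN_SAMPLES = 24          # e.g. require a full day of hourly readings
    PM_RANGE = (0.0, 1000.0)  # plausible ug/m3 bounds; outside suggests a fault

    def usable_batch(readings):
        """Accept a batch only if it is large enough and physically plausible."""
        if len(readings) < MIN_SAMPLES:
            return False
        for r in readings:
            for key in ("pm25_ugm3", "pm10_ugm3"):
                value = r.get(key)
                if value is None or not (PM_RANGE[0] <= value <= PM_RANGE[1]):
                    return False
        return True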

Implementation

It is important to be intentional about technical vs. programmatic allocation, and not to underestimate the need for implementation funding. It is often the case that software and hardware development use up the majority of a grant budget, while programmatic and implementation work, or field-based “design with” processes, get short shrift in the inception phase. Decisions about whether to front-load the technology development, or to build quick-but-rough prototypes in order to get them to the field quickly, as referenced in the narrative, should be made intentionally and consciously. Non-technical partners or team members should be aware of the incentives for technical team members to emphasize hardware/software development over often equally critical local engagement and field testing processes, and ideally should have an understanding of the basic technical project requirements and operations. This project suffered from differing understandings of this prioritization and timeline.

Funding Paralysis

The anticipation of a need for future funding dominated early conversations, highlighting a typical bind: funding available tends to skew to piloting with no follow-up opportunities for successful pilots. This means that before the pilot even produces its results, organizations must begin to source other funds. So, they must allocate time to business development as well, which can be difficult if not impossible, and face pressure to create marketing materials and other public relations pieces. This can also in some cases (although not with this pilot) lead to very premature claims of success and a lack of transparency. During this project, there was some disagreement among team members about how much to use this pilot fund to support the search for further investment — almost as a proposal development fund — and how much to spend on the actual proof of concept through hardware/software development and field testing.

This is a lesson for donors especially: when looking for innovative and experimental work, include opportunities for scale-up and growth funding or have a plan in mind for supporting your most successful pilots.

Teamwork

A consortium project is never easy. A great deal of time is required simply to bring everyone to the same basic understanding of the project, and this time should be adequately budgeted for from the start. Managing such a team is a challenge, and experienced, highly organized leadership helps the process. FrontlineSMS (which received and managed the funding from Feedback Labs) specifically indicated that it did not sufficiently anticipate this extensive requirement. Implementing a flat decision-making structure was also a huge challenge for this team. Though it was in the collective interest to achieve major goals, like follow-on funding, community engagement, and a working prototype, there were no resources devoted to coordinating the consortium nor any special authority to make decisions, sometimes leading to members operating at cross purposes. Consistent leadership was lacking, and decision-making and operational coordination were very hard given quite divergent expectations for the project and widely varying skills and experience. This is not to say that consortium projects are a poor model or that teams should not use a flat structure, but leading or guiding such a team is a specialty role which should be well considered and resourced.

Part of the challenge in this case was that the lead grantee role in the consortium actually shifted in 2015 from FrontlineSMS to SIMLab, its parent company, when the FrontlineSMS team were spun out with their software at the end of 2014. The consortium members were largely autonomous, without regular meetings and coordination until July 2015, when SIMLab instituted monthly meetings and more consistent use of Basecamp.

Communications

Set up clear communications frameworks in advance, including bug reporting mechanisms as well as correction responsibilities. Delays in reporting bugs with the Development Seed and FrontlineSMS APIs contributed significantly to the instability of the sensors in the field. Strong information flow about problems, and speedy remote decision-making, were never really achieved. At the same time, efficiency in such consortia is paramount, so that time isn’t taken from operational matters by coordination meetings — a balance must be struck. This project eventually incorporated the use of Basecamp successfully.


Let’s Build the Global Goals Data Census

The Global Goals have launched, and data is a big part of the conversation. And now, we want to act … create and use data to measure and meet the goals. I’m presenting here a “sketch” of a way to track what data we have, what we can do with it, and what’s missing. It’s a Global Goals Data Census: a bit of working, forked code to iterate and advance, and to raise a bunch of practical questions.

image by @jcrowley

The Global Partnership for Sustainable Development Data launched last week, to address the crisis of poor data around the Goals. Included were U.S. Government commitments to “innovating with open geographic data”. In the run-up, events contributed to building practical momentum, like the Africa Open Data Conference, Con Datos, and especially the SDG Community Data Event here in DC, facilitated by the epic Allen Gunn. And the Solutions Summit gathered a huge number of ideas, many of which touch on data.

Among many interesting topics, the SDG Community Data Event developed a range of ideas and commitments, including:

  • Put all the data in one place
  • Create an inventory of indicators: what exists, and which goals they are relevant for
  • Build a global goals dashboard

The action of taking stock of what data we have, and what we need, looks like a perfect place to start.

Global Goals Data Census

OKFN’s Open Data Census is a service/software/methodology for tracking the status of open data locally/regionally/globally. I forked the opendatacensus code, got it running on our dev server, made a few presentation tweaks, and configured it (all configuration is done via a Google Spreadsheet).
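For a feel of what spreadsheet-driven configuration looks like, here is a minimal Python sketch that pulls a sheet published to the web as CSV and turns it into a config dict. The URL and the two-column key/value layout are placeholder assumptions, not the actual opendatacensus config schema.

    import csv
    import io
    import urllib.request

    # Placeholder URL for a Google Spreadsheet published to the web as CSV.
    CONFIG_CSV_URL = "https://docs.google.com/spreadsheets/d/EXAMPLE/pub?output=csv"

    def load_config(url):
        """Read key/value configuration rows from a published spreadsheet."""
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8")
        # Assumes two columns named "key" and "value" (hypothetical schema).
        return {row["key"]: row["value"] for row in csv.DictReader(io.StringIO(text))}

    # config = load_config(CONFIG_CSV_URL)
    # print(config.get("title"))  # e.g. "Global Goals Data Census"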

Each row of the Global Goals Data Census is a country, and each column is one of the 17 Goals. Each Goal links to a section of the SDGs Data, built off a machine-readable listing of all the goals, targets, and indicators.
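To make the structure concrete, here is a minimal Python sketch of that grid: one row per country, one cell per Goal, each cell holding at most one dataset entry (the one-dataset-per-cell limitation comes up again in the questions below). The country and dataset names are placeholders.

    import csv

    GOALS = ["Goal %d" % n for n in range(1, 18)]  # the 17 Global Goals

    # One row per country, one column per Goal; each cell records the single
    # dataset the Open Data Census model allows (None where nothing is tracked yet).
    census = {
        "Kenya": {goal: None for goal in GOALS},
        "Brazil": {goal: None for goal in GOALS},
    }
    census["Kenya"]["Goal 4"] = "Education facilities dataset (placeholder entry)"

    # Export in the same row/column layout as the census page.
    with open("census.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Country"] + GOALS)
        for country, cells in census.items():
            writer.writerow([country] + [cells[g] or "" for g in GOALS])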

This is truly a strawman, a quick iteration to get development going. It should work, so give it a quick test to help formulate thoughts for what’s next.

Global Goals Data Census Config

This exercise brought up a bunch of ideas and questions for me. Would love to discuss this with you.

  • Does it make sense to track per Indicator, in addition to the overall Goal? There has been a lot of work on Indicators, and they will be officially chosen next year.
  • There may be multiple available Datasets per Goal or Indicator. The OpenDataCensus assumes only one Dataset per cell.
  • For the Global Goals, are there non-open Datasets we should consider, for legitimate reasons (like privacy)?
  • Besides tracking Datasets, we want to track the producers, users and associated organizations. The OpenDataCensus assumes data is coming from one place (the responsible government entity); the landscape for the Goals is much more complex.
  • What is the overlap with the Global Open Data Index? Certainly the Goals overlap a little with the Datasets in the Global Census, but not completely. And the Index doesn’t cover every country that has signed up for the Goals. Something to definitely discuss more.
  • Undertaking the Census, filling the cells, is the actual hard work. Who is motivated to take part, and how can we best leverage related efforts?
  • Many relevant data sets are global or regional in scope. How best to incorporate them in a nationally focused census? How to fit in datasets like OpenStreetMap, which are relevant to many Goals?
  • There is an excellent line of discussion on sub-national data. There are also non-national entities which may want to track Goals separately. How to incorporate?
  • What other kinds of questions do we want to ask about the Datasets, beyond how Open they are? Should we track things like the kind of data (geo, etc), the quality, the methodology, etc?
  • Where could the Global Goals Data Census live? A good use of http://data.org/?
@webfoundation on open data and the SDGs

Does this interest you? Let’s find each other and keep going. Comment here, or file issues. One good upcoming opportunity is the Open Government Partnership Summit … it will be a great time to focus and iterate on the Global Goals Data Census. There will be an effort to expand adoption of the Open Data Charter, “recognizing that to achieve the Global Goals, open data must be accessible, comparable and timely for anyone to reuse, anywhere and anytime”. I’ll be there with lots of mapping friends and ready to hack.
