• OKFestival 2014 Stories: Sensor Journalism: Communities of Practice

    LilyPad Arduino main board

    This blog post is written by Lily Bui, M.S. Candidate in Comparative Media Studies at the Massachusetts Institute of Technology, and it's cross-posted from the Tow Center for Digital Journalism blog.
    Last July in Berlin, Lily Bui gave a session at the Open Knowledge Festival called Sensors, Uncensored. The session discussed different methods of using sensors to generate data in support of journalistic inquiry and public concerns. The attendees of this session organized a Google group to stay in touch, which has now grown into a sensor journalism community of practice, which Lily discusses below. The group is international and multidisciplinary, and its members have recently collaborated on a pending Wikipedia article on sensor journalism.

    As seen in the Tow Center's Sensors & Journalism report, the growing availability and affordability of low-cost, low-power sensor tools enables journalists and publics to collect and report on environmental data, irrespective of government agency agendas. The issues that sensor tools already help measure are manifold, e.g. noise levels, temperature, barometric pressure, water contaminants, air pollution, radiation, and more. When aligned with journalistic inquiry, sensors can serve as useful tools to generate data to contrast with existing environmental data or to provide data where previously none existed. While there are certainly various types of sensor journalism projects with different objectives and outcomes, the extant case studies (as outlined in the Tow report) provide a framework on which to model forthcoming projects.
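
    As a minimal sketch of what this kind of data collection can look like in practice, the snippet below logs readings to a CSV file, assuming a hypothetical Arduino-style board that prints one numeric reading per line over USB serial; the port name and file layout are illustrative, not taken from any of the projects above:

        # Log readings from a sensor board to a timestamped CSV file.
        # Assumes a hypothetical board printing one numeric reading per
        # line over USB serial; port name and fields are illustrative.
        import csv
        import time

        import serial  # pyserial

        PORT = "/dev/ttyUSB0"  # depends on your machine
        BAUD = 9600

        with serial.Serial(PORT, BAUD, timeout=2) as ser, \
                open("readings.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "reading"])
            while True:
                line = ser.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue  # read timed out; keep waiting
                try:
                    value = float(line)
                except ValueError:
                    continue  # skip garbled lines
                writer.writerow([time.time(), value])
                f.flush()  # keep the file current for live analysis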

    But it may not be enough to simply identify examples of this work.

    Invariably, just as important as building a framework for sensor journalism is building a community of practice for it, one that brings together key players to provide a space for asking critical questions, sharing best practices, and fostering connections and collaborations. Journalism certainly doesn't happen in a vacuum; it is best served by collaborations with and connections to outside sources. A sensor journalism community has already begun to emerge at a grassroots level, spanning multiple countries, disciplines, and issues of concern. We might look to the nodes in this community to outline a protean map of stakeholders in the field:

    Journalists.

    Since public opinion can be driven by press coverage, and since storytelling is a central part of news, journalists, documentarians, and media makers with an interest in sensor data play an important role in shaping how the public perceives certain issues. More than that, the media also have the ability to highlight issues that may have slipped under the radar of policymakers. In this scenario, sensor data could potentially serve as evidence for or against a policy decision. Most sensor journalism case studies so far have relied on conventional formats (print, online) to convey the data and the story, but there is much room for experimentation, e.g. sensor-based documentary, radio, interactive documentary, data visualization, and more.

    Educators.

    In the classroom, there is an undeniable opportunity to cultivate a generation of journalists and media makers who are unintimidated by hardware and technology. Not only this — the classroom also becomes an ideal place to test technology without being beholden to the same restrictions or liabilities as professional newsrooms. Educators across the U.S. have begun incorporating DIY sensors into classroom projects (see Emerson College, Florida International University, and San Diego State University projects), the results of which touch on many of the same questions that professional journalists encounter when it comes to sensor tools. The teaching practices applied to sensor journalism can also be the foundations of training models for professional journalists and civic groups seeking to investigate issues.

    Hardware developers.

    Because hardware developers design and build the tools that journalists and others would potentially be using, they have a stake in how the tool performs downstream of development. Journalists can also collaborate with hardware developers in identifying tools that would be most helpful: journalists may have specific requirements for data accuracy, data resolution, range of measurement, or the maturity of the equipment. Likewise, hardware experts can recommend tools that provide access to raw data and transparent interpretation algorithms. On the flip side, some hardware developers, particularly in the open source community, may help identify potential environmental issues of concern that then inform journalists' research. Recently, a conversation about certification of sensors, which originated within the hardware development community, crystallized around the notion of how to instantiate trust in open sensor tools (or sensors in general) when used for various purposes, journalism included. This is telling of how an open dialogue between hardware developers and journalists might be beneficial in defining these initial collaborative relationships.

    Researchers.

    Since using sensor tools and data in journalism is new, there is still significant research to be done on the effectiveness of such projects, from both a scientific/technological standpoint and one of media engagement and impact. Researchers are also best poised, within academia, to examine tensions around data quality/accuracy, sensor calibration, collaborative models, etc., and to help provide critical feedback on this new media practice.

    Data scientists, statisticians.

    Since most journalists aren't data scientists or statisticians by training, collaborations with data scientists and statisticians have been and should be explored to ensure quality analysis. While some sensor journalism projects are more illustrative and don't rely heavily on data accuracy, others that aim to affect policy depend more heavily on such considerations. Journalists working with statisticians to qualify data could produce more defensible statements and, potentially, better-grounded policy decisions.

    Activists, advocates.

    Because many open sensor tools have been developed and deployed at the grassroots level, and because there is a need for alternative sources of data (sources that are not proprietary and closed off to the public), activists play a key role in the sensor journalism landscape. Journalists can sometimes become aware of issues from concerned citizens (as with the Sun Sentinel's “Above the Law” series about speeding cops); therefore, it's essential to cultivate a space in which similarly concerned citizens can voice and discuss concerns that may need further investigation.

    Urban designers, city planners, architects.

    Many cities already have sensor networks embedded within them. Some of the data from these sensors are proprietary, but some data are publicly accessible. Urban designers, city planners, and architects look to data for context on how to design and build. For instance, the MIT SENSEable City Lab is a conglomerate of researchers who often look to sensor data to study the built environment. Sensor data about environmental factors or flow can help inform city design and planning decisions. Journalists or media makers can play a role in completing the feedback loop — communicating sensor data to the public as well as highlighting public opinions and reactions to city planning projects or initiatives.

    Internet of Things.

    Those working in the Internet of Things space approach sensor networks on a different level. IoT endeavors to build an infrastructure that includes sensors in almost everything so that devices can interact better with people and with each other. At the same time, IoT infrastructures are still in development and the field is just beginning to lay its groundwork in the public consciousness. Imagine motion sensors at the threshold of your house that signal to a network that you're home, which then turns on the devices that you most commonly use so that they're ready for you. Now imagine that on a neighborhood or city scale. Chicago's Array of Things project aims to equip the city with environmental sensors that can report back data in real time, informing residents and the city government about various aspects of the city's performance. What if journalists could have access to this data and serve as part of a feedback loop back to the public?
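
    As a rough sketch of the plumbing such an infrastructure implies, here is a sensor node reporting a reading to a city data service over HTTP; the endpoint URL and payload schema are invented for illustration and are not drawn from the Array of Things:

        # Publish a sensor reading to a (hypothetical) city data endpoint.
        # The URL and payload schema are invented for illustration; real
        # deployments like the Array of Things define their own interfaces.
        import time

        import requests

        reading = {
            "sensor_id": "node-42",      # hypothetical node name
            "timestamp": time.time(),
            "temperature_c": 21.7,       # would come from real hardware
        }
        requests.post("https://city-data.example.org/readings",
                      json=reading, timeout=10)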

    By no means is this a complete map of the sensor journalism community. One would hope that the network of interested parties in sensor journalism continues to expand and include others — within policy, legacy news organizations, and more — such that the discourse it generates is a representative one that can both challenge and unite the field. Different methodologies of collecting data with sensors involve different forms of agency. In some sensor journalism scenarios, the agents are journalists; in others, the agents are members of the public; and in others yet, the agents can be governments or private companies. Ultimately, who collects the data affects data collection methods, analysis of the data, and accessibility of the data. No matter what tools are used — whether they are sensors or otherwise — the issues that journalists seek to examine and illuminate are ones that affect many, and on multiple dimensions (individual, local, national, global). If we are truly talking about solving world problems, then the conversation should not be limited to just a few. Instead, it will take an omnibus of talent and problem solving from various disciplines to pull it off.

    References

    Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014
    Chicago's Array of Things project
    Sun Sentinel's “Above the Law” series

  • Open Knowledge Festival 2014 report: out now!

    Today we are delighted to publish our report on OKFestival 2014!

    Open Knowledge Festival 2014 at the Kulturbrauerei in Berlin.

    This is packed with stories, statistics and outcomes from the event, highlighting the amazing facilitators, sessions, speakers and participants who made it an event to inspire. Explore the pictures, podcasts, etherpads and videos which reflect the different aspects of the event, and uncover some of its impact as related by people striving for change – those with Open Minds to Open Action.

    Want more data? If you are still interested in knowing more about how the OKFestival budget was spent, we have published details about the event's income and expenses here.

    If you missed OKFestival this year, don't worry – it will be back! Keep an eye on our blog for news and join the Open Knowledge discussion list to share your ideas for the next OKFestival. Looking forward to seeing you there!

  • OKFestival 2014 Stories: OpenAfrica – a call to consolidate the African open network

    This blog post is written by Michelle Willmers, Project Manager at OpenUCT Initiative, and is cross-posted from the OpenUCT Initiative blog.

    The OpenUCT Initiative has in recent months been fortunate to participate in a number of workshops and events around open access, open science and open data in the African and developing country context.

    Group Photo – Open Data in Developing Countries Research Network Workshop, Berlin, July 14th/15th 2014 (flickr)

    Most recently, Francois van Schalkwyk and I participated in the IDRC/World Wide Web Foundation's Open Data in Developing Countries (ODDC) network meeting, which took place in Berlin to coincide with the Open Knowledge Festival, a biennial global event organized by the Open Knowledge Foundation.

    The slogan of this year's OKFest was ‘Open Minds to Open Action’. It was inspiring to be a part of what felt like quite a historic event, and there was a tangible sense of excitement around what might be possible if this extremely talented and passionate community of activists, researchers and open advocates could harness their collective energies to promote openness and improved access to information.

    Against this backdrop, network members from the African ODDC projects got together in Berlin to discuss what we could be doing to promote the open agenda and consolidate the open network in Africa – particularly in the emerging areas of open science and open data. It was acknowledged that while there have been a number of funder-driven initiatives, workshops and projects across Africa, it was incumbent upon us to move to action in joining the dots between these initiatives and doing more to consolidate the Africa network.

    In short, discussion amongst the African participants surfaced a general sense of frustration around:
    (a) Lack of cohesion in African “open” projects and research initiatives — i.e. we can see increasing pockets of sophisticated activity but no real initiative/conversation to consolidate the agenda.
    (b) Reliance on funders and partners from the North to stimulate the local conversation.
    (c) The need to take the conversation around openness outside of the purely academic context in order to include NGOs/CBOs as well as private/corporate partners.

    As a small first step to help address this issue, I volunteered to share a public list of Africa-based academics, university managers, advocates and practitioners that I have interacted with at “Open” events in recent months. The list is available here.

    It is a modest start, but we are hoping that African colleagues will add their names to this live database, which will hopefully be of use in surfacing a local network and providing contacts for partners from other continents. Africa is an enormous continent and finding key people in niche areas can be one of the main challenges in penetrating and collaborating within this environment.

    In addition to adding names to the directory, we are also encouraging anybody interested in the African conversation to tweet items of local interest using the #OpenAfrica and #scholarAfrica hashtags.

    Should you be interested, preliminary insights from the Open Data in Developing Countries initiative can be accessed here.

    A Storify from the special ODDC session at OKFest is available here.

    (CC-BY-SA)

  • OKFestival 2014: we made it! A write-up & Thank You note

    Open Knowledge Festival 2014! We built it, made it and ran it – it was a blast, thank you!

    • 1056 participants from 60 countries
    • 215 facilitators and moderators
    • 17 Programme Team members
    • 70 volunteers

    made it all happen. Who says that numbers are dry? Just by writing them down, our hearts are melting.

    Group work! – Pic by Gregor Fischer

    Six weeks have passed since the end of OKFestival 2014; many of you participated in our feedback survey, we have all caught up on sleep, and we are now hard at work on the public post-event report, which will be shared on the festival website in the next few weeks (keep your eyes peeled!).

    At the festival, we tried a lot of experiments, and experimenting is both risky and thrilling – and you were up for the challenge! So we thought it was time to take a moment to look at what we built together and celebrate the challenges we bravely took on and the outcomes that came out of them (and, yes, there are also lessons from things which could have gone better – is there any event with bullet-proof WiFi? Can a country not known to be tropical and not used to air conditioning experience a heat wave on the two days out of 365 when you run an event?)

    Rocking selfies! – Pic by Burt Lum

    Summing it up:

    • an event for the whole open movement: we were keen to be the convenor of a global gathering, welcoming participants from all around the world and a multitude of folks from open communities, organisations, small and big NGOs, governments, grassroots initiatives, as well as people new to the topic and willing to dive in. We wanted to create an environment connecting diverse audiences, thus enabling a diverse group of thinkers, makers and activists to come together and collaborate to effect change.

    Ory Okolloh & Rufus Pollock fireside chat – Pic by Gregor Fischer

    • hands-on and outcome-driven approach: we wanted the event to be an opportunity to get together, make, share and learn with – and from – each other and get ready to make plans for what comes next. We didn't want the event to be simply wonderful, we also wanted it to be useful – for you, your work and the future of the open movement. We've just started sharing a selection of your stories on our blog, and more is yet to come this month with the launch of our public post-OKFestival report, filled with the outcome stories you told us in the weeks after the event – who you met, what you started to plan, and the new projects born at the festival that you're already working on as we speak!

    Meeting, talking, connecting! – Pic by Gregor Fischer

    • narrative streams: We made a bold choice – no streams-by-topic, but streams following a narrative. The event was fuelled by the theory that change happens when you bring together knowledge – which informs change – tools – which enable change – and society – which effects change. The Knowledge, Tools and Society streams aimed to explore the work we do and want to develop further beyond the usual silos which streams-by-topic could have created. Open hardware and open science, open government and open sustainability, open culture and open source, arts and privacy and surveillance.

    Your vote, your voice! – Pic by Gregor Fischer

    • crowd-sourced programme and participatory formats and tools (and powerpoints discouraged): we encouraged you to leave the comfort zone – no written presentations read in sync with slides, but instead action-packed sessions in which all participants contributed their knowledge to work to be done together. We shared tips and tricks about the creation and facilitation of such formats and hosted hangouts to help you propose your ideas for our open call – and hundreds of community members sent their proposals! Also, in the most participatory of spirits, OKFestival had its own unconference, the unFestival, run by the great DATA Uruguay team, who complemented our busy core programme with a great space where anyone could pitch and run her/his own emerging session on the spot, giving room and time to great newborn ideas and plans. And a shout out also goes to a couple of special tools: our etherpads – according to the OKFestival Pad of Pads, 85 pads were co-written and worked in – and our first code of collaboration, which we hope will accompany us in future ventures too!

    Green volunteering power – always on! – Pic by Gregor Fischer

    • diversity of backgrounds, experiences, cultures, domains: months before we started producing the festival, we started to get in touch with people from all around the world who were running projects we admired, and with whom we'd never worked before. This guided us in building a diverse Programme Team first, and in receiving proposals and financial aid applications from many new folks and countries later on. This surely contributed to the most exciting outcome of all – a really international crowd at the event, people from 60 countries, speaking dozens of different languages. Different backgrounds enriched everybody's learning and networking and nurtured new collaborations and relationships.

    Wow, that was a journey. And it's just the beginning! As we said, OKFestival aimed to be the fuel, the kick-off, the inspiration for terrific actions and initiatives to come, and now it's time to hear some of the most promising stories and projects started there!

    You can get a taste by following the ever-growing OKFestival Stories article series on our blog, and be ready for more: in the next weeks we'll publish more outcomes, interviews, quotes and reports from you, the protagonists of it all.

    Thank you again, and see you very soon!

    Your OKFestival Team

  • OKFestival 2014 Stories: Towards a standard open decisions API

    This post was collaboratively written by Jogi Poikola and Markus Laine of Open Knowledge Finland, James McKinney of Open North, and Scott Hubli, Jared Ford, and Greg Brown of the National Democratic Institute. It’s cross-posted from the OpeningParliament Blog.


    At this year's Open Knowledge Festival — a biennial gathering of open government advocates — there was considerable interest in moving toward greater standardization of APIs (application programming interfaces) relating to government decision-making processes. Web APIs help promote an open architecture for sharing content and data between communities and applications. Standardization of APIs for government decision-making data would allow tools built by civic innovators or governments to analyze or visualize data about government decision-making to be used across multiple jurisdictions, without needing to re-program the tool to accommodate differing data formats or ways of accessing the data.

    Most government decision-making procedures involve similar processes (meetings, requests for public comment, etc.), decision-points (committee hearings, committee meetings, plenary sessions, etc.) and supporting documentation (agendas, draft legislation, information on voting records, etc.). Standardizing the ways that these types of information are structured allows tools for visualizing data about open government decision-making to be used across jurisdictions, as well as facilitating comparison of data and information.
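
    To make the idea concrete, a standardized record for a single decision point might look something like the sketch below; the field names are purely illustrative, not drawn from any existing specification:

        # A purely illustrative record for one decision point, showing the
        # kinds of fields a standardized open-decisions format might carry.
        # Field names here are hypothetical, not from an existing standard.
        decision_record = {
            "body": "City Council Planning Committee",
            "meeting_date": "2014-09-02",
            "agenda_item": "Zoning amendment 2014/118",
            "documents": [
                {"type": "agenda", "url": "https://example.org/agenda.pdf"},
                {"type": "draft_legislation", "url": "https://example.org/draft.pdf"},
            ],
            "votes": {"yes": 7, "no": 2, "abstain": 1},
            "outcome": "approved",
        }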

    To discuss the state of play with respect to open government decision-making APIs, Open Knowledge Finland, Open North, and the National Democratic Institute organized a session at the Open Knowledge Festival 2014 in Berlin to explore the possibilities for moving toward a global standard for APIs that deal with data on government decision-making.

    ONGOING EFFORTS

    The session began with a brief discussion of current projects that seek to create international standards for either tools or specific types of government data, including Popolo, Poplus, the Open Decisions API used in Helsinki and Jyväskylä, and the Open Civic Data project.

    Open Civic Data (OCD) provides an identifier system that can reliably identify political jurisdictions, which can be used to more easily link data on people, events, bills, and more. This project relies in part on Popolo, an international open government data specification that covers information related to the legislative branch, such as motions, votes, and organizational structures. While OCD and Popolo provide standards for data, a recently launched initiative called Poplus builds civic tech components, or small, generic technology tools that can be easily reused regardless of context, including tools to store and organize legislative bills or transcripts.
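
    For a flavour of the OCD identifier system, division identifiers are structured strings of typed segments that are easy to generate and parse; the specific division and segment types shown below are illustrative examples, not necessarily registered identifiers:

        # Open Civic Data division identifiers are structured strings of
        # typed segments; the division shown here is an illustrative
        # example, not necessarily a registered identifier.
        division_id = "ocd-division/country:fi/municipality:helsinki"

        # Splitting an identifier into its typed components:
        scheme, _, path = division_id.partition("/")
        segments = dict(part.split(":", 1) for part in path.split("/"))
        print(scheme)    # ocd-division
        print(segments)  # {'country': 'fi', 'municipality': 'helsinki'}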

    Similarly, the city of Helsinki has developed an Open Decisions API (called OpenAHJO), a set of standards for decision-making data, including agendas, meeting documents, and other relevant types of information that speak to decision-making processes in municipal governments. Currently, this API is being used in the Finnish cities of Helsinki and Jyväskylä, with more cities in Finland expected to adopt the standard in the near future. Thus far, using the API for city data has made it easier for city officials to locate and use certain data and has simplified how citizens find and engage with city data. While the API currently works exclusively with municipal-level data, there is no reason why such a standard couldn't be adapted to work with different levels of government, such as national-level parliaments or legislatures.
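
    As a sketch of what consuming such an API can look like, the snippet below lists recent meetings from a JSON endpoint; the base URL, resource name, and field names are assumptions based on the public OpenAHJO documentation at the time, so treat them as illustrative:

        # List recent meetings from Helsinki's OpenAHJO decisions API.
        # The base URL, resource name, and field names are assumptions;
        # consult the current documentation before relying on them.
        import requests

        BASE = "http://dev.hel.fi/paatokset/v1"  # assumed base URL

        resp = requests.get(BASE + "/meeting/", params={"limit": 5})
        resp.raise_for_status()

        for meeting in resp.json().get("objects", []):
            print(meeting.get("policymaker_name"), meeting.get("date"))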

    Other relevant examples are Akoma Ntoso, an XML schema for decision-making or legislative data, and Open 311, an international effort to standardize information about the status of municipal government service requests, such as filling potholes, trimming trees or fixing streetlights. All of these projects have been highly useful for the open government community.

    FIRST STEPS TOWARDS STANDARDIZATION

    The growing demand for greater standardization — both among civil society organizations and, increasingly, governments — speaks to the utility and value of these efforts. For civil society and the public, standardized data simplifies the process of analyzing and scrutinizing government data, which can vastly improve the public's understanding of government decision-making processes. For governments, standardized decision-making data can cut costs, improve internal knowledge management practices, and encourage information sharing and collaboration across municipalities or governments. Standardized decision-making data may also promote increased collaboration and interaction between civil society and government. Given the potential value of expanding the coverage of a standard API for decision-making data, our session in Berlin focused primarily on one question: how can the open government community better support efforts toward convergence with respect to standardization of APIs for government open decision-making?

    It was agreed that the first step in further developing and spreading an API for government open decision-making data is to collect use cases and needs that individuals, organizations, and governments may have, as well as to do a more comprehensive mapping of the APIs, data formats, and tools that are currently being used. This is an important first step, and the initial coalition that was formed during the OK Fest session aims to work collaboratively to advance this process over the coming months.

    The mapping would also help to assess how additional efforts toward standardization of APIs for government open decision-making data would build on and complement existing initiatives. There was broad consensus that demonstrating to governments how standardized decision data could simplify their work, cut costs, and generate mutually beneficial partnerships with civil society would help drive interest and buy-in. There was also agreement that, in addition to engaging the civic tech and government tech communities, it may also be helpful to engage more traditional international organizations that can play a role in credentialing and disseminating information on standards, such as the Inter-Parliamentary Union (IPU), the international association of the world's national parliaments, and United Cities and Local Governments, the organization that seeks to be the voice of and an advocate for democratic local self-government. In general, we found an emerging international consensus that government and parliamentary information should be shared in open and structured formats, a prerequisite for functional APIs. As stated by the IPU, “open document standards, such as XML, should be used to prepare proposed legislation and other parliamentary documentation. Eventually all documentation and media should be made available using open standards.” If a standardized API gains value by being spread widely, then considering effective strategies for proliferation early in the process will be important. Such organizations would also likely be engaged in decision-making bodies that help to update any emerging global norms that include their membership.

    There are, of course, multiple questions that will need to be grappled with as this effort progresses. For example, there are questions about the definition and boundaries of government open decision-making data. There are wide varieties of decision-making processes in different types and levels of democratic governments around the world, and it will be important to assess how effectively this diversity can be reflected in a shared structure for government open decision-making data. It is also recognized that this work is highly political; therefore, we welcome any interested individuals to participate in this project. In the coming months, the organizers of the session look forward to helping continue the conversation in moving toward shared principles or standards for open government decision-making APIs.

    WHAT'S NEXT?

    While it is still early days, it was clear from the discussion that there was a great deal of interest in how the open government community can collaborate to better support the efforts of governments to become more transparent and open to the citizens they serve, as well as supporting the efforts of citizens to have opportunities to become better informed and more engaged in decisions that affect their lives.

    Compile related projects

    Do you know of existing projects or initiatives that are related? Please add relevant links to the list here or tweet them with #opendecisions.

    Join the discussion

    We look forward to continuing this conversation. Currently, the discussion is happening on several international email lists, such as the W3C Open Government list, Open Civic Data list, Poplus list, Akoma Ntoso list, Open Knowledge list, and the OpeningParliament.org list. We decided not to create yet another list, but encourage people to join existing ones. The topic will also be taken forward in related future events, such as Global Legislative Openness Week.

    Contact us

    The initial coalition will catch up via Skype later this autumn. If you wish to join, please contact us at [email protected].

  • OKFestival 2014 Stories: Three things I learned at the 2014 Open Knowledge Festival

    This blog post is written by Tariq Khokhar, Data Scientist and Open Data Evangelist at The World Bank, and is cross-posted from the World Bank Data Blog.


    I was lucky to be in Berlin with some colleagues earlier this month for the 2014 Open Knowledge Festival and associated fringe events.

    There's really too much to distill into a short post – from Neelie Kroes, the European Commissioner for Digital Agenda, making the case for “Embracing the open opportunity” to Patrick Alley’s breathtaking accounts of how Global Witness uses information to expose crime and corruption in countries around the world.

    A few things really stuck with me, though, from the dozens of great sessions throughout the week; here they are:

    1) Open data needs users and long-term commitment from governments.

    The “Nos Ecoles, Nos Donnees” Application in Burkina Faso

    The Partnership for Open Data hosted a fantastic session highlighting examples of open data in action in low and middle income countries.

    Tanzania

    Joachim Mangilimai, a School of Data Fellow from Tanzania, showcased a Swahili mobile app he'd developed to support decision making by medical staff. The app was based on guidelines published by The Population Council and built using the Open Data Kit framework. He also highlighted Shule.info, a project by Twaweza that compiles and visualizes government data on school performance, which parents can use to stay better informed.

    Burkina Faso

    Malick Tapsoba, the technical manager of the Open Data Burkina Faso team, highlighted the difficulties they overcame in launching their open data portal in a low-capacity, low-connectivity environment, and how the next big challenge is to nurture a community of data users. They'd also built a great school information app called “Our Schools, Our Data” that offers gender-disaggregated data on school performance. They've done an impressive job of kick-starting their initiative in a difficult environment.

    Mexico and The Philippines

    We also heard from Ania Calderón of the Mexican government on their “Data Squads” program, which provides rapid support to different government agencies to publish high-quality data to the national open data portal. Finally, Happy Feraran, who created the Bantay corruption-reporting platform in the Philippines, emphasised the importance of mobilizing the community.

    Lessons learned: There are some great open data initiatives around the world, and two common themes are the need for a strong community of technologically literate data re-users, and the sustained effort needed within governments to change how they create, manage and publish data in the long term. Tim Davies has also shared “15 open data insights” from the Open Data Research Network, and you can read takeaways from the event by the ODI's Liz Carolan here.

    2) Spreadsheets are code, and you can unit test data

    A Turing Machine implemented in Excel

    Jeni Tennison has declared 2014 the year of the CSV, and the fringe event csv,conf was the most informative conference I've been to in a long time. With over 30 speakers on technically specialised topics to do with the creation, management and application of (mostly) tabular data, there was again too much to choose from, but my highlights were on “Treating spreadsheets as code” and “Unit testing for tabular data”.

    Spreadsheets are code

    Felienne Hermans, who heads The Spreadsheet Lab (I'm not kidding) at Delft University, asked that if we remember one thing from her talk, it be that “spreadsheets are code”. She thinks we should treat them as such and use software engineering approaches like tests, refactoring, and designing for maintainability. She casually demonstrated that Excel is “Turing complete” and just as powerful as any other programming language, by using it to build a Turing Machine (see picture above), and highlighted some tools that can help to improve the quality of spreadsheet applications.

    The first tool is Bumblebee, which Felienne developed for optimizing spreadsheet formulas. It can do a lot, but think of it as automatically replacing things like “SUM(F3:F7)/COUNT(F3:F7)” with the simpler “AVERAGE(F3:F7)”, plus other user-defined or automatic transformations. She discussed another tool (which I now forget the name of) that helps with formula testing and, at the end of her talk, mentioned the (commercial) service spreadgit, which brings cloud-based git-like revision management to Excel.
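
    The flavour of such a rewrite is easy to sketch; here is a toy version of the SUM/COUNT-to-AVERAGE transformation using a regular expression (nothing like Bumblebee's actual engine, just the idea):

        # Toy version of one Bumblebee-style rewrite: replace
        # SUM(range)/COUNT(range) over the same range with AVERAGE(range).
        # A sketch of the idea, not how Bumblebee itself is implemented.
        import re

        PATTERN = re.compile(r"SUM\(([A-Z]+\d+:[A-Z]+\d+)\)\s*/\s*COUNT\(\1\)")

        def simplify(formula: str) -> str:
            return PATTERN.sub(r"AVERAGE(\1)", formula)

        print(simplify("=SUM(F3:F7)/COUNT(F3:F7)"))  # => =AVERAGE(F3:F7)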

    She noted that “Like democracy, spreadsheets are the worst, except for all others”, and her “Programming and data science for the 99%” course recognizes that Excel (and open alternatives like LibreOffice and OpenOffice) are going to be the main way most people do data analysis for the foreseeable future, so we should encourage people to adopt some good software engineering habits when coding spreadsheets.

    Programmatically testing data

    Karthik Ram, co-founder of the awesome rOpenSci and a scientist at Berkeley, shared some promising work they've been doing on the testdat R package. In short, it will let you programmatically test for and correct errors like outliers, text formatting problems and invalid values in datasets. It's still in development, but you can get an idea from Karthik's slides.
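
    testdat itself is an R package, but the idea translates directly to other stacks; here is a rough pandas equivalent of the kinds of checks described, with the column names and plausibility bounds invented for the example:

        # Rough pandas equivalent of the kinds of checks testdat targets:
        # text formatting problems and invalid values. The column names
        # and plausibility bounds are invented for this example.
        import pandas as pd

        df = pd.DataFrame({
            "city": ["Berlin", " Berlin", "Dar es Salaam"],
            "pm25": [12.0, 11.5, 900.0],  # last reading looks suspect
        })

        # Text formatting problem: untrimmed whitespace in labels.
        print(df[df["city"] != df["city"].str.strip()])

        # Invalid values: readings outside a plausible range.
        print(df[~df["pm25"].between(0, 500)])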

    The old and new approaches to science by Karthik Ram.

    He ended with a useful reminder of the changing norms the “open science” movement is creating – where once the research paper was the principal output of a scientist, it's increasingly accepted that the code, the data and separate elements of the narrative of a scientific study will all be public and available for re-use.

    Lessons learned: Spreadsheets are code, and we can adopt some software engineering practices to make much better use of them. There are a number of powerful tools and approaches to data handling being pioneered by the scientific community (e.g. Hadley Wickham just announced the tidyr tool for data cleaning in R), and those of us working in other fields can adopt and emulate many of them.

    3) The future of civic tech (probably) lies in re-usable software components


    I had a chat with the always thoughtful Tom Steinberg of mySociety just before the “Can Open Data Go Wrong?” session, and Tom told me about one way he thinks open data can go right: Poplus.

    To use their own words, Poplus is an “open federation of people and organisations from many different countries” with a “joint mission to share knowledge and technology that can help us to help citizens”. The primary resource they've got at the moment is Poplus Components, which you can think of as building blocks for more complex civic applications.

    The current components are:

    Represent Boundaries – a web API onto geographic areas like electoral districts

    SayIt – a service to store and retrieve written transcripts of public statements

    MapIt – a service that finds out which administrative area covers a specific point

    WriteIt – a service to write and send messages to public persons

    PopIt – a tool to keep lists of politicians and associated biographic information

    BillIt – a flexible document storage tool
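
    To give a sense of how thin the glue code on top of a component can be, here is a short sketch asking mySociety's hosted MapIt which administrative areas cover a point; the URL pattern follows the public instance (usage limits may apply), and the response handling assumes its JSON layout:

        # Ask mySociety's hosted MapIt which administrative areas cover
        # a point (longitude, latitude in SRID 4326). The URL pattern
        # follows the public instance; usage limits may apply.
        import requests

        lon, lat = -0.1277, 51.5074  # central London
        url = f"https://mapit.mysociety.org/point/4326/{lon},{lat}"

        areas = requests.get(url).json()
        for area in areas.values():
            print(area["name"], "-", area["type_name"])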

    Why re-usable software components and not re-usable apps?

    So for example, could this app built to visualize secondary school performance in the United Kingdom be re-purposed to work in Tanzania or Burkina Faso? Maybe, but probably not. Why? Because the context is different enough that the UK-based app, like many others, just doesn't quite translate to work in other countries, so it's just easier to build a new app designed for the local context.

    This is why Poplus components are great – they abstract out functional elements of civic applications and make them easy to combine and build a more complex service on top of. Nerdier readers will remember Robert Glass's “rule of three”, which states it's three times as difficult to build reusable components as single-use components. I think Poplus understands this, and the components are carefully curated and already being re-used and combined around the world.

    Lessons learned: Open data fundamentally needs open source software. App reuse often doesn't happen because contexts are too different. Reusable software components can reduce the development overhead for creating locally customized civic software applications, and a pool of high-quality civic software components is a valuable public good worth contributing to.

    Finally, a big thank you to the organizers of OKFest and csv,conf for hosting such great gatherings. Were you at #okfest14 or #csvconf – what did you learn?

  • OKFestival 2014 Stories: Open Culture at the 2014 Open Knowledge Festival

    This post by Meredith Holmgren, Principal Investigator & Project Manager – Intangible Cultural Heritage, originally appeared on the Smithsonian Center for Folklife and Cultural Heritage's Talk Story blog.

    This year's Open Knowledge Festival (OKFest) brought together over one thousand participants to share their work on transparency and open access to government data. Taking place July 15 to 17 in Berlin, Germany, the festival included a wide range of panel topics, from development sector analytics and election monitoring tools to storytelling and cultural heritage policy. Opting for the dynamic framework of a festival rather than that of a conventional conference, the organizers complemented prepared panel presentations with a wide range of participatory activities, including a robust “unconference” program, workshops, performances, skill sharing, and hack-a-thons.

    OKFest 2014 took place in the Kulturbrauerei—a converted brewery that now serves as a complex of arts and culture venues. Photo by Meredith Holmgren

    One thematic thread that ran through the event was open cultural data, the principles of which foster free use of, and unrestricted public access to, cultural assets stewarded by cultural heritage institutions, such as galleries, libraries, archives, and museums (a.k.a. GLAMs). With increasing demand for digital cultural assets across the world, open cultural heritage projects, such as those initiated by the Getty, the Walters Art Museum, the Rijksmuseum, and Europeana, have gained significant traction in recent years and garnered quite a bit of attention from researchers and media outlets. With momentum building both within and outside of cultural institutions to make cultural assets more digitally accessible, and just as much debate about the merits of declaring assets public domain works, I looked forward to learning more about the open culture community and ongoing collaborations between cultural heritage institutions and open culture professionals from around the world.

    Indeed, the open cultural heritage events at OKFest did not disappoint. The program started with a pre-festival workshop at the Wikimedia Deutschland offices, titled “Open Data in Cultural Heritage”. Around fifty participants from across the world gathered to present their work and discuss ongoing activities. While much of the content focused on advances in the German cultural context (e.g. Wikimedia Deutschland, Deutsche Digitale Bibliothek, Museum für Naturkunde), presenters who hailed from Finland, the Netherlands, and Switzerland, in addition to Germany, shared their experiences with cultural heritage initiatives. Hearing their case studies—though exclusively from the European continent, where public support for this work is arguably the strongest—provided a unique comparative perspective on current developments in the field, ongoing pilot projects, policy debates, and challenges encountered by a variety of open culture professionals…

    Read more on the Smithsonian Center for Folklife and Cultural Heritage's Talk Story blog.