Sessions at OpenTech 2010 about Open Data on Saturday 11th September

  • 1B: Mozilla Drumbeat

    by Gervase Markham

    Will the internet still be open in 50 years? The web is healthy now, providing raw material for new kinds of innovation, creativity, wealth and democracy. Yet there are many who see this as a threat, and would neuter or dumb down the net. The Mozilla community believes we can - and must - keep the web open. That's why we build Firefox. It's also why we're starting Drumbeat, an invitation to teachers, artists, lawyers, filmmakers and other everyday internet users to do things that will make the web better, and keep it open for the long haul. Ultimately, our goal is a strong, safe, open internet: an internet built and backed by a massive global community committed to the idea that everyone should be able to freely create, innovate and express ideas online without asking permission from others. This talk will explain how we're going to get from here to there, and how you can be part of it.

    Also in this slot:

    Where now for open video? with visionOntv - Hamish Campbell

    Open Data & The Rewards of Failure - Chris Taggart

    At 10:45am to 11:40am, Saturday 11th September

  • 1B: Open Data & The Rewards of Failure

    by Chris Taggart

    At the moment the public sector is incentivised to do big, slow projects which get a lot of launch publicity but whose almost inevitable failure doesn't harm the originators. This is the current reward of failure -- big projects, big failures, big payoffs. Small, innovative projects, however, bring no kudos and no increased power, and any failure is associated with the originators. How do we change that? Open data. This presentation explains the current incentives, and shows how open data in the public sector is not just good for transparency, engagement and efficiency, but is a crucial part of changing those incentives: rewarding success, and encouraging small projects where any failure is a step towards success (à la Edison).

    At 10:45am to 11:40am, Saturday 11th September

    Coverage: slide deck

  • 1C: data.gov.uk - process and properties

    by Jeni Tennison and Richard Stirling

    The data.gov.uk project grew out of the consultancy from Sir Tim Berners-Lee and Professor Nigel Shadbolt to Gordon Brown and his Digital Engagement Team. Richard Stirling helped spearhead the project, which had its beta launch in January 2010. Working closely with the Central Office of Information and the Office for National Statistics, Richard has brought data.gov.uk from concept through to a successfully launched public website, with hundreds of new open datasets being added weekly. Richard's talk will cover getting the project off the ground, the highs and lows, and the logistics surrounding such a high-profile piece of work, both from the perspective of the Cabinet and of the public. Richard will also discuss the future of the project and the lessons learned from it. This is the session in which to talk about why (or why not) RDF, XML, Linked Data, web services and so on.

    Also in this slot: legislation.gov.uk - built on the API - John Sheridan

    At 10:45am to 11:40am, Saturday 11th September

  • 2A: LinkedGov: Filling in the Gaps

    by Hadley Beeman

    The data's not enough; we need stuff around it. Government datasets are often published with budget codes, cost centre codes, and other quirks in need of explanation -- but without that explanation. The answers live with the civil servants, local government officers, and statisticians who work with those datasets. This project will set up a system for those with the answers to easily add this metadata to published government datasets. We will then make those improved datasets searchable, providing APIs for apps and visualisations, and a simple web-based search tool that should be an everyday resource for those in government, research, education, and community activities, or anyone curious about their country. This volunteer-led project aims to raise the profile of government open data and transparency by: improving the quality of available data; helping citizens to access the information they want; building a search tool which is useful for those in government, encouraging them to publish high-quality data; and involving many people from the developer, government, research, education, academic and volunteer communities.

    Also in this slot:

    Open Data in Clinical Trials - Ben Goldacre and Louise Crow

    Rewiring the State - Emma Mulqueeny

    At 11:40am to 12:30pm, Saturday 11th September

    Coverage: slide deck

  • 2A: Open Data in Clinical Trials

    by Louise Crow and Ben Goldacre

    Pharmaceutical companies running clinical trials are increasingly being required to register them in one or more databases. Once the trials are finished, the results ought to be published in medical journals, but there is little incentive to publish negative data, for example from trials that were terminated early due to poor drug performance or dangerous side effects. These results may quietly disappear, leading to unrealistic estimates of the effectiveness of drugs and poorer medical decisions. This talk introduces a project headed by Ben Goldacre, currently at the prototype stage, aiming to encourage publication of clinical trial data by combining information from the different trial repositories in one publicly accessible website, cross-checked against publications. This would allow the identification of trials with no published results, making these absent data points publicly visible.

    At 11:40am to 12:30pm, Saturday 11th September

    Coverage: slide deck

  • 4A: data.gov.uk and friends - applications and impacts

    by UK Open Public Data

    What's been done with open public data, what's better in the world as a result, and what still needs your help. This session is about the data that is available, what is done with it, and what else could be freed up. For process/format discussion, see the "process and properties" session in the morning.

    At 2:30pm to 3:30pm, Saturday 11th September

  • 6B: Scraperwiki

    by Julian Todd and Aidan McGuire

    ScraperWiki is a cloud-based programming environment that allows developers to write, store, maintain and 'play' scrapers directly on the net. ScraperWiki has also created a marketplace in which people who need data can set a 'bounty' for that data, and developers can write a scraper to earn the bounty. In addition, ScraperWiki offers APIs that commercial organisations can use for a recurring fee.

    Also in this slot:

    Putting the Cuts in Context - Lisa Evans

    Who's lobbying? - Rob McKinnon

    At 5:00pm to 6:00pm, Saturday 11th September
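
The ScraperWiki session above describes scrapers: programs that pull structured data out of web pages and keep it in a datastore. As a rough illustration only (this is generic standard-library Python, not ScraperWiki's own library; the page content and table schema are invented for the example), a minimal scraper might look like:

```python
import sqlite3
from html.parser import HTMLParser

# A hypothetical page of the kind a scraper might fetch; in a real
# scraper this HTML would come from an HTTP request to a live site.
PAGE = """
<ul>
  <li data-id="1">Alpha</li>
  <li data-id="2">Beta</li>
</ul>
"""

class ItemParser(HTMLParser):
    """Collect (id, name) pairs from <li data-id="..."> elements."""
    def __init__(self):
        super().__init__()
        self.items = []
        self._current_id = None

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._current_id = dict(attrs).get("data-id")

    def handle_data(self, data):
        if self._current_id is not None and data.strip():
            self.items.append((self._current_id, data.strip()))
            self._current_id = None

parser = ItemParser()
parser.feed(PAGE)

# ScraperWiki pairs each scraper with a datastore; an in-memory
# SQLite table stands in for that idea here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id TEXT, name TEXT)")
db.executemany("INSERT INTO items VALUES (?, ?)", parser.items)
rows = db.execute("SELECT id, name FROM items ORDER BY id").fetchall()
print(rows)  # [('1', 'Alpha'), ('2', 'Beta')]
```

A hosted service like the one described would additionally schedule such scrapers to run regularly and persist the resulting tables between runs, so the extracted data accumulates over time.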