by Tim Estes
Developing a social network map is fundamental to comprehensively understanding a person. Social networks are dynamic, and are better derived from real-world data than from static configurations. However, the vast majority of this real-world data is unstructured. This presentation will show how Synthesys uses very large-scale unstructured data to create social network maps for reporting and further analysis.
A discussion of Big Data approaches to analysis problems in marketing, forecasting, academia and enterprise computing. We focus on practices to enhance collaboration and employ rich statistical methods: a Magnetic, Agile and Deep (MAD) approach to analytics. While the approach is language-agnostic, we show that sophisticated statistics can be easily scaled in traditional environments like SQL.
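To make the "statistics in SQL" point concrete, here is a small, hypothetical sketch (table name and data are invented) that pushes a per-group mean and variance into plain SQL aggregates, using Python's built-in sqlite3 module as a stand-in for a real warehouse:

```python
import sqlite3

# Hypothetical in-memory table standing in for a warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 10.0), ("east", 14.0), ("west", 8.0), ("west", 12.0)],
)

# Push the statistics into SQL: per-region mean and population variance,
# computed as E[X^2] - E[X]^2 using only standard aggregates.
rows = conn.execute(
    """
    SELECT region,
           AVG(amount)                                      AS mean,
           AVG(amount * amount) - AVG(amount) * AVG(amount) AS variance
    FROM sales
    GROUP BY region
    ORDER BY region
    """
).fetchall()

for region, mean, variance in rows:
    print(region, mean, variance)
```

The point of the sketch is that the statistic travels to the data, not the other way around: no rows ever leave the database.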
Birds of a Feather (BoF) sessions provide face to face exposure to those interested in the same projects and concepts. BoFs can be organized for individual projects or broader topics (best practices, open data, standards). BoF topics are entirely up to you. Wednesday's Lunchtime BoF sessions will happen on the hotel side of the Hyatt Regency, Mezzanine Level.
by Peter Jackson
Our talk summarizes some recent thinking in the field of vertical search and illustrates it in the context of a new version of Westlaw, called WestlawNext. We argue that getting the right allocation of function between person and machine is the key to making specialist content more findable and search results more understandable.
Data modeling competitions allow companies and researchers to post a problem and have it scrutinised by the world's best data scientists. By exposing a problem to a wide audience, competitions are a great way to get the most out of a dataset. In just a few months, Kaggle's competitions have helped to progress the state of the art in chess ratings and HIV research.
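For context on the chess-ratings example: the classical baseline that such competitions set out to beat is the Elo system, in which a player's rating moves toward their observed results. A minimal sketch of one Elo update (the K-factor of 32 is a common but arbitrary choice):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update for player A: score_a is 1 for a win,
    0.5 for a draw, 0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a)

# Equal ratings, A wins: A gains half the K-factor.
print(elo_update(1500, 1500, 1.0))  # → 1516.0
```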
If you're a new startup looking for investment, or a team at a large company seeking the green light for a new product, nothing convinces like real running code. But how do you solve the chicken-and-egg problem of filling your early prototype with real data? We'll discuss how to use open datasets and public web APIs as a proxy for the final product while you're still in the development stage.
by Kim Rees
While most charts are designed to handle a variety of data, there is a certain novelty in presenting data very succinctly. By designing a presentation method restricted to specific data points, we can achieve an economy of space and interface.
How do you build a crack team of data scientists on a shoestring budget? In this 40-minute presentation from the co-founder of Infochimps, Flip Kromer will draw from his experiences as a teacher and his vast programming and data experience to share lessons learned in building a team of smart, enthusiastic hires.
by Pete Soderling and Pete Forde
The state of open data today is a real mess. It's very difficult to find the data you need and be confident that it's timely and accurate. There is a growing list of companies now vying to become the key destinations for people to gather around new datasets and be excited together. What projects, partnerships, and even ventures would be created if there were a marketplace for data?
Information is changing healthcare forever. From the study of epidemics, to machine learning that can improve diagnosis, to the sequencing of the human genome, we're doing the math of life itself. This panel of practitioners will show us what they're doing in healthcare, pharmaceuticals, and genomics, and how it will change the way we discover, treat, and eliminate disease.
by Davin Potts
This session explores how to get more done, faster, with high-performance Map/Reduce, and how to expand the universe of Hadoop possibilities with tools that speed and simplify the development and deployment of analytic applications.
by Sudhir Hasbe and Bruno Aziza
Windows Azure Marketplace includes data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources. Customers have access to datasets such as demographic, environmental, financial, retail, weather and sports.
Certain recent academic developments in large data have immediate and sweeping applications in industry. They offer forward-thinking businesses the opportunity to achieve technical competitive advantages. However, these little-known techniques have not been discussed outside academia, until now. What if you knew about important new large-data techniques that your competition doesn't yet know about?
Can machines help us make better decisions? In this panel, real-world practitioners from the travel, finance, and energy industries give us an inside look at how they're applying machine learning in their fields, optimizing the use of resources and helping with decision support.
"Many hands make light work", as the saying goes. That's true when thousands of people can collaborate on a data set. In this session, we'll look at collective interfaces that allow many distributed users to examine and share data with one another, and how that's changing traditional desktop visualization tools.
Many of the tools Google created to store, query, analyze, and visualize data are exposed to external developers. This talk will give you an overview of Google services for data crunchers: Google Storage for Developers, BigQuery, the Prediction API, App Engine, and the Visualization API.
Join practitioners from a range of industries to learn how they're putting new tools and massive data sets to work. We'll hear how music, geophysics, and the legal system are all being changed by putting huge, rich information into the hands of business.
by Johan Schleier-Smith
Social media websites are producing enormous amounts of data and creating massive demand for insight into users: how they engage with features, where they are coming from, why they are visiting, what excites them, and so forth.
by Joshua Martell
The world's available scientific and factual data is growing at an alarming pace, but how do we use all this information? How do we incorporate it into our decision-making process? Joshua Martell will give an inside look into how Wolfram|Alpha works and what it takes to make data "computable", understand user input, and present meaningful results.
Join us in the Sponsor Pavilion immediately following sessions on Wednesday, February 2. Have a drink and some delectable nibbles, network with other Strata attendees, and visit our Sponsors who are at the leading edge of the data conversation.
As part of Strata, we'll be holding a Science Fair. It's a place to demonstrate cutting-edge technologies and cool toys — the more hands-on, the better. Whether it's software that breaks the rules of computing, a compelling new interface, or a prototype that pushes the envelope, we want to see it.
by Sam Shah
How do you go about building a product around data using Hadoop? This talk will present how LinkedIn builds and maintains such features as People You May Know. We will present our architecture for doing so (open-sourced) as well as knowledge we've gained in the process.
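As a conceptual illustration only (this is not LinkedIn's code, and the graph below is invented), a feature like People You May Know can be sketched as triangle closing: rank people you are not connected to by how many connections you share with them.

```python
from collections import Counter

def people_you_may_know(graph, user, top_n=3):
    """Rank non-connections by number of mutual connections (triangle closing).

    `graph` maps each user to a set of direct connections. This is a
    single-machine sketch; at scale the same common-neighbor counting
    is typically re-expressed as Hadoop MapReduce jobs.
    """
    candidates = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:
            if fof != user and fof not in graph[user]:
                candidates[fof] += 1  # one more shared connection
    return [name for name, _ in candidates.most_common(top_n)]

# Invented example graph: ann shares two mutuals with dan, one with eve.
graph = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "cat", "dan"},
    "cat": {"ann", "bob", "dan", "eve"},
    "dan": {"bob", "cat"},
    "eve": {"cat"},
}
print(people_you_may_know(graph, "ann"))  # → ['dan', 'eve']
```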
by Benoit Sigoure
OpenTSDB is an open-source, distributed time series database designed to monitor large clusters of commodity machines at an unprecedented level of granularity. OpenTSDB allows operation teams to keep track of all the metrics exposed by operating systems, applications and network equipment, and makes the data easily accessible.
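For a flavor of the interface: a TSD accepts data points as plain-text `put` lines over its telnet-style port (4242 by default), in the form `put <metric> <timestamp> <value> <tag=value ...>`. The sketch below only formats such a line; the metric name and tags are invented, and the actual socket write is omitted:

```python
import time

def tsdb_put_line(metric, value, tags, timestamp=None):
    """Format one data point in OpenTSDB's line-based `put` protocol.

    Example target line: `put sys.cpu.user 1297574486 42 host=web01`.
    Sending it to a TSD over TCP is left out of this sketch.
    """
    ts = int(timestamp if timestamp is not None else time.time())
    # Sort tags so the same data point always serializes identically.
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {ts} {value} {tag_str}"

print(tsdb_put_line("sys.cpu.user", 42, {"host": "web01"}, timestamp=1297574486))
# → put sys.cpu.user 1297574486 42 host=web01
```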
by Rod Cope
Hadoop and HBase make it easy to store terabytes of data, but how do you scale your search mechanism to sift through these mountains of bits and retrieve large result sets in a matter of milliseconds? Careful use of the Solr search server, built on Lucene, met these requirements in our production environment. Come learn how we query terabytes of data in a highly available system.
This talk demonstrates how an eclectic blend of storage, analysis, and visualization techniques can be used to gain serious insight from Twitter data, but also to answer fun questions such as "What do Justin Bieber and the Tea Party have (and not have) in common?"
by Doug Cutting
Apache Avro provides an expressive, efficient standard for representing large data sets. Avro data is programming-language neutral and MapReduce-friendly. It may eventually replace gzipped CSV-like formats as a dominant format for data.
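For illustration, Avro schemas are themselves JSON documents, so the structure that a CSV header only implies is made explicit and machine-readable. A hypothetical record schema for a page-view event might look like:

```json
{
  "type": "record",
  "name": "PageView",
  "namespace": "example.avro",
  "fields": [
    {"name": "url",       "type": "string"},
    {"name": "timestamp", "type": "long"},
    {"name": "referrer",  "type": ["null", "string"], "default": null}
  ]
}
```

The union type on `referrer` shows one advantage over CSV: optional fields are declared, not guessed at parse time.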
With thousands of datapoints per second from nodes around the world, how can you tell when something isn't right? The bottom line is: it's hard, but with the right tools it is achievable.
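One deliberately simple baseline for "is something not right?" is a rolling z-score over recent points. The sketch below (the window size, warm-up length, and threshold are arbitrary choices, and the data is invented) flags points that sit far from the recent mean:

```python
from collections import deque
from math import sqrt

def zscore_alerts(stream, window=50, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the last `window` points."""
    recent = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(stream):
        if len(recent) >= 10:  # wait for a minimal baseline
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                alerts.append(i)
        recent.append(x)
    return alerts

# A noisy-but-steady signal with one spike at index 20:
data = [9.0, 11.0] * 10 + [50.0] + [9.0, 11.0] * 10
print(zscore_alerts(data))  # → [20]
```

Real monitoring pipelines layer seasonality, trend, and per-metric tuning on top of ideas like this, which is exactly why "it's hard" in practice.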
Riak Core is a general implementation of a distributed systems model, enabling you to build a customized, scalable, highly available distributed system without a huge investment. Justin will explain that model, its history, and how it can be used to build new data processing systems.
by Isabel Drost
With growing amounts of digital data at software developers' fingertips, the need for a scalable, easy-to-use framework is tremendous. This talk introduces Apache Mahout, a project with the goal of implementing scalable machine learning algorithms for the masses.
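To give a flavor of the algorithms involved (this is plain Python, not Mahout's API): k-means clustering is one of the algorithms Mahout scales, by re-expressing its two steps as MapReduce jobs, assigning points to their nearest centroid (map) and recomputing each centroid as the mean of its points (reduce). A tiny single-machine version on 1-D data:

```python
def kmeans(points, centroids, iterations=10):
    """Plain k-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    for _ in range(iterations):
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep an empty cluster's centroid where it was.
        centroids = [
            sum(pts) / len(pts) if pts else centroids[i]
            for i, pts in clusters.items()
        ]
    return centroids

print(kmeans([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], centroids=[0.0, 5.0]))
# → [2.0, 11.0]
```

Because each map step only needs the current centroids and each reduce step only needs sums and counts, the same loop distributes naturally over a Hadoop cluster.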
1st–3rd February 2011