by Tom Hanlon
Hadoop gives you the ability to process massive amounts of data at scale. This presentation will show you how Hadoop makes use of commodity hardware to build a system that scales, deals gracefully with the failure of individual nodes, and gives you the power of MapReduce to process petabytes of data.
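The MapReduce model the talk refers to can be sketched in a few lines. The following is a minimal local simulation in Python (the function names and structure are illustrative, not Hadoop's actual Java API) showing the map, shuffle, and reduce phases for a word count:

```python
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Sum the counts for one word.
    return key, sum(values)

def word_count(documents):
    # In Hadoop, maps and reduces run in parallel across many nodes;
    # here everything runs in one process for illustration.
    pairs = (pair for doc in documents for pair in map_phase(doc))
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
```

Because the map and reduce steps are independent per key, the framework can spread them across commodity machines and rerun them on another node if one fails, which is the fault-tolerance property the abstract mentions.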
It’s easy to find and create data. But what are you going to do with it? Can you ask the world complex questions, such as the local crime rate, the distance to the metro, or the rating of your local school? Can you combine these to rate houses you may want to buy? And how do you then connect back to your government and local businesses to engage in collaborative decision making?
This talk will discuss how you should consider users and their personal interactions with data and information. We’ll also peel back the covers on how open source tools such as HBase, Cascading, Geos and Polymaps handle analyzing and streaming real-time data to maps and visualizations, both on the web and on mobile devices.
To illustrate what’s possible, we’ll dive into GeoCommons, a large online community for data sharing and community analytics that uses open source mapping visualization, Hadoop analysis, and mobile interfaces to provide these capabilities to the world. Users can even build and share their own analysis methods, spreading their expert knowledge to other users. We’ll also review how global organizations like the World Bank and the United Nations are using these tools to connect with citizens in developing countries, empowering them to make decisions on building investment and to understand how climate change may affect their areas.
Adding security to an existing product is never easy, but our team at Yahoo added strong authentication to Apache Hadoop by integrating it with Kerberos. This project was delivered on time and is currently deployed on all of Yahoo's 40,000 Hadoop computers. Come learn how we added security to Hadoop and why it matters.
In this session Dell will discuss the analysis of data types suitable for transfer between Hadoop and the EDW, the EDW/Hadoop data lifecycle, data governance between Hadoop and the DBMS, and ETL performance tuning and best practices (e.g. Hadoop/DBMS connectors, node and network designs, etc.).
Brisk is an open-source Hadoop and Hive distribution that uses Cassandra for its core services. Brisk provides integrated Hadoop MapReduce, Hive, and job and task tracking, while providing an HDFS-compatible storage layer powered by Cassandra. By shortening the time between data creation and analysis with DataStax's Brisk, users experience greater reliability, simpler deployment and lower TCO.
by Greg Fodor
The data & analytics teams at Etsy build up and tear down more than a thousand independent Hadoop clusters on EC2 each month. This talk discusses the benefits of this approach, where Elastic MapReduce serves as a "meta-cluster" in which on-demand Hadoop clusters can be created, used, and shut down quickly and easily.
YARN is the next generation of Hadoop MapReduce, designed to scale out much further while allowing applications other than pure MapReduce to run in a highly fault-tolerant manner.
An overview of the state of the art for bringing together the analytical power of the R language with the big data capabilities of Hadoop.
This talk introduces an open-source SQL-based system for continuous or ad-hoc analysis of streaming data built on top of Flume-based data collection for Hadoop. Attendees will understand how to use a new tool to extend their Hadoop data collection pipeline with real-time streaming analytics.
by Tom White
Apache Whirr is a way to run distributed systems - such as Hadoop, HBase, Cassandra, and ZooKeeper - in the cloud. Whirr provides a simple API for starting and stopping clusters for evaluation, test, or production purposes. This talk explains Whirr's architecture and shows how to use it.
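The "simple API" the abstract mentions can also be driven from a declarative recipe. As a rough illustration, a Whirr cluster definition is a small properties file (property names per the Whirr quick-start docs; the cluster name and credential placeholders here are invented):

```properties
# hadoop.properties -- a minimal Whirr recipe (illustrative values)
whirr.cluster-name=myhadoopcluster
# One master node and one worker node:
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
```

A cluster defined this way is started with `whirr launch-cluster --config hadoop.properties` and torn down with `whirr destroy-cluster --config hadoop.properties`, which is what makes Whirr convenient for short-lived evaluation and test clusters.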
by Erik Onnen
This talk will cover lessons learned in building Urban Airship's large-scale data warehouse in EC2 including PostgreSQL, Kafka, Cassandra, HBase and Hadoop.
This hands-on tutorial teaches the basics of the important machine learning algorithms in Mahout, and aims to help you get Mahout up and running on a Hadoop cluster. Mahout is an open-source implementation of a collection of algorithms designed from the ground up to sift through terabytes of data and help surface important patterns that are otherwise beyond the reach of standard tools.
25th–27th July 2011