Wednesday 24th October, 2012
11:40am to 12:20pm
Hadoop is commonly used for batch processing of large volumes of data. While many of the necessary building blocks for data processing exist within the Hadoop ecosystem – HDFS, MapReduce, HBase, Hive, Pig, Oozie, and so on – it can be a challenge to assemble and operationalize them as a production ETL platform. This presentation covers one approach to data ingest, organization, format selection, process orchestration, and external system integration, based on collective experience acquired across many production Hadoop deployments.
Engineer @cloudera, #flume committer; distributed systems, data, Hadoop. Author of Hadoop Operations from O'Reilly. (Bio from Twitter.)