Adding Flume, Kafka and Camus to your Log Processing

A session at Big Data TechCon Chicago 2015

  • Michael Keane

After migrating from proprietary processes to Apache Flume, new requirements drove Conversant engineers to insert Kafka into the log flow. In this session you'll learn how Conversant transfers in excess of 100B log lines per day from multiple data centers using a combination of Apache Flume, Kafka, Camus, and Hadoop. We'll discuss how and why your organization may need to extend the Camus API. The session closes with a summary of Conversant's lessons learned, the additional benefits gained along the way, and how adding Kafka and Camus to its log processing flow has positioned Conversant to make better, faster use of its data.
Level: Advanced

Tracks: Hadoop, Spark
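To put the abstract's headline number in perspective, a quick back-of-envelope calculation (my own, not from the talk) shows what "in excess of 100B log lines per day" means as a sustained ingest rate:

```python
# Rough scale check: sustained throughput implied by 100B log lines/day.
LINES_PER_DAY = 100_000_000_000  # "in excess of 100B log lines per day"
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

lines_per_second = LINES_PER_DAY / SECONDS_PER_DAY
print(f"Sustained rate: ~{lines_per_second:,.0f} lines/sec")
# On the order of 1.16 million lines per second, averaged over the day --
# actual peaks across multiple data centers would be higher.
```

At that rate, per-message handling is off the table, which is part of why batching transports like Flume and Kafka, and batch MapReduce loaders like Camus, fit this pipeline.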

About the speaker

Michael Keane

Michael Keane is currently a team lead on the (Big) Data Team in the Chicago offices of Conversant, an Alliance Data Systems (ADS) company.



Date: Mon 2nd November 2015
