After migrating from proprietary processes to Apache Flume, new requirements drove Conversant engineers to insert Kafka into the log flow. In this session you'll learn how Conversant transfers in excess of 100B log lines per day from multiple data centers using a combination of Apache Flume, Kafka, Camus, and Hadoop. We'll discuss how and why your organization may need to extend the Camus API, summarize Conversant's lessons learned and the additional benefits gained along the way, and show how Conversant is positioned to make better, faster use of its data by adding Kafka and Camus to its log processing flow.
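To make the "insert Kafka into the log flow" step concrete, here is a minimal sketch of a Flume agent configuration that publishes collected log lines to a Kafka topic via Flume's built-in Kafka sink (available since Flume 1.6; the `kafka.bootstrap.servers` property name shown is the 1.7+ form). The agent name, file paths, topic name, and broker addresses are illustrative assumptions, not Conversant's actual settings:

```properties
# Hypothetical Flume agent: tail application logs and publish them to Kafka.
agent1.sources = logSource
agent1.channels = memChannel
agent1.sinks = kafkaSink

# Tail-style source reading application log files (path is an assumption)
agent1.sources.logSource.type = TAILDIR
agent1.sources.logSource.filegroups = f1
agent1.sources.logSource.filegroups.f1 = /var/log/app/.*\.log
agent1.sources.logSource.channels = memChannel

# In-memory channel buffering events between source and sink
agent1.channels.memChannel.type = memory
agent1.channels.memChannel.capacity = 10000

# Kafka sink publishes each Flume event to the topic; a Camus (or successor
# Gobblin) job on the Hadoop side can then consume the topic in batch.
agent1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafkaSink.kafka.bootstrap.servers = broker1:9092,broker2:9092
agent1.sinks.kafkaSink.kafka.topic = logs.raw
agent1.sinks.kafkaSink.channel = memChannel
```

With Kafka decoupling producers from consumers, the same topic can feed both the batch Camus ingest into Hadoop and any lower-latency consumers added later.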
Tracks: Hadoop, Spark
The presenter is currently a team lead on the (Big) Data Team in the Chicago offices of Conversant, an Alliance Data Systems (ADS) company.