
Recipes for running Spark Streaming applications in production

A session at Spark Summit 2015

Tuesday 16th June, 2015

2:00pm to 2:30pm

Spark Streaming extends core Apache Spark to perform large-scale stream processing. It is being rapidly adopted by companies across many business verticals: ad monitoring, real-time analysis of machine data, anomaly detection, and more. This interest is due to its simple, high-level programming model and its seamless integration with SQL querying (Spark SQL), machine learning algorithms (MLlib), and other Spark libraries. However, to build a real-time streaming analytics pipeline, it is not enough to be able to express your business logic easily. Running the platform with high uptime and monitoring it continuously poses many operational challenges. Fortunately, Spark Streaming makes all of that easy as well. In this talk, I am going to elaborate on the operational aspects of a Spark Streaming application at different stages of deployment: prototyping, testing, monitoring continuous operation, and upgrading. In short, all the recipes that take you from "hello world" to large-scale production in no time.
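To illustrate the high-level programming model and the kind of "hello world to production" recipe the abstract refers to, here is a minimal word-count sketch using the DStream API, with metadata checkpointing enabled so the driver can recover after a restart. The application name, checkpoint path, host, and port are placeholders chosen for illustration, not details from the session.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  // Placeholder checkpoint directory; in production this would be a
  // fault-tolerant store such as HDFS or S3.
  val checkpointDir = "/tmp/streaming-checkpoint"

  def createContext(): StreamingContext = {
    // local[2] is for trying this out locally; a real deployment would
    // submit against a cluster manager (standalone, YARN, or Mesos).
    val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint(checkpointDir)

    // Hypothetical source: a text stream on localhost:9999 (e.g. `nc -lk 9999`).
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" "))
         .map(word => (word, 1))
         .reduceByKey(_ + _)
         .print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // getOrCreate rebuilds the context from checkpoint data after a driver
    // restart, or creates a fresh one on first launch.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```

The same word-count logic could be written without the checkpoint/getOrCreate wrapper for quick prototyping; wrapping the setup in a creating function is one common way to move such a job toward the fault-tolerant operation the talk is concerned with.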

About the speaker

Tathagata Das

Grad student at UC Berkeley. Interested in cloud computing. Loves movies and food. (bio from Twitter)



When

Time 2:00pm to 2:30pm PST

Date Tue 16th June 2015

Short URL

lanyrd.com/sdpdhh

Official event site

spark-summit.org/2015


