Tuesday 16th June, 2015
1:00pm to 1:30pm
It can be a frustrating experience for an application developer when her application: (a) fails before completion, (b) does not run quickly or efficiently, or (c) does not produce correct results. Such failures happen for many reasons. For example, Spark's lazy evaluation, while excellent for performance, can make root-cause diagnosis hard. We are working closely with application developers to make diagnosis, tuning, and debugging of Spark applications easy. Our solution is based on holistic analysis and visualization of profiling information gathered from many points in the Spark stack: the program, the execution graph, counters, data samples from RDDs, time series of metrics exported by endpoints in Spark, YARN, and the OS, among others. Through a demo-driven walk-through of failed, slow, and incorrect applications taken from everyday use of Spark, we will show how such a solution can tremendously improve the productivity of Spark application developers.
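To see why lazy evaluation complicates root-cause diagnosis, consider a minimal analogy in plain Python (not Spark itself): a generator expression plays the role of a lazy RDD transformation, and consuming it plays the role of an action. The bug is written in the "transformation", but the error only surfaces at the "action", far from its cause.

```python
# Analogy only: generators are lazy like RDD transformations, so a bug in a
# "transformation" surfaces only when an "action" forces evaluation.
data = [1, 2, 0, 4]

# "Transformation": defining the map executes nothing, so no error is raised
# here even though the division-by-zero bug is on this line.
inverted = (1 / x for x in data)

# "Action": only consuming the generator triggers the failure, at a point in
# the program far from where the faulty logic was written.
try:
    result = list(inverted)
except ZeroDivisionError as e:
    print(f"failure surfaced at the action, not the transformation: {e}")
```

In real Spark jobs the gap is wider still: the stack trace points at an executor running `collect()` or `count()`, while the faulty `map` or `filter` may have been defined in a different file, which is exactly the kind of gap the profiling and visualization described above aims to close.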