Wednesday 20th March, 2013
3:00pm to 3:40pm
Online learning techniques, such as Stochastic Gradient Descent (SGD), are powerful when applied to risk minimization and convex games on large problems. However, their sequential design prevents them from taking advantage of newer distributed frameworks such as Hadoop/MapReduce. In this session, we will look at how we parallelized linear regression parameter optimization using IterativeReduce, a framework built on next-generation Hadoop YARN.
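The core idea the session describes can be sketched as parameter averaging: each worker runs SGD over its own data shard (the "map" step), and a "reduce" step averages the resulting weight vectors before broadcasting them back for the next superstep. The sketch below is a minimal serial simulation of that pattern in plain NumPy; it is an illustration under stated assumptions, not the IterativeReduce API, and all function names here (`sgd_epoch`, `parallel_sgd`) are hypothetical.

```python
import numpy as np

def sgd_epoch(w, X, y, lr=0.01):
    """One SGD pass over a worker's shard, minimizing 0.5 * (x.w - y)^2."""
    for xi, yi in zip(X, y):
        grad = (xi @ w - yi) * xi  # gradient of the squared error for one sample
        w = w - lr * grad
    return w

def parallel_sgd(X, y, n_workers=4, n_supersteps=20, lr=0.01):
    """Simulate the map/reduce supersteps serially: each 'worker' trains on
    its shard from the current global weights; the reduce step averages
    the per-worker weight vectors (the parameter-averaging pattern)."""
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(n_supersteps):
        local = [sgd_epoch(w.copy(), Xs, ys, lr) for Xs, ys in shards]  # map
        w = np.mean(local, axis=0)                                      # reduce
    return w

# Synthetic linear-regression problem to exercise the sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=400)
w = parallel_sgd(X, y)
```

In a real YARN deployment the map steps would run concurrently on separate containers, with the averaged parameters shipped back to workers between supersteps; only that communication pattern, not this toy loop, is what distinguishes the approach from plain sequential SGD.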
Programmer focused on distributed systems, data mining, and self-organizing algorithms; previously did a lot of Hadoop work for the smart grid, now a solutions architect at Cloudera. (Bio from Twitter.)