Why Big Data Needs To Be Functional

A session at Northeast Scala Symposium 2012

Friday 9th March, 2012

11:25am to 12:00pm (EST)

Apache Hadoop is the current darling of the "Big Data" world. Most Hadoop applications are written against the low-level Java API or in high-level languages such as Hive (a SQL dialect). I examine how object-oriented, Java-centric thinking has limited the effectiveness of Hadoop, both in its internals and in the way applications are written for it. Using Scala examples from tools like Scrunch, Spark, and others, I demonstrate why functional programming is the way to improve both internal efficiency and developer productivity. Finally, I look at the potential future role for Scala in Big Data.

About the speaker

Dean Wampler

Minor irritant, major pedant, Big Data poser, O'Reilly author, lurks at Think Big Analytics. (Bio from Twitter.)



Short URL

lanyrd.com/sqthk

Official event site

nescala.org

Books by speaker

  • Programming Scala
