Why Big Data Needs To Be Functional

A session at Northeast Scala Symposium 2012

Friday 9th March, 2012

11:25am to 12:00pm (EST)

Apache Hadoop is the current darling of the "Big Data" world. Most Hadoop applications are written in the low-level Java API or in high-level languages like Hive (a SQL dialect). I examine how OOP and Java thinking affect the effectiveness of Hadoop, both its internals and the way you write applications. Using Scala examples from tools like Scrunch, Spark, and others, I demonstrate why functional programming is the way to improve internal efficiency and developer productivity. Finally, I look at the potential future role for Scala in Big Data.
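
To make the contrast concrete, here is a minimal word-count sketch using Spark's Scala RDD API (an illustration only, not material taken from the talk; the master setting and HDFS paths are placeholders). The same job written against Hadoop's low-level Java MapReduce API typically requires separate mapper, reducer, and driver classes and far more boilerplate.

  import org.apache.spark.SparkContext

  // Count word occurrences in a text file with Spark's RDD API.
  // "local" and the HDFS paths below are placeholder values.
  object WordCount {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext("local", "WordCount")
      sc.textFile("hdfs:///path/to/input.txt")
        .flatMap(line => line.split("\\s+"))  // split each line into words
        .map(word => (word, 1))               // pair each word with a count of 1
        .reduceByKey(_ + _)                   // sum the counts for each word
        .saveAsTextFile("hdfs:///path/to/output")
      sc.stop()
    }
  }

The whole computation is expressed as a pipeline of function values over an immutable dataset, which is the productivity argument the abstract makes for functional programming on Big Data.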

About the speaker

Dean Wampler

Big Data, Scala, and FP expert.


Books by speaker

  • Programming Scala: Scalability = Functional Programming + Objects
  • Programming Hive
  • Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility
