Wednesday 16th September, 2015
12:45pm to 2:15pm
Big Data Track
Operations, analytics, and business teams are requesting ever-increasing amounts of data delivered to Big Data analysis platforms and tools. This demand is driven by the desire to better understand user experience, quality of service, and real-time system performance, and to uncover the patterns and opportunities that exist to improve service, sell products, and delight users. Capital One Technology met this demand for data from consumer-facing applications using Spring XD ("eXtreme Data") and custom Java libraries to stream data in real time from applications to Hadoop HDFS, Mongo, Kafka, Splunk, and others.

In this talk we will discuss our journey from batch-oriented, database-centric processes to a real-time data streaming solution and the significant benefits achieved. We'll cover the implications of adopting a streaming solution, why we selected Spring XD, and the target architecture we are implementing to land data in HDFS. We'll cover how we automated environment provisioning, the design of our client libraries (Java, Spring), our XD environment, and how we're tapping into all that data. We'll describe challenges we overcame on this journey, including connecting to our HDFS cluster, working with multiple Mongo stores, using Kerberos, and ensuring protection and encryption of sensitive data end to end. Lastly, we'll talk about a number of use cases where we are evaluating Spring XD and see potential benefits, including "rolling window" system analytics, digital message delivery, and event-driven architectures.
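As a rough illustration of the kind of pipeline the talk describes, Spring XD streams are defined in its shell with a pipes-and-filters DSL: a source module feeds one or more processing steps and a sink, and "taps" let additional consumers read the same data without disturbing the primary stream. The sketch below is hypothetical (stream names, port, and Kafka topic are invented for illustration), not Capital One's actual configuration:

```
# Primary stream: ingest application events over HTTP and land them in HDFS
xd:> stream create --name appevents --definition "http --port=9000 | hdfs" --deploy

# Tap the same stream and fan the data out to a Kafka topic for other consumers
xd:> stream create --name appevents-tap --definition "tap:stream:appevents > kafka --topic=app-events" --deploy
```

Because taps attach to an existing stream rather than the source itself, new destinations (Kafka, Splunk, Mongo) can be added later without changing the producing application, which is one reason a streaming platform like XD suits the fan-out requirements described above.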