At Teowaki we have a system for API usage analytics, with Redis as a fast intermediate store and BigQuery as a big data backend. As a result, we can launch aggregated queries on our traffic/usage data in just a few seconds, and we can search for usage patterns that wouldn't be obvious otherwise.
In this session I will talk about how we entered the big data world, which alternatives we evaluated, and how we are using Redis and BigQuery to solve our problem.
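As a rough, stdlib-only sketch of this pattern (a real deployment would use a Redis client for the hot counters and the BigQuery API for bulk loads; all names below are illustrative, not Teowaki's code):

```python
import time
from collections import Counter

class UsageBuffer:
    """Fast intermediate store for API hit counters (a Counter standing in
    for Redis INCR on composite keys), flushed periodically as rows for a
    batch analytics backend such as BigQuery."""

    def __init__(self):
        self.counts = Counter()

    def record_hit(self, api_key, endpoint):
        # In a real system this would be a Redis INCR, which is O(1) and fast.
        self.counts[(api_key, endpoint)] += 1

    def flush(self):
        # Drain the buffer into rows suitable for a bulk insert
        # (e.g. a BigQuery streaming insert or load job).
        rows = [
            {"api_key": k, "endpoint": e, "hits": n, "ts": int(time.time())}
            for (k, e), n in self.counts.items()
        ]
        self.counts.clear()
        return rows

buf = UsageBuffer()
for _ in range(3):
    buf.record_hit("acme", "/v1/users")
buf.record_hit("acme", "/v1/teams")
rows = buf.flush()
```

The point of the split is that the write path stays cheap (in-memory increments) while the expensive aggregated queries run against the batch backend.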
In this session, you will hear about lessons learned while building one of the first completely
reactive database drivers on the JVM. We will cover best practices learned the hard way around
topics like asynchronous architecture, backpressure, failure management, scalability and completely event-driven stacks.
You will see how to write code that produces nearly zero garbage (working with off-heap structures
and pooled memory), elegantly reacts to load spikes through implicit batching, is resilient to
failures by applying patterns like bulkheading and release valves and can serve load in the
hundreds of thousands of operations per second with low latency.
If you love to jump in at the deep end, come join me to explore mostly unbeaten tracks which will
eventually lead into a reactive and scalable future of database drivers. Oh, and by the way you
will learn about awesome projects like RxJava, Netty and the LMAX Disruptor.
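One of those ideas, implicit batching, can be sketched conceptually: callers enqueue operations one at a time, and the flush loop drains whatever has accumulated, so batch size grows automatically under a load spike. This is a hypothetical stdlib-only Python illustration of the pattern, not the driver's actual code:

```python
from collections import deque

class BatchingWriter:
    """Implicit batching: submissions are individual, but each flush drains
    everything currently queued into one batch, coalescing work under load."""

    def __init__(self):
        self.queue = deque()
        self.batches = []

    def submit(self, op):
        self.queue.append(op)

    def flush_once(self):
        # Under light load batches are tiny; under a spike this same loop
        # coalesces many operations into a single write to the server.
        if not self.queue:
            return 0
        batch = []
        while self.queue:
            batch.append(self.queue.popleft())
        self.batches.append(batch)
        return len(batch)

w = BatchingWriter()
w.submit("SET a 1")
w.flush_once()             # light load: a batch of one
for i in range(100):       # a load spike arrives between flushes...
    w.submit(f"SET k{i} {i}")
spike_batch = w.flush_once()   # ...and is absorbed as one large batch
```

Nobody tunes a batch size here; the batching emerges from the ratio of arrival rate to flush rate, which is what makes it "implicit".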
Orleans is a distributed virtual actor computing framework, developed by Microsoft Research, and is currently available in public preview. The framework was used to build the server-side infrastructure for Halo 4, where it was scaled across hundreds of servers to support the global launch of the game.
Orleans was designed as a highly available, distributed system to run on premises or on Azure. It handles concurrency with asynchronous message passing between actors. It’s also relatively simple to use, and accessible to most developers with a little C# experience.
I propose introducing the Actor Model and covering some of the basics of Orleans with slides, then moving on to building a simple demonstration application around an ‘internet of things’ scenario.
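Orleans grains are written in C#, but the core actor idea, a mailbox plus strictly serialised message processing so that state needs no locks, can be sketched in a few lines of Python (a conceptual illustration only; Orleans adds virtual activation, distribution and persistence on top of this):

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox and a single thread that processes one
    message at a time, so internal state is never accessed concurrently."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.state = 0
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:        # poison pill: shut the actor down
                break
            self.state += msg      # messages handled strictly one by one

    def tell(self, msg):
        self.mailbox.put(msg)      # asynchronous, non-blocking send

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()

counter = Actor()
for _ in range(1000):
    counter.tell(1)
counter.stop()
```

A thousand concurrent-looking increments arrive safely with no lock in sight, which is the property that makes actors attractive for distributed state.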
by Tom Hall
Process calculi are ways of formally modelling concurrent computation, somewhat like the Lambda Calculus does for sequential computation. We all know the move to multicore is making writing programs harder, and many of us are writing distributed systems and having to deal with all the interesting ways programs can fail.
After a brief history of different approaches to modelling concurrency, we will split the time between the formalisms and languages that implement them.
We will look at:
While one can easily work a lifetime without knowing this material, it is stimulating stuff and I hope you enjoy it.
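To give a flavour: process calculi in the CSP tradition model concurrency as processes that share nothing and communicate only over channels. A tiny stdlib-only Python sketch of that style (an illustration of the idea, not one of the formal calculi themselves):

```python
import queue
import threading

# Two processes communicate only via a channel (a bounded queue), never
# shared memory. "ch.put(x)" plays the role of CSP's "ch ! x" and
# "ch.get()" the role of "ch ? x".

def producer(ch):
    for i in range(5):
        ch.put(i)              # send i on the channel
    ch.put(None)               # end-of-stream marker

def consumer(ch, out):
    while True:
        v = ch.get()           # receive from the channel
        if v is None:
            break
        out.append(v * v)

ch = queue.Queue(maxsize=1)    # tiny buffer keeps the processes in lockstep
out = []
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, out))
t1.start(); t2.start()
t1.join(); t2.join()
```

The formalisms make this style precise enough to reason about deadlock and equivalence, which is exactly what informal multicore code lacks.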
Apache Spark is a nascent big data framework that complements Apache Hadoop to offer fast, scalable access to big data. In recent months it's gained a lot of traction in industry as it has become an Apache incubation project. In this talk, you'll hear the good, the bad, and the view beyond the hype.
This talk is about how to make Apache Spark cloud friendly, the kinds of jobs that are perfect for it and the kinds of performance and scale you can expect. Expect to see a bunch of demos on Microsoft Azure showing you the power of this framework for all aspects of big data, including interactive querying of terabyte-scale datasets, machine learning and streaming messages in real time.
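For readers new to the model, the transformations Spark distributes across a cluster can be previewed with plain Python over an in-memory list (a conceptual sketch only; the `rdd.*` names in the comments are the analogous Spark operations):

```python
# A toy log-analysis pipeline written sequentially; Spark runs the same
# chain of transformations in parallel over partitions cached in memory.

lines = ["error timeout", "ok", "error disk full", "ok", "ok"]

errors = [l for l in lines if l.startswith("error")]   # rdd.filter(...)
words = [w for l in errors for w in l.split()]         # rdd.flatMap(...)
pairs = [(w, 1) for w in words]                        # rdd.map(...)

counts = {}                                            # rdd.reduceByKey(add)
for w, n in pairs:
    counts[w] = counts.get(w, 0) + n
```

The appeal of Spark is that this familiar collection-style code scales from a list of five strings to terabytes of partitioned data with essentially the same shape.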
Microservice architecture is now becoming the standard for a wide range of companies. Among the problems to solve when building microservices, developers may need to think asynchronously.
Reactor offers progressive, non-opinionated concurrency handling to any JVM application - and beyond. Not only is it a handy lightweight toolkit, it also implements the Reactive Streams specification and as such is interoperable with friends such as RxJava or Akka.
Come discover some tips and tricks for building microservices on top of Reactor, understand how it builds on the Reactive Streams specification, and why this is a game changer in today's software development.
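The key mechanism Reactive Streams standardises is demand signalling: a subscriber asks for `n` items with `request(n)` and the publisher may emit no more than that, which is how backpressure propagates upstream. A minimal, illustrative Python sketch of that contract (not Reactor's actual API):

```python
class Subscription:
    """Carries demand from subscriber to publisher, Reactive Streams style."""

    def __init__(self, source, subscriber):
        self.source = iter(source)
        self.subscriber = subscriber

    def request(self, n):
        # Emit at most n items: data flows only in response to demand,
        # so a slow consumer is never flooded.
        for _ in range(n):
            try:
                self.subscriber.on_next(next(self.source))
            except StopIteration:
                self.subscriber.on_complete()
                return

class CollectingSubscriber:
    def __init__(self):
        self.received = []
        self.done = False

    def on_next(self, item):
        self.received.append(item)

    def on_complete(self):
        self.done = True

sub = CollectingSubscriber()
s = Subscription(range(10), sub)
s.request(3)    # a slow consumer asks for three items only...
s.request(2)    # ...then two more once it has capacity
```

Because the demand protocol is standardised, a Reactor publisher can feed an RxJava or Akka subscriber and backpressure still works end to end.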
Small teams typically shy away from running applications that deal with the most sensitive and highly regulated data in the cloud.
I will share practical tips from 2 years running a SaaS application that stores psychiatric medical records, and detail how to overcome common security, encryption, privacy, and regulatory concerns. This talk will be useful no matter which government jurisdiction you operate in; it is not a U.S.-centric talk.
by Phil Wills
Phil will describe how the Guardian approaches scaling its services using Scala, Elasticsearch, AWS and the Fastly CDN.
As our applications need to process ever more data in ever shorter time, it's difficult to stay sane. The architecture of our applications quickly becomes a monstrosity of different databases, queues and servers held together by string and sellotape. That may work at first, but soon gets ugly. If something goes wrong, it's hard to recover. If features of the application need to change, it's hard to adapt.
Stream processing gives us a route towards building data systems that are scalable, robust, and easy to adapt to changing requirements. In this talk, we will discuss how you can bring sanity to your own application architecture with Apache Samza, an open source framework for distributed stream processing applications.
Apache Samza is used in production at LinkedIn, building upon years of hard-won experience in building large-scale data systems. Even if you're not processing millions of messages per second, like LinkedIn is, you can still pick up useful tips on how to structure your data processing for scale and agility.
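The shape of a job like those Samza runs can be sketched as a stateful task that consumes one message at a time from a partitioned stream, updates local state, and emits results downstream (a hypothetical Python illustration; Samza's actual API is JVM-based and backs the state with a local key-value store):

```python
class PageViewCounter:
    """Toy stream task: per-user page-view counts with incremental output."""

    def __init__(self):
        self.state = {}       # Samza would persist this in a local KV store
        self.output = []      # stands in for collector.send(...) downstream

    def process(self, message):
        user = message["user"]
        self.state[user] = self.state.get(user, 0) + 1
        # Emit the updated count as a new event for downstream consumers.
        self.output.append({"user": user, "views": self.state[user]})

task = PageViewCounter()
for m in [{"user": "a"}, {"user": "b"}, {"user": "a"}]:
    task.process(m)
```

The same one-message-at-a-time structure holds whether the input is three dicts in a list or millions of messages per second across partitions; the framework's job is the partitioning, state durability and fault tolerance around it.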
Robert Virding was one of the three co-creators of Erlang at Ericsson. In this session he'll talk about the circumstances that led to Erlang, the design choices they made, what the future holds for Erlang, and how it all relates to building scalable distributed systems now.
by Alex Dean
Apache Kafka and Amazon Kinesis are more than just message queues - they can serve as a unified log which you can put at the heart of your business, effectively creating a "digital nervous system" which your company's applications and processes can be re-structured around.
In this talk, Alex will provide an introduction to unified log technology, highlight some killer use cases and also show how Kinesis is being used "in anger" at Snowplow. Alex's talk will draw on his experiences working with event streams over the last two and a half years at Snowplow; it’s also heavily influenced by Jay Kreps’ unified log monograph (https://github.com/snowplow/snow...), and by Alex's recent work penning Unified Log Processing, a Manning book (http://manning.com/dean/).
Alex's talk will show how event streams inside a unified log are an incredibly powerful primitive for building rich event-centric applications, unbundling local transactional silos and creating a single version of truth for a company.
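The unified-log primitive itself is simple: an append-only stream of events, with each downstream application tracking its own read offset, so consumers can be attached, scaled or replayed independently. A minimal, illustrative Python sketch of the idea (not Kafka's or Kinesis's API):

```python
class UnifiedLog:
    """Append-only event stream with independent per-consumer offsets."""

    def __init__(self):
        self.events = []       # append-only; existing events never change
        self.offsets = {}      # consumer name -> index of next unread event

    def append(self, event):
        self.events.append(event)

    def poll(self, consumer, max_events=10):
        # Each consumer reads from its own position, at its own pace.
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:start + max_events]
        self.offsets[consumer] = start + len(batch)
        return batch

log = UnifiedLog()
log.append({"type": "page_view", "user": "u1"})
log.append({"type": "add_to_cart", "user": "u1"})

analytics = log.poll("analytics")            # reads both events
fraud = log.poll("fraud", max_events=1)      # lags behind, reads one
```

Because the log, not any one database, is the source of truth, adding a new consumer is just choosing a name and an offset: no coordination with the applications already reading.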
Alex's talk will conclude with a live demo of Amazon Kinesis in action processing Snowplow events.
28th October 2014