Early morning coffee and a light, hangover-curing breakfast.
by Jonas Bonér
The idea of the present is an illusion. Everything we see, hear and feel is just an echo from the past. But this illusion has influenced us and the way we view the world in so many ways: from Newton’s physics, with a linearly progressing timeline accruing absolute knowledge along the way, to the von Neumann machine, with its total ordering of instructions updating mutable state in full control of the “present”. But unfortunately this is not how the world works. There is no present; all we have are facts derived from the merging of multiple pasts. The truth is closer to Einstein’s physics, where everything is relative to one’s perspective.
As developers we need to wake up and break free from the perceived reality of living in a single globally consistent present. The advent of multicore and cloud computing architectures meant that most applications today are distributed systems—multiple cores separated by the memory bus or multiple nodes separated by the network—which puts a harsh end to this illusion. Facts travel at the speed of light (at best), which makes the distinction between past and perceived present even more apparent in a distributed system where latency is higher and where facts (messages) can get lost.
The only way to design truly scalable and performant systems that can construct a sufficiently consistent view of history—and thereby our local “present”—is to treat time as a first-class construct in our programming model and to model the present as facts derived from the merging of multiple concurrent pasts.
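To make the idea concrete, here is a minimal, hypothetical sketch (not from the talk itself) of deriving a local "present" by merging two timestamped event logs from different nodes:

```python
import heapq

# Two nodes each observed a sequence of facts (their "pasts"),
# tagged with logical timestamps. The local "present" is derived
# by merging those pasts, not by reading a shared global state.
past_a = [(1, "price", 100), (4, "price", 104)]
past_b = [(2, "price", 101), (3, "qty", 7)]

state = {}
# heapq.merge interleaves the two sorted logs in timestamp order.
for ts, key, value in heapq.merge(past_a, past_b):
    state[key] = value  # for each fact, the later timestamp wins

print(state)  # {'price': 104, 'qty': 7}
```

Each node can run this merge independently and converge on the same derived state, which is the essence of treating the present as a function of history.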
In this talk we will explore what all this means to the design of our systems, how we need to view and model consistency, consensus, communication, history and behavior, and look at some practical tools and techniques to bring it all together.
by Andy Piper
Data is moving. Data has always been on the move; the fact that, when using computers, we often need data to stand still in order to do something with it usually reflects our lack of skill rather than any property of the data itself. So how do we query fast-moving data? The normal rules do not apply - or do they?
In this talk I will discuss event stream processing, particularly the challenges in processing high-velocity, high-throughput streams of data and some of the solutions that people have tried. We will also look at some of the theoretical underpinnings of stream processing and the challenges around high availability, transactionality and integration with semi-static data sources. Finally, we’ll touch on the current buzz around “Fast Data”, Big Data’s amped-up cousin, and what all this means for Hadoop and other batch-oriented systems.
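As a flavour of what such a query looks like, here is a small, illustrative sketch (not taken from the talk) of a sliding-window aggregation: a basic stream-processing operator that produces answers incrementally, without ever asking the data to stand still:

```python
from collections import deque

def sliding_window_avg(stream, size):
    """Yield the running average over the last `size` events.
    The operator holds only the window, never the whole stream."""
    window = deque(maxlen=size)  # old values fall out automatically
    for value in stream:
        window.append(value)
        yield sum(window) / len(window)

prices = iter([10, 20, 30, 40])
print(list(sliding_window_avg(prices, size=2)))  # [10.0, 15.0, 25.0, 35.0]
```

The same shape generalises to counts, joins and more exotic windows; the constant is that state is bounded even when the stream is not.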
Reactive user interfaces need to handle everything from dozens, sometimes hundreds, of updates per second down to data that changes on a daily or weekly basis, as well as input from users. This means that literally everything is a stream of data.
We will discuss and demonstrate how the trading applications we've built make reactivity a first-class concept to compose these streams to provide real-time information about the state of the market and the platform. We'll talk about building reactive applications that handle requests that time out, data that gets out of sync, and servers that fail - all elegantly and without refreshing the page.
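As an illustration of the kind of combinator involved (a hypothetical sketch, not the trading platform's actual code), here is a stream that degrades gracefully to its last known value when an update times out, so the UI stays responsive rather than blocking:

```python
import queue

def latest_or_stale(source, timeout, default):
    """Yield ('live', value) for each update from `source`; if no
    update arrives within `timeout` seconds, yield ('stale', last)
    so consumers can render a marked, possibly out-of-date value.
    A `None` message is used here as an end-of-stream sentinel."""
    last = default
    while True:
        try:
            last = source.get(timeout=timeout)
            if last is None:  # sentinel: stream closed
                return
            yield ("live", last)
        except queue.Empty:
            yield ("stale", last)

q = queue.Queue()
for v in (101.5, 101.7, None):
    q.put(v)
print(list(latest_or_stale(q, timeout=0.05, default=0)))
# [('live', 101.5), ('live', 101.7)]
```

A consumer can render "stale" ticks differently (greyed out, say) instead of refreshing the page or freezing while a server recovers.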
by Dean Wampler
The Reactive Manifesto's *Resilient* trait says that a system must stay responsive in the face of failure. I'll discuss how various systems approach failure handling. I'll start with the theoretical foundation of *Communicating Sequential Processes* (CSP) and two modern systems inspired by it, the Go language and Clojure's core.async library. Then I'll examine failure handling in modern implementations of the Actor Model, which is dual to CSP (I'll explain what that means), as well as examine failure handling in implementations of Functional Reactive Programming (FRP) and Reactive Extensions (Rx).
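To preview the contrast, here is a toy sketch (my own illustration, not Wampler's examples) of actor-style "let it crash" supervision, where failure handling lives in a supervisor that restarts the worker, rather than inline in the processing code as CSP-style programs typically do:

```python
import queue

def worker(inbox, outbox):
    """A tiny actor: process messages until a None sentinel arrives.
    It makes no attempt to handle bad input itself."""
    while True:
        msg = inbox.get()
        if msg is None:
            return
        outbox.put(100 / msg)  # raises ZeroDivisionError on 0

def supervise(inbox, outbox, max_restarts=3):
    """Supervisor: on failure, drop the poison message and restart
    the worker with fresh state, up to `max_restarts` times."""
    for _ in range(max_restarts + 1):
        try:
            worker(inbox, outbox)
            return  # worker finished normally
        except ZeroDivisionError:
            continue  # 'let it crash': restart instead of patching state

inbox, outbox = queue.Queue(), queue.Queue()
for m in (4, 0, 5, None):
    inbox.put(m)
supervise(inbox, outbox)

results = []
while not outbox.empty():
    results.append(outbox.get())
print(results)  # [25.0, 20.0]
```

The point of the pattern is that the code handling failure is separate from the code doing the work, so a crash cannot corrupt in-flight state.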
by Andrew Stewart
So, you're building responsive and resilient applications, scaling to deal with an ever expanding firehose of events arriving at your front door. You're filling storage by the terabyte without even trying, and that needs to be resilient, and responsive, and scalable too. So obviously you're storing your data using... well... what? Is there really a single technology that meets all your needs for persistence? And are the 'conventional' technologies really a lost cause?
In this talk we'll look at some of the successful - and less successful - strategies for managing high-frequency, high-volume data. We will explore what is technically possible when you need to record millions of messages per second durably without a bottomless budget, review the common storage options and what they are capable of, and also look at what is possible when you're willing to roll up your sleeves and write your own storage engine.
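As a taste of what "rolling up your sleeves" involves, here is a deliberately minimal, hypothetical sketch of the heart of a write-optimised storage engine: an append-only log of length-prefixed records, addressed by byte offset:

```python
import os
import struct
import tempfile

class AppendLog:
    """Illustrative append-only log: sequential writes for speed,
    records located by the offset returned at append time.
    (A real engine adds fsync policy, checksums, compaction...)"""

    def __init__(self, path):
        self.f = open(path, "ab+")  # append writes, random reads

    def append(self, payload: bytes) -> int:
        offset = self.f.seek(0, os.SEEK_END)
        # 4-byte little-endian length prefix, then the payload
        self.f.write(struct.pack("<I", len(payload)) + payload)
        return offset

    def read(self, offset: int) -> bytes:
        self.f.flush()  # make buffered writes visible to reads
        self.f.seek(offset)
        (n,) = struct.unpack("<I", self.f.read(4))
        return self.f.read(n)

path = os.path.join(tempfile.mkdtemp(), "events.log")
log = AppendLog(path)
off = log.append(b"hello")
log.append(b"world")
print(log.read(off))  # b'hello'
```

Sequential appends are what let commodity disks absorb very high message rates; everything cleverer (indexes, durability guarantees, retention) is layered on top of this core.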
Before the closing keynote there will be another Q&A clinic with the speakers. As on day 1, this is your chance to ask the speakers those all-important questions in a relaxed, informal setting.
If you have any questions that you would like us to put to the speakers, email us at firstname.lastname@example.org
by Leslie Lamport
Architects draw detailed plans before construction begins. Software engineers don't. Can this be why buildings seldom collapse and programs often crash?
A blueprint for software is called a specification. TLA+ specifications have been described as exhaustively testable pseudo-code. High-level TLA+ specifications can catch design errors that are now found only by examining the rubble after a system has collapsed.
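The model-checking idea behind "exhaustively testable pseudo-code" can be sketched in a few lines (a toy illustration, not TLA+ or its TLC checker): enumerate every reachable state of a specification and check an invariant in each:

```python
from collections import deque

def check(init, next_states, invariant):
    """Breadth-first exploration of the whole reachable state space.
    Returns a counterexample state, or None if the invariant holds
    everywhere reachable."""
    seen, frontier = set(init), deque(init)
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s  # design error found before any rubble
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

# A hypothetical one-variable spec: a 24-hour clock.
def tick(h):
    return {(h + 1) % 24}

print(check({0}, tick, lambda h: h < 24))  # None
```

Because the exploration is exhaustive over the model, a `None` result is a proof about the design, not a lucky test run, which is what distinguishes a specification from ordinary testing.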
18th–21st November 2014