This presentation will provide a deep dive into a new Apache project: Apache Ignite. Apache Ignite is an in-memory data fabric that combines in-memory clustering and computing, an in-memory data grid, and in-memory streaming under a single umbrella. The fabric sits between applications and various data sources, giving applications extreme performance and scalability. The expected audience includes anyone interested in projects like Apache Hadoop and Apache Spark, to which Apache Ignite is complementary. Attendees will learn the major features of Apache Ignite as well as how it integrates with the rest of the Apache big data ecosystem. Apache Ignite is the first general-purpose in-memory computing platform in the Apache Software Foundation family.
You know beautiful code when you see it. But what about lots and lots of beautiful code; is it still beautiful at a distance? “Structure” is how all the snippets of code are organized into containers (methods, classes, packages, jars, etc.) and how the containers relate to each other. The same code can be organized into anything from a hopelessly tangled mess … to a Beautiful Structure.
This talk will explore how “locality of relationship” affects coupling, cohesion, and the width of interfaces. It will show how some structural patterns dramatically increase complexity, and how alternative patterns can massively decouple and simplify. The structure of open-source Java projects (such as the beautifully-structured Spring framework) will be used to illustrate the principles.
This presentation provides an overview of the technical solutions for the orchestration of Docker-based services, and of why it's so important for you to rethink your IT business in the finance sector. Creating, maintaining, and modifying many machines and containers on your developer notebook, in the data center, or in the cloud is a challenge. Our applications are constantly adapting and expanding to cover different use cases. The Docker ecosystem offers promising tools for service discovery, automatic scaling, failover, and deployment. The talk presents the practical benefits of the latest Docker projects such as “machine”, “swarm”, “network”, and “compose”. A comparison to other solutions, such as CoreOS and Kubernetes, is also discussed.
by Henk Kolk
Docker doesn’t just revolutionize your application hosting; it also revolutionizes your development pipeline. ING has over 250 DevOps teams, thousands of applications, and a complex application landscape. ING is simplifying that landscape in record time while introducing a web-scale architecture based on anti-fragility patterns. Speed is vital in this transformation, and one of ING’s key assets is its Continuous Delivery Pipeline. In this talk, we will show how ING uses test containers for confidence checks, integration testing of upstream and downstream services, creating dev/test environments for every feature branch, reverse proxying, and CI servers. As a result, we are able to automate test processes and reduce our integration testing costs.
by Peter Lawrey
Reproducing problems that you see in production in test or development environments can be very hard. Yet without this ability, you can’t be sure that what works in test will work in production.
This talk is about how you can provide this determinism by building a core architecture as a series of replayable state machines. Each state machine has every input serialized and recorded, with timings, so its behaviour can be reproduced to any point in the day. This provides not only recovery but complete reproducibility in test or development, which makes obscure issues, and even rare performance jitter, much easier to reproduce. Most importantly, you can have confidence that you have fixed the problem you saw in production, taking the guesswork out of fixing bugs.
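A minimal sketch of the replayable-state-machine idea, in plain Java. The class and field names, and the in-memory journal, are purely illustrative assumptions, not details from the talk: the point is only that a deterministic state machine plus a recorded input journal lets a fresh machine reproduce the live machine's state exactly.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayDemo {

    // The state machine must be deterministic: driven only by its inputs.
    static class AccountMachine {
        private long balance;
        void onInput(String command, long amount) {
            if ("DEPOSIT".equals(command)) balance += amount;
            else if ("WITHDRAW".equals(command)) balance -= amount;
        }
        long balance() { return balance; }
    }

    // A journal entry: every input recorded together with its timing.
    static class Entry {
        final long timestampNanos; final String command; final long amount;
        Entry(long t, String c, long a) { timestampNanos = t; command = c; amount = a; }
    }

    public static void main(String[] args) {
        List<Entry> journal = new ArrayList<>();
        AccountMachine live = new AccountMachine();

        // Production path: record each input, then apply it.
        Entry[] inputs = {
            new Entry(System.nanoTime(), "DEPOSIT", 100),
            new Entry(System.nanoTime(), "WITHDRAW", 30)
        };
        for (Entry e : inputs) {
            journal.add(e);                    // serialize + record (here: in memory)
            live.onInput(e.command, e.amount); // apply to the live state machine
        }

        // Test/dev path: replay the journal into a fresh machine to
        // reproduce the exact state at any point in the day.
        AccountMachine replayed = new AccountMachine();
        for (Entry e : journal) replayed.onInput(e.command, e.amount);

        System.out.println(live.balance() == replayed.balance()); // prints "true"
    }
}
```

In a real system the journal would be persisted (and replicated), which is what makes the same technique double as a recovery mechanism.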
by Ben Stopford
This talk will examine the various tools, mechanisms, and trade-offs that allow a data architecture to scale, from disk formats through to full-blown architectures for real-time storage, streaming, and batch processing. Attendees should gain a sense of the applicability of the different technologies, as well as how they can be sewn together.
by Jim Webber
Finance is awash with data, but much of it is subject only to discrete analysis in offline scenarios. In recent years, graph database technology (like Neo4j) has evolved to support connected analysis in real time yielding high-fidelity insights into your domain on demand and at scale.
In this talk we’ll discuss several real-world use-cases from live financial systems covering a broad range of common financial services, backed up by some live examples that show how convenient and powerful graph data can be. In particular we’ll see how graphs detect and prevent credit and insurance fraud, how they enable bank-wide real-time entitlements, how assets are managed and counterparties identified.
The live examples of these use-cases will use Neo4j’s Cypher query language to showcase how rapidly graph queries can be built and how performant they are.
In the last 30 years CPU performance has increased roughly 30,000-fold; that’s 15 iterations of Moore’s law. Sadly, though, a significant part of that CPU performance is spent allocating objects and then garbage collecting them just seconds later. This is the price we pay for a language that was designed to abstract us from the hardware. John will show some interesting techniques to vastly reduce the number of objects created by working in binary, even going as far as putting an entire FpML message into binary form in a single byte array, reducing its size by around 50 times and increasing serialisation performance by over 100 times. The demos will be run under Java 8, using lambdas to process millions of complex derivative messages per second: big data in memory.
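To make the allocation-reduction idea concrete, here is a hedged sketch using only the JDK. The 24-byte record layout and field names are my own invention, not the talk's actual FpML encoding: it shows how fixed-offset binary records in an off-heap buffer can be written and queried (here with a Java 8 lambda) without allocating one object per message.

```java
import java.nio.ByteBuffer;
import java.util.stream.IntStream;

public class BinaryTradeDemo {
    // Fixed offsets within one 24-byte record (illustrative layout).
    static final int ID = 0, PRICE = 8, QTY = 16, SIZE = 24;

    static void write(ByteBuffer buf, int slot, long id, double price, int qty) {
        int base = slot * SIZE;
        buf.putLong(base + ID, id);
        buf.putDouble(base + PRICE, price);
        buf.putInt(base + QTY, qty);
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // Off-heap buffer: one million records, zero per-message objects, no GC pressure.
        ByteBuffer buf = ByteBuffer.allocateDirect(n * SIZE);
        for (int i = 0; i < n; i++) write(buf, i, i, 100.0 + i, 10);

        // Process every record in place with a lambda: fields are read by
        // offset, so nothing is deserialized into an object graph.
        double notional = IntStream.range(0, n)
            .mapToDouble(i -> buf.getDouble(i * SIZE + PRICE) * buf.getInt(i * SIZE + QTY))
            .sum();

        System.out.println(notional > 0); // prints "true"
    }
}
```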
by Peter Lawrey
Many infrastructure solutions emphasise horizontal scalability as a means of achieving high throughput. This is built on the assumption that there are high degrees of parallelism inherent in the problem you are trying to solve. In finance, from trading systems to risk management, this is often not the case. Instead you need a system which performs fewer operations concurrently, but executes each one very fast.
Why is low latency the solution to high throughput? How do we achieve it? And conversely, when can high throughput be used to achieve low latency?
Financial organizations typically rely on tools such as Word, Excel or form-based applications for domain experts to maintain the knowledge that represents the core business value of the organization (data structures, rules, calculations, formulas). This creates friction and manual work because these documents are hard to check or translate into executable software. The underlying reason is that the documents are not backed by well-defined structures. Language workbenches solve this problem by combining structured documents and proven IDE features (code completion, semantic checks, refactoring, debugging, testing) with notations commonly found in financial applications such as tables, mathematical formulas, box-and-line diagrams, semi-structured text as well as programming language-like textual syntax. In this session I explain what language workbenches are and how they can help in the context of financial applications. I rely on example systems from several real-world domains, built exclusively on Open Source software.
Just the mention of the word “modelling” brings back horrible memories of analysis paralysis for many software developers of a certain age. But in their haste to adopt agile approaches, countless software teams have thrown out the modelling baby with the process bathwater. In extreme cases, this leads to nobody really understanding the architecture and, as a result, the team creating a system that really is the stereotypical “big ball of mud”. In this session, Eoin will draw on long experience of trying to make models useful to teams, and present a set of principles and ideas that he and Simon Brown have developed to help you avoid these extremes on your project. He will discuss the use of models, sketches, and everything in between, providing you with some real-world advice on how even a little modelling can help you avoid chaos.
As somebody who has built various applications in the finance industry, I am often asked: do you use NoSQL or do you use SQL? There is no right or wrong answer to this question. I will cover which technology should be used, and when. Topics will include data size, replication, access speed, and so on.
Functional programming offers a stateless approach to managing data efficiently, parallelism for highly scalable processing, and flexible component (microservices) design. These features enable developers to deliver the applications that modern financial markets require to maintain a competitive advantage.
We will discuss how Clojure is an effective choice for functional programming:
– a simple syntax that’s easy to learn
– managing data is core to the language
– built-in persistent data structures that maintain a stateless approach
– software transactional memory for managing state changes seamlessly and safely
– modern build automation & development tooling
– interoperability with the Java platform & your existing applications
Clojure is extensible via macros, allowing the growth of a wide range of libraries to increase application development speed further.
by Sam Adams
Building a financial exchange like the one that LMAX Exchange runs is a unique challenge. Customers demand low and predictable latency at ever-increasing volumes. Over the past four years the exchange has grown into a system that regularly spikes beyond 10K transactions/sec, with end-to-end latencies of just a few hundred microseconds. In this talk we will give an overview of the architecture that delivers this: how we’ve built a high-throughput, low-latency, scalable, resilient, and reliable system, all in plain old Java.
In this talk we will show how we improved the transactional throughput of querying complex XML documents from 400,000 TPS to 20,000,000 TPS. The documents in question were typical 7.5K FpML derivative trades. The talk will describe how we reorganized the content of the XML documents into a compact, computationally useful format of less than 400 bytes. This compact format allowed us to make better use of the hardware, and it provided many more tangible benefits which will also be described in this talk.
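The talk's actual encoding is not public, so the following is only an illustrative toy: it invents a tiny fixed-offset layout for three fields of a hypothetical trade document, to show why a compact binary form is far cheaper to query than re-parsing the XML on every access.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompactTradeDemo {
    public static void main(String[] args) {
        String xml = "<trade><id>42</id><notional>1000000.0</notional>"
                   + "<currency>EUR</currency></trade>";

        // Encode once: three fields in 15 bytes vs 80 bytes of XML text.
        ByteBuffer rec = ByteBuffer.allocate(15);
        rec.putLong(42L);                 // offset 0: trade id
        rec.putFloat(1_000_000.0f);       // offset 8: notional
        rec.put("EUR".getBytes(StandardCharsets.US_ASCII)); // offset 12: currency

        // Query by direct offset: no XML parsing, no intermediate objects.
        long id = rec.getLong(0);
        float notional = rec.getFloat(8);

        System.out.println(xml.length() + " bytes -> " + rec.capacity() + " bytes");
        System.out.println(id == 42 && notional == 1_000_000.0f); // prints "true"
    }
}
```

A real encoding would also need a schema or dictionary to map field names to offsets, which is part of what makes such formats "computationally useful" rather than just small.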
My Boss: I suggest you sit down.
I have a new role for you. We have a regulatory reporting department that currently consists of one developer and a couple of large, ailing systems. We have to ramp this up into a department capable of acting as the hub for dozens of systems across the bank, so that we can report their trading activities to regulators around the world. You have less than a year before the regulations come into force!
Me: I think I need to lie down.
How do you start to create a department that will expand, in its first year, to six teams consisting of over 30 developers? More importantly, how do you adopt and maintain XP practices such as TDD, Continuous Integration, and Pair Programming at this scale? This is an account of our successes and failures during that year.
28th–29th April 2015