by Rich Hickey
Imperative and OO developers are hearing more and more about functional programming and are often still left wondering what the fuss is all about. One of the least articulated benefits of functional programming is often the one most prized: the pleasure and sanity of programming with values, which can be difficult to appreciate without first-hand experience. This talk will discuss value-oriented programming and its antithesis, place-oriented programming, examining the benefits and costs of each, in the small and in the large. Along the way we'll discuss the beauty of garbage, and the relationship between space and time.
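To make the distinction concrete, here is a minimal Java sketch (an illustration of the idea, not code from the talk): a value-oriented type whose "updates" produce new values instead of mutating a place, so instances can be shared freely and compared by what they are rather than where they live.

```java
// Illustrative only: a small immutable value type in Java.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // "Updating" a value returns a new value; the original is never mutated.
    Point withX(int newX) { return new Point(newX, y); }

    // Equality is defined by content, not by identity (place).
    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
    @Override public int hashCode() { return 31 * x + y; }
}

public class Values {
    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = a.withX(5);                          // a is untouched
        System.out.println(a.x);                       // 1
        System.out.println(b.x);                       // 5
        System.out.println(a.equals(new Point(1, 2))); // true: value equality
    }
}
```

The superseded `a` becomes garbage once unreferenced, which is where the "beauty of garbage" comes in: values you no longer need simply disappear.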
Modern browsers are prime examples of using GPUs to accelerate computationally intensive tasks. But how do they actually do it? This talk sheds some more light on browser interactions with the GPU and explains what happens behind the scenes, covering the acceleration of primitive drawing, the use of a tiled backing store, and composited layers.
by Paul Weiss
People have been talking about "scaling" and "distributed systems" a great deal lately. We need to clarify the meaning of these terms in order to have a worthwhile conversation about them. We will do so by discussing the details of how to actually produce practical, scalable, distributed systems.
That discussion will focus on methods for designing and building robust, fundamentally concurrent distributed systems. These approaches have been learned through building user-facing web applications, data storage and processing systems, and server management tools. We will look at practices that are "common knowledge" but too often forgotten, at lessons that the software industry at large has somehow missed, and at general "good practices" and rules that must be thrown away when moving into a distributed and concurrent world.
In the industry, we often have a static view of software architecture.
Organizations plan a set architecture with the assumption that it will not change significantly. Even when they are savvy, they rarely look beyond the immediate target architecture to imagine how its base assumptions might change in the future. As in any other discipline, we can often get a sense of how change happens by mining history. In this talk, Michael Feathers will outline patterns he has seen in various projects which give a good indication of the points in time when large-scale change should be undertaken.
by Michael Kopp
Do applications that use NoSQL still need performance management? Is it always the best option to throw more hardware at a MapReduce job? In both cases performance management is still about the application, but Big Data technologies have added a new wrinkle. In this session I will explain some of the main application performance problems of Big Data applications and how to solve them.
by Peter Bell
Learn how to leverage NoSQL data stores as diverse as MongoDB and Neo4j quickly and easily using Spring Data. Whether you want to replace a relational database or enhance it for certain classes of queries, Spring Data can make it easier to work with NoSQL data stores in Java. We'll start by categorizing the classes of NoSQL stores and the kinds of use cases for each, and will then look in more detail at the MongoDB and Neo4j integration capabilities available in Spring Data and how and why you might take advantage of them.
by Andrew Elmore
Today's enterprise systems interact with multiple external systems and usually, because the world is like that, they each talk a different language. Even where standards exist (be they SWIFT, FpML, ISO20022 or FIX to name a few) each party has their own particular usage which requires custom handling. Internal collaborating systems are often little better as siloed business units pick data representations that are geared towards their own usage of the information.
How do you handle all these different data formats without polluting your application? Moreover, as data volumes grow, how do you acquire data from a 10GB zipped CSV file containing millions of messages as efficiently as you do individual XML messages read from a queue? Can we make the logic sufficiently consistent that the operations team can monitor and manage both in the same way?
Using a worked example loading data into MongoDB, this session will show how technologies such as Spring Integration, Spring Batch & C24 Integration Objects can be used to address the problems above (and the equivalent outbound ones), along with some practical examples to demonstrate that the obvious solutions are not necessarily the best performing. You will also see how languages such as Scala & Clojure provide a more descriptive way to wire them together.
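As a rough illustration of the streaming side of this problem (a sketch with made-up names, not the session's actual code), the key is to read a compressed file record by record, so that memory use stays constant regardless of file size, just as it would when consuming one message at a time from a queue:

```java
import java.io.*;
import java.nio.file.*;
import java.util.zip.*;

// Illustrative sketch: stream a gzipped CSV record-by-record in constant memory.
public class GzipCsvStream {
    static long countRecords(Path gz) throws IOException {
        long n = 0;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(Files.newInputStream(gz))))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Parse and dispatch one record here; nothing is held in memory
                // beyond the current line, so a 10GB file works the same as 10KB.
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        // Build a small gzipped CSV to demonstrate with.
        Path gz = Files.createTempFile("messages", ".csv.gz");
        try (PrintWriter out = new PrintWriter(
                new GZIPOutputStream(Files.newOutputStream(gz)))) {
            for (int i = 0; i < 1000; i++) out.println(i + ",payload");
        }
        System.out.println(countRecords(gz)); // prints 1000
        Files.delete(gz);
    }
}
```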
by Arun Gupta
This talk will give you an update on Java EE 7 platform development, the latest revision of the Java platform for the enterprise being developed by the Java Community Process, and an update on Project Avatar. The focus of Java EE 7 is on the cloud, and specifically it aims to bring Platform-as-a-Service providers and application developers together so that portable applications can be deployed on any cloud infrastructure and reap all its benefits in terms of scalability, elasticity, multitenancy, etc. The focus of Project Avatar is making HTML5, WebSockets and JSON easier and more fun for Java developers and HTML developers alike.
Of course, Java EE 7 continues the ease of development push that characterized prior releases by bringing further simplification to enterprise development. It also adds new, important APIs such as the REST client API in JAX-RS 2.0, JSON, and the long awaited Concurrency Utilities for Java EE API, and plenty of improvements to other components.
In the challenge to reach the lowest possible latencies, as we push the boundaries of transaction processing, the good old fashioned lock imposes too much contention on our algorithms. This contention results in unpredictable latencies when we context switch into the kernel, and in addition limits throughput as Little's law kicks in. Lock-free and wait-free algorithms can come to the rescue by side-stepping the issues of locks, and when done well can even avoid contention altogether. However, lock-free techniques are not for the faint hearted. Programming with locks is hard. Programming using lock-free techniques is often considered the realm occupied only by technical wizards.
This session aims to take some of the fear out of non-locking techniques. Make no mistake, this is not a subject for beginners, but if you are brave, and enjoy understanding how memory and processors really work, then this session could open your eyes to what is possible if you are willing to dedicate time and effort to this amazing subject area.
The attendees will learn the basics of how modern Intel x86_64 processors work and the memory model they implement that forms the foundation for lock-free programming. Empirical evidence will be presented to illustrate the raw throughput and potential latencies that can be achieved when using these techniques.
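As a taste of the style of code involved, here is a classic lock-free structure, the Treiber stack, sketched in Java (illustrative only; the session's own examples may differ). Threads never block: each one retries a compare-and-swap until it wins the race.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative lock-free (Treiber) stack: progress via CAS retry, no locks.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value; final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }
    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T v) {
        Node<T> h, n;
        do {
            h = head.get();
            n = new Node<>(v, h);
        } while (!head.compareAndSet(h, n)); // retry if another thread won the race
    }

    public T pop() {
        Node<T> h;
        do {
            h = head.get();
            if (h == null) return null;      // empty stack
        } while (!head.compareAndSet(h, h.next));
        return h.value;
    }

    public static void main(String[] args) {
        TreiberStack<Integer> s = new TreiberStack<>();
        s.push(1); s.push(2);
        System.out.println(s.pop()); // 2
        System.out.println(s.pop()); // 1
    }
}
```

Note that the CAS loop is where contention shows up: under heavy load, threads burn cycles retrying rather than sleeping in the kernel, trading CPU for predictable latency.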
by Abdul Dakkak
Over the past decade GPU hardware and programming languages have become powerful and expressive enough to be useful for general computation. Still, GPU programs are written in a C dialect, and, while that achieves great performance, it can sometimes hinder productivity. To make GPU programming simpler, both research and industry have provided foreign function interfaces for GPU programs, the ability to overload programming language operators to run on the GPU, or DSLs that can be translated into GPU code. In this talk we will examine the state of the art of GPU computing and research, how GPUs are being used with high-level languages today, and what the future of GPU programs will be.
by Steve Ross-Talbot
Our ability to reason about things is entirely based on our understanding of what is described. It might be a UML diagram, an ER diagram, a class diagram, a BPMN diagram or simply a diagrammatic representation of a system, the architecture, rendered in Visio. The problem we have with all of these is rooted in the lack of formal semantics and of formal relationships that connect them. More formally, we really want to understand how a requirement is described and how it is met or not met. We want to understand what the possible cost of meeting a requirement will be. Today this is largely in the domain of magic, hubris and, if we are lucky, good judgement. But is that really enough to drive change in an enterprise?
In this session we shall examine what is enough, and what enough might mean in practical terms and in terms of the benefits it might yield. In so doing we shall present a new software lifecycle, and the tools that support it, called the Zero Deviation Lifecycle. The aim of the ZDLC is to reduce cost, improve quality and be able to answer fundamental questions about linkage, alignment and cost.
by Glenn Block
If I told you that you can build node.js applications in Windows Azure, would you believe me? Come to this session and I'll show you how. You'll see how to take those existing node apps and easily deploy them to Windows Azure from any platform. You'll see how you can make your node apps more robust by leveraging Azure services like storage and service bus, all of which are available in our new "azure" npm module. You'll also see how to take advantage of cool tools like socket.io for WebSockets, node-inspector for debugging and Cloud9 for an awesome online development experience.
In the last couple of years Hadoop has become synonymous with Big Data. This framework is so vast and popular that Microsoft recently announced, for the first time in its history, that it is going to invest in this large-scale, open-source project as its solution for Big Data. In this session we'll learn how Hadoop works on Windows Azure, including an exploration of different storage options, e.g., AVS and S3, how Hadoop on Azure integrates with other cloud services, understanding key scenarios for Hadoop in the Microsoft ecosystem, and discovering Hadoop's role in a cloud environment.
by Robin Zimmermann
HTML5 WebSocket opens up a whole new world of possibilities for real-time communication over the Web. Applications hitherto reserved for the desktop can now be moved to the Web and the Cloud: financial dashboards, news feeds, online auctions, the list is endless.
Since many of those kinds of applications incorporate messaging, this has driven a surge of interest in messaging over the Web using HTML5 WebSocket. JMS has the benefit of being a standard interface with a large following in the enterprise. With the advent of HTML5 WebSocket, JMS now has an opportunity to extend from the enterprise to the Web.
In this session, hear how you can use the JMS API in browser and mobile clients over the Web with similar performance to traditional enterprise desktop applications. See demos of how your Web-based application can participate with your existing messaging infrastructure as a first-class citizen.
In this talk I'll discuss the architectural underpinnings of Tumblr's distributed systems infrastructure. I'll focus primarily on Motherboy, an eventually consistent inbox-style storage system, to highlight some of the most useful lessons learned. Additionally, I'll discuss some of our findings in evaluating various concurrency models, as well as how our choice of Scala as our back-end language played into those decisions. This talk will be targeted towards distributed systems developers but will also have some appeal to operations engineers who are responsible for managing these JVM-based systems. Some key concepts covered will include concurrency constructs available on the JVM, testing and tuning the stack, Scala in a distributed systems context, real-world benchmarking and distributed tracing.
by Darren Wood
Join Objectivity, Inc. in a discussion of the latest trends in Big Data analytics, defining what Big Data is and understanding how to maximize your existing architectures by utilizing NoSQL technologies to improve functionality and provide real-time results. There will be a focus on relationship analytics as well as an introduction to NoSQL data stores, object and graph databases, such as the architecture behind Objectivity/DB and InfiniteGraph.
by Rick Hudson
by Pramodkumar J. Sadalage
Over the life of an application, as requirements change, application usage patterns alter, and load and performance change, the need to change the database and database architecture is inevitable. There are patterns to these changes, such as "Add Read Method", "Migrate Method from Database", "Introduce Read Only Table", etc. In this talk we will discuss ten database architecture refactoring patterns and different implementation techniques.
by Gil Tene
Garbage Collection is an integral part of application behavior on Java platforms, yet it is often misunderstood. As such, it is important for Java developers to understand the actions they can take in selecting and tuning collector mechanisms, as well as in their application architecture choices.
In this presentation, Gil Tene (CTO, Azul Systems) reviews and classifies the various garbage collectors and collection techniques available in JVMs today. Following a quick overview of common garbage collection techniques including generational, parallel, stop-the-world, incremental, concurrent and mostly-concurrent algorithms, he defines terms and metrics common to all collectors. He classifies each major JVM collector's mechanisms and characteristics and discusses the tradeoffs involved in balancing requirements for responsiveness, throughput, space, and available memory across varying scale levels. Gil concludes with some pitfalls, common misconceptions, and "myths" around garbage collection behavior, as well as examples of how some good choices can result in impressive application behaviour.
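One simple way to observe stop-the-world behavior for yourself (an illustrative sketch, not taken from the presentation) is to sleep in fixed intervals and record how badly each wake-up overshoots while another thread generates garbage; large overshoots on an otherwise idle thread are typically collector pauses. This is the same idea behind pause-measurement tools such as jHiccup.

```java
// Illustrative "hiccup meter": measure wake-up overshoot while the heap churns.
public class HiccupMeter {
    // Sleep in 5 ms intervals; return the worst overshoot seen, in milliseconds.
    static long measure(int iterations) throws InterruptedException {
        long maxHiccupMs = 0;
        for (int i = 0; i < iterations; i++) {
            long t0 = System.nanoTime();
            Thread.sleep(5);
            long overshootMs = (System.nanoTime() - t0) / 1_000_000 - 5;
            maxHiccupMs = Math.max(maxHiccupMs, overshootMs);
        }
        return maxHiccupMs;
    }

    public static void main(String[] args) throws InterruptedException {
        // Allocate garbage on a background thread to provoke collections.
        Thread allocator = new Thread(() -> {
            byte[][] refs = new byte[64][];
            for (int i = 0; ; i++) refs[i % 64] = new byte[1 << 16];
        });
        allocator.setDaemon(true);
        allocator.start();
        System.out.println("worst observed hiccup (ms): " + measure(200));
    }
}
```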
by Eric Evans
Once a large software legacy is built up, accumulated design problems and the intrinsic complexity of integration combine to make it more and more difficult to execute clean design. A team may set out to design a new piece of software using a domain model, and at first they are focused on strategically valuable new features. In collaboration with business innovators, they outline a new vision of some part of the domain. They distill a model and design the new functionality.
Of course, the new functionality calls for integration with existing functionality. As the new vision is stuck together with a messy legacy system, lots of expedient compromises are made. Perhaps the new work must also be integrated with an external system and the introduction of these external elements leads to loss of clarity in the model. In response, the team may try to redesign or replace more of the legacy system, and the scope expands, and the work bogs down. There are various ways this may happen, but they lead to the same place. The focus on strategic value is lost, and a fresh and clear approach to the problem is muddied to the point that it has no impact.
Netflix runs 100%* of its operations in Amazon's public cloud, EC2, and so does reddit. Come hear Jeremy Edberg, Netflix's Reliability Architect and formerly reddit's operations manager, tell the story of their transitions from running their own datacenters to EC2, the reasons for the change, and the issues they've had to overcome to make it work.
There is a lot of buzz around using GPUs to solve problems. How do you design and build an application that makes use of parallel processing that until recently only the few have had access to? Join me on the journey from application concept, through design to implementation of an application. Along the way we will be discussing the trade-offs and other design and implementation considerations.
This is a getting-started GPGPU programming session covering application architecture and how to adapt approaches to leverage GPUs. The focus is on application development, design and code.
Apache Hadoop is the current darling of the "Big Data" world. At its core is the MapReduce computing model for decomposing large data-analysis jobs into smaller tasks and distributing those tasks around a cluster. MapReduce itself was pioneered at Google for indexing the Web and other computations over massive data sets.
I'll describe MapReduce and discuss strengths, such as cost-effective scalability, as well as weaknesses, such as its limits for real-time event stream processing and the relative difficulty of writing MapReduce programs. I'll briefly show you how higher-level languages ease the development burden and provide useful abstractions for the developer.
Then I'll discuss emerging alternatives, such as Google's Pregel system for graph processing and event stream processing systems like Storm, as well as the role of higher-level languages in optimizing the productivity of developers. Finally, I'll speculate about the future of Big Data technology.
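For readers new to the model, the MapReduce shape can be sketched in a few lines of plain Java (a toy word count, with no Hadoop on the classpath): map each line to words, shuffle by grouping on the word, and reduce each group by counting.

```java
import java.util.*;
import java.util.stream.*;

// Toy word count illustrating the map / shuffle / reduce phases in plain Java.
public class WordCount {
    static Map<String, Long> mapReduce(List<String> lines) {
        return lines.stream()
                // map: each line -> a stream of words
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                // shuffle + reduce: group identical words, count each group
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = mapReduce(List.of("the quick fox", "the lazy dog"));
        System.out.println(counts.get("the")); // 2
        System.out.println(counts.get("fox")); // 1
    }
}
```

The real framework's value is not this logic but running it in parallel across a cluster, with partitioning, fault tolerance, and data locality handled for you.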
by Chris Pinkham
Beyond debates between private and public, cloud technologies can primarily be used to hide the complexities of the underlying infrastructure and allow a focus on the services required to develop and deliver next generation applications and associated IT services. Web companies have made the most of these new capabilities to greatly accelerate their ability to deliver innovation without any of the red tape associated with traditional IT. This session will explore how any customer can now leverage this developer-friendly model and enhance their ability to better support business needs. Furthermore, new technologies, such as Nimbula's Cloud operating system, will allow customers to leverage public clouds such as Amazon's EC2 to complement their private resources.
by Charlie Hunt
It may or may not come as a surprise, but how one writes Java code can impact latency jitter. This session will offer several tips and tricks on how to write Java that avoids or lessens latency jitter. These are tips you can put into practice immediately to realize results.
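One representative example of this kind of tip (my illustration, not necessarily from Charlie Hunt's session): accumulating into a boxed Long allocates an object on every iteration, feeding the garbage collector and therefore jitter, while a primitive long allocates nothing at all.

```java
// Illustrative: same arithmetic, very different allocation behavior.
public class JitterTip {
    static long sumBoxed(int n) {
        Long total = 0L;                  // each += unboxes, adds, and boxes a new Long
        for (int i = 0; i < n; i++) total += i;
        return total;
    }
    static long sumPrimitive(int n) {
        long total = 0L;                  // no allocation at all
        for (int i = 0; i < n; i++) total += i;
        return total;
    }
    public static void main(String[] args) {
        // Identical results; the boxed version created ~1000 short-lived objects.
        System.out.println(sumBoxed(1_000) == sumPrimitive(1_000)); // true
    }
}
```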
by Mohammad Rezaei
For certain types of problems, there is no better solution than a highly tuned, shared-memory parallel algorithm. This talk will showcase code that achieves concurrency via lock-free algorithms. We'll explore how complex aggregation can be performed using fine-grained coordinated parallelism. To accomplish this goal, we'll discuss advanced data structures, like a brand-new lock-free hashmap with interesting features such as concurrent resizing. By using some of Java's less commonly used APIs, such as AtomicFieldUpdaters, we'll demonstrate how a real-world application can be written to not only work with a dozen cores, but be ready for 1000.
There is a lot of angst about writing concurrent code. The message of this talk is one of courage and hope without leaning on some future language or technology. Writing concurrent code requires a different mindset. The approaches detailed in this talk will hopefully serve as an introduction that will strengthen the audience's resolve to explore multi-threaded programming.
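The field updaters mentioned above live in java.util.concurrent.atomic; a minimal sketch of the idiom (illustrative, not from the talk) shows why they matter at scale: one shared static updater gives CAS semantics on a plain volatile field, avoiding a separate AtomicLong object per instance.

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

// Illustrative: lock-free increments on a plain volatile field via a field updater.
public class Counter {
    private volatile long count;
    private static final AtomicLongFieldUpdater<Counter> COUNT =
            AtomicLongFieldUpdater.newUpdater(Counter.class, "count");

    public void increment() { COUNT.incrementAndGet(this); } // atomic, no lock
    public long get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 10_000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 40000: no increments lost
    }
}
```

With millions of instances, the per-object saving of a wrapper AtomicLong adds up, which is exactly the kind of consideration that matters when targeting a thousand cores.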
18th–22nd June 2012