In the beginning there was Nothing, and then he said: let there BeeScala. A zoom-in/zoom-out journey through how this project was brought to life.
Today, there exists a gap between high-level distributed computing frameworks and low-level distributed programming models. On one side of the spectrum, we have high-level frameworks such as MapReduce, Spark, distributed file systems and databases, and peer-to-peer networks. On the other side, we have low-level distributed programming models, such as remote procedure calls (RPCs) and actors, which are the basis for building distributed systems. There does not seem to be a strong middle ground: a set of reusable intermediate components is missing. High-level frameworks are complex systems, built from low-level primitives over countless engineering hours, and that effort is repeated every time a new distributed system is created.
In the roughly 30 years since the actor model appeared, the gap between high-level and low-level distributed computing has not significantly narrowed. While sequential programmers today build their programs from iterators, monads, zippers, generic collection frameworks, parser combinators, I/O libraries, and UI toolkits, distributed systems engineers still think in terms of low-level RPCs and message passing. While sequential programming paradigms realized the importance of structured programming and high-level abstractions long ago, distributed computing has still not moved far from message passing, its own assembly language. The underlying cause of this situation is the following: existing low-level distributed programming models expose primitives that do not compose well.
In this talk, I present the recently proposed reactor programming model. I will focus on its main strengths, modularity and composability, and show how to build reusable message protocols and a distributed computing stack from a handful of simple but powerful programming primitives. I will demonstrate that these primitives serve as a powerful foundation for the next generation of distributed computing.
The goal of this presentation is to give a quick introduction to Slick 3.x. A lot has changed since version 2.x, so even if you are familiar with the previous version, it may still be useful to take a look at what is different. The presentation is meant to be pragmatic: after going through it (together with the code samples), you should be able to start using Slick in your project with no problems. We will focus on how the basics of Slick work and how you can build relevant queries, operations and patterns, rather than on Slick internals.
This talk will first touch on a few historic bugs, and how various QA techniques might have helped avoid them. Afterwards there will be a short overview of Scala static analysis tools, along with tips on how to configure and include them in your development process, and how you can help improve these tools in the future.
We will start from the very basics and learn how the Akka actor model applies to business logic, software infrastructure, and managing UIs. At the end, we will take a look at some of the features under development and what we are trying to achieve.
by Rok Piltaver
In this talk I will show how to use Apache Spark and Scala to implement scalable data processing applications. Concepts will be illustrated with the following use case: analyzing user interactions with 150M mobile ads per day. We will also discuss how object-oriented and functional programming guide developers toward writing software that is easy to maintain and makes it quick to add new features.
The Scala language and its environment have been evolving quite significantly over the past few years. Adoption of the language is slowly growing, and it can now even be found in use in rather conservative enterprise settings. At the same time there have been quite a few criticisms of the language, its ecosystem and its practicability in larger teams. Many developers still avoid taking a more serious look at Scala and its ecosystem for a variety of reasons, ranging from concerns about tooling support to the apprehension of advanced category-theory principles. This talk is a reflection upon six years of working professionally with Scala in projects of various sizes and shapes. It aims to convey some of the lessons and practical insights gained during that time, as well as to debunk some of the many preconceptions that surround the language and its ecosystem.
Being able to monitor your application’s behavior is nice; knowing that everything is being measured and reported somewhere makes you feel like you are doing the right thing. But are you? Simply measuring everything like there is no tomorrow does no good unless you actually analyze that data! In this talk we will learn how to interpret the metrics data collected by Kamon and how to apply this knowledge when troubleshooting real-world performance problems.
by Holden Karau
This talk will start with a quick introduction to the two different building blocks of distributed computing in Apache Spark, along with their relative performance differences. It will cover the performance impact of Datasets, which are becoming the core building block of much of Apache Spark starting with Spark 2.0, as well as considerations for the RDD API. The talk will finish by exploring the new structured streaming API. Prior knowledge of Spark isn’t required, but a background with Spark will make it more exciting.
by Adam Warski
Almost all web & mobile applications need some kind of *session support*: after logging in, state should be maintained that allows the server to identify the user during subsequent requests in a *secure* way, so that the data cannot be tampered with.
`akka-http` is a great toolkit for building reactive mobile/web backends, using an elegant DSL; `akka-http-session` builds on top of that to provide secure session management.
We’ll discuss how session storage can be implemented, what the security challenges are (with an emphasis on cookies), and what kind of solutions `akka-http-session` provides. We’ll also give a quick introduction to `JWT` (JSON Web Tokens), one of the supported formats for encoding session data.
Finally, no presentation can be complete without a **live demo** showing what using `akka-http-session` looks like in practice.
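To make the tamper-proofing idea above concrete, here is a minimal plain-Scala sketch of an HMAC-signed session value, the core mechanism behind client-side session cookies and JWTs. This is not `akka-http-session`’s actual API; all names (`sign`, `encode`, `decode`, the secret) are hypothetical illustration, using only the JDK’s `javax.crypto`.

```scala
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.util.Base64

// Hypothetical helper names; akka-http-session's real API differs.
// The core idea: the cookie carries the payload plus an HMAC, so the
// server can detect tampering without storing the session itself.
val serverSecret = "change-me-32-bytes-of-entropy!!!".getBytes("UTF-8")

def sign(data: String): String = {
  val mac = Mac.getInstance("HmacSHA256")
  mac.init(new SecretKeySpec(serverSecret, "HmacSHA256"))
  Base64.getUrlEncoder.withoutPadding().encodeToString(mac.doFinal(data.getBytes("UTF-8")))
}

// Cookie value: base64url payload and signature, separated by a dot (as in JWT).
def encode(sessionData: String): String =
  Base64.getUrlEncoder.withoutPadding().encodeToString(sessionData.getBytes("UTF-8")) +
    "." + sign(sessionData)

def decode(cookie: String): Option[String] = cookie.split('.') match {
  case Array(payload, sig) =>
    val data = new String(Base64.getUrlDecoder.decode(payload), "UTF-8")
    if (sign(data) == sig) Some(data) else None // reject tampered cookies
  case _ => None
}
```

A real implementation would also use constant-time signature comparison and add an expiry timestamp to the payload; the sketch omits both for brevity.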
Event Sourcing (and CQRS) has become a hot topic. But what does it really mean, why should we care, and what new possibilities does it open up for us? In this session we will introduce you to the main principles of CQRS and Event Sourcing. You will learn how to model your domain in terms of Commands and Events and how to build reactive applications in Scala using Fun.CQRS and its reactive Akka backend.
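The Command/Event modeling mentioned above can be sketched in a few lines of plain Scala. This is not Fun.CQRS’s actual API; the `Account` domain and method names are hypothetical illustration of the pattern: commands are validated against current state, events are immutable facts, and state is a fold over the event log.

```scala
// Commands express intent; they may be rejected.
sealed trait Command
case class Deposit(amount: Int)  extends Command
case class Withdraw(amount: Int) extends Command

// Events are facts that already happened; they are never rejected.
sealed trait Event
case class Deposited(amount: Int) extends Event
case class Withdrawn(amount: Int) extends Event

case class Account(balance: Int) {
  // Decide: a command either produces events or fails validation.
  def handle(cmd: Command): Either[String, List[Event]] = cmd match {
    case Deposit(a)                  => Right(List(Deposited(a)))
    case Withdraw(a) if a <= balance => Right(List(Withdrawn(a)))
    case Withdraw(_)                 => Left("insufficient funds")
  }
  // Apply: events are pure state transitions.
  def apply(e: Event): Account = e match {
    case Deposited(a) => copy(balance = balance + a)
    case Withdrawn(a) => copy(balance = balance - a)
  }
}

// Replaying the log reconstructs the state at any point in time.
def replay(events: List[Event]): Account =
  events.foldLeft(Account(0))((acc, e) => acc.apply(e))
```

The separation between `handle` and `apply` is what makes replay safe: validation runs only when a command first arrives, while replay just re-applies facts.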
by Pawel Dolega
In this talk I will introduce you to the concept of a finite-state machine. Why is it worth using? Because it allows the developer to design and code a process manager in a very simple and expressive way. We will see a real-life example of a business process implemented with it. We will also make the process fail-proof by using persistence. All of it will be done using Akka Persistence.
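The finite-state machine concept itself fits in a few lines of plain Scala. The talk uses Akka FSM and Akka Persistence; the order-processing states and messages below are a hypothetical toy example, showing only the core idea that every (state, message) pair maps to a well-defined next state.

```scala
// States of a toy order process (hypothetical example).
sealed trait State
case object AwaitingPayment extends State
case object Paid            extends State
case object Shipped         extends State

// Messages that may drive the process forward.
sealed trait Msg
case object PaymentReceived extends Msg
case object ShipOrder       extends Msg

// Only the listed transitions are legal; any other message
// leaves the state unchanged instead of corrupting the process.
def transition(state: State, msg: Msg): State = (state, msg) match {
  case (AwaitingPayment, PaymentReceived) => Paid
  case (Paid, ShipOrder)                  => Shipped
  case (s, _)                             => s
}
```

Because `transition` is a pure function, persisting the incoming messages and re-folding them over the initial state is enough to recover the machine after a crash, which is essentially what Akka Persistence automates.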
Evolutionary algorithms open windows onto where machines and biology meet. In this talk we’ll explore how evolutionary algorithms mimic and borrow from the way Mother Nature solves problems, on the road from solving puzzles, through the social sciences, to designing new kinds of satellite antennas. We’ll see how we can use plain Scala to code evolutionary algorithms, and look at the existing libraries that can help us save some time.
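As a taste of the “plain Scala” approach, here is a minimal evolutionary algorithm: selection, crossover and mutation evolving bit vectors toward all-ones (the classic OneMax toy problem). The population size, mutation rate and generation count are arbitrary illustrative choices, not anything from the talk.

```scala
import scala.util.Random

val rnd = new Random(42) // fixed seed for reproducibility

// Fitness: count of 1-bits; the optimum is the all-ones vector.
def fitness(ind: Vector[Int]): Int = ind.sum

// Mutation: flip each bit with small probability.
def mutate(ind: Vector[Int]): Vector[Int] =
  ind.map(bit => if (rnd.nextDouble() < 0.05) 1 - bit else bit)

// One-point crossover of two parents.
def crossover(a: Vector[Int], b: Vector[Int]): Vector[Int] = {
  val cut = rnd.nextInt(a.length)
  a.take(cut) ++ b.drop(cut)
}

def evolve(pop: Vector[Vector[Int]], generations: Int): Vector[Int] =
  if (generations == 0) pop.maxBy(fitness)
  else {
    // Truncation selection: the fitter half becomes the parent pool.
    val parents = pop.sortBy(ind => -fitness(ind)).take(pop.length / 2)
    val children = Vector.fill(pop.length) {
      mutate(crossover(parents(rnd.nextInt(parents.length)),
                       parents(rnd.nextInt(parents.length))))
    }
    evolve(children, generations - 1)
  }

val initial = Vector.fill(30)(Vector.fill(20)(rnd.nextInt(2)))
val best = evolve(initial, 50)
```

Swapping in a different `fitness` function is all it takes to point the same loop at a different problem, which is exactly what makes the technique so broadly applicable.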
Spark SQL is now the de-facto driving force behind Apache Spark 2.0’s success. It comes with enough cool features to keep you busy for a few days, and has made Spark MLlib even more pleasant to use. In Spark 2.0, Spark SQL comes with Datasets, encoders, and logical and physical plans. They are the front ends to lower-level components, the Catalyst optimizer and Tungsten, which are meant to make your queries faster. During this presentation you will find out how your structured queries end up as Datasets, the difference between Datasets, DataFrames and RDDs, and finally how Spark SQL’s Catalyst optimizer can make your queries faster when they are properly structured.
by Jan Machacek
Jan will talk about microservice messaging patterns applied, with an example in Scala, Kafka, Cassandra and Deeplearning4j. We will build a system that turns tweeted images into stories, using CNNs and RNNs. We will have a distributed domain in Akka / Scala with storage in Apache Cassandra and computer vision components in Deeplearning4j, all connected with Kafka.
The talk will show the architectural and code smells that result from a half-hearted reactive implementation, and ways to address them.
This talk will be an all-encompassing tour of ScalaCheck. I’ll start with a brief introduction for those who have never used the tool before. I’ll then illustrate some interesting ways to design properties, to make sure you get the most out of the library, showing how it differs from other unit testing frameworks like JUnit, Specs2 or ScalaTest. I’ll also talk about how ScalaCheck integrates with other libraries, specifically some from the Typelevel suite, and I’ll finish by introducing a new library to help ScalaCheck work with dates and times, and show some techniques for working with that. By the end of my talk you’ll definitely have all the ammunition you need to be using ScalaCheck from the outset on your current project!
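For readers new to the idea: a ScalaCheck property is a claim checked against many randomly generated inputs. The hand-rolled checker below is not ScalaCheck’s real API (that would be `Gen` and `Prop.forAll`); it is a plain-Scala sketch of the concept only, with all names hypothetical.

```scala
import scala.util.Random

// A toy property checker: run `prop` on `trials` randomly generated
// lists and report whether it held every time. ScalaCheck's real
// forAll also shrinks counterexamples, which this sketch omits.
def forAll(trials: Int)(gen: Random => List[Int])(prop: List[Int] => Boolean): Boolean = {
  val rnd = new Random(0) // fixed seed for reproducibility
  (1 to trials).forall(_ => prop(gen(rnd)))
}

// Generator: lists of up to 19 small ints.
def listGen(rnd: Random): List[Int] =
  List.fill(rnd.nextInt(20))(rnd.nextInt(100))

// "Reverse twice is the identity" holds for every generated list...
val ok = forAll(100)(listGen)(xs => xs.reverse.reverse == xs)

// ...whereas a wrong claim like "reverse is the identity" is falsified
// as soon as a non-palindromic list is generated.
val refuted = forAll(100)(listGen)(xs => xs.reverse == xs)
```

This is the key contrast with example-based frameworks like JUnit: instead of asserting on a handful of hand-picked inputs, you state a law and let the generator hunt for a counterexample.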
25th–26th November 2016