How does Elasticsearch work in a resilient and performant way?
This talk is a deep dive into its internals:
* What are the different node types?
* How is the master node selected?
* How are indexes spread over shards and how are they allocated?
* What is the replication protocol?
* How is data actually written and queried?
* What happens during a failover, or when new nodes join the cluster?
* How do snapshots work in the background?
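To give a flavor of one of these topics, here is a minimal sketch of how a document is routed to a primary shard. Elasticsearch hashes the document's `_routing` value (its `_id` by default) and takes it modulo the number of primary shards; the real implementation uses Murmur3, and `md5` here is just an illustrative stand-in.

```python
import hashlib

def route_to_shard(routing_value: str, num_primary_shards: int) -> int:
    """Map a document's routing value (by default its _id) to a shard number.

    Illustrative sketch: Elasticsearch actually uses Murmur3, not md5.
    """
    h = int.from_bytes(hashlib.md5(routing_value.encode()).digest()[:4], "big")
    return h % num_primary_shards

# The same routing value always maps to the same shard, which is why the
# number of primary shards is fixed once an index has been created.
shards = [route_to_shard(f"doc-{i}", 5) for i in range(10)]
print(shards)
```

Because the mapping is a pure function of the routing value and the shard count, changing the number of primary shards would reshuffle every document, so resizing an index requires a reindex or split/shrink operation.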
While we will focus on the current implementation, we will also dedicate some time to mistakes of the past: what went wrong, and how did we fix it?
This talk gives an in-depth overview of distributed systems and their implementation in Elasticsearch.