by Steve Harris
In talking to our users, it is clear that applications are becoming more and more data hungry. According to IDC, data requirements are growing at an annual rate of 60 percent. There is good news, though. Server-class machines purchased this year have a minimum of 8 GB of RAM and likely have 32 GB. Cisco is now selling mainstream UCS boxes with over 380 GB of RAM. Memory has become large and extremely cheap compared to things like developer time and user satisfaction.
Unfortunately, there is a problem as well: for Java/JVM applications, garbage collection (GC) pauses make it an ever-increasing challenge to use all that data and memory.
In this talk I'm going to cover the problems we identified and the technology we built to solve them:
A bit about its history and the history of the problem
The where, when, and why of BigMemory
Throughput, latency, garbage collection, SLA, and scaling characteristics
Configuration of Ehcache with BigMemory in an existing application with just a few lines of config code
Ehcache's tiered storage architecture: the MemoryStore, the OffHeapMemoryStore, and the DiskStore
Ehcache BigMemory with scale-out
Implications for your caching architecture
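To give a flavor of the "few lines of config" point above, a tiered cache definition might look roughly like the sketch below. This is an illustrative assumption based on Ehcache 2.x-era attribute names (the cache name, sizes, and exact attributes here are examples, not the definitive syntax):

```xml
<!-- ehcache.xml sketch: assumes Ehcache 2.x with BigMemory enabled.
     Entries overflow from the on-heap MemoryStore to the off-heap store,
     which keeps the bulk of the data out of the garbage collector's view. -->
<ehcache>
  <cache name="userData"
         maxElementsInMemory="10000"
         overflowToOffHeap="true"
         maxMemoryOffHeap="4g"
         overflowToDisk="true"/>
</ehcache>
```

Off-heap storage is allocated as direct memory, so the JVM's direct-memory ceiling typically needs to be raised to match, e.g. with `-XX:MaxDirectMemorySize=4g`.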
14th–15th October 2010