by Alex Snaps
Whether in the cloud or on your laptop, a common attribute of computer systems is “more”: more storage, more memory, more cores and, last but not least, more data to store and process. In fact, according to IDC, the total amount of digital information in the world has been growing at a 60% compound annual growth rate, roughly a tenfold increase every five years. It is a sure thing that your application will have to deal with ever-increasing amounts of data. But how, and at what cost?
Moving data from storage to the processing unit and back again is costly, not only from disk to memory but also from memory to the CPU's registers. Whether in big enterprise apps or apps on your phone, engineers avoid too many round trips between storage and processing units by keeping data close to where it is needed: in caches. Your hard drive uses a cache; CPUs have two, three or more levels of caching to minimize access to RAM. Enterprise applications are no different, using caches and pools to minimize costly operations.
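The idea of keeping hot data close to where it is needed can be sketched in a few lines of plain Java. The class below is an illustrative least-recently-used (LRU) cache built on `LinkedHashMap`, not Ehcache itself: once capacity is exceeded, the entry that has gone longest without being accessed is evicted, so frequently used data stays in memory.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: evicts the least-recently-accessed entry
// once capacity is exceeded, keeping "hot" data close to the consumer.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        // accessOrder = true: iteration order reflects recency of access,
        // so the eldest entry is the least recently used one.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

For example, with a capacity of two, putting `a` and `b`, then reading `a`, then putting `c` evicts `b`: the read refreshed `a`'s recency, leaving `b` as the eldest entry. Production caches such as Ehcache add much more on top of this (TTL/TTI expiry, disk overflow, statistics, clustering), but the eviction principle is the same.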
With Terracotta’s Enterprise Ehcache, you can snap in caching to speed up your application 10x, scale up to effectively use all the memory on your servers without GC limitations, and scale out from one node to thousands, even to the cloud, without rewriting code or compromising performance or reliability. This session will dive deep into how Terracotta’s enterprise suite of performance and scalability technologies works, how we have met our customers’ stringent performance and SLA demands in the face of terabytes of data, and how we’ve fed these best practices back into our open source projects for your application’s benefit.
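"Snapping in" caching without rewriting code is largely a matter of declarative configuration in Ehcache 2.x: caches are described in an `ehcache.xml` file rather than in application logic. A hedged sketch of such a file follows; the cache name and sizing values are illustrative, not taken from the session.

```xml
<ehcache>
  <!-- Illustrative cache: up to 10,000 entries in memory,
       each expiring five minutes after creation. -->
  <cache name="productCache"
         maxElementsInMemory="10000"
         eternal="false"
         timeToLiveSeconds="300"
         overflowToDisk="false"/>
</ehcache>
```

Because the cache is defined here rather than in code, tuning it for a bigger heap, or clustering it across nodes with Terracotta, is a configuration change rather than an application change.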
9th June 2011