Thursday 22nd March, 2012
2:50pm to 3:05pm
High performance computing has traditionally been the most demanding area of computing, pushing the edge of what is possible. Much of the pioneering work done in these circumstances ultimately filters down to more pervasive and affordable applications. In this fireside chat we talk to a scientist from Los Alamos National Laboratory about how they reduce the time and cost of reaching critical insights by reducing the distance and latency between compute and memory. The conversation will look at successes in current deployments and what this signals for computer architectures for big data problems in the long term.
Official Twitter account of insideHPC.com, with HPC news from HPC people, for HPC people.