by Dan Yoder and Carlo Flores
We've all seen load tests of a single "hello world" HTTP server using tools like ab or httperf. But what about load testing real-world Web applications, and testing architectures that go beyond a few processes on a single machine? What about testing elastic "on-demand" architectures that add capacity as load grows? How does testing in the cloud affect your results? At what point does bandwidth become a bottleneck instead of CPU or memory? And what are we really measuring? What is the difference between connections and requests per second? And how do those ultimately relate to infrastructure cost, which is the real bottom line?
We're going to try to answer some of those questions. In building spire.io, we wanted to simulate real use and validate that our cutting-edge architectural decisions were really going to pay off. Along the way, we ran into some significant obstacles and some surprising results. Among other things, we built an easy-to-use node.js HTTP client that addresses a big gap in the Ruby toolkit (a fast, flexible HTTP client) and makes it much easier to generate massive amounts of load. We'll also talk about our work on Dolphin, a soon-to-be-released library for simulation-based load testing. We'll conclude with some candidates for best practices for doing load testing "with a vengeance."
2nd–4th February 2012