by Yehuda Katz
SproutCore has evolved over the past five years into an extremely high-performance framework focused on making it possible to build native-like applications in the browser.
This means handling problems like working with extremely large data sets, inconsistent connectivity, and complex DOMs. Lately, it has also meant figuring out how to properly use new browser features, such as hardware acceleration, that can make a big difference to perceived performance.
In this talk, Yehuda will cover some of the techniques that SproutCore has used historically to enable extremely complex applications to perform well in the browser, as well as what new technologies the team is looking at to leverage the latest browser technologies in building compelling content for the web.
This keynote will be a whimsical whirlwind tour through the evolution of a “career” in web operations.
by Douglas Crockford
There are lies, damned lies, and benchmarks. Tuning language processor performance to benchmarks can have the unintended consequence of encouraging bad programming practices. This is the true story of developing a new benchmark with the intention of encouraging good practices.
by John Rauser
Modern monitoring software makes it easy to plot a statistic like average latency every minute—too easy. Fancy dashboards of time series plots often lull us into a false sense of security. Underneath every point on those plots is a distribution, and underneath that distribution is a series of individuals: your customers. If you don’t take the time to look deeply at your data, you don’t truly understand your business.
by Manny Gonzalez and Vik Chaudhary
Keynote demonstrates how you can improve the end-user experience of your latest smartphone apps. This year at Velocity, Keynote debuts Mobile Device Perspective 5.0 (MDP), a cloud-based testing and monitoring platform for ensuring the end-to-end quality of iPhone, Android, and BlackBerry mobile apps accessing online content, streaming video, music, and games. Learn how MDP’s mobile monitoring capabilities are designed for testing and optimizing smartphone access with gestures and touch events, using real mobile devices connected to the latest 3G and 4G networks in multiple mobile markets across the globe.
by Mark Burgess
The key challenges for infrastructure designers and maintainers today are scale, speed, and complexity. Mark Burgess was one of the first people to look for ways of managing these issues based on theoretical analysis. Much of his work has gone into the highly successful CFEngine software, which is still very much a leading light in the industry. In this session, Mark will ask whether we have yet learned the lessons of infrastructure management and, either way, what must come next.
by Tim O'Reilly
Tim O’Reilly shares his insights into the world of emerging technology, presenting his take on what matters most – and what will be most disruptive – to the tech community.
As web applications continue to become more interactive and sophisticated, real-time messaging and updates are becoming increasingly prevalent. One of the hottest new APIs in HTML5 is WebSocket, which enables true duplex communication without the overhead, complexity, and extraneous latency of HTTP-based solutions. In this talk, we will see how WebSocket removes these barriers to enable optimal real-time delivery of messages from servers to desktop and mobile web browsers. Although WebSocket is an exciting new API, we will also see how we can easily fall back to HTTP-based techniques when WebSocket is not available, using Dojo’s Socket API.

The server side is equally important, and real-time messaging has pushed the need for asynchronous I/O in the server. We will look at how to create scalable real-time applications on the Node.js platform, which is perfectly suited to Comet, using the Tunguska library. The presentation will cover the use of streaming abstractions to minimize buffering. We will also consider the performance implications of topic-based publish-subscribe distribution versus filtering techniques.
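The fallback idea above can be sketched in a few lines. Note this is an illustrative example only, not Dojo’s actual Socket API: the `chooseTransport` and `connect` names here are hypothetical.

```javascript
// Illustrative sketch: prefer WebSocket, fall back to HTTP long-polling.
// Feature-detect against the environment passed in (e.g. `window`).
function chooseTransport(global) {
  return typeof global.WebSocket !== "undefined" ? "websocket" : "long-poll";
}

function connect(global, url, onMessage) {
  if (chooseTransport(global) === "websocket") {
    // True duplex channel: one connection, messages pushed by the server.
    var ws = new global.WebSocket(url.replace(/^http/, "ws"));
    ws.onmessage = function (event) { onMessage(event.data); };
    return ws;
  }
  // Fallback: long-polling over HTTP. Hold a request open until the
  // server has data, deliver it, then immediately re-issue the request.
  function poll() {
    var xhr = new global.XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = function () { onMessage(xhr.responseText); poll(); };
    xhr.send();
  }
  poll();
}
```

The point of wrapping both transports behind one `connect` call is that application code stays identical whether messages arrive over a single duplex socket or a series of held-open HTTP requests.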
SPDY was proposed by Google back in November 2009 to reduce the latency and load time of web pages. It was provided as part of the Chromium open-source project and is enabled in Chrome by default.
We at Cotendo took on the challenge, implemented the server side, and extended our proxies to support SPDY, providing SPDY-to-HTTP “translation”. Guess what? It really speeds things up. But like all good new things, there is still work to do. We will share insights from our implementation and our optimization of SSL-based traffic, and present performance data from both Google’s and our customers’ deployments.
We believe the introduction of SPDY as a new application layer presents a unique opportunity to rethink web design concepts and front-end optimization (FEO) techniques. We will discuss some optimizations we developed and suggest guidelines on how you can approach these new types of optimizations.
by Lew Cirne
New Relic’s multitenant, SaaS web application monitoring service collects and persists over 90,000 metrics every second on a sustained basis, while still delivering an average page load time of 1.5 seconds. In this session I will discuss how good architecture and good tools can help you handle an extremely large amount of data while still providing extremely fast service. I’ll show you how we scale to support customer growth, how we monitor our system, and what traps to look out for.
You should come away from this session with an understanding of how to:
ImmobilienScout24 is Germany’s leading real estate listing portal. We run more than 700 VMs hosting more than 100 services for operations, quality assurance, and development, based on Red Hat Linux, Java, Tomcat, Oracle/MySQL, and the other usual open-source web solutions.
We are currently in the process of packaging our entire software stack as RPM packages, and we even deploy configurations through RPMs.
We found that this approach helped us a great deal in working together with our developers to build our services. Unlike before, the developers are now fully involved in all operational decisions about their applications and actually build their software RPMs themselves through automated build tools.
The integration between configuration, application stack and base operating system brought many additional benefits for provisioning, automated testing, auditing and others.
Having learned about other organizations that use packages for deployment, we would like to use the Velocity Conference as an opportunity to talk about package deployment and give those who do not choose recipe-based deployment tools like Chef and Puppet a place to talk to others doing the same.
A possible result of this BoF could be a collection of best practice approaches to package deployment.
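As a concrete illustration of the approach described here, a configuration RPM can be as small as a spec file that installs a single file. The package and file names below are made-up examples, not ImmobilienScout24’s actual packages:

```spec
Name:      example-httpd-config
Version:   1.0
Release:   1
Summary:   Example: ship a service configuration file as an RPM
License:   MIT
BuildArch: noarch
Source0:   example.conf

%description
Hypothetical configuration package: installing or upgrading it deploys
/etc/httpd/conf.d/example.conf through the normal RPM workflow, giving
you versioning, rollback, and auditing (rpm -V) for free.

%install
mkdir -p %{buildroot}/etc/httpd/conf.d
install -m 0644 %{SOURCE0} %{buildroot}/etc/httpd/conf.d/example.conf

%files
%config(noreplace) /etc/httpd/conf.d/example.conf
```

The `%config(noreplace)` directive is what makes this workable in practice: local edits survive an upgrade as `.rpmnew` files instead of being silently overwritten.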
14th–16th June 2011