by Brian Doll
Web applications are being shipped faster and deployed instantly to the cloud, catering to the ever-growing needs of a technologically connected audience.
How do you manage it all? How do you maintain high performance in your application, stay on top of your server performance and ensure your end users are getting the best service you can deliver? Find out with New Relic!
by Tim Morrow
Betfair recently launched a beta version of its Sports betting website and is in the process of rolling out the site to new customers. We would like to present our journey so far and our results. Our talk will explore the initiative in great detail. We will discuss:
- Faster page load times
- Improving our operational insight into the performance of our website
- Significantly increasing our rate of deployment through continuous delivery
- How it impacted our bottom line
by Jon Jenkins
The complexity of website content (as measured by the HTTP Archive) continues to increase. Meanwhile, the number of mobile browsing devices is increasing at more than 25% each year. Due to their limited processing power, memory, storage and network bandwidth, these devices pose new challenges for web performance as sites become more complex. To date, the approach to solving these issues has been to create mobile versions of web sites with limited content and simplified layout. This talk will present a new way we are looking at the mobile browsing challenges at Amazon.com. We will present data about site latency among various classes of devices and usage of mobile versions of the site. Much of the presentation will focus on technical approaches to dramatically reduce web site latency for this class of users.
How do you measure web page performance?
We all talk a lot about how fast our web pages are, but are we comparing apples with apples?
There are at least six different ways to measure web page performance, and multiple metrics to gather, such as initial render, “above the fold” time or onLoad event time.
The goal of this talk is to put web performance measurement in perspective by identifying and classifying the different ways of measuring web performance.
So what dimensions can we compare them on?
1. What metrics do they collect?
2. “Completeness” of measurement – e.g. can they measure third-party or CDN content?
3. Ease of implementation and use?
4. Scalability (e.g. are they suitable for “real-user monitoring” type solutions, or are they more developer/single-user oriented?)
5. Are they suitable for mobile devices?
6. Are they suitable for measuring web APIs?
7. What’s the cost?
8. Are they suitable for SMEs versus enterprises?
We will also talk about the “active monitoring versus real-user monitoring” debate: what are the pros and cons of each approach, and will one eventually supplant the other?
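As a concrete illustration of the onLoad-style metrics mentioned above, here is a minimal sketch using the W3C Navigation Timing API; the `summarizeTiming` helper name is our own, and in a real page you would pass it `window.performance.timing` after the load event has fired.

```javascript
// Derive common page-load metrics from a Navigation Timing record.
// All PerformanceTiming attributes are millisecond epoch timestamps.
function summarizeTiming(t) {
  return {
    // Time to first byte: server response begins arriving.
    firstByte: t.responseStart - t.navigationStart,
    // DOM ready: document parsed, DOMContentLoaded about to fire.
    domReady: t.domContentLoadedEventStart - t.navigationStart,
    // Classic onLoad time: all subresources have been fetched.
    onLoad: t.loadEventStart - t.navigationStart
  };
}

// In a browser, after window's load event:
//   summarizeTiming(window.performance.timing);
```

Note that this captures only one of the dimensions listed above: a real-user monitoring product would also need to beacon these numbers back to a collector and aggregate them at scale.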
1. Reading/writing to the DOM
2. Function calls
3. Lookups (scope lookups, property lookups, array item lookups)
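Each of the three cost centres listed above can be reduced by batching and hoisting. A minimal sketch (the `renderList` function and its names are ours, for illustration only):

```javascript
// 1. DOM reads/writes: build markup off-DOM, then write once.
// 2. Function calls: keep hot loops free of needless wrappers.
// 3. Lookups: hoist repeated property lookups into locals.
function renderList(items) {
  var html = [];
  var n = items.length;          // hoist the .length property lookup
  for (var i = 0; i < n; i++) {
    html.push('<li>' + items[i] + '</li>');
  }
  return html.join('');          // one string for a single DOM write, e.g.
  // document.getElementById('list').innerHTML = renderList(items);
}
```

The single `innerHTML` assignment at the end replaces one DOM write per item, which is typically the dominant cost of the three.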
by Andrew Oates, jmarantz and Matthew Steele
Page Speed analysis tools provide performance metrics for web sites, and mod_pagespeed and Page Speed Service automatically rewrite web sites by applying web performance “best practices”. But exactly what is the impact on web site latency and usability of implementing these transformations? We’ll share what we’ve learned from running Page Speed and mod_pagespeed in the field.
What is the impact of specific web performance optimizations on metrics like page load time and time to first paint? What is the interaction between latency, page structure, and network-level decisions such as when to flush web pages so that browsers can start rendering? In this talk, we’ll present our latest findings and show developers how to apply these findings to their own web sites.
We’ll conclude with a discussion of new features being added to Page Speed analysis & rewriting tools to translate these learnings into a faster web.
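For readers who want to experiment with the rewriting side themselves, a minimal Apache configuration sketch for enabling mod_pagespeed (assuming the module is already installed; the filter selection here is illustrative, not the presenters' recommendation):

```apache
<IfModule pagespeed_module>
    # Turn the rewriter on for this server.
    ModPagespeed on
    # Opt in to a few conservative rewriting filters.
    ModPagespeedEnableFilters combine_css,collapse_whitespace,elide_attributes
</IfModule>
```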
8th–9th November 2011