We'll take a whirlwind tour through the world of web operations and try to get a handle on both why it is a challenging occupation and how to do it better.
by Andrew Oates
In this lightning demo we’ll cover the newest Page Speed Online features, including waterfall analysis and critical path highlighting. We’ll show you how to use Page Speed Online to analyze what’s in the critical path of the page load or the first paint, and which Page Speed suggestions should be implemented in order to reduce the time spent on that critical path.
Today, web developers have a large number of diagnostic tools available to help debug their web applications, starting with browser-specific debuggers like Firefox's Firebug and WebKit's Web Inspector.
Well, desktop web developers anyway. Most mobile platforms provide little in the way of diagnostic tooling for their browsers, leaving mobile web developers in the lurch.
weinre (WEb INspector REmote) is an open source tool to help bridge the web debugging tool gap in mobile. It repurposes the Web Inspector user interface to allow you to interact with a live mobile web application, from the luxury of a desktop browser window.
Come see a live demo of weinre at this session!
Sure, front-end performance focus is great. But when your site surges in popularity—a foregone conclusion for us all, right?—web scale preparation can avoid a ton of grief. Think you can’t test your site with hundreds of thousands of visitors from all over the world without weeks of sleepless nights coding open source tools? This demo from Robert Castley, Solution Engineer and Technologist at Keynote, shows you how to design, run, and analyze a complete Web load test in 5 minutes, with time to spare for tea and biscuits.
Everybody talks about fast websites, but how fast is exceptionally fast? What prevents us from being fast? How good are we at delivering exceptional performance? What do we have to do to become really fast? This fast-paced talk will answer all these questions in only five minutes, which is the maximum time a user will wait for a website to load.
by Brian Doll
Web applications are being shipped faster, deployed instantly to the cloud and are catering to the ever-growing needs of a technologically connected audience.
How do you manage it all? How do you maintain high performance in your application, stay on top of your server performance and ensure your end users are getting the best service you can deliver? Find out with New Relic!
by Joshua Bixby
If you’ve been looking for compelling data that will help you make a case for investing in mobile performance, this session is a must-see.
Mobile web users are demanding. 85% say they expect sites to download at least as quickly on their mobile devices as they do on their home computers. Almost half say that poor performance makes them less likely to return to the site. And one third say they would visit a competitor’s site next. Yet despite high user expectations, m-commerce sites continue to lag in performance, with the average site taking more than 9 seconds to load.
Despite the technical constraints of delivering speedy mobile websites, some companies are emerging as leaders. But significant data is still hard to come by. To fill this gap, Joshua has delved into Strangeloop's customer analytics to evaluate the relationship between performance improvements and business KPIs for m-commerce sites.
In this session, Joshua will present brand-new data in detailed case studies that show how real-world companies have optimized the performance of their mobile sites, and as a result have experienced dramatic improvements in key performance indicators such as:
Attendees will walk away with a clear before-and-after picture that shows why speed is a critical factor in mobile success.
by Jon Jenkins
The speed at which DevOps works is critical: it enables greater innovation and faster reaction in your business. Join Jon Jenkins as he talks about why it's important to iterate fast, deploy fast, be wrong a lot, and learn from your mistakes.
by Tim Morrow
Betfair recently launched a beta version of its sports betting website and is in the process of rolling the site out to new customers. We would like to present our journey so far and our results:
- Faster page load times
- Improving our operational insight into the performance of our website
- Significantly increasing our rate of deployment through continuous delivery
- How it impacted our bottom line
Our talk will explore the initiative in great detail. We will discuss:
We’ll go over the fundamentals of monitoring and performance assessment. We’ll talk about what to monitor, why to monitor it, and how to go about monitoring your systems, networks, applications and business.
by Jon Jenkins
The complexity of website content (as measured by the HTTP Archive) continues to increase. Meanwhile, the number of mobile browsing devices is increasing at more than 25% each year. Due to their limited processing power, memory, storage and network bandwidth, these devices pose new challenges in terms of web performance as sites become more complex. To date, the approach to solving these issues has been to create mobile versions of web sites with limited content and simplified layout. This talk will present a new way we are looking at the mobile browsing challenges at Amazon.com. We will present data about site latency among various classes of devices and usage of mobile versions of the site. Much of the presentation will focus on technical approaches to dramatically reduce web site latency for this class of users.
by Evan Elias
When correctly architected, horizontal partitioning of data can lead to dramatically better fault tolerance and isolation, while significantly improving the manageability and flexibility of a web site. A well-sharded MySQL architecture can perform comparably to newer NoSQL solutions, while still offering the expressiveness of SQL and the time-tested robustness of InnoDB.
The road to complete sharding of Tumblr’s primary data has been a long one, and I’ll discuss how, when, and why we transitioned from a single database, to several functionally-partitioned datasets, and finally to a massively horizontally-sharded architecture.
Along the way, we’ve developed a set of tools that has made the sharding process significantly easier to execute and monitor. The bulk of this session will focus on the practical aspects of sharding, digging into the nitty-gritty of the tools we use to:
The session will broadly appeal to engineers at rapidly-growing, MySQL-backed sites. The information presented is all drawn from first-hand experience, and mostly cannot be found in existing books or web sites. Some level of audience familiarity with MySQL is assumed, but previous experience with sharding, InnoDB internals, or MySQL replication is not required.
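To make the idea of horizontal sharding concrete, here is a minimal sketch of range-based shard routing. This is not Tumblr's implementation; the ID ranges and connection names are invented for illustration:

```javascript
// Minimal sketch of range-based shard routing for user data.
// Shard boundaries and connection names here are hypothetical.
const SHARD_RANGES = [
  { maxUserId: 1000000, conn: 'db-shard-01' },
  { maxUserId: 2000000, conn: 'db-shard-02' },
  { maxUserId: Infinity, conn: 'db-shard-03' },
];

// Range-based (rather than hash-based) lookup keeps resharding
// simpler: splitting a hot range only moves that range's rows.
function shardFor(userId) {
  for (const range of SHARD_RANGES) {
    if (userId <= range.maxUserId) return range.conn;
  }
  throw new Error('no shard for user ' + userId);
}
```

In practice the routing table lives in a shared store rather than in code, so shards can be split or rebalanced without a deploy.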
How do you measure web page performance?
We all talk a lot about how fast our web pages are but are we comparing apples with apples?
There are at least 6 different ways to measure web page performance, and multiple metrics to gather such as initial render, “above the fold” time or onLoad event time.
The goal of this talk is to put web performance measurement in perspective by identifying and classifying the different ways of measuring web performance.
So what dimensions can we compare them on?
2. What metrics do they collect?
3. “Completeness” of measurement – e.g. can they measure 3rd party or CDN content?
4. Ease of implementation / use?
5. Scalability (e.g. are they suitable for “real-user monitoring” type solutions or are they more developer/single-user oriented?)
6. Are they suitable for Mobile devices?
7. Are they suitable for measuring web APIs?
8. What’s the cost?
9. Suitable for SME vs Enterprise?
We will also talk about the “active monitoring versus real-user monitoring” debate: what are the pros and cons of each approach, and will one eventually supplant the other?
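As one concrete instance of the metrics these tools collect, common page-load numbers can be derived from Navigation Timing-style timestamps. A minimal sketch, assuming a `performance.timing`-shaped object; the sample values below are fabricated for illustration:

```javascript
// Derive common page-load metrics from a Navigation Timing-style
// object (window.performance.timing in browsers). All fields are
// epoch milliseconds.
function pageMetrics(t) {
  return {
    // Time until the server's first byte arrives
    ttfb: t.responseStart - t.navigationStart,
    // Time until the DOM is parsed and ready
    domReady: t.domContentLoadedEventStart - t.navigationStart,
    // Time until the onLoad event fires
    onLoad: t.loadEventStart - t.navigationStart,
  };
}

// Fabricated sample timestamps, purely for illustration.
const sample = {
  navigationStart: 1000,
  responseStart: 1300,
  domContentLoadedEventStart: 2100,
  loadEventStart: 3500,
};
```

Note that none of these fields capture “above the fold” time or initial render, which is exactly why the different measurement approaches in the list above are hard to compare.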
This session will demonstrate how many networking concepts and techniques related to Quality of Service (QoS) can be mapped onto web clients, web services, applications and runtimes, providing an end-to-end resource management framework that offers resource protection (via call shaping and policing) as well as request prioritization based on contextual and dynamic classification. It argues that applications should be viewed more like the underlying networking layer, adopting many traffic engineering and QoS optimization techniques. It presents a mapping guide from one domain to the other, and then shows how this has been implemented in a production QoS solution for applications developed in Java, Scala, JRuby/Ruby and Jython/Python, as well as all other JVM languages and eventually other runtimes/platforms, including Microsoft’s .NET/Azure.
This session will also outline how QoS as applied to runtimes can be mapped to all execution points in the delivery of web based software services from client devices to backend services. It will demonstrate a working proof of concept using Apache web servers and app containers.
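The “call shaping and policing” idea maps directly onto classic network traffic-engineering primitives such as the token bucket. A minimal sketch, not taken from the product discussed; capacity and refill rate are illustrative values:

```javascript
// Token-bucket request policing: the application-level analogue of
// network traffic policing. A request proceeds only if a token is
// available; tokens refill continuously at a fixed rate.
class TokenBucket {
  constructor(capacity, refillPerSecond, now = Date.now()) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, false if it is policed.
  tryAcquire(now = Date.now()) {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Prioritization then amounts to giving different request classes different buckets, mirroring per-class queues in network QoS.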
1. Reading/writing to the DOM
2. Function calls
3. Lookups (scope lookups, property lookups, array item lookups)
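The third cost above, repeated lookups inside hot loops, is the easiest to illustrate. A generic sketch (not from the session; modern engines optimize some of this away, but the pattern still reads as the classic fix):

```javascript
// Repeated property lookups in a hot loop: items.length is
// re-resolved on every iteration.
function sumWidthsSlow(items) {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].width;
  }
  return total;
}

// Hoisting the lookup into a local pays the cost once.
function sumWidthsFast(items) {
  let total = 0;
  const len = items.length; // property lookup hoisted out of the loop
  for (let i = 0; i < len; i++) {
    total += items[i].width;
  }
  return total;
}
```

The same hoisting idea applies to the first cost: batch DOM reads and writes outside the loop instead of touching the DOM on every pass.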
Your website just went down. As you try to understand what has gone wrong, you quickly realize something is different this time. There’s no clear reason why your site should be down, but indeed it is.
This talk is about the story of our team’s first unprepared fight against a DDoS attack.
The idea (what I’d like to convey)
I’d like to give an honest and detailed view of what happened when we were DDoS’ed. It was our first experience at this scale, and we were probably unprepared.
I’d like this to be clear from the presentation, and give lots of details on how we searched our way around and figured out our next steps as the attack itself evolved.
The outcome in the end was positive, and we were able to make some high-level changes to our infrastructure and architecture to (at least try to) better protect our systems in the future.
intermediate to advanced
most suited for operations engineers
by Andrew Oates, Matthew Steele and jmarantz
Page Speed analysis tools provide performance metrics for websites, and mod_pagespeed and Page Speed Service automatically rewrite websites by applying web performance “best practices”. But exactly what is the impact on website latency and usability of implementing these transformations? We’ll share what we’ve learned from running Page Speed and mod_pagespeed in the field.
What is the impact of specific web performance optimizations on metrics like page load time and time to first paint? What is the interaction between latency, page structure, and network-level decisions such as when to flush web pages so that browsers can start rendering? In this talk, we’ll present our latest findings and show developers how to apply these findings to their own web sites.
We’ll conclude with a discussion of new features being added to Page Speed analysis & rewriting tools to translate these learnings into a faster web.
by Ingmar Krusch and Schlomo Schapiro
Ten years of continuous growth leave many stretch marks on any website: increasing maintenance overhead, lengthy and complex internal processes, and lots of code and configuration that nobody knows about. A past Windows-to-Linux migration does not help in achieving a clean platform either.
On the other hand, being a market leader does not leave any room for relaxation but requires us to stay ahead. We need to be faster than the competition and make sure that our own size and being part of a larger corporation does not slow us down.
Continuous Delivery is not just a buzzword but the answer to many of our current problems. Today we envision our data center and the various IT departments as building blocks in a Continuous Delivery Platform (CDP) that strives to shorten the time it takes to convert an idea into productive code.
This talk starts from a big picture of a typical web company and drills down into the technical and organizational challenges that stood in the way of creating a CDP. Our developers turning agile and doing everything through Scrum was only the start of a series of profound changes that touched all IT departments and beyond.
DevOps helped us a lot in explaining to our management and colleagues what is going on and what we want. But only a brand-new deployment and configuration management system brought the actual breakthrough to shared responsibility and to teams developing operational thinking.
An important lesson was that engineers come together by solving common problems as a team. In our case we had to deal with two main concerns: Linux and Java.
Linux being the operating environment of choice, we started to do things the Linux way and package all our software, configuration files and even content into RPM packages, greatly simplifying deployment and system management. Since our developers build the RPM packages, they also get much more involved in site operations, and suddenly the whole DevOps idea actually works out for us.
Java being the coding language of choice (with a huge code base :-() and a lot of “Java thinking” led us to write a bridge between Java and Linux: our Nexus YUM plugin translates between a Java world that knows only Maven and a Linux world that likes to install packages via YUM. The automated build process in TeamCity creates RPM packages and puts them into Nexus, which serves the same RPM packages as a YUM repository to our servers. This simplifies the handover from development to operations and is a big performance boost for our delivery chain.
These and other technical solutions, together with many organizational changes (e.g. giving developers more access in the data center), create the foundation for our Continuous Delivery Platform and enable everyone to focus on working on the software while relying on a solid delivery tool chain underneath.
Another important lesson was how to secure management buy-in for what essentially started out as an internal grassroots movement. Previously closely guarded kingdoms are now open to shared responsibility based on a trust relationship between development and operations.
With even product owners seeing the business value of web operations, ImmobilienScout24 is now in a much better position to deliver business value as fast as possible to our web platform.
We would like to share with you the details of our journey, the ideas that helped us along the way and the code that we wrote. This talk will be useful to both managers and engineers who want to embark on a similar path.
by Aaron Peters
What is the best way to get JS into the page?
It’s been well known that scripts block. In older browsers, external scripts block subsequent objects from being downloaded, and in all browsers both external and inline scripts block DOM parsing and page rendering. Since 2008, per the recommendations of Yahoo! and Steve Souders’ book, the advice has been to combine external scripts and move them to the bottom of the HTML in order to get optimal page performance. Then came dynamic loading of scripts: appending a script to the DOM without blocking the UI thread. Script loaders like LABjs and Yepnope emerged, enabling web developers to load multiple scripts in a non-blocking way while preserving execution order, and even to couple external scripts with inline JS code. And the story continues: there is the defer attribute, the async attribute, and async=false is coming soon.
With so many ways to get JS code into the page, how do you, the web developer, decide which technique(s) to use to get optimal performance for your pages? As a starting point, you need a solid understanding of the various techniques and how they behave in the popular browsers under different conditions.
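As a baseline for comparing the techniques above, here is a minimal dynamic-loader sketch that appends script elements with async = false to preserve execution order. The explicit `doc` parameter is only there to keep the sketch self-contained; in a real page you would pass `document`:

```javascript
// Dynamically append scripts without blocking the HTML parser,
// while preserving execution order via async = false.
function loadScriptsInOrder(urls, doc) {
  return urls.map(function (url) {
    const script = doc.createElement('script');
    script.src = url;
    // Dynamically inserted scripts default to async execution;
    // async = false restores in-order execution.
    script.async = false;
    doc.head.appendChild(script);
    return script;
  });
}
```

This is only one point in the design space: defer, plain async, and loader libraries like LABjs each trade off ordering, blocking, and browser support differently, which is exactly what the talk compares.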
My presentation will cover:
8th–9th November 2011