In a desperate effort to return to a Proper and Normal universe, I'm going to try to actually do the Friday weeknotes blog entry on a Friday. It's crazy. But I have to try. As Jake managed to slip all the way into Tuesday (slacker) this means that I only have 3 days to talk about, but I'll try to pad it out with rambling as much as I can.
In the aftermath of the move, Wednesday was nice and quiet — with Jake speaking at Web Rebels, Sophie on holiday, and Nat at Ikea buying (as far as I can tell) ALL OF THE THINGS, it was just Simon and me in our newly-assembled office. There was a brief bit of excitement with DNS failures in the afternoon, but mostly we sat and worked quietly on things to the soothing background sounds of sawing and sanding and other construction. (We're going to have a roof deck with wireless and power sockets! So awesome!)
Thursday was BUILD FURNITURE day — this is a larger office so there's room for more desks, bookshelves, and pot-plants now, and as a result it's looking "much less scrappy than the last one". Thanks must be given especially to Nat's Dad Steve, who did a huge amount of van-renting, driving, tool-providing and tea-demanding during the move. Thanks, Steve!
Also we discovered a new sandwich place. Mmmm, sandwiches.
Today is Friday, and the most exciting thing that's happened all day is the huge poster of Gazza that's appeared on a wall opposite the office. Apparently he's the new face of something. There have been photographers on the roof, which I'm sure will be lovely once we're allowed up there.
One of the persistent problems in our deployment setup right now is that we have a single point of failure — we're running the Lanyrd load balancer on just one of our EC2 boxes, so if it falls over, the entire site goes down. So far this hasn't happened, but when you're hosting on EC2 you're really not supposed to assume the machines are 100% reliable.
The simplest solution to this is to use even more of Amazon's infrastructure, and load balance using Elastic Load Balancer rather than one of our own boxes. It's got a few disadvantages over our current approach — for instance, the site access logs will be split across multiple servers rather than all sitting conveniently on one — but they're minor compared to the advantage of getting rid of a single point of failure.
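If you haven't played with ELB, the setup really is just a handful of API calls. Here's a rough sketch using boto (the balancer name, region, zones and instance IDs are all made up for illustration, not our actual configuration):

```python
import boto.ec2.elb

# Connect to the ELB API in the region the web servers live in.
conn = boto.ec2.elb.connect_to_region("us-east-1")

# Create a balancer listening on ports 80 and 443, spread across two
# availability zones so the balancer itself isn't a single point of failure.
lb = conn.create_load_balancer(
    "web-lb",
    zones=["us-east-1a", "us-east-1b"],
    listeners=[(80, 80, "http"), (443, 443, "tcp")],
)

# Attach the existing web server instances (placeholder IDs).
lb.register_instances(["i-11111111", "i-22222222"])

# Amazon hands back a DNS name for the balancer rather than a fixed IP,
# which is exactly the problem discussed below.
print(lb.dns_name)
```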
The biggest problem with moving to ELB is that you're not given a permanent IP for the load-balancer endpoint — you have to use a CNAME, and that prohibits pointing the domain apex (lanyrd.com, vs www.lanyrd.com) at it without risking horrible, embarrassing breakage. There's really only one solution to this (assuming we really do want to keep the site at lanyrd.com rather than adding a www.), and that's to move even more of our infrastructure onto Amazon's cloud, and give them our DNS hosting as well.
Route 53 is Amazon's DNS hosting service, and it's pretty good. I've certainly used much nastier DNS interfaces. The clever thing Amazon do here that no one else can do is link their DNS and their load balancers together: I can point my A records at "this ELB endpoint" rather than hard-coding anything in particular, which means you can reliably load-balance your site even if you're not willing to use a www. prefix.
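As a rough illustration of what that looks like (again via boto, with a placeholder hosted zone ID and balancer DNS name that in practice would come from the ELB itself), creating one of those alias records goes something like this:

```python
import boto
from boto.route53.record import ResourceRecordSets

conn = boto.connect_route53()

# Find the Route 53 hosted zone for the bare domain.
zone = conn.get_zone("lanyrd.com.")

# Create an A record at the zone apex that is an *alias* to the ELB,
# rather than a CNAME (which isn't allowed at the apex).
changes = ResourceRecordSets(conn, zone.id)
changes.add_change(
    "CREATE", "lanyrd.com.", "A",
    alias_hosted_zone_id="Z35SXDOTRQ7X7K",
    alias_dns_name="web-lb-123456.us-east-1.elb.amazonaws.com.",
)
changes.commit()
```

Once that alias exists, Route 53 answers queries for the bare domain with whatever addresses the load balancer currently has, so the apex behaves like a CNAME without actually being one.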
So this is what I'm working on right now. Removing single points of failure gives me a warm fuzzy feeling. My long-term goal is to get the whole of lanyrd.com served over SSL, and this is a big step towards that. Better than kittens.