by Alex Martelli
Grace Murray Hopper’s famous motto, “It’s easier to ask forgiveness than permission”, has many useful applications – in Python, in concurrency, in networking, as well of course as in real life. However, it’s not universally valid. This talk explores both useful and damaging applications of this principle.
I start by introducing the motto “It’s easier to ask forgiveness than permission” and the woman who used it, Rear Admiral Grace Murray Hopper, also known as the “mother of Cobol” and the author of the first ever programming-language compiler.
I then move on to the Python context, where the motto supports the proper usage of exception-catching rather than preliminary checks; and the “rule that proves the exception” introduced by abstract base classes.
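In Python terms this is the contrast between "look before you leap" (ask permission with a preliminary check) and EAFP (just try, and catch the failure). A minimal sketch, with the `config`/`get_port` names invented purely for illustration:

```python
# "Look before you leap": ask permission with a preliminary check.
def get_port_lbyl(config):
    if "port" in config:
        return config["port"]
    return 8080  # fall back to a default

# "Easier to ask forgiveness": just try, and handle the exception.
def get_port_eafp(config):
    try:
        return config["port"]
    except KeyError:
        return 8080  # fall back to a default
```

The EAFP version also avoids a race between the check and the access when the mapping can change underneath you, which is one reason it is considered idiomatic Python.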
Expanding the subject, I show how “optimistic concurrency” applies that motto (while locking would “ask permission”, in essence, STM “asks forgiveness”), and how collision-detection-focused networking protocols have similarly triumphed over more highly structured, “ask permission” ones like token-ring.
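The optimistic pattern can be sketched in a few lines of Python: do the work without holding a lock, then commit only if nothing changed in the meantime, retrying on conflict. This is a toy illustration of the idea, not real STM; the class and names are invented for the example:

```python
import threading

class OptimisticCounter:
    """Optimistic concurrency sketch: read a version, compute
    without a lock, then commit only if the version is unchanged;
    on conflict, "ask forgiveness" and retry."""

    def __init__(self):
        self._value = 0
        self._version = 0
        self._commit_lock = threading.Lock()  # guards only the brief commit

    def increment(self):
        while True:
            seen_version = self._version
            new_value = self._value + 1  # the "work", done lock-free
            with self._commit_lock:
                if self._version == seen_version:  # no one interfered
                    self._value = new_value
                    self._version += 1
                    return
            # Conflict: someone committed first -- retry from the top.
```

Under low contention this beats pessimistic locking because the expensive part runs without any lock held; under high contention the retries dominate, which is exactly the trade-off the talk explores.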
Moving to the fuzzier context of real life, I then show how this daring approach does not work quite as well as in the technical realm – except when applied correctly, in the right circumstances… and I try to distill a general law describing what the right circumstances for its application are, comparing and contrasting with the similar tension between “do it right the first time” and “launch and iterate” (and the latter’s cognate “fail, but fail fast” principle).
Spotify’s current catalog contains 15 million songs. Original storage of audio and metadata is over 500 terabytes, and we’re transcoding 500 000 new audio streams a day. At its best, the system can make an album playable just a few minutes after its delivery.
This talk is about building the music pipeline, all the way from the labels, who deliver music and metadata XML to our system, to the clients. The challenges are concurrency, the massive amount of data, enriching the metadata to improve its quality, and actually delivering 100 gigabytes of indexes daily.
by Andrew Dalke
The future is here! Or rather, concurrent.futures became part of the Python standard library with 3.2. This style of asynchronous programming, also known as promises, has been around for decades but is only recently becoming popular in a number of languages and libraries.
My presentation is meant for a Python programmer who knows nothing about futures. I’ll structure it around processing web server logs, and show several ways Python code can make more effective use of a multi-core machine. In some cases the multi-threaded executor is good enough, but in others the right solution is the multi-process executor. Because of the unified API, it’s a one-line change to switch from one to the other.
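A minimal sketch of that one-line switch, with a made-up `count_bytes` function standing in for real per-line log processing:

```python
import concurrent.futures

def count_bytes(line):
    # Stand-in for real log processing (parsing, aggregating, ...).
    return len(line)

lines = ["GET /index.html 200", "GET /missing 404", "POST /api 201"]

# Thread-based executor: fine for I/O-bound work.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(count_bytes, lines))

# The one-line change for CPU-bound work: same API, different executor.
# with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
#     results = list(executor.map(count_bytes, lines))
```

`executor.map` preserves input order, so `results` lines up with `lines` regardless of which worker finished first.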
It isn’t hard to write your own executor for different compute models. I’ll show that by developing a new one which works on top of the PiCloud API. At the end I’ll describe some of the more experimental work I’m doing to use promises in a dependency graph, where certain computed properties are dependent on others.
Even though concurrent.futures came in 3.2, Python 2.x users can use the API through Alex Grönholm’s ‘futures’ backport.
2nd–8th July 2012