A completely misguided meme has long been going around: that Python doesn't have, or need, any Design Patterns. This terrible meme may spring from not realizing what the Gang of Four state so plainly in their classic "Design Patterns" book: which design patterns are useful DOES depend on the programming language one targets -- design is NOT independent of implementation, as the epic-fail "Waterfall" Methodology Pattern would suggest. What patterns apply to a design depends to some extent on what implementation technologies will be used to realize that design.
If you focus on some "classic DPs" that are basically workarounds for some other language's lack of garbage collection, or for a clumsy static-typing system, those may indeed be worthless for Python. But many other DPs are perfectly useful and applicable, and Python's strengths as a language afford riffing on them to develop highly Pythonic, powerful, productive variants.
In this talk, I analyze some of my favorite pattern families -- e.g., Template Method and its variants, Dependency Injection and its ilk, Callback and friends -- in a highly Pythonic context. Non-pattern Idioms, and Patterns that aren't really Design Patterns but rather Architecture or Methodology ones, make cameo appearances.
Goals: remove from your system any residue of the pernicious meme about Python not having or needing design patterns. Prereqs: experience designing and developing software; intermediate-level Python knowledge.
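As a taste of the Pythonic riffing on classic patterns the talk discusses, here is a minimal sketch of Template Method (the class names are illustrative, not taken from the talk):

```python
# Template Method, Pythonic style: the base class fixes the algorithm's
# skeleton, and subclasses plug in the varying steps by overriding hooks.
class Report:
    def render(self):                    # the template method
        return "\n".join([self.header(), self.body(), self.footer()])

    def header(self):                    # hook with a default
        return "=== report ==="

    def body(self):                      # hook: subclasses must override
        raise NotImplementedError

    def footer(self):
        return "=== end ==="

class SalesReport(Report):
    def body(self):
        return "sales are up"

print(SalesReport().render())
```

Thanks to duck typing, Python variants of this pattern often replace the abstract base class with a plain function taking the hooks as arguments.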
by Simone Leo
Hadoop is the leading open source implementation of MapReduce,
Google's large-scale distributed computing paradigm. Hadoop's native
API is in Java, and its built-in options for Python programming --
Streaming and Jython -- have several drawbacks: the former gives
access to only a small subset of Hadoop's features, while the latter
carries with it all of the limitations of Jython with respect to CPython.
Pydoop (http://pydoop.sourceforge.net) is an API for Hadoop that makes
most of its features available to Python programmers while allowing
CPython development. Its core consists of Boost.Python wrappers for
Hadoop's C/C++ interface.
The talk consists of a MapReduce/Hadoop tutorial and a presentation of
the Pydoop API, with the main goal of bridging the gap between the
Hadoop and Python communities. A basic knowledge of distributed
programming is helpful but not strictly required.
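To make the MapReduce model concrete ahead of the tutorial, here is a toy single-process word count showing the map, shuffle and reduce phases (this is plain Python for illustration, not the Pydoop API):

```python
from collections import defaultdict

# Each mapper call emits (word, 1) pairs; the framework groups pairs by
# key ("shuffle"); each reducer call sums the counts for one word.
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return word, sum(counts)

def run_mapreduce(lines):
    groups = defaultdict(list)           # the shuffle phase
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

print(run_mapreduce(["a b a", "b a"]))   # {'a': 3, 'b': 2}
```

In a real Hadoop job the mapper and reducer calls run in parallel across a cluster, with the shuffle handled by the framework.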
The popularity of network analysis has grown considerably with the recent spread of social networks. It is a multidisciplinary topic, with important contributions from researchers in many areas such as physics, sociology, mathematics and computer science.
However, network analysis is also a useful tool for programmers. The basic techniques introduced in this talk can in fact be used, for example, to i) test the robustness and failure resistance of a network, and ii) gain a deep understanding of the structure of a social network, which can lead to insights about fads and trends from modern networking services.
Along with these concepts, Python code will be shown that makes use of both existing network analysis tools and numerical computing packages. The focus will mainly be on the code, presented and discussed together with the theory it is based on.
The only recommended prerequisites for following this talk are basic mathematical skills and an introductory-level knowledge of the Python programming language.
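As a plain-Python illustration of point i) above, one way to probe a network's resistance to failures is to remove a node and check whether the graph stays connected; real analyses would use a dedicated tool, but the idea fits in a few lines (names and graph are made up for illustration):

```python
# Does the graph stay connected after one node fails?  Breadth-first
# search from any surviving node, then check every survivor was reached.
def connected_after_failure(adj, failed):
    nodes = set(adj) - {failed}
    start = next(iter(nodes))
    seen, queue = {start}, [start]
    while queue:
        for neigh in adj[queue.pop()]:
            if neigh in nodes and neigh not in seen:
                seen.add(neigh)
                queue.append(neigh)
    return seen == nodes

# A star network: hub 0 is a single point of failure, leaf 1 is not.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(connected_after_failure(star, failed=1))  # True
print(connected_after_failure(star, failed=0))  # False
```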
by Ezio Melotti
Python is an open source language where everyone can contribute, and thanks to Mercurial it's now even easier.
With this talk I want to unveil what happens "behind the scenes" of CPython and how you can get involved and become part of the open source community that makes Python one of the most popular programming languages.
I will explain:
* what is the workflow of the CPython development;
* how to get a clone of Python;
* how to use Mercurial to do all the most common operations;
* what is the structure of the main CPython repository;
* what other repositories are used;
* how to use the bug tracker to report and find bugs;
* how to use remote Mercurial repos to contribute code;
* what tools are used;
* how to get in touch with the core developers;
* what are the plans for the future.
by Jamu Kakar
Storm is an object relational mapper for SQL databases, with built-in support for PostgreSQL, MySQL and SQLite. It was designed and implemented as part of the Landscape project at Canonical in mid-2006 and was open sourced in mid-2007. Since then it's been used in a variety of projects, in production for many years, and has received numerous enhancements and bug fixes. The features of Storm will be explained with a series of examples and with discussion about what's happening in each one. In addition to describing the concepts and features that a developer needs to understand, a variety of best practices will be shared, to help developers make the best use of Storm.
The examples in this talk assume that participants have a good understanding of SQL, transactions, relationships between tables and other common database concepts.
The Larch Environment is a visual interactive programming environment for Jython/Python. Its purpose is to make programming more visual. To this end, protocols for presenting objects visually have been devised. A programming environment that builds on the idea of the standard visual console allows a programmer to experiment with ideas and develop programs at the same time. Additionally, a way of embellishing source code with visual content is presented.
by Anselm Kruis
Stackless Python supports pickling of a wider range of types than conventional CPython, including stack frames and code objects. On this basis it is possible to further extend the pickle.Pickler class in order to serialise classes, modules and packages, within certain limits. The sPickle package (http://pypi.python.org/pypi/sPickle) provides such an extended Pickler. The code was developed as part of a commercial project and was recently released as free software by science + computing ag. Currently it requires Stackless Python 2.7.
In my presentation, I'll first demonstrate some applications of the sPickle package including serialisation of modules and executing parts of a program on a remote computer using RPyC and Paramiko.
In the second part of my talk, I'll give some insight into the internal operations of sPickle and the lessons learned during its development. Extending the Pickler turned out to be like opening a can of worms: you have to take care of many odds and ends to get it right. I'll point out some weak points in the implementation of the conventional pickling code, and I'll also show the limits of the current sPickle implementation.
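The general idea of teaching a Pickler about otherwise unpicklable objects can be sketched with today's stdlib hooks. Note this uses `reducer_override` (available since Python 3.8) and merely serialises a module *reference* by name -- a far simpler trick than what sPickle actually does:

```python
import importlib
import io
import pickle
import types

class ModulePickler(pickle.Pickler):
    # Illustrative sketch, not sPickle's code: pickle a module object
    # as "re-import me by name" instead of failing.
    def reducer_override(self, obj):
        if isinstance(obj, types.ModuleType):
            return importlib.import_module, (obj.__name__,)
        return NotImplemented        # default behaviour for everything else

import math
buf = io.BytesIO()
ModulePickler(buf).dump({"mod": math, "pi": math.pi})
restored = pickle.loads(buf.getvalue())
print(restored["mod"] is math)       # True
```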
by Francesco Bochicchio
I intend to present a utility program that I developed to help me and my colleagues in our current project.
This program performs the following tasks:
- It parses an automatically generated Microsoft Word document in order to extract information about the data structures to be used to communicate with devices and/or software programs. This information is stored in a UML model, generated by interfacing with the CASE tool used in-house (Rational Rose).
- It uses the data in a UML model - usually a hand-improved version of the automatically generated one - to generate a set of C++ classes, one per message, which provide methods to serialise/deserialise the messages using the project-specific APIs.
The program is written in Python 2.x and uses the following external modules:
- pywin32: to interface with both MS Word and the CASE tool via the COM standard.
- ply: to parse the file in which the CASE tool stores the model.
Cloud computing and large-scale environments sometimes require applications based on complex, distributed architectures... and this usually means a huge overhead in the design and out-of-control confusion in the code (network-wise race conditions, single points of failure and so on).
Introducing elements like *MQ and IPC frameworks into this kind of application is the only way to reduce the complexity and enable a fluid design (in other words: mess under control).
The talk focuses on describing how to design a distributed application in different scenarios, using ZeroMQ (a modern broker-less MQ system) as the core framework, with examples and demos.
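The request/reply pattern that ZeroMQ's REQ/REP sockets provide natively can be illustrated with plain stdlib sockets as a stand-in (a toy sketch, not ZeroMQ code -- with pyzmq the bind/connect/send/recv shape is similar but without the framing and reconnection worries):

```python
import socket
import threading

# Toy "reply" side: accept one request, answer it, done.
def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        conn.sendall(b"reply:" + conn.recv(1024))

server = socket.socket()
server.bind(("127.0.0.1", 0))                  # any free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Toy "request" side: send one message, wait for the answer.
client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
server.close()
print(reply)                                   # b'reply:ping'
```

ZeroMQ's value is exactly in making this pattern (and pub/sub, push/pull, etc.) reliable and broker-less at scale, which raw sockets do not.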
Dependency injection is a technique that has been around for a long time, and it's
widely used in many programming languages and environments, but it's not that
widespread in the Python world.
Many think that using dependency injection forces you to write large and complex
XML blobs, breaks encapsulation, reduces code readability, or is simply
unneeded in a highly expressive language like Python.
On the contrary, I'll show you that DI:
- doesn't require any library or framework;
- encourages peer role identification;
- helps keep a class focused and cohesive;
- encourages separation of wiring code from application code;
- makes your code more reusable, expressive and testable;
- doesn't break encapsulation;
- turns part of your coding efforts into configuration.
Large applications, moreover, might get a great maintenance boost from
using a real DI container, so I'll briefly cover Pydenji, the Python(ic)
dependency injection toolkit, and what it can do for your application.
A basic knowledge of object oriented design and SOLID principles is required in order
to fully appreciate the content of this talk.
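To ground these claims, here is a minimal constructor-injection sketch (the class names are made up for illustration, and no framework or Pydenji is involved):

```python
# The collaborator is injected through the constructor instead of being
# created inside the class -- no library or framework required.
class SmtpMailer:
    def send(self, to, body):
        raise NotImplementedError("would talk to a real SMTP server")

class Greeter:
    def __init__(self, mailer):        # the dependency is injected
        self._mailer = mailer

    def greet(self, user):
        self._mailer.send(user, "hello %s" % user)

# Tests inject a fake peer playing the same role -- no monkey-patching.
class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
Greeter(fake).greet("alice")
print(fake.sent)   # [('alice', 'hello alice')]
```

The wiring (`Greeter(SmtpMailer())` in production, `Greeter(FakeMailer())` in tests) lives outside the class, which is the separation of wiring from application code mentioned above.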
by Kay Schluehr
Folklore says that having a problem and trying to solve it with regular expressions gives you two problems. However, not applying regular expressions to advanced textual search'n replace doesn't solve your problem either. One step above, you have large portions of recursively structured text, aka "source code": using context-free grammars and the tools supporting them gives you two problems, but not using them also doesn't solve your original problem. Maybe you get uneasy at that point, because what I say implies parsers and computing science and what not, and you still wake up in the night believing you have to learn automata theory -- but you are lucky, it was just a nightmare. Otherwise you are laughing about the little diatribe against regexps and use them without much deliberation, verifying your SQL input, mining source code and doing all the other things they are not made for.
In my talk I'm addressing the daily use of grammars outside the scope of compiler implementation or natural language processing. My talk covers:
I'm approaching this from the lightweight, "pythonic" angle, and you might wonder why not everyone has been using those techniques in their daily work for decades already. I can't answer this; I wonder about it too.
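As one example of the "one step above regexps" the abstract alludes to, a tiny recursive-descent parser for nested lists -- structures a regular expression fundamentally cannot match -- fits in a handful of lines (a made-up illustration, not code from the talk):

```python
import re

# Grammar:  value := NUMBER | '[' value (',' value)* ']'
# The recursion in the grammar handles arbitrary nesting -- the thing
# regular expressions cannot do (they only tokenise here).
TOKEN = re.compile(r"\d+|[\[\],]")

def parse(text):
    tokens = TOKEN.findall(text)
    pos = 0

    def value():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "[":
            items = [value()]
            while tokens[pos] == ",":
                pos += 1
                items.append(value())
            pos += 1                     # consume the closing ']'
            return items
        return int(tok)

    return value()

print(parse("[1,[2,3],4]"))              # [1, [2, 3], 4]
```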
by Yves Hilpisch
In financial engineering and derivatives analytics, C/C++/Java/VBA and other languages still dominate. With DEXISION (http://www.dexision.com), Visixion has developed the first full-fledged derivatives analytics suite with Python as its core language.
DEXISION is an On Demand application that is completely Open Source based (LAMP). For derivatives valuation, it uses Monte Carlo simulation -- an approach known to be computationally demanding. However, NumPy provides the performance and functionality needed to implement financial simulation algorithms in a fast and compact manner.
The talk illustrates the architecture of our analytics suite and demonstrates how to implement fast and compact simulation algorithms with Python and NumPy. The talk shows that the Python/NumPy combination reaches sufficient speed for production financial applications -- something still widely doubted.
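As an illustration of the kind of algorithm involved, here is a generic Black-Scholes Monte Carlo pricer for a European call (plain Python for clarity -- this is not DEXISION code, and a production version would vectorise the loop with NumPy arrays):

```python
import math
import random

def mc_call_price(s0, strike, t, r, sigma, paths=100_000, seed=42):
    """European call priced by Monte Carlo under Black-Scholes dynamics."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t       # risk-neutral drift
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(paths):
        # simulate the terminal stock price for one path
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(st - strike, 0.0)  # call payoff
    return math.exp(-r * t) * payoff_sum / paths  # discounted average

price = mc_call_price(s0=100, strike=100, t=1.0, r=0.05, sigma=0.2)
print(round(price, 2))   # close to the analytic Black-Scholes value of ~10.45
```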
by Mark Shannon
CPython can be made faster by implementing the sort of
optimizations used in the PyPy VM, and in my HotPy VM.
All the necessary changes can be made without modifying the language or the API.
The CPython VM can be modified to support optimizations by adding
an effective garbage collector and by separating the
virtual-machine state from the real-machine state (like Stackless).
Optimizations can be implemented incrementally.
Since almost all of the optimizations are implemented in the interpreter,
all hardware platforms can benefit.
JIT compiler(s) can then be added for common platforms (Intel, ARM, etc.).
For more information see http://hotpy.blogspot.com/
All problems have simple, easy-to-understand, logical wrong answers.
Subclassing in Python is no exception. Avoid the common pitfalls
and learn everything you need to know about how to subclass in Python.
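One classic pitfall of the kind this talk covers (my illustration, not necessarily the speaker's): under multiple inheritance, calling a parent's method directly would run `Base.__init__` twice or skip a sibling, while cooperative `super()` walks the MRO so each class runs exactly once:

```python
class Base:
    def __init__(self):
        self.log = ["Base"]

class A(Base):
    def __init__(self):
        super().__init__()            # cooperative: next class in the MRO
        self.log.append("A")

class B(Base):
    def __init__(self):
        super().__init__()
        self.log.append("B")

class C(A, B):
    def __init__(self):
        super().__init__()            # runs Base, B, A -- each exactly once
        self.log.append("C")

print(C().log)   # ['Base', 'B', 'A', 'C']
```

Had `A.__init__` called `Base.__init__(self)` directly, `B.__init__` would never run when constructing `C`.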
Mobile apps are the hot item of the day -- and the best mobile apps are backed by a great website. Python web developer Nate Aune and iPhone developer Anna Callahan will show you how we built a simple music web app in Django with a native iPhone app that communicates with it. Attendees of this talk will see a concrete case study of building an application that exposes an API for mobile devices.
Our web app exposes a JSON API for sending and receiving data from the mobile device. We’ll talk about why we chose Django and the TastyPie API package, and discuss other Python-based frameworks that could be used to build the API such as Pyramid, Flask and Bottle. We’ll also compare REST and custom APIs to understand best practices for building APIs designed for mobile devices.
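To make the idea concrete, a JSON endpoint of this kind can be sketched framework-free with the stdlib (the URL and payload shape below are invented for illustration; the actual app uses Django with TastyPie):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class SongsHandler(BaseHTTPRequestHandler):
    # Serve a fixed JSON payload for any GET, the way a mobile client
    # would fetch a resource list from the API.
    def do_GET(self):
        body = json.dumps({"songs": [{"id": 1, "title": "Demo"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), SongsHandler)   # any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
data = json.load(urlopen("http://127.0.0.1:%d/songs" % server.server_port))
server.shutdown()
print(data)   # {'songs': [{'id': 1, 'title': 'Demo'}]}
```

A framework like TastyPie adds, on top of this bare shape, serialisation of real model objects, authentication and pagination.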
In this talk I'll describe our successful experience in introducing Python
into a system for blood collection tube labeling in laboratory and hospital
environments, based on IHE Technical Frameworks –the industry standard
for modeling and streamlining healthcare processes– and designed to avoid
human errors and ensure process traceability.
During the talk I will explain why we chose Python in the first place,
how we've been able to leverage the language's features and
characteristics for our specific field, and what problems and limitations we encountered.
I will show specific instances – showing code examples too – of Python
usage in different parts of the project, including a low-level driver
for laboratory automation machinery, an asynchronous messaging module,
the implementation of IHE-compliant actors and the inevitable end-user
web application, implemented with Django.
Using Python greatly helped us in building our system, allowing
very rapid prototyping cycles for both hardware and software, but during
the talk I'll also point out what we found was missing, and what
would be nice to have to ensure Python has its proper place as a viable
platform for designing streamlined healthcare workflows
based on established international standards.
by Claude Gilbert
Python is a great language for writing programming frameworks. Python frameworks are normally aimed at software developers who are Python professionals. I developed a software package in a scientific institution, designed to be used by non-programmers, but also designed to enable customisation through programming by some users. I finally designed a three-level package:
One of the challenges was to offer an application with an easy-to-use interface, not graphical, not web-based and not requiring Python programming. This interface was necessary for batch processing.
This talk addresses how this project was carried out, the technical solutions adopted and how Python was introduced in an operational scientific institution (http://www.ecmwf.int) where most users were Fortran programmers. Python was introduced as early as 2004 and it was a challenge to gain acceptance. I will also make a parallel with a project I am currently working on for NASA (http://gmao.gsfc.nasa.gov/).
*Desperately trying to forget technical details* summarises how I tried, using Python, to help Meteorology scientists to focus on their domain of expertise instead of constantly solving technical problems.
The disciplines of Meteorology and Climate involve numerical modelling of physical phenomena. The amount of data going in and out of the model is considerable. The organisation and the storage of data is complicated, their post-processing is a challenge. Scientists need to access and process input and output data to monitor the trends of the input data and to evaluate the performance of their models. Those statistics, diagnostics, plots and verifications are crucial to the improvement of the quality of the models. Finding the right data, decoding it, transforming it to be ready for use are necessary steps to initiate the pre-processing. All these actions are fundamentally the same between different prediction centres, but the data organisation and file formats can differ.
The London Python Code Dojo is a community-organised monthly meeting for Python programmers in the UK. Variously described as social coding, developer training, "Scrapheap Challenge" for Pythonistas and "I didn't learn coding like this when I was a lad", we've forked the traditional code-dojo format and turned it into something very different.
This talk will explain and explore what happens in the dojo, how it's organised and why various changes were made to the classic dojo format. Reference will also be made to influences from music education and philosophy of education.
Hopefully, by the end of the talk you'll all want to go organise a dojo!
20th–26th June 2011