Multi-threaded servers compete for the global interpreter lock (GIL) and incur the cost of continuous context switching, potential deadlocks, or plain wasted cycles. Asynchronous servers, on the other hand, create a mess of callbacks and errbacks, complicating the code. But what if you could get all the benefits of asynchronous programming while preserving the synchronous look and feel of the code – no threads, no callbacks?
In this talk we’ll look at how Ruby 1.9 Fibers, combined with EventMachine, can enable us to build a fully asynchronous web server while preserving the feel of synchronous code – the best of both worlds. A cooperative, pure IO-scheduled web server to power your next Rails or Rack application!
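The core trick the abstract describes can be sketched in any language with resumable coroutines. Below is a minimal cross-language illustration in Python using generators: the handler suspends at a yield and a tiny scheduler resumes it when the callback fires, so the handler reads top-to-bottom with no visible callback nesting. All names here (`fake_fetch`, `handler`, `run`) are illustrative stand-ins, not code from the talk, which uses Ruby Fibers and EventMachine.

```python
def fake_fetch(url, callback):
    """Stand-in for an evented I/O call that completes via a callback."""
    callback("response for " + url)

results = []

def handler():
    # Looks synchronous: "call" the I/O and use its result inline.
    body = yield "http://example.com/"
    results.append(body)

def run(coro):
    """Start the coroutine and wire its pending I/O to a callback."""
    request = next(coro)                     # coroutine suspends at the yield
    fake_fetch(request, lambda resp: resume(coro, resp))

def resume(coro, value):
    try:
        coro.send(value)                     # hand the I/O result back in
    except StopIteration:
        pass                                 # handler finished

run(handler())
print(results[0])  # -> response for http://example.com/
```

The scheduler owns all the callback plumbing once, and every handler written against it stays linear – which is the "synchronous look and feel" the talk promises.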
by Bryan Smith
The talk will be segmented as follows:
Introduction of the Sheeva CPU
More I/O than imagined
Practical Applications and Impractical Applications
Actual Power Consumption
Booting the Sheevaplug
The future of Plug Computing
I will discuss the fine points and rough edges of the Sheevaplug and my experiences with the device over the past year. This device always brings a smile to people’s faces, as its usage potential tends to boggle the mind. I will also provide a power consumption chart comparing it with other servers and thin-client devices.
This device can be used for nearly anything you use your current computer for in a Linux/Unix infrastructure. I’ll outline many of the usage scenarios and share my experiences therein.
I will speak thoroughly but briefly about the I/O of this device, because it is a really STRONG selling point of the unit. I’ll show other interesting projects and give credit to those who have deployed these units in the commercial space and in personal projects.
During the presentation I’ll boot the device from several different media to show the speed at which it boots and its versatility as a server/desktop/embedded platform.
The device is wonderful, but there are many caveats and things to avoid while exploring the Sheevaplug. I’m not selling this device, so I will give as accurate and spin-free a presentation as possible.
The OpenCV library is a collection of routines intended for real-time computer vision, released under the BSD License, free for both private and commercial use. The library has a number of different possible applications including object recognition and tracking.
This talk will walk attendees through cross-compiling and building a static distribution of the library, which you can link into your application and use from both the iPhone Simulator and the iPhone (and iPod touch) device itself.
We’ll then go on to discuss how to use the OpenCV library to build a simple application that performs face recognition on images taken directly with the iPhone’s own camera.
This presentation walks attendees through cross-compiling and building the OpenCV library for the iPhone. It assumes some experience with programming the iPhone, with Objective-C, and with the command-line build tools. Previous experience cross-compiling code for multiple platforms is not required. It would suit experienced iPhone developers.
by Jill Tarter
For years you’ve been leaving your computers turned on in order to process data packets for UC Berkeley’s SETI@home – that’s great! Please keep it up!
Did you ever want to get more involved?
Do you think about the ‘why’, ‘how’, ‘what if?’ of SETI and want to offer improvements? Do you like contributing to open source code development projects? Do you think SETI projects should be looking for different kinds of signals, and do you have great algorithms for finding them in noise? Are you good at seeing hidden patterns, do you have some time to examine data coming from SETI observations with the Allen Telescope Array in real time?
Do you wear headphones and listen to music while you work to sharpen your concentration? Could you imagine listening to data instead and responding to anomalies? If any of these ideas describe you, then you should check out setiQuest.org – we want to make the SETI Institute’s SETI programs on the Allen Telescope Array better and more comprehensive. We have some ideas of things we can and should do, but we need resources and your help to get the job done; we have other ideas we’d like to pursue, but we don’t yet understand whether or how we can make them happen – maybe you hold the key. There are things you know about signal processing, data manipulation and distribution, and crowdsourcing that we need you to help us learn and implement, so we can improve our searches.
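The "great algorithms for finding signals in noise" the abstract asks for can start very simply. The sketch below is a toy energy detector (not anything setiQuest actually ships): it flags time windows whose mean power stands well above the overall average, which is the crudest possible way to pull a burst out of Gaussian noise. The windowing, threshold, and synthetic data are all illustrative choices.

```python
import math
import random

def detect_bursts(samples, window, threshold):
    """Return start indices of windows whose mean power exceeds
    `threshold` times the mean power of the whole recording."""
    mean_power = sum(s * s for s in samples) / len(samples)
    hits = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        power = sum(s * s for s in chunk) / window
        if power > threshold * mean_power:
            hits.append(start)
    return hits

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1000)]
for i in range(500, 550):                 # bury a loud tone mid-stream
    noise[i] += 5 * math.sin(0.3 * i)

hits = detect_bursts(noise, 50, 3)
print(hits)  # -> [500]
```

Real searches replace the power average with matched filters, Fourier transforms, and drift-rate corrections, but the structure – score windows, compare against the noise floor, report anomalies – is the same.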
In short, thanks for all those years of paying for the electrons coming out of your wall socket, but now we also want your ‘thinkons’! We are eager to move ahead and conduct better searches so that together we can find any signals that are out there! We want you to become part of our team and start adding ‘Earthling’ to your personal profiles when you identify yourself to the world. It’s time to change humanity’s point of view of who we are (individually and collectively) to one that is more cosmic and inclusive.
by Andrew Hart
Modern health care information is highly heterogeneous, distributed, and difficult to leverage in downstream analyses that are critical to quality of patient care, including diagnosis, treatment, and outcome prediction. There are many data types to deal with (medical flowsheets, free text notes, lab results, measurements recorded by clinicians and automatically captured by instruments, waveforms) and a variety of competing standards and formats for organizing and transmitting this data (HL7, SNOMED, ICD-9, UMLS) – not to mention proprietary and vendor-specific stores. The information landscape is growing at a rapid pace, but medical informatics nevertheless lags far behind other domains in its ability to leverage massive amounts of data to improve service and build effective data-driven tools. One of the largest obstacles to a medical information revolution is inaccessible data locked in proprietary silos, unavailable to clinicians, researchers, and information systems alike. Any progress will require that this information be unlocked from the independent systems that collect and manage it.
NASA’s Jet Propulsion Laboratory (JPL) and the Whittier Virtual Pediatric Intensive Care Unit (Whittier VPICU) group at Children’s Hospital Los Angeles have been collaborating since 2003 in the development of open source grid software for the description, organization, management, sharing, and analysis of highly granular data from pediatric intensive care units (PICUs). We are leveraging one of NASA’s flagship grid software technologies, the Object Oriented Data Technology (OODT) framework, to assist in this regard. OODT is hosted at the Apache Software Foundation (ASF) and is a podling within the Apache Incubator. OODT provides a set of loosely coupled components for data capture, discovery, access, and distribution that can be instantiated and connected via modern web protocols and data formats (REST, RDF, etc.) for a particular deployment in a domain. The OODT framework enables CHLA clinicians, researchers, and software to access large amounts of data from a variety of proprietary sources (e.g., hospital-wide EHR systems, bedside monitors, unit-specific applications, homegrown databases) in a unified manner and can be extended readily to enable sharing of data between institutions.
Besides OODT, our project plans to utilize other open source software, including search technologies from Apache Lucene (Solr, Tika, etc.), and common platforms (Ubuntu, Red Hat, etc.) as a means for building reliable, value-added software at low cost. We also have begun development of a common semantic architecture for describing similar data from disparate sources, and we plan to continue expanding this into a comprehensive ontology for all PICU clinical data and to share it as a free, open standard.
Our long-term goal is to construct a national distributed data-sharing network to drive the next generation of research into data-driven decision support tools and comparative effectiveness and outcomes analysis. Whittier VPICU and JPL both have experience in building such national collaborative networks: CHLA helped to develop a network of over 80 PICUs that share limited datasets for performance evaluation; JPL has used OODT to construct a variety of scientific data sharing networks, including the National Cancer Institute’s Early Detection Research Network (EDRN).
In this talk, we will discuss the current state of OODT and its successful deployment in projects such as EDRN, the motivation for its use as a means of unlocking and unifying health data at CHLA and across other institutions, and our experiences in leveraging open source software to provide a foundation for building advanced data-driven clinical decision support systems to improve the quality of pediatric intensive care going forward.
by John Mertic
Here’s the scenario: you’ve written a PHP application designed to run on Linux, Apache, and MySQL. Now you have a customer that wants to run it on Windows. Or on Oracle. Or they prefer Memcache to APC. How do you do it without sacrificing performance, stability, simplicity, and your own sanity?
In this talk, we’ll look at how we approached this problem at SugarCRM, and what lessons we learned in the process.
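One common shape for the answer – shown here as a hedged sketch in Python rather than PHP, and not as SugarCRM's actual design – is to hide each swappable dependency (database, cache, platform quirks) behind a tiny common interface and select the concrete driver from configuration, so application code never names a backend. Every class and key below is hypothetical.

```python
class CacheBackend:
    """Common interface every cache driver must implement."""
    def get(self, key):
        raise NotImplementedError
    def set(self, key, value):
        raise NotImplementedError

class InMemoryCache(CacheBackend):
    """Stand-in for a local in-process cache such as APC."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value

def make_cache(config):
    """Factory: application code asks for 'a cache', never a backend.
    A Memcache driver would simply be another entry in this table."""
    drivers = {"memory": InMemoryCache}
    return drivers[config["cache_driver"]]()

cache = make_cache({"cache_driver": "memory"})
cache.set("user:1", "alice")
print(cache.get("user:1"))  # -> alice
```

Swapping Oracle for MySQL, or Memcache for APC, then becomes a configuration change plus one new driver class, rather than an audit of every call site.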
by Paul Fenwick
Technology advances through the creation of new inventions. Devices, protocols, machines, and ideas all increase the breadth of human knowledge, and make life easier for us all… At least in theory.
In reality, the advance of progress is littered with bad ideas. A profound lack of foresight, thinking, or just plain common sense has left us with countless artefacts of mis-design. What’s worse, we often build upon such twisted horrors in the creation of new technology.
Leeches that predict the weather, fire alarms that trap their users, poorly thought-out medicines, toys which irradiate, mangle, or intoxicate their users, and a bizarre array of poorly conceived ideas are just a few of the inventions that litter the otherwise noble pursuit of applied science.
Join internationally acclaimed speaker Paul Fenwick as we examine some of the worst design decisions ever made.
by Melanie Swan
Biology is the next important area where open source models could substantially advance progress and solve existing challenges. DIYbio genomics is the idea of having open collaboration platforms for citizen science using genomic data. Enough individuals are in possession of SNP genotyping data from direct-to-consumer genomic services such as 23andme, deCODEme, Navigenics, etc., that collaborative citizen science projects can begin. Collaborations could extend to a number of domains including ancestry, health conditions, and athletic performance.
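As a concrete starting point for the kind of citizen-science collaboration described above, here is a minimal sketch of reading direct-to-consumer SNP data in the tab-separated layout used by 23andme-style raw downloads (rsid, chromosome, position, genotype, with `#`-prefixed header comments). The sample lines and field handling are illustrative, not taken from any participant's file.

```python
def parse_raw_snps(lines):
    """Parse tab-separated SNP rows into a dict keyed by rsid."""
    snps = {}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip header comments and blank lines
        rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
        snps[rsid] = {"chrom": chrom, "pos": int(pos), "genotype": genotype}
    return snps

sample = [
    "# rsid\tchromosome\tposition\tgenotype",
    "rs4477212\t1\t72017\tAA",
    "rs3094315\t1\t742429\tAG",
]
data = parse_raw_snps(sample)
print(data["rs3094315"]["genotype"])  # -> AG
```

Once many participants' files are normalized into a structure like this, projects on ancestry, health conditions, or athletic performance can compare genotypes at shared rsids across the cohort.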
by Phillip Longman
There is a global consensus that integrated electronic health records are essential to improving health outcomes, providing better quality of care, ensuring patient safety, and reducing costs. Unfortunately, the healthcare software industry has yet to produce the effective clinical and administrative solutions that are desperately needed worldwide. The lesson for ARRA funding is that pouring more money into the same approaches and solutions will continue to yield the same results.
The VA stands out as a role model for how software can be developed and improved to create the kind of fundamental paradigm shift that is essential to catalyzing real change. The session will explore how the VA used open, collaborative software development and improvement processes to drive clinical and cost improvement and transform itself into one of the best managed health systems in the world.
19th–23rd July 2010