Data visualization is often where people realize the real value in underlying data. Good data visualization tools are therefore vital for many data projects to reach their full potential.
Many companies have realized this and are looking for the best solutions to address their data visualization needs. There are plenty of tools to choose from, but even for relatively simple charting, many have found themselves short of good options. As the requirements pile up, the options narrow further: cross-browser compatibility, server-side rendering, iOS support, interactivity, full control of branding, look and feel … and you’ll find yourself compromising, or – worse yet – building your own visualization library!
Building our data publishing platform – DataMarket.com – we’ve certainly faced these challenges. In this session we’ll share our findings and approach so that others can avoid our mistakes and benefit from our – sometimes hard – lessons.
We’ll also share where we see the future of online data visualization heading: the technologies we’re betting on, and how things will become easier, visualizations more effective, code easier to maintain, and applications more user-friendly as these technologies mature and develop.
Data isn’t just for supporting decisions and creating actionable interfaces. Data can create nuance, giving rise to new understandings that lead to further questions rather than just actionable decisions. In particular, curiosity and creative thinking can be driven by combining different data sets and techniques to develop a narrative that tells the story of a place: the emotions, history, and change embedded in the experience of that place.
In this session, we’ll see how far we can go in exploring one street in San Francisco, Haight Street, and see how well we can understand its geography, ebbs and flows, and behavior by combining as many data sources as possible. We’ll integrate basic public data from the city, street and mapping data from OpenStreetMap, real estate and rental listings data, data from social services like Foursquare, Yelp, and Instagram, and analyze photographs of streets from mapping services to create a holistic view of one street and see what we can understand from it. We’ll show how you can summarize this data numerically, textually, and visually, using a number of simple techniques.
We’ll cover how traditional data analysis tools like R and NumPy can be combined with tools more often associated with robotics, like OpenCV (computer vision), to create a more complete data set. We’ll also cover how traditional data visualization techniques can be combined with mapping and augmented reality to present a more complete picture of any place, including Haight Street.
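As a flavor of that combination, here is a minimal sketch of turning images into numeric summaries with NumPy. It is not the speakers’ actual pipeline: the synthetic grayscale arrays below stand in for street photos that would, in practice, be loaded via OpenCV (e.g. `cv2.imread`), and mean brightness is just one crude per-scene statistic.

```python
import numpy as np

# Hypothetical stand-ins for street photos: in a real pipeline these would
# be arrays loaded from image files with OpenCV (cv2.imread returns
# NumPy arrays, so the two libraries compose directly).
rng = np.random.default_rng(0)
photos = [rng.integers(0, 256, size=(4, 4), dtype=np.uint8) for _ in range(3)]

# One simple "numeric summary" per photo: mean pixel brightness.
brightness = [float(p.mean()) for p in photos]

# Such per-photo statistics can then be joined against other data sources
# (listings, check-ins) keyed by location along the street.
brightest = int(np.argmax(brightness))
```

The point of the sketch is only that computer-vision output is ordinary array data, so it drops straight into the same NumPy/R workflows used for the other data sources.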
These days users won’t tolerate slow applications. More often than not, the database is the bottleneck in the application. To solve this, many people add a caching tier like memcache on top of their database. This has been extremely successful but also creates some difficult challenges for developers, such as mapping SQL data to key-value pairs, maintaining consistency, and preserving transactional integrity. When you reach a certain size you may also need to shard your database, leading to even more complexity.
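The challenges above come from the common cache-aside pattern. The sketch below is a toy illustration, not VMware code: a plain dict stands in for memcache, and `fetch_from_db` is a hypothetical placeholder for a real SQL query.

```python
# Cache-aside sketch: a dict stands in for a memcache tier.
cache = {}

def fetch_from_db(user_id):
    # Hypothetical stand-in for e.g. "SELECT id, name FROM users WHERE id = ?"
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"       # mapping SQL rows onto key-value pairs
    if key in cache:
        return cache[key]         # cache hit: skip the database entirely
    row = fetch_from_db(user_id)  # cache miss: fall through to the database
    cache[key] = row              # populate the cache for the next reader
    return row

def update_user_name(user_id, name):
    # ... run the SQL UPDATE against the database here ...
    # The consistency problem: the cached copy is now stale, so every
    # writer must remember to invalidate it, and there is no transaction
    # spanning the database write and the cache eviction.
    cache.pop(f"user:{user_id}", None)
```

Each code path that writes the database must also manage the cache by hand, which is exactly the consistency and transactional-integrity burden the abstract describes.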
VMware vFabric SQLFire gives you the speed and scale you need in a substantially simpler way. SQLFire is a memory-optimized, horizontally scalable distributed SQL database. Because SQLFire is memory-oriented, you get the speed and low latency that users demand while using a real SQL interface. SQLFire is horizontally scalable, so if you need more capacity you just add more nodes, and data is automatically rebalanced. Instead of sharding, SQLFire automatically partitions data across nodes in the distributed database. SQLFire even supports replication across datacenters, so users anywhere on the globe can enjoy the same fast experience.
Stop by to learn more about how SQLFire delivers high performance without all the complexity.
This session is sponsored by VMware.
by Leigh Dodds
There are many different approaches to putting data on the web, ranging from bulk downloads through to rich APIs. These styles suit a range of different data processing and integration patterns. But the history of the web has shown that value and network effects follow from making things addressable.
Facebook’s Open Graph, Schema.org, and a recent scramble towards a “Rosetta Stone” for geodata are all examples of a trend towards linking data across the web. Weaving data into the web simplifies data integration. Big Data offers ways to mine huge datasets for insight. Linked Data creates massively inter-connected datasets that can be mined, or drawn upon to enrich queries and analysis.
This talk will look at the concept of Linked Data and how a rapidly growing number of inter-connected databases, from a diverse range of sources, can be used to contextualise Big Data.
28th February to 1st March 2012