Sessions at Strata 2012 about Big Data on Thursday 1st March

  • Democratization of Data Platforms

    by Jon Gosier

    Big data isn’t just an abstract problem for corporations, financial firms, and tech companies. To your mother, a ‘big data’ problem might simply be too much email, or a lost file on her computer.

    We need to democratize access to the tools used for understanding information by taking the hard work out of drawing insight from excessive quantities of information, helping humans process content more efficiently and capture more of their world.

    Tools to effectively do this need to be visual, intuitive, and quick. This talk looks at some of the data visualization platforms that are helping to solve big data problems for normal people.

    At 8:50am to 9:05am, Thursday 1st March

    In Mission City Ballroom, Santa Clara Convention Center

  • 5 Big Questions about Big Data

    by Luke Lonergan

    How are businesses using big data to connect with their customers, deliver new products or services faster and create a competitive advantage? Luke Lonergan, co-founder & CTO, Greenplum, a division of EMC, gives insight into the changing nature of customer intimacy and how the technologies and techniques around big data analysis provide business advantage in today’s social, mobile environment – and why it is imperative to adopt a big data analytics strategy.

    This session is sponsored by Greenplum, a division of EMC²

    At 9:05am to 9:15am, Thursday 1st March

    In Mission City Ballroom, Santa Clara Convention Center

  • Helping Banks Build Better Relationships

    by Schwark Satyavolu

    Big Data provides big banks with the means to monetize the transaction data stream in ways that are both pro-consumer and pro-merchant. By utilizing data-driven personalization services, financial institutions can offer a better customer experience and boost customer loyalty. For example, integrating rewards and analysis within a consumer’s online banking statement can save consumers an average of $1,000 per year just by comparing plans, pricing, and usage habits within the wireless, cable, and gas categories. Financial institutions benefit by increasing their relationship value with customers. Merchants benefit from richer analytics and can reward loyal customers with the deals that matter most, based on their purchasing habits.

    These data-driven services strengthen that relationship: 94% of consumers indicate they would use a card tied to money-saving discounts over one that was not, and 3 in 4 admit they would switch banks if theirs did not offer loyalty rewards.

    Big data is not just big stakes for loyalty: it can also drive customer acquisition and increase market share (or credit card ‘share of wallet’), which in turn feeds other banking revenue streams.

    Furthermore, data-driven offerings help convert offline customers to online banking and bill pay, a cost-reduction potential of $167 per account per year, or $8.3 billion annually, according to Javelin.
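
    As a quick sanity check, the two Javelin figures together imply a population of roughly 50 million convertible accounts. A minimal sketch, assuming (this is not stated in the abstract) that the $8.3 billion aggregate is simply the per-account saving multiplied by the number of accounts:

    ```python
    # Back-of-the-envelope check on the Javelin figures quoted above.
    # Assumption (not stated in the abstract): the aggregate is the
    # per-account saving times the number of convertible accounts.
    saving_per_account = 167    # USD per account per year (Javelin)
    industry_total = 8.3e9      # USD per year (Javelin)

    implied_accounts = industry_total / saving_per_account
    print(f"Implied accounts: {implied_accounts / 1e6:.1f} million")  # ~49.7 million
    ```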

    At 11:30am to 12:10pm, Thursday 1st March

    In Mission City B4, Santa Clara Convention Center

  • Big Data Applications in Action

    by Gary Lang

    Gary Lang, Senior VP of Engineering at MarkLogic, will discuss the concept of Big Data Applications and walk through three in-production implementations. These include how LexisNexis built a next-generation search application, how a major financial institution simplified its technology infrastructure for managing complex derivative trades, and how a major movie studio implemented an Enterprise Data Layer for access to all of its content across multiple silos.

    This session is sponsored by MarkLogic

    At 1:30pm to 2:10pm, Thursday 1st March

    In Ballroom H, Santa Clara Convention Center

  • Big Data Big Costs?

    by Vineet Tyagi

    Enterprises today are well on their way to putting Big Data to work. Many are experimenting with Big Data, if not already in production. The data deluge is forcing everyone to ask the key question: what is the cost of big data analytics? This session will address some of the key concerns in creating a Big Data solution that delivers a lower cost per TB of data managed and analyzed.

    The session will discuss why nobody wants to talk about the costs involved with Hadoop, NoSQL, and other options. It will also show how to reduce costs, choose the right technology options, and address some of the unspoken issues in dealing with Big Data.

    This session is sponsored by Impetus Technologies

    At 2:20pm to 3:00pm, Thursday 1st March

    In Ballroom G, Santa Clara Convention Center

  • Big Data Meets Big Weather

    by Siraj Khaliq

    One doesn’t normally think about Big Data when the rain falls, but we’ve been measuring and analyzing Big Weather for years. Thanks to recent advances in Big Data, cloud computing, and network maturity, it is now possible to work with extremely large weather-related data sets.

    The Climate Corporation combines Big Data, climatology, and agronomics to protect the $3 trillion global agriculture industry with automated full-season weather insurance. Every day, The Climate Corporation utilizes 2.5 million weather measurements, 150 billion soil observations, and 10 trillion scenario data points to build and price its products. At any given time, more than 50 terabytes of data are stored in its systems, the equivalent of 100,000 full-length movies or 10,000,000 music tracks. All of this provides the intelligence and analysis necessary to reduce the risk that adverse weather poses to U.S. farmers; weather is the cause of more than 90% of crop loss.

    The Climate Corporation’s generation system uses thousands of servers to periodically process decades of historical data and generate 10,000 weather scenarios for each location and measurement, going out several years. This results in over 10 trillion scenario data points (e.g. an expected rainfall value at a specific place and time in the future) for use in an insurance premium pricing and risk analysis system, amounting to over fifty terabytes of data in its live systems at any given time. Weather-related data is ingested multiple times a day directly from major climate models and incorporated into The Climate Corporation’s system. Under the hood, The Climate Corporation’s Web site runs complex algorithms against a huge dataset in real time, returning a premium price within seconds. The size of this data set has grown an average of 10x every year as the company adds more granular geographic data. Hear The Climate Corporation CEO David Friedberg discuss how to apply big data principles to the real-world challenge of protecting people and businesses from the financial impact of adverse weather.
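
    To make the scenario-pricing idea concrete, here is a minimal Monte Carlo sketch in the spirit of the pipeline described above. The rainfall distribution, payout rule, and all parameters are illustrative assumptions, not The Climate Corporation’s actual model:

    ```python
    import random

    N_SCENARIOS = 10_000          # scenarios per location, as in the abstract
    RAIN_MEAN_MM = 500.0          # hypothetical full-season rainfall mean
    RAIN_SD_MM = 150.0            # hypothetical standard deviation
    DROUGHT_TRIGGER_MM = 300.0    # payout trigger: rainfall below this level
    PAYOUT_PER_MM = 40.0          # USD paid per mm of shortfall
    LOAD_FACTOR = 1.25            # margin over expected loss

    def season_rainfall() -> float:
        """Draw one full-season rainfall scenario (mm), truncated at zero."""
        return max(0.0, random.gauss(RAIN_MEAN_MM, RAIN_SD_MM))

    def payout(rainfall_mm: float) -> float:
        """Indemnity for one scenario: pay per mm of shortfall below the trigger."""
        return max(0.0, DROUGHT_TRIGGER_MM - rainfall_mm) * PAYOUT_PER_MM

    scenarios = [season_rainfall() for _ in range(N_SCENARIOS)]
    expected_loss = sum(payout(r) for r in scenarios) / N_SCENARIOS
    premium = expected_loss * LOAD_FACTOR

    print(f"Expected loss per policy: ${expected_loss:,.2f}")
    print(f"Loaded premium:           ${premium:,.2f}")
    ```

    A production system would replace the toy Gaussian with scenarios generated from decades of historical data, one set per location and measurement, which is where the 10 trillion scenario data points come from.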

    At 4:00pm to 4:40pm, Thursday 1st March

    In Mission City B4, Santa Clara Convention Center

  • Personalized Medicine and Individual Cancer Care, it is a data problem

    by Peter Kuhn

    Personalized Cancer Care: How to predict and monitor the response of cancer drugs in individual patients.

    1. Biology: Cancer spreads through the body when cancer cells leave the primary site, travel through the blood to a new site where they can settle, colonize, and expand, and eventually kill the patient.

    2. Challenge: the concentration of cancer cells is about 1 per 1 million normal white blood cells, or about 1 per 2 billion cells if you include the red blood cells. That works out to about a handful of these cells in a tube of blood (if you have given blood before, you can picture this easily; see the sketch after this list). A cell is about 10 microns in diameter.

    3. Opportunity: if we can find these cells, we can take a tube of blood at any time and characterize the disease in that patient at that point in time to make treatment decisions. We have significant numbers of drugs going through the development pipeline but no good way of deciding which drug to give at which time.

    4. Solution: create a large monolayer of 10 million cells, stain the cells, image them, and then find the cancer cells computationally by an iterative process. It is a simple, data-driven solution to a very large challenge: simple in the world of algorithms, HPC, and the cloud, and set up to revolutionize cancer care.
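
    A back-of-the-envelope check of the concentration figures in point 2. The per-mL cell count and tube volume are typical textbook values, assumed here rather than taken from the talk:

    ```python
    # The 1-in-a-million ratio is from the abstract; the rest are assumptions.
    TUBE_ML = 7.5            # assumed standard blood-draw tube volume
    WBC_PER_ML = 7e6         # ~7 million white blood cells per mL (typical)
    CTC_PER_WBC = 1 / 1e6    # ~1 cancer cell per million white cells

    wbc_in_tube = WBC_PER_ML * TUBE_ML
    ctc_in_tube = wbc_in_tube * CTC_PER_WBC
    print(f"White blood cells per tube: {wbc_in_tube:.2e}")
    print(f"Expected cancer cells per tube: {ctc_in_tube:.0f}")  # on the order of 50
    ```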

    See http://4db.us and http://epicsciences.com for more info.

    At 4:50pm to 5:30pm, Thursday 1st March

    In Ballroom E, Santa Clara Convention Center