by Douglas Merrill, Jessica Jackley, Paul Leonard and Ryan Gilbert
Technology and mathematics are transforming consumer lending. Historically, it has been nearly impossible for people with bad credit to get loans. Yet these are often the people who need credit most, whether to buy groceries or pay bills.
Until now, lenders have determined who gets a loan through a simple underwriting function based on a small amount of credit data. When that data is missing or wrong, banks deny the loan, leaving people to turn to payday loans or pawn shops, expensive options that push them further into debt.
Millions of people are being denied credit because underwriting hasn't evolved. Why use only a handful of variables when we have vast amounts of data provided by the customer, the Internet, and social media? All data is credit data, and we should use all of it to make better underwriting decisions.
Analyzing vast amounts of data, however, requires complex machine learning more akin to search engines than your corner bank. The future of financial services is to become more like a recommendation engine, and less like a place where you stand in line to deposit checks.
The panelists will discuss how to use large-scale data analysis to reinvent underwriting and replace today's antiquated methods. Better underwriting will extend good credit to people who currently have few options and materially improve the financial lives of those who need it most.
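To make the idea concrete, a richer underwriting function can score many heterogeneous signals at once, so a thin or missing credit file no longer forces an automatic denial. The sketch below is purely illustrative: the feature names, weights, and logistic form are invented assumptions, not any panelist's actual model.

```python
import math

# Hypothetical feature weights -- invented for illustration, not a real model.
WEIGHTS = {
    "credit_score_norm": 2.0,       # traditional bureau data, scaled to 0-1
    "months_at_address": 0.5,       # stability signal
    "utility_on_time_rate": 1.5,    # alternative data: bill-payment history
    "social_profile_age_yrs": 0.3,  # online-footprint signal
}
BIAS = -2.0

def approval_probability(applicant):
    """Logistic score over many heterogeneous signals.

    A missing feature simply contributes nothing to the score,
    instead of triggering an automatic denial."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# An applicant with no bureau file but a strong bill-payment history
# can still score above an approval threshold.
thin_file = {"utility_on_time_rate": 0.95, "months_at_address": 2.0}
print(round(approval_probability(thin_file), 3))
```

The point of the sketch is the shape of the decision, not the numbers: adding signals widens the set of applicants the model can say something meaningful about.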
by Adam Honore, Armando Gonzalez, Jacob Sisk and John Kittrell
Trading on news is not new. Terminals have had news readers attached since trading went electronic. What is new is who, or what, is trading on news. Born from a hybrid of technological capability, electronification of the markets, algorithmic trading, and a little influence from the intelligence community, black box trading systems now apply semantic analysis to trade on news items without a single human ever reading the story. While only 2% of trading firms were doing this two years ago, roughly one-third are exploring it today. This session looks at the data, drivers, and technology behind trading on unstructured content.
by Bruce Smith
Social media applications encounter messy user-generated data in blog posts, status updates, tweets, user profiles, etc. These documents contain free-form text that obeys no particular rules of grammar, punctuation or spelling.
If the data is so messy, how can a computer program recognize adult content or hate speech or spam? How can a computer program tell the difference between an advertisement and a product review? How can a computer program distinguish between a positive and a negative product review?
Machine learning offers some solutions. For example, given sample tweets labeled (by people) as spam or non-spam, machine learning tools can generate a program (or model) that attempts to duplicate the human judgments. You could use this kind of model in your application to filter out tweet spam.
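As a minimal sketch of that labeled-data workflow, the snippet below trains a naive Bayes classifier (one common machine learning algorithm) on a toy set of hand-labeled tweets and then classifies new ones. The sample tweets and labels are invented for illustration; a real filter would need far more training data.

```python
import math
from collections import Counter, defaultdict

def train(labeled_tweets):
    """Build a naive Bayes model from (text, label) pairs."""
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of examples
    for text, label in labeled_tweets:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(model, text):
    """Return the most likely label for a tweet under the model."""
    word_counts, label_counts = model
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior ...
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        # ... plus log likelihood of each word, with Laplace smoothing
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy hand-labeled sample (invented for illustration)
training = [
    ("free iphone click here to win", "spam"),
    ("win free cash click now", "spam"),
    ("had a great lunch with friends", "ham"),
    ("watching the game tonight", "ham"),
]
model = train(training)
print(classify(model, "click here for free cash"))  # spam
print(classify(model, "great game tonight"))        # ham
```

This is the "model that attempts to duplicate the human judgments": the labels come from people, and the classifier generalizes them to tweets it has never seen.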
In this talk we will describe:
• Some common machine learning algorithms
• Machine learning tools, free and commercial
• Acquiring and managing training data
• Extracting useful features from your documents
• Choosing the right technique for a problem
• Measuring quality and improving your model over time
• Integrating a machine-learned model with your application
Coming out of this session, you will know where you might use machine learning in your applications, and you will know how to get started.
Our goal is to present how we use the "huge data" collected through the open APIs of Twitter and other online services to empirically test and improve existing models in social sciences such as economics and sociology. Bringing together natural language processing and macroeconomics on top of these troves of machine-generated data, we also propose building innovative applications to track consumer sentiment and industry dynamics.
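A minimal sketch of the consumer-sentiment idea, assuming a hand-built word list: score each tweet by counting positive and negative words, then average the scores over a stream to form an index. The vocabulary and sample tweets below are invented; a real system would use trained sentiment models over much larger data.

```python
# Tiny word-list sentiment scorer -- vocabulary invented for illustration.
POSITIVE = {"great", "love", "happy", "cheap", "hiring"}
NEGATIVE = {"bad", "hate", "expensive", "layoffs", "broke"}

def tweet_sentiment(text):
    """Score one tweet: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_index(tweets):
    """Average sentiment across a stream of tweets, e.g. one index per day."""
    scores = [tweet_sentiment(t) for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0

sample = [
    "love the new store, great prices",
    "layoffs again, this economy is bad",
]
print(sentiment_index(sample))  # 0.0 -- the two tweets cancel out
```

Computed over millions of tweets per day, the same aggregation yields a time series that can be compared against official economic indicators.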
11th–15th March 2011