by Danny Lange
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once your models are ready, Amazon Machine Learning makes it easy to get predictions for your application using simple APIs, without having to implement custom prediction generation code, or manage any infrastructure.
We will build an end-to-end social media listening application powered by ML technology. This application will continuously monitor all tweets that mention your company's Twitter handle, and predict whether or not your company's customer support team should reach out to the poster. By using an ML model as your first tier of support, you can lower support costs and increase customer satisfaction. The same application integrates Amazon Machine Learning with Amazon Mechanical Turk, Amazon Kinesis, AWS Lambda, and Amazon Simple Notification Service (Amazon SNS).
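The triage step of such a pipeline can be sketched in a few lines with boto3. This is a minimal, hedged illustration: the model ID, topic ARN, and attribute name are placeholders, not real resources, and the application described in the talk may be wired differently.

```python
# Hedged sketch: score a tweet with an Amazon ML real-time prediction,
# then notify the support team via SNS. All IDs below are placeholders.
ML_MODEL_ID = "ml-EXAMPLEMODELID"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:support"

def needs_support(prediction, threshold=0.5):
    """Decide from an Amazon ML prediction response whether the support
    team should reach out. Binary models return a single score: the
    probability of the positive class."""
    scores = prediction["Prediction"]["predictedScores"]
    return next(iter(scores.values())) >= threshold

def handle_tweet(text):
    import boto3  # pip install boto3
    ml = boto3.client("machinelearning")
    result = ml.predict(
        MLModelId=ML_MODEL_ID,
        Record={"SearchText": text},  # attribute name is an assumption
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )
    if needs_support(result):
        boto3.client("sns").publish(
            TopicArn=SNS_TOPIC_ARN,
            Message="Customer may need help: " + text,
        )
```

Keeping the decision rule (`needs_support`) separate from the AWS calls makes the threshold easy to tune and test offline.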
by David Jones
There’s a lot of noise about big data and cutting-edge algorithm optimisations. Returning to basics, this presentation shows that you might not need as much data as you think to get real-world benefits. Learn about machine learning in ecommerce, PredictionIO, and how we used off-the-shelf, well-implemented algorithms to get a 71% increase in revenue with an online wine retailer.
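A deployed PredictionIO engine is queried over plain HTTP. The sketch below is illustrative only: the host, port, and query fields are assumptions, not the retailer's actual engine.

```python
# Hedged sketch: querying a deployed PredictionIO engine server.
def recommendation_query(user_id, num=4):
    """Body for the engine server's POST /queries.json endpoint."""
    return {"user": user_id, "num": num}

def query_engine(host, body):
    import requests  # pip install requests
    return requests.post(host + "/queries.json", json=body).json()

# e.g. query_engine("http://localhost:8000", recommendation_query("u1", 10))
```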
Using Google's Cloud Machine Learning Services, users can set up an entire Machine Learning pipeline quickly and with limited or no Machine Learning expertise. It is also possible to build applications on top of the Prediction API that allow non-technical users to leverage the power of Machine Learning to help solve real-world problems.
By using black-box Machine Learning via Google’s Machine Learning Services, it is possible to build an end-to-end Machine Learning pipeline with little to no ML expertise. The service automatically handles complex tasks such as data preprocessing, feature selection, classifier selection, parameter tuning, model evaluation, model hosting, and model updating.
As an example of the type of apps that can be built on top of the Prediction API, the SmartAutofill spreadsheet add-on allows easy, one-click application of Machine Learning directly from a Google spreadsheet.
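A rough sketch of what a Prediction API call looks like through the `google-api-python-client` library (the project and model IDs are placeholders, and the feature values are made up):

```python
# Hedged sketch of calling a trained model via the Prediction API.
def csv_instance(features):
    """Build the JSON body that trainedmodels.predict expects."""
    return {"input": {"csvInstance": list(features)}}

def predict(service, project, model_id, features):
    # service = googleapiclient.discovery.build("prediction", "v1.6",
    #                                           credentials=...)
    return service.trainedmodels().predict(
        project=project, id=model_id, body=csv_instance(features)).execute()
```

The response contains the predicted label (or value) plus per-class scores, which is what an add-on like SmartAutofill would write back into the spreadsheet.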
This tutorial shows a smarter process for cleaning and ingesting dirty data. The PySemantic module validates and cleans data based on human-readable rules and constraints, significantly simplifying data ingestion. It provides a simple schema for defining constraints on a dataset, which are enforced before and during ingestion. The data read by a program is thus guaranteed to conform to these restrictions, saving a lot of repetitive effort. The result is data cleaning that is both efficient and scalable.
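The core idea can be sketched in plain Python. Note this is an illustration of schema-enforced ingestion in general, not PySemantic's actual API (which uses declarative schema files); the column names and rules are made up.

```python
# Illustrative sketch: a schema of constraints is enforced while rows
# are read, so downstream code only ever sees conforming records.
schema = {
    "age":   lambda v: isinstance(v, int) and 0 <= v < 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def ingest(rows, schema):
    """Yield only the rows that satisfy every constraint in the schema."""
    for row in rows:
        if all(check(row.get(col)) for col, check in schema.items()):
            yield row

clean = list(ingest([{"age": 34, "email": "a@b.com"},
                     {"age": -1, "email": "bad"}], schema))
```

Because validation happens at the ingestion boundary, every consumer of `clean` can drop its own defensive checks.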
The BigML team has been working hard for the past four years to democratize machine learning – making it more consumable, programmable, and scalable. Our well-defined workflow and powerful visualizations make it quick and easy for anyone to rapidly prototype ML solutions. However, the stunning user interface makes it easy to overlook the fact that the heart of BigML is a powerful and extensible Machine Learning API. In fact, our UI team uses the same API we expose to our customers.
In this tutorial you will learn how to perform classification, clustering, and anomaly detection tasks – all using the BigML REST API. By the end of this presentation, you will be ready to code your own predictive application in Python using BigML.
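As a taste of the workflow, the sketch below shows the shape of BigML REST calls with the `requests` library. The credentials and resource IDs are placeholders; the tutorial itself covers the full source → dataset → model → prediction chain.

```python
# Hedged sketch of the BigML REST workflow (placeholder credentials).
BIGML = "https://bigml.io"
AUTH = "?username=YOUR_USERNAME;api_key=YOUR_API_KEY"

def resource_url(resource_type):
    return BIGML + "/" + resource_type + AUTH

def prediction_body(model_id, input_data):
    """Payload for creating a prediction from a trained model."""
    return {"model": model_id, "input_data": input_data}

def create(resource_type, body):
    import requests  # pip install requests
    return requests.post(resource_url(resource_type), json=body).json()

# Typical flow: create("source", ...) -> create("dataset", ...) ->
# create("model", ...) -> e.g.
# create("prediction", prediction_body("model/abc123", {"petal length": 4.2}))
```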
by Yan Zhang
This talk introduces the landscape and challenges of predictive maintenance applications in industry, illustrates how to formulate the problem (data labeling and feature engineering) with three machine learning models (regression, binary classification, and multi-class classification) using a publicly available aircraft engine run-to-failure data set, and showcases how the models can be conveniently trained and compared with different algorithms in Azure ML.
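The labeling step for run-to-failure data can be sketched as follows. The window sizes and label names here are assumptions for illustration: for each engine, remaining useful life (RUL) at a cycle is the final cycle minus the current cycle, a binary label marks cycles within a failure window, and a multi-class label bands the RUL into severity levels.

```python
# Hedged sketch: derive the three targets (regression, binary,
# multi-class) from one engine's run-to-failure cycle history.
def label_engine(cycles, w1=30, w2=15):
    max_cycle = max(cycles)
    labeled = []
    for c in cycles:
        rul = max_cycle - c                 # regression target
        binary = 1 if rul <= w1 else 0      # fails within w1 cycles?
        multi = 2 if rul <= w2 else binary  # 0 healthy, 1 warning, 2 critical
        labeled.append({"cycle": c, "rul": rul,
                        "label1": binary, "label2": multi})
    return labeled

rows = label_engine([1, 2, 3], w1=2, w2=1)
```

The same feature table then feeds all three model formulations, which is what makes comparing them in Azure ML convenient.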
by Michael Wang
I will examine predictive technologies in the light of the history of technology and its prediction. No matter how shiny and new, a new technology is still a technology, and there are general patterns that seem to recur. We can learn from those patterns if we pay attention to them.
In particular I will look at the challenge of predicting the impact of new technologies, talk about how they evolve, and the role that modularity, standards and interoperability play in their evolution.
I will talk more specifically about some of the particular challenges of making APIs and interfaces for predictive technologies such as machine learning, and speculate on the prospects for making machine learning a service, and more of a mature engineering discipline. In passing I will briefly demonstrate some recent machine learning work from NICTA.
by Nicolas Hohn
This presentation will focus on anomaly detection for network data streams, where the aim is to predict a distribution of future values and flag unlikely situations. Challenges in both data science and engineering will be discussed, such as the accuracy, robustness, and scalability of the prediction API. An example of a production deployment will also be presented.
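A toy version of this idea, not the production system, is to maintain a running estimate of the distribution of recent values and flag observations that fall far outside it. The window size and threshold below are arbitrary illustrative choices.

```python
# Illustrative sketch: flag a stream value as anomalous when it lies
# more than k standard deviations from the recent window's mean.
from collections import deque
import math

class StreamDetector:
    def __init__(self, window=100, k=3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        """Return True if x is unlikely under the recent distribution."""
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal history
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(x - mean) > self.k * std
        self.values.append(x)
        return anomalous

d = StreamDetector(window=50, k=3.0)
flags = [d.observe(v) for v in [10.0] * 20 + [10.2, 100.0]]
```

A real deployment replaces the windowed mean/variance with a proper model of the value distribution, and that is where the accuracy, robustness, and scalability questions arise.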
by Alex Housley
After operating for three years as a “black box” predictive API, Seldon recently open-sourced its entire predictive stack. Alex will talk about Seldon’s journey from closed to open: the challenges and pitfalls, architectural considerations, case studies, changes to business models, and new opportunities for partnership across the full stack - between both open and closed technology providers.
by Brian Gawalt
Build a better, faster, more efficient predictive API with the Actor model of programming. Latency, logging, and full utilization are all easily handled within this framework. Upwork's (formerly Elance-oDesk) freelancer availability model — anticipating who's looking for work right now — is now a real-time service, without a costly or complicated build-out of our stack or our datacenter, thanks to the Actor model.
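The essence of the pattern can be sketched in a few lines of Python. This is a minimal illustration, not Upwork's implementation (a production system would use a framework such as Akka): each actor owns a mailbox and processes one message at a time, so its state needs no locks, and per-message concerns like latency tracking and logging have a natural home in the handler.

```python
# Minimal actor sketch: a mailbox drained by a single worker thread.
import queue
import threading

class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)  # non-blocking for the caller

    def stop(self):
        self.mailbox.put(None)  # poison pill ends the worker loop
        self._thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.handler(msg)

# Hypothetical example: track who currently looks available for work.
available = set()

def on_message(msg):
    user, looking = msg
    available.add(user) if looking else available.discard(user)

tracker = Actor(on_message)
tracker.send(("alice", True))
tracker.send(("bob", True))
tracker.send(("alice", False))
tracker.stop()
```

Because callers only enqueue messages, the API stays responsive even when the model behind it is busy.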
by Sharat Chikkerur
In this talk, we describe AzureML: a web service enabling software developers and data scientists to build predictive applications. AzureML provides several unique features, including (a) Collaboration, (b) Versioning, (c) Graphical authoring, (d) Push-button operationalization, and (e) Monetization. We outline the design principles, system design, and lessons learned in building such a system.
Diversity in machine learning APIs works against realising machine learning's full potential by making it difficult to compose multiple algorithms. This paper introduces the Protocols and Structures for Inference (PSI) service architecture and specification for presenting learning algorithms and data as RESTful web resources that are accessible via a common but flexible and extensible interface. This is joint work with Dr. Mark Reid of the Australian National University and NICTA and Dr. Barry Drake of Canon Information Systems Research Australia.
[Sponsored presentation] In the past year, Machine Learning has been getting attention as a necessary tool for doing something useful with the ever-growing volume of data. This misleads some into believing that Machine Learning is new, but the truth is that the core algorithms and concepts have been around for a long time. What is new, though, is the confluence of Machine Learning and Cloud Computing, which for the first time in history is making learning from large data possible through the use of programmable APIs.
Since 2011, BigML has worked to implement this vision of a programmable web powered by a seamless machine learning layer in the cloud, one that will enable future smart apps to adapt themselves to a changing context in real time as new information arrives. In this presentation we will trace the history of Machine Learning from its origins to the present, and discuss the future evolution that must occur in terms of simplicity, programmability, importability/exportability, composability, specialization, and standardization in order for it to make an impact in the “real world” and make this vision come alive.
6th–7th August 2015