Sunday 25th October, 2015
3:00pm to 3:30pm
It is relatively easy to build a first-cut machine learning model. But what does it take to build a reasonably good model, or even a state-of-the-art model?
Ensemble models. They are our best friends. They help us exploit the power of computing. Of course, ensemble methods aren't new. They form the basis for some extremely powerful machine learning algorithms like random forests and gradient boosting machines. The key point about ensembles is that consensus from diverse models is more reliable than a single source. This talk will cover how we can combine the outputs of various base models (logistic regression, support vector machines, decision trees, neural networks, etc.) to create a stronger, better model output.
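As a minimal illustration of this idea (not from the talk itself), diverse base models can be combined by majority or probability-averaged vote with scikit-learn's `VotingClassifier`; the dataset and hyperparameters below are illustrative assumptions only.

```python
# Sketch: combine three diverse base models by soft (probability-averaged) voting.
# Dataset, models, and settings are illustrative choices, not the talk's material.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Three diverse base models whose consensus is taken by averaging
# their predicted class probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=3)),
    ],
    voting="soft",
)

score = cross_val_score(ensemble, X, y, cv=5).mean()
print(f"5-fold CV accuracy: {score:.3f}")
```

The `voting="soft"` setting averages probabilities rather than counting hard votes, which usually works better when the base models produce calibrated probability estimates.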
The primary goal of the talk is to answer the following questions:
1) Why do ensembles produce better output?
2) How do ensembles produce better output?
3) When data scales, what's the impact? What are the trade-offs to consider?
4) Can ensemble models eliminate the need for expert domain knowledge?
This talk will cover various strategies to create ensemble models.
Using third-party Python libraries along with scikit-learn, this talk will demonstrate a range of ensemble methodologies.
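One commonly demonstrated methodology is stacking, where out-of-fold predictions from base models become features for a meta-learner. The sketch below is a hedged assumption about what such a demo might look like, using only scikit-learn; the dataset, base models, and meta-learner are illustrative.

```python
# Sketch: two-level stacking. Base-model predictions (made out-of-fold so the
# meta-learner never trains on leaked predictions) feed a logistic regression.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [
    SVC(probability=True, random_state=0),
    DecisionTreeClassifier(max_depth=4, random_state=0),
]

# Out-of-fold positive-class probabilities on the training set.
meta_train = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
# Refit each base model on all training data to score the test set.
meta_test = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    for m in base_models
])

meta = LogisticRegression().fit(meta_train, y_train)
acc = accuracy_score(y_test, meta.predict(meta_test))
print(f"Stacked accuracy: {acc:.3f}")
```

The out-of-fold step is the crux of stacking: without it, the meta-learner would be trained on predictions the base models made for points they had already seen, an optimistic signal that does not generalize.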
Real-life examples from the enterprise world will be showcased in which ensemble models consistently produced better results than the single best-performing model.
There will also be emphasis on the following: feature engineering, model selection, the bias-variance trade-off, and generalization.
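The bias-variance point can be made concrete with bagging: averaging many high-variance models (deep decision trees) tends to reduce variance and improve generalization. This is an illustrative sketch, not the talk's material; the dataset and settings are assumptions.

```python
# Sketch: compare a single unpruned decision tree against a bagged ensemble
# of 50 such trees. Bagging averages out the individual trees' variance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

single = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
bagged = cross_val_score(
    BaggingClassifier(DecisionTreeClassifier(random_state=0),
                      n_estimators=50, random_state=0),
    X, y, cv=5,
).mean()
print(f"single tree: {single:.3f}  bagged trees: {bagged:.3f}")
```

A single deep tree has low bias but high variance; bagging keeps the low bias while the averaging over bootstrap resamples shrinks the variance, which is exactly the mechanism behind random forests.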
Creating better models is a critical component of building a good data science product. This talk will cover the modeling methodologies from this mindmap for building a data science product: https://atlas.mindmup.com/2015/0...;