Saturday 24th October, 2015
5:00pm to 7:00pm
Highly constrained, high-dimensional, and nonlinear optimizations are found at the root of many of today's forefront problems in statistics, quantitative finance, risk, operations research, materials design, and other predictive sciences. Tools for optimization, however, had not changed much in the past 40 years -- until very recently. The abundance of parallel computing resources has stimulated a shift away from using reduced models to solve statistical and predictive problems, and toward more direct methods for solving high-dimensional nonlinear optimization problems.
This tutorial will introduce modern tools for solving optimization problems -- beginning with traditional methods, and extending to solving high-dimensional non-convex optimization problems with highly nonlinear constraints. We will start by introducing the cost function, and its use in local and global optimization. We will then address how to monitor and diagnose your optimization convergence and results, tune your optimizer, and utilize compound termination conditions. This tutorial will discuss building and applying box constraints, penalty functions, and symbolic constraints. We will then demonstrate methods to efficiently reduce search space through the use of robust optimization constraints. Real-world inverse problems can be expensive, so we will show how to enable your optimization to seamlessly leverage parallel computing. Large-scale optimizations can also benefit greatly from efficient solver restarts and the saving of state. This tutorial will cover using asynchronous computing for results caching and archiving, dynamic real-time optimization, and dimensional reduction. Next we will discuss new optimization methods that leverage parallel computing to perform blazingly-fast global optimizations and n-dimensional global searches. Finally, we will close with applications of global optimization in statistics and quantitative finance.
The audience need not be an expert in optimization, but should have interest in solving hard real-world optimization problems. We will begin with a walk through some introductory optimizations, learning how to build confidence in understanding your results. By the end of the tutorial, participants will have working knowledge of how to use modern constrained optimization tools, how to enable their solvers to leverage high-performance parallel computing, and how to utilize legacy data and surrogate models in statistical and predictive risk modeling.
~~introduction to optimization~~ (30/45 min)
* the cost function
* local and global optimization
* monitoring and diagnosing convergence and optimization results
* solver tuning and compound termination conditions
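The tutorial will use dedicated optimization tools, but the ideas in this section can be sketched with plain Python. A minimal, illustrative example (all names here are hypothetical, not the tutorial's API): a quadratic cost function minimized by gradient descent, stopping on a compound termination condition -- a small gradient norm OR an iteration cap:

```python
def cost(x):
    """Convex quadratic cost with its minimum at x = (3, -1)."""
    return (x[0] - 3.0)**2 + 2.0 * (x[1] + 1.0)**2

def grad(x):
    """Analytic gradient of cost."""
    return [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]

def minimize(x, step=0.1, gtol=1e-8, maxiter=10000):
    """Gradient descent with a compound termination condition:
    stop when the gradient norm drops below gtol OR maxiter is hit."""
    for it in range(maxiter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < gtol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, it

xmin, iters = minimize([0.0, 0.0])
```

Comparing the returned iteration count against the cap is a first diagnostic of convergence: hitting `maxiter` means the tolerance, step size, or starting point deserves a closer look.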
~~penalty functions and constraints~~ (30/60 min)
* box constraints
* applying penalty functions
* reducing search space with constraints
* applying symbolic constraints
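A minimal sketch of the ideas above, assuming nothing about the tutorial's actual tools: a quadratic penalty converts an equality constraint into an extra cost term, box constraints are enforced by clipping, and a brute-force grid search then recovers the constrained minimum near (0.5, 0.5):

```python
def cost(x):
    """Unconstrained minimum at the origin."""
    return x[0]**2 + x[1]**2

def penalty(x, weight=1e3):
    """Quadratic penalty for violating the equality constraint x0 + x1 = 1."""
    return weight * (x[0] + x[1] - 1.0)**2

def clip(x, lo, hi):
    """Enforce box constraints lo <= x_i <= hi by clipping."""
    return [max(lo, min(hi, xi)) for xi in x]

def penalized(x):
    return cost(x) + penalty(x)

# box constraints in action: an infeasible candidate is clipped into the box
assert clip([-0.3, 1.7], 0.0, 1.0) == [0.0, 1.0]

# brute-force grid search over the box [0, 1] x [0, 1]
grid = [i / 100.0 for i in range(101)]
best = min((penalized([x, y]), (x, y)) for x in grid for y in grid)
```

With a large penalty weight, the unconstrained minimizer of `penalized` sits very close to the true constrained optimum at (0.5, 0.5); the weight trades off constraint satisfaction against distortion of the cost surface.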
~~leverage asynchronous and parallel computing~~ (30/45 min)
* parallel function evaluations and solver iterations
* solver restarts and saving state
* dynamic real-time optimization
* automated dimensional reduction
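As a sketch of the ideas in this section (not the tutorial's actual API): candidate solutions can be evaluated concurrently, and solver state can be serialized so a long-running optimization can be saved and restarted. A thread pool is used here for simplicity; an expensive cost function would typically warrant a process pool or a cluster:

```python
import pickle
from concurrent.futures import ThreadPoolExecutor

def cost(x):
    """Toy cost function; imagine an expensive model evaluation here."""
    return sum(xi**2 for xi in x)

# a small population of candidate solutions
candidates = [[i, -i] for i in range(8)]

# evaluate the whole population concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    values = list(pool.map(cost, candidates))

best = min(zip(values, candidates))

# save solver state, so the search can be restarted where it left off
state = pickle.dumps({"best": best, "evaluated": candidates})
restored = pickle.loads(state)
```

The same map-over-candidates pattern extends naturally from threads to processes to distributed workers, which is what makes parallel function evaluation such a cheap win for expensive inverse problems.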
~~ensemble optimization and global searches~~ (30/45 min)
* blazingly-fast global optimization
* using global search to find all minima, maxima, and turning points
* building a surrogate model through optimal surface interpolation
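A minimal illustration of a multistart global search (function names are illustrative): the double-well f(x) = (x² - 1)² has two minima, at x = ±1, and local descents launched from a handful of starting points collect both of them:

```python
def f(x):
    """Double-well function with global minima at x = -1 and x = +1."""
    return (x * x - 1.0)**2

def dfdx(x):
    """Analytic derivative of f."""
    return 4.0 * x * (x * x - 1.0)

def descend(x, step=0.01, iters=5000):
    """Plain gradient descent from a single starting point."""
    for _ in range(iters):
        x -= step * dfdx(x)
    return x

# launch local searches from several starting points across [-2, 2]
starts = [-2.0, -1.3, -0.6, 0.6, 1.3, 2.0]
minima = sorted({round(descend(x0), 6) for x0 in starts})
```

Deduplicating the converged endpoints (here, by rounding) is what turns many local searches into a map of all the minima; the same bookkeeping extends to maxima and turning points by descending on -f or |f'|.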
~~optimization in parameter sensitivity, statistics, and risk modeling~~ (30/45 min)
* the cost metric
* statistical and probabilistic constraints
* information constraints from surrogate models and legacy data
* application to quantitative finance and statistics
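As a toy illustration of a statistical constraint (a sketch, not the tutorial's method): a moment constraint on the sample mean is imposed as a penalty term, steering gradient descent away from the unconstrained optimum and toward parameters whose mean matches a target statistic:

```python
from statistics import mean

def cost(x):
    """Pulls every parameter toward 2 in the absence of constraints."""
    return sum((xi - 2.0)**2 for xi in x)

def stat_penalty(x, target_mean=3.0, weight=1e3):
    """Penalty requiring the sample mean of the parameters to equal 3."""
    return weight * (mean(x) - target_mean)**2

def descend(x, step=0.002, iters=20000):
    """Gradient descent on cost + stat_penalty (gradient written out by hand)."""
    n = len(x)
    for _ in range(iters):
        m = mean(x)
        g = [2.0 * (xi - 2.0) + 2000.0 * (m - 3.0) / n for xi in x]
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x = descend([0.0, 0.0, 0.0, 0.0])
```

The converged parameters sit near 3 rather than 2: the moment constraint dominates the pointwise cost. Constraints built from surrogate models or legacy data follow the same pattern, with the penalty measuring disagreement against the available information.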
Mike has been a research scientist at Caltech since 2002, and is co-founder of the UQ Foundation, a non-profit for the advancement of predictive science. Mike is the author of several Python packages, including mystic (highly-constrained non-convex optimization and uncertainty quantification), pathos (parallel graph management and execution in heterogeneous computing), and dill (serialize all of Python). His software is the backbone of several research projects on risk analysis and predictive science, and is leveraged by several third-party packages in machine learning and parallel computing. He has over fifteen years of teaching experience, and has given hundreds of conference and workshop presentations.