Data-Driven Risk-Averse Optimization


The management and optimization of risk is of fundamental importance in learning and decision-making tasks arising in many application areas, such as finance, energy, smart grids and cities, transportation, supply chain management, network control, resource allocation, telecommunications, robotics, big data, machine learning, and artificial intelligence.

Nevertheless, except for certain batch methods, the design of data-driven algorithms for risk-averse optimization is still in its infancy, especially in nonstationary and real-time settings, or when computational efficiency is an operational constraint.

The current state of the art thus leaves substantial room for new risk-aware optimization methods. To date, we have contributed to the data-driven optimization of a new class of convex risk measures, termed mean-semideviations, which strictly generalizes the classical central mean-upper-semideviation risk measure. Mean-semideviations extend the common approach of quantifying uncertainty via an “expected cost”, and provide an intuitive, powerful, application-driven, and operationally significant alternative both to classical, risk-neutral stochastic decision making and to (empirical) risk-minimization-based machine learning.
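To make the underlying risk measure concrete, the classical mean-upper-semideviation of order p penalizes only upward deviations of a cost above its mean: rho(X) = E[X] + c · ( E[ (X − E[X])₊^p ] )^(1/p), with p ≥ 1 and c in [0, 1]. A minimal empirical sketch of this estimate (function name and defaults are illustrative, not from the publications):

```python
import numpy as np

def mean_upper_semideviation(samples, p=1.0, c=0.5):
    """Empirical estimate of the classical mean-upper-semideviation risk:
        rho(X) = E[X] + c * ( E[ (X - E[X])_+^p ] )^(1/p),  p >= 1, c in [0, 1].
    Only deviations ABOVE the mean are penalized (risk aversion to high cost).
    """
    samples = np.asarray(samples, dtype=float)
    mu = samples.mean()                      # plug-in estimate of E[X]
    upper = np.maximum(samples - mu, 0.0)    # upper semideviations (X - E[X])_+
    return mu + c * (upper ** p).mean() ** (1.0 / p)
```

For c = 0 this reduces to the risk-neutral expected cost; increasing c trades expected performance for protection against upside cost excursions.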

We have introduced the MESSAGEp algorithm, a compositional subgradient procedure for iteratively solving convex mean-semideviation problems to optimality.

The MESSAGEp algorithm may be seen as a decoupled variation of the general-purpose T-SCGD algorithm of Yang, Wang & Fang ([YWF18]), originally analyzed under a generic setting. By exploiting problem structure, we have proposed a new, substantially more flexible set of assumptions than those of [YWF18], under which we provide a complete asymptotic characterization of the MESSAGEp algorithm, including explicit convergence rates, extending and improving on the state of the art. We have also rigorously shown that the new framework strictly generalizes [YWF18]: it allows significantly less restrictive problem requirements and establishes the applicability of compositional stochastic optimization to a strictly wider spectrum of convex mean-semideviation risk-averse problems than previously possible.
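The compositional structure these methods exploit is that the risk functional nests an inner expectation E[F(θ, ξ)] inside an outer one, so a plain stochastic subgradient is unavailable. A generic two-timescale stochastic compositional subgradient sketch for the p = 1 mean-upper-semideviation objective illustrates the idea (this is a hedged toy illustration of the general SCGD-style template, not the actual MESSAGEp procedure; all names, step-size choices, and the quadratic test problem are assumptions):

```python
import numpy as np

def scgd_mean_semideviation(grad_F, F, sample, theta0, c=0.5,
                            iters=5000, seed=0):
    """Toy two-timescale stochastic compositional subgradient sketch for
        minimize_theta  rho(theta) = E[F] + c * E[(F - E[F])_+]   (p = 1).
    A fast-timescale running average y tracks the inner expectation
    E[F(theta_k, xi)], while theta moves on a slower timescale.
    Illustrative only; not the MESSAGEp algorithm itself.
    """
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    y = F(theta, sample(rng))              # tracker of the inner mean E[F]
    for k in range(1, iters + 1):
        alpha = 0.5 / k ** 0.75            # slow step size (decision variable)
        beta = 1.0 / k ** 0.5              # fast step size (inner tracker)
        xi1, xi2 = sample(rng), sample(rng)  # independent samples
        y = (1 - beta) * y + beta * F(theta, xi1)
        # stochastic subgradient of the mean-upper-semideviation objective:
        # grad E[F] plus the indicator-weighted upper-deviation correction
        g = grad_F(theta, xi2) + c * (F(theta, xi2) > y) * (
            grad_F(theta, xi2) - grad_F(theta, xi1))
        theta -= alpha * g
    return theta
```

On a symmetric toy problem such as F(θ, ξ) = (θ − ξ)² with ξ ~ N(0, 1), both the risk-neutral and the risk-averse minimizers sit at θ = 0, so the iterates should drift toward zero.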

This is ongoing work with many interesting directions of both theoretical and applied importance. Examples of current interest include risk-aware statistical regression, training of deep neural networks, splitting schemes and distributed risk-aware optimization, and reinforcement learning, with applications in network optimization, control and management, wireless communications, resource allocation, machine learning, and artificial intelligence.

Support: DARPA Lagrange program, U.S. Navy / SPAWAR Systems Center Pacific under Contract No. N66001-18-C-4031.

Selected Publications:

  1. D. S. Kalogerias and W. B. Powell, “Zeroth-order Algorithms for Risk-Aware Learning,” in preparation, 2019.
  2. D. S. Kalogerias and W. B. Powell, “Recursive Optimization of Convex Risk Measures: Mean-Semideviation Models,” arXiv, March 2018 (under review).
