2018 journal article

ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization

SIAM Journal on Optimization.

Sara Shashaani

author keywords: derivative-free optimization; simulation optimization; stochastic optimization; trust region
Source: ORCID
Added: October 10, 2019

We consider unconstrained optimization problems in which only “stochastic” estimates of the objective function are observable, as replicates from a Monte Carlo oracle. The oracle is assumed to provide no direct observations of the function gradient. We present ASTRO-DF, a class of derivative-free trust-region algorithms in which a stochastic local model is constructed, optimized, and updated iteratively. Function estimation and model construction within ASTRO-DF are adaptive in the sense that the extent of Monte Carlo sampling is determined by continuously monitoring and balancing measures of sampling error (variance) and structural error (model bias). This balancing of errors is designed to make the Monte Carlo effort sensitive to the algorithm's trajectory: sampling is heavier whenever an iterate is inferred to be close to a critical point and lighter when it is far away. We demonstrate the almost sure convergence of ASTRO-DF's iterates to first-order critical points when stochastic polynomial interpolation models are used. The question of combining more sophisticated models, e.g., regression or stochastic kriging, with adaptive sampling merits further investigation and should benefit from the methods of proof presented here. We speculate that ASTRO-DF's iterates achieve the canonical Monte Carlo convergence rate, although a proof remains elusive.
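The balancing idea described above can be sketched in code. The following is a minimal illustrative Python example, not the authors' exact rule: it draws replicates from a noisy oracle until the estimated standard error of the sample mean falls below a structural-error proxy proportional to the squared trust-region radius. The names `adaptive_sample`, `kappa`, and `delta` are illustrative assumptions, not identifiers from the paper.

```python
import math
import random

def adaptive_sample(oracle, kappa=1.0, delta=0.5, n0=5, n_max=10_000):
    """Estimate the objective at a point by Monte Carlo, sampling adaptively.

    Keep drawing replicates until the estimated standard error of the
    sample mean (sampling error) drops below kappa * delta**2, a stand-in
    for the structural error (model bias) at trust-region radius delta,
    or until a budget cap n_max is reached.
    """
    samples = [oracle() for _ in range(n0)]
    while len(samples) < n_max:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        std_err = math.sqrt(var / n)
        # Stop once sampling error is dominated by the bias proxy:
        # a small delta (iterate near a critical point) forces more sampling.
        if std_err <= kappa * delta ** 2:
            break
        samples.append(oracle())
    n = len(samples)
    return sum(samples) / n, n

# Example: noisy observations of f(x) = x**2 at x = 1 with N(0, 1) noise.
random.seed(0)
noisy_f = lambda: 1.0 + random.gauss(0.0, 1.0)
estimate, n_used = adaptive_sample(noisy_f, kappa=1.0, delta=0.3)
```

Note how the stopping rule couples sampling effort to the trust-region radius: shrinking `delta` from 0.5 to 0.3 tightens the standard-error threshold from 0.25 to 0.09, so many more replicates are drawn, mirroring the abstract's claim that sampling is heavier near a critical point.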