On variance reduction for stochastic smooth convex optimization with multiplicative noise

Jofre, A; Thompson, P

Keywords: acceleration, complexity, multiplicative noise, variance reduction, stochastic approximation, dynamic sampling, smooth convex optimization, composite optimization

Abstract

We propose dynamic sampled stochastic approximation (SA) methods for stochastic optimization with a heavy-tailed distribution (with finite second moment). The objective is the sum of a smooth convex function and a convex regularizer. Typically, one assumes an oracle with an upper bound σ² on its variance (OUBV). We instead assume an oracle with multiplicative noise. This rarely addressed setup is more aggressive but realistic: the variance may not be uniformly bounded. Our methods achieve optimal iteration complexity and (near) optimal oracle complexity. For the smooth convex class, we use an accelerated SA method à la FISTA which achieves, given a tolerance ε > 0, the optimal iteration complexity of O(ε^{-1/2}) with a near-optimal oracle complexity of O(ε^{-2})[ln(ε^{-1/2})]². This improves upon Ghadimi and Lan (Math Program 156:59-99, 2016), where an OUBV is assumed. For the strongly convex class, our method achieves the optimal iteration complexity of O(ln(ε^{-1})) and the optimal oracle complexity of O(ε^{-1}). This improves upon Byrd et al. (Math Program 134:127-155, 2012), where an OUBV is assumed. In terms of variance, our bounds are local: they depend on the variance σ(x*)² at solutions x* and the per-unit-distance multiplicative variance σ_L². For the smooth convex class, there exist policies such that our bounds resemble, up to absolute constants, those obtained in the mentioned papers if an OUBV with σ² := σ(x*)² were assumed. For the strongly convex class, this property holds exactly if the condition number is estimated, or in the limit for better conditioned problems or for larger initial batch sizes. In any case, if an OUBV is assumed, our bounds are sharper, since typically max{σ(x*)², σ_L²} < σ².
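The dynamic sampling idea described above (growing mini-batches inside an accelerated, FISTA-style loop, under an oracle whose gradient-noise variance scales with the distance to the solution) can be illustrated with a minimal NumPy sketch on a toy quadratic. All names, the geometric batch-growth policy, and the parameter values below are illustrative assumptions for exposition, not the authors' exact scheme, sampling policy, or constants; the regularizer is taken to be zero, so the prox step degenerates to a plain gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem(d=20):
    # Smooth, strongly convex quadratic f(x) = 0.5 x'Qx - b'x with known minimizer.
    A = rng.standard_normal((d, d)) / np.sqrt(d)
    Q = A.T @ A + 0.5 * np.eye(d)
    x_star = rng.standard_normal(d)
    b = Q @ x_star
    return Q, b, x_star

def noisy_grad(Q, b, x, x_star, batch, noise_scale=0.5):
    # Multiplicative-noise oracle: the per-sample gradient error scales with
    # ||x - x_star||, so no uniform variance bound sigma^2 exists; averaging
    # over `batch` samples is the variance-reduction mechanism.
    g = Q @ x - b
    scale = noise_scale * np.linalg.norm(x - x_star)
    noise = scale * rng.standard_normal((batch, x.size)).mean(axis=0)
    return g + noise

def fista_dynamic_sampling(Q, b, x_star, iters=200, batch0=8, growth=1.05):
    # FISTA-style accelerated gradient with geometrically growing batch sizes
    # (an assumed stand-in for a dynamic sampling policy).
    L = np.linalg.eigvalsh(Q)[-1]          # Lipschitz constant of grad f
    x = np.zeros_like(b)
    y = x.copy()
    t = 1.0
    batch = float(batch0)
    for _ in range(iters):
        g = noisy_grad(Q, b, y, x_star, int(batch))
        x_new = y - g / L                  # prox step with zero regularizer
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        batch *= growth                    # dynamic sampling: grow the batch
    return x

Q, b, x_star = make_problem()
x_hat = fista_dynamic_sampling(Q, b, x_star)
err = np.linalg.norm(x_hat - x_star)
print(f"relative error: {err / np.linalg.norm(x_star):.3f}")
```

Because the noise is multiplicative, its magnitude shrinks as the iterates approach x*, while the growing batches suppress the remaining variance; in this toy run the iterates contract toward the known minimizer.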

More information

Title according to WOS: On variance reduction for stochastic smooth convex optimization with multiplicative noise
Journal: MATHEMATICAL PROGRAMMING
Volume: 174
Issue: 1-2
Publisher: SPRINGER HEIDELBERG
Publication date: 2019
Start page: 253
End page: 292
Language: English
DOI: 10.1007/s10107-018-1297-x

Notes: ISI