Some decomposition methods for revenue management
Abstract
Working within a Markov decision process (MDP) framework, we study revenue management policies that combine aspects of mathematical programming approaches and pure MDP methods by decomposing the problem by time, state, or both. The "time decomposition" policies employ heuristics early in the booking horizon and switch to a more-detailed decision rule closer to the time of departure. We present a family of formulations that yield such policies and discuss versions of the formulation that have appeared in the literature. Subsequently, we describe sampling-based stochastic optimization methods for solving a particular case of the formulation. Numerical results for two-leg problems suggest that the policies perform well. By viewing the MDP as a large stochastic program, we derive some structural properties of two-leg problems. We show that these properties cannot, in general, be extended to larger networks. For such larger networks we also present a "state-space decomposition" approach that partitions the network problem into two-leg subproblems, each of which is solved. The solutions of these subproblems are then recombined to obtain a booking policy for the network problem.
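The "time decomposition" idea in the abstract — a crude rule early in the booking horizon, an exact MDP decision rule near departure — can be illustrated with a toy single-leg model. This is a hypothetical sketch, not the paper's formulation: the fares, probabilities, switch time, and bid price below are illustrative assumptions.

```python
from functools import lru_cache

# Illustrative single-leg data: two fare classes, at most one request per period.
FARES = (400.0, 150.0)   # fares per class (assumed values)
PROBS = (0.2, 0.4)       # per-period request probability per class (assumed)

@lru_cache(maxsize=None)
def value(t, x):
    """Expected revenue-to-go with t periods remaining and x seats unsold
    (standard single-leg dynamic program)."""
    if t == 0 or x == 0:
        return 0.0
    keep = value(t - 1, x)  # revenue-to-go if this period's request is rejected
    v = keep
    for fare, p in zip(FARES, PROBS):
        # Accept a class only when fare covers the marginal seat value.
        v += p * max(0.0, fare + value(t - 1, x - 1) - keep)
    return v

def accept(t, x, fare, switch_time=5, bid_price=200.0):
    """Time-decomposition policy: a static bid-price heuristic far from
    departure, switching to the exact marginal-value rule once t <= switch_time."""
    if x == 0:
        return False
    if t > switch_time:
        return fare >= bid_price                               # heuristic phase
    return fare >= value(t - 1, x) - value(t - 1, x - 1)       # MDP phase
```

The switch time is the key design choice the paper's family of formulations parameterizes: moving it earlier trades computation for decision quality.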
Additional information
WOS ID: | WOS:000249464700004 |
Journal: | TRANSPORTATION SCIENCE |
Volume: | 41 |
Issue: | 3 |
Publisher: | INFORMS |
Publication date: | 2007 |
Start page: | 332 |
End page: | 353 |
DOI: | 10.1287/trsc.1060.0184 |
Notes: | ISI |