2009 journal article

Technical Note: A Computationally Efficient Algorithm for Undiscounted Markov Decision Processes with Restricted Observations

NAVAL RESEARCH LOGISTICS, 56(1), 86–92.

By: L. Davis, T. Hodgson, R. King & W. Wei

co-author countries: United States of America
author keywords: Markov decision process; heuristics; optimal control
Source: Web Of Science
Added: August 6, 2018

Abstract: We present a computationally efficient procedure to determine control policies for an infinite-horizon Markov decision process with restricted observations. The optimal policy for the system with restricted observations is a function of the observation process, not of the unobservable states of the system; thus, the policy is stationary with respect to the partitioned state space. The algorithm we propose addresses the undiscounted average-cost case. It combines a local search with a modified version of Howard's policy iteration method (Dynamic Programming and Markov Processes, MIT Press, Cambridge, MA, 1960). We demonstrate empirically that the algorithm finds the optimal deterministic policy for over 96% of the problem instances generated. For large-scale problem instances, we demonstrate that the average cost associated with the local optimal policy is lower than the average cost associated with an integer-rounded policy produced by the algorithm of Serin and Kulkarni (Math Methods Oper Res 61 (2005), 311–328). © 2008 Wiley Periodicals, Inc. Naval Research Logistics 2009
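
To make the policy-iteration component concrete, below is a minimal sketch (Python/NumPy) of classic Howard-style policy iteration under the undiscounted average-cost criterion for a small, fully observed, unichain MDP. It is illustrative only: the paper's algorithm modifies this scheme and couples it with a local search over policies defined on the partitioned (restricted-observation) state space, which is not reproduced here. The arrays P and c, the unichain assumption, and the function name are assumptions made for the example.

# Illustrative sketch (not the authors' algorithm): Howard-style policy
# iteration for the undiscounted average-cost criterion on a small,
# fully observed, unichain MDP.
import numpy as np

def average_cost_policy_iteration(P, c, max_iter=100):
    """P[a, s, s'] : transition probabilities, c[s, a] : one-step costs."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)        # arbitrary initial policy

    for _ in range(max_iter):
        # Policy evaluation: solve g + h(s) = c(s, pi(s)) + sum_s' P h(s'),
        # pinning the bias with h(state 0) = 0 so the linear system is square.
        P_pi = P[policy, np.arange(n_states), :]  # (n_states, n_states)
        c_pi = c[np.arange(n_states), policy]     # (n_states,)
        A = np.eye(n_states) - P_pi
        A[:, 0] = 1.0                             # this column carries the gain g
        sol = np.linalg.solve(A, c_pi)
        g, h = sol[0], sol.copy()
        h[0] = 0.0

        # Policy improvement: greedy with respect to the bias values h.
        q = c.T + P @ h                           # q[a, s]
        new_policy = np.argmin(q, axis=0)
        if np.array_equal(new_policy, policy):
            break                                 # policy is stable; stop
        policy = new_policy

    return policy, g

On a small instance, average_cost_policy_iteration(P, c) returns a stationary deterministic policy together with its long-run average cost g; the paper's contribution lies in carrying out such an iteration when the policy must be constant on each block of the observation partition.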