2021 journal article

Fast Online Reinforcement Learning Control Using State-Space Dimensionality Reduction

IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 8(1), 342–353.

co-author countries: Japan πŸ‡―πŸ‡΅ United States of America πŸ‡ΊπŸ‡Έ
author keywords: Dimensionality reduction; large-scale networks; reinforcement learning (RL)
Source: Web of Science
Added: April 26, 2021

In this article, we propose a fast reinforcement learning (RL) control algorithm that enables online control of large-scale networked dynamic systems. RL is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is large, conventional RL can result in unacceptably long learning times. The proposed approach is to construct a compressed state vector by projecting the measured state through a projective matrix. This matrix is constructed from online measurements of the states in a way that it captures the dominant controllable subspace of the open-loop network model. Next, an RL controller is learned using the reduced-dimensional state instead of the original state such that the resulting cost is close to the optimal LQR cost. Numerical benefits as well as the cyber-physical implementation benefits of the approach are verified using illustrative examples including an example of wide-area control of the IEEE 68-bus benchmark power system.