2021 article

Decomposability and Parallel Computation of Multi-Agent LQR

2021 AMERICAN CONTROL CONFERENCE (ACC), pp. 4527–4532.

By: G. Jing, H. Bai, J. George & A. Chakrabortty

co-author countries: United States of America 🇺🇸
author keywords: Reinforcement learning; linear quadratic regulator; multi-agent systems; decomposition
Source: Web Of Science
Added: November 1, 2021

Individual agents in a multi-agent system (MAS) may have decoupled open-loop dynamics, but a cooperative control objective usually results in coupled closed-loop dynamics, thereby making the control design computationally expensive. The computation time grows even further when a learning strategy such as reinforcement learning (RL) must be applied to handle the case where the agents' dynamics are unknown. To resolve this problem, we propose a parallel RL scheme for linear quadratic regulator (LQR) design in a continuous-time linear MAS. The idea is to exploit the structural properties of two graphs embedded in the Q and R weighting matrices of the LQR objective to define an orthogonal transformation that converts the original LQR design into multiple decoupled, smaller-sized LQR designs. We show that if the MAS is homogeneous, this decomposition retains closed-loop optimality. We present conditions for decomposability, an algorithm for constructing the transformation matrix, a parallel RL algorithm, and a robustness analysis for the case where the design is applied to a non-homogeneous MAS. Simulations show that the proposed approach guarantees a significant speed-up in learning without any loss in the cumulative value of the LQR cost.
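A minimal sketch of the decomposition idea described in the abstract (not the authors' code): for a homogeneous MAS with block-diagonal open-loop dynamics and coupling entering only through the Q weight, an orthogonal transformation built from the eigenvectors of the coupling matrix splits one large LQR into N small, independent LQRs whose assembled gain matches the full coupled solution. The double-integrator agent model, the path-graph coupling matrix, and the use of a model-based Riccati solver in place of the paper's model-free RL step are all illustrative assumptions.

import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

# Per-agent (homogeneous) dynamics: double integrator, assumed for the demo.
Ai = np.array([[0.0, 1.0], [0.0, 0.0]])
Bi = np.array([[0.0], [1.0]])
Qi = np.eye(2)
Ri = np.eye(1)
n, m = Ai.shape[0], Bi.shape[1]

# Coupling among N = 3 agents enters only through Q (path-graph Laplacian
# shifted by I so every mode is penalized); R is kept block-diagonal here.
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = L + np.eye(3)
N = M.shape[0]

# Full coupled problem: block-diagonal plant, coupled cost.
A = np.kron(np.eye(N), Ai)
B = np.kron(np.eye(N), Bi)
Q = np.kron(M, Qi)
R = np.kron(np.eye(N), Ri)

# Orthogonal transformation from the eigenvectors of the coupling matrix.
lam, V = np.linalg.eigh(M)          # M = V diag(lam) V^T, V orthogonal

# Decoupled subproblems: each uses the same (Ai, Bi, Ri) with weight lam[k]*Qi,
# so they can be solved (or learned) in parallel.
K_blocks = []
for lk in lam:
    Pk = solve_continuous_are(Ai, Bi, lk * Qi, Ri)
    K_blocks.append(np.linalg.solve(Ri, Bi.T @ Pk))

# Map the small gains back to the original coordinates.
K_parallel = np.kron(V, np.eye(m)) @ block_diag(*K_blocks) @ np.kron(V.T, np.eye(n))

# Reference: solve the full coupled Riccati equation directly.
P_full = solve_continuous_are(A, B, Q, R)
K_full = np.linalg.solve(R, B.T @ P_full)

print("gains match:", np.allclose(K_parallel, K_full))   # expected: True

In the transformed coordinates z = (V^T kron I) x, the plant matrices are unchanged (homogeneity) while the cost weight becomes diag(lam) kron Qi, so each of the N subproblems only differs by the scalar eigenvalue weighting; this is what allows the small designs to run in parallel.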