2022 article

High-Fidelity Model Extraction Attacks via Remote Power Monitors

2022 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2022): Intelligent Technology in the Post-Pandemic Era, pp. 328–331.

author keywords: Neural networks; model stealing; time-to-digital converters; secure virtualization
TL;DR: It is demonstrated that a remote monitor implemented with time-to-digital converters can be exploited to steal the weights from a hardware implementation of NN inference, which expands the attack vector to multi-tenant cloud FPGA platforms. (via Semantic Scholar)
Source: Web Of Science
Added: November 7, 2022

This paper presents the first side-channel attack on neural network (NN) IPs through a remote power monitor. We demonstrate that a remote monitor implemented with time-to-digital converters (TDCs) can be exploited to steal the weights from a hardware implementation of NN inference. Such an attack removes the need for physical access to the target device and thus expands the attack surface to multi-tenant cloud FPGA platforms. Our results quantify the effectiveness of the attack on an FPGA implementation of NN inference and compare it to an attack with physical access. We demonstrate that it is indeed possible to extract the weights using differential power analysis (DPA) with 25,000 traces, provided the signal-to-noise ratio (SNR) is sufficient. The paper therefore motivates secure virtualization: to protect the confidentiality of high-value NN model IPs in multi-tenant execution environments, platform developers need to employ strong countermeasures against physical side-channel attacks.
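
To make the weight-recovery step concrete, below is a minimal sketch of correlation power analysis (a common DPA variant) recovering a single 8-bit weight from synthetic traces. The synthetic trace generation, the Hamming-weight leakage model for the multiply result, and all names and parameters are illustrative assumptions standing in for the paper's actual TDC measurement setup, not its tooling.

```python
# CPA sketch: recover one 8-bit NN weight from power traces, assuming the
# multiply (input * weight) leaks its Hamming weight at one time sample.
# Everything below is a synthetic stand-in for real TDC sensor traces.
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(v):
    # Bit count of each element (products of two 8-bit values fit in 16 bits)
    return np.vectorize(lambda x: bin(int(x) & 0xFFFF).count("1"))(v)

# --- Synthetic measurement setup (assumption, not the paper's data) ---
N_TRACES, N_SAMPLES = 25_000, 50   # trace count matches the paper's DPA figure
TRUE_WEIGHT = 173                  # secret 8-bit weight to recover
LEAK_SAMPLE = 20                   # time sample where the multiply leaks
SIGMA = 4.0                        # Gaussian noise level (controls the SNR)

inputs = rng.integers(0, 256, size=N_TRACES)               # known activations
traces = rng.normal(0.0, SIGMA, size=(N_TRACES, N_SAMPLES))
traces[:, LEAK_SAMPLE] += hamming_weight(inputs * TRUE_WEIGHT)

# --- CPA: correlate each weight hypothesis against every time sample ---
def cpa_recover_weight(traces, inputs):
    centered = traces - traces.mean(axis=0)
    best_w, best_corr = None, 0.0
    for w in range(256):
        hyp = hamming_weight(inputs * w).astype(float)
        hyp -= hyp.mean()
        denom = np.linalg.norm(hyp)
        if denom == 0.0:           # w == 0 gives a constant hypothesis
            continue
        # Pearson correlation of the hypothesis with all samples at once
        corr = hyp @ centered / (denom * np.linalg.norm(centered, axis=0))
        peak = np.max(np.abs(corr))
        if peak > best_corr:
            best_w, best_corr = w, peak
    return best_w, best_corr

w, corr = cpa_recover_weight(traces, inputs)
print(f"recovered weight = {w} (peak correlation {corr:.2f})")
```

In the remote setting the paper studies, the traces would instead come from a co-tenant TDC sensor sampling the shared power distribution network, and the SNR of that sensor determines how many traces the distinguisher needs.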