2022 journal article

Guarding Machine Learning Hardware Against Physical Side-channel Attacks

ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 18(3).

By: A. Dubey, R. Cammarota, V. Suresh & A. Aysu

author keywords: Side-channel attack; neural networks; masking
TL;DR: This work develops and combines different flavors of side-channel defenses for ML models in hardware blocks and proposes and optimizes the first defense based on Boolean masking, which impedes a straightforward second-order attack on the first-order masked implementation. (via Semantic Scholar)
Source: Web of Science
Added: December 5, 2022

Machine learning (ML) models can be trade secrets due to their development cost. Hence, they need protection against malicious forms of reverse engineering (e.g., in IP piracy). With a growing shift of ML to edge devices, in part for performance and in part for privacy benefits, the models have become susceptible to so-called physical side-channel attacks.
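For readers unfamiliar with the masking countermeasure named in the keywords, the sketch below illustrates the core idea of first-order Boolean masking on a single byte: the secret is split into two random shares whose XOR equals the secret, so no single intermediate value depends on the secret alone. This is a minimal software illustration under stated assumptions, not the paper's hardware design; the variable names are placeholders, and the use of rand() stands in for the fresh hardware randomness a real masked implementation would require.

```c
/*
 * Illustrative sketch of first-order Boolean masking (not the paper's
 * hardware implementation): a secret byte is split into two random
 * shares so that neither share alone correlates with the secret.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Split a secret into two Boolean shares: secret = share0 ^ share1. */
static void mask_byte(uint8_t secret, uint8_t *share0, uint8_t *share1)
{
    *share0 = (uint8_t)(rand() & 0xFF); /* placeholder randomness; a real
                                           design uses a proper RNG      */
    *share1 = secret ^ *share0;         /* complementary share           */
}

/* Recombine the two shares to recover the secret value. */
static uint8_t unmask_byte(uint8_t share0, uint8_t share1)
{
    return share0 ^ share1;
}

int main(void)
{
    uint8_t s0, s1;
    uint8_t weight = 0x5A;              /* hypothetical model parameter  */

    srand((unsigned)time(NULL));
    mask_byte(weight, &s0, &s1);
    printf("shares: %02x %02x, recombined: %02x\n",
           s0, s1, unmask_byte(s0, s1));
    return 0;
}
```

A second-order attack combines leakage from both shares, which is why, per the TL;DR above, the work also hardens the first-order masked implementation against straightforward second-order analysis.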