2022 article

Dynamic Set Stealing to Improve Cache Performance

2022 IEEE 34TH INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD 2022), pp. 60–70.

By: B. Testa, S. Mirbagher-Ajorpaz & D. Jimenez

author keywords: microarchitecture; prediction; cache replacement
TL;DR: Set Stealing with Perceptron Tables (SSPT) is introduced, a novel resource management policy that allows combining the strengths of many replacement policies to maximize performance and eliminate wasted processor resources. (via Semantic Scholar)
Source: Web Of Science
Added: February 20, 2023

In the last-level cache (LLC), the best replacement policy depends on workload characteristics, and adapting the policy to the current workload has been an active area of research. Previous work includes set dueling, exemplified by DIP [40], which chooses among static replacement policies, and machine-learning-based models such as Glider [45] or Multiperspective Reuse Prediction [19]. Both approaches improve over the least-recently-used (LRU) policy, but additional improvement is possible. DIP wastes resources because the leader sets of the competing policies are fixed in size, while machine learning approaches each rely on a fixed set of features, selected offline, that are not optimal for all workloads. We introduce Set Stealing with Perceptron Tables (SSPT), a novel resource management policy that combines the strengths of many replacement policies to maximize performance and eliminate wasted processor resources. Assuming a 2 MB LLC, SSPT achieves a 9.45% geometric mean speedup over a baseline LRU policy on a set of 81 SPEC benchmark workloads, compared to Glider's 9.28% and Multiperspective Reuse Prediction's 7.62%. On a set of 90 big data (GAPS and XS) workloads with the same 2 MB LLC, SSPT achieves a 9.70% geometric mean speedup over LRU, compared to Glider's 8.79% and Multiperspective Reuse Prediction's 7.86%.
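For context on the fixed leader sets the abstract criticizes, below is a minimal C++ sketch of DIP-style set dueling, the baseline mechanism SSPT improves upon. The set count, PSEL width, and leader-selection rule are illustrative assumptions, not values taken from this paper or from DIP [40].

```cpp
#include <cstddef>

// Sketch of DIP-style set dueling: two replacement policies compete in a
// fixed number of "leader" sets; a saturating counter (PSEL) tracks which
// policy misses more, and all remaining "follower" sets use the winner.
// Because the leader sets are fixed at design time, they cannot shrink or
// grow with the workload -- the wasted-resource problem the abstract notes.
constexpr std::size_t kNumSets      = 2048;  // e.g., 2 MB, 16-way, 64 B lines (assumed)
constexpr std::size_t kLeaderStride = 64;    // 32 leader sets per policy (assumed)
constexpr int         kPselMax      = 1023;  // 10-bit saturating counter (assumed)

enum class Policy { A, B };  // e.g., LRU vs. BIP in the original DIP proposal

class SetDueling {
 public:
  // Leader sets are hard-wired: sets whose index is a multiple of
  // kLeaderStride lead for policy A, those offset by half a stride lead
  // for policy B. Every other set is a follower and uses the current winner.
  Policy PolicyForSet(std::size_t set) const {
    if (set % kLeaderStride == 0)                 return Policy::A;
    if (set % kLeaderStride == kLeaderStride / 2) return Policy::B;
    // High PSEL means policy A's leaders missed more, so followers use B.
    return (psel_ > kPselMax / 2) ? Policy::B : Policy::A;
  }

  // Called on every LLC miss: a miss in a leader set votes against its policy.
  void OnMiss(std::size_t set) {
    if (set % kLeaderStride == 0 && psel_ < kPselMax) {
      ++psel_;                                   // policy A's leader missed
    } else if (set % kLeaderStride == kLeaderStride / 2 && psel_ > 0) {
      --psel_;                                   // policy B's leader missed
    }
  }

 private:
  int psel_ = kPselMax / 2;  // start undecided
};
```

SSPT, as described in the abstract, replaces this fixed allocation with a dynamic one driven by perceptron tables, so no sets are permanently dedicated to a losing policy; the sketch above only shows the static baseline being improved upon.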