2023 journal article

Interpretable boosted-decision-tree analysis for the MAJORANA DEMONSTRATOR


By: I. Arnquist, F. Avignone III, A. Barabash, C. Barton, K. Bhimani, E. Blalock*, B. Bos, M. Busch ...

Source: Web Of Science
Added: March 27, 2023

The Majorana Demonstrator is a leading experiment searching for neutrinoless double-beta decay with high-purity germanium (HPGe) detectors. Machine learning provides a new way to maximize the information extracted from these detectors, but its data-driven nature makes it less interpretable than traditional analysis. An interpretability study reveals the machine's decision-making logic, allowing us to learn from the machine and feed insights back into the traditional analysis. In this work, we present the first machine learning analysis of Majorana Demonstrator data; it is also the first interpretable machine learning analysis of any germanium detector experiment. Two gradient-boosted decision tree models are trained on the data, and a game-theory-based model interpretability study is conducted to understand the origin of their classification power. By learning from the data, this analysis recognizes correlations among reconstruction parameters and exploits them to further enhance background rejection. By learning from the machine, this analysis reveals the importance of new background categories, reciprocally benefiting the standard Majorana analysis. The model is highly compatible with next-generation germanium detector experiments such as LEGEND, since it can be trained simultaneously on a large number of detectors.

Received 22 July 2022; accepted 15 November 2022
DOI: https://doi.org/10.1103/PhysRevC.107.014321
©2023 American Physical Society

Physics Subject Headings (PhySH)
Research Areas: Neutrinoless double beta decay
Physical Systems: Solid-state detectors
Techniques: Machine learning
Disciplines: Nuclear Physics; Interdisciplinary Physics
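The abstract does not give implementation details, but the combination it describes — a gradient-boosted decision tree classifier paired with a game-theoretic (Shapley-value) attribution of its output — can be sketched generically. The following is a minimal illustration, not the Demonstrator's actual pipeline: the synthetic "reconstruction parameters," the model settings, and the Monte Carlo Shapley estimator are all assumptions for demonstration.

```python
# Illustrative sketch (hypothetical data and parameters), not the paper's pipeline:
# a gradient-boosted classifier plus a Monte Carlo Shapley-value attribution.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-event reconstruction parameters;
# feature 0 carries most of the signal/background separation by construction.
n_events, n_features = 2000, 4
X = rng.normal(size=(n_events, n_features))
y = (X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=n_events) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
model.fit(X, y)

def shapley_values(f, x, background, n_perm=100, seed=0):
    """Monte Carlo Shapley values for one event x.

    For each random feature ordering, features are switched from the
    background value to the event's value one at a time, and the marginal
    change in the model output f is credited to the switched feature.
    Averaging over orderings approximates the Shapley attribution."""
    perm_rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = perm_rng.permutation(d)
        z = background.copy()
        prev = f(z[None, :])[0]
        for j in order:
            z[j] = x[j]
            cur = f(z[None, :])[0]
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

# Model output to attribute: predicted signal probability.
f = lambda A: model.predict_proba(A)[:, 1]
background = X.mean(axis=0)
phi = shapley_values(f, X[0], background)

# Efficiency property of Shapley values: the per-feature contributions
# sum (telescopically, so exactly here) to f(x) - f(background).
print("phi =", phi)
print("sum =", phi.sum(),
      "target =", f(X[0][None, :])[0] - f(background[None, :])[0])
```

Because each permutation's marginal contributions telescope, the summed attributions match the total change in model output exactly, which is the "efficiency" axiom that makes Shapley-based interpretability attractive for this kind of study. In practice, dedicated libraries such as SHAP provide much faster tree-specific exact algorithms.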