2023 journal article

Deep generative modeling-based data augmentation with demonstration using the BFBT benchmark void fraction datasets

NUCLEAR ENGINEERING AND DESIGN, 415.

By: F. Alsafadi & X. Wu

author keywords: Deep generative modeling; Generative adversarial networks; Normalizing flows; Variational autoencoders
Source: Web Of Science
Added: December 18, 2023

Deep learning (DL) has achieved remarkable successes in many disciplines, such as computer vision and natural language processing, due to the availability of "big data". However, such success cannot be easily replicated in many nuclear engineering problems because of the limited amount of training data, especially when the data come from high-cost experiments. To overcome this data scarcity issue, this paper explores the application of deep generative models (DGMs), which have been widely used for image generation, to scientific data augmentation. DGMs, such as generative adversarial networks (GANs), normalizing flows (NFs), variational autoencoders (VAEs), and conditional VAEs (CVAEs), can be trained to learn the underlying probabilistic distribution of the training dataset. Once trained, they can generate synthetic data that are similar to the training data, significantly expanding the dataset size. By employing DGMs to augment TRACE-simulated data of steady-state void fractions based on the NUPEC Boiling Water Reactor Full-size Fine-mesh Bundle Test (BFBT) benchmark, this study demonstrates that VAEs, CVAEs, and GANs have comparable generative performance, with similar errors in the synthetic data and CVAEs achieving the smallest. These findings show that DGMs have great potential to augment scientific data in nuclear engineering, proving effective for expanding training datasets and enabling other DL models to be trained more accurately.
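The workflow the abstract describes, training a DGM on the available data and then sampling it to produce synthetic points, can be illustrated with a minimal PyTorch sketch of the VAE case. The feature count, layer sizes, latent dimension, and placeholder training tensor below are illustrative assumptions, not the authors' architecture or the TRACE/BFBT data.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=8, latent_dim=2):  # sizes are assumptions
        super().__init__()
        # Encoder maps a data point to the mean and log-variance
        # of a Gaussian posterior over the latent code.
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        # Decoder maps a latent code back to data space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_train = torch.randn(256, 8)  # placeholder for normalized training data

for epoch in range(200):
    recon, mu, logvar = model(x_train)
    # Negative ELBO = reconstruction error + KL divergence to the N(0, I) prior.
    recon_loss = ((recon - x_train) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(dim=1).mean()
    loss = recon_loss + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# Augmentation step: sample latent codes from the prior and decode them
# into synthetic points that follow the learned data distribution.
with torch.no_grad():
    z = torch.randn(1000, 2)
    synthetic = model.decoder(z)

A CVAE, which the paper finds to achieve the smallest errors, would extend this sketch by concatenating a conditioning vector (e.g., operating conditions) to the inputs of both the encoder and the decoder, so that synthetic data can be generated for specified conditions.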