Works (22)

Updated: March 29th, 2025 01:30

2024 journal article

On the training and generalization of deep operator networks

SIAM Journal on Scientific Computing, 46(4), C273–C296.

By: S. Lee* & Y. Shin

author keywords: deep operator networks; divide-and-conquer; sequential training method; generalization error analysis
topics (OpenAlex): Neural Networks and Applications; Model Reduction and Neural Networks; Gaussian Processes and Bayesian Inference
Sources: ORCID, Web of Science, NC State University Libraries
Added: July 8, 2024
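
As background for this entry: the deep operator network (DeepONet) architecture it studies approximates an operator G by pairing a branch network, which encodes the input function through its values at fixed sensor points, with a trunk network, which encodes the query location; the prediction is the dot product of the two outputs. Below is a minimal NumPy sketch of that forward pass, with all layer sizes illustrative; the paper's sequential, divide-and-conquer training method is not reproduced here.

```python
import numpy as np

def mlp(params, x):
    """Plain fully connected network with tanh hidden activations."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(branch, trunk, u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>: the standard DeepONet form.
    u_sensors: values of the input function at m fixed sensor points.
    y: a query location for the output function."""
    return mlp(branch, u_sensors) @ mlp(trunk, y)

def init(sizes, rng):
    """Random layer parameters; sizes = [in, hidden..., out]."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)
branch = init([10, 32, 16], rng)   # 10 sensor values -> 16 coefficients
trunk = init([1, 32, 16], rng)     # 1-D query point  -> 16 basis values
print(deeponet(branch, trunk, rng.standard_normal(10), np.array([0.5])))
```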

2024 journal article

S-OPT: A points selection algorithm for hyper-reduction in reduced order models

SIAM Journal on Scientific Computing, 46(4), B474–B501.

author keywords: reduced order modeling; nonlinear model reduction; Galerkin projection; hyper-reduction; sampling algorithm
topics (OpenAlex): Model Reduction and Neural Networks; Probabilistic and Robust Engineering Design; Hydraulic and Pneumatic Systems
Sources: Web of Science, NC State University Libraries
Added: September 23, 2024

2024 journal article

tLaSDI: Thermodynamics-informed latent space dynamics identification

Computer Methods in Applied Mechanics and Engineering, 429.

By: J. Park*, S. Cheung*, Y. Choi* & Y. Shin

author keywords: Thermodynamics; Reduced order modeling; Autoencoder; Error estimates; Nonlinear-manifold ROM
topics (OpenAlex): Model Reduction and Neural Networks; Neural Networks and Applications; Gaussian Processes and Bayesian Inference
Sources: ORCID, Web of Science, NC State University Libraries
Added: July 8, 2024

2023 journal article

Accelerating gradient descent and Adam via fractional gradients

Neural Networks, 161, 185–201.

By: Y. Shin*, J. Darbon* & G. Karniadakis*

topics (OpenAlex): Fractional Differential Equations Solutions; Machine Learning and ELM; Model Reduction and Neural Networks
TL;DR: The superiority of CfGD and CfAdam is demonstrated on several large-scale optimization problems arising from scientific machine learning applications, such as an ill-conditioned least squares problem on real-world data and the training of neural networks involving non-convex objective functions. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
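
The paper's Caputo fractional gradient methods (CfGD/CfAdam) are not reproduced here. As a rough, assumed illustration of the general idea behind fractional-order gradient methods, the sketch below weights a truncated history of past gradients with Grünwald–Letnikov coefficients; the learning rate, memory length, and test problem are all illustrative choices.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients (-1)^k * binom(alpha, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_gd(grad, x0, alpha=0.8, lr=0.05, steps=5000, memory=20):
    """Gradient descent with a truncated power-law memory of past
    gradients; converges slowly because of the memory term."""
    w = gl_weights(alpha, memory)
    hist, x = [], np.asarray(x0, dtype=float)
    for _ in range(steps):
        hist.insert(0, grad(x))          # most recent gradient first
        hist = hist[:memory]
        x = x - lr * sum(wk * g for wk, g in zip(w, hist))
    return x

# Toy quadratic, a stand-in for the paper's large-scale examples.
A = np.diag([1.0, 10.0])
print(fractional_gd(lambda x: A @ x, x0=[1.0, 1.0]))
```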

2023 journal article

Error estimates of residual minimization using neural networks for linear PDEs

Journal of Machine Learning for Modeling and Computing, 4(4), 73–101.

By: Y. Shin, Z. Zhang* & G. Karniadakis*

topics (OpenAlex): Model Reduction and Neural Networks; Advanced Numerical Methods in Computational Mathematics; Advanced Numerical Analysis Techniques
Sources: Crossref, NC State University Libraries
Added: January 27, 2024

2022 journal article

Active Neuron Least Squares: A training method for multivariate rectified neural networks

SIAM Journal on Scientific Computing, 44(4), A2253–A2275.

By: M. Ainsworth & Y. Shin*

topics (OpenAlex): Model Reduction and Neural Networks; Neural Networks and Applications; Machine Learning and ELM
Source: ORCID
Added: January 24, 2024

2022 journal article

Approximation rates of DeepONets for learning operators arising from advection–diffusion equations

Neural Networks, 153, 411–426.

topics (OpenAlex): Model Reduction and Neural Networks; Neural Networks and Applications; Advanced Mathematical Modeling in Engineering
TL;DR: It is found that the approximation rates depend on the architecture of branch networks as well as the smoothness of inputs and outputs of solution operators. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2022 journal article

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions

Neurocomputing, 468, 165–180.

topics (OpenAlex): Model Reduction and Neural Networks; Neural Networks and Applications; Numerical methods in engineering
TL;DR: A new type of neural network, the Kronecker neural network (KNN), is proposed as a general framework for neural networks with adaptive activation functions; the activations are designed to eliminate saturation regions by injecting sinusoidal fluctuations with trainable parameters. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
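
The TL;DR describes adaptive activations that add trainable sinusoidal terms to a base activation. Below is a minimal PyTorch sketch of that idea; the number of terms, the base activation, and the initializations are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    """Base activation plus trainable sinusoidal fluctuations:
    phi(x) = base(x) + sum_k a_k * sin((k+1) * w_k * x).
    The trainable amplitudes a_k and frequencies w_k let the
    activation escape saturation regions."""
    def __init__(self, K=2, base=torch.tanh):
        super().__init__()
        self.base = base
        self.a = nn.Parameter(torch.full((K,), 0.1))
        self.w = nn.Parameter(torch.ones(K))

    def forward(self, x):
        out = self.base(x)
        for k in range(self.a.numel()):
            out = out + self.a[k] * torch.sin((k + 1) * self.w[k] * x)
        return out

# Drop-in use inside a small network:
net = nn.Sequential(nn.Linear(1, 32), AdaptiveActivation(),
                    nn.Linear(32, 1))
print(net(torch.rand(4, 1)))
```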

2022 journal article

Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks

Analysis and Applications, 20(01), 73–119.

By: Y. Shin*

topics (OpenAlex): Stochastic Gradient Optimization Techniques; Machine Learning and ELM; Advanced Neural Network Applications
TL;DR: A general convergence analysis of BCGD is established, and the optimal learning rate, which yields the fastest decrease in the loss, is identified; it is also found that deep networks can drastically accelerate convergence relative to a depth-1 network, even when computational cost is taken into account. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
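
Layer-wise block coordinate gradient descent (BCGD), the scheme analyzed here, updates one layer at a time while the others are held fixed. A minimal NumPy sketch for a deep linear network follows; the schedule and learning rate are illustrative choices, and the paper's optimal learning rate is not derived here.

```python
import numpy as np

def layerwise_bcgd(Ws, X, Y, lr=0.05, sweeps=500):
    """Train a deep linear network W[-1] @ ... @ W[0] by taking a
    gradient step on one layer at a time (layer-wise BCGD)."""
    n = X.shape[1]
    for _ in range(sweeps):
        for j in range(len(Ws)):            # one sweep: layer by layer
            P = np.eye(Ws[-1].shape[0])     # product of layers after j
            for W in reversed(Ws[j + 1:]):
                P = P @ W
            S = np.eye(X.shape[0])          # product of layers before j
            for W in Ws[:j]:
                S = W @ S
            E = P @ Ws[j] @ S @ X - Y       # residual of current network
            Ws[j] -= lr * (P.T @ E @ (S @ X).T) / n
    return Ws

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50))
Y = rng.standard_normal((2, 3)) @ X          # linear target
Ws = [rng.standard_normal((4, 3)) * 0.5,     # depth-2 linear network
      rng.standard_normal((2, 4)) * 0.5]
layerwise_bcgd(Ws, X, Y)
print(np.linalg.norm(Ws[1] @ Ws[0] @ X - Y))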

2022 journal article

GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems

By: Z. Zhang*, Y. Shin* & G. Karniadakis*

topics (OpenAlex): Model Reduction and Neural Networks; Neural Networks and Applications; Probabilistic and Robust Engineering Design
TL;DR: It is proved theoretically that GFINNs are sufficiently expressive to learn the underlying equations, hence establishing the universal approximation theorem. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024

2021 journal article

Identification of Corrupted Data via $k$-Means Clustering for Function Approximation

CSIAM Transactions on Applied Mathematics, 2(1), 81–107.

By: J. Hou*, Y. Shin* & D. Xiu*

Contributors: J. Hou*, Y. Shin* & D. Xiu*

topics (OpenAlex): Neural Networks and Applications
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024
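
The title's approach, identifying corrupted samples via clustering, can be illustrated in a simple assumed form: fit a least-squares approximation, run k-means with k = 2 on the absolute residuals, and flag the high-residual cluster. This is a sketch of the general idea only, not the paper's algorithm.

```python
import numpy as np

def flag_corrupted(A, b, iters=20):
    """Fit least squares, then 2-means on |residuals| to flag outliers."""
    r = np.abs(A @ np.linalg.lstsq(A, b, rcond=None)[0] - b)
    lo, hi = r.min(), r.max()                    # initial 1-D centroids
    for _ in range(iters):
        labels = np.abs(r - lo) > np.abs(r - hi)  # True -> 'high' cluster
        if labels.any() and (~labels).any():
            lo, hi = r[~labels].mean(), r[labels].mean()
    return labels                                # True where likely corrupted

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = A @ rng.standard_normal(5)
b[:10] += 5.0                                 # corrupt 10 entries
print(np.flatnonzero(flag_corrupted(A, b)))   # should recover indices 0..9
```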

2021 journal article

Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance

SIAM Journal on Scientific Computing, 43(5), A3438–A3468.

By: M. Ainsworth* & Y. Shin*

topics (OpenAlex): Neural Networks and Applications; Advancements in Semiconductor Devices and Circuit Design; Evolutionary Algorithms and Applications
TL;DR: A new iterative training method, Active Neuron Least Squares (ANLS), is proposed; characterized by the explicit adjustment of the activation pattern at each step, it is designed to enable a quick exit from a plateau. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
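
One ingredient behind activation-pattern-based training of ReLU networks is a standard fact: once the hidden-layer activation pattern is fixed, the output of a one-hidden-layer ReLU network is linear in its outer weights, so those weights admit an exact least-squares solve. The sketch below shows only that step; how ANLS adjusts the pattern at each iteration is not reproduced.

```python
import numpy as np

def outer_weights_lstsq(W, b, X, y):
    """For a one-hidden-layer ReLU network f(x) = c @ relu(W x + b),
    with hidden parameters (W, b) frozen, the outer weights c solve
    a plain linear least-squares problem."""
    H = np.maximum(W @ X + b[:, None], 0.0)   # (width, n) hidden features
    c, *_ = np.linalg.lstsq(H.T, y, rcond=None)
    return c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (1, 200))              # 1-D inputs
y = np.sin(np.pi * X[0])
W, b = rng.standard_normal((50, 1)), rng.standard_normal(50)
c = outer_weights_lstsq(W, b, X, y)
print(np.abs(c @ np.maximum(W @ X + b[:, None], 0.0) - y).max())
```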

2020 journal article

Dying ReLU and Initialization: Theory and Numerical Examples

Communications in Computational Physics, 28(5), 1671–1706.

By: L. Lu*, Y. Shin*, Y. Su* & G. Karniadakis

Contributors: Y. Shin*, Y. Su* & G. Karniadakis

topics (OpenAlex): Model Reduction and Neural Networks; Stochastic Gradient Optimization Techniques; Machine Learning and ELM
TL;DR: It is shown that, even for the rectified linear unit activation, deep and narrow neural networks (NNs) will with high probability converge to erroneous mean or median states of the target function, depending on the loss. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024

2020 journal article

On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs

Communications in Computational Physics, 28(5), 2042–2074.

By: Y. Shin*, J. Darbon* & G. Karniadakis

Contributors: J. Darbon* & G. Karniadakis

topics (OpenAlex): Model Reduction and Neural Networks; Advanced Numerical Methods in Computational Mathematics; Fluid Dynamics and Turbulent Flows
TL;DR: This is the first theoretical work to show the consistency of PINNs; the sequence of minimizers is shown to converge strongly to the PDE solution in $C^0$. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024
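
The PINN objective whose minimizers are analyzed here is standard: penalize the PDE residual at interior collocation points and the boundary mismatch at boundary points. A minimal PyTorch sketch for the 1-D Poisson problem -u'' = f with zero boundary data follows; the network size, collocation counts, and weights are illustrative.

```python
import torch

# Target: -u''(x) = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0,
# with exact solution u(x) = sin(pi x).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)      # interior collocation
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])              # boundary points
    loss = ((-d2u - f)**2).mean() + (net(xb)**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))   # PDE residual + boundary loss after training
```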

2020 journal article

Trainability of ReLU Networks and Data-Dependent Initialization

Journal of Machine Learning for Modeling and Computing, 1(1), 39–74.

By: Y. Shin* & G. Karniadakis*

topics (OpenAlex): Neural Networks and Applications; Machine Learning and ELM; Advanced Memory and Neural Computing
Source: ORCID
Added: January 24, 2024

2018 journal article

Sequential function approximation with noisy data

Journal of Computational Physics, 371, 363–381.

By: Y. Shin*, K. Wu* & D. Xiu*

topics (OpenAlex): Sparse and Compressive Sensing Techniques; Stochastic Gradient Optimization Techniques; Advanced Bandit Algorithms Research
TL;DR: This work presents a method for approximating an unknown function sequentially from random noisy samples; it admits a simple numerical implementation using only vector operations and avoids storing the entire data set. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
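
The "vector operations only, no stored data set" description fits the general template of sequential least-squares updates: each incoming noisy sample directly updates the current coefficient vector. A hedged sketch below uses a Kaczmarz-style update in a Legendre basis with a decaying step size; the paper's specific update rule and analysis are not reproduced.

```python
import numpy as np

def legendre_features(x, degree):
    """Evaluate Legendre polynomials P_0..P_degree at x via recurrence."""
    phi = [np.ones_like(x), x]
    for n in range(1, degree):
        phi.append(((2 * n + 1) * x * phi[n] - n * phi[n - 1]) / (n + 1))
    return np.array(phi[:degree + 1])

def sequential_fit(sample, degree=5, steps=20000):
    """One fresh sample per iteration; only the coefficients are stored."""
    c = np.zeros(degree + 1)
    for t in range(1, steps + 1):
        x, y = sample()
        p = legendre_features(x, degree)
        eta = 1.0 / np.sqrt(t)                # decaying step for noise
        c += eta * (y - p @ c) * p / (p @ p)  # Kaczmarz-style update
    return c

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)

def sample():
    x = rng.uniform(-1, 1)
    return x, f(x) + 0.05 * rng.standard_normal()

c = sequential_fit(sample)
xs = np.linspace(-1, 1, 5)
print(np.abs(legendre_features(xs, 5).T @ c - f(xs)).max())
```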

2017 journal article

A Randomized Algorithm for Multivariate Function Approximation

SIAM Journal on Scientific Computing, 39(3), A983–A1002.

By: Y. Shin* & D. Xiu

topics (OpenAlex): Stochastic Gradient Optimization Techniques; Sparse and Compressive Sensing Techniques; Markov Chains and Monte Carlo Methods
TL;DR: This paper demonstrates that the RK method converges when the approximation is conducted one randomly chosen sample at a time, and establishes the optimal sampling probability measure that achieves the optimal rate of convergence. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
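
The randomized Kaczmarz (RK) method referenced in the TL;DR is classical: to solve Ax = b, repeatedly pick a row and project the iterate onto that row's hyperplane, with rows sampled in proportion to their squared norms. A minimal sketch of that baseline (the paper's optimal sampling measure for function approximation is not reproduced):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve Ax = b by projecting onto one randomly chosen row at a time.
    Rows are sampled with probability proportional to their squared norm."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    p = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=p)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))
```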

2017 journal article

A Randomized Tensor Quadrature Method for High Dimensional Polynomial Approximation

SIAM Journal on Scientific Computing, 39(5), A1811–A1833.

By: K. Wu, Y. Shin* & D. Xiu

topics (OpenAlex): Electromagnetic Scattering and Analysis; Tensor decomposition and applications; Matrix Theory and Algorithms
TL;DR: By using a new randomized algorithm and taking advantage of the tensor structure of the grids, a highly efficient method can be constructed whose computational cost can be lower than that of standard methods, such as least squares, when applicable. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2017 journal article

Sparse Approximation using $\ell_1-\ell_2$ Minimization and Its Application to Stochastic Collocation

SIAM Journal on Scientific Computing, 39(1), A229–A254.

By: L. Yan*, Y. Shin* & D. Xiu

topics (OpenAlex): Sparse and Compressive Sensing Techniques; Probabilistic and Robust Engineering Design; Image and Signal Denoising Methods
TL;DR: Theoretical estimates regarding its recoverability for both sparse and nonsparse signals are presented, and the recoverability of both the standard $\ell_1-\ell_2$ minimization and its Chebyshev-weighted version is studied. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
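
For reference, the $\ell_1-\ell_2$ minimization of the title refers to the sparsity-promoting program, shown here in its noiseless, equality-constrained form (a sketch of the standard formulation, not a restatement of the paper's precise setting):

$$\min_{x \in \mathbb{R}^n} \; \|x\|_1 - \|x\|_2 \quad \text{subject to} \quad Ax = b,$$

a nonconvex surrogate for sparsity ($\|x\|_1 - \|x\|_2 = 0$ exactly when $x$ has at most one nonzero entry) that is commonly solved by difference-of-convex splitting; the Chebyshev-weighted version mentioned in the TL;DR replaces $\|x\|_1$ with a weighted $\ell_1$ norm.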

2016 journal article

Correcting Data Corruption Errors for Multivariate Function Approximation

SIAM Journal on Scientific Computing, 38(4), A2492–A2511.

By: Y. Shin* & D. Xiu

topics (OpenAlex): Probabilistic and Robust Engineering Design; Statistical Methods and Inference; Advanced Statistical Methods and Models
TL;DR: This work proves that sparse corruption errors can be effectively eliminated by using $\ell_1$-minimization, also known as the least absolute deviations method, and establishes probabilistic error bounds of the $\ell_1$-minimization solution with the corrupted data. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
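
The least absolute deviations problem referenced in the TL;DR, min_x ||Ax - b||_1, is a linear program: introduce slacks t with -t <= Ax - b <= t and minimize sum(t). A minimal sketch via scipy.optimize.linprog, with the test problem purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(A, b):
    """Least absolute deviations: min_x ||Ax - b||_1 as a linear program.
    Variables are (x, t); minimize sum(t) s.t. -t <= Ax - b <= t."""
    m, n = A.shape
    c = np.r_[np.zeros(n), np.ones(m)]
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
b[:8] += 10.0                                  # sparse corruption of 8 entries
print(np.linalg.norm(lad_fit(A, b) - x_true))  # near zero despite corruption
```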

2016 journal article

Nonadaptive Quasi-Optimal Points Selection for Least Squares Linear Regression

SIAM Journal on Scientific Computing, 38(1), A385–A411.

By: Y. Shin* & D. Xiu

topics (OpenAlex): Probabilistic and Robust Engineering Design; Control Systems and Identification; Advanced Statistical Methods and Models
TL;DR: This paper presents a quasi-optimal sample set for ordinary least squares (OLS) regression, together with an efficient greedy implementation and several numerical examples demonstrating its efficacy. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
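
The paper defines its own quasi-optimality criterion; as an assumed illustration of the greedy template it describes (grow a design one point at a time to maximize a determinant-based criterion over a candidate pool), here is a D-optimal-style greedy sketch using the matrix determinant lemma:

```python
import numpy as np

def greedy_select(Phi, k):
    """Greedily pick k rows of the candidate design matrix Phi that
    maximize log det(Phi_S^T Phi_S) (a D-optimal-style criterion)."""
    n, p = Phi.shape
    S = []
    M = 1e-8 * np.eye(p)                    # ridge keeps early dets finite
    for _ in range(k):
        best, best_gain = None, -np.inf
        Minv = np.linalg.inv(M)
        for i in set(range(n)) - set(S):
            # det(M + v v^T) = det(M) * (1 + v^T M^{-1} v)
            gain = Phi[i] @ Minv @ Phi[i]
            if gain > best_gain:
                best, best_gain = i, gain
        S.append(best)
        M += np.outer(Phi[best], Phi[best])
    return S

# Candidate pool: polynomial features of 200 random points.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
Phi = np.vander(x, 6, increasing=True)      # degree-5 polynomial basis
print(sorted(x[greedy_select(Phi, 12)]))    # selected sample locations
```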

2016 journal article

On a near optimal sampling strategy for least squares polynomial regression

Journal of Computational Physics, 326, 931–946.

By: Y. Shin* & D. Xiu*

topics (OpenAlex): Probabilistic and Robust Engineering Design; Markov Chains and Monte Carlo Methods; Optimal Experimental Design Methods
TL;DR: A sampling strategy for least squares polynomial regression is presented that first draws samples from the pluripotential equilibrium measure and then re-orders them by the quasi-optimal algorithm. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

Employment

Updated: October 5th, 2023 20:43

2023 - present

North Carolina State University, Raleigh, US
Assistant Professor, Mathematics
