Works (19)

Updated: April 4th, 2024 16:20

2023 journal article

Accelerating gradient descent and Adam via fractional gradients

Neural Networks, 161, 185–201.

By: Y. Shin*, J. Darbon* & G. Karniadakis*

TL;DR: The superiority of CfGD and CfAdam is demonstrated on several large-scale optimization problems that arise from scientific machine learning applications, such as an ill-conditioned least squares problem on real-world data and the training of neural networks involving non-convex objective functions. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2023 journal article

Error Estimates of Residual Minimization Using Neural Networks for Linear PDEs

Journal of Machine Learning for Modeling and Computing, 4(4), 73–101.

By: Y. Shin, Z. Zhang* & G. Karniadakis*

Sources: Crossref, NC State University Libraries
Added: January 27, 2024

2022 journal article

Active Neuron Least Squares: A training method for multivariate rectified neural networks

SIAM Journal on Scientific Computing, 44(4), A2253–A2275.

By: M. Ainsworth & Y. Shin*

Source: ORCID
Added: January 24, 2024

2022 journal article

Approximation rates of DeepONets for learning operators arising from advection–diffusion equations

Neural Networks, 153, 411–426.

TL;DR: It is found that the approximation rates depend on the architecture of branch networks as well as the smoothness of inputs and outputs of solution operators. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2022 journal article

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions

Neurocomputing, 468, 165–180.

TL;DR: A new type of neural network, the Kronecker neural network (KNN), is proposed as a general framework for neural networks with adaptive activation functions; the activations are designed to avoid saturation regions by injecting sinusoidal fluctuations with trainable parameters. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
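
The adaptive-activation idea in the entry above can be made concrete with a small sketch. The code below is a hedged illustration, not the paper's exact KNN construction: it wraps a base nonlinearity with a trainable sinusoidal perturbation, and the amplitude/frequency parameterization is an assumption made only for illustration.

```python
import torch
import torch.nn as nn

class AdaptiveSineActivation(nn.Module):
    """Illustrative adaptive activation: base nonlinearity plus a trainable
    sinusoidal term. The exact KNN parameterization in the paper may differ."""
    def __init__(self, base=torch.tanh):
        super().__init__()
        self.base = base
        self.amplitude = nn.Parameter(torch.tensor(0.1))  # trainable amplitude (assumed form)
        self.frequency = nn.Parameter(torch.tensor(1.0))  # trainable frequency (assumed form)

    def forward(self, x):
        # The sinusoidal fluctuation keeps the activation from saturating.
        return self.base(x) + self.amplitude * torch.sin(self.frequency * x)

# Usage: drop the activation into an ordinary feed-forward network.
net = nn.Sequential(nn.Linear(1, 32), AdaptiveSineActivation(),
                    nn.Linear(32, 32), AdaptiveSineActivation(),
                    nn.Linear(32, 1))
```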

2022 journal article

Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks

Analysis and Applications, 20(01), 73–119.

By: Y. Shin*

TL;DR: A general convergence analysis of BCGD is established and the optimal learning rate, which yields the fastest decrease in the loss, is identified; the use of deep networks is found to drastically accelerate convergence relative to a depth-1 network, even when computational cost is taken into account. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
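
Layer-wise block coordinate gradient descent (BCGD), the training scheme analyzed in the entry above, can be sketched for a deep linear network: update one weight matrix at a time while the others are frozen. This is only a minimal NumPy illustration under a squared loss; the layer widths, synthetic data, and learning rate are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))          # inputs: 5 features x 200 samples (assumed data)
Y = rng.standard_normal((3, 200))          # targets
widths = [5, 8, 8, 3]                      # layer widths (assumed for illustration)
W = [rng.standard_normal((widths[i + 1], widths[i])) * 0.3 for i in range(3)]
lr = 1e-3

def forward(W, X):
    out = X
    for Wi in W:
        out = Wi @ out
    return out

for sweep in range(100):
    for j in range(len(W)):                # one block (= one layer) per update
        right = X
        for Wi in W[:j]:
            right = Wi @ right             # layers below j applied to the data
        left = np.eye(widths[-1])
        for Wi in reversed(W[j + 1:]):
            left = left @ Wi               # layers above j
        residual = forward(W, X) - Y
        grad = left.T @ residual @ right.T # gradient of 0.5*||W_L...W_1 X - Y||_F^2 w.r.t. W_j
        W[j] -= lr * grad

print("final loss:", 0.5 * np.linalg.norm(forward(W, X) - Y) ** 2)
```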

2022 journal article

GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems

By: Z. Zhang*, Y. Shin* & G. Karniadakis*

TL;DR: It is proved theoretically that GFINNs are sufficiently expressive to learn the underlying equations, hence establishing the universal approximation theorem. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024

2021 journal article

Identification of Corrupted Data via $k$-Means Clustering for Function Approximation

CSIAM Transactions on Applied Mathematics, 2(1), 81–107.

By: J. Hou*, Y. Shin & D. Xiu

Contributors: J. Hou*, Y. Shin & D. Xiu

Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024
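
A hedged sketch of the general idea in the entry above: fit a least squares approximation, then cluster the residuals with $k$-means to separate corrupted from clean samples. This is an illustration only, not necessarily the paper's exact procedure; the target function, corruption model, polynomial degree, and two-cluster choice are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = np.cos(np.pi * x) + 0.01 * rng.standard_normal(300)   # clean, lightly noisy data
corrupt = rng.choice(300, size=30, replace=False)
y[corrupt] += rng.uniform(2, 5, size=30)                   # sparse large corruption errors

# Fit a polynomial least squares approximation, then cluster the absolute residuals.
A = np.vander(x, 8, increasing=True)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = np.abs(y - A @ coef)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(resid.reshape(-1, 1))
bad_cluster = labels[np.argmax(resid)]       # the cluster containing the largest residual
flagged = np.where(labels == bad_cluster)[0]
print("flagged", len(flagged), "samples; true number corrupted:", len(corrupt))
```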

2021 journal article

Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance

SIAM Journal on Scientific Computing, 43(5), A3438–A3468.

By: M. Ainsworth & Y. Shin*

TL;DR: A new iterative training method is proposed, the Active Neuron Least Squares (ANLS), characterised by the explicit adjustment of the activation pattern at each step, which is designed to enable a quick exit from a plateau. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2020 journal article

Dying ReLU and Initialization: Theory and Numerical Examples

Communications in Computational Physics, 28(5), 1671–1706.

By: L. Lu, Y. Shin*, Y. Su & G. Karniadakis

Contributors: Y. Shin*, Y. Su & G. Karniadakis

TL;DR: It is shown that, even for the rectified linear unit activation, deep and narrow neural networks (NNs) will with high probability converge to erroneous mean or median states of the target function, depending on the loss. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024
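
The "born dead" behavior of deep, narrow ReLU networks described above can be probed numerically. The sketch below estimates how often a randomly initialized network is constant over sample inputs; the width, depth range, 1-D inputs, and initialization scale are assumptions for illustration, not the paper's exact experimental setup.

```python
import numpy as np

def is_born_dead(depth, width=2, n_inputs=200, rng=None):
    """Return True if a randomly initialized ReLU net is constant on sample inputs."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(-1, 1, (1, n_inputs))                 # 1-D inputs (assumed)
    h = x
    dims = [1] + [width] * depth + [1]
    for i in range(len(dims) - 1):
        W = rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
        h = W @ h                                         # zero biases (assumed)
        if i < len(dims) - 2:
            h = np.maximum(h, 0.0)                        # ReLU on hidden layers
    return np.allclose(h, h[:, :1])                       # constant output => dead network

rng = np.random.default_rng(0)
for depth in (3, 10, 30):
    dead = sum(is_born_dead(depth, rng=rng) for _ in range(200))
    print(f"width 2, depth {depth:2d}: empirical dead fraction ~ {dead / 200:.2f}")
```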

2020 journal article

On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs

Communications in Computational Physics, 28(5), 2042–2074.

By: Y. Shin*, J. Darbon & G. Karniadakis

Contributors: J. Darbon & G. Karniadakis

TL;DR: This is the first theoretical work that shows the consistency of PINNs, and it is shown that the sequence of minimizers strongly converges to the PDE solution in $C^0$. (via Semantic Scholar)
Sources: ORCID, Crossref, NC State University Libraries
Added: January 24, 2024
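
To make the object of the convergence analysis above concrete, here is a minimal PINN-style loss for a linear elliptic problem. The 1-D Poisson equation, network size, collocation sampling, and loss weights are assumptions chosen only for illustration, not the paper's setting.

```python
import torch
import torch.nn as nn

# Model u_theta(x) and a PINN loss for -u''(x) = f(x) on (0,1), u(0)=u(1)=0,
# with f chosen so that u(x) = sin(pi x) is the exact solution.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def pinn_loss(net, n_interior=64):
    x = torch.rand(n_interior, 1, requires_grad=True)     # random collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = (torch.pi ** 2) * torch.sin(torch.pi * x)
    residual = (-d2u - f).pow(2).mean()                   # PDE residual term
    xb = torch.tensor([[0.0], [1.0]])
    boundary = net(xb).pow(2).mean()                      # Dirichlet boundary term
    return residual + boundary

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss(net)
    loss.backward()
    opt.step()
```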

2020 journal article

Trainability of ReLU Networks and Data-dependent Initialization

Journal of Machine Learning for Modeling and Computing, 1(1), 39–74.

By: Y. Shin* & G. Karniadakis*

Source: ORCID
Added: January 24, 2024

2018 journal article

Sequential function approximation with noisy data

Journal of Computational Physics, 371, 363–381.

By: Y. Shin*, K. Wu* & D. Xiu*

TL;DR: This work presents a method for approximating an unknown function sequentially from random noisy samples; it results in a simple numerical implementation using only vector operations and avoids the need to store the entire data set. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2017 journal article

A Randomized Algorithm for Multivariate Function Approximation

SIAM Journal on Scientific Computing, 39(3), A983–A1002.

By: Y. Shin* & D. Xiu

TL;DR: This paper demonstrates that the RK method converges when the approximation is conducted randomly, one sample at a time, and establishes the optimal sampling probability measure that achieves the optimal rate of convergence. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
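
Assuming "RK" refers to a randomized Kaczmarz-type iteration, the sketch below shows the classical version for an overdetermined linear system, with rows sampled in proportion to their squared norms. It illustrates the general one-sample-at-a-time mechanism rather than the paper's specific algorithm or sampling measure.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, rng=None):
    """Classical randomized Kaczmarz: one randomly sampled row per iteration."""
    if rng is None:
        rng = np.random.default_rng()
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()        # sample rows ~ squared row norms
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))             # synthetic consistent system (assumed)
x_true = rng.standard_normal(20)
x_rk = randomized_kaczmarz(A, A @ x_true, rng=rng)
print("relative error:", np.linalg.norm(x_rk - x_true) / np.linalg.norm(x_true))
```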

2017 journal article

A Randomized Tensor Quadrature Method for High Dimensional Polynomial Approximation

SIAM Journal on Scientific Computing, 39(5), A1811–A1833.

By: K. Wu, Y. Shin* & D. Xiu

TL;DR: By using a new randomized algorithm and taking advantage of the tensor structure of the grids, a highly efficient algorithm can be constructed whose cost can be lower than that of standard methods, such as least squares, when applicable. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024

2017 journal article

Sparse Approximation using $\ell_1-\ell_2$ Minimization and Its Application to Stochastic Collocation

SIAM Journal on Scientific Computing, 39(1), A229–A254.

By: L. Yan*, Y. Shin* & D. Xiu

TL;DR: Theoretical estimates of recoverability for both sparse and nonsparse signals are presented, and the recoverability of both the standard $\ell_1-\ell_2$ minimization and its Chebyshev-weighted version is studied. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
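
One standard way to handle the nonconvex $\ell_1-\ell_2$ objective in the entry above is a difference-of-convex algorithm (DCA): linearize the concave $-\|x\|_2$ term around the current iterate and solve a convex subproblem at each step. The sketch below (using cvxpy) is a generic, hedged illustration of that approach, not necessarily the solver used in the paper; the problem sizes and sparsity level are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

# DCA for min ||x||_1 - ||x||_2 s.t. Ax = b: at each iteration, replace -||x||_2
# by its linearization -w^T x with w a subgradient of ||.||_2 at the current iterate.
x_k = np.zeros(n)
for _ in range(10):
    norm_k = np.linalg.norm(x_k)
    w = x_k / norm_k if norm_k > 0 else np.zeros(n)
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x) - w @ x), [A @ x == b])
    prob.solve()
    x_k = x.value

print("recovery error:", np.linalg.norm(x_k - x_true))
```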

2016 journal article

Correcting Data Corruption Errors for Multivariate Function Approximation

SIAM Journal on Scientific Computing, 38(4), A2492–A2511.

By: Y. Shin* & D. Xiu

TL;DR: This work proves that the sparse corruption errors can be effectively eliminated by using $\ell_1$-minimization, also known as the least absolute deviations method, and establishes probabilistic error bounds of the $\ell_1$-minimization solution with the corrupted data. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
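
The least absolute deviations fit mentioned above can be written as a linear program. The sketch below is a minimal illustration on synthetic corrupted data (the problem sizes and corruption model are assumptions, not the paper's experiments) and compares it against ordinary least squares.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 200, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true
bad = rng.choice(m, size=20, replace=False)
y[bad] += rng.uniform(5, 10, size=20)            # sparse corruption errors (assumed model)

# LAD: min_x ||y - A x||_1, rewritten as an LP in z = [x; t] with |y - A x| <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_lad = res.x[:n]

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print("LAD error:", np.linalg.norm(x_lad - x_true))
print("LS  error:", np.linalg.norm(x_ls - x_true))
```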

2016 journal article

Nonadaptive Quasi-Optimal Points Selection for Least Squares Linear Regression

SIAM Journal on Scientific Computing, 38(1), A385–A411.

By: Y. Shin* & D. Xiu

TL;DR: This paper presents a quasi-optimal sample set for ordinary least squares (OLS) regression, together with an efficient greedy implementation and several numerical examples demonstrating its efficacy. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
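
A hedged sketch of a greedy selection loop in the spirit of the entry above: at each step, add the candidate point that most increases a log-determinant surrogate of the information matrix. The paper's quasi-optimality criterion and its efficient rank-one update may differ; the regularization term and the brute-force candidate scan below are assumptions made only to keep the illustration short.

```python
import numpy as np

def greedy_select(V, k, eps=1e-10):
    """Greedily pick k rows of the candidate design matrix V maximizing
    log det(V_S^T V_S + eps*I), a D-optimal-style surrogate criterion."""
    m, n = V.shape
    selected, remaining = [], list(range(m))
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in remaining:
            S = selected + [i]
            M = V[S].T @ V[S] + eps * np.eye(n)
            val = np.linalg.slogdet(M)[1]
            if val > best_val:
                best_i, best_val = i, val
        selected.append(best_i)
        remaining.remove(best_i)
    return selected

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, 400)
V = np.polynomial.legendre.legvander(candidates, 9)   # degree-9 Legendre design matrix
idx = greedy_select(V, k=30)
print("condition number of selected design:", np.linalg.cond(V[idx]))
```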

2016 journal article

On a near optimal sampling strategy for least squares polynomial regression

Journal of Computational Physics, 326, 931–946.

By: Y. Shin* & D. Xiu*

TL;DR: A sampling strategy for least squares polynomial regression is presented that first draws samples from the pluripotential equilibrium measure and then re-orders them by the quasi-optimal algorithm. (via Semantic Scholar)
Source: ORCID
Added: January 24, 2024
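
In one dimension the equilibrium measure of [-1, 1] is the arcsine (Chebyshev) measure, so the first step of the strategy above can be sketched as drawing samples from that measure; the re-ordering step would then apply a quasi-optimal criterion such as the greedy sketch shown after the previous entry. Only the sampling step is shown; the 1-D restriction and the Legendre basis are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Draw samples from the arcsine / Chebyshev density 1/(pi*sqrt(1-x^2)) on [-1, 1]:
# if U ~ Uniform(0, 1), then cos(pi * U) follows the arcsine distribution.
u = rng.uniform(0.0, 1.0, 1000)
x = np.cos(np.pi * u)

V = np.polynomial.legendre.legvander(x, 9)   # candidate design matrix
# The samples would then be re-ordered by a quasi-optimal criterion
# (e.g. the greedy_select sketch above) and the first k rows used for regression.
```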

Employment

Updated: October 5th, 2023 20:43

2023 - present

North Carolina State University, Raleigh, US
Assistant Professor, Mathematics
