Yeonjong Shin

Shin, Y., Darbon, J., & Karniadakis, G. E. (2023). Accelerating gradient descent and Adam via fractional gradients. Neural Networks, 161, 185–201. https://doi.org/10.1016/j.neunet.2023.01.002

Shin, Y., Zhang, Z., & Karniadakis, G. E. (2023). Error estimates of residual minimization using neural networks for linear PDEs. Journal of Machine Learning for Modeling and Computing, 4(4), 73–101. https://doi.org/10.1615/jmachlearnmodelcomput.2023050411

Ainsworth, M., & Shin, Y. (2022). Active Neuron Least Squares: A training method for multivariate rectified neural networks. SIAM Journal on Scientific Computing, 44(4), A2253–A2275. https://doi.org/10.1137/21m1460764

Deng, B., Shin, Y., Lu, L., Zhang, Z., & Karniadakis, G. E. (2022). Approximation rates of DeepONets for learning operators arising from advection–diffusion equations. Neural Networks, 153, 411–426. https://doi.org/10.1016/j.neunet.2022.06.019

Jagtap, A. D., Shin, Y., Kawaguchi, K., & Karniadakis, G. E. (2022). Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing, 468, 165–180. https://doi.org/10.1016/j.neucom.2021.10.036

Shin, Y. (2022). Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks. Analysis and Applications, 20(01), 73–119. https://doi.org/10.1142/s0219530521500263

Zhang, Z., Shin, Y., & Karniadakis, G. E. (2022). GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 380(2229). https://doi.org/10.1098/rsta.2021.0207

Hou, J., Shin, Y., & Xiu, D. (2021). Identification of corrupted data via $k$-means clustering for function approximation. CSIAM Transactions on Applied Mathematics, 2(1), 81–107. https://doi.org/10.4208/csiam-am.2020-0212

Ainsworth, M., & Shin, Y. (2021). Plateau phenomenon in gradient descent training of ReLU networks: Explanation, quantification, and avoidance. SIAM Journal on Scientific Computing, 43(5), A3438–A3468. https://doi.org/10.1137/20m1353010

Lu, L., Shin, Y., Su, Y., & Karniadakis, G. E. (2020). Dying ReLU and initialization: Theory and numerical examples. Communications in Computational Physics, 28(5), 1671–1706. https://doi.org/10.4208/cicp.oa-2020-0165

Shin, Y., Darbon, J., & Karniadakis, G. E. (2020). On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs. Communications in Computational Physics, 28(5), 2042–2074. https://doi.org/10.4208/cicp.oa-2020-0193

Shin, Y., & Karniadakis, G. E. (2020). Trainability of ReLU networks and data-dependent initialization. Journal of Machine Learning for Modeling and Computing, 1(1), 39–74. https://doi.org/10.1615/jmachlearnmodelcomput.2020034126

Shin, Y., Wu, K., & Xiu, D. (2018). Sequential function approximation with noisy data. Journal of Computational Physics, 371, 363–381. https://doi.org/10.1016/j.jcp.2018.05.042

Shin, Y., & Xiu, D. (2017). A randomized algorithm for multivariate function approximation. SIAM Journal on Scientific Computing, 39(3), A983–A1002. https://doi.org/10.1137/16m1075193

Wu, K., Shin, Y., & Xiu, D. (2017). A randomized tensor quadrature method for high dimensional polynomial approximation. SIAM Journal on Scientific Computing, 39(5), A1811–A1833. https://doi.org/10.1137/16m1081695

Yan, L., Shin, Y., & Xiu, D. (2017). Sparse approximation using $\ell_1-\ell_2$ minimization and its application to stochastic collocation. SIAM Journal on Scientific Computing, 39(1), A229–A254. https://doi.org/10.1137/15m103947x

Shin, Y., & Xiu, D. (2016). Correcting data corruption errors for multivariate function approximation. SIAM Journal on Scientific Computing, 38(4), A2492–A2511. https://doi.org/10.1137/16m1059473

Shin, Y., & Xiu, D. (2016). Nonadaptive quasi-optimal points selection for least squares linear regression. SIAM Journal on Scientific Computing, 38(1), A385–A411. https://doi.org/10.1137/15m1015868

Shin, Y., & Xiu, D. (2016). On a near optimal sampling strategy for least squares polynomial regression. Journal of Computational Physics, 326, 931–946. https://doi.org/10.1016/j.jcp.2016.09.032