2022 journal article
Newton's Method in Mixed Precision
SIAM Review.
Abstract. We investigate the use of reduced precision arithmetic to solve the linear equation for the Newton step. If one neglects the backward error in the linear solve, then well-known convergence theory implies that using single precision in the linear solve has very little negative effect on the nonlinear convergence rate. However, if one considers the effects of backward error, then the usual textbook estimates are very pessimistic and even the state-of-the-art estimates using probabilistic rounding analysis do not fully conform to experiments. We report on experiments with a specific example. We store and factor Jacobians in double, single, and half precision. In the single precision case we observe that the convergence rates for the nonlinear iteration do not degrade as the dimension increases and that the nonlinear iteration statistics are essentially identical to the double precision computation. In half precision we see that the nonlinear convergence rates, while poor, do not degrade as the dimension increases.

Audience. This paper is intended for students who have completed or are taking an entry-level graduate course in numerical analysis and for faculty who teach numerical analysis. The important ideas in the paper are O notation, floating point precision, backward error in linear solvers, and Newton's method.
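To illustrate the idea described in the abstract (this is a minimal sketch, not the authors' code), the loop below evaluates the nonlinear residual in double precision while storing and factoring the Jacobian in a reduced precision before solving for the Newton step. The functions F and J and the tolerances are hypothetical placeholders; since LAPACK offers no half-precision factorization, the sketch defaults to single precision for the reduced-precision solve.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_mixed_precision(F, J, x0, jac_dtype=np.float32, tol=1e-10, max_iter=20):
    """Newton iteration with the residual in double precision and the
    Jacobian stored and factored in a reduced precision (jac_dtype)."""
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(max_iter):
        r = F(x)                                  # residual in double precision
        if np.linalg.norm(r) <= tol:
            break
        # Store the Jacobian in reduced precision and factor it there.
        lu, piv = lu_factor(J(x).astype(jac_dtype))
        # Solve for the Newton step in reduced precision, then promote back.
        step = lu_solve((lu, piv), r.astype(jac_dtype)).astype(np.float64)
        x = x - step
    return x

# Hypothetical example: a small nonlinear system solved with a
# single-precision LU factorization of the Jacobian.
if __name__ == "__main__":
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
    J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])
    print(newton_mixed_precision(F, J, np.array([1.0, 1.0])))
```

In this setup only the factorization and triangular solves see the lower precision, which is the mechanism whose backward error the paper analyzes.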