
Numerical Analysis for Applied Science, 2nd Edition

Myron B. Allen, Eli L. Isaacson

ISBN: 978-1-119-24565-0

Mar 2019

576 pages

$104.99


Description

Maintaining the clear writing style and effective pedagogical approach of the prior edition, the Second Edition features new coverage of many topics, including preconditioning, kriging methods designed for stochastic data, interpolation in two and three dimensions, steady-state problems, and finite difference methods for variable-coefficient elliptic equations. This edition also expands its coverage of both the finite-element method and multigrid methods. The authors present an introduction to numerical analysis and numerical methods and discuss their applications within applied mathematics, engineering, and the physical and life sciences. Combining theory and practice, and including an in-depth exploration of the basic theoretical results and proofs of numerical analysis, the book offers a broad selection of current numerical methods, with particular emphasis on scientific computation involving differential equations.

This new edition provides updated coverage of topics often omitted from similar titles at this level, including multidimensional interpolation, quasi-Newton methods, multigrid methods, QR methods for eigenvalues, finite elements, and partial differential equations. A chapter on useful tools, covering bounded sets, normed vector spaces, and results from calculus, supplies the background material needed for studying numerical analysis. In addition, the book introduces the motivation and construction behind each numerical method, explaining the types of problems being addressed and the heuristic ideas behind the techniques. Sections on practical considerations facilitate the translation of concepts into computer code, and the accompanying mathematical analysis establishes the underlying theory and the analytic techniques commonly used to prove properties of numerical methods. The book also addresses the more advanced theory behind these topics, with references for further study.


Preface v

1 Some Useful Tools 1

1.1 Introduction 1

1.2 Bounded Sets 4

1.2.1 The Least Upper Bound Principle 4

1.2.2 Bounded Sets in ℝⁿ 5

1.3 Normed Vector Spaces 8

1.3.1 Vector Spaces 8

1.3.2 Matrices as Linear Operators 10

1.3.3 Norms 12

1.3.4 Inner Products 15

1.3.5 Norm Equivalence 17

1.4 Eigenvalues and Matrix Norms 19

1.4.1 Eigenvalues and Eigenvectors 19

1.4.2 Matrix Norms 21

1.5 Results from Calculus 26

1.5.1 Seven Theorems 26

1.5.2 The Taylor Theorem 28

1.6 Problems 33

2 Approximation of Functions 37

2.1 Introduction 37

2.2 Polynomial Interpolation 38

2.2.1 Motivation and Construction 38

2.2.2 Practical Considerations 42

2.2.3 Mathematical Details 43

2.2.4 Further Remarks 46

2.3 Piecewise Polynomial Interpolation 48

2.3.1 Motivation and Construction 48

2.3.2 Practical Considerations 50

2.3.3 Mathematical Details 54

2.3.4 Further Remarks 55

2.4 Hermite Interpolation 55

2.4.1 Motivation and Construction 55

2.4.2 Practical Considerations 59

2.4.3 Mathematical Details 60

2.5 Interpolation in Two Dimensions 63

2.5.1 Constructing Tensor-Product Interpolants 64

2.5.2 Error Estimates for Tensor-Product Methods 68

2.5.3 Interpolation on Triangles: Background 70

2.5.4 Construction of Planar Interpolants on Triangles 72

2.5.5 Error Estimates for Interpolation on Triangles 74

2.6 Splines 78

2.6.1 Motivation and Construction 78

2.6.2 Practical Considerations 84

2.6.3 Mathematical Details 85

2.6.4 Further Remarks 94

2.7 Least-Squares Methods 95

2.7.1 Motivation and Construction 96

2.7.2 Practical Considerations 100

2.7.3 Mathematical Details 101

2.7.4 Further Remarks 103

2.8 Trigonometric Interpolation 104

2.8.1 Motivation and Construction 105

2.8.2 Practical Considerations: Fast Fourier Transform 109

2.8.3 Mathematical Details 116

2.8.4 Further Remarks 118

2.9 Problems 119

3 Direct Methods for Linear Systems 125

3.1 Introduction 125

3.2 The Condition Number of a Linear System 127

3.3 Gauss Elimination 131

3.3.1 Motivation and Construction 131

3.3.2 Practical Considerations 133

3.3.3 Mathematical Details 139

3.4 Variants of Gauss Elimination 148

3.4.1 Motivation 148

3.4.2 The Doolittle and Crout Methods 148

3.4.3 Cholesky Decomposition 152

3.5 Band Matrices 155

3.5.1 Motivation and Construction 155

3.5.2 Practical Considerations 161

3.5.3 Mathematical Details 163

3.5.4 Further Remarks 167

3.6 Iterative Improvement 167

3.7 Problems 169

4 Solution of Nonlinear Equations 175

4.1 Introduction 175

4.2 Bisection 179

4.2.1 Motivation and Construction 179

4.2.2 Practical Considerations 181

4.3 Successive Substitution in One Variable 183

4.3.1 Motivation and Construction 183

4.3.2 Practical Considerations 184

4.3.3 Mathematical Details 190

4.4 Newton’s Method in One Variable 192

4.4.1 Motivation and Construction 192

4.4.2 Practical Considerations 194

4.4.3 Mathematical Details 199

4.5 The Secant Method 203

4.5.1 Motivation and Construction 203

4.5.2 Practical Considerations 205

4.5.3 Mathematical Details 206

4.6 Successive Substitution: Several Variables 211

4.6.1 Motivation and Construction 211

4.6.2 Convergence Criteria 213

4.6.3 An Application to Differential Equations 217

4.7 Newton’s Method: Several Variables 219

4.7.1 Motivation and Construction 219

4.7.2 Practical Considerations 221

4.7.3 Mathematical Details: Newton’s Method 224

4.7.4 Mathematical Details: Finite-Difference Newton Methods 229

4.8 Problems 232

5 Iterative Methods for Linear Systems 239

5.1 Introduction 239

5.2 Conceptual Foundations 243

5.3 Matrix-Splitting Techniques 248

5.3.1 Motivation and Construction: Jacobi and Gauss-Seidel Methods 248

5.3.2 Practical Considerations 254

5.3.3 Mathematical Details 258

5.4 Successive Overrelaxation 266

5.4.1 Motivation 266

5.4.2 Practical Considerations 266

5.4.3 Mathematical Details 272

5.4.4 Further Remarks: The Power Method and Symmetric SOR 279

5.5 Multigrid Methods 280

5.5.1 Motivation: Error Reduction Versus Smoothing 280

5.5.2 A TwoGrid Algorithm 284

5.5.3 V-Cycles and the Full Multigrid Algorithm 289

5.6 The Conjugate-Gradient Method 293

5.6.1 Motivation and Construction 293

5.6.2 Practical Considerations 298

5.6.3 Mathematical Details 303

5.6.4 Further Remarks: Krylov Methods and Steepest Descent 309

5.7 Problems 311

6 Eigenvalue Problems 317

6.1 More About Eigenvalues 318

6.2 Power Methods 323

6.2.1 Motivation and Construction 323

6.2.2 Practical Considerations 325

6.3 The QR Decomposition 328

6.3.1 Geometry and Algebra of the QR Decomposition 329

6.3.2 Application to Least-Squares Problems 334

6.3.3 Further Remarks 336

6.4 The QR Algorithm for Eigenvalues 338

6.4.1 Motivation and Construction 338

6.4.2 Practical Considerations 341

6.4.3 Mathematical Details 347

6.4.4 Further Remarks 351

6.5 Singular Value Decomposition 352

6.5.1 Theory of the Singular Value Decomposition 352

6.5.2 Computing Singular Value Decompositions 354

6.5.3 Application to Principal Component Analysis 355

6.6 Problems 358

7 Numerical Integration 365

7.1 Introduction 365

7.2 Newton-Cotes Formulas 366

7.2.1 Motivation and Construction 366

7.2.2 Practical Considerations: Composite Formulas 369

7.2.3 Mathematical Details 371

7.2.4 Further Remarks 375

7.3 Romberg and Adaptive Quadrature 375

7.3.1 Romberg Quadrature 376

7.3.2 Adaptive Quadrature 381

7.4 Gauss Quadrature 387

7.4.1 Motivation and Construction 387

7.4.2 Practical Considerations 390

7.4.3 Mathematical Details 392

7.5 Problems 401

8 Ordinary Differential Equations 405

8.1 Introduction 405

8.2 One-Step Methods 408

8.2.1 Motivation and Construction 408

8.2.2 Practical Considerations 411

8.2.3 Mathematical Details 412

8.2.4 Further Remarks: The Runge-Kutta-Fehlberg Algorithm 418

8.3 Multistep Methods: Consistency and Stability 422

8.3.1 Motivation 422

8.3.2 Adams-Bashforth and Adams-Moulton Methods 424

8.3.3 Consistency of Multistep Methods 425

8.3.4 Stability of Multistep Methods 428

8.3.5 Predictor-Corrector Methods 432

8.3.6 Mathematical Details: The Root Condition 433

8.4 Multistep Methods: Convergence 440

8.4.1 Convergence Implies Stability and Consistency 441

8.4.2 Consistency and Stability Imply Convergence 444

8.5 Problems 450

9 Difference Methods for PDEs 455

9.1 Introduction 455

9.1.1 Classification 456

9.1.2 Characteristic Curves and Characteristic Equations 457

9.1.3 Grid Functions and Difference Operators 462

9.2 The Poisson Equation 464

9.2.1 The Five-Point Method 465

9.2.2 Consistency and Convergence 468

9.2.3 Accommodating Variable Coefficients 473

9.2.4 Accommodating Other Boundary Conditions 474

9.2.5 Accommodating Nonrectangular Domains 475

9.3 The Advection Equation 477

9.3.1 The Courant-Friedrichs-Lewy Condition 478

9.3.2 Stability of Approximations to Time-Dependent Problems 482

9.3.3 Sufficient Conditions for Convergence 487

9.3.4 Further Remarks 490

9.4 Other Time-Dependent Equations 491

9.4.1 The Heat Equation 491

9.4.2 The Advection-Diffusion Equation 500

9.4.3 The Wave Equation 505

9.5 Problems 507

10 Introduction to Finite Elements 513

10.1 Introduction and Background 513

10.1.1 A Model Boundary-Value Problem 514

10.1.2 Variational Formulation 515

10.2 A Steady-State Problem 519

10.2.1 Construction of a Finite-Element Approximation 520

10.2.2 A Basic Error Estimate 522

10.2.3 Optimal-Order Error Estimates 527

10.2.4 Other Boundary Conditions 530

10.2.5 Condition Number of the Finite-Element Matrix 534

10.3 A Transient Problem 539

10.3.1 A Semidiscrete Formulation 539

10.3.2 A Fully Discrete Method 541

10.3.3 Convergence of the Fully Discrete Method 542

10.4 Problems 549

A Divided Differences 551

B Local Minima 555

C Chebyshev Polynomials 557

References 561

Index 565