E-book, English, 628 pages
William Ford: Numerical Linear Algebra with Applications
1st edition, 2014
ISBN: 978-0-12-394784-0
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark
Using MATLAB and Octave
William Ford completed his undergraduate degree at MIT, majoring in mathematics with a minor in electrical engineering. He went on to earn a Ph.D. in mathematics at the University of Illinois, Urbana-Champaign, with a thesis entitled 'Numerical Solution of Pseudo-parabolic Partial Differential Equations.' After two years of research and teaching in the Department of Mathematics at Clemson University, he joined the faculty of the Mathematics Department at the University of the Pacific in Stockton, California, where he became a founding member of the Department of Computer Science. Beginning in the 1980s, he and William Topp jointly published several books, including a Motorola 68000 assembly-language text with D.C. Heath and books on data structures in C++ and in Java, both with Prentice Hall. Dr. Ford also developed an IDE (integrated development environment) named 'EZJava' to accompany the Java book, and he served as Chair of the Computer Science Department until his retirement in 2014.
Authors/Editors
Further Information & Material
1;Front Cover;1
2;Numerical Linear Algebra with Applications;4
3;Copyright;5
4;Dedication;6
5;Contents;8
6;List of Figures;14
7;List of Algorithms;18
8;Preface;20
9;Matrices;28
9.1;Matrix Arithmetic;28
9.1.1;Matrix Product;29
9.1.2;The Trace;32
9.1.3;MATLAB Examples;33
9.2;Linear Transformations;34
9.2.1;Rotations;34
9.3;Powers of Matrices;38
9.4;Nonsingular Matrices;40
9.5;The Matrix Transpose and Symmetric Matrices;43
9.6;Chapter Summary;45
9.7;Problems;46
9.7.1;MATLAB Problems;49
10;Linear Equations;52
10.1;Introduction to Linear Equations;52
10.2;Solving Square Linear Systems;54
10.3;Gaussian Elimination;55
10.3.1;Upper-Triangular Form;56
10.4;Systematic Solution of Linear Systems;58
10.5;Computing the Inverse;61
10.6;Homogeneous Systems;63
10.7;Application: A Truss;64
10.8;Application: Electrical Circuit;66
10.9;Chapter Summary;67
10.10;Problems;69
10.10.1;MATLAB Problems;70
11;Subspaces;74
11.1;Introduction;74
11.2;Subspaces of Rn;74
11.3;Linear Independence;76
11.4;Basis of a Subspace;77
11.5;The Rank of a Matrix;78
11.6;Chapter Summary;82
11.7;Problems;83
11.7.1;MATLAB Problems;84
12;Determinants;86
12.1;Developing the Determinant of a 2 × 2 and a 3 × 3 Matrix;86
12.2;Expansion by Minors;87
12.3;Computing a Determinant Using Row Operations;91
12.4;Application: Encryption;98
12.5;Chapter Summary;100
12.6;Problems;101
12.6.1;MATLAB Problems;103
13;Eigenvalues and Eigenvectors;106
13.1;Definitions and Examples;106
13.2;Selected Properties of Eigenvalues and Eigenvectors;110
13.3;Diagonalization;111
13.3.1;Powers of Matrices;115
13.4;Applications;116
13.4.1;Electric Circuit;116
13.4.2;Irreducible Matrices;118
13.4.3;Ranking of Teams Using Eigenvectors;121
13.5;Computing Eigenvalues and Eigenvectors using MATLAB;122
13.6;Chapter Summary;123
13.7;Problems;124
13.7.1;MATLAB Problems;126
14;Orthogonal Vectors and Matrices;130
14.1;Introduction;130
14.2;The Inner Product;131
14.3;Orthogonal Matrices;134
14.4;Symmetric Matrices and Orthogonality;136
14.5;The L2 Inner Product;137
14.6;The Cauchy-Schwarz Inequality;138
14.7;Signal Comparison;139
14.8;Chapter Summary;140
14.9;Problems;141
14.9.1;MATLAB Problems;143
15;Vector and Matrix Norms;146
15.1;Vector Norms;146
15.1.1;Properties of the 2-Norm;148
15.1.2;Spherical Coordinates;150
15.2;Matrix Norms;153
15.2.1;The Frobenius Matrix Norm;154
15.2.2;Induced Matrix Norms;154
15.3;Submultiplicative Matrix Norms;158
15.4;Computing the Matrix 2-Norm;159
15.5;Properties of the Matrix 2-Norm;163
15.6;Chapter Summary;165
15.7;Problems;167
15.7.1;MATLAB Problems;169
16;Floating Point Arithmetic;172
16.1;Integer Representation;172
16.2;Floating-Point Representation;174
16.2.1;Mapping from Real Numbers to Floating-Point Numbers;175
16.3;Floating-Point Arithmetic;177
16.3.1;Relative Error;177
16.3.2;Rounding Error Bounds;178
16.3.2.1;Addition;178
16.3.2.2;Multiplication;178
16.3.2.3;Matrix Operations;178
16.4;Minimizing Errors;182
16.4.1;Avoid Adding a Huge Number to a Small Number;182
16.4.2;Avoid Subtracting Numbers That Are Close;182
16.5;Chapter Summary;183
16.6;Problems;185
16.6.1;MATLAB Problems;187
17;Algorithms;190
17.1;Pseudocode Examples;190
17.1.1;Inner Product of Two Vectors;191
17.1.2;Computing the Frobenius Norm;191
17.1.3;Matrix Multiplication;191
17.1.4;Block Matrices;192
17.2;Algorithm Efficiency;193
17.2.1;Smaller Flop Count Is Not Always Better;195
17.2.2;Measuring Truncation Error;195
17.3;The Solution to Upper and Lower Triangular Systems;195
17.3.1;Efficiency Analysis;197
17.4;The Thomas Algorithm;198
17.4.1;Efficiency Analysis;200
17.5;Chapter Summary;201
17.6;Problems;202
17.6.1;MATLAB Problems;204
18;Conditioning of Problems and Stability of Algorithms;208
18.1;Why Do We Need Numerical Linear Algebra?;208
18.2;Computation Error;210
18.2.1;Forward Error;210
18.2.2;Backward Error;211
18.3;Algorithm Stability;212
18.3.1;Examples of Unstable Algorithms;213
18.4;Conditioning of a Problem;214
18.5;Perturbation Analysis for Solving a Linear System;217
18.6;Properties of the Matrix Condition Number;220
18.7;MATLAB Computation of a Matrix Condition Number;222
18.8;Estimating the Condition Number;222
18.9;Introduction to Perturbation Analysis of Eigenvalue Problems;223
18.10;Chapter Summary;224
18.11;Problems;226
18.11.1;MATLAB Problems;227
19;Gaussian Elimination and the LU Decomposition;232
19.1;LU Decomposition;232
19.2;Using LU to Solve Equations;233
19.3;Elementary Row Matrices;235
19.4;Derivation of the LU Decomposition;237
19.4.1;Colon Notation;241
19.4.2;The LU Decomposition Algorithm;243
19.4.3;LU Decomposition Flop Count;244
19.5;Gaussian Elimination with Partial Pivoting;245
19.5.1;Derivation of PA=LU;246
19.5.2;Algorithm for Gaussian Elimination with Partial Pivoting;250
19.6;Using the LU Decomposition to Solve Ax_i = b_i, 1 ≤ i ≤ k;252
19.7;Finding A⁻¹;253
19.8;Stability and Efficiency of Gaussian Elimination;254
19.9;Iterative Refinement;255
19.10;Chapter Summary;257
19.11;Problems;259
19.11.1;MATLAB Problems;263
20;Linear System Applications;268
20.1;Fourier Series;268
20.1.1;The Square Wave;270
20.2;Finite Difference Approximations;271
20.2.1;Steady-State Heat and Diffusion;272
20.3;Least-Squares Polynomial Fitting;274
20.3.1;Normal Equations;276
20.4;Cubic Spline Interpolation;279
20.5;Chapter Summary;283
20.6;Problems;284
20.6.1;MATLAB Problems;287
21;Important Special Systems;290
21.1;Tridiagonal Systems;290
21.2;Symmetric Positive Definite Matrices;294
21.2.1;Applications;296
21.3;The Cholesky Decomposition;296
21.3.1;Computing the Cholesky Decomposition;297
21.3.2;Efficiency;299
21.3.3;Solving Ax = b If A Is Positive Definite;299
21.3.4;Stability;300
21.4;Chapter Summary;300
21.5;Problems;301
21.5.1;MATLAB Problems;304
22;Gram-Schmidt Orthonormalization;308
22.1;The Gram-Schmidt Process;308
22.2;Numerical Stability of the Gram-Schmidt Process;311
22.3;The QR Decomposition;314
22.3.1;Efficiency;316
22.3.2;Stability;317
22.4;Applications of the QR Decomposition;317
22.4.1;Computing the Determinant;318
22.4.2;Finding an Orthonormal Basis for the Range of a Matrix;318
22.5;Chapter Summary;319
22.6;Problems;319
22.7;MATLAB Problems;320
23;The Singular Value Decomposition;326
23.1;The SVD Theorem;326
23.2;Using the SVD to Determine Properties of a Matrix;329
23.2.1;The Four Fundamental Subspaces of a Matrix;331
23.3;SVD and Matrix Norms;333
23.4;Geometric Interpretation of the SVD;334
23.5;Computing the SVD Using MATLAB;335
23.6;Computing A⁻¹;336
23.7;Image Compression Using the SVD;337
23.7.1;Image Compression Using MATLAB;338
23.7.2;Additional Uses;340
23.8;Final Comments;341
23.9;Chapter Summary;341
23.10;Problems;343
23.10.1;MATLAB Problems;344
24;Least-Squares Problems;348
24.1;Existence and Uniqueness of Least-Squares Solutions;349
24.1.1;Existence and Uniqueness Theorem;349
24.1.2;Normal Equations and Least-Squares Solutions;351
24.1.3;The Pseudoinverse, m ≥ n;351
24.1.4;The Pseudoinverse, m < n
List of Figures
Fig. 0.1 NLALIB hierarchy. xxv
Fig. 1.1 Matrix multiplication. 3
Fig. 1.2 Rotating the xy-plane. 7
Fig. 1.3 Rotated line. 8
Fig. 1.4 Rotate three-dimensional coordinate system. 9
Fig. 1.5 Translate a point in two dimensions. 9
Fig. 1.6 Rotate a line about a point. 10
Fig. 1.7 Rotation about an arbitrary point. 11
Fig. 1.8 Undirected graph. 12
Fig. 2.1 Polynomial passing through four points. 26
Fig. 2.2 Inconsistent equations. 31
Fig. 2.3 Truss. 38
Fig. 2.4 Electrical circuit. 39
Fig. 2.5 Truss problem. 45
Fig. 2.6 Circuit problem. 45
Fig. 3.1 Subspace spanned by two vectors. 48
Fig. 4.1 Geometrical interpretation of the determinant. 75
Fig. 5.1 Direction of eigenvectors. 80
Fig. 5.2 Circuit with an inductor. 89
Fig. 5.3 Currents in the RL circuit. 92
Fig. 5.4 Digraph of an irreducible matrix. 93
Fig. 5.5 Hanowa matrix. 101
Fig. 6.1 Distance between points. 104
Fig. 6.2 Equality, addition, and subtraction of vectors. 104
Fig. 6.3 Scalar multiplication of vectors. 104
Fig. 6.4 Vector length. 106
Fig. 6.5 Geometric interpretation of the inner product. 106
Fig. 6.6 Law of cosines. 106
Fig. 6.7 Triangle inequality. 112
Fig. 6.8 Signal comparison. 112
Fig. 6.9 Projection of one vector onto another. 115
Fig. 7.1 Effect of an orthogonal transformation on a vector. 122
Fig. 7.2 Spherical coordinates. 123
Fig. 7.3 Orthonormal basis for spherical coordinates. 124
Fig. 7.4 Point in spherical coordinate basis and Cartesian coordinates. 125
Fig. 7.5 Function specified in spherical coordinates. 126
Fig. 7.6 Effect of a matrix on vectors. 128
Fig. 7.7 Unit spheres in three norms. 129
Fig. 7.8 Image of the unit circle. 133
Fig. 8.1 Floating-point number system. 149
Fig. 8.2 Map of IEEE double-precision floating-point. 150
Fig. 9.1 Matrix multiplication. 165
Fig. 10.1 Forward and backward errors. 184
Fig. 10.2 The Wilkinson polynomial. 187
Fig. 10.3 Ill-conditioned Cauchy problem. 188
Fig. 10.4 Conditioning of a problem. 189
Fig. 11.1 LU decomposition of a matrix. 206
Fig. 11.2 k × k submatrix. 215
Fig. 11.3 Gaussian elimination flop count. 217
Fig. 12.1 Square wave with period 2π. 244
Fig. 12.2 Fourier series converging to a square wave. 244
Fig. 12.3 The heat equation: a thin rod insulated on its sides. 245
Fig. 12.4 Numerical solution of the heat equation: subdivisions of the x and t axes. 245
Fig. 12.5 Numerical solution of the heat equation: locally related points in the grid. 246
Fig. 12.6 Grid for the numerical solution of the heat equation. 246
Fig. 12.7 Graph of the solution for the heat equation problem. 247
Fig. 12.8 Linear least-squares approximation. 250
Fig. 12.9 Quadratic least-squares approximation. 251
Fig. 12.10 Estimating absolute zero. 252
Fig. 12.11 Linear interpolation. 253
Fig. 12.12 Cubic splines. 253
Fig. 12.13 Cubic spline approximation. 256
Fig. 12.14 Sawtooth wave with period 2π. 258
Fig. 13.1 Conductance matrix. 270
Fig. 14.1 Vector orthogonal projection. 282
Fig. 14.2 Removing the orthogonal projection. 282
Fig. 14.3 Result of the first three steps of Gram-Schmidt. 283
Fig. 15.1 The four fundamental subspaces of a matrix. 305
Fig. 15.2 SVD rotation and distortion. 308
Fig. 15.3 (a) Lena (512 × 512) and (b) Lena using 35 modes. 312
Fig. 15.4 Lena using 125 modes. 312
Fig. 15.5 Singular value graph of Lena. 313
Fig. 15.6 SVD image capture. 314
Fig. 16.1 Geometric interpretation of the least-squares solution. 322
Fig. 16.2 An overdetermined system. 322
Fig. 16.3 Least-squares estimate for the power function. 329
Fig. 16.4 The reduced SVD for a full rank matrix. 330
Fig. 16.5 Velocity of an enzymatic reaction. 332
Fig. 16.6 Underdetermined system. 339
Fig. 17.1 Givens matrix. 353
Fig. 17.2 Givens rotation. 354
Fig. 17.3 Householder reflection. 363
Fig. 17.4 Linear combination associated with Householder reflection. 363
Fig. 17.5 Householder reflection to a multiple of e1. 366
Fig. 17.6 Transforming an m × n matrix to upper triangular form using Householder reflections. 369
Fig. 17.7 Householder reflections and submatrices. 369
Fig. 17.8 Householder reflection for a submatrix. 369
Fig. 18.1 Tacoma Narrows Bridge collapse. 380
Fig. 18.2 Mass-spring system. 380
Fig. 18.3 Solution to a system of ordinary differential equations. 382
Fig. 18.4 Populations using the Leslie matrix. 387
Fig. 18.5 Column buckling. 387
Fig. 18.6 Deflection curves for critical loads P1, P2, and P3. 389
Fig. 18.7 Reduced Hessenberg matrix. 401
Fig. 18.8 Inductive step in Schur’s triangularization. 407
Fig. 18.9 Schur’s triangularization. 407
Fig. 18.10 Eigenvalues of a 2 × 2 matrix as shifts. 413
Fig. 18.11 Springs problem. 430
Fig. 19.1 Bisection. 454
Fig. 19.2 Interlacing. 454
Fig. 19.3 Bisection method: λ_k located to the left. 456
Fig. 19.4 Bisection method: λ_k located to the right. 457
Fig. 19.5 Bisection and multiple eigenvalues. 458
Fig. 19.6 Secular equation. 460
Fig. 20.1 SOR spectral radius. 479
Fig. 20.2 Region in the plane. 479
Fig. 20.3 Five-point stencil. 480
Fig. 20.4 Poisson’s equation. (a) Approximate solution and (b) analytical solution. 481
Fig. 20.5 One-dimensional Poisson equation grid. 484
Fig. 20.6 One-dimensional red-black GS. 486
Fig. 21.1 Examples of...




