Eigenvalue, eigenvector and eigenspace
Eigenvalues, eigenvectors and eigenspaces are properties of square matrices. They have applications in different fields, such as dynamic analysis and uncertainty representation methods.
In general, when a matrix acts on a vector, both the direction and the length of the vector change. When a matrix acts on one of its eigenvectors, the direction is unchanged and only the length of the vector is scaled. The associated scaling factor is called the eigenvalue.
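This behaviour can be checked directly; a minimal sketch in plain Python, where the matrix and vectors are illustrative choices:

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) with a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, 1], [1, 2]]

# [1, 1] is an eigenvector of A: the product keeps the direction,
# only the length is scaled (here by the eigenvalue 3)
print(mat_vec(A, [1, 1]))   # [3, 3]

# [1, 0] is not an eigenvector: the direction changes
print(mat_vec(A, [1, 0]))   # [2, 1]
```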
Classical eigenvalue problem
The classical eigenvalue problem is defined as

$$A \phi = \lambda \phi,$$

where $A$ is an $n \times n$ matrix, $\phi$ is called an eigenvector and $\lambda$ is the corresponding eigenvalue. All vectors with the same direction as $\phi$ are also called eigenvectors, and the same eigenvalue is assigned to them. These vectors form the so-called eigenspace of this eigenvalue.
For a real, symmetric matrix $A$, all eigenvalues $\lambda_i$, $i = 1, \dots, n$, are real numbers, each has an associated eigenvector $\phi_i$, and the following orthogonality properties hold:

$$\phi_i^T \phi_j = \delta_{ij}, \qquad \phi_i^T A \phi_j = \lambda_i \, \delta_{ij}.$$
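The orthogonality of eigenvectors belonging to distinct eigenvalues can be verified numerically; a minimal sketch for an illustrative 2x2 symmetric matrix with eigenpairs found by hand:

```python
# Illustrative symmetric matrix with eigenpairs determined by hand
A = [[2, 1], [1, 2]]
lam1, phi1 = 3, [1, 1]    # A phi1 = [3, 3]  = 3 * phi1
lam2, phi2 = 1, [1, -1]   # A phi2 = [1, -1] = 1 * phi2

# eigenvectors to distinct eigenvalues of a symmetric matrix are orthogonal
dot = sum(a * b for a, b in zip(phi1, phi2))
print(dot)   # 0
```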
General eigenvalue problem
The general eigenvalue problem is defined as

$$A \phi = \lambda B \phi,$$

where $A$ and $B$ are positive semidefinite matrices of size $n \times n$. For symmetric matrices the eigenvalues are real and non-negative, and the eigenvectors are orthogonal with respect to the matrix $B$, i.e.

$$\phi_i^T B \phi_j = \delta_{ij}.$$
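The $B$-orthogonality can be checked on a small example; a sketch in plain Python with a hypothetical 2x2 pair $A$, $B$, where the eigenvalues follow from the characteristic equation det(A - lambda B) = 0:

```python
import math

# Hypothetical symmetric pair for A phi = lambda B phi
A = [[4, 1], [1, 3]]
B = [[2, 0], [0, 1]]

# det(A - lambda B) = 0  =>  2 lambda^2 - 10 lambda + 11 = 0
lam1 = (5 + math.sqrt(3)) / 2
lam2 = (5 - math.sqrt(3)) / 2

# first row of (A - lambda B) phi = 0 gives phi = [1, 2 lambda - 4]
phi1 = [1.0, 2 * lam1 - 4]
phi2 = [1.0, 2 * lam2 - 4]

# B-orthogonality: phi1^T B phi2 should vanish
Bphi2 = [B[0][0] * phi2[0] + B[0][1] * phi2[1],
         B[1][0] * phi2[0] + B[1][1] * phi2[1]]
g = phi1[0] * Bphi2[0] + phi1[1] * Bphi2[1]
print(g)   # ~0
```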
Solving linear eigenvalue problems
The different methods for the numerical solution of the eigenvalue problem can be divided into
- Vector iteration techniques
- Transformation methods
- Polynomial iteration techniques
- Sturm sequence method
The polynomial iteration techniques and the Sturm sequence method become costly when applied to larger matrices. Vector iteration methods in their original form converge to a single eigenpair only: forward iteration to the eigenvector with the largest eigenvalue, inverse iteration to the one with the lowest. Thus, they are often not applied in their basic form. Nevertheless, the idea of vector iteration is applied in eigenvalue solvers for very large systems, such as the subspace iteration method.
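The idea of basic (forward) vector iteration can be sketched in a few lines of plain Python; the test matrix, iteration count and start vector are illustrative choices:

```python
def power_iteration(A, iters=200):
    """Forward vector iteration: repeated multiplication by A turns the
    iterate towards the eigenvector of the largest eigenvalue."""
    n = len(A)
    v = [1.0] * n                      # arbitrary start vector
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]      # normalize to avoid overflow
    # Rayleigh quotient gives the eigenvalue estimate
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v

# largest eigenvalue of this matrix is 2 + sqrt(2)
lam, v = power_iteration([[2, 1, 0], [1, 2, 1], [0, 1, 2]])
print(lam)
```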
Among the basic techniques, the transformation methods are applied most often, especially for medium-sized systems. These methods solve for all eigenpairs at the same time. One such method is the Jacobi method; another is the QR iteration method, which is mostly applied in combination with a Householder tridiagonalization. The latter is generally faster and is therefore implemented in many programming packages.
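The classical Jacobi method can be sketched in plain Python: each step zeroes the largest off-diagonal entry of a symmetric matrix with a plane rotation, so that the diagonal converges to the eigenvalues. Matrix size, rotation limit and tolerance below are illustrative:

```python
import math

def jacobi_eigenvalues(A, max_rotations=100, tol=1e-12):
    """Classical Jacobi method for a symmetric matrix A (list of rows):
    repeatedly annihilate the largest off-diagonal entry via A <- G^T A G."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    for _ in range(max_rotations):
        # locate the largest off-diagonal element
        p, q = 0, 1
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > abs(A[p][q]):
                    p, q = i, j
        if abs(A[p][q]) < tol:
            break                      # matrix is numerically diagonal
        # rotation angle that zeroes A[p][q]
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        # columns p and q of A G
        for k in range(n):
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp - s * akq
            A[k][q] = s * akp + c * akq
        # rows p and q of G^T (A G)
        for k in range(n):
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk - s * aqk
            A[q][k] = s * apk + c * aqk
        A[p][q] = A[q][p] = 0.0
    return sorted(A[i][i] for i in range(n))

# eigenvalues of this matrix are 2 - sqrt(2), 2, 2 + sqrt(2)
vals = jacobi_eigenvalues([[2, 1, 0], [1, 2, 1], [0, 1, 2]])
print(vals)
```

Each rotation destroys previously created zeros, but the sum of squared off-diagonal entries decreases monotonically, which is why the iteration converges.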
Of major interest are solution methods for large structural systems. Two methods capable of solving eigenvalue problems of considerable size are the subspace iteration method and the Lanczos method. Detailed information about the different eigenvalue solvers can be found, e.g., in the references listed below.
- C.G.J. Jacobi. Über ein leichtes Verfahren die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen. Crelle's Journal, 30: 51-54, 1846.
- J.G.F. Francis, The QR Transformation, Part I. The Computer Journal, 4(3): 265-271, 1961. 
- J.G.F. Francis, The QR Transformation, Part II. The Computer Journal, 4(4): 332-345, 1962. 
- A.S. Householder. The approximate solution of matrix problems. Journal of the ACM (JACM), 5(3): 205-243, 1958. 
- A.S. Householder. Generated error in rotational tridiagonalization. Journal of the ACM (JACM), 5(3): 335-338, 1958. 
- K.-J. Bathe and E.L. Wilson. Large eigenvalue problems in dynamic analysis. Journal of the Engineering Mechanics Division, ASCE, 6: 1471-1485, 1972.
- C. Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of Research of the National Bureau of Standards. 45: 255-282, 1950. 
- G.S. Székely. Numerical methods for eigenvalue and eigenvector solutions of large structural systems. Internal working report no. 38-96. Institute of Engineering Mechanics. University of Innsbruck. November 1996.
- K.-J. Bathe. Finite Element Procedures in Engineering Analysis. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1982, ISBN 0133173054.