# Eigenvalue, eigenvector and eigenspace

Eigenvalues, eigenvectors and eigenspaces are properties of matrices. They have applications in many fields, such as dynamic analysis and uncertainty representation methods.

In general, when a matrix acts on a vector, both the direction and the length of the vector change. When a matrix acts on one of its eigenvectors, however, the direction remains unchanged (or is reversed) and only the length of the vector changes. The associated scaling factor is called the eigenvalue.
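This behavior can be illustrated with a small NumPy sketch (the matrix and vectors are chosen for illustration only): the matrix merely scales its eigenvector, but changes the direction of a generic vector.

```python
import numpy as np

# A simple symmetric 2x2 matrix; (1, 1) is one of its eigenvector directions.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([1.0, 1.0])   # eigenvector of A
w = np.array([1.0, 0.0])   # generic vector

Av = A @ v   # same direction as v, scaled by the eigenvalue 3
Aw = A @ w   # direction has changed
```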

## Classical eigenvalue problem

The classical eigenvalue problem is defined as

$$\mathbf{A}\boldsymbol{\phi}_n = \lambda_n \boldsymbol{\phi}_n, \qquad n=1,\ldots,N$$

where $\mathbf{A}$ is an $N\times N$ matrix, $\boldsymbol{\phi}_n$ is called an eigenvector and $\lambda_n$ is the corresponding eigenvalue. All vectors with the same direction as $\boldsymbol{\phi}_n$ are eigenvectors as well, and the same eigenvalue $\lambda_n$ is assigned to them. Together, these vectors form the so-called eigenspace of this eigenvalue.

For a real, symmetric matrix $\mathbf{A}$, all eigenvalues $\lambda_n$, $n=1,\ldots,N$, are real numbers, each $\lambda_n$ has an associated eigenvector $\boldsymbol{\phi}_n$, and the eigenvectors can be normalized such that the following orthogonality properties hold:

$$\boldsymbol{\phi}_n^T \boldsymbol{\phi}_n = 1 \quad \text{and} \quad \boldsymbol{\phi}_n^T \boldsymbol{\phi}_m = 0 \quad \text{for} \quad n\ne m$$
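These properties can be checked numerically. The following sketch (using NumPy; the test matrix is an arbitrary assumption) solves the classical problem for a symmetric matrix and verifies both the eigenvalue equation and the orthonormality of the eigenvectors:

```python
import numpy as np

# A real, symmetric test matrix (values chosen for illustration).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.linalg.eigh is tailored to symmetric/Hermitian matrices: it returns
# real eigenvalues (in ascending order) and orthonormal eigenvectors.
eigenvalues, Phi = np.linalg.eigh(A)

# Each column Phi[:, n] satisfies A @ phi_n = lambda_n * phi_n ...
residual = A @ Phi - Phi * eigenvalues
# ... and the eigenvectors are orthonormal: Phi^T Phi = I.
gram = Phi.T @ Phi
```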

## General eigenvalue problem

The general eigenvalue problem is defined as

$$\mathbf{A}\boldsymbol{\phi}_n = \lambda_n \mathbf{B}\boldsymbol{\phi}_n, \qquad n=1,\ldots,N$$

where $\mathbf{A}$ and $\mathbf{B}$ are positive semidefinite matrices of size $N\times N$. In the case of symmetric matrices, the eigenvalues are positive and the eigenvectors are orthogonal with respect to the matrix $\mathbf{B}$, i.e.

$$\boldsymbol{\phi}_n^T \mathbf{B}\boldsymbol{\phi}_n = 1 \quad \text{and} \quad \boldsymbol{\phi}_n^T \mathbf{B}\boldsymbol{\phi}_m = 0 \quad \text{for} \quad n\ne m$$
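One standard way to solve the general problem (a sketch, not the only approach) is to reduce it to a classical problem using the Cholesky factorization $\mathbf{B} = \mathbf{L}\mathbf{L}^T$: the transformed problem $(\mathbf{L}^{-1}\mathbf{A}\mathbf{L}^{-T})\mathbf{y} = \lambda\mathbf{y}$ has the same eigenvalues, and $\boldsymbol{\phi} = \mathbf{L}^{-T}\mathbf{y}$. The example matrices below are assumptions for illustration:

```python
import numpy as np

# Generalized problem A phi = lambda B phi, with symmetric A and
# symmetric positive definite B (example matrices are assumptions).
A = np.array([[6.0, 2.0],
              [2.0, 5.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Reduce to a classical problem via the Cholesky factor B = L L^T:
# (L^-1 A L^-T) y = lambda y, with phi = L^-T y.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T

eigenvalues, Y = np.linalg.eigh(C)
Phi = Linv.T @ Y              # back-transform the eigenvectors

# B-orthonormality of the eigenvectors: Phi^T B Phi = I
gram_B = Phi.T @ B @ Phi
```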

## Solving linear eigenvalue problems

The different methods for the numerical solution of the eigenvalue problem can be divided into

1. Vector iteration techniques
2. Transformation methods
3. Polynomial iteration techniques
4. Sturm sequence method

The polynomial iteration techniques and the Sturm sequence method become costly when applied to larger matrices. Vector iteration methods in their original form converge to the lowest eigenvalue and its eigenvector; thus, they are often not applied in their basic form. Nevertheless, the idea of vector iteration is used in eigenvalue solvers for very large systems, e.g. the subspace iteration method.
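The basic vector iteration mentioned above can be sketched as inverse iteration, which converges to the lowest eigenpair of a symmetric positive definite matrix (the matrix and iteration count here are illustrative assumptions):

```python
import numpy as np

# Inverse vector iteration: the basic scheme converging to the
# smallest eigenvalue of a symmetric positive definite matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

x = np.ones(3)                      # arbitrary start vector
for _ in range(50):
    x = np.linalg.solve(A, x)       # one inverse iteration step
    x = x / np.linalg.norm(x)       # normalize to avoid under-/overflow

# The Rayleigh quotient estimates the smallest eigenvalue.
lam_min = x @ A @ x
```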

Among the basic techniques, the transformation methods are applied most often, especially for medium-sized systems. These methods solve for all eigenpairs at the same time. One such method is the Jacobi method; another is the QR iteration method, which is mostly applied in combination with a Householder tridiagonalization. Generally, the latter is faster and is therefore implemented in many programming packages.
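The idea behind QR iteration can be sketched in a few lines (an unshifted, minimal version; production solvers first tridiagonalize the matrix via Householder reflections and use shifts for speed, and the test matrix is an assumption):

```python
import numpy as np

# Basic (unshifted) QR iteration: for a symmetric matrix, the iterates
# A_k converge to a diagonal matrix holding the eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

Ak = A.copy()
for _ in range(100):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                 # similarity transform: R Q = Q^T A_k Q

eigenvalue_estimates = np.sort(np.diag(Ak))
```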

Of major interest are solution methods for large structural systems. Two methods capable of solving an eigenvalue problem of considerable size are the subspace iteration method and the Lanczos method. Detailed information about the different eigenvalue solvers can be found in the literature.