FMS contains subroutines which are useful for determining some of the lowest eigenvalues and eigenvectors of the following systems:
[A]{X} = [M]{X}[E]
where:
[A] = a known N-by-N matrix stored by FMS,
{X} = an N-by-M group of eigenvectors to be determined,
[M] = a known N-by-N matrix with the same sparsity as [A], or a diagonal matrix,
[E] = a diagonal M-by-M matrix of eigenvalues to be determined.
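For a small dense case, the quantities in this relation can be illustrated directly with a standard library routine; FMS addresses the same problem when [A] and [M] are too large to hold in memory. The sketch below uses hypothetical 3-by-3 matrices and SciPy's generalized symmetric eigensolver purely for illustration.

    import numpy as np
    from scipy.linalg import eigh

    # Small symmetric test matrices standing in for the large FMS matrices [A] and [M].
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.diag([2.0, 1.0, 1.0])            # a diagonal [M], as permitted above

    E, X = eigh(A, M)                        # all eigenpairs of [A]{X} = [M]{X}[E]
    m = 2
    E, X = E[:m], X[:, :m]                   # keep the lowest m pairs

    print(np.allclose(A @ X, M @ X @ np.diag(E)))   # the defining relation holds: True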
Numerous techniques are currently available for computing {X} and [E]. No single technique is best for all possible matrices [A] and [M]. For this reason, FMS offers a building block approach for computing eigenvalues.
Two of the most important tools used for computing eigenvalues have already been presented: matrix factoring and vector solution. You can factor the matrices [A] and [M] by themselves or compute the factors for the shifted matrix [A] - c[M]. The processing speed of FMS lets you refactor the matrix more frequently than is practical in most programs. You can use the FMS parameter NUMSCG, which counts the number of diagonal sign changes during the factoring process, to determine the number of eigenvalues below the current shift point.
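The sign count rests on the Sturm sequence property: when the shifted matrix [A] - c[M] is factored, the number of negative pivots equals the number of eigenvalues below the shift point c. The following sketch checks this property on a small dense test problem, with SciPy standing in for the FMS out-of-core factoring subroutine; the matrices and the shift value are hypothetical.

    import numpy as np
    from scipy.linalg import ldl, eigh

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.diag([2.0, 1.0, 1.0])
    c = 2.5                                  # hypothetical shift point

    # Factor [A] - c[M] and count negative pivots (the sign-change count FMS reports in NUMSCG).
    _, D, _ = ldl(A - c * M)
    n_below = int(np.sum(np.linalg.eigvalsh(D) < 0.0))

    # Cross-check against a direct dense solution: both counts are 2 for these matrices.
    E = eigh(A, M, eigvals_only=True)
    print(n_below, int(np.sum(E < c)))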
One technique that illustrates how to use FMS for computing eigenvalues of large matrices is subspace iteration. In this technique, FMS projects the original problem into a subspace of considerably smaller dimension.
You can easily compute the eigenvectors in the subspace and project them back to the problem space. You repeat this process until the eigenvalues in the problem space converge. This technique is especially useful for large systems that are resident on disk.
The following figure shows a flowchart of the subspace iteration algorithm and the FMS subroutines that you can use at each stage of the analysis.
You begin the process by selecting initial values for the {X} vectors, which define the subspace. The first set of {Y} vectors is computed from {Y} = [M]{X}, which the FMS matrix-vectors multiply subroutine performs. The matrix [A], which can optionally be shifted, is then factored using the appropriate FMS factoring subroutine.
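A minimal dense sketch of this start-up step follows, with NumPy and SciPy standing in for the FMS multiply and factoring subroutines. The test matrices, the subspace dimension m, and the shift c are hypothetical choices made only for illustration.

    import numpy as np
    from scipy.linalg import lu_factor

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.diag([2.0, 1.0, 1.0])

    n, m = A.shape[0], 2                     # m, the subspace dimension, is a user choice
    c = 0.0                                  # optional shift; zero when [A] can be factored directly
    X = np.random.default_rng(0).standard_normal((n, m))   # initial {X} vectors
    Y = M @ X                                # {Y} = [M]{X}   (matrix-vectors multiply)
    factors = lu_factor(A - c * M)           # factor the (optionally shifted) matrix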
The first step in the iterative loop is to project the problem into the subspace by forming the matrices [A*] and [M*]. This process involves solving for new {X} vectors using the {Y} vectors as right-hand sides. The FMS vectors-vectors multiply subroutine computes [A*] = {X}^T{Y} to obtain the matrix [A*]. The FMS matrix-vectors multiply subroutine then computes {Y} = [M]{X} to obtain a new set of {Y} values. The FMS vectors-vectors multiply subroutine computes [M*] = {X}^T{Y} to obtain the subspace matrix [M*].
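The same projection step, written as a small function that continues the sketch above; lu_solve stands in for the FMS vector solution subroutine, and the names are illustrative only.

    from scipy.linalg import lu_solve

    def project_to_subspace(factors, M, Y):
        """One projection step: returns the new {X} and {Y} plus the subspace matrices."""
        X = lu_solve(factors, Y)    # solve for new {X} using the {Y} vectors as right-hand sides
        Astar = X.T @ Y             # [A*] = {X}^T{Y}   (vectors-vectors multiply)
        Y = M @ X                   # {Y} = [M]{X}      (matrix-vectors multiply)
        Mstar = X.T @ Y             # [M*] = {X}^T{Y}   (vectors-vectors multiply)
        return X, Y, Astar, Mstar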
An important feature of the above process is that it preserves the sparsity and symmetry of the large matrices [A] and [M]. Another important feature for FMS is that it can process a large number of vectors in parallel.
The next step is to solve the following problem for the eigenvectors [Q] in the subspace:
[A*][Q] = [M*][Q][E]
You can perform this calculation using standard math library subroutines designed to solve eigenvalue problems for full memory resident matrices.
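Because [A*] and [M*] are small M-by-M matrices that fit in memory, any standard dense routine handles this step. Continuing the sketch, SciPy's eigh serves as one example of such a library routine.

    from scipy.linalg import eigh

    def solve_subspace_problem(Astar, Mstar):
        """Solve the dense subspace problem [A*][Q] = [M*][Q][E] with a standard routine."""
        E, Q = eigh(Astar, Mstar)   # eigenvalues returned in ascending order
        return E, Q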
After you compute the eigenvectors in the subspace, project them back to the problem space by computing {Y}={Y}[Q]. You can use the FMS vectors-matrix multiply subroutine to perform this task.
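Continuing the sketch, the back projection is a single multiply; the FMS vectors-matrix multiply subroutine performs the equivalent operation on disk-resident vectors.

    def project_back(Y, Q):
        """Project the subspace eigenvectors back to the problem space: {Y} = {Y}[Q]."""
        return Y @ Q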
Repeat the above process until you obtain an acceptable set of eigenvectors in the problem space. As a final check, shift and factor the matrix to determine if all required eigenvalues have been computed.
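The sketch below strings these pieces into a complete loop on the small test problem, including the final shift-and-factor check. NumPy and SciPy replace the FMS out-of-core subroutines, and the subspace size, tolerance, and iteration limit are illustrative choices, not recommendations.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve, eigh, ldl

    # Small symmetric test problem standing in for the large disk-resident matrices.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    M = np.diag([2.0, 1.0, 1.0])

    n, m = A.shape[0], 2                       # subspace dimension (a user choice)
    X = np.random.default_rng(0).standard_normal((n, m))
    Y = M @ X                                  # {Y} = [M]{X}
    factors = lu_factor(A)                     # factor [A] (no shift in this example)

    E_old = np.zeros(m)
    for iteration in range(50):
        X = lu_solve(factors, Y)               # solve [A]{X} = {Y} for new {X}
        Astar = X.T @ Y                        # [A*] = {X}^T{Y}
        Y = M @ X                              # {Y} = [M]{X}
        Mstar = X.T @ Y                        # [M*] = {X}^T{Y}
        E, Q = eigh(Astar, Mstar)              # subspace problem [A*][Q] = [M*][Q][E]
        Y = Y @ Q                              # project back: {Y} = {Y}[Q]
        if np.max(np.abs(E - E_old)) <= 1.0e-10 * np.max(np.abs(E)):
            break                              # eigenvalue estimates have converged
        E_old = E

    X = X @ Q                                  # eigenvectors in the problem space
    print(E)                                   # lowest m eigenvalues, ascending

    # Final check: factor at a shift just above the largest converged eigenvalue and
    # verify that the sign count equals the number of eigenvalues computed.
    c = E[-1] * 1.001
    _, D, _ = ldl(A - c * M)
    print(int(np.sum(np.linalg.eigvalsh(D) < 0.0)) == m)   # True if none were missed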
FMS intentionally leaves the control of this calculation to your application program. You must specify the dimension of the subspace, the initial starting values, and the convergence criteria. Making these decisions properly usually involves insight into the particular problem you are solving.