
Lanczos Method for Eigenvalue Analysis

Category: Structural Analysis > Modal Analysis | Consolidated Guide 2026-04-06

Theory & Physics

Overview

🧑‍🎓

Professor, I always see "Lanczos method" flash by when running modal analysis in FEM software. What's actually happening underneath?

🎓

Modal analysis requires computing the natural frequencies and mode shapes of a structure — mathematically a generalized eigenvalue problem of order $n \times n$, where $n$ is the number of degrees of freedom. In a large FEM model, $n$ can exceed one million. The Lanczos method uses Krylov subspace iteration to extract only the lowest eigenvalues — the low-frequency modes you actually care about — very efficiently from that enormous matrix, without ever computing all $n$ eigenpairs. Cornelius Lanczos proposed it in 1950, but its power wasn't fully appreciated until the 1970s–80s, when large-scale computing made it the standard algorithm in every serious FEM solver.

🧑‍🎓

Why is it sufficient to extract only the low-frequency modes? What about high-frequency modes?

🎓

In most engineering problems, structural response is dominated by the lower modes. For noise and vibration (NV) analysis on a passenger car, the relevant frequency range is roughly 20–500 Hz — modes above that contribute negligibly to the overall response. The practical standard is the modal effective mass ratio: keep extracting modes until the cumulative ratio exceeds 90% in each principal direction. In practice, 50–300 modes usually reach that threshold, so you're working with a drastically reduced eigenvalue problem even for a million-DOF model.

Generalized Eigenvalue Problem

🧑‍🎓

What does the modal analysis eigenvalue problem look like mathematically?

🎓

Starting from the undamped FEM equations of motion:

$$ [M]\{\ddot{u}\} + [K]\{u\} = \{0\} $$

Substituting the harmonic ansatz $\{u\} = \{\phi\}e^{j\omega t}$ yields the generalized eigenvalue problem:

$$ [K]\{\phi\} = \lambda [M]\{\phi\}, \qquad \lambda = \omega^2 $$

$[K]$ is the stiffness matrix (symmetric positive semi-definite), $[M]$ is the mass matrix (symmetric positive definite), $\omega$ is the natural angular frequency, and $\{\phi\}$ is the mode shape (eigenvector). The full system has $n$ eigenpairs $(\lambda_i, \phi_i)$, but the Lanczos method efficiently finds only the smallest $m \ll n$ of them — the physically relevant low-frequency modes.

The eigenvectors satisfy mass-orthonormality: $\{\phi_i\}^T[M]\{\phi_j\} = \delta_{ij}$ and stiffness-orthogonality: $\{\phi_i\}^T[K]\{\phi_j\} = \lambda_i\delta_{ij}$. These properties allow the equations of motion to be decoupled into $m$ independent SDOF oscillators.
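
For example, expanding the response in the modes, $\{u\} = \sum_i \{\phi_i\} q_i(t)$, and premultiplying the equations of motion by $\{\phi_j\}^T$ collapses the coupled system into one scalar oscillator per mode:

$$ \ddot{q}_j + \lambda_j q_j = 0, \qquad j = 1, \ldots, m $$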

The Lanczos Algorithm

🧑‍🎓

Can you walk me through exactly what the Lanczos algorithm does, step by step?

🎓

The algorithm builds the Krylov subspace $\mathcal{K}_m(A, v_1) = \text{span}\{v_1, Av_1, A^2v_1, \ldots, A^{m-1}v_1\}$ where $A = K^{-1}M$ (after spectral transformation) via repeated matrix-vector products:

  1. Initialization: Choose a starting vector $v_1$ (random, or a known approximate mode shape). Normalize: $v_1 \leftarrow v_1 / \|v_1\|_M$ where $\|\cdot\|_M = \sqrt{v^T M v}$.
  2. Matrix-vector product: Compute $w = K^{-1}(Mv_j)$. This is the dominant cost — it requires solving a large sparse linear system $Kx = Mv_j$ at every iteration. An LU factorization of $K$ is pre-computed once.
  3. Compute tridiagonal coefficients: $\alpha_j = w^T M v_j$ and update $w \leftarrow w - \alpha_j v_j - \beta_{j-1} v_{j-1}$.
  4. Orthogonalize and normalize: $\beta_j = \|w\|_M$; new Lanczos vector $v_{j+1} = w / \beta_j$.
  5. Assemble tridiagonal matrix: After $m$ steps, form the $m \times m$ tridiagonal matrix $T_m$ with diagonal entries $\alpha_j$ and sub-diagonal entries $\beta_j$.
  6. Solve reduced problem: Compute the eigenvalues $\theta_j$ and eigenvectors $s_j$ of the small $T_m$ matrix (trivial cost). When $|\beta_m s_j(m)| \le \text{tol}$, eigenvalue $\theta_j$ has converged.
  7. Back-transform: Ritz vectors $\phi_j = V_m s_j$ (where $V_m = [v_1, \ldots, v_m]$) give the mode shape approximations in the original $n$-DOF space.

The key insight: the algorithm is doing repeated solves $Kx = b$ (cheap after the one-time LU factorization) plus orthogonalization, and the eigenvalue problem is reduced from $n \times n$ to $m \times m$ — which is trivially solvable. The sketch below puts these steps into code.
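
To make the steps concrete, here is a minimal NumPy/SciPy sketch of shift-invert Lanczos for $[K]\{\phi\} = \lambda[M]\{\phi\}$ with full reorthogonalization. It mirrors the numbered steps above but is illustrative only: the function name `lanczos_gevp`, the fixed step count `m`, and the random starting vector are assumptions, and production solvers add blocking, restarts, and convergence monitoring on top.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lanczos_gevp(K, M, m, sigma=0.0, seed=0):
    """Approximate eigenpairs of K x = lam M x nearest the shift sigma."""
    n = K.shape[0]
    lu = spla.splu(sp.csc_matrix(K - sigma * M))     # one-time LU factorization (step 2 prep)
    V = np.zeros((n, m + 1))                         # M-orthonormal Lanczos basis
    alpha, beta = np.zeros(m), np.zeros(m)
    v = np.random.default_rng(seed).standard_normal(n)
    V[:, 0] = v / np.sqrt(v @ (M @ v))               # step 1: M-normalized start vector
    for j in range(m):
        w = lu.solve(M @ V[:, j])                    # step 2: solve (K - sigma M) w = M v_j
        alpha[j] = w @ (M @ V[:, j])                 # step 3: diagonal coefficient
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ (M @ w))  # full reorthogonalization (see later section)
        beta[j] = np.sqrt(w @ (M @ w))               # step 4: off-diagonal coefficient
        V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    theta, S = np.linalg.eigh(T)                     # steps 5-6: small tridiagonal eigenproblem
    lam = sigma + 1.0 / theta                        # undo the shift-invert spectral map
    phi = V[:, :m] @ S                               # step 7: Ritz vectors in the full space
    order = np.argsort(lam)
    return lam[order], phi[:, order]
```

With $\sigma = 0$ the smallest eigenvalues converge first, and natural frequencies in Hz follow from $f_i = \sqrt{\lambda_i}/(2\pi)$.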

Spectral Shifting

🧑‍🎓

I've heard Lanczos always finds the smallest eigenvalues first. What if I need modes in a specific higher frequency range?

🎓

That's where spectral shifting comes in. The shifted eigenvalue problem is:

$$ (K - \sigma M)\{\phi\} = \mu M\{\phi\}, \qquad \mu = \lambda - \sigma $$

By choosing shift $\sigma$ near the target frequency range ($\sigma = \omega_{target}^2$), the Krylov subspace preferentially converges to eigenvalues near $\sigma$. The LU factorization is performed on the shifted matrix $(K - \sigma M)$ instead of $K$. Multiple shift-and-extract passes can be used to systematically cover a broad frequency range. This is how Nastran's ASHIFT/BSHIFT and Abaqus's frequency range specification work internally. Caution: if $\sigma$ falls exactly on an eigenvalue, $(K - \sigma M)$ becomes singular — choose shifts slightly off from expected eigenvalues.
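
As a quick illustration of choosing the shift, reusing the hypothetical `lanczos_gevp` sketch from earlier (all variable names are illustrative):

```python
import numpy as np

f_target = 200.0                         # center of the target band, in Hz
sigma = (2.0 * np.pi * f_target) ** 2    # shift in (rad/s)^2, since lambda = omega^2
# lam, phi = lanczos_gevp(K, M, m=60, sigma=sigma)  # modes near 200 Hz converge first
# f_hz = np.sqrt(np.abs(lam)) / (2.0 * np.pi)       # eigenvalues back to Hz
```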

Numerical Methods & Implementation

Reorthogonalization & Numerical Stability

🧑‍🎓

I've heard the Lanczos method can become numerically unstable due to rounding errors and produce ghost eigenvalues. Is that a real concern in practice?

🎓

It's a real concern, but one that modern solvers handle well — if you're aware of it. Floating-point rounding causes the Lanczos basis vectors to gradually lose their mutual $M$-orthogonality. This allows already-converged eigenvalues to re-appear as duplicate entries in $T_m$ — the "ghost eigenvalues." Three standard countermeasures:

  • Full Reorthogonalization (FRO): At each step, reorthogonalize $v_{j+1}$ against all previous Lanczos vectors using modified Gram-Schmidt. This is safe and eliminates ghost eigenvalues, but all $m$ basis vectors must be kept in memory ($O(mn)$ storage) and the reorthogonalization work grows as $O(m^2 n)$ — expensive for large $m$.
  • Selective Reorthogonalization (SRO, Parlett-Scott 1979): Monitor the level of orthogonality; reorthogonalize only against converged Ritz vectors when a prescribed tolerance is violated. Reduces memory and compute cost significantly. Used by Nastran and Abaqus.
  • Partial Reorthogonalization (PRO): Track orthogonality loss via a recursion formula (Simon 1984); reorthogonalize only when needed. Best compromise between safety and efficiency for very large problems.

Ghost eigenvalue detection: run a Sturm sequence check. By Sylvester's law of inertia, the number of negative pivots in the $LDL^T$ factorization of $(K - \sigma M)$ equals the exact number of eigenvalues below the shift $\sigma$. If the Lanczos output has more eigenvalues in a range than the Sturm count, some are ghosts.
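
A minimal sketch of that inertia count, assuming dense matrices for clarity (the function name `count_eigs_below` is illustrative; production solvers read the inertia directly from their sparse factorization):

```python
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(K, M, sigma):
    """Number of eigenvalues of K x = lam M x strictly below sigma (M positive definite)."""
    _, D, _ = ldl(K - sigma * M)                       # symmetric indefinite LDL^T factorization
    return int(np.sum(np.linalg.eigvalsh(D) < 0.0))    # negative inertia of (K - sigma M)

# The Lanczos output in [sig_a, sig_b) should contain exactly
# count_eigs_below(K, M, sig_b) - count_eigs_below(K, M, sig_a) eigenvalues;
# a surplus indicates ghosts, a deficit indicates missed modes.
```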

Block Lanczos for Clustered Eigenvalues

🧑‍🎓

What is Block Lanczos, and when do I need it?

🎓

Standard (single-vector) Lanczos can miss eigenvalues that are numerically identical or very close together — called clustered or repeated eigenvalues. This happens in highly symmetric structures: a circular ring has many pairs of repeated frequencies; a symmetric aircraft fuselage has symmetric/antisymmetric mode pairs.

Block Lanczos uses $p$ starting vectors simultaneously (typically $p = 6$–30), producing a block tridiagonal matrix with $p \times p$ blocks. It naturally captures all eigenvalues in a cluster of multiplicity up to $p$. Additional benefits: better parallelism (each Lanczos step is a block solve), reduced sensitivity to starting vector choice, and often faster overall convergence per wall-clock hour on modern hardware. Nastran's BLOCK LANCZOS, Ansys Block Lanczos, and Abaqus Lanczos all use block variants as defaults.

Convergence Criteria

🧑‍🎓

How do I know when the Lanczos iterations have converged to the correct eigenvalues?

🎓

The primary convergence criterion for eigenvalue $\theta_j$ is the residual norm:

$$ r_j = \left\|K\phi_j - \theta_j M\phi_j\right\|_2 \le \varepsilon \cdot \theta_j \cdot \|M\|_2 $$

where $\varepsilon$ is the tolerance (typically $10^{-6}$ to $10^{-8}$). In the tridiagonal algorithm this simplifies to $|\beta_m s_j(m)| \le \varepsilon\theta_j$. A supplementary check is the Sturm sequence count: verify that the number of eigenvalues found in each frequency band matches the Sturm count. Any mismatch signals missing or ghost eigenvalues and should trigger additional iterations or a shifted restart.
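
Outside the solver, the same residual test can be evaluated directly for any extracted pair; a minimal dense sketch (the function name `is_converged` is an assumption):

```python
import numpy as np

def is_converged(K, M, theta, phi, eps=1e-8):
    """Residual test for one Ritz pair (theta, phi) of K x = lam M x."""
    r = np.linalg.norm(K @ phi - theta * (M @ phi))    # 2-norm of the eigen-residual
    return r <= eps * theta * np.linalg.norm(M, 2)     # scaled by theta and ||M||_2
```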

Practical Guide

🧑‍🎓

I'm running NV analysis on a powertrain system. How should I set up the modal analysis?

🎓

Here's the complete modal analysis workflow:

  1. Set the frequency range: Extract modes up to 2–2.5× the highest frequency of interest. For 1 kHz target, extract up to 2–2.5 kHz.
  2. Verify rigid body modes: A free-free analysis should produce exactly 6 rigid body modes (3 translations + 3 rotations) with near-zero frequencies. Fewer means the model is over-constrained; more means a mechanism (a disconnected or insufficiently connected part contributing extra zero-energy modes to the stiffness matrix).
  3. Check Modal Effective Mass Ratio: Post-process to get cumulative mass participation in the X, Y, Z directions. Target ≥ 90% in each direction. If you're far below 90%, add more modes (extend the frequency range); a sketch of this bookkeeping follows the list.
  4. Identify spurious modes: Modes with extremely low modal mass (effective mass ratio < 0.01%) are often numerical artifacts or local modes of poorly constrained components. Investigate before trusting them.
  5. Model validation: Compare natural frequencies against experimental modal analysis (EMA) data or known analytical solutions for subcomponents. A discrepancy of <5% is excellent; 5–10% is acceptable for complex assemblies with uncertain joint stiffness.
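
A minimal sketch of the effective-mass bookkeeping from step 3, assuming mass-normalized mode shapes `Phi` ($n \times m$) and an influence vector `r` equal to 1 on every translational DOF in the chosen direction and 0 elsewhere (all names are illustrative):

```python
import numpy as np

def cumulative_effective_mass_ratio(Phi, M, r):
    """Cumulative effective mass fraction, mode by mode, in the direction r."""
    gamma = Phi.T @ (M @ r)                    # modal participation factors
    m_eff = gamma ** 2                         # effective mass per mode (phi_i^T M phi_i = 1)
    return np.cumsum(m_eff) / (r @ (M @ r))    # fraction of total mass captured so far

# Keep extending the extraction until the last entry is >= 0.90 in each of
# the three translational directions.
```
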
🧑‍🎓

What's the most common mistake engineers make in modal analysis setup?

🎓

Not extracting enough modes, and then applying the results to a frequency response analysis that falls beyond the range of extracted modes. If you compute 50 modes up to 500 Hz and then use mode superposition for forced response at 600 Hz, you'll get completely wrong results — the residual stiffness correction (static residual modes or residual attachment modes) is missing. Always extract modes to at least twice your highest excitation frequency, and include static correction if you have excitation near or above the extraction limit. This mistake is extremely common in industry and produces dangerously inaccurate fatigue or vibration durability predictions.
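
For mass-normalized modes, one common form of the missing correction is the residual-flexibility (mode acceleration) term, which restores the static contribution of all truncated modes to the forced response:

$$ \{u(\omega)\} \approx \sum_{i=1}^{m} \frac{\{\phi_i\}\{\phi_i\}^T\{F\}}{\lambda_i - \omega^2} + \left([K]^{-1} - \sum_{i=1}^{m} \frac{\{\phi_i\}\{\phi_i\}^T}{\lambda_i}\right)\{F\} $$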

🧑‍🎓

For very large models — like a full vehicle FEM at 10 million DOF — are there special strategies for making the Lanczos solution efficient?

🎓

Several key strategies are used in production:

  • Component Mode Synthesis (CMS / Craig-Bampton): Decompose the model into substructures, compute modes for each independently (small Lanczos runs), then assemble a reduced coupled system. Orders of magnitude more efficient for 10M+ DOF models, and naturally enables parallel processing of substructures.
  • Multiple RHS direct solver: The Lanczos LU factorization is done once; all subsequent solves are back-substitutions. Using a multifrontal solver (MUMPS, PARDISO, SuperLU) allows batch back-substitution of block vectors efficiently.
  • Frequency windowing with multiple shifts: Divide the target range into windows, run Lanczos with shifts centered in each window, then merge and de-duplicate using Sturm sequence counting.
  • AMSES (Automated Multi-level Sub-structuring Eigensolver): OptiStruct's implementation combines automated multi-level substructuring with Lanczos to achieve near-linear scaling for extremely large problems (100M+ DOF).

Software Comparison

🧑‍🎓

How does Lanczos implementation compare across major FEM packages?

🎓

Here's a comparison of major tools:

| Tool | Solver / Method | Default Algorithm | Strengths |
| --- | --- | --- | --- |
| MSC Nastran (SOL 103) | LANCZOS, BLOCK_LANCZOS | Block Lanczos | Industry gold standard; Automated CMS (ACMS) for ultra-large models |
| Ansys Mechanical | Block Lanczos, QRDAMP, Supernode | Block Lanczos | Strong for large NV models; SPOINT and supernode methods for speed |
| Abaqus/Standard | Lanczos (*FREQUENCY, EIGENSOLVER=LANCZOS) | Lanczos (block capable) | SIM architecture for large-scale coupled problems; seamless with nonlinear pre-stress |
| OptiStruct (Altair) | AMSES (multi-level substructuring + Lanczos) | AMSES | Near-linear scaling for 100M+ DOF; best-in-class performance on very large models |
| Code_Aster (EDF) | CALC_MODES (Lanczos) | Lanczos (ARPACK) | Open source; excellent for nuclear structural analysis |

🧑‍🎓

What input parameters should I tune in Nastran to control the Lanczos run?

🎓

The key EIGRL parameters in Nastran:

$ EIGRL SID  V1     V2     ND    MSGLVL  MAXSET  SHFSCL  NORM
  EIGRL  1   0.0   2500.0  200    0       12    1.0E+5   MAX
  • V1/V2: Frequency range (Hz) — not rad/s! Extract from 0 to 2500 Hz in this example.
  • ND: Maximum number of eigenvalues to extract (200 here).
  • MAXSET: Block size for Block Lanczos (12 here; increase to 20–40 for highly symmetric models with many repeated frequencies).
  • NORM: Eigenvector normalization — MAX normalizes to peak displacement of 1.0; MASS normalizes for modal mass = 1.0.

Advanced Topics

🧑‍🎓

What are the most important recent developments in eigenvalue solvers beyond classical Lanczos?

🎓

The frontier is handling larger problems and extracting modes from specific frequency bands more efficiently:

  • FEAST Eigenvalue Solver (Polizzi, 2009): Uses spectral projection via contour integration in the complex plane to extract all eigenvalues within a specified frequency band simultaneously in parallel. Outperforms Lanczos for large, high-frequency problems and is embarrassingly parallel — each contour integration point can be computed independently on separate cores.
  • Randomized SVD + Lanczos: Randomized algorithms based on random sketching reduce the problem dimensionality before Lanczos iterations. These have excellent GPU affinity and can outperform classical Lanczos by 5–10× on GPU hardware.
  • Krylov-Schur Algorithm: A restart strategy that is equivalent to Lanczos with implicit restarts (ARPACK). Avoids the memory growth of full Lanczos by maintaining a fixed-size Krylov subspace while discarding unconverged directions.
  • Subspace Iteration with Preconditioning (LOBPCG): Locally Optimal Block Preconditioned Conjugate Gradient. Ideal for GPU computing and large-scale eigenvalue problems where a good preconditioner is available (see the sketch after this list).
  • Quantum Eigensolvers (VQE): Variational quantum eigensolvers are still purely academic for structural engineering problems, but represent the very long-term trajectory for exponentially large eigenvalue problems.
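
SciPy ships a LOBPCG implementation, so a minimal sketch is easy to write down. The diagonal (Jacobi) preconditioner and the function name `lowest_modes_lobpcg` are placeholders; a serious model would use an AMG or factorization-based preconditioner instead:

```python
import numpy as np
from scipy.sparse.linalg import lobpcg, LinearOperator

def lowest_modes_lobpcg(K, M, n_modes=10, seed=0):
    """Lowest eigenpairs of K x = lam M x via preconditioned block iteration."""
    n = K.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, n_modes))  # random starting block
    d = K.diagonal()                                               # Jacobi preconditioner data
    prec = LinearOperator((n, n), matvec=lambda x: x / d)
    lam, phi = lobpcg(K, X, B=M, M=prec, largest=False, tol=1e-8, maxiter=500)
    return lam, phi
```
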
🧑‍🎓

In pre-stressed modal analysis — like finding vibration modes of a pressurized pipe — does Lanczos need modifications?

🎓

No modifications to Lanczos itself — but the stiffness matrix used must include the geometric stiffness (stress stiffening) contribution. First run a nonlinear static analysis to get the equilibrium stress state $\sigma_0$. Then form the effective stiffness $[K_{eff}] = [K_e] + [K_\sigma]$ where $[K_\sigma]$ is the geometric stiffness matrix derived from $\sigma_0$. Feed this into Lanczos as the stiffness matrix. Compressive pre-stress reduces the apparent stiffness (frequencies decrease); tensile pre-stress increases them. This is why a guitar string's pitch increases when you tighten it — you're increasing the geometric stiffness contribution to the effective modal stiffness.
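
In terms of the earlier hypothetical `lanczos_gevp` sketch, the only change is which stiffness matrix you pass in (`K_elastic` and `K_geo` are assumed to have been assembled from the converged static solution):

```python
K_eff = K_elastic + K_geo                     # [K_e] + [K_sigma], stress stiffening included
lam, phi = lanczos_gevp(K_eff, M, m=60)       # same Lanczos driver as before
f_hz = np.sqrt(np.abs(lam)) / (2.0 * np.pi)   # tension raises f, compression lowers it
```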

Coffee Break Historical Note

Lanczos — Dismissed, Forgotten, Rehabilitated

Cornelius Lanczos proposed his algorithm in 1950, but the computers of the day were overwhelmed by its numerical instability — the ghost eigenvalue problem made results unreliable, and it was largely dismissed as a failed algorithm for nearly two decades. The turning point came in 1979, when Beresford Parlett and David Scott published their selective reorthogonalization strategy, showing that ghost eigenvalues could be reliably identified and eliminated. With the arrival of workstations in the 1980s and the explosion of large-scale FEM models in the 1990s, Lanczos was fully rehabilitated and quickly became the universal standard. Today it sits at the heart of every commercial FEM solver's modal analysis capability. It's a cautionary and inspiring tale: a good idea sometimes just needs to wait a few decades for the technology to catch up.

Troubleshooting

🧑‍🎓

What are the most common errors in eigenvalue analysis, and how do I diagnose them?

🎓

Here are the most common problems and their diagnostic signatures:

| Symptom | Likely Cause | Remedy |
| --- | --- | --- |
| Ghost (duplicate) eigenvalues in output | Orthogonality loss in Lanczos without adequate reorthogonalization | Enable full or selective reorthogonalization; cross-check with Sturm sequence count |
| Negative eigenvalues (imaginary frequencies) | Rigid body modes not constrained; inadequate support conditions; buckling load exceeded | Add supports or use free-free analysis with 6 rigid body modes; check for contact penetration |
| Wrong count of rigid body modes | Boundary condition error: over- or under-constrained model | Free-free model should yield exactly 6; review support point DOF constraints |
| Convergence failure (solver does not find requested modes) | Poorly conditioned stiffness matrix; extremely clustered eigenvalues; insufficient Lanczos vectors | Increase MAXSET (block size); add spectral shifts in the target range; check mesh quality |
| Modes with extremely low effective mass (near zero) | Local modes of disconnected or lightly connected components | Check model connectivity; look for missing contact pairs or adhesive bonds |
| Mode shapes appear incorrect (crossing modes) | Mode tracking error in parametric or optimization studies | Use MAC (Modal Assurance Criterion) ≥ 0.9 threshold to track modes across parameter variations |