Topology Optimization (SIMP Method)
Theory and Physics
What is Topology Optimization?
Professor, what is topology optimization?
Topology optimization optimizes the presence or absence of material (0/1) within a design domain. It automatically determines where to place holes and where to leave material. Proposed by Bendsøe & Kikuchi in 1988.
SIMP Method
SIMP (Solid Isotropic Material with Penalization) is the most widely used topology optimization method. It assigns a design variable $\rho_e$ (density from 0 to 1) to each element:
$$ E_e = \rho_e^p E_0 $$
The penalty exponent $p$ (typically $p = 3$) suppresses intermediate densities, pushing them towards 0/1.
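A quick way to see why the penalty works: an intermediate density buys stiffness $\rho^p$ at material cost $\rho$, so its stiffness per unit of material is $\rho^{p-1}$, which is poor for gray elements. A minimal sketch:

```python
# Why p > 1 discourages intermediate densities under SIMP:
# stiffness scales as rho**p, material cost as rho, so the
# stiffness-to-material ratio rho**(p-1) penalizes gray elements.
p = 3
for rho in (0.1, 0.5, 1.0):
    ratio = rho**p / rho              # stiffness per unit of material
    print(f"rho={rho:.1f}  stiffness/material={ratio:.3f}")
# rho=0.1 -> 0.010, rho=0.5 -> 0.250, rho=1.0 -> 1.000
```

Only fully solid material (ρ = 1) uses its mass efficiently, so the optimizer is driven toward 0/1 designs.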
Optimization Problem
Typical formulation (compliance minimization under a volume constraint):

$$ \min_{\boldsymbol{\rho}} \; C = \mathbf{f}^T \mathbf{u} \quad \text{s.t.} \quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}, \quad \sum_e \rho_e v_e \le V^*, \quad 0 < \rho_{\min} \le \rho_e \le 1 $$
"Find the stiffest structure while keeping the material below $V^*$," right?
Exactly. FEM calculates displacement for each iteration → calculates sensitivity (change in objective function when each element's density is changed) → updates density → iterates until convergence.
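For compliance minimization the sensitivity in that loop has a well-known closed form (the problem is self-adjoint, so no extra adjoint solve is needed):

$$ \frac{\partial C}{\partial \rho_e} = -p\,\rho_e^{\,p-1}\,\mathbf{u}_e^T \mathbf{k}_0\,\mathbf{u}_e $$

where $\mathbf{u}_e$ is the element displacement vector and $\mathbf{k}_0$ the element stiffness matrix at full density. The sensitivity is always negative: adding material anywhere can only stiffen the structure.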
Summary
The density method behind SIMP was formulated by Bendsøe (1989); the name came later
The representative topology optimization method SIMP (Solid Isotropic Material with Penalization) was formulated by Bendsøe (1989) as a simplification of the homogenization method of Bendsøe & Kikuchi (1988). By expressing the density $\rho_e$ of each element as a continuous variable from 0 to 1 and the stiffness as $E_e = \rho_e^p E_0$, the penalty parameter $p$ suppresses intermediate densities, yielding a clear material distribution that is almost 0 or 1. The acronym "SIMP" itself was coined later, in the early 1990s, by Rozvany and Zhou, and subsequently became the standard name for the approach.
Physical Meaning of Each Term
- Inertia term (mass term): $\rho \ddot{u}$, meaning "mass × acceleration". Have you ever experienced being thrown forward when slamming on the brakes? That "feeling of being pulled" is precisely the inertial force. Heavier objects are harder to set in motion and harder to stop once moving. Buildings shake during earthquakes because the ground moves suddenly while the building's mass "gets left behind". In static analysis, this term is set to zero, assuming "forces are applied slowly so acceleration can be ignored". It absolutely cannot be omitted for impact loads or vibration problems.
- Stiffness term (elastic restoring force): $Ku$ or $\nabla \cdot \sigma$. When you pull a spring, you feel a "force trying to return it", right? That's Hooke's law $F=kx$, the essence of the stiffness term. Now a question — an iron rod and a rubber band, which stretches more when pulled with the same force? Obviously the rubber. This "resistance to stretching" is the Young's modulus $E$, which determines stiffness. A common misconception: believing "high stiffness = strong". Stiffness is "resistance to deformation", strength is "resistance to failure" — different concepts.
- External force term (load term): Body force $f_b$ (gravity, etc.) and surface force $f_s$ (pressure, contact force, etc.). Think of it this way — the weight of a truck on a bridge is a "force acting on the entire volume" (body force), the force of the tires pushing on the road surface is a "force acting only on the surface" (surface force). Wind pressure, water pressure, bolt tightening force... all are external forces. A typical pitfall here: getting the load direction wrong. Intending "tension" but ending up with "compression" — sounds like a joke, but it actually happens when coordinate systems are rotated in 3D space.
- Damping term: Rayleigh damping $C\dot{u} = (\alpha M + \beta K)\dot{u}$. Try plucking a guitar string. Does the sound continue forever? No, it gradually fades. That's because vibration energy is converted to heat by air resistance and internal friction in the string. Car shock absorbers work on the same principle — intentionally absorbing vibration energy to improve ride comfort. What if damping were zero? Buildings would keep shaking forever after an earthquake. Since that doesn't happen in reality, setting appropriate damping is crucial.
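The four terms above assemble into the semi-discrete equation of motion from FEM:

$$ \mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{f}(t) $$

Static analysis drops the inertia and damping terms, leaving the equilibrium equation $\mathbf{K}\mathbf{u} = \mathbf{f}$.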
Assumptions and Applicability Limits
- Continuum assumption: Treats material as a continuous medium, ignoring microscopic heterogeneity
- Small deformation assumption (for linear analysis): Deformation is sufficiently small compared to initial dimensions, stress-strain relationship is linear
- Isotropic material (unless otherwise specified): Material properties are independent of direction (anisotropic materials require separate tensor definition)
- Quasi-static assumption (for static analysis): Ignores inertial and damping forces, considers only equilibrium between external and internal forces
- Non-applicable cases: Geometric nonlinearity is required for large deformation/large rotation problems. Constitutive law extension is needed for nonlinear material behavior like plasticity and creep
Dimensional Analysis and Unit Systems
| Variable | SI Unit | Notes / Conversion Memo |
|---|---|---|
| Displacement $u$ | m (meter) | When inputting in mm, unify load/elastic modulus to MPa/N system |
| Stress $\sigma$ | Pa (Pascal) = N/m² | MPa = 10⁶ Pa. Be careful of unit system inconsistency when comparing with yield stress |
| Strain $\varepsilon$ | Dimensionless (m/m) | Note the distinction between engineering strain and logarithmic strain (for large deformation) |
| Elastic modulus $E$ | Pa | Steel: ~210 GPa, Aluminum: ~70 GPa. Note temperature dependence |
| Density $\rho$ | kg/m³ | In mm system: tonne/mm³ (steel: 7850 kg/m³ = 7.85×10⁻⁹ tonne/mm³) |
| Force $F$ | N (Newton) | Force is N in both systems: m–kg–Pa and mm–tonne–MPa are each self-consistent |
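A two-line sanity check of the mm-system density conversion in the table (figures for generic structural steel):

```python
# Convert steel density from SI (kg/m^3) to the mm-tonne-MPa system.
# 1 kg = 1e-3 tonne and 1 m^3 = 1e9 mm^3, so multiply by 1e-12.
rho_si = 7850.0                   # structural steel, kg/m^3
rho_mm = rho_si * 1e-3 / 1e9      # tonne/mm^3
print(rho_mm)                     # ~7.85e-09
```

Getting this factor wrong by even one power of ten silently corrupts every dynamic or modal result, which is why the table flags it.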
Numerical Methods and Implementation
SIMP Method Algorithm
1. Set initial density for all elements to $\rho = V^*/V_{total}$
2. Calculate displacement and stress using FEM
3. Calculate sensitivity $\partial C / \partial \rho_e$ (Adjoint method)
4. Update density (OC method or MMA method)
5. Iterate until convergence (typically 50–200 iterations)
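Step 4 above can be sketched as follows: a minimal, illustrative optimality-criteria (OC) update with a move limit and bisection on the volume-constraint Lagrange multiplier. It assumes unit element volumes and is not tied to any particular FEM code:

```python
import numpy as np

def oc_update(rho, dc, volfrac, move=0.2):
    """One optimality-criteria density update (minimal sketch).
    rho     : current element densities, array of values in [0, 1]
    dc      : compliance sensitivities dC/drho_e (negative values)
    volfrac : target volume fraction V*/V_total
    """
    lo, hi = 1e-9, 1e9                        # bisection bounds on the multiplier
    while (hi - lo) / (hi + lo) > 1e-4:
        lam = 0.5 * (lo + hi)
        # fixed-point scaling rho * sqrt(-dc/lam), clipped by move limit and [0, 1]
        rho_new = np.clip(rho * np.sqrt(np.maximum(-dc, 0.0) / lam),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:          # too much material: raise multiplier
            lo = lam
        else:
            hi = lam
    return rho_new
```

In a full loop this sits between the FEM/sensitivity computation (steps 2–3) and the convergence check (step 5); MMA plays the same role when constraints beyond a single volume bound are present.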
Solvers
Summary
SIMP without density filtering produces a checkerboard pattern
It has long been known that running SIMP topology optimization without density filtering causes a numerical pathology called the "checkerboard pattern", where adjacent elements alternately become ρ=0/1. The density filter proposed by Bourdin (2001), later recast as a Helmholtz-type PDE filter by Lazarov & Sigmund (2011), naturally controls the minimum member size (rmin) and is now incorporated into the standard implementations of OptiStruct, Tosca, and Abaqus. Setting rmin in correspondence with manufacturing constraints (minimum wall thickness, draft angle) allows simultaneous management of design aesthetics and manufacturability.
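The idea can be shown with a cone-weighted averaging filter, written here with explicit O(N·rmin²) loops for clarity rather than performance (production codes use convolution or a Helmholtz PDE solve):

```python
import numpy as np

def density_filter(rho, rmin):
    """Cone-weighted density averaging within radius rmin (in element
    widths). Minimal sketch of a Bourdin-type density filter."""
    ny, nx = rho.shape
    out = np.zeros_like(rho)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            val, wsum = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    w = rmin - np.hypot(di, dj)   # cone weight, zero beyond rmin
                    if w > 0.0:
                        val += w * rho[ii, jj]
                        wsum += w
            out[i, j] = val / wsum
    return out
```

Applying this to a 0/1 checkerboard pulls every element toward gray, which is exactly why filtering suppresses the pattern: a one-element-wide feature can no longer survive the averaging.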
Linear Elements (1st-order elements)
Linear interpolation between nodes. Low computational cost but low stress accuracy. Beware of shear locking in bending (mitigated with reduced or selective integration) and volumetric locking in near-incompressible problems (B-bar method).
Quadratic Elements (with mid-side nodes)
Can represent curved deformation. Stress accuracy improves significantly, but degrees of freedom increase by about 2–3 times. Recommended when stress evaluation is important.
Full Integration vs Reduced Integration
Full Integration: Risk of over-constraint (locking). Reduced Integration: Risk of hourglass mode (zero-energy mode). Choose appropriately for the situation.
Adaptive Mesh
Automatic refinement based on error indicators (e.g., ZZ estimator). Efficiently improves accuracy in stress concentration areas. Includes h-method (element subdivision) and p-method (order increase).
Newton-Raphson Method
Standard method for nonlinear analysis. Updates tangent stiffness matrix each iteration. Achieves quadratic convergence within convergence radius, but computational cost is high.
Modified Newton-Raphson Method
Updates tangent stiffness matrix using initial value or every few iterations. Cost per iteration is low, but convergence speed is linear.
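The trade-off is easy to see on a scalar residual. A minimal illustrative sketch (not from any particular FEM code) solving $R(x) = x^3 - 2 = 0$, where `modified=True` freezes the tangent at the initial value:

```python
def solve(residual, tangent, x0, modified=False, tol=1e-10, max_iter=200):
    """Newton-Raphson on a scalar residual; modified=True freezes the
    tangent at x0 (modified Newton-Raphson)."""
    x = x0
    K = tangent(x0)                 # initial tangent "stiffness"
    for it in range(1, max_iter + 1):
        R = residual(x)
        if abs(R) < tol:
            return x, it
        if not modified:
            K = tangent(x)          # full Newton: re-form tangent every iteration
        x -= R / K                  # scalar analogue of solving K * dx = -R
    return x, max_iter

residual = lambda x: x**3 - 2.0
tangent = lambda x: 3.0 * x**2

x_full, n_full = solve(residual, tangent, 1.5)
x_mod, n_mod = solve(residual, tangent, 1.5, modified=True)
print(n_full, n_mod)  # full Newton converges in far fewer iterations
```

Both variants reach the root; the modified version trades quadratic for linear convergence, which pays off only when re-forming and re-factorizing the tangent matrix dominates the cost.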
Convergence Criteria
Force residual norm: $||R|| / ||F_{ext}|| < \epsilon$ (typically $\epsilon = 10^{-3}$ to $10^{-6}$). Displacement increment norm: $||\Delta u|| / ||u|| < \epsilon$. Energy norm: $\Delta u \cdot R < \epsilon$
Load Increment Method
Applies load in small increments rather than all at once. The arc-length method (Riks method) can trace beyond extremum points on the load-displacement curve.
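A minimal sketch of plain load incrementation (no arc-length) on an illustrative hardening spring with internal force $x + x^3$, running a Newton loop at each load level:

```python
def solve_increments(F_total, n_steps):
    """Apply the total load in n_steps increments to an illustrative
    hardening spring with internal force x + x**3, with Newton
    iterations at each load level (minimal sketch)."""
    x = 0.0
    for step in range(1, n_steps + 1):
        F = F_total * step / n_steps          # current load level
        for _ in range(50):                   # Newton loop at this level
            R = (x + x**3) - F                # residual: internal - external
            if abs(R) < 1e-12:
                break
            x -= R / (1.0 + 3.0 * x**2)       # tangent stiffness 1 + 3x^2
    return x

print(solve_increments(10.0, 5))  # x + x^3 = 10 has the exact root x = 2
```

For a hardening response like this, plain incrementation works; for a softening curve with a limit point, the scheme fails at the peak, which is precisely what the arc-length (Riks) method is designed to get past.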
Analogy: Direct Method vs Iterative Method
The direct method is like "solving simultaneous equations accurately with pen and paper" — reliable but takes too long for large-scale problems. The iterative method is like "repeatedly guessing to approach the correct answer" — starts with a rough answer but accuracy improves with each iteration. It's the same principle as looking up a word in a dictionary: opening to an estimated page and adjusting forward/backward (iterative method) is more efficient than searching sequentially from the first page (direct method).
Relationship Between Mesh Order and Accuracy
1st-order elements are like "approximating a curve with a ruler" — represented by straight line segments, so accuracy is limited. 2nd-order elements are like a "flexible curve" — can represent curved changes, dramatically improving accuracy even with the same mesh density. However, computational cost per element increases, so judgment should be based on total cost-effectiveness.
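The "ruler vs flexible curve" picture can be made quantitative: compare piecewise-linear and piecewise-quadratic Lagrange interpolation of a smooth function over the same number of elements (an illustrative interpolation sketch, not an FEM solve):

```python
import numpy as np

def interp_error(f, n_elems, order):
    """Max error of piecewise Lagrange interpolation of f on [0, pi]."""
    nodes = np.linspace(0.0, np.pi, order * n_elems + 1)
    xs = np.linspace(0.0, np.pi, 2001)            # dense evaluation grid
    err = 0.0
    for e in range(n_elems):
        xe = nodes[order * e : order * e + order + 1]   # element nodes
        x = xs[(xs >= xe[0]) & (xs <= xe[-1])]
        val = np.zeros_like(x)
        for i in range(order + 1):                # Lagrange basis expansion
            L = np.ones_like(x)
            for j in range(order + 1):
                if j != i:
                    L *= (x - xe[j]) / (xe[i] - xe[j])
            val += f(xe[i]) * L
        err = max(err, np.max(np.abs(val - f(x))))
    return err

e1 = interp_error(np.sin, 4, order=1)   # "linear elements"
e2 = interp_error(np.sin, 4, order=2)   # "quadratic elements"
print(e1, e2)  # quadratic error is much smaller on the same 4 elements
```

The quadratic version cuts the error by an order of magnitude on the same element count, at the price of more degrees of freedom per element — the cost-effectiveness judgment the text describes.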
Practical Guide
Topology Optimization in Practice
Automotive lightweighting (brackets, suspension arms), aerospace (structural parts), 3D printing (freeform shapes).
Practical Checklist
The Airbus A380 cabin ceiling bracket is a masterpiece of SIMP optimization
The Airbus A380 cabin ceiling panel attachment bracket (first flight 2006) is famous in the industry as a part designed using SIMP topology optimization with OptiStruct. It achieved a 30% weight reduction compared to the conventional manually designed part while meeting fatigue life constraints, winning Altair's Engineering Impact Award. OptiStruct is now used as the standard topology optimization tool across all Airbus aircraft models, with over 1000 part optimizations conducted annually using this tool.
Analogy for Analysis Flow
The analysis flow is actually very similar to cooking. First, buy ingredients (prepare CAD model), do prep work (mesh generation), apply heat (solver execution), and finally plate it (visualization in post-processing). Here's an important question — which step in cooking is most prone to failure? Actually, it's the "prep work". If mesh quality is poor, results will be a mess no matter how excellent the solver is.
Pitfalls Beginners Often Fall Into
Are you checking mesh convergence? Do you think "the calculation ran = the result is correct"? This is actually the most common trap for CAE beginners. The solver will always return "some answer" for the given mesh. But if the mesh is too coarse, that answer will be far from reality. Verify that results stabilize across at least three levels of mesh density — neglecting this leads to the dangerous assumption that "the computer gave the answer, so it must be correct".
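The three-mesh check can be automated with a hypothetical helper like the one below (the function name and the 2% default tolerance are illustrative assumptions, not a standard):

```python
def mesh_converged(results, rel_tol=0.02):
    """Hypothetical helper: given a quantity of interest (e.g. peak
    stress) from at least three successively refined meshes, report
    whether the last refinement changed the answer by less than rel_tol."""
    if len(results) < 3:
        raise ValueError("need results from at least three mesh densities")
    change = abs(results[-1] - results[-2]) / abs(results[-1])
    return change <= rel_tol

# Example: peak stress (MPa) on coarse -> medium -> fine meshes
print(mesh_converged([182.0, 201.0, 204.0]))  # last refinement changed ~1.5%
```

If the check fails, refine again before trusting the number; a large coarse-to-medium jump followed by a small medium-to-fine change is the signature of approaching mesh independence.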
Thinking About Boundary Conditions
Setting boundary conditions is like "writing the problem statement" for an exam. If the problem statement is wrong? No matter how accurately you calculate, the answer will be wrong. "Is this surface truly fully fixed?" "Is this load truly uniformly distributed?" — Correctly modeling real-world constraint conditions is actually the most critical step in the entire analysis.
Software Comparison
Topology Optimization Tools
Related Topics