Reliability-Based Design Optimization (RBDO)
Theory and Physics
What is RBDO?
Professor, what is RBDO?
RBDO (Reliability-Based Design Optimization) is optimization that considers the variability of design variables (e.g., manufacturing errors). Deterministic optimization optimizes using "standard values," whereas RBDO optimizes such that "the probability of satisfying constraints, even including variability, is above a specified value."
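As a reference, the standard textbook form of the RBDO problem (a generic formulation, not tied to any particular code) can be written as:

```latex
\begin{aligned}
\min_{d} \quad & f(d) \\
\text{s.t.} \quad & P\!\left[\, g_j(d, X) \le 0 \,\right] \le \Phi(-\beta_t), \qquad j = 1, \dots, m
\end{aligned}
```

where $d$ are the design variables, $X$ the random variables (manufacturing variability, load scatter), $g_j \le 0$ denotes failure of the $j$-th constraint, and $\Phi$ is the standard normal CDF.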
$\beta_t$ is the target reliability index. With $\beta = 3$, the failure probability $P_f = \Phi(-\beta) \approx 1.35 \times 10^{-3}$.
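As a quick sanity check, the mapping $\beta \leftrightarrow P_f$ can be computed with the standard library alone, using $\Phi(-\beta) = \tfrac{1}{2}\,\mathrm{erfc}(\beta/\sqrt{2})$ (a minimal sketch, not a full reliability tool):

```python
import math

def pf_from_beta(beta: float) -> float:
    """Failure probability Pf = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

for beta in (2.0, 3.0, 4.0):
    print(f"beta = {beta}: Pf = {pf_from_beta(beta):.2e}")
# beta = 3.0 gives Pf ~ 1.35e-3
```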
Summary
The "6σ" in reliability design originated at Motorola in 1986
"Six Sigma" is a statistical quality-control concept. In 1986, Motorola engineer Bill Smith (known as the "Father of Six Sigma") proposed it internally as a framework for reducing manufacturing defect rates to 3.4 ppm, and Motorola developed it as an in-house methodology. In Reliability-Based Design Optimization (RBDO), this Six Sigma criterion is modeled mathematically as a probabilistic constraint, enabling robust designs that account for uncertainties such as material variability and load fluctuations. GE adopted Six Sigma company-wide in 1995 under CEO Jack Welch and publicly stated that applying RBDO to engine parts reduced warranty costs by $1 billion annually.
Physical Meaning of Each Term
- Inertia Term (Mass Term): $\rho \ddot{u}$, i.e., "mass × acceleration". Have you ever experienced being thrown forward when slamming on the brakes? That "feeling of being pulled" is precisely the inertial force. Heavier objects are harder to set in motion and harder to stop once moving. Buildings shake during earthquakes because the ground moves suddenly while the building's mass "gets left behind". In static analysis, this term is set to zero, which assumes "forces are applied slowly enough that acceleration can be ignored". It cannot be omitted in impact loads or vibration problems.
- Stiffness Term (Elastic Restoring Force): $Ku$ or $\nabla \cdot \sigma$. When you stretch a spring, you feel a "force trying to return it", right? That is Hooke's law $F=kx$, the essence of the stiffness term. So, a question—if you pull an iron rod and a rubber band with the same force, which stretches more? Obviously, the rubber band. This "resistance to stretching" is the Young's modulus $E$, which determines stiffness. A common misconception: "High stiffness = strong" is incorrect. Stiffness is "resistance to deformation", strength is "resistance to failure"—they are different concepts.
- External Force Term (Load Term): Body forces $f_b$ (e.g., gravity) and surface forces $f_s$ (e.g., pressure, contact forces). Think of it this way—the weight of a truck on a bridge is a "force acting on the entire volume" (body force), while the force of the tires pushing on the road is a "force acting only on the surface" (surface force). Wind pressure, water pressure, bolt tightening force... all are external forces. A typical mistake here: getting the load direction wrong. Intending "tension" but modeling "compression"—it sounds like a joke, but it actually happens when coordinate systems are rotated in 3D space.
- Damping Term: Rayleigh damping $C\dot{u} = (\alpha M + \beta K)\dot{u}$. Try plucking a guitar string. Does the sound continue forever? No, it gradually fades. That's because vibrational energy is converted to heat through air resistance and internal friction in the string. Car shock absorbers work on the same principle—they intentionally absorb vibrational energy to improve ride comfort. What if damping were zero? Buildings would keep shaking forever after an earthquake. Since that doesn't happen in reality, setting appropriate damping is crucial.
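The interplay of these terms can be seen in a single-DOF model $m\ddot{u} + c\dot{u} + ku = 0$. Below is a minimal sketch (semi-implicit Euler time stepping, made-up parameter values) showing that with damping the free vibration decays, while without it the amplitude persists:

```python
import math

def free_vibration(m, c, k, u0, dt=1e-3, t_end=5.0):
    """Integrate m*u'' + c*u' + k*u = 0 with semi-implicit Euler.
    Returns the peak |u| observed in the final second of the simulation."""
    u, v = u0, 0.0
    peak_late = 0.0
    steps = int(t_end / dt)
    for n in range(steps):
        a = -(c * v + k * u) / m      # acceleration from damping + stiffness terms
        v += a * dt                    # update velocity first (semi-implicit)
        u += v * dt                    # then displacement with the new velocity
        if n * dt > t_end - 1.0:
            peak_late = max(peak_late, abs(u))
    return peak_late

damped   = free_vibration(m=1.0, c=0.5, k=100.0, u0=1.0)
undamped = free_vibration(m=1.0, c=0.0, k=100.0, u0=1.0)
print(damped, undamped)   # damped amplitude decays; undamped stays near the initial value
```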
Assumptions and Applicability Limits
- Continuum assumption: Treats material as a continuous medium, ignoring microscopic inhomogeneity.
- Small deformation assumption (for linear analysis): Deformation is sufficiently small compared to initial dimensions, and the stress-strain relationship is linear.
- Isotropic material (unless specified otherwise): Material properties are independent of direction (anisotropic materials require separate tensor definitions).
- Quasi-static assumption (for static analysis): Ignores inertial and damping forces, considering only the balance between external and internal forces.
- Non-applicable cases: Large deformation/large rotation problems require geometric nonlinearity. Nonlinear material behavior like plasticity or creep requires constitutive law extensions.
Dimensional Analysis and Unit Systems
| Variable | SI Unit | Notes / Conversion Memo |
|---|---|---|
| Displacement $u$ | m (meter) | When inputting in mm, unify loads and elastic modulus to MPa/N system. |
| Stress $\sigma$ | Pa (Pascal) = N/m² | MPa = 10⁶ Pa. Be careful of unit system inconsistencies when comparing with yield stress. |
| Strain $\varepsilon$ | Dimensionless (m/m) | Note the distinction between engineering strain and logarithmic strain (for large deformations). |
| Elastic modulus $E$ | Pa | Steel: ~210 GPa, Aluminum: ~70 GPa. Note temperature dependence. |
| Density $\rho$ | kg/m³ | In mm–tonne–s system: tonne/mm³ (steel ≈ 7.85 × 10⁻⁹ tonne/mm³). |
| Force $F$ | N (Newton) | N in both the m and mm systems (N = MPa × mm² keeps the mm system consistent). |
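A quick way to verify unit-system consistency (a sketch with representative steel values) is to compute the same physical quantity, e.g., the longitudinal wave speed $c = \sqrt{E/\rho}$, in both systems and check that they agree after conversion:

```python
import math

# SI (m-kg-s): E in Pa, rho in kg/m^3 -> c in m/s
c_si = math.sqrt(210e9 / 7850.0)

# mm-tonne-s: E in MPa (N/mm^2), rho in tonne/mm^3 -> c in mm/s
c_mm = math.sqrt(210e3 / 7.85e-9)

print(c_si, c_mm / 1000.0)  # both ~5172 m/s
```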
Numerical Methods and Implementation
RBDO Calculation
1. FORM/SORM — Efficiently calculates the reliability index $\beta$.
2. Surrogate Model — Replaces FEM. Speeds up Monte Carlo.
3. OptiSlang + FEM — Probabilistic wrapper + FEM.
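Step 1 (FORM) can be sketched for a toy limit state $g = R - S$ with independent normal $R$ and $S$ (hypothetical values; in a real application, evaluating $g$ would call an FEM model). The HL-RF iteration searches standard normal space for the design point:

```python
import math

def form_beta(g, n, tol=1e-8, max_iter=100):
    """HL-RF iteration in standard normal (u-) space; returns the reliability index beta = |u*|."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu = g(u)
        # finite-difference gradient of g (here a cheap closed form; normally an FEM call)
        h = 1e-6
        grad = []
        for i in range(n):
            up, um = u[:], u[:]
            up[i] += h
            um[i] -= h
            grad.append((g(up) - g(um)) / (2.0 * h))
        gg = sum(d * d for d in grad)
        scale = (sum(d * ui for d, ui in zip(grad, u)) - gu) / gg
        u_new = [scale * d for d in grad]   # HL-RF update toward the design point
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return math.sqrt(sum(ui * ui for ui in u_new))
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))

# toy limit state g = R - S with R ~ N(10, 1), S ~ N(5, 1), mapped to u-space
g = lambda u: (10.0 + 1.0 * u[0]) - (5.0 + 1.0 * u[1])
beta = form_beta(g, 2)
print(beta)   # analytic: (10-5)/sqrt(1^2+1^2) = 3.5355...
```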
Summary
FORM is the standard first-order reliability analysis method for over 40 years
The First Order Reliability Method (FORM) was proposed by Hasofer and Lind in 1974 in the ASCE Journal of the Engineering Mechanics Division. It assesses reliability via the design point (most probable failure point): the algorithm transforms the random variables to standard normal space and finds the minimum distance from the origin to the failure surface (the reliability index β). With its low computational cost, it has been a standard engineering design method for over 40 years. Combining it with Monte Carlo methods (Importance Sampling) to compensate for FORM's approximation-accuracy limits is widely used in seismic design and nuclear structure evaluation.
Linear Elements (1st Order Elements)
Linear interpolation between nodes. Low computational cost but low stress accuracy. Beware of shear locking (mitigated with reduced integration or B-bar method).
Quadratic Elements (with Mid-side Nodes)
Can represent curved deformation. Stress accuracy improves significantly, but degrees of freedom increase by about 2-3 times. Recommended when stress evaluation is critical.
Full Integration vs Reduced Integration
Full Integration: Risk of over-constraint (locking). Reduced Integration: Risk of hourglass modes (zero-energy modes). Choose appropriately for the situation.
Adaptive Mesh
Automatic refinement based on error indicators (e.g., ZZ estimator). Efficiently improves accuracy in stress concentration areas. Includes h-method (element subdivision) and p-method (order increase).
Newton-Raphson Method
Standard method for nonlinear analysis. Updates the tangent stiffness matrix each iteration. Achieves quadratic convergence within the convergence radius, but computational cost is high.
Modified Newton-Raphson Method
Updates the tangent stiffness matrix using the initial value or every few iterations. Cost per iteration is low, but convergence speed is linear.
Convergence Criteria
Force residual norm: $||R|| / ||F_{ext}|| < \epsilon$ (typically $\epsilon = 10^{-3}$ to $10^{-6}$). Displacement increment norm: $||\Delta u|| / ||u|| < \epsilon$. Energy norm: $\Delta u \cdot R < \epsilon$.
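The full Newton-Raphson loop with a force-residual criterion can be sketched for a one-DOF spring with cubic hardening (hypothetical stiffness values), updating the tangent stiffness every iteration:

```python
def newton_raphson(f_ext, k1=100.0, k3=10.0, tol=1e-6, max_iter=50):
    """Solve the nonlinear equilibrium k1*u + k3*u**3 = f_ext by full Newton-Raphson."""
    u = 0.0
    for it in range(max_iter):
        residual = f_ext - (k1 * u + k3 * u**3)   # out-of-balance force R
        if abs(residual) / abs(f_ext) < tol:       # ||R|| / ||F_ext|| < eps
            return u, it
        k_tangent = k1 + 3.0 * k3 * u**2           # tangent stiffness, updated each iteration
        u += residual / k_tangent
    raise RuntimeError("did not converge")

u, iters = newton_raphson(50.0)
print(u, iters)   # quadratic convergence: only a handful of iterations needed
```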
Load Increment Method
Applies the full load not all at once, but in small increments. The arc-length method (Riks method) can trace beyond limit points on the load-displacement curve.
Analogy: Direct Method vs Iterative Method
The direct method is like "solving simultaneous equations accurately with pen and paper"—reliable but takes too long for large-scale problems. The iterative method is like "repeatedly guessing to approach the correct answer"—the initial answer is rough, but accuracy improves with each iteration. It's the same principle as looking up a word in a dictionary: it's more efficient to open it at an estimated location and adjust forward/backward (iterative) than to search sequentially from the first page (direct).
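The "guess and refine" idea can be illustrated with Jacobi iteration on a tiny 2×2 system (hypothetical matrix; production FE solvers use more sophisticated iterative methods such as preconditioned conjugate gradients):

```python
def jacobi(A, b, n_iter=100):
    """Jacobi iteration for A x = b: each sweep refines every unknown from the previous guess."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # diagonally dominant, so Jacobi converges
b = [1.0, 2.0]
x = jacobi(A, b)
print(x)   # approaches the exact solution [1/11, 7/11]
```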
Relationship Between Mesh Order and Accuracy
1st order elements are like "approximating a curve with a ruler"—represented by straight line segments, so accuracy is limited. 2nd order elements are like a "flexible curve"—can represent curved changes, dramatically improving accuracy even at the same mesh density. However, computational cost per element increases, so judgment should be based on total cost-effectiveness.
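The "ruler vs flexible curve" contrast can be made concrete by interpolating a smooth field over one element with and without a mid-side node (a toy 1D sketch; the field $\sin(\pi x)$ stands in for an exact solution):

```python
import math

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: math.sin(math.pi * x)   # "exact" field on one element [0, 1]
pts = [i / 200 for i in range(201)]

# linear element: end nodes only; quadratic element: end nodes + mid-side node
err_lin  = max(abs(f(x) - lagrange([0.0, 1.0], [f(0.0), f(1.0)], x)) for x in pts)
err_quad = max(abs(f(x) - lagrange([0.0, 0.5, 1.0], [f(0.0), f(0.5), f(1.0)], x)) for x in pts)
print(err_lin, err_quad)   # the mid-side node cuts the max error dramatically
```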
Practical Guide
RBDO in Practice
Automotive crash safety (ensuring safety under manufacturing variability), aerospace structural reliability.
Practical Checklist
B787 quantified fastener hole failure probability using RBDO
In the design of CFRP fuselage panels for the Boeing 787, RBDO using probabilistic FEM was adopted to evaluate the probability of fatigue crack initiation. Material property statistical variability (fiber strength, interlaminar shear strength) was sampled via Monte Carlo methods, and the design process optimized the allowable range for fastener pitch and tightening torque against a target failure probability (10⁻⁷ per flight hour), as introduced in the Boeing Technical Journal (2009).
Analogy for Analysis Flow
The analysis flow is actually very similar to cooking. First, you buy ingredients (prepare CAD model), do the prep work (mesh generation), apply heat (solver execution), and finally plate it (post-processing visualization). Here's an important question—which step in cooking is most prone to failure? Actually, it's the "prep work". If mesh quality is poor, the results will be a mess no matter how good the solver is.
Common Pitfalls for Beginners
Are you checking mesh convergence? Do you think "the calculation ran = the result is correct"? This is actually the most common trap for CAE beginners. The solver will always return "some answer" for the given mesh. But if the mesh is too coarse, that answer can be far from reality. Confirm that results stabilize across at least three mesh densities—neglecting this leads to the dangerous assumption that "the computer gave the answer, so it must be correct."
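The three-mesh check can go beyond eyeballing: given results on meshes of size h, h/2, h/4, Richardson extrapolation estimates both the observed convergence order and the mesh-independent value. The sketch below uses synthetic stress values (hypothetical, constructed so the exact answer is 100):

```python
import math

def richardson(f1, f2, f3, r=2.0):
    """Estimate convergence order p and extrapolated value from results on meshes h, h/r, h/r^2."""
    p = math.log((f1 - f2) / (f2 - f3)) / math.log(r)
    f_extrap = f3 + (f3 - f2) / (r**p - 1.0)
    return p, f_extrap

# synthetic results following f(h) = 100 - 20*h^2 at h = 0.4, 0.2, 0.1
p, f_star = richardson(96.8, 99.2, 99.8)
print(p, f_star)   # observed order ~2, extrapolated value ~100
```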
Thinking About Boundary Conditions
Setting boundary conditions is like "writing the exam question". If the question is wrong? No matter how accurately you calculate, the answer will be wrong. "Is this surface truly fully fixed?" "Is this load truly uniformly distributed?"—Correctly modeling real-world constraints is often the most critical step in the entire analysis.
Software Comparison
RBDO Tools
OptiSlang is a specialized probabilistic design tool acquired by Ansys
optiSlang (by German company Dynardo, founded 2001) is a probabilistic design optimization platform integrating sensitivity analysis, robust design, RBDO, and coefficient of variation analysis. It has been adopted for Volkswagen's crash safety robust design and ZF's gearbox reliability design. Ansys acquired it in 2019 and integrated it as Ansys optiSLang. Its seamless ability to build FEA ⇔ probabilistic optimization loops from within the Ansys Workbench environment is a key differentiator from competitors like NESSUS (by SwRI, for space/aerospace).
The Three Most Important Questions for Selection
- "What problem are you solving?": Does it support the physical models/element types needed for Reliability-Based Design Optimization (RBDO)? For example, presence of LES support for fluids, or contact/large deformation capability for structures can be differentiators.
- "Who will use it?": For beginner teams, tools with rich GUIs are suitable; for experienced users, flexible script-driven tools are better. Similar to the difference between automatic (GUI) and manual (script) transmission cars.
- "How far will you expand?": Selection considering future scale-up (HPC support), expansion to other departments, and integration with other tools leads to long-term cost reduction.
Advanced Technologies
Advanced RBDO
The difference between FORM and SORM lies in the presence of curvature correction
FORM (First Order Reliability Method) linearly approximates the failure surface at the design point, so error can be large for highly nonlinear failure surfaces with significant curvature. SORM (Second Order Reliability Method), proposed by Hohenbichler & Rackwitz (1983), is a curvature-corrected method that corrects the probability using principal curvatures κi at the design point. For cases with high curvature (e.g., buckling reliability of thin-walled structures), the estimated reliability can differ by more than 10 times between FORM and SORM. Guidelines for nuclear and spacecraft design mandate the use of SORM or Monte Carlo.
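A widely cited form of the curvature correction is Breitung's asymptotic formula (valid for large β, assuming $\beta\kappa_i > -1$):

```latex
P_f^{\mathrm{SORM}} \approx \Phi(-\beta) \prod_{i=1}^{n-1} \left( 1 + \beta \kappa_i \right)^{-1/2}
```

Here the $\kappa_i$ are the principal curvatures of the limit-state surface at the design point; setting all $\kappa_i = 0$ recovers the FORM estimate $P_f = \Phi(-\beta)$.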
Troubleshooting
RBDO Troubles
Monte Carlo method requires 100 million runs to find P(f)=10⁻⁶
When using Monte Carlo methods for reliability analysis, the number of samples needed to ensure estimation accuracy for failure probability Pf is on the order of the inverse of Pf (e.g., ~10⁸ samples for Pf=10⁻⁶). For a system where one FEA evaluation takes 10 seconds, 10⁸ samples would take 32 years, making it impractical. Importance Sampling or Line Sampling (LS) can reduce sample counts by 100-1000 times, but their efficiency depends on prior estimation accuracy of the failure region, so combination with FORM is the standard practical workflow.
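The sample-count claim follows from the coefficient of variation of a crude Monte Carlo estimator, $\mathrm{CoV} \approx \sqrt{(1 - P_f)/(N \cdot P_f)}$. A short sketch (assuming a 10% target CoV and the 10 s per FEA run quoted above):

```python
def mc_samples_required(pf, target_cov):
    """Samples needed so the crude MC estimate of pf reaches the target coefficient of variation."""
    return (1.0 - pf) / (pf * target_cov**2)

pf = 1e-6
n = mc_samples_required(pf, target_cov=0.10)
years = n * 10.0 / (3600 * 24 * 365)   # at 10 seconds per FEA evaluation
print(f"{n:.2e} samples, {years:.1f} years")   # ~1e8 samples, ~32 years
```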
When You Think "The Analysis Doesn't Match"
- First, take a deep breath—Panicking and randomly changing settings can make the problem more complex.
- Create a minimal reproducible case—strip the model down (coarser mesh, simpler loads and materials) until the discrepancy can be isolated to a single setting.