# Tolerance Analysis Summary (Part 13 / 13)

Author: Karl - CB Expert/Thursday, November 4, 2010/Categories: Engineering Modeling


Tolerance Analysis focuses on the dimensional aspects of manufactured physical products and the process of determining appropriate tolerances (read: allowable variations) so that things fit together and work the way they are supposed to. When done properly, in conjunction with known manufacturing capabilities, products feel neither sloppy nor inappropriately "tight" (i.e., requiring higher operating efforts) to the customer. The manufacturer also minimizes the no-build scenario and spends less time (and money) in assembly, where workers would otherwise be trying to force ill-fitting parts together. Defects are less frequent. There is a wealth of benefits too numerous to list but obvious nonetheless. Let us measure twice and cut once.

The three primary methodologies to perform Tolerance Analysis as described in these posts are:

• Worst Case Analysis (WCA)
• Root Sum Squares (RSS) Analysis
• Monte Carlo (MC) Analysis

WCA relies on calculating worst case or extreme possibilities, given a range of potential individual values (defined by nominals and tolerances). It is straightforward in its approach by asking the question: "What is the worst that can happen?" It is simple to understand in mathematical terms. However, the extreme values it calculates have no associated probability with them. Because of its conservative nature, there is a good likelihood that those extremes will rarely occur. (In many cases, very rarely.)
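As a concrete illustration, here is a minimal WCA sketch for a hypothetical one-dimensional stack-up: three plates sitting in a housing slot, where the gap of interest is the slot width minus the sum of the plate thicknesses. All dimensions and tolerances below are made up for illustration; they do not come from the posts.

```python
def worst_case_gap(slot_nom, slot_tol, part_noms, part_tols):
    """Return (min_gap, max_gap) for a 1-D stack-up under worst-case extremes."""
    # Smallest gap: slot at its smallest, every part at its largest.
    min_gap = (slot_nom - slot_tol) - sum(n + t for n, t in zip(part_noms, part_tols))
    # Largest gap: slot at its largest, every part at its smallest.
    max_gap = (slot_nom + slot_tol) - sum(n - t for n, t in zip(part_noms, part_tols))
    return min_gap, max_gap

# Hypothetical example: slot 30.5 +/- 0.2 mm, three plates 10.0 +/- 0.1 mm each.
lo, hi = worst_case_gap(30.5, 0.2, [10.0] * 3, [0.1] * 3)
print(f"gap range: {lo:.2f} to {hi:.2f} mm")  # 0.00 to 1.00 mm
```

Note that the extremes require every dimension to land at its limit simultaneously, which is exactly the conservatism described above.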

RSS relies on mathematical approximations that are generally good for Dimensional Tolerance Analysis, given the usually linear nature of the transfer functions. Unlike WCA, it provides predicted output means and standard deviations, as well as the variation contributions of individual inputs. Both of these are invaluable to the design engineer. Now we have probabilities associated with certain output values occurring, and we know which input variations to attack if our product has not attained desired quality standards.
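Continuing the same hypothetical stack-up, an RSS sketch combines the input variances as a root sum of squares and reports each input's percentage contribution to the total variance. The assumption here (a common but not universal convention) is that each stated tolerance represents three standard deviations.

```python
import math

def rss_gap(slot_nom, slot_sigma, part_noms, part_sigmas):
    """Predicted gap mean, standard deviation, and fractional variance contributions."""
    # For a linear gap = slot - sum(parts), the mean follows the nominals directly.
    mean = slot_nom - sum(part_noms)
    # Variances add for independent inputs (the sensitivities here are all +/-1).
    variances = [slot_sigma ** 2] + [s ** 2 for s in part_sigmas]
    total_var = sum(variances)
    sigma = math.sqrt(total_var)
    contributions = [v / total_var for v in variances]  # slot first, then each part
    return mean, sigma, contributions

# Hypothetical example, treating each tolerance as 3 sigma.
mean, sigma, pct = rss_gap(30.5, 0.2 / 3, [10.0] * 3, [0.1 / 3] * 3)
print(f"gap ~ {mean:.3f} +/- {3 * sigma:.3f} mm (+/-3 sigma)")
```

The contribution list immediately shows which dimension to tighten first: here the slot dominates the variance, so attacking its tolerance pays off most.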

MC Simulation relies on a defined transfer function (as does RSS). However, instead of using nasty calculus to approximate means and standard deviations, it simulates the response variation by sampling values from input distributions and applying them to the transfer function many times. The result is also an approximation of the true variation behavior (dependent on seed value and number of trials) but it is a better approximation than RSS. Better in the sense that it does not care if there is curvature or non-linearities in the transfer function and it does not care that input variations are non-normal. RSS provides less accurate predictions when those conditions occur.
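A minimal MC sketch of the same hypothetical stack-up samples each input from its distribution, pushes the samples through the transfer function, and summarizes the resulting gap statistics. Normal inputs are used here for comparability with the RSS sketch, but the whole point of MC is that any input distribution (or non-linear transfer function) could be substituted without changing the method.

```python
import random
import statistics

def mc_gap(n_trials=100_000, seed=1):
    """Simulate the gap distribution for the hypothetical slot/plate stack-up."""
    random.seed(seed)  # a fixed seed makes the run repeatable
    gaps = []
    for _ in range(n_trials):
        # Sample each dimension; tolerances again treated as 3 sigma.
        slot = random.gauss(30.5, 0.2 / 3)
        parts = [random.gauss(10.0, 0.1 / 3) for _ in range(3)]
        gaps.append(slot - sum(parts))
    return statistics.mean(gaps), statistics.stdev(gaps)

mc_mean, mc_sd = mc_gap()
print(f"simulated gap: mean {mc_mean:.3f} mm, sigma {mc_sd:.3f} mm")
```

With enough trials the simulated mean and standard deviation converge toward the RSS predictions for this linear, all-normal case; the two approaches diverge when curvature or non-normal inputs enter the picture.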

Here is a table summarizing Pros and Cons of the three approaches:

| Approach | Pros | Cons |
| --- | --- | --- |
| Worst Case Analysis | Lickety-split calculations based on two sets of extreme input values<br>Easy to understand<br>Accounts for variation extremes | Very unlikely variation extremes will occur in reality<br>Very conservative in nature<br>"What-if" experiments may take more time to find acceptable design solutions |
| Root Sum Squares | Provides estimation of mean and standard deviation<br>More accurate and less conservative than WCA in predicting variation<br>Provides Sensitivities and % Contributions to enable efficient design direction | Difficult to understand & explain (may be important for design change buy-in)<br>Requires math & calculus skills<br>Relies on approximations that are violated when input probabilities are non-normal and/or skewed, or the transfer function is non-linear |
| Monte Carlo Analysis | Easy to understand<br>Most accurate variation estimation (with appropriate # of trials)<br>Provides Sensitivities and % Contributions to enable efficient design direction<br>Accounts for non-normal input behavior and non-linear transfer functions | Accuracy depends on number of trials and input probability definitions<br>Complex models may run "slow" |

I hope you have enjoyed this little journey through the Tolerance Analysis world as much as I have enjoyed putting my thoughts on internet paper. Please stay tuned for more posts in the realm of analysis and simulation.

