Succeed at Simulation

Sept. 6, 2007
The “Seven rules for successful simulation” deserve a fresh look in light of today’s far faster, more powerful computers.

Much has changed in the seven years since I wrote an article called “Seven rules for successful simulation” [1]. Computers are now faster and more powerful, and hold more data in ever less space.

Engineers right out of school with little or no real-world experience have the opportunity to focus on “world-class” problems. As for software, new programs seem to pop up daily, touting more advanced capabilities than ever before. Some are so “advanced” that they seem to run themselves. Simulations that once took weeks to compute an answer now take mere hours.

With all this computer power and sophisticated software, what responsibility is the user taking? What role does the seasoned engineer play in this new age? Generally, the engineer with 30 to 40 years of experience prefers to do complex calculations by hand. That leaves the young computer jockeys of today sitting at the helm of the many robust simulation packages. But because the software is handling many of the constraints and boundary conditions that govern the outcome of a solution, how much confidence can we have in the results?

Read almost any major engineering magazine and you’ll see advertisements that make all sorts of claims about software capability — e.g., non-linear capability when addressing the physics, and “automatic” and “user-friendly” when discussing the model design itself. Many of the non-linear solutions require all sorts of coefficients that normally come from experiments or well-known solutions. These coefficients may not be completely accurate for the problem you are working on.

Achieving accuracy

What process or method should we use to ensure accurate solutions? When performing numerical modeling, seven basic rules have withstood the test of time:

  1. No result can be more accurate than the input conditions. Not too long ago I was in a technical review where the engineers were touting the complexity and accuracy of their solution. When asked about critical boundary conditions, it became quite clear that important factors affecting the solution were no more than estimates. Sometimes we become so proud of our calculation that we forget the error and uncertainty in the input data. Many models require boundary conditions that are calculated by hand or by some other software package, or that come from measurements; the model can’t be any more accurate than those inputs. One of the worst mistakes in a simulation is reporting results to four significant digits when the input data are good to only one. Always evaluate the “uncertainty” of every aspect of the problem you are working on. (See CP’s ongoing series by Dr. Gooddata; Part 1.)

  2. Nothing beats experience. It’s important to define the model that best fits the physical situation. Many young folks are computer wizards who can develop a model and quickly get results. That’s great, but it’s a good idea to have as much gray hair around the problem as possible. The experienced folks might not know all the details of the computer modeling, but they have “been there and done it” in the field; they have an intuitive feel for a problem’s behavior that others don’t, and that feel is important for success. A few years ago I was in a meeting where a group was presenting simulation results on a structural dynamics problem. A well-respected senior engineer told the group their results were wrong. When asked specifically what the problem was, he said, “I don’t know what the actual problem is, but I’ve never seen the numbers you calculated fall in that range before.” The young guns brushed the comments off, but the senior engineer asked them to solve a problem he had done by hand, a known solution in the industry. As it turned out, their program had left out a gravitational acceleration term, throwing all the results off by a factor of 32.2 (g in ft/sec²). That senior engineer saved the team an embarrassing mistake; it’s a lesson those young guns will never forget.

  3. Take it easy on the problem size. Don’t try to model the world. With computers as powerful as they are today, the temptation is to make the model as big as the computer can handle. That isn’t the best strategy. Take the problem in increments and strategic “bite sizes.” Similarly, stay away from non-linear analysis until things are “tuned up and debugged” with linear analysis; then introduce complexity in small steps. As a friend once said, “Just because you can eat more doesn’t mean you should.” This rule also ties back to Rule 2, because today’s computer power lets us test our approach more effectively and faster.

  4. Always check the model in detail. I remember discussing a structural dynamics problem with an engineer who had a doctorate in mechanical engineering. The analysis was producing an incredibly unrealistic deflection, and the experienced plant folks questioned it. The PhD’s response was a long theoretical “mumbo-jumbo” on how this could occur. As it turned out, the only problem was a bug in the input data. You analytical types should keep your egos in check when questioned by experienced design engineers and operating plant personnel who have lived with the situation. Don’t blindly trust those beautiful color plots. Assume everything in the model is wrong until it’s all proven correct. It’s always wise to perform hand calculations to double-check key aspects of the problem.

  5. Define a model that best depicts the physical situation. Don’t rush this important step. Good thinking upfront in defining the model will save valuable time on the overall project. It’s important that the boundary conditions of the problem come from a reliable source and can be verified against a known solution. If the exact model design isn’t clear, take a “macro” model approach: use a large, simple model to produce rough solutions that suggest what the detailed model should look like. After several trial attempts, a detailed model can then be developed with reasonable certainty.

  6. Use commercial software that has the theory to back it up. Good software developers of commercial code aren’t afraid to publish a theory manual. I’ve invariably found that the quality of the manual correlates with the performance of the software. Commercial codes that have had a poor theory manual or that lacked one have performed poorly. Ignore any hype that the code is so user-friendly that the engineer doesn’t need to have detailed knowledge of the theory. For you P.E.’s (Professional Engineers) out there, “watch it.” You’re responsible for the design or analysis, not the software vendor. Is the bat liable if a baseball player hits a foul ball or strikes out? Don’t ruin your career because you trusted or used a cheap piece of software that’s relatively unproven or isn’t backed by a detailed theory manual. Just because some software is the latest thing on the market and claims to have all the latest “advances” doesn’t make it the best product to use.

  7. Perform hand calculations to check the approach. For most problems you can run hand calculations on a test case to check your approach and build confidence in your solution. After completing the detailed analysis and checking, identify the governing aspects of the problem and develop test cases that can be calculated by hand to validate your approach. It’s also important to run sensitivity studies on the governing parameters to establish how accurately those parameters must be known to get a meaningful solution. (A simple sketch of both checks appears after this list.)
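
To make Rule 7 concrete, here’s a minimal sketch in Python. It assumes a hypothetical cantilever-beam test case whose hand solution is the classic end-loaded tip deflection, deflection = PL³/(3EI); the “simulated” value is a stand-in for a real FEA result. The pattern is what matters: check the simulation against the hand calculation, then perturb each governing parameter to see which ones dominate the answer.

```python
# A sketch of Rule 7: verify a simulation against a hand calculation,
# then run a one-at-a-time sensitivity study on the governing parameters.
# All numbers below are hypothetical, chosen only for illustration.

def tip_deflection(P, L, E, I):
    """Hand-calculation benchmark: end-loaded cantilever tip deflection."""
    return P * L**3 / (3.0 * E * I)

params = {"P": 1_000.0,   # end load, N
          "L": 2.0,       # beam length, m
          "E": 200e9,     # Young's modulus, Pa (steel)
          "I": 8.33e-6}   # second moment of area, m^4

hand = tip_deflection(**params)   # about 1.60e-3 m
simulated = 1.62e-3               # stand-in for the FEA result, m

# Check the simulation against the hand calculation.
rel_error = abs(simulated - hand) / hand
print(f"hand = {hand:.3e} m, simulation = {simulated:.3e} m, "
      f"relative error = {rel_error:.1%}")

# One-at-a-time sensitivity: bump each governing parameter by 10%
# and see how much the answer moves.
for name in params:
    bumped = dict(params, **{name: params[name] * 1.10})
    change = (tip_deflection(**bumped) - hand) / hand
    print(f"+10% {name}: deflection changes by {change:+.1%}")
```

Here a 10% error in the length swings the deflection by 33%, while a 10% error in the modulus moves it only 9%, so the length data deserve the most scrutiny. That tells you where input accuracy really matters (Rule 1).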

The last part of simulation almost always leads to assessment. Sometimes this is as big a problem as the simulation itself. Take, for example, a structural analysis where the stresses must be classified before a code assessment can be performed. Often a process called stress linearization is used to compare the stresses to the code. Stress linearization, while it sounds sophisticated, is nothing more than a translator from Finite Element Analysis (FEA) results to the code. Unfortunately, almost all stress linearization routines carry errors of their own, and the results depend on the cross-sections the user chooses; so be careful, and remember Rule 2. The sketch below shows the basic idea.
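
To show how little magic is involved, here’s a rough Python sketch of the idea (not any particular vendor’s routine). It assumes a hypothetical through-thickness stress profile sampled along a chosen cross-section; linearization just splits that profile into a membrane (average) part and a bending (linear) part.

```python
import numpy as np

# Hypothetical through-thickness stress profile along a chosen cut,
# with x = 0 at one surface and x = t at the other.
t = 0.025                                 # wall thickness, m
x = np.linspace(0.0, t, 21)               # sample points through the wall
sigma = 80e6 + 40e6 * (1 - 2 * x / t)     # stand-in FEA stresses, Pa

# Membrane stress: the through-thickness average.
sigma_m = np.trapz(sigma, x) / t

# Bending stress: the linearly varying part, evaluated at the surface.
sigma_b = (6.0 / t**2) * np.trapz(sigma * (t / 2 - x), x)

print(f"membrane = {sigma_m/1e6:.1f} MPa, bending = {sigma_b/1e6:.1f} MPa")
```

Move the cut to a different cross-section, or sample the stresses differently, and the numbers change, which is exactly why the results are user-dependent.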

FEA exploded in the 1980s and Computational Fluid Dynamics (CFD) grew rapidly in the 1990s. Now there are integrated design packages that automatically perform CFD and FEA and require little knowledge from the user. Almost all of this analysis is non-linear and highly dependent on boundary conditions, convergence algorithms, model definition and equation parameters. I can give 10 inexperienced engineers a CFD problem and get 10 different answers, because one thing is certain: these packages will yield answers. Anyone assessing the results must be aware of this. Having some benchmark comparisons against known solutions often is helpful; one is sketched below. Data are merely data unless you can interpret them correctly.
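
As an example of such a benchmark, here’s a minimal Python sketch assuming a laminar pipe-flow test case, where the exact Hagen-Poiseuille pressure drop is known. The CFD number shown is a hypothetical stand-in, and the 5% tolerance is likewise just an illustration.

```python
import math

# Hagen-Poiseuille pressure drop for laminar flow in a circular pipe:
#   dp = 128 * mu * L * Q / (pi * D**4)
mu = 1.0e-3    # dynamic viscosity, Pa·s (water)
L = 1.0        # pipe length, m
D = 0.01       # pipe diameter, m
Q = 1.0e-6     # volumetric flow rate, m^3/s (keeps the flow laminar)

analytic_dp = 128.0 * mu * L * Q / (math.pi * D**4)   # about 4.07 Pa
cfd_dp = 4.15                                         # stand-in CFD result, Pa

rel_error = abs(cfd_dp - analytic_dp) / analytic_dp
print(f"analytic = {analytic_dp:.2f} Pa, CFD = {cfd_dp:.2f} Pa, "
      f"error = {rel_error:.1%}")
if rel_error > 0.05:
    print("Benchmark failed; revisit mesh, boundary conditions and solver.")
```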

A bright future

Don’t be put off by the concerns I’ve raised. There are good software tools out there and there are good people using them. The future is exciting. For instance, today we can perform non-linear structural analysis and assess local plasticity, but usually don’t. I believe that one day almost all structural analysis will be non-linear and the model will automatically account for local plasticity. In CFD, the tools are gaining speed and efficiency and more data and information are available to tune the solvers to achieve a better solution. Maybe someday the software programs and computers will be smart enough to replace seasoned engineers with common sense and experience (ha ha!) but that day isn’t today. Until then, you’re better off sticking to the seven rules for successful simulation.

Reference

1. Knight, C., “Seven rules for successful simulation,” Hydrocarbon Processing, p. 61 (Dec. 2001).

Cliff Knight, P.E., is president of KnightHawk Engineering, Houston. E-mail him at [email protected].
