Succeed at Simulation

The “Seven rules for successful simulation” need to be revisited and revised because of advancements in computers. Simulations that once took weeks to compute an answer now take mere hours.

By Cliff Knight, KnightHawk Engineering

Much has changed in the seven years since I wrote an article called “Seven rules for successful simulation” [1]. Computers are now faster and more powerful, and they pack more data into ever smaller space.

Engineers right out of school with little or no real-world experience have the opportunity to focus on “world-class” problems. As for software, new programs seem to pop up on a daily basis, touting more advanced capabilities than ever before. Some are so “advanced” that they seem to run themselves. Simulations that once took weeks to compute an answer now take mere hours.

With all this computer power and sophisticated software, what responsibility is the user taking? What role does the seasoned engineer play in this new age? Generally, the engineer with 30 to 40 years of experience prefers to do complex calculations by hand. This leaves the young computer jockeys of today to sit at the helm of the many robust simulation packages. But because the software handles many of the constraints and boundary conditions that govern the outcome of a solution, how much confidence can we have in the results?

Read almost any major engineering magazine and you’ll see advertisements that make all sorts of claims about software capability — e.g., non-linear capability when addressing the physics, and “automatic” and “user-friendly” when discussing the model design itself. Many of the non-linear solutions require all sorts of coefficients that normally come from experiments or well-known solutions. These coefficients may not be completely accurate for the problem you are working on.

Achieving accuracy

What process or method should we use to ensure accurate solutions? When performing numerical modeling, seven basic rules have withstood the test of time:

  1. No result can be more accurate than the input conditions. Not too long ago I was in a technical review where the engineers were touting the complexity and accuracy of their solution. When asked about critical boundary conditions, it became quite clear that important factors affecting the solution were no more than estimates. Sometimes we become so proud of our calculation that we forget the error and uncertainty of the input data. Many models require calculation of boundary conditions, either by hand or by some other software package; sometimes these boundary conditions are provided by measurements. A model or simulation can’t be any more accurate than the input data. One of the worst mistakes in a simulation is evaluating the results to four significant digits when the input data are limited to only one significant digit. Always evaluate the “uncertainty” of all aspects of the problem you are working on. (See CP’s ongoing series by Dr. Gooddata; Part 1.) A short sketch after this list illustrates the point.

  2. Nothing beats experience. It’s important to define the model that best fits the physical situation. Many young folks are computer wizards who can develop a model and quickly get results. That’s great, but it’s a good idea to have as much gray hair around the problem as possible. The experienced folks might not know all the details of the computer modeling, but they have “been there and done it” in the field. They have an intuitive feel for the behavior of a problem that others don’t have. This is important for success. A few years ago I was in a meeting where a group was presenting simulation results on a structural dynamics problem. A well-respected senior engineer told the group their results were wrong. When asked specifically what the problem was, he said, “I don’t know what the actual problem is, but I’ve never seen the numbers in your results fall in that range before.” The young guns brushed the comments off, but the senior engineer asked them to solve a problem he had done by hand, a known solution in the industry. As it turned out, their program left out a gravitational acceleration term (32.2 ft/sec²), causing an error in all results by a factor of 32.2. That senior engineer saved the team an embarrassing mistake; it’s a lesson those young guns will never forget.

  3. Take it easy on the problem size. Don’t try to model the world. With computers as powerful as they are today, the trend is to make the model as big as the computer can handle. This isn’t the best strategy. Take the problem in increments and strategic “bite sizes.” Similarly, stay away from non-linear analysis until things are “tuned up and debugged” with linear analysis. Then introduce complexity in small steps. As a friend once said, “Just because you can eat more doesn’t mean you should.” This step also ties to Rule 2, because today’s computing power lets us run more checks, faster, to make sure our approach is correct.

  4. Always check the model in detail. I remember discussing a structural dynamics problem with an engineer who had a doctorate in mechanical engineering. The analysis was producing an incredibly unrealistic deflection, and the experienced plant folks questioned it. The PhD’s response was a long theoretical “mumbo-jumbo” on how this could occur. As it turned out, the only problem was a bug in the input data. You analytical types should keep your ego in check when questioned by experienced design engineers and operating plant personnel who have lived with the situation. Don’t always trust those beautiful color plots. Assume everything in the model is wrong until all is proven correct. It’s always wise to perform hand calculations to double-check certain aspects of the problem; a simple example follows this list.

  5. Define a model that best depicts the physical situation. Don’t rush into this important step. Good thinking upfront in defining the model will save valuable time on the overall project. It’s important that the boundary conditions of the problem come from a reliable source and can be verified against a known solution. If the exact model design isn’t clear, take a “macro” model approach: use a large, simplified model to produce rough solutions that give some idea of what the detailed model should look like. After several trial attempts, a detailed model then can be developed with reasonable certainty.
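
To make Rule 1 concrete, here is a minimal sketch in Python of how a rough input estimate limits the precision of the answer. The convective-cooling formula is standard, but the heat transfer coefficient, its uncertainty, and the other numbers are hypothetical values chosen only for illustration.

    # Rule 1 sketch: a result is no more precise than its inputs.
    # All values are hypothetical and chosen only for illustration.

    h_nominal = 50.0      # convective heat transfer coefficient, W/m^2-K (rough estimate)
    h_uncertainty = 0.20  # the estimate is only good to about +/-20%

    area = 2.0            # heat transfer area, m^2 (assumed well known)
    t_surface = 400.0     # surface temperature, K
    t_ambient = 300.0     # ambient temperature, K

    def heat_loss(h):
        """Newton's law of cooling: q = h * A * (Ts - Tamb)."""
        return h * area * (t_surface - t_ambient)

    q_nominal = heat_loss(h_nominal)
    q_low = heat_loss(h_nominal * (1.0 - h_uncertainty))
    q_high = heat_loss(h_nominal * (1.0 + h_uncertainty))

    # Reporting the nominal answer as "10000.0 W" to five significant digits
    # hides the fact that the defensible answer is only "8,000 to 12,000 W."
    print(f"nominal heat loss: {q_nominal:.0f} W")
    print(f"plausible range:   {q_low:.0f} to {q_high:.0f} W")

The point is not the arithmetic but the reporting: a one-significant-digit input cannot support a four-significant-digit conclusion.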
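
Rule 4’s hand-calculation check can be just as simple. The sketch below compares a simulated cantilever tip deflection against the classic textbook formula; the beam properties and the “simulated” number are hypothetical, and in practice the hand value would come from whatever closed-form solution fits your problem.

    # Rule 4 sketch: sanity-check a simulation result with a hand calculation.
    # Cantilever beam, point load P at the free end: delta = P * L^3 / (3 * E * I)

    P = 5_000.0      # end load, N (hypothetical)
    L = 2.0          # beam length, m
    E = 200e9        # Young's modulus for steel, Pa
    I = 8.0e-6       # second moment of area, m^4

    delta_hand = P * L**3 / (3.0 * E * I)   # textbook tip deflection, m

    delta_model = 0.0087                    # tip deflection reported by the model, m (hypothetical)

    ratio = delta_model / delta_hand
    print(f"hand calculation: {delta_hand * 1000:.2f} mm")
    print(f"model result:     {delta_model * 1000:.2f} mm")
    print(f"ratio: {ratio:.2f}  (a ratio far from 1 means the model needs a closer look)")

If the ratio had come back at, say, 32, that missing gravitational term from Rule 2 would have announced itself immediately.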