3 practical tips to start your epic 3D simulation journey

By Oliver Mashari 07 April 2020

The subject of today’s blog post is somewhat controversial.

Engineers who are new to FEA will often say, “I really want to run this huge model in full 3D.” But is it practical to run large 3D models? And for what purpose?

That’s what we’re going to talk about in this article!

First we’ll ask the following questions:

  • What are the advantages and drawbacks to simulating in full 3D?
  • Is simulating in full 3D really required?
  • If we suppose that simulating in full 3D is beneficial, how can we make it practical, knowing that the model will take a long time to run and will be more difficult to debug?

Those are all interesting questions and their answers are often misunderstood by FEA beginners.

Second we’ll review three factors to successfully run full 3D models:

  1. Building a 1D or 2D approximate model
  2. Evaluating the memory that will be required for a 3D run
  3. Running a convergence study by doing a cloud sweep of mesh size

Finally we will study an example model together!

Simulating with finite element analysis in full 3D is pure joy … but the journey can be tumultuous

Finite element analysis (FEA) is a method of analyzing how a part or assembly will perform over its lifetime.

It enables you to predict potential design issues and therefore minimize risk to your product while increasing profits and contributing greatly to your business.

Consider this …

FEA allows you to construct a prototype of the design before you even go anywhere near a production facility.

Speaking from experience, any enthusiastic engineer starting to use OnScale or any FEA tool will think, “I want to simulate my device in 3D.”

But …

Simulating in full 3D can be an epic journey with many ups and downs!

Fortunately there are a few critical factors to consider before jumping into simulating 3D models and also ways to make the journey much smoother! Like making a simple 1D or 2D approximate model first …

1. Building a 1D or 2D approximate model

Will a 1D or a 2D model give you the same answer?

3D simulations can be time-consuming and you will have to wait a long time to get the results (although by leveraging the power of High Performance Computing over the cloud, OnScale can of course get you your results many times faster than legacy FEA software packages ever could!).

Because of the time involved, if your model is not set up correctly, you will have a hard time finding the error.

A 1D or a 2D model will provide some initial results that can be used to validate the 3D model later. It’s quick to build and easy to debug.

Why is it so difficult to find the source of errors and inaccuracies in your simulation model?

Sometimes the model just doesn’t run. In this case if you can pinpoint the part of the model or the code which causes problems, you can often correct your model almost right away.

The problem is that errors and inaccuracies can be difficult to spot because there are many variables that can cause problems.

The difficult part is when the model does run but the results don’t match expectations at all. At this point you really need to understand the physics behind the simulation!

The very worst case is when the model runs and the results look okay, but when you compare them with a physical experiment, you discover there’s a problem.

But first things first, let’s get back to finding the location of the errors in the model.

The solution for building complex 3D models faster

It may seem counter-intuitive to say it, but … always start simple!

1D and 2D models are very effective tools to evaluate model aspects such as:

  • Mesh size
  • Material properties
  • Loads and boundary conditions
  • Output results
  • Timesteps

What is crucial here is that by using a 1D or 2D model, we dramatically decrease the number of variables to consider and each “run” is faster.

If we use simple 1D/2D models at first, we won’t fall into the infinite loop of “Change 1 parameter, wait 3 hours, it doesn’t work, change another parameter, run another simulation …”

What are 1D simulation models good for?

When we simulate resonators, for example, 1D models are great and can be used to quickly evaluate the thickness of resonator stack-ups. On top of that it is a great way to check and confirm that material properties are correct.
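
To make that concrete, here is a minimal Python sketch (not OnScale code) of the kind of back-of-the-envelope estimate a 1D model helps you refine: the fundamental thickness-mode resonance of a plate follows f ≈ v / (2t), so a first-pass thickness can be computed directly. The velocity and frequency values below are assumptions for illustration only.

```python
# Minimal sketch (Python, not OnScale syntax): first-pass thickness estimate for
# a thickness-mode resonator using the half-wavelength relation f = v / (2 * t).
# The velocity below is an illustrative assumption - use your own validated
# material data in a real study.

def thickness_for_frequency(target_freq_hz: float, velocity_m_s: float) -> float:
    """Return the plate thickness (m) whose fundamental thickness-mode
    resonance is approximately target_freq_hz."""
    return velocity_m_s / (2.0 * target_freq_hz)

# Example: an assumed longitudinal velocity of 4600 m/s and a 2 MHz target
thickness = thickness_for_frequency(2.0e6, 4600.0)
print(f"First-pass thickness: {thickness * 1e6:.0f} um")  # ~1150 um
```

A quick 1D simulation around a starting point like this lets you fine-tune the stack-up before any 2D or 3D work.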

Material properties are very important! (Remember the “Garbage-in, Garbage-out” principle. 😊)

What about 2D simulation models?

2D models can be used to simulate cross sections or cylindrical structures and obtain a first approximation of the key performance indicators (KPIs).

(A cylindrical structure is modeled as a 2D axi-symmetric model, which is a special type of 2D model.)

They allow you to evaluate the setup in greater detail before moving on to the full 3D simulation.

Testing should always start on small models to quickly gain answers to setup concerns. It’s only when you are satisfied with the 2D results that you should move on to the 3D simulations.

What to keep in mind when moving from 2D to 3D?

2D models are quick to run, allowing hundreds of models to be evaluated in the time taken to run one 3D simulation. They also require less memory!

Remember, though, that while 2D models run fast, they will only give you an approximation of the real solution.

Why?

2D models lack the depth dimension, so they cannot capture waves or resonances induced along the third axis, and they exhibit less attenuation than a real 3D device.

2. Evaluating memory that will be required for a full 3D model

OnScale estimates memory requirements prior to job submission to the cloud.

Memory typically scales with model size. However, here are some things to consider:

  • Piezoelectric elements within the electromechanical solve require more memory
  • Additional calculated quantities and outputs also require more memory (for example, if extrapolation or MP4 movie output is defined)
  • Switching to the double-precision solver requires more memory
  • Solver type can also affect memory usage (e.g. Pardiso vs. conjugate gradient)

How does OnScale estimate the memory required?

Well, one timestep is run on your local hardware and that gives an estimate of the RAM required as well as the solve time and core-hours needed to run the simulation.
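
As a rough illustration of what you can do with that single-timestep estimate, the sketch below (plain Python, with made-up numbers) scales the wall time of one timestep up to a full run and converts it to core-hours; none of these values come from OnScale itself.

```python
# Minimal sketch (Python): back-of-the-envelope solve time and core-hour
# estimate from a single-timestep measurement. All numbers are illustrative
# assumptions, not real OnScale output.

seconds_per_timestep = 0.8      # measured wall time for one timestep (assumed)
total_timesteps = 50_000        # timesteps needed for the full simulation (assumed)
cores = 16                      # cores the cloud job will use (assumed)

wall_time_hours = seconds_per_timestep * total_timesteps / 3600.0
core_hours = wall_time_hours * cores

print(f"~{wall_time_hours:.1f} h of wall time, ~{core_hours:.0f} core-hours")
```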

Large models can easily require more than 16 GB of RAM. If your local hardware doesn’t have enough memory, the estimate itself can fail before the job is ever submitted to the cloud.

We have a solution for that, though: Estimation Override is a feature that lets you assign a known amount of memory per simulation.

How can I optimize the memory needed for a simulation?

Reduce the size of your model so that it runs quickly on your local system.

By overriding the estimate you can assign a generous amount of RAM (for example, 32 GB) and then check the *.flxprt print file in the working directory to find the exact memory usage.

Memory usage is reported at the bottom of the *.flxprt file.

You can then see how much memory is required and can extrapolate what will be needed to run a larger model.

There’s no need to run the model for the whole duration: 10 timesteps are generally enough. Use trial and error to change various values and optimize the settings.
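
A simple way to use those numbers is to scale the memory of the reduced run up to the full model by element count. The sketch below (plain Python) does exactly that; the element counts, the reported memory value and the 30% safety margin are all assumptions for illustration.

```python
# Minimal sketch (Python): extrapolate the memory needed for a full 3D model
# from a short run of a reduced model, assuming memory scales roughly linearly
# with element count (a first-order approximation only).
# The small-run memory value is read manually from the *.flxprt print file;
# the numbers here are placeholders, not real OnScale output.

small_run_elements = 2_000_000     # elements in the reduced test model (assumed)
small_run_memory_gb = 3.2          # memory reported for the reduced run (assumed)
full_model_elements = 25_000_000   # elements expected in the full 3D model (assumed)

estimated_gb = small_run_memory_gb * (full_model_elements / small_run_elements)
estimated_gb_with_margin = 1.3 * estimated_gb   # margin for extra outputs/solver overhead

print(f"Estimated memory: {estimated_gb:.0f} GB "
      f"(~{estimated_gb_with_margin:.0f} GB with a 30% margin)")
```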

3. Running a convergence study by doing a cloud sweep of mesh size

Does mesh density impact the memory?

Yes. A finer mesh means more elements, and therefore more memory. It will generally yield better results too, but an over-meshed model will simply take an excessively long time to run.

Over-meshing can also produce poorly shaped elements, which is detrimental to the stability of the solution.

Mesh convergence studies are used to determine how meshing affects device performance.

The process is a simple sweep of the mesh size from small to large, followed by an analysis of the changes in outputs (impedance or KPIs of interest).

Convergence studies can be performed quickly using OnScale’s capability to batch process several simulation studies on the cloud simultaneously.

If you are using wavelength-based meshing, sweep elements per wavelength (epw). Typically we define this variable in Analyst as nepw. We recommend 15 epw to accurately discretize the wave.
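
To show what such a sweep looks like in practice, here is a minimal Python sketch (not OnScale code) of how the element size follows from the elements-per-wavelength value; the velocity and frequency are illustrative assumptions, not the values used in the model below.

```python
# Minimal sketch (Python): element sizes for a mesh convergence sweep driven by
# elements per wavelength (epw): wavelength = velocity / frequency, and the
# element size is wavelength / nepw. Velocity and frequency are assumptions.

velocity_m_s = 4000.0       # assumed acoustic velocity in the material
frequency_hz = 900.0e6      # assumed operating frequency

wavelength = velocity_m_s / frequency_hz

for nepw in (15, 20, 25, 30, 40, 50):      # sweep from coarse to fine
    element_size = wavelength / nepw       # mesh size for this run
    print(f"nepw = {nepw:2d}  ->  element size = {element_size * 1e9:.1f} nm")
```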

Let’s look at a simple mesh convergence case study

This simple unit cell model of a single-port surface acoustic wave (SAW) filter allows rapid simulation of device performance. The model can be downloaded here.

This allows a very wide range of designs to be explored before moving to more complex full 3D simulations.

The model comprises a pair of aluminium electrodes on a piezoelectric substrate. The substrate can either be lithium tantalate (LiTaO3) or lithium niobate (LiNbO3).

Meshing is swept from 15 to 50 elements per wavelength. We will look at how meshing affects the center frequency (Fs) and the impedance of the device.

Now that we have the full data set we can look at the series resonance of the device and how it varies with mesh refinement.
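
As a sketch of how that post-processing could be done outside OnScale, the series resonance Fs for each run can be taken as the frequency where the impedance magnitude is minimal and then compared against the finest mesh. The data structure below is a hypothetical stand-in for the exported sweep results, not an OnScale API.

```python
# Minimal sketch (Python): extract the series resonance Fs from each impedance
# curve in the sweep and report its change relative to the finest mesh.
# impedance_by_nepw is a hypothetical container for exported results:
#   {nepw: (frequency_array_hz, complex_impedance_array_ohm), ...}

import numpy as np

def series_resonance(freq_hz: np.ndarray, impedance_ohm: np.ndarray) -> float:
    """Take Fs as the frequency where |Z| is minimal."""
    return float(freq_hz[np.argmin(np.abs(impedance_ohm))])

def convergence_table(impedance_by_nepw: dict) -> None:
    fs = {n: series_resonance(f, z) for n, (f, z) in impedance_by_nepw.items()}
    reference = fs[max(fs)]                      # finest mesh as the reference
    for n in sorted(fs):
        error_pct = 100.0 * abs(fs[n] - reference) / reference
        print(f"nepw = {n:2d}  Fs = {fs[n] / 1e6:9.3f} MHz  deviation = {error_pct:.3f} %")
```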


Let’s analyze the results

With a coarse mesh of 15 elements per wavelength, we can achieve an accuracy of better than 1%, which is enough for most research and development purposes.

Some things to consider when looking at the results:

  • Is it necessary to achieve 0.01% accuracy?
  • How accurate are your material properties?
  • What are the manufacturing tolerances?
  • Is it worth running a simulation for many days to achieve this accuracy?

Remember that engineers must be pragmatic and aim to achieve the best solution in the shortest time available.

There is always a trade-off between accuracy and simulation time!

Oliver Mashari

Oliver Mashari is an Application Engineer at OnScale. As part of our engineering team he assists with developing applications, improving our existing software and providing technical support to our customers.