Most finite element simulations require fine meshes to accurately resolve geometric details, edge effects and material interfaces. However, to ensure numerical stability during execution, the time step must be smaller than the time required for the fastest wave to propagate across the smallest element:
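In one dimension this is the familiar CFL condition; writing the smallest element size as Δx_min and the fastest wave speed in the model as c_max, it reads:

```latex
\Delta t \le \frac{\Delta x_{\min}}{c_{\max}}
```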
In higher dimensions this condition takes a slightly adjusted form, but the idea is the same: a fine mesh forces a small timestep. This in turn results in large amounts of data processed at a very small timestep, driving up computational time and cost. However, a simple remedy exists if the frequency range of interest of the data is known.
Time domain simulation, frequency domain results
In most cases, OnScale is used to simulate a device or problem in the time domain, and the results are post-processed using Fourier transforms, combined with other operations, to obtain the frequency domain representation of the desired quantities.
For example, if you are interested in the impedance of a device, and the time domain voltage v(t) and current i(t) quantities are known, the following can be calculated:
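With V(f) and I(f) denoting the Fourier transforms of the voltage and current, the impedance follows as:

```latex
Z(f) = \frac{V(f)}{I(f)} = \frac{\mathcal{F}\{v(t)\}}{\mathcal{F}\{i(t)\}}
```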
Moreover, the device behavior is usually investigated within a specific frequency band, so the excitation signal is chosen to have a suitable bandwidth: too narrow, and not all spurious modes are picked up; too wide, and the input energy is spread out, reducing precision. Knowledge of the input signal bandwidth will be critical in the following sections. A commonly used Ricker wavelet excitation, together with its frequency spectrum, is given in Fig. 1.
The sampling theorem
A simplified version of the sampling theorem (usually named in honour of some combination of Whittaker, Nyquist, Kotelnikov and Shannon) is presented here. First, observe an even simpler fact: to represent a sine wave of arbitrary frequency, you need at least two sample points per period. With fewer, the sine wave can no longer be reconstructed. Using this as a starting point, the theorem in its simple form gives an upper limit for the maximum representable frequency of a time domain signal, based on the sampling frequency, by requiring at least two samples per period:
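With a sampling frequency f_s = 1/Δt, the limit is:

```latex
f_{\max} = \frac{f_s}{2} = \frac{1}{2\,\Delta t}
```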
For example, if the simulation timestep is 0.1 microsecond, the maximum representable frequency is 5 MHz. But now, assume that the frequency range of interest is only around 200 kHz to 500 kHz. We could subsample the data by a factor of 10, reducing file size by the same factor!
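This back-of-envelope arithmetic can be sketched in a few lines of NumPy (the timestep is the value quoted above; the stored sample count is a hypothetical figure for illustration):

```python
import numpy as np

dt = 0.1e-6                # simulation timestep: 0.1 microsecond
fs = 1.0 / dt              # sampling frequency: 10 MHz
nyquist = fs / 2           # maximum representable frequency: 5 MHz

factor = 10                          # proposed subsampling factor
new_nyquist = (fs / factor) / 2      # 500 kHz, still covering 200-500 kHz

signal = np.zeros(100_000)           # hypothetical stored time history
subsampled = signal[::factor]        # keep every 10th sample, from the first
print(nyquist, new_nyquist, len(subsampled))
```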
This whole thing sounds a bit counterintuitive. Why run a simulation at such a fine timestep if most of the data is discarded afterwards? Consider the following analogy: imagine you are a timekeeper at a car race. To determine the qualification order, you need an extremely precise stopwatch to resolve the minuscule differences between the racers' lap times, even though the bulk of each lap time is irrelevant for deciding who is faster. Similarly, in simulations, a fine timestep is needed to resolve the exact moments when reflections happen at material and boundary interfaces, and therefore to simulate the device behavior accurately. This naturally generates much higher frequency content in the data than is needed, and it is (usually) safe to discard it.
Two things to bear in mind though: (i) too much subsampling can lead to aliasing, and (ii) starting the subsampling at the wrong data point causes phase changes in the frequency domain.
First, aliasing is illustrated below for different subsampling values. It can be imagined as a ‘fold’ in the frequency spectrum at half of the sampling frequency. If the sampling frequency is too low, the fold causes the two sides of the spectrum to sum destructively, see Fig. 2. For the 10.6 GHz case, the original and subsampled spectra match well up to 5 GHz. However, with a 6.6 GHz sampling frequency aliasing occurs, and the subsampled spectrum only agrees with the original up to about 2.5 GHz. Keep in mind that a sampling frequency more than double the largest frequency component of interest does not guarantee a ‘nice’ frequency spectrum if there are many non-zero components ‘near the fold’; this must be evaluated on a case-by-case basis. Nevertheless, an excitation signal with a limited bandwidth usually avoids the issue.
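The fold can be reproduced with a toy signal (hypothetical 1 MHz and 4 MHz tones for illustration, not the GHz-range data of Fig. 2):

```python
import numpy as np

fs = 100e6                         # fine 'simulation' sampling rate: 100 MHz
t = np.arange(10_000) / fs
x = np.sin(2*np.pi*1e6*t) + np.sin(2*np.pi*4e6*t)   # 1 MHz + 4 MHz tones

# Subsample by 10 -> fs = 10 MHz, Nyquist 5 MHz: both tones representable.
good = x[::10]
# Subsample by 20 -> fs = 5 MHz, Nyquist 2.5 MHz: the 4 MHz tone folds down
# to 5 - 4 = 1 MHz in antiphase and cancels the genuine 1 MHz tone.
bad = x[::20]

print(np.abs(good).max())          # ~1.9: signal intact
print(np.abs(bad).max())           # ~0: destructive summation at the fold
```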
Secondly, recall that a shift in time domain corresponds to a complex phase shift in the frequency domain:
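Explicitly, for a shift by t0:

```latex
\mathcal{F}\{x(t - t_0)\}(\omega) = e^{-j\omega t_0}\, X(\omega)
```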
where the notation common in engineering was used. Now, if a discrete Fourier transform of the signal is taken using a fast Fourier transform (FFT) algorithm, the time vector of the data is discarded, and it is implicitly assumed that the first data point in the data vector corresponds to t=0. Therefore, it is essential to carry out subsampling by keeping the first value of the data and skipping afterwards. For example, for 4-times subsampling, the right way is to use the 1st, 5th, 9th, etc. data points, and NOT the multiples of four, i.e. NOT the 4th, 8th, 12th, etc. The effect is illustrated in Fig. 3. Notice that only the real and imaginary parts are affected, not the magnitude.
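The pitfall is easy to reproduce with a toy NumPy example (hypothetical 50 Hz tone and 4-times subsampling, chosen so the tone lands exactly on an FFT bin):

```python
import numpy as np

# One second of a 50 Hz cosine sampled at 1024 Hz.
t = np.arange(1024) / 1024.0
x = np.cos(2 * np.pi * 50 * t)

right = x[0::4]     # keep 1st, 5th, 9th, ... samples (starts at t = 0)
wrong = x[3::4]     # keep 4th, 8th, 12th, ... samples (starts at t = 3/1024)

R = np.fft.rfft(right)     # 256-point transforms; the tone sits in bin 50
W = np.fft.rfft(wrong)

# Magnitudes agree, but the shifted start appears as a phase rotation
# of 2*pi*50*(3/1024) ~ 0.92 rad at the tone frequency.
print(abs(R[50]), abs(W[50]))
print(np.angle(R[50]), np.angle(W[50]))
```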
How to use in OnScale
There are two types of data you can apply this to in OnScale: time histories (time domain data) and mode shape data (Fourier transformed values). In both cases, the logic is similar:
- Use the RATE subcommand to access the temporal subsampling functionality. In its simplest form, just enter an integer that sets how many samples to skip
- An AUTO option is available, which adjusts the rate value to match a given frequency while still satisfying the sampling theorem
- To subsample starting at the first value, use the default settings or explicitly specify the first option; to subsample at every nth point, use the nth option
Time histories
Here the runtime is not affected significantly, but the exported history file shrinks by a factor equal to the RATE value. Note that the command is combined with the snapshot rate definition (see Documentation). All the following examples use the POUT primary command.
Time histories are subsampled at a rate of 2, starting at the first element:
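Judging by the syntax of the other examples, this presumably corresponds to the integer rate with the default (first) start:

rate 2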
Time histories subsampled every 2 timesteps with the nth option:
rate 2 * nth
Time histories are subsampled (no snapshot request), starting at the first element, to satisfy the Nyquist criterion for a 2.0 GHz maximum frequency:
rate auto 0 first * 2.0e9
rate auto * * * 2.0e9
Mode shape data
Here the output file size is not affected, as the outputs are still complex amplitudes at the requested locations and frequency points. However, the processing time during simulation can be significantly reduced using RATE. Note furthermore that, because the simulation runs in the time domain, the Fourier transforms are calculated using a rolling integral; the linkage between the time vector and the data vector is therefore preserved, so the starting point of the subsampling causes no issues.
Some examples (all use SHAP primary command):
Requesting subsampling at every third timestep:
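Presumably, following the integer-rate syntax of the time history examples:

rate 3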
Requesting subsampling at an auto rate with the default safety factor (in this case the sampling frequency is twice the maximum requested frequency):
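Presumably, with the safety factor left at its default:

rate auto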
Requesting subsampling at an auto rate with an increased safety margin (the largest requested frequency is 20% of the sampling frequency, satisfying the Nyquist criterion with room to spare):
rate auto 0.2