
A Guide to Cloud Estimation and Memory Assignment

By Chloe Allison, 19 June 2020

Figuring out how much memory is required for a simulation is no easy task. So, we have put together a short guide that should help you tackle those ‘insufficient memory allocation’ errors.

Estimation

OnScale is a cloud simulation platform that gives you access to limitless CPUs to run simulations. Jobs are submitted through the Cloud Scheduler.

Figure 1: Cloud Scheduler

Before submitting any jobs to the cloud, you must estimate your model. The estimator runs one time step of your model locally on your computer to check that there are no syntax errors in the code and to give you an idea of the solve time, the required RAM and the estimated core-hour burn. However, because the estimator extrapolates from a single time step of the model, the estimate is not always accurate. If your model fails on the cloud with the error ‘insufficient memory allocation’, the estimated RAM was too low and not enough memory was allocated to your simulation on the cloud. In this case, you need to override the estimate.

You may also need to override the estimate if your model is extremely large or your local machine has only a few cores. In these cases, the estimation can take a long time or fail completely. This does not mean that your model cannot be run on the cloud; it just means your local computer is unable to process one time step of the simulation.

 

Memory Assignment

If you bypass the estimator, you must assign RAM to your model manually. Assign a value as close to the true RAM requirement as possible so that CPUs are allocated efficiently and you are not spooling up RAM on cloud nodes you do not need, which slows down the runtime. How much RAM a simulation requires depends on a few factors.

How big is my model?

The main factor that drives RAM usage is the size of your model: the larger the model, the more RAM you will require. In FEA modelling, size doesn’t refer to the physical dimensions of the model; it refers to the number of elements in the mesh. Each element is solved using partial differential equations (PDEs), so the number of elements, together with the number of degrees of freedom (DOF) each element carries, determines how much RAM is required to solve the model.

To check the size of your model you can select Preview Model in the Home tab. When your model geometry appears, select Stop Preview. This creates a *.flxprt file in the same directory which contains the number of elements in your model.

Figure 2: Snippet from *.flxprt file showing model size
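The guidance in Table 1 below is expressed in degrees of freedom (DOF) rather than elements. As a rough bridge between the two (an assumption for illustration, not an OnScale rule: a large structured 3D mesh has roughly one node per element, and a purely mechanical model carries three displacement DOFs per node), you can convert the element count reported in the *.flxprt file into an approximate DOF count:

```python
def approximate_dof(num_elements, nodes_per_element=1.0, dof_per_node=3):
    """Rough DOF estimate from an element count.

    Assumptions (not OnScale internals): a large structured 3D mesh has
    roughly one node per element, and a purely mechanical model carries
    three displacement DOFs per node.
    """
    return int(num_elements * nodes_per_element * dof_per_node)

# Example: a ~1,000,000-element mechanical mesh -> roughly 3,000,000 DOFs.
print(approximate_dof(1_000_000))
```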

A good way to estimate the RAM requirement is to run a coarsely meshed model and then a more finely meshed one. Compare the RAM required and the number of elements in each, then do a proportionality calculation between elements and RAM. You can use this ratio to predict what a larger model would require, as in the sketch below.
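Here is a minimal sketch of that proportionality calculation in Python. The element counts and RAM figures are made-up placeholders for illustration; substitute the numbers reported by your own coarse and fine runs.

```python
def ram_per_element(elements_coarse, ram_coarse_mb, elements_fine, ram_fine_mb):
    """Estimate the RAM cost per element from a coarse and a fine run."""
    return (ram_fine_mb - ram_coarse_mb) / (elements_fine - elements_coarse)

def predict_ram_mb(target_elements, elements_coarse, ram_coarse_mb, per_element_mb):
    """Linearly extrapolate the RAM needed for a larger mesh."""
    return ram_coarse_mb + (target_elements - elements_coarse) * per_element_mb

# Placeholder numbers for illustration only.
per_elem = ram_per_element(100_000, 40.0, 1_000_000, 360.0)
print(predict_ram_mb(10_000_000, 100_000, 40.0, per_elem))  # 3560.0 MB
```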

What type of analysis am I doing?

The type of analysis also affects how much memory is required. For example, static analysis is cheaper than transient analysis because it is less complex to compute. Modal analysis is more expensive again, because obtaining frequency-domain data means capturing data at every time step in the time domain. Another comparison is mechanical versus electrical analysis, where the former is less expensive. Things get more complicated when multiple types of analysis happen in one model, e.g. electromechanics. If you know what proportion of the multi-physics model is solved by each analysis type, you can roughly estimate how much RAM is required using the guidance in Table 1, as sketched below.
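As a rough sketch of that split estimate, the per-DOF costs below are assumptions loosely read off the single-precision rows of Table 1 (~30 MB per million mechanical DOFs, ~2 GB per 100,000 electrical DOFs); replace them with figures from the table or from your own coarse runs.

```python
def weighted_ram_mb(total_dof, regions):
    """Rough RAM estimate for a multi-physics model.

    `regions` maps an analysis type to (fraction_of_dof, mb_per_dof).
    The per-DOF costs are assumptions, not OnScale constants.
    """
    return sum(total_dof * fraction * mb_per_dof
               for fraction, mb_per_dof in regions.values())

# Electromechanical model, 80% mechanical / 20% electrical DOFs.
estimate = weighted_ram_mb(
    1_000_000,
    {
        "mechanical": (0.8, 0.00003),  # ~30 MB per 1,000,000 DOF (assumed)
        "electrical": (0.2, 0.02),     # ~2 GB per 100,000 DOF (assumed)
    },
)
print(f"~{estimate:,.0f} MB")  # ~4,024 MB, in the same ballpark as Table 1
```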

What data arrays am I calculating?

The number of outputs requested also impacts the RAM requirement of a model. Always ensure you are only calculating the data arrays you need. For example, if you only want to see X displacement, calculate X displacement only, not all displacements. A rough illustration of the saving follows the note below.

Figure 3: Top: Code to calculate all displacement, Bottom: Code to calculate X displacement only

Note: Outputs can also be subsampled to reduce RAM
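As a back-of-the-envelope illustration (plain arithmetic, not OnScale code), storing a single displacement component instead of all three cuts the memory for that output roughly by a factor of three:

```python
BYTES_PER_FLOAT = 4  # single precision: 32-bit floats

def output_array_mb(values_per_component, components, bytes_per_float=BYTES_PER_FLOAT):
    """Memory used to store displacement output arrays, in MB."""
    return values_per_component * components * bytes_per_float / 1e6

# All three displacement components vs. X displacement only, ~1,000,000 values each.
print(output_array_mb(1_000_000, components=3))  # 12.0 MB
print(output_array_mb(1_000_000, components=1))  # 4.0 MB
```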

What precision am I using?

Lastly, the RAM required for your model depends on the precision setting, i.e. the numerical precision of the solve. There are two options: single and double. Single precision uses a 32-bit word (each float is 4 bytes) and double precision uses a 64-bit word (each float is 8 bytes), so single precision requires half the RAM of double precision.
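A quick sketch of the arithmetic (not OnScale internals): for the same number of stored values, moving from a 32-bit to a 64-bit word doubles the memory.

```python
def storage_mb(num_floats, precision="single"):
    """Memory needed to store floating-point values at a given precision."""
    bytes_per_float = 4 if precision == "single" else 8  # 32-bit vs 64-bit word
    return num_floats * bytes_per_float / 1e6

print(storage_mb(10_000_000, "single"))  # 40.0 MB
print(storage_mb(10_000_000, "double"))  # 80.0 MB
```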

Table of guidance

The table below is a guide to help you estimate how much memory is required for your model.

Note: All of the examples below use a 3D solve.

Table 1: Table of RAM requirements for example models

Analysis Type | Precision | DOF | Data Arrays Calculated | RAM Required
--- | --- | --- | --- | ---
Fully Mechanical (Static) | Single | ~100,000 | 3* | ~4 MB
Fully Mechanical (Static) | Single | ~1,000,000 | 3* | ~36 MB
Fully Mechanical (Static) | Single | ~10,000,000 | 3* | ~355 MB
Fully Mechanical (Transient) | Single | ~100,000 | 1 | ~4 MB
Fully Mechanical (Transient) | Single | ~1,000,000 | 1 | ~30 MB
Fully Mechanical (Transient) | Single | ~10,000,000 | 1 | ~255 MB
Fully Electrical | Single | ~100,000 | 1 | ~2 GB
Fully Electrical | Double | ~1,000,000 | 1 | ~41 GB
Electromechanical | Single | ~100,000 (80% mechanical, 20% electrical) | 1 | ~197 MB
Electromechanical | Single | ~1,000,000 (80% mechanical, 20% electrical) | 1 | ~4.3 GB
Electromechanical | Double | ~10,000,000 (80% mechanical, 20% electrical) | 1 | ~84 GB
Thermomechanical | Single | ~100,000 (2/3 mechanical, 1/3 thermal) | 1 | ~198 MB
Thermomechanical | Single | ~1,000,000 (2/3 mechanical, 1/3 thermal) | 1 | ~181 MB
Thermomechanical | Single | ~10,000,000 (2/3 mechanical, 1/3 thermal) | 1 | ~1.8 GB

*You must calculate all displacement arrays for static analysis

Chloe Allison

Chloe Allison is an Application Engineer at OnScale. She received her MA in Electrical and Electronics Engineering from the University of Strathclyde. As part of our engineering team Chloe assists with developing applications, improving our existing software and providing technical support to our customers.