
Running Large Array Models using OnScale MPI capabilities

By Oliver Mashari 18 December 2019

In our previous blog post, How to Assign Mechanical and Electrical BCs to Array Structures, we showed you how to apply boundary conditions (BCs), calculate arrays, request outputs from the solver, calculate the timestep by calling PRCS, and set up the execution.

The model we were working with was a smaller, 3×3 version of the 10×10 array we showed you how to build previously. Running large array models can be computationally challenging, but with our MPI capabilities we can run them in a much shorter time frame.

What is MPI?

MPI is a de facto standard that outlines a method of passing information across a distributed network. In 1994, the first specification, MPI-1.0, was published by the MPI Forum, a group of academic researchers and industry representatives spanning 20+ years of experience in parallel computing. The standard has now reached version 3.1 (June 2015), and the scope of version 4.0 is already in the works.

MPI enables vendors such as Intel and IBM, as well as open-source communities, to implement libraries that work across a wide range of platforms and hardware to solve the most challenging scientific and engineering problems.

When we hear people talking about MPI, in most cases they mean an application built on an MPI implementation. It is widely used by most legacy simulation packages and aimed at those lucky enough to have High Performance Computing (HPC) hardware within their company to run their MPI application.

So, if this technology has been around for such a long time and people are already using it, you may be wondering why you should be excited about it now. Cloud technology has reached a turning point: the services have become more accessible and offer near-infinite compute resources (think Amazon Web Services or Google Cloud Platform). If we can take advantage of these immense cloud infrastructures and implement MPI effectively, we can tackle a range of simulation pain points that no one has been able to address before.

MPI Simulations

When we run MPI simulations, we partition the model into smaller parts and run each part on a single core.

Figure 1. The dotted line illustrates how we split the model up
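
To make the one-part-per-core idea concrete, here is a minimal sketch in Python using the open-source mpi4py library rather than OnScale's Analyst language; it simply launches one process per part and reports which part each process would own.

    # Minimal mpi4py sketch (illustrative only, not OnScale code):
    # each MPI rank solves one part of the partitioned model.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of the part this process owns
    size = comm.Get_size()   # total number of parts / cores in the job

    print(f"Rank {rank} of {size}: solving its own partition of the model")

Run with, for example, mpirun -np 9 python partition_demo.py to mimic a 3×3 split like the one sketched in Figure 1 (the script name is hypothetical).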

Now, how do we do this in the Analyst script? Firstly, note that it can't just be inserted anywhere. All models built in OnScale use the grid and geom commands; the command we are going to use to run an MPI simulation is the part command with its max and sdiv subcommands.

The part command must be inserted between the grid and geom commands.

Figure 2. These are the variables used to partition the MPI model along the X and Y axes

In the code for MPI partitioning, the only variables you need to change are npi, npj and nptot. These variables determine how many CPUs will be requested for the job.
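
As a rough illustration of how these variables relate, the Python sketch below assumes nptot is simply npi × npj and maps each MPI rank to an (X, Y) block; that arithmetic is an assumption made for illustration, not OnScale's internal implementation.

    # Illustrative only: relationship between npi, npj and nptot,
    # assuming nptot = npi * npj and a row-major rank-to-block mapping.
    npi, npj = 3, 3            # partitions along the X and Y axes
    nptot = npi * npj          # total parts = CPUs requested for the job

    for rank in range(nptot):
        pi, pj = rank % npi, rank // npi   # (X, Y) block owned by this rank
        print(f"rank {rank} -> block ({pi}, {pj})")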

The PART command definition must loop through the geometry and divide it into sections, as shown in the code of Figure 3.

Figure 3. Code used to partition the model
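
Because the actual Analyst code lives in Figure 3, the Python sketch below only illustrates the kind of bookkeeping such a loop performs: splitting a global node index range into near-equal contiguous sections, one per part along each axis. The helper name and the grid sizes are hypothetical.

    # Illustrative helper (not OnScale code): split n_nodes grid indices
    # into n_parts contiguous sections of near-equal size.
    def section_bounds(n_nodes, n_parts):
        base, extra = divmod(n_nodes, n_parts)
        bounds, start = [], 1
        for p in range(n_parts):
            end = start + base + (1 if p < extra else 0) - 1
            bounds.append((start, end))
            start = end + 1
        return bounds

    npi, npj = 3, 3
    print(section_bounds(200, npi))   # sections along X (200 nodes assumed)
    print(section_bounds(150, npj))   # sections along Y (150 nodes assumed)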

Once this code has been added to the phased array model, it is ready to be submitted to the cloud. You are not required to estimate an MPI model before running it; all you need to do is specify how much RAM to allocate per part for the simulation.

Note: The maximum amount of RAM you can allocate is 8 GB per part.

Through this three-part series, you should have learned how to build array models properly, apply BCs, request outputs, and run MPI simulations. The code in this blog post will work for any 3D model as long as you keep the size of each part constant along the Z axis, but this is a very easy change to make if you do not.

For more information about MPI or phased arrays feel free to contact support@onscale.com.



Oliver Mashari

Oliver Mashari is an Application Engineer at OnScale. As part of our engineering team, he assists with developing applications, improving our existing software and providing technical support to our customers.
