
How To Solve a 10 Billion Degree of Freedom Problem

May 30, 2019

By: Andrew Tweedie, UK Director at OnScale

Anyone who has worked in engineering simulation knows the feeling. Your coffee is freshly brewed, you’re all set for a successful day of changing the world, and then you get one of these little messages from your favorite simulation tool:

“Solver out of Memory”

or perhaps worse still

“Remaining simulation time: 7,000 years”

We all get it. Finite Element Analysis (FEA) is extremely computationally intensive, and sometimes we come across a problem that is just a little too big to solve practically. From my perspective, these typically fall into two areas:

  1. Problems that are too large to fit into the memory you have available
  2. Problems that would simply take too long to run

As an engineer I want to focus on the design problem that I’m trying to solve, but I’ve frequently found myself spending hours trying to simplify my simulation enough to fit it into the hardware that I have available. It’s definitely made me miss deadlines and targets over the years.

So, ideally, we want to solve problems quickly, regardless of size. I call this the 10 billion degree of freedom (DoF) problem. It sounds like an insanely large number, but it’s surprisingly one that has come up a number of times over the years in my work simulating ultrasonic transducers. When it does, it usually goes something like:

“We would love to simulate this, but it would be a 10 billion DoF problem… so that’s not going to happen.”

So how can we solve such a large problem? Is it even possible?

Well, there’s more than one way, but at OnScale we make use of Cloud High Performance Computing (HPC) to do the heavy lifting. The recipe looks something like this:

  1. Run your simulations on the cloud, so you don’t need to make use of local hardware
  2. Develop a way of parallelizing simulations across a large number of cloud nodes – what we call Cloud HPC (a rough sketch of this idea follows the list)
  3. Give users completely flexible access to the platform, so that they can make use of it whenever they need it, without any complex setup
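To give a feel for what step 2 means in practice, here is a minimal sketch of domain decomposition for an explicit time-domain solver. This is not OnScale’s actual implementation or API, just an illustrative picture; the node count and cell counts are assumptions chosen for the example.

    # Illustrative sketch: split a huge grid into per-node chunks so that each
    # cloud node time-steps only its own piece and exchanges thin "halo" layers
    # of boundary values with its neighbours. Not OnScale's API.

    def partition_1d(n_cells, n_nodes):
        """Split n_cells into n_nodes contiguous chunks of near-equal size."""
        base, extra = divmod(n_cells, n_nodes)
        chunks, start = [], 0
        for rank in range(n_nodes):
            size = base + (1 if rank < extra else 0)
            chunks.append((start, start + size))  # [start, end) owned by this rank
            start += size
        return chunks

    if __name__ == "__main__":
        n_cells = 3_333_333_333   # assumed ~3.3 billion cells x 3 DoFs ≈ 10 billion DoFs
        n_nodes = 4_096           # cloud cores, as used later in this post

        sizes = [end - start for start, end in partition_1d(n_cells, n_nodes)]
        print(f"cells per node: min={min(sizes):,} max={max(sizes):,}")
        # Each node advances its chunk every timestep, then swaps one or two
        # layers of boundary cells with neighbouring nodes before the next step.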

Sounds great, right? But is this kind of simulation actually useful?

Well, at the moment, we can use OnScale to accelerate mechanical and electro-mechanical time-domain simulations. We’re working on extending our software to encompass our full range of Multiphysics solvers.

I’ll give you some examples. We do a lot of work simulating ultrasonic flow meters, which are used to measure flow rates of liquids and gases through pipes – think oil and gas metering, or Formula 1 fuel flow sensors. These typically require very large, explicit time-domain simulations of both elastic and acoustic waves, which can often push the 1 billion DoF mark. We like to push boundaries here at OnScale, so we recently went a bit larger and ran a 10 billion DoF problem, simulated over 2,000 timesteps.
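To see where a billion DoFs comes from, here is a rough back-of-envelope mesh estimate. Every number in it (frequency, sound speed, domain size, elements per wavelength, DoFs per node) is an illustrative assumption, not a figure from a real flow meter design.

    # Back-of-envelope DoF estimate for an explicit acoustic/elastic wave model.
    # All input values below are assumptions for illustration only.
    freq_hz = 1.0e6                  # assumed drive frequency: 1 MHz
    c_m_s = 1500.0                   # assumed sound speed in the fluid
    elems_per_wavelength = 15        # common rule of thumb for explicit wave solvers
    domain_m = (0.10, 0.05, 0.05)    # assumed region of interest: 100 x 50 x 50 mm
    dof_per_node = 3                 # e.g. three displacement components

    wavelength = c_m_s / freq_hz               # 1.5 mm
    dx = wavelength / elems_per_wavelength     # element size: 0.1 mm

    cells = 1
    for length in domain_m:
        cells *= int(round(length / dx))

    print(f"element size: {dx * 1e3:.2f} mm")
    print(f"elements: {cells:,}")
    print(f"DoFs: {cells * dof_per_node:,}")
    # With these assumptions the model is already ~0.75 billion DoFs; refining the
    # mesh or enlarging the domain pushes it past the billion-DoF mark very quickly.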

We split the simulation across 4,096 cloud cores and set it running. We’d hoped this would be pretty fast, but even we were surprised by the results.

The full simulation ran in 8 minutes.
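A quick bit of arithmetic on those figures gives a sense of the raw throughput involved (a coarse estimate only; it ignores how the work per DoF is actually distributed).

    # Rough throughput from the figures quoted above.
    dofs = 10e9
    timesteps = 2_000
    cores = 4_096
    wall_s = 8 * 60                         # 8 minutes of wall-clock time

    dof_updates = dofs * timesteps          # total DoF-updates performed
    per_core_per_s = dof_updates / (cores * wall_s)

    print(f"total DoF-updates: {dof_updates:.1e}")          # ~2e13
    print(f"per core per second: {per_core_per_s:.1e}")     # ~1e7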

Intrigued, one of our engineers methodically tested how long the simulation would take on various numbers of cores. He came back with the following graph.

This graph shows the speed increase we get by assigning additional cores to the simulation.

What we saw was very close to linear speed-up with core count, which makes the system highly scalable. It also means we can run this problem around 300x faster than we used to before OnScale could run on the cloud, when it ran on a typical 16-core desktop (ignoring the fact that the desktop would also require 464 GB of RAM).
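For context, simple scaling arithmetic is roughly consistent with those figures. The per-DoF memory footprint below is an assumption (a handful of double-precision field values per DoF), not a measured OnScale number.

    # Scaling and memory arithmetic behind the ~300x and 464 GB figures.
    dofs = 10e9
    cloud_cores = 4_096
    desktop_cores = 16
    bytes_per_dof = 46          # assumed footprint; depends on the solver and model

    ideal_speedup = cloud_cores / desktop_cores    # 256x from core count alone
    ram_gb = dofs * bytes_per_dof / 1e9

    print(f"ideal linear speed-up over a 16-core desktop: {ideal_speedup:.0f}x")
    print(f"estimated model memory: {ram_gb:.0f} GB")
    # With near-linear scaling, plus per-core differences between cloud and
    # desktop hardware, an observed ~300x speed-up is in the expected range.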

Encouraged by these results, our engineers wanted to push the boundaries a bit more. Another area where we often see very large problems is 5G RF filter design. Applications such as 3D Bulk Acoustic Wave (BAW) filter simulations often require hundreds of millions of DoFs and can take weeks to run on local hardware. The active material is piezoelectric, which means these 5G RF simulations need a coupled electro-mechanical solution – and that is computationally expensive. Once again, we tackled this with our time-domain solvers, running a 1 billion DoF BAW filter problem as shown below.

Our engineers took bets on how long it would take OnScale to run this simulation on the cloud using 4,096 cores. The answer: 6 hours 48 minutes. Not the ‘less time than it takes to make a cup of tea’ result we saw with the flow meter, but still much better than any legacy simulation tool currently available.
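Comparing the two runs on a wall-clock-per-DoF basis hints at how much more expensive the coupled electro-mechanical problem is, though it is only a coarse comparison: the BAW model’s timestep count isn’t given here and very likely differs from the flow meter’s.

    # Coarse comparison of the two runs quoted in this post (both on 4,096 cores).
    flow_dofs, flow_wall_min = 10e9, 8          # flow meter: 10 billion DoFs, 8 min
    baw_dofs, baw_wall_min = 1e9, 6 * 60 + 48   # BAW filter: 1 billion DoFs, 6 h 48 min

    flow_rate = flow_wall_min / (flow_dofs / 1e9)   # minutes per billion DoFs
    baw_rate = baw_wall_min / (baw_dofs / 1e9)

    print(f"flow meter: {flow_rate:.1f} min per billion DoFs")
    print(f"BAW filter: {baw_rate:.1f} min per billion DoFs")
    # The difference reflects both the piezoelectric coupling and the (unstated,
    # but almost certainly larger) number of timesteps the filter model needs.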

We are often asked: what is the limit? Can you run infinitely large simulations? I think we’ve yet to explore that. Is solving a 10 billion DoF problem a panacea for all of my simulation woes? Definitely not! But being able to run large problems quickly does remove one of the major roadblocks in engineering simulation.

I’ll not suggest running a billion DoF simulation as your initial OnScale model, but I’d encourage anyone who’s interested to try some of our example models here!

Thanks for reading!

Get Started With OnScale Today!


