
Solving Massively Parallel Simulations – A New Era of FEA

By Kevin Chan 24 July 2019

“Insufficient RAM”

Those two words reflect a world of pain for engineers trying to run large 3D FEA simulations. You spend hours setting up the model, checking materials, mesh, analysis, and outputs, and when it comes time to execute, you hit the proverbial wall.

To progress, there are two options, neither of which is particularly appealing:

  1. Order more RAM and upgrade your workstation.
  2. Reduce the fidelity of your simulation to accommodate the limitations of your hardware.

But what if RAM and hardware were no longer a limitation? Engineers could tackle their largest problems immediately, with no roadblocks or delays, and get the insights they need. The inevitable size plateau of shared memory can be overcome by using the Message Passing Interface (MPI) to access vastly more memory across distributed HPC clusters, and now, through OnScale, on cloud HPC.

What is MPI?

MPI is a de facto standard that defines how information is passed across a distributed network. The first specification, MPI-1.0, was published in 1994 by the MPI Forum, a group of academic researchers and industry representatives with more than 20 years of experience in parallel computing. The standard has since reached version 3.1 (June 2015), and the scope of version 4.0 is already being defined.

MPI enables vendors such as Intel and IBM, as well as open-source communities, to implement libraries that work across a wide range of platforms and hardware to solve the most challenging engineering problems. MPI is widely used by most legacy simulation packages and is targeted at users who have on-premise High Performance Computing (HPC) hardware available.
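To make this concrete, below is a minimal MPI program in C, an illustrative sketch rather than OnScale solver code. Each process discovers its own rank and the total process count; in a distributed solver, each rank would then own one partition of the model, so the available memory grows with the number of machines.

    /* Minimal MPI example: every process learns its rank and the
     * total number of processes. In a distributed solver, each rank
     * would own one partition of the mesh, so total memory scales
     * with the number of nodes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        printf("Rank %d of %d reporting\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, for example, mpirun -np 4 ./a.out, the same binary runs as four cooperating processes, whether they share one workstation or are spread across four machines.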

Cloud technology has reached a tipping point as infrastructure services become more accessible to the general public. This, coupled with near-infinite compute resources (e.g. Amazon Web Services or Google Cloud Platform), delivers a new era for MPI-enabled simulation.

At OnScale we are at the bleeding edge of these technological revolutions. Our proprietary multiphysics solvers are built from the ground up to be MPI compatible and deployable on cloud infrastructure. And by eliminating the need for licenses, one of the sharpest pain points of legacy simulation packages, OnScale gives any engineer on-demand access to HPC resources and multiphysics solvers.

Here are some examples of real-world problems that OnScale is helping our customers overcome, faster than ever.

 

Film Bulk Acoustic Resonators (FBARs) on a Full Silicon Die

5G is a buzzword that many will have heard in the past few years. To communicate on 5G networks, our mobile handsets must contain the latest filter technology to cope with the high data rates. That means ever higher performance requirements for FBARs, an extremely difficult design problem. Physical prototyping is costly in time, effort, and money, so companies like Qorvo, Broadcom, and Skyworks are looking to simulation tools to tackle it.

At OnScale we have run the world's first multi-FBAR simulation, consisting of 1 billion degrees of freedom, in 6.8 hours using 4,096 cores.
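For a sense of scale, 1 billion degrees of freedom spread across 4,096 cores works out to a back-of-envelope average of roughly 244,000 degrees of freedom per core, a partition small enough to sit comfortably in the memory local to each process.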

OnScale’s FEA solvers are optimized for the coupled electromechanical simulations that are critical in this problem space, and for the massively parallel simulations required to run these problems in realistic time-frames.

By taking advantage of MPI and the distributed memory resources available on the cloud, we achieve close to linear speed-up over thousands of compute nodes.
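The usual pattern behind this kind of scaling is domain decomposition: each rank owns one slab of the mesh and, at every step, exchanges only the thin "halo" of values along its boundaries with its neighbours. The C sketch below shows the idea in one dimension; the field array and sizes are hypothetical placeholders, not OnScale's actual solver.

    /* Sketch of 1D domain decomposition with halo exchange. Each
     * rank owns N interior cells plus two ghost cells; per step it
     * swaps only boundary values with its neighbours, so owned data
     * scales out while communication stays small and fixed. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000  /* interior cells per rank (placeholder size) */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double u[N + 2];  /* u[0], u[N+1] are ghosts; u[1..N] owned */
        for (int i = 0; i <= N + 1; i++) u[i] = (double)rank;

        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Send the first owned cell left while receiving the right
         * ghost, then the mirror image; MPI_PROC_NULL turns the
         * exchanges at the domain edges into harmless no-ops. */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("Rank %d: ghosts = %.0f, %.0f\n", rank, u[0], u[N + 1]);

        MPI_Finalize();
        return 0;
    }

Because each rank talks only to its immediate neighbours, communication per rank stays roughly constant as ranks are added while the work per rank shrinks, which is what makes near-linear scaling achievable.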

Piezoelectric Micromachined Ultrasound Transducers (PMUT) for Fingerprint Sensing

With a forecast of 1.6 billion handsets to be shipped with fingerprint sensing technology, it is easy to see why many MEMS companies have a strategic interest in developing the next generation of these sensors.

Ultrasonic fingerprint sensing for mobile phones has enabled new forms of sensors to be developed for the industry, with particular interest now in PMUTs.

Ultrasonic fingerprint sensors have many advantages over competing technologies, most importantly, being insensitive to contamination and moisture, and being usable through phone display stacks that include multiple glass and adhesive layers. In addition, ultrasonic waves used in pulse-echo imaging can penetrate the finger’s epidermis, collecting images of sub-surface features.

To evaluate whether PMUT sensors will be effective when integrated into the full system stack-up of a phone, we need to simulate that entire stack-up, complete with the sensor embedded! This full-scale, 3D simulation would be impossible using legacy simulation packages, but with OnScale it is now possible, and it can be done in minutes.


Thermal and Thermo-Mechanical Challenges in Semiconductors

 

Recent advances in micro-fabrication techniques have fueled the trend of miniaturization in electronic devices. The International Roadmap for Devices and Systems (IRDS), the successor to the International Technology Roadmap for Semiconductors (ITRS), has predicted 5 nm transistors by 2020 and 3 nm transistors by 2022. This miniaturization trend is both a design and a fabrication challenge, and the physical design challenges at that scale are unprecedented. Design margins have shrunk considerably, and these new challenges require a holistic approach to chip design early in the design cycle.

Physical prototyping of early designs is time-consuming and expensive. The complex interdependencies among the electrical, thermal, and mechanical characteristics of integrated circuits (ICs) are well documented. For example, the effects of voltage drop, and the resulting temperature distribution across the die, impact the power integrity of the circuits.

These ICs then have to be packaged into a product, and given the nature of the packaging materials and a non-uniform temperature distribution, differences in the coefficient of thermal expansion (CTE) result in thermo-mechanical stresses. These stresses greatly affect the reliability of the package through debonding, fatigue, and warpage. Early identification and mitigation of these effects brings significant performance and cost benefits, and powerful multiphysics simulation is an effective way to make this happen.
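As a first-order illustration of where these stresses come from, a thin layer biaxially constrained by its surroundings develops a mismatch stress of roughly sigma = E * delta_alpha * delta_T / (1 - nu). The C sketch below evaluates that textbook estimate with placeholder material values; the numbers are illustrative, not drawn from any customer model.

    /* First-order thermal-mismatch stress for a constrained layer:
     * sigma = E * delta_alpha * delta_T / (1 - nu).
     * All material values are illustrative placeholders. */
    #include <stdio.h>

    int main(void) {
        double E       = 130e9;  /* Young's modulus of silicon, Pa (approx.) */
        double nu      = 0.28;   /* Poisson's ratio of silicon (approx.) */
        double d_alpha = 14e-6;  /* CTE mismatch vs. molding compound, 1/K */
        double dT      = 100.0;  /* temperature swing, K */

        double sigma = E * d_alpha * dT / (1.0 - nu);
        printf("Estimated mismatch stress: %.0f MPa\n", sigma / 1e6);
        return 0;
    }

Even this crude estimate lands in the hundreds of megapascals, which is why these stresses matter so much for package reliability.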

Gaining comprehensive insight at that scale without oversimplifying the design is a serious simulation challenge and requires massively parallel simulation capabilities. These are the types of challenges that OnScale was built for, and the ones we help our customers solve every day.


OnScale has opened the door to solving what were once seen as impossible simulation problems. By making massively parallel simulations possible without on-premise HPCs, engineers are no longer constrained by hardware-imposed limits on problem size as they strive for the optimal design. We encourage engineers to use our MPI implementation to solve their toughest, most novel engineering problems and pave the way for new technology and innovation.

Get Started with OnScale Today!

 



Kevin Chan

Kevin is a Senior Application Engineer at OnScale. He tests OnScale and helps with its development. His background and experience with the solvers have allowed him to work on a wide range of projects, with a big focus on MEMS and RF. Kevin holds an MEng in Electronic and Electrical Engineering from the University of Strathclyde.