We have talked previously about piezocomposite transducers and their advantages for medical ultrasound.
In this article, we will discuss the following in more detail:
- How to simulate piezocomposite transducers, with a specific case study of a 1 MHz sensor working in pulse-echo mode
- How to obtain the right KPIs to assess the design of such a piezocomposite transducer (i.e. sensitivity, fractional bandwidth, and center frequency)
- How to optimize the simulation process and use parametric sweeps to quickly determine important design variables such as the composite volume fraction and the matching layer thickness
- How to go even further and use Monte Carlo analysis to assess how stable a device is
Let’s start with a simple but effective characterization of the piezocomposite simulation model.
What does a simple piezocomposite transducer model look like?
Let’s look at the following model:
This model is a simple 20 x 20 mm piezocomposite plate transducer.
We basically have two things here:
- PZT pillars of piezoelectric material (green)
- Some sort of polymer matrix, into which the PZT pillars are embedded (red)
The mix of these two components creates, as the name suggests, a composite material!
Here’s another view, which shows the piezocomposite structure from within:
A piezocomposite has a number of advantages over a monolithic type of PZT, but let’s not forget that this comes at the cost of the complexity of the design!
As a designer, we need to understand exactly which variables will help optimize the performance.
The results that allow us to judge the performance of a transducer in general (also called KPIs) can also be used for piezocomposite transducers:
- The sensitivity
- The fractional bandwidth
- The center frequency
All of these KPIs can be obtained by looking at the impedance curve.
An interesting thing would be to compare the impedance curve of a piezocomposite transducer with the impedance curve of a more traditional monolithic piezoceramic transducer.
How does a piezocomposite transducer impedance curve compare with a monolithic piezoceramic block transducer?
The following image shows the impedance spectrum of a composite device (red) compared to the impedance of a monolithic PZT device (blue) with similar lateral dimension and resonance frequency.
We observe several advantages in the red piezocomposite impedance curve:
- A significant reduction in lateral resonances
- An improved electromechanical coupling coefficient
- An improved Tx/Rx Sensitivity
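Two of these quantities fall straight out of the impedance curve: the series and parallel resonance frequencies sit at the impedance minimum and maximum, and together they give the effective electromechanical coupling coefficient via the standard relation k_eff² = 1 − (fr/fa)². Here is a minimal numpy sketch on a synthetic impedance curve (all numbers are illustrative, not from the model):

```python
import numpy as np

# Toy impedance magnitude |Z(f)|: a dip at the series resonance fr and a
# peak at the parallel (anti-)resonance fa. Both values are illustrative.
fr, fa = 0.95e6, 1.10e6                          # Hz
f = np.linspace(0.5e6, 1.5e6, 2001)              # Hz, 500 Hz resolution
Z = (100
     - 90 * np.exp(-((f - fr) / 20e3) ** 2)      # dip at fr
     + 90 * np.exp(-((f - fa) / 20e3) ** 2))     # peak at fa

fr_est = f[np.argmin(Z)]                         # series resonance (|Z| minimum)
fa_est = f[np.argmax(Z)]                         # parallel resonance (|Z| maximum)

# Effective electromechanical coupling coefficient
k_eff = np.sqrt(1 - (fr_est / fa_est) ** 2)
```

On real simulated impedance data the same two-line extraction applies, as long as the frequency window brackets both resonances.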
You can see in this animation that we also get a nice surface displacement on piezocomposite transducers:
Now that you know the rationale that leads to the use of this kind of device, let’s have a look at a specific example and how to simulate it in OnScale!
Designing a 1 MHz piezocomposite sensor with OnScale
Let’s look now at a specific problem involving a 1 MHz piezocomposite sensor operating in water and used in pulse-echo mode against a steel target.
Here are the key structure parts involved in this model:
The objective of this analysis is to obtain high sensitivity and bandwidth when operating in pulse-echo mode.
For this exercise, we will make some initial design decisions:
- We are going to use a soft PZT with a high coupling coefficient, because this will give us high sensitivity and bandwidth
- We are also going to use a hard epoxy filler
These are the choices we take as given and won’t change.
Here’s a summary of the materials chosen:
| Component | Material |
| --- | --- |
| Piezoelectric pillars | Soft PZT with a high coupling coefficient |
| Filler | Hard epoxy |
| Target | Stainless steel, generic |
* Detailed material properties are provided in the model file composite.prjmat. Those properties are also available in the OnScale Material database.

The things that are tricky to design in these systems are:
- The composite volume fraction: This is the percentage of piezoceramic in this active device.
- The matching layer thickness: this layer sits on top of the piezocomposite array and improves the acoustic matching between the transducer and the tissue.
We will see right after that the thickness of this matching layer is a critical parameter for achieving a good design!
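A quick sanity check on why this thickness matters: a matching layer is conventionally a quarter wavelength thick at the center frequency. Assuming a longitudinal sound speed of about 2500 m/s for a filled epoxy (an illustrative figure, not a value from this model), the quarter-wave thickness at 1 MHz works out as:

```python
def quarter_wave_thickness(c, f):
    """Quarter-wavelength matching layer thickness t = c / (4 f), in metres."""
    return c / (4.0 * f)

# Assumed sound speed for a filled-epoxy matching layer (illustrative value)
t = quarter_wave_thickness(2500.0, 1.0e6)
print(f"{t * 1e3:.3f} mm")   # 0.625 mm
```

Note that 0.625 mm sits comfortably inside the 0.45–0.75 mm range we will sweep later, which is exactly why that range was chosen.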
To run this simulation, just unzip the file and open the composite_disk.flxinp file in OnScale Analyst.
Results of the simulation
Let’s look at some typical results that we would receive after running a time-domain simulation on this problem in OnScale.
We can see here the acoustic pressure wave leaving the device and reflecting off the steel target:
And then we can calculate the electrical impedance of the device:
This impedance curve is important for electrical matching and design of the transmit/receive electronics.
If you are not familiar with the procedure to calculate the impedance curve in OnScale, have a look at this article.
This is the receive signal on the device which is calculated with the simulation:
What you see on the left is the excitation pulse, and we get two return signals from the steel target. The first reflection is the one that we are going to look at in detail.
If we post-process this first reflection time-domain signal into a frequency-domain signal using a fast Fourier transform (FFT), we are able to look at the frequency content and the amplitude of the signal!
If you are not familiar with the procedure to calculate the FFT of a time-domain curve in OnScale, have a look at this article.
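As a rough sketch of what this post-processing step does outside OnScale, here is the same computation in plain numpy. The Gaussian-windowed burst below is a toy stand-in for the gated first echo; center frequency is taken here as the spectral peak, and bandwidth at the −6 dB (half-amplitude) points:

```python
import numpy as np

fs = 100e6                                   # sample rate, Hz (illustrative)
t = np.arange(2048) / fs
# Toy stand-in for the gated first echo: a 1 MHz Gaussian-windowed burst
echo = np.sin(2 * np.pi * 1e6 * t) * np.exp(-(((t - 10e-6) / 3e-6) ** 2))

spec = np.abs(np.fft.rfft(echo))
freqs = np.fft.rfftfreq(len(echo), 1 / fs)

peak = np.argmax(spec)
fc = freqs[peak]                             # center frequency (spectral peak)

# -6 dB fractional bandwidth: amplitude falls to half the peak value
above = np.where(spec >= spec[peak] / 2)[0]
f_lo, f_hi = freqs[above[0]], freqs[above[-1]]
fbw_pct = (f_hi - f_lo) / fc * 100
```

The same three numbers (sensitivity from the peak voltage, fc, and fractional bandwidth) are the KPIs we track for every design in the rest of the article.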
The next step is to see if we can optimize the design to improve this frequency content (mainly center frequency and bandwidth).
In order to do that, let’s do a parametric study.
Determine the performance KPIs using a parametric study
Here we want to maximize sensitivity and bandwidth and keep the center frequency around 1 MHz.
*This image is not the real simulation model
The parameters that we are going to vary are:
- The matching layer thickness (between 0.45 and 0.75 mm)
- The composite volume fraction (between 40 and 60%)
(This is a typical range for high bandwidth pulse-echo mode transducers.)
The main problem is that no textbook says what kind of fraction will be optimal for your design!
How many simulations do we have to launch?
We are going to do 20 steps of each, which will lead to a combined number of 400 simulations.

We will monitor the three following KPIs:
- The sensitivity
- The bandwidth
- The center frequency
* Note: 400 simulations may seem like a high number, but don’t panic! OnScale can run thousands of simulations in parallel on the cloud!
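For reference, the sweep grid itself is tiny to express. The actual sweep is configured in OnScale, but the 400 parameter combinations amount to:

```python
import numpy as np
from itertools import product

# 20 steps of each design variable -> 20 x 20 = 400 combinations
matching_mm = np.linspace(0.45, 0.75, 20)    # matching layer thickness, mm
volume_pct = np.linspace(40.0, 60.0, 20)     # composite volume fraction, %

grid = list(product(matching_mm, volume_pct))
print(len(grid))                             # 400
```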
Here’s a table summarizing the goal, the design variables, and the KPIs of this parametric study:

| Goal | Maximize sensitivity and bandwidth of the device; ƒc around 1 MHz |
| --- | --- |
| Design variable | Matching layer thickness: 0.45–0.75 mm in 20 steps |
| Design variable | Composite volume fraction: 40–60% in 20 steps |
| KPI | Sensitivity: as high as possible |
| KPI | Bandwidth: as large as possible |
| KPI | Center frequency: as close as possible to 1 MHz |
Once the model is ready, launching the simulations on the cloud is easy. We can just set up the range of the parameters and the number of steps. We can then run the 400 simulations in parallel:
The total calculation time is around 7 min. This is incredibly fast!

After we have performed those simulations, we can have a look at the results. An easy way to look at the results of those 400 simulations is to post-process them in MATLAB.
From those results, we calculate the three KPIs.
The center frequency as a function of the matching thickness and the volume fraction:
In each case, the X- and Y- axes are the design parameters:
- The volume fraction going from 40 to 60%
- The matching layer thicknesses going from 0.45 to 0.75 mm
The advantage of post-processing with MATLAB, like we did here, is that we have a live display of the impact of the change in the design parameters on the KPIs.

In this animation, you can see how changing a (matching thickness, volume fraction) point on any of the 3D surfaces changes the voltage and the frequency:
You see the result changing as the mouse moves around!

What you can observe is that if we change the matching thickness, it has an important effect on the tuning of the main frequency:
If we go on the vertical axis and change the volume fraction, it has a big effect on the sensitivity of the device:
We can also see that the models with higher volume fraction have higher sensitivity.
But if we look at the bottom graph, we realize that we are trading off bandwidth for sensitivity:
How to optimize our transducer design?
We said that we wanted to optimize both of these KPIs, so how can we deal with this trade-off?
We can introduce the concept of sensitivity–bandwidth product.
This is not a new concept: it is used in a lot of industries.
What we need to do here is multiply the bandwidth and the sensitivity for every design, and then take the design with the best sensitivity–bandwidth product.
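As a sketch of this selection step, assume the KPIs are available as 20 × 20 arrays over the sweep grid. The surfaces below are toy stand-ins (sensitivity rising with volume fraction, bandwidth falling with it), not the simulated data, but the product-and-argmax step is exactly the same:

```python
import numpy as np

vf = np.linspace(40, 60, 20)                  # volume fraction, %
ml = np.linspace(0.45, 0.75, 20)              # matching thickness, mm
V, M = np.meshgrid(vf, ml, indexing="ij")

# Toy stand-ins for the KPI surfaces produced by the sweep: sensitivity
# rises with volume fraction, bandwidth falls with it and peaks when the
# matching layer is near a quarter wavelength.
sens = 150 + 1.6 * V                          # mV (toy model)
bw = 50 - 0.15 * V - 10 * np.abs(M - 0.608)   # %  (toy model)

sbp = sens * bw                               # sensitivity-bandwidth product
i, j = np.unravel_index(np.argmax(sbp), sbp.shape)
best_vf, best_ml = vf[i], ml[j]
```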
When we do that, we arrive at the optimum value shown in this picture, which has a volume fraction of 60% and a matching layer of 0.608 mm:
One more important thing to note is that all the designs have a center frequency within 30 kHz of the main 1 MHz frequency.
Thus, we don’t have to worry too much about that.
Let’s have a look at the chosen optimized design:
- We’ve got quite a nice receive pulse with around 4 and a half cycles
- That’s giving us a fractional bandwidth of just under 40%
- We’ve got a sensitivity of 247 mV and a center frequency of 1.02 MHz
The conclusion is that with a quite simple parameter sweep, we’ve solved a complex inverse problem!
How stable is this design? Let’s talk about manufacturing tolerances!
Now that we have a design, our life as an engineer doesn’t stop here. I am sure that many of you have experienced getting designs into production and the challenges that this can bring!
Let’s see how Monte Carlo Analysis can be used to assess how stable a device is.
(You can find more about Monte Carlo here.)
Basically, we are asking ourselves the question: “Can we build this transducer reliably?”
(Or perhaps, if we have a very good handle on our manufacturing process, we can ask the question: “What is our yield going to be?”)
In this case, let’s choose four manufacturing variables that we know will have some tolerances in production:
| Variable | What is that? |
| --- | --- |
| Matching thickness | Matching layer thickness |
| PZT thickness | Thickness of the piezoceramic |
| Kerf | Cut between the PZT pillars |
| Pitch | Distance between two pillars |
These are all things that vary in manufacture.
Here are some chosen distributions which seem to be achievable:
There is a bit of thinking behind the choice of this statistical distribution … but as we aren’t really statisticians, a Gaussian curve is a safe bet. (This is not always true … read The Black Swan by Nassim Nicholas Taleb for details.)
The kerf, for example, has a distribution centered at 110 microns, plus or minus 10 microns, which could be due to blade wear or to parts being cut on different machines.
It’s quite likely that this could happen.
And now we are going to run 1000 simulations!
In fact, what we are going to do is virtually fabricate those 1000 designs within OnScale and calculate their output performance!
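Drawing those 1000 virtual builds amounts to sampling the tolerance distributions. A sketch (treating the quoted ± values as one standard deviation is an assumption, they could equally be 3σ limits; the matching-layer spread below is an illustrative placeholder around the optimized 0.608 mm design):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# One row per virtual build. The kerf numbers (110 +/- 10 um) come from the
# article; the matching-layer spread is an illustrative placeholder.
builds = {
    "kerf_um": rng.normal(110.0, 10.0, n),
    "matching_mm": rng.normal(0.608, 0.010, n),
}
```

Each sampled row then becomes one simulation in the Monte Carlo batch.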
Those simulations are going to run in parallel in OnScale cloud on our extra-fast cloud architecture.
Of course, this will generate a huge amount of data, but we can use MATLAB too to automate post-processing and get exactly what we want!
By the way…
Our friends in electronic design do this type of Monte Carlo analysis because they benefit from cheap, low-complexity electrical simulations.
(They run small 1D simulations so they can run thousands of them quickly!)
Let’s discuss a few techniques they use and we will then use the same tricks to do our own post-processing.
Understanding the correlation matrix: How inputs correlate with outputs
First, they plot the correlation matrix.
On the left Y-axis of this matrix, we have the manufacturing variables we chose previously (matching thickness, PZT thickness, kerf, pitch) and on the X-axis we have the KPIs (center frequency, sensitivity, fractional bandwidth).
If we had to describe in one quick phrase what this matrix does, we’d say:
This graph shows how inputs affect the outputs.
Now … how to read it?
Well … when what we see is just a big blob, it means the output varies randomly regardless of the input … so basically, that input and that output are not correlated.
That’s really telling us that the variable and the KPIs are not directly linked in most cases!
Now, let’s look at the matching layer thickness.
The matching layer thickness has almost a linear relationship with the resonant frequency fc and it has also quite a direct effect on both the sensitivity and the fractional bandwidth.
… And because it’s a quarter-wave matching layer, it can often be one of the thinner layers in an ultrasonic sensor design!
It is not so surprising that the matching layer has so much impact on the KPIs!
That also means that when we manufacture these layers, we must hold to very high tolerances.
If we are not happy with some of the output variations that we are seeing, we need to take a very good look at the tolerancing on this matching layer, to see if we can hold it a bit tighter in manufacturing and use that to increase the yield!
Output distributions: Will what we build pass?
Now that we have made that decision, we can look at the output distributions.
These are graphs showing the center frequency, the sensitivity, and the fractional bandwidth across all our designs.
If we were to put tolerances over these outputs, these might be acceptance tolerances for the customers or maybe the tolerances we put in the datasheets if we went to market.
We can tell what our yield will be for these devices, so this is a really important tool!
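Estimating that yield is a one-liner once the output distribution is in hand. A sketch with a hypothetical acceptance window (the limits below are not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for the 1000 simulated centre frequencies, MHz
fc = rng.normal(1.00, 0.02, 1000)

# Hypothetical datasheet acceptance window (illustrative limits)
lo, hi = 0.95, 1.05
yield_pct = 100.0 * ((fc >= lo) & (fc <= hi)).mean()
```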
Now that we have talked about optimization and manufacturing simulation, let’s step back for a moment and review all the simulations we ran!
(Because without the capability to launch thousands of simulations on the cloud, none of what we did would be possible.)
Let’s look now at some simulation statistics!
Here are some statistics to wrap up the data about all the simulations we did to design our sensor.
One individual simulation has around 6 million degrees of freedom, takes 1 GB of RAM, and takes 7 min to complete … not bad in itself!
Where we get a real gain is by using the cloud!
We can run the 400 design studies in parallel, which means that they all run simultaneously on the cloud and the whole 400-simulation data set comes back in 7 min.
Same for the Monte Carlo Study which also takes 7 min to run 1000 simulations!
Isn’t that incredible?
Considering that you already have all the scripts and the process set up, you could run 1 + 400 + 1000 simulations (1401 simulations in total) in under 10 min!
(Because yes, you could also run the three different sets of simulations at the same time as well. )
Simulation for engineering optimization study
OnScale is becoming more and more integrated with MATLAB and Python.
In fact we have created some code which allows MATLAB and Python to call and run simulations in OnScale, without even going through the GUI.
Thus, you can do analysis and simulation within the environment you are already comfortable with (supposing you are already familiar with either Python or MATLAB).
On top of that, it allows our users to access the top features that are already embedded within those platforms.
(You could, for example, create your own Python-based simulation app … Just sayin’!)
A couple of features that I have been using in MATLAB, for example, are as follows.
Genetic algorithm optimization
The genetic algorithm is very effective at handling large, discrete problem spaces.
(It makes it possible to avoid local minima … the plague of many optimization studies.)
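To make the idea concrete, here is a deliberately minimal real-coded genetic algorithm on a toy one-dimensional multimodal objective. Nothing here is OnScale- or MATLAB-specific; the ripples in the objective create local optima that a plain hill-climber could get stuck in:

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):
    # Toy multimodal objective: the cosine ripples create local optima,
    # while the global maximum sits at x = 0.
    return np.exp(-x ** 2) + 0.3 * np.cos(5 * x)

# Minimal real-coded GA: tournament selection, blend crossover, mutation.
pop = rng.uniform(-4.0, 4.0, 40)
for _ in range(60):
    f = fitness(pop)
    a, b = rng.integers(0, 40, (2, 40))
    parents = np.where(f[a] > f[b], pop[a], pop[b])    # tournament selection
    mates = rng.permutation(parents)
    w = rng.uniform(0.0, 1.0, 40)
    pop = w * parents + (1 - w) * mates                # blend crossover
    pop += rng.normal(0.0, 0.1, 40)                    # Gaussian mutation

best = pop[np.argmax(fitness(pop))]
```

In a real transducer study, `fitness` would wrap a full simulation run, which is why cloud parallelism matters so much here.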
This picture shows simulation data processed by MATLAB to do optimization of an FBAR filter using the genetic algorithm:
Pareto front multi-objective optimization
The Pareto front multi-objective analysis is an effective way to assess problems with multiple objectives.
Taking, for example, sensitivity and bandwidth, we could find how they trade off against each other using Pareto front analysis!
This allows you to go into design meetings with a very clear plot, like the above, and provide a very clear view of where you want to be with your design!
“OK, where do we want to be? What would have the most impact in the market?”
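For completeness, extracting the Pareto front from a cloud of (sensitivity, bandwidth) design points is a short computation. A sketch on toy data, keeping only the designs that no other design beats on both objectives:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy cloud of 200 designs trading sensitivity (mV) against bandwidth (%)
sens = rng.uniform(150.0, 260.0, 200)
bw = 70.0 - 0.2 * sens + rng.normal(0.0, 3.0, 200)

def pareto_mask(a, b):
    """True for points not dominated by any other (maximizing both a and b)."""
    keep = np.ones(len(a), dtype=bool)
    for i in range(len(a)):
        dominated = (a >= a[i]) & (b >= b[i]) & ((a > a[i]) | (b > b[i]))
        keep[i] = not dominated.any()
    return keep

front = pareto_mask(sens, bw)   # the designs worth bringing to the meeting
```

Plotting `sens[front]` against `bw[front]` gives exactly the kind of trade-off curve described above.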