Unfortunately, this scenario happens more often than one might expect. Maybe it’s the change in tooling, such as an increase in cavitation; maybe because you’re testing larger numbers of devices, previously unseen problems are becoming apparent; maybe it’s the move from manual to automated assembly. There are probably a number of potential reasons, but identifying the most likely cause – and how to address it – could determine whether the product is launched or not. And that is where a sensitivity analysis, undertaken early in the development process, could make all the difference.

A sensitivity analysis is a process for determining the magnitude of the effects that design changes have on the performance of a system, allowing you to build a robust design with knowledge of its limits. If decisions need to be made, or problems arise, during any development stage, a sensitivity analysis can help evaluate and solve these issues with minimal design change and disruption, and hopefully minimal cost. But when and how to do it?

One way is to conduct experiments: make a change, measure performance, make another change, measure performance, and so on, perhaps using Taguchi methods to optimise your experimental design. However, sometimes physical testing isn’t an option – in the early stages of development, for example, when components are not yet available, or during the later stages when making changes to production-quality tooling may be too costly, too risky, or take too long. So if you can’t conduct physical experiments, what can you do?

One answer is to employ some of the “heavier” computer-simulation tools such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) which, for single components or very simple mechanisms, can provide a method of conducting a sensitivity study when physical testing is not possible. However, problems are rarely limited to single components or simple interactions. For devices such as injectors or inhalers, with several moving and flexing components, the limiting factor with detailed FEA (or CFD) can be the significant simulation time required – hours or even days. As a result, analysing several possible design changes simply becomes impractical.

**Unlike a physical experiment, with a mathematical model we may be able to conduct every permutation of design change – a nice luxury if we are not sure of interactions in a system.**

In such cases, the solution may be to go back to basics. Using known equations of trigonometry, physics and engineering to describe the behaviour of the device in terms of input parameters (such as dimensions), and output results (such as performance characteristics), we can mathematically model the device and its behaviour. Written in software such as MathCAD or Matlab, a mathematical model often requires significantly less computational resource – and hence simulation time – than FEA or CFD, and provides a very quick way to run virtual experiments, changing design inputs and logging the performance outputs.
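As a sketch of this approach – in Python rather than MathCAD or Matlab, and with a hypothetical spring-driven injector as the device (the parameters and the Hagen–Poiseuille flow assumption are illustrative, not taken from any real design):

```python
import math

def injector_model(spring_k, spring_compression, plunger_diameter,
                   fluid_viscosity, needle_length, needle_bore,
                   dose_volume=1e-6):
    """Estimate delivery time (s) of a hypothetical spring-driven injector.

    Inputs are dimensions and material properties (SI units); the output
    is a performance characteristic. Assumes a constant spring force and
    laminar (Hagen-Poiseuille) flow through the needle.
    """
    spring_force = spring_k * spring_compression             # F = k.x (N)
    plunger_area = math.pi * (plunger_diameter / 2) ** 2     # m^2
    pressure = spring_force / plunger_area                   # Pa
    # Hagen-Poiseuille: Q = pi * r^4 * dP / (8 * mu * L)
    flow_rate = (math.pi * (needle_bore / 2) ** 4 * pressure
                 / (8 * fluid_viscosity * needle_length))    # m^3/s
    return dose_volume / flow_rate                           # seconds
```

Each call takes microseconds, so changing an input (say, the needle bore) and logging the output delivery time is effectively free – the essence of a virtual experiment.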

Unlike a physical experiment, with a mathematical model we may be able to conduct every permutation of design change – a nice luxury if we are not sure of interactions in a system. In the same way, we can use Taguchi to design a set of physical experiments to help us structure the virtual experiments and the analysis of the results, helping us make best use of time.
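Because each run of such a model is so cheap, exhaustively sweeping every combination of design levels becomes feasible. A minimal sketch – the factor names, levels and stand-in response function below are purely illustrative:

```python
import itertools

# Hypothetical design factors, each at three levels (nominal +/- tolerance)
spring_rates = [280.0, 300.0, 320.0]     # spring rate (N/m)
bores = [0.00018, 0.00020, 0.00022]      # needle bore (m)

def delivery_time(k, bore):
    # Stand-in for the full device model: slower with a weaker spring
    # or a narrower bore (scaled to give times of order seconds)
    return 1.0 / (k * bore ** 4 * 1e12)

# Every permutation of design change - impractical physically, trivial here
results = {(k, b): delivery_time(k, b)
           for k, b in itertools.product(spring_rates, bores)}
worst = max(results, key=results.get)    # slowest (worst-case) combination
```

With only two factors this is a nine-run sweep; a Taguchi orthogonal array earns its keep when the factor count grows and even virtual runs need structuring.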

Speaking of time… we may have a set of individual component interaction equations describing the system as a whole, but unless we consider the related dynamics and kinematics – how interactions are dependent on time – we may miss critical aspects of performance. Consider the tablecloth trick. The force applied to the cup and saucer, via the tablecloth, is sufficient to drag them off the table. If we were to consider equilibrium, steady state conditions, we would smash the crockery every time. But if we consider the speed of movement of the cloth, we realise it has been pulled out of the way before the frictional contact forces have had a chance to accelerate the cup and saucer, which remain on the table.

**Consider the tablecloth trick. The force applied to the cup and saucer, via the tablecloth, is sufficient to drag them off the table.**

So, to really understand what is going on, we must create a dynamic, or ‘time-stepping’ model, starting with a set of equations which describe the forces on each device component at any position. At time zero, we can then determine the initial accelerations of each part, and if we assume that these forces stay constant for a very small amount of time (e.g. 0.0001s), Newton’s laws of motion can help us determine the new positions of each component after that small time step. We then recalculate the forces for those new positions, work out the new accelerations and new positions, recalculate the forces, and so on for as long as we want. This is all automated in software, and all happens very quickly.
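The loop described above can be sketched in a few lines. Here is a minimal forward-Euler time-stepping model of the tablecloth trick itself (Python rather than MathCAD/Matlab; the friction coefficients, cloth length and speeds are illustrative assumptions, and the cup’s mass cancels out because frictional acceleration is simply the coefficient times g):

```python
G = 9.81                          # gravitational acceleration (m/s^2)
MU_CLOTH, MU_TABLE = 0.2, 0.3     # assumed friction coefficients
CLOTH_UNDER_CUP = 0.3             # metres of cloth initially under the cup

def cup_displacement(cloth_speed, dt=1e-4, t_max=2.0):
    """How far the cup moves when the cloth is pulled at cloth_speed."""
    x_cup, v_cup, x_cloth, t = 0.0, 0.0, 0.0, 0.0
    while t < t_max:
        if x_cloth - x_cup < CLOTH_UNDER_CUP:
            # Cloth still under the cup: sliding friction drags it forward
            a = MU_CLOTH * G if cloth_speed > v_cup else 0.0
        elif v_cup > 0.0:
            # Cloth gone: table friction decelerates the cup
            a = -MU_TABLE * G
        else:
            break                  # cup has come to rest
        # Assume forces constant over dt; apply Newton's laws, then repeat
        v_cup = max(0.0, v_cup + a * dt)
        x_cup += v_cup * dt
        x_cloth += cloth_speed * dt
        t += dt
    return x_cup
```

Pulling fast (say 5 m/s) leaves the cup within a few millimetres of where it started; pulling slowly (0.1 m/s) drags it a long way towards the table edge – the equilibrium analysis misses this entirely.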

Of course, you need to check that the mathematical model predicts what is happening in real life – to validate the model to some extent – but we can do this through physical testing and observation, and discrete calculations and analyses. Once we are confident about our model, we can use it very effectively to investigate the sensitivities of our system, using virtual Taguchi experiments if appropriate, and hence make confident predictions about whether or not our cup and saucer will remain intact – and about what we need to change to ensure that they do! Sensitivity analysis is a powerful design tool, allowing you to predict the behaviour of your device when subjected to a design change, and it can be done early in the design process, enabling you to refine your design from the start and avoid pitfalls.

A combination of ‘back-to-basics’ mathematical modelling, some FEA, and the application of Taguchi principles can keep your design on track and help you solve problems with minimal cost, time and disruption.

**Case Study: Auto-Injector troubleshooting**

Recently, Team was asked to assist a client with an auto-injector in the very late stages of production, which was exhibiting a failure in a particular performance characteristic. Unfortunately, as there were only a few reported cases, the failure was difficult to measure, and the client was very limited as to what design aspects could be changed. On top of this, big deadlines were looming. The client needed an investigation into the causes of the problem and suggestions for potential design solutions which could be implemented quickly – or to know whether the problem was serious enough to stop the programme.

**“It was very difficult, if not impossible, to predict device operation by looking at the individual component models or subsystems alone, as the force on each component varied through time depending on where the other components were.”**

This was a prime candidate for a sensitivity analysis: the device seemed to work, but something was causing it to occasionally not work. By conducting a sensitivity analysis we aimed to determine the effects of component tolerances on the device performance (in case it was a tolerance issue), find out the design aspect with the biggest impact on the performance parameter of interest, and determine how much an aspect of the design (such as a component dimension) had to be changed in order to prevent the device from ever failing.

Physical experiments could be conducted using production parts and devices, but not a sensitivity analysis, as it would be too costly and time-consuming to produce a myriad of parts all with slightly different dimensions or characteristics. Also, at this stage we did not know which parts we wanted to change or by how much.

The first step was to build an FEA model of the device and run simulations, starting with individual features and simple deflections. We used the CAD data to generate the model, and physical experiments on individual parts confirmed system characteristics such as material stiffness and friction coefficients. We then developed the FEA into a more comprehensive model, incorporating nonlinear and dynamic elements, to assess performance of the combined system.

At this point, two issues arose. Firstly, the FEA simulation did not agree with real life, shown by comparing simulation results to High Speed Video footage of the injector; in the video, the sequence of mechanical events within the device was consistently A then B then C, but in the simulation it was A then C then B. Although we might have been able to determine why this was happening, we were faced with the second problem: the run time for each FEA experiment was extending into days. So even if the FEA simulation were accurate, it would be completely impractical for a sensitivity analysis, which requires many such experiments.

We then turned to mathematically modelling the components and subsystems of the device to see if we could gain any further insights. We created separate mathematical models for each component of interest, describing the forces it was under throughout its range of motion, but it was very difficult, if not impossible, to predict device operation by looking at the individual component models or subsystems alone, as the force on each component varied through time depending on where the other components were. If we ignored the fact that the parts were moving with speed and just considered the forces on the parts, it would give the impression that the FEA model was correct, i.e. A, C, B. However, we suspected that the “tablecloth trick” might be happening, and that it was all about the speed at which every component was moving.

We needed to stitch the models together, and add the element of time. The mathematical models, written in MathCAD, were combined in one larger model with a time-stepping loop, as described above. Using this approach, we were able to predict the relative positions, velocities and accelerations of all the components during the whole injection sequence. We then confirmed that the time-stepping model was giving valid and accurate results by comparing its predictions with the performance of actual devices recorded with High Speed Video.

We then changed aspects of the device design in the time-stepping model to see how those changes affected the movements of the components. We set up and ran our virtual Taguchi experiment, and as each run only took seconds, we could run as many experiments as we wished.

**We were able to predict the relative positions, velocities and accelerations of all the components during the whole injection sequence.**

The client could only make specific alterations, and so we only simulated these alterations using the time-stepping model. We ran more than forty experiments, and used Minitab’s statistical tools to show which aspects of the design had the greatest impact on the performance characteristic we were looking at. Once we had identified the best candidate for alteration, we simulated the effects of that change using the mathematical model before going back to the client with our suggestion on how best to solve this particular problem.

The mathematical model predicted (and the High Speed Video confirmed) that the problem would always happen to some degree, but we suspected that the difficulty in spotting the event meant that only a few were noticed and reported.

**An approach for analysing a multi-component, dynamic device:**

- Describe individual components in terms of fundamental engineering and physics equations, i.e. mathematically model the individual components.
- Make FEA virtual models of those individual parts.
- Use physical testing of real parts to check the mathematical and FEA models and to get material characteristics.
- Run FEA simulations of individual parts (i.e. simple forces, no interaction) to cross-check that design changes in the virtual world and the mathematical model have the same effect.
- Tie the individual mathematical models together using a time-stepping simulation.
- Run a sensitivity analysis, using DoE for experiment structure.
- Use statistical tools such as ANOVA to pick apart results.
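As a sketch of the last two steps – a main-effects breakdown, the first stage of an ANOVA-style analysis, applied to a tiny two-factor DoE. Only the Python standard library is used, and the factor names and response values are made up for illustration:

```python
from statistics import mean

# DoE results: (spring_rate level, needle_bore level) -> delivery time (s)
runs = {
    ("low", "low"): 1.9, ("low", "high"): 1.2,
    ("high", "low"): 1.5, ("high", "high"): 0.9,
}

def main_effect(runs, factor_index):
    """Spread between the mean responses at each level of one factor."""
    levels = {}
    for key, response in runs.items():
        levels.setdefault(key[factor_index], []).append(response)
    level_means = {lvl: mean(ys) for lvl, ys in levels.items()}
    return max(level_means.values()) - min(level_means.values())

spring_effect = main_effect(runs, 0)   # effect of spring rate
bore_effect = main_effect(runs, 1)     # effect of needle bore
```

In this made-up data the bore’s main effect (0.65 s) dominates the spring’s (0.35 s), so the bore would be the first candidate for alteration; a full ANOVA (e.g. in Minitab, as in the case study) would add significance testing on top of this ranking.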