Balancing theoretical and empirical approaches in device development
When developing new products, engineers and designers are challenged to make well-informed decisions to create solutions that meet the requirements. Good design practice encourages an evidence-based approach, and regulations often require one; throughout all stages of product development, analytical data is relied upon to support and verify device-design decisions. Herein lies the necessity for engineering analysis, whose primary objective is to assess and determine quantitatively whether a device, mechanism, sub-system or component is fit for purpose. Depending on the maturity of the design and the scope of the project, different approaches can be used to obtain the relevant analytical information, but how do you go about choosing the right tool for the job?
“…how do you go about choosing the right tool for the job?”
Engineering with applied science
The analytical engineering activities referred to here are scientific, logical and methodical investigative measures conducted to aid design development. A vast spectrum of methods is used, ranging from the purely theoretical at one extreme to the purely empirical at the other. Examples of theoretical approaches are mathematical-modelling methods, such as Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Tolerance Analysis or bespoke mathematical system simulations. Empirical methods usually involve physical testing, measurement and observation of device components for direct assessment of their performance. Broad applications include tensile/compression load tests and metrology, as well as many function-specific attribute or quantitative tests, such as indicator-function or moisture-vapour transmission-rate tests.
In reality, analytical work conducted during device development is a combination of these approaches, employing tools from both ends of the spectrum to provide the data required via the most efficient route. Planning this route depends on the information sought, the resources available and the development stage of the product. Using the example of a preloaded active medical device, different analytical processes will be explored to assess the deflection of the components within the assembly.
An example of analytical engineering
Figure 1 shows a simple sub-assembly, in which the dimension between the two internal faces is of critical importance. This could be a controlled dimension for many reasons: perhaps a compartment for a third component or separate sub-assembly, such as a pre-filled syringe (PFS) or battery cell. The level of control required is dependent on how critical the dimension is to the function, and hence the level of risk.
Tolerance analysis
Early in the design process, theoretical tolerance analysis will be conducted to ensure that the output dimension will meet requirements, despite the geometrical variation from manufacturing. In the tolerance ‘stack’ shown in Figure 1, the nominal value for the output dimension is determined by equation (1). In order to consider the worst-case cumulative geometrical variation, the tolerance of each dimension is summed, as shown in equation (2). It can then be determined whether a third element, let’s say a PFS, will ‘fit’, under the assumption that all three elements are manufactured within specification.
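The original equations and Figure 1 are not reproduced here, so the following is a generic form consistent with the description, assuming the stack consists of an overall internal dimension C containing two components of dimensions A and B, each specified with a symmetric tolerance t (labels are illustrative assumptions):

$$X_{\text{nom}} = C_{\text{nom}} - A_{\text{nom}} - B_{\text{nom}} \tag{1}$$

$$T_X = t_C + t_A + t_B \tag{2}$$

Here X_nom is the nominal output dimension between the two internal faces and T_X is its worst-case tolerance; under worst-case assumptions, a third element fits if its own maximum envelope does not exceed X_nom − T_X.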
This analysis, from which initial design decisions will be taken, is purely theoretical. Closer to production, when large numbers of manufactured parts are available, it becomes feasible to verify empirically that each PFS assembly will fit within each device, but that is far too late to discover that the objective has not been achieved.
Metrology and process capability
In the detailed design or pilot manufacturing stages, access to enough parts for a full attribute test such as this may not be feasible, but a small number of parts can give sufficient insight if used appropriately. This is where a combination of empirical and theoretical techniques can be employed. Capturing metrology data for the key dimensions of sampled components, and using statistical methods such as process capability to predict the variability of the measured features, can help highlight potential problems before full-scale production. Engineers and designers are thus provided with valuable data, part empirical, part theoretical, to inform design decisions, reducing the probability of defective devices going into production to acceptable levels (ideally zero).
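As a minimal sketch of this combination of measurement and statistics, assuming a handful of measured values for the critical dimension and hypothetical specification limits, the standard process-capability indices Cp and Cpk can be estimated as follows:

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Estimate Cp and Cpk for a measured feature against its specification limits."""
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)            # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # capability allowing for centring
    return cp, cpk

# Hypothetical metrology data for the critical internal dimension (mm)
samples = [24.98, 25.02, 25.01, 24.97, 25.03, 25.00, 24.99, 25.02]
cp, cpk = process_capability(samples, lsl=24.90, usl=25.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

A Cpk comfortably above roughly 1.33 is a common rule of thumb for a capable process, although the acceptable threshold ultimately depends on the risk attached to the dimension.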
Complexity with increasing variables
To further illustrate how a balanced approach can be deployed, consider a more challenging example, in which a load is applied to the components of such an assembly. This occurs in many medical devices on the market, such as auto-injectors or breath-actuated inhalers. The majority of components in such drug-delivery devices are manufactured from injection-moulded plastic, which, when subjected to high loads, will deflect significantly; that is, their geometry will change. To ensure a robust design, engineers must obtain a good understanding of this deflection.
“Early in the design process…purely theoretical methods have to be adopted.”
When considering this pre-load early in the design phase, analysis of the tolerances specified on the engineering drawings is no longer enough to ensure that every device will function appropriately. As with the dimensions on the drawing, the magnitude of deflection is also variable, so a deflection tolerance will need to be specified to define the permissible limits of that variation.
Early in the design process, with no parts to measure in a compression test or within the assembly itself, purely theoretical methods have to be adopted.
Structural simulation through finite element analysis
Using finite element analysis (FEA), engineers can model the component geometries and, with appropriate material data, apply a load to predict the resulting deflection (Figure 3). The geometries can then be adjusted to the extremes of tolerance to gain an indication of the predicted deflection tolerances. This can be quite straightforward for a static, linear scenario and can often provide sufficient data to steer the design in the right direction. However, these loading scenarios within devices usually persist for extended periods of time: 2 to 4 years of storage and 12 months of use, for example. Deflection that occurs instantaneously may change significantly as the parts deform over time (creep). Modelling this time dependency with FEA requires extensive material data, which can be hard to come by, and adds computational complexity, since the simulation must handle non-linear material behaviour.
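FEA itself is run in dedicated solvers, but the tolerance-extreme study described above can be sketched with a simplified, purely illustrative linear model: treat the loaded plastic feature as a cantilevered wall and evaluate the deflection at the nominal and extreme thicknesses. All dimensions, loads and material values below are assumptions, not data from the example device.

```python
# Illustrative stand-in for a set of FEA runs at tolerance extremes:
# a loaded plastic wall treated as a cantilever of length L, width b and
# thickness t under a tip force F. delta = F*L^3 / (3*E*I), with I = b*t^3/12.

E = 2300.0               # Young's modulus of an assumed injection-moulded plastic, MPa (N/mm^2)
F = 10.0                 # applied pre-load, N (assumed)
L, b = 20.0, 15.0        # nominal length and width, mm (assumed)
t_nom, t_tol = 1.5, 0.1  # nominal wall thickness and its tolerance, mm (assumed)

def deflection(length, width, thickness):
    """Tip deflection of a cantilever under a point load (linear, instantaneous)."""
    inertia = width * thickness ** 3 / 12.0
    return F * length ** 3 / (3.0 * E * inertia)  # mm

for t in (t_nom - t_tol, t_nom, t_nom + t_tol):
    print(f"thickness {t:.2f} mm -> deflection {deflection(L, b, t):.3f} mm")
```

Running the same model at each tolerance extreme mirrors, in miniature, what the repeated FEA studies provide; it says nothing, however, about how the deflection evolves through creep.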
Measurement without compromise
As theoretical uncertainty grows, empirical approaches can be more effective; for example, the output dimension can be measured directly from manufactured device assemblies. Nearer production, obtaining manufactured parts may no longer be an issue, but this method brings its own set of challenges.
With medical devices becoming smaller and more complex, and the nature of their design making it difficult to access critical internal components, it’s not always easy to take measurements without tampering with the device and hence affecting the measured features. However, modern metrology methods, such as CT scanning, make it possible to measure components within a sealed device. This method comes at a cost, a very high cost if large quantities are involved, but in many cases the information obtained well justifies the expenditure.
“It’s not always easy to take measurements without tampering with the device”
As mentioned above, the loading scenarios described occur over extended periods of time. Project timescales and deadlines cannot accommodate 12–36 months of real-time testing to measure this deflection before key decisions are taken, which is why storage and use conditions are frequently replicated through accelerated ageing. To achieve this, device assemblies are stored at higher than normal temperatures (typically 30–50°C) to accelerate part deformation or degradation. While there is still debate about the validity of this artificial procedure for replicating shelf life, particularly for materials such as elastomers, it has become standard industry practice in product development and is understood well enough for valuable data to be obtained. In some cases, empirical data can be generated to characterise general material properties, and this information can then be fed into theoretical models for specific designs or load cases.
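The acceleration achieved is commonly estimated with the Q10 rule used in guidance such as ASTM F1980, in which the accelerated-ageing factor grows by a factor of Q10 (often taken as 2) for every 10°C above the real-time storage temperature. A minimal sketch, with illustrative temperatures and shelf life:

```python
def accelerated_ageing_time(shelf_life_months, t_ambient_c, t_aged_c, q10=2.0):
    """Equivalent oven time needed to simulate a real-time shelf life (Q10 rule)."""
    aaf = q10 ** ((t_aged_c - t_ambient_c) / 10.0)   # accelerated-ageing factor
    return shelf_life_months / aaf

# Hypothetical example: simulate 24 months at 25 degC by storing at 50 degC
print(accelerated_ageing_time(24, 25, 50))  # approx. 4.2 months in the oven
```

The simplicity of the rule is part of the reason for the ongoing debate: it assumes a single, temperature-driven degradation mechanism, which does not hold equally well for all materials.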
Combining the right tools for the job
Because each method brings its own challenges, it is quite common in product development for the most efficient and relevant route to determining an analytical variable, such as deflection, to involve a hybrid theoretical and empirical approach. In this example, one approach would be to:
Employ a basic mathematical model or analysis to identify key design parameters
Empirically measure these parameters and the deflection in a small number of devices
Develop an FEA model and validate it against the measured deflection data
Use the validated FEA model to predict overall variability
This combined approach reduces the need for complex theoretical material data and other simulation parameters, and also compensates for the lack of large numbers of components when an early understanding of variability is required.
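As a rough illustration of the last two steps, assuming a handful of measured deflections and a model prediction for the same build (all values hypothetical), the model can be checked against the measurements and then driven with Monte Carlo sampling of the toleranced inputs to predict overall variability:

```python
import random
import statistics

# Validation step (illustration): compare a model prediction with measured deflections
measured_deflections = [0.42, 0.45, 0.40, 0.44]   # mm, hypothetical CT-scan data
model_prediction = 0.43                           # mm, hypothetical FEA result
error = model_prediction - statistics.mean(measured_deflections)
print(f"Mean model error: {error:+.3f} mm")

# Variability step (illustration): Monte Carlo over toleranced inputs via a simple surrogate
def surrogate_deflection(thickness_mm, load_n):
    """Hypothetical surrogate fitted to validated FEA results."""
    return 0.43 * (1.5 / thickness_mm) ** 3 * (load_n / 10.0)

random.seed(1)
results = []
for _ in range(10_000):
    t = random.gauss(1.5, 0.1 / 3)   # wall thickness: tolerance treated as +/- 3 sigma
    f = random.gauss(10.0, 0.5)      # pre-load variation, N (assumed)
    results.append(surrogate_deflection(t, f))

print(f"Predicted deflection: mean {statistics.mean(results):.3f} mm, "
      f"std dev {statistics.pstdev(results):.3f} mm")
```

The surrogate model and input distributions here are placeholders; in practice they would come from the validated FEA model and from measured manufacturing variation.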
“Relevant and high-quality data is required to best inform every step”
The value of analytical engineering is undeniable. It is a vital discipline which, when mastered, ensures the low-risk and methodical development of new technologies and products. The key to using it well is understanding the available tools and how to apply them effectively to obtain the required information. These tools are employed throughout all stages of product development. Whether proving a concept, detailing a design, or verifying and validating a manufactured product prior to launch, relevant and high-quality data is required to best inform every step towards getting a product to market.