Empirical tools
The use of empirical methods to test and challenge a design is a fundamental part of a product’s development. In the early stages of the process, a risk-based approach often drives the development testing strategy, where efforts are focused on establishing baseline confidence in the core system technology.
Test data may be both quantitative and qualitative and should assist with developing both the engineering and usability of the design. Both forms of data are invaluable, providing an early snapshot of the design's likely reliability. Test approaches may be as simple as ‘A/B’ testing, where two different designs are evaluated, or employ Design of Experiments (DoE) techniques to screen critical variables from a large pool. This approach can focus subsequent activities on the most critical interactions and features, streamlining the development work.
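To illustrate how a DoE can screen variables, the sketch below (hypothetical moulding factors and levels, written in Python) enumerates a two-level factorial design and a half-fraction subset of it; a real screening study would select its factors and levels from the device's risk assessment.

```python
from itertools import product

# Hypothetical screening factors for a moulded component (illustrative levels only)
factors = {
    "melt_temp":     (220, 260),  # degC: low / high
    "hold_pressure": (40, 60),    # bar
    "cool_time":     (8, 12),     # s
}

# Full 2^k factorial: every combination of low/high levels
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Half-fraction screen (2^(k-1)): keep runs whose coded levels (-1/+1) multiply to +1,
# trading some interaction information for half the number of tests
def coded(run):
    return [(-1 if run[f] == factors[f][0] else 1) for f in factors]

half_fraction = [r for r in runs if coded(r)[0] * coded(r)[1] * coded(r)[2] == 1]

for i, run in enumerate(half_fraction, 1):
    print(f"Run {i}: {run}")
```

The half-fraction keeps the study small at the cost of aliasing some interactions, which is usually an acceptable trade during early screening.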
As the design matures, so do the scope and scale of its testing. Critical component features may be manufactured at the extremes of tolerance to understand the design space and how reliability varies across it. Again, a DoE can help increase the efficiency of these larger-scale activities, with output data feeding directly into efforts to either revise the design or tighten manufacturing controls.
Preconditioning and overstress testing (such as ageing, shock or thermal cycling) can be included in test programmes if deemed appropriate for the reliability requirements of the device. Finally, as test quantities increase (including as part of design verification testing), it is possible to use process capability analysis against well-established functional limits to predict system performance.
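As a minimal sketch of this final step (invented measurement data and functional limits), a capability index such as Cpk and a predicted out-of-specification rate can be estimated from verification data, assuming the process is approximately normal:

```python
import statistics as st

# Hypothetical force-to-actuate results from verification testing (N), with
# illustrative functional limits; real limits would come from the requirements
data = [18.2, 19.1, 18.7, 19.4, 18.9, 19.0, 18.5, 19.3, 18.8, 19.2]
lsl, usl = 15.0, 22.0

mean = st.mean(data)
sd = st.stdev(data)

# Process capability index against the nearer functional limit
cpk = min(usl - mean, mean - lsl) / (3 * sd)

# Predicted out-of-specification rate, assuming an approximately normal process
dist = st.NormalDist(mean, sd)
p_oos = dist.cdf(lsl) + (1 - dist.cdf(usl))

print(f"Cpk = {cpk:.2f}, predicted out-of-spec rate = {p_oos:.2e}")
```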
Analytical tools
In many scenarios, particularly for complex systems, it is neither practical nor feasible to rely exclusively on empirical tools to create a high-reliability design. Instead, empirical approaches can be complemented by analytical tools to increase the breadth and depth of the analysis, ideally over a shorter timeframe.
Mathematical modelling, for example, is a time- and cost-effective way to simulate and interrogate system behaviour. The complexity of the model is driven by two things: the physics involved and the level of fidelity required. Time- or displacement-based models, built on engineering first principles and the assumptions that come with them, help identify the relationships and sensitivities between different parameters. Models can be built quickly in Excel, or with higher-end tools such as MathCAD or Python. Similarly, regression models can be derived from gathered data to interpolate or extrapolate results.
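For instance, a hypothetical spring-driven plunger (invented mass, spring rate and damping values) can be modelled from first principles with a few lines of explicit time integration to see how the predicted delivery time responds to each parameter:

```python
# Minimal first-principles sketch: spring-driven plunger with viscous drag
# (hypothetical parameters, explicit Euler time integration)
m, k, c = 0.005, 300.0, 8.0      # plunger mass (kg), spring rate (N/m), damping (N.s/m)
x0, stroke = 0.010, 0.008        # initial spring compression (m), required travel (m)

dt, t, x, v = 1e-5, 0.0, 0.0, 0.0
while x < stroke and t < 1.0:
    force = k * (x0 - x) - c * v  # net force: spring push minus viscous resistance
    v += (force / m) * dt
    x += v * dt
    t += dt

print(f"Predicted delivery time: {t * 1000:.1f} ms")
```

Sweeping the spring rate or damping value in a loop then gives a quick picture of which parameters the delivery time is most sensitive to.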
Finite Element Analysis (FEA) or Computational Fluid Dynamics (CFD) tools can offer a level of accuracy above first-principles math models and, provided the input data is representative, can be used to inform robust and reliable medical device design.
Tolerance analysis is widely used to assess the impact of variation from a manufacturing process, and to help design and manufacturing teams align on realistic and appropriate tolerances. The assessment can be either formative, to evaluate the variation expected from a number of concepts, or summative, to demonstrate that manufacturing controls are sufficient to support the chosen design intent. Statistical approaches to tolerance analysis, such as Monte Carlo simulation, can be used to analyse non-linear component interactions. Again, the results of such analysis may affect design and manufacturing decisions, and can also be integrated with edge-case reliability testing, in which prototype devices are built at the extremes of tolerance to evaluate performance.
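As a simple sketch of the Monte Carlo approach (hypothetical dimensions, tolerances and distributions), each dimension in a stack can be sampled from its expected distribution and the resulting assembly clearance checked against its functional limit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical three-part stack: housing bore minus two stacked components (mm),
# each dimension sampled as normal with sigma = tolerance / 3
bore      = rng.normal(20.00, 0.05 / 3, n)
component = rng.normal(12.00, 0.04 / 3, n)
spacer    = rng.normal( 7.90, 0.03 / 3, n)

gap = bore - component - spacer   # resulting assembly clearance

p_interference = np.mean(gap <= 0.0)
print(f"Mean gap {gap.mean():.3f} mm, sd {gap.std():.4f} mm, "
      f"P(interference) = {p_interference:.2e}")
```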
Risk management tools
Risk analysis and management is pivotal in high-reliability medical device design. As confidence in the design and manufacture of a device increases, the occurrence rate of critical failures will decrease well below what can be detected using empirical methods. As such, probabilistic techniques must be used which combine an intimate knowledge of system mechanics with its known failure modes to assess and mitigate risk, and to forecast the resultant reliability of the device.
There are various tools available to designers to achieve this. Two of the most commonly used in high-reliability design are Failure Modes, Effects, and Criticality Analysis (FMECA), and Fault Tree Analysis (FTA), which can be used in parallel to each other.
FMECA is an example of a bottom-up approach, generating a comprehensive assessment of faults at the component level without considering their system-level impacts. These failures are assessed for both severity and likelihood of occurrence, with the resulting criticality score then used to drive any mitigative actions.
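As an illustrative sketch (invented failure modes and ratings), criticality can be calculated as the product of severity and occurrence scores and used to rank faults for mitigation:

```python
# Hypothetical FMECA entries: (failure mode, severity 1-5, occurrence 1-5)
entries = [
    ("Spring relaxation below minimum force", 4, 2),
    ("Needle shield retention feature cracks", 5, 1),
    ("Seal extrusion under storage load",      3, 3),
]

# Criticality as severity x occurrence, ranked highest first to drive mitigation
ranked = sorted(((s * o, mode) for mode, s, o in entries), reverse=True)
for crit, mode in ranked:
    print(f"Criticality {crit:>2}: {mode}")
```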
Conversely, FTA is a top-down approach that considers the system-level impact of combined underlying faults, though it is not effective at fault identification. Fault trees are fed data on the underlying failure probabilities – these data can be derived by empirical or analytical means (i.e., other tools in the reliability toolkit), and are combined using AND/OR gates to calculate the overall probability of success (i.e., the reliability) of the system.
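For example, with hypothetical basic-event probabilities and assuming independent events, AND gates multiply probabilities while OR gates combine them via the complements:

```python
# Basic-event failure probabilities per use (hypothetical, assumed independent)
p_spring_fail   = 1e-5
p_seal_leak     = 2e-6
p_sensor_fault  = 5e-4
p_firmware_miss = 1e-4

def AND(*p):            # all underlying faults must occur
    out = 1.0
    for x in p:
        out *= x
    return out

def OR(*p):             # any one underlying fault is sufficient
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# Illustrative top event: a mechanical fault occurs, OR both redundant
# detection measures (sensor AND firmware check) fail together
p_top = OR(p_spring_fail, p_seal_leak, AND(p_sensor_fault, p_firmware_miss))
reliability = 1.0 - p_top
print(f"P(top event) = {p_top:.2e}, reliability = {reliability:.6f}")
```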
Effective fault tree analysis requires high-quality data to quantify failure probabilities. Building these datasets, whether through modelling or empirical testing, can be resource- and time-intensive. It is therefore not practical to include every foreseeable event in the FTA – the FMECA can instead be used to selectively include or exclude faults based on a predetermined and justified risk threshold. Faults that do require consideration in the fault tree can then be supported with data as required, and faults that are excluded can be easily justified via the direct link to the FMECA.
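Continuing the hypothetical criticality sketch above, a predetermined threshold can then be applied to decide which faults are carried into the fault tree and which are documented as excluded:

```python
# Hypothetical criticality scores from a FMECA and a justified inclusion threshold
THRESHOLD = 8
fmeca = {
    "Spring relaxation below minimum force": 8,
    "Needle shield retention feature cracks": 5,
    "Seal extrusion under storage load": 9,
}

include_in_fta = {mode: c for mode, c in fmeca.items() if c >= THRESHOLD}
excluded       = {mode: c for mode, c in fmeca.items() if c < THRESHOLD}

print("Carried into fault tree:", list(include_in_fta))
print("Excluded (justified via FMECA):", list(excluded))
```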