We’re only human after all: Root cause analysis in medical device development
10 Oct 2017 · 11 min read
Root Cause Analysis (RCA) is a systematic approach to finding the cause of a problem. It is used to identify methodically why problems occur, rather than focusing only on the symptoms. It was famously used in the 1950s by NASA to solve rocket launch problems. RCA is now recognised as an integral part of Human Factors (HF) research. In the context of medical device development it refers to the qualitative analysis of issues observed during human interaction with a medical device or system, be it use difficulties, deviations, close calls or use errors.
The Food and Drug Administration (FDA) considers RCA during usability testing a best practice approach for identifying and eliminating use error (ANSI/AAMI HE75:2009). This analysis must be ‘considered in relation to the associated risks to ascertain the potential for resulting harm’ in order to determine whether additional risk mitigations are required (FDA, 2016).
Those with experience of conducting user studies will appreciate the minutiae that must be captured and understood in order to establish probable root cause(s). Moderators of HF studies will no doubt have found themselves questioning how the device and/or instructions might have led to a user's confusion, while study participants will inevitably have found themselves blaming their own inattention, or their haste to complete a task.
Observing, recording and subsequently asking participants about their interactions with a device requires the moderator to quickly process information and then exercise judgement on how best to facilitate discussions with participants in a non-leading manner. The qualitative data collected from such studies should then inform suitable corrective and preventative actions to be implemented in order to enhance device design.
It is therefore critical that moderators and observers of user studies establish the correct root cause(s) of an observed use error, difficulty or close call in order to mitigate or reduce the identified risks as far as practicable.
By the very act of conducting RCA, the medical device industry recognises that moderators and observers alike are required to exercise judgement based on the information they observe and collect from test participants. While we recommend that these judgements are made by experienced professionals with prior knowledge of HF principles, and that studies are conducted in accordance with best practice, it is imperative to recognise the potential for bias and error in decision making. After all, we are only human.
‘Characters of the story’
As human beings, much of our behaviour, decisions and thoughts, whether we like it or not, indeed whether we acknowledge it or not, is automatic. In the international bestseller 'Thinking, Fast and Slow', Daniel Kahneman describes two systems of thinking, which he refers to as the characters of the story: System 1, which operates automatically and quickly with little sense of effort, and System 2, which allocates attention to effortful, deliberate mental activities.
To experience System 1 thinking, try the following example. Do not try to overthink it; just write down the first answer that comes to mind.
A drug and its delivery device costs $1100. The drug costs $1000 more than the delivery device. How much does the delivery device cost?
At the moment of completing the final sentence above, a number instantly came to mind without effort or conscious control. That answer was, inevitably, $100. The answer came to you intuitively, and the speed with which you reached it may even have surprised you, yet on reflection we know that it is incorrect. If the delivery device cost $100, the drug would cost $1,100 and the total would be $1,200, not $1,100. The correct answer is therefore $50: the device costs $50, the drug costs $1,050, and together they total $1,100.
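The arithmetic that System 2 has to do here can be written out in a few lines. A minimal sketch in Python (the function name is illustrative): letting x be the device cost, the puzzle states x + (x + 1000) = 1100, so x = (1100 − 1000) / 2.

```python
def device_cost(total, difference):
    """Solve x + (x + difference) = total for x, the cheaper item."""
    return (total - difference) / 2

intuitive = 100                              # the answer System 1 supplies
correct = device_cost(total=1100, difference=1000)

# The intuitive answer fails the check: 100 + (100 + 1000) = 1200, not 1100.
assert intuitive + (intuitive + 1000) != 1100

# The computed answer passes: 50 + (50 + 1000) = 1100.
assert correct + (correct + 1000) == 1100
print(correct)  # 50.0
```

The point of the check is exactly the System 2 step the puzzle invites you to skip: substituting the candidate answer back into the constraint.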
Those familiar with the problem, or indeed with 'Thinking, Fast and Slow', on which this article is based, may well have reached the correct conclusion. However, it is reasonable to assume that the intuitive answer still came to mind, and that System 2 enabled you to resist it. This raises a concept of central significance for this article: cognitive ease.
Cognitive ease refers to the extent to which information can be processed automatically and unconsciously, without the need for 'cognitive strain'. As Kahneman describes, the human brain is constantly processing information to assess system status and determine whether increased effort or cognition is required. Questions such as "Is anything new going on?", "Is there a threat?" and "Should attention be redirected?" are just a few of the 'automatic' assessments controlled unconsciously by System 1. When the answer to any of these questions is 'yes', System 2 is engaged to audit and control the suggested actions and thoughts of System 1.
Why does it matter?
If there is one take-away point from this article, it is to acknowledge that System 1, also termed the associative machine, has the potential to jeopardise RCA in medical device development. The implications of association based on previous experiences and intuitions, a hunch if you will, not only pose a risk to conducting effective user studies but also have a direct impact upon device safety and efficacy.
‘If usability testing is not conducted carefully and systematically, the resulting data might not be valid or reliable, leading to poor design decisions and ultimately, an unsafe medical device.’ (ANSI/AAMI HE75:2009).
As with the drug delivery device question, moderators and observers of user studies are susceptible to intuitive answers based on previous experiences and the extent to which an answer feels familiar and true. Indeed, one could argue that those with greater experience of conducting user studies who fail to acknowledge such bias, perhaps while studying a device similar to one previously tested but for a different indication, are more likely to be influenced by their preconceptions of what they may or may not observe.
In 'Thinking, Fast and Slow', Kahneman describes two facets of System 1 that are particularly relevant for the context of this article and ultimately have the potential to be harmful during RCA.
“System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have”.1
“System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions”.
Both of these characteristics of System 1 relate to what Kahneman terms 'WYSIATI': 'What You See Is All There Is'. In light of this, moderators must challenge their intuitions and remain objective when a use error, difficulty or close call is observed. In short, they must engage System 2 and resist the temptation to assign a root cause based on the 'best possible story' or the intuitive answer (remember the $100).
Acknowledge it exists
The question is therefore: how do we mitigate the potential for bias during root cause analysis in medical device development?
The answer lies partly in acknowledging that the bias exists. Failing to recognise it in the first instance leaves those conducting user studies vulnerable to acting on previous experiences and establishing an incorrect root cause for an observed use error, difficulty or close call. As an advocate of HF principles, my first recommendation is to refer to FDA guidance to understand what conducting a user study is fundamentally trying to achieve.
Current guidance from the FDA, Applying Human Factors and Usability Engineering to Medical Devices, states that:
‘The observational data and knowledge task data should be aggregated with the interview data and analysed carefully to determine the root cause of any use errors or problems (e.g., “close calls” and use difficulties) that occurred during the test. The root causes of all use errors and problems should then be considered in relation to the associated risks to ascertain the potential for resulting harm and determine the priority for implementing additional risk management measures’.2
Following this approach will challenge any preconceptions that moderators and observers of a user study may have and emphasise the need to focus upon both characters of the story: Systems 1 and 2.
Understand the criticality of tasks
Critical tasks are tasks that, if performed incorrectly or not performed at all, would or could cause serious harm to the user. Before conducting a validation study it is imperative to have a firm grasp of the critical tasks that users must perform correctly for device use to be considered safe and effective.
System 1 thinking may lead you to assume, falsely, that observing zero critical use errors during early formative testing means you have a safe and effective device that will pass validation testing. Remember the acronym 'WYSIATI': the fact that a potential use error has not been observed during early formative testing does not preclude it from occurring during validation testing.
The FDA highlights that the list of critical tasks is dynamic and will change as the device design evolves through iterations and preliminary analyses; validation study protocols should therefore 'include mechanisms to detect previously unanticipated use errors' (FDA, 2016).
Seek expert advice
Finally, though it may sound like a cliché, seeking advice from professionals with experience in human factors engineering is advisable. Involving HF specialists can only be truly effective if they are incorporated into the design, preparation, implementation and analysis of a user study. Only then can you take confidence in the approach and the results attained through conducting a user study during medical device development. If your intuitive reaction to that last sentence was 'here comes the sales pitch', you may be surprised to hear that this advice is taken directly from FDA guidance.
In ANSI/AAMI HE75:2009, the FDA states the following:
‘Usability testing plans should be developed in collaboration with professionals with human factors expertise. Human factors expertise is also needed in the interpretation and analysis of results.’3
Conducting RCA in user studies during medical device development is inherently a cognitive activity that requires moderators and observers alike to exercise judgement. As a result, RCA in medical device development is exposed to potential biases through association and repeated experience.
Unconsciously, our minds constantly seek connections to scenarios that feel familiar and comfortable, making us vulnerable to biased judgements based on our previous experiences and observations.
This may come as an alarming conclusion to some; however, it is critical to acknowledge the limitations of conducting user studies in order to address, minimise and control the bias introduced into a study. This article reinforces the importance of agreeing a methodical protocol amongst the project team, supported by a documented human factors evaluation process and risk analysis.
Kahneman, D., 2011. Thinking, Fast and Slow. Penguin Books.
FDA, 2016. Applying Human Factors and Usability Engineering to Medical Devices. Guidance for Industry and Food and Drug Administration Staff. February 3rd, 2016.
ANSI/AAMI HE75:2009. Human Factors Engineering – Design of Medical Devices.