Human factors, or the ways in which people interact with devices, became important in our industry back in the 1990s. We have been exploring how human factors relate to medical device design for over 17 years. In the course of our human factors studies we have become aware of a few misconceptions. Human factors studies are very powerful, but they are not magic. Here are some of the myths we have uncovered in our work.
Myth 1: Validation is everything
You will probably run a validation study at the end of the human factors process to demonstrate that intended users can use your device properly and, importantly, without hurting themselves (or other people). It is an important study, but it is a common misconception that the validation study is where all the focus should lie.
It is true that it will probably be your largest study, and that there is clear human factors guidance from the FDA on exactly what you have to do. But don’t be lulled into thinking that lots of ‘effort’ in the validation study will get you that regulatory tick in the box.
The job of the validation study is to serve as a ‘summary exercise’, to confirm everything that you already know – nothing more.
“Formative studies will provide an insight into usability problems and use-errors”
On the other hand, formative studies, conducted throughout the product development process, are more likely to get you through that approval process. Formative studies provide insight into usability problems and use-errors, tell you whether design changes have worked, and supply the narrative that demonstrates to regulators that the design has been optimised; that there is no more that can be done to improve the device or further reduce risks.
Myth 2: Human factors = Human factors studies
When people say “We need some HF”, they often mean “We need an HF study”. Many people believe that looking at human factors is the same as undertaking human factors studies. But this is only half the story. Good human factors involves two parts – empirical (studies) and analytical (theoretical) – and they complement each other. You should never underestimate the importance of analytical human factors.
For a start, you cannot conduct a validation study until you have conducted a thorough risk assessment and criticality assessment. Analytical techniques are also very good at quickly identifying potential usability problems, and are invaluable for exploring unusual situations that would never be tested in a user study. It is often the analytical work that is key to shaping the things that need to be explored in user studies.
The only limitation of analytical human factors is that conclusions are based on assumptions about human behaviour – in other words, it is based on our informed ‘guesses’ about how people will interact with a medical device.
All of us have a little knowledge of human behaviour, but we need to be careful that these ‘guesses’ don’t lead to sweeping generalisations that could close down design options too early – how many times do you hear well-meaning people start sentences with phrases like “Users won’t see that…” or “Users will just ignore that…”? For this reason, we need to be a bit cautious with analytical work, especially when conducted within multidisciplinary teams, and know when to supplement it with empirical work (user studies) to get ‘real data’.
Myth 3: Instructions eliminate errors
Have you ever conducted a risk assessment, identified a possibility for users to hurt themselves, and then written in the risk assessment, “emphasise in the instructions”? It is a common myth that the instructions can save the users from themselves.
Instructions for use (IFU) can be very influential. A badly designed IFU can totally confuse people and make them mess up, but even a ‘perfect’ IFU will not shift deeply ingrained human behaviours. We have seen countless examples of participants reading instructions aloud and then doing something different, or getting cross with instructions because they are telling them to do something that “they don’t need to do”.
We often try visual ‘tricks’ – including making text bold, big or red – only to find that users completely ignore it. People create mental models to make sense of the world around them and then seek out information that confirms this mental model. When people interact with a device they will engage with the parts of the instructions that fit with their mental model and gloss over the parts that don’t.
Myth 4: User ratings rule!
We like to collect user ratings – whether it is how much they like a feature, or whether they find device A better than device B. Such data can be clearly presented on a chart and can be really compelling when used in presentations.
“Ratings should always be accompanied with some rationale from the participant, and it is this rationale which is generally most helpful”
However, I would urge caution about when and how to use ratings. Regulatory bodies such as the FDA are not interested in subjective user ratings. Ratings can be interesting, and I often collect participants’ ratings, but it is easy to collect misleading data.
Ratings should always be accompanied with some rationale from the participant, and it is this rationale which is generally most helpful. For example, it may lead to a discussion of their assumptions, or mental models or thought processes.
Ultimately, it is what people do with a device that matters – do they make mistakes or get confused?
Myth 5: Formative studies are fairly simple
It is logical to think that validation studies are the complicated ‘projects’ that require great skill and that, in contrast, anyone can do a formative study.
“Formative studies have fewer guidelines or constraints and they can be surprisingly difficult to do well”
Whilst validation studies do have to be conducted in a certain way, and they do represent an important step in the regulatory process, formative studies have fewer guidelines or constraints and they can be surprisingly difficult to do well.
Formative studies serve to answer questions, but unless you have a crystal ball, it is hard to know which questions will come up. As the development process continues, issues will become clearer. There may be several different questions or problems to try to address, and these have to be prioritised. Even after prioritising the issues, the design of a study is not straightforward.
You can run ‘generic’ simulated use studies, but the real value comes from thinking hard about the research questions and how best to design a study to address them – and this can be surprisingly tricky!