10 human factors study myths

28 Feb 2017

Human factors, or the ways in which people interact with devices, became important in medical device design back in the 1990s. Since then, Team has been exploring the impact of human factors and how they relate to medical device design, from device usability through to patient adherence, regulatory compliance and beyond. Having conducted several thousand human factors studies for a variety of medical devices, we have become aware of a few misconceptions.

The following are 10 common myths around human factors studies we have uncovered in our work.

Myth 1: Validation is everything

You will probably run a validation study at the end of the human factors process to demonstrate that intended users can use your device properly and, importantly, without hurting themselves (or other people). It is an important study, but it is a common misconception that the validation study is where all the focus should lie.

It is true that it will probably be your largest study, and that there is clear human factors guidance from the FDA on exactly what you have to do. But don’t be lulled into thinking that lots of ‘effort’ in the validation study will get you that regulatory tick in the box.

The job of the validation study is to serve as a ‘summary exercise’, to confirm everything that you already know – nothing more.

On the other hand, formative studies, conducted throughout the product development process, are more likely to get you through the approval process. Formative human factors studies provide an insight into usability problems and use-errors, tell you if design changes have worked, and supply the narrative for the regulators to demonstrate that the design has been optimised; that there is no more that can be done to improve the medical device or further reduce risks.

Myth 2: Human factors = Human factors studies

When people say “We need some human factors”, they often mean “We need a human factors study” – as though looking at human factors were the same as undertaking human factors studies. But this is only half the story. Good human factors research involves two parts – empirical (studies) and analytical (theoretical) – and they complement each other. You should never underestimate the importance of analytical human factors.

For a start, you cannot conduct a validation study until you have completed a thorough risk assessment and criticality assessment. Analytical techniques are also very good at quickly identifying potential usability problems, and are invaluable for exploring unusual situations that would never be tested in a user study. It is often the analytical work that is key to shaping the things that need to be explored in user studies.
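
To make the analytical side concrete, below is a minimal sketch of how a criticality assessment might be captured and then used to decide what a user study must probe. The tasks, use-errors, harms and the 1-to-5 severity scale are all invented for illustration; they are not drawn from any real device or standard.

```python
# Illustrative only: hypothetical tasks, harms and severity scale.
from dataclasses import dataclass

@dataclass
class UseError:
    task: str            # the user task being analysed
    error: str           # what could plausibly go wrong
    potential_harm: str  # worst credible consequence
    severity: int        # assumed scale: 1 (negligible) to 5 (catastrophic)

risk_table = [
    UseError("Remove needle cap", "Pulls plunger instead", "Dose loss", 3),
    UseError("Hold for 10 seconds", "Withdraws injector early", "Underdose", 4),
    UseError("Dispose of device", "Recaps the used needle", "Needlestick injury", 4),
]

# The analytical output shapes the empirical work: any task linked to a
# high-severity use-error must be exercised and probed in user studies.
critical_tasks = [r.task for r in risk_table if r.severity >= 4]
print("Tasks needing particular focus in user studies:", critical_tasks)
```

Even a toy structure like this makes the dependency clear: the user studies cannot be planned, let alone the validation protocol written, until the analytical work exists.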

The only limitation of analytical human factors is that its conclusions are based on assumptions about human behaviour – in other words, on our informed ‘guesses’ about how people will interact with a medical device.

All of us have a little knowledge of human behaviour, but we need to be careful that these ‘guesses’ don’t lead to sweeping generalisations that could close down design options too early – how many times do you hear well-meaning people start sentences with phrases like “Users won’t see that…” or “Users will just ignore that…”? For this reason, we need to be a bit cautious with analytical work, especially when conducted within multidisciplinary teams, and know when to supplement it with empirical work (user studies) to get ‘real data’.

Myth 3: Instructions eliminate errors

Have you ever conducted a risk assessment, identified a possibility for users to hurt themselves, and then written in the risk assessment, “emphasise in the instructions”? It is a common myth that the instructions can save the users from themselves.

Instructions for use (IFU) can be very influential. A badly designed IFU can totally confuse people and make them mess up, but even a ‘perfect’ IFU will not shift deeply ingrained human behaviours. We have seen countless examples of participants reading instructions aloud and then doing something different, or getting cross with instructions because they are telling them to do something that “they don’t need to do”.

We often try visual ‘tricks’ – including making text bold, big or red – only to find that users completely ignore them. People create mental models to make sense of the world around them and then seek out information that confirms this mental model. When people interact with a medical device, they will engage with the parts of the instructions that fit with their mental model and gloss over the parts that don’t.

Myth 4: User ratings rule!

We like to collect user ratings as part of our human factors research – whether it is how much users like a feature, or whether they find device A better than device B. Such data can be clearly presented on a chart and can be really compelling when used in presentations.

However, I would strongly urge caution about when and how to use ratings. Regulatory bodies such as the FDA are not interested in subjective user ratings. Ratings can be interesting, and I often do collect participants’ ratings, but it is easy to collect misleading data.

Ratings should always be accompanied with some rationale from the participant, and it is this rationale which is generally most helpful. For example, it may lead to a discussion of their assumptions, or mental models or thought processes.

Ultimately, it is what people do with a medical device that matters – do they make mistakes or get confused?

Myth 5: Formative studies are fairly simple

It seems logical to think that validation studies are the complicated ‘projects’ requiring great skill and that, in contrast, anyone can do a formative study.

Whilst validation studies do have to be conducted in a certain way, and they do represent an important step in the regulatory process, formative human factors studies have fewer guidelines or constraints and they can be surprisingly difficult to do well.

Formative studies serve to provide answers to questions, but unless you have a crystal ball, it is hard to know which questions will come up. As the development process continues, issues will become clearer. There may be several different questions or problems to try and address, and these have to be prioritised. Even after prioritising the issues, the design of a human factors research study is not straightforward.

You can run ‘generic’ simulated use studies, but the real value comes from thinking hard about the research questions and how best to design a study to address them – and this can be surprisingly tricky!

Myth 6: Numbers are needed

We have seen people, not unreasonably, want to set numerical acceptance criteria – for example, ‘95% of the user population should be able to complete all the tasks without committing a use-error’. After all, this approach is fundamental to a lot of the objective testing that we already do: you set an acceptability threshold so you can objectively conclude whether your product has passed or failed.

However, this approach cannot be applied to human factors studies. There is no concept of an ‘acceptable failure rate’. If we applied this concept, it would somehow imply that a few ‘casualties’ are acceptable, when clearly they are not. All serious use-errors made (or nearly made) have to be presented, explained and justified as ‘acceptable residual risk’.
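
A quick back-of-the-envelope calculation also shows why a fixed pass mark offers false comfort at typical human factors sample sizes: a study can easily ‘pass’ by luck alone. This is a minimal sketch in plain Python; the 5% ‘true’ error rate is an assumption chosen for illustration, not a figure from any guidance.

```python
# Illustrative only: the 5% 'true' error rate is an assumed figure.

def p_zero_errors(true_error_rate: float, n_participants: int) -> float:
    """Chance that a study observes NO use-errors, assuming each
    participant independently errs at the given true rate."""
    return (1.0 - true_error_rate) ** n_participants

# Even if a device provokes a critical use-error in 5% of real-world uses,
# a small study has a good chance of recording a spotless 'pass':
for n in (15, 30, 60):
    print(f"n={n:2d} participants: P(zero errors observed) = "
          f"{p_zero_errors(0.05, n):.0%}")
# n=15 participants: P(zero errors observed) = 46%
# n=30 participants: P(zero errors observed) = 21%
# n=60 participants: P(zero errors observed) = 5%
```

This is another reason why every serious use-error (or near-error) has to be explained on its own merits, rather than netted off against a pass rate.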

Myth 7: Positivity is persuasive

If we see positivity in a human factors research study, for example ‘after training no errors were seen’ or ‘all participants successfully completed all steps on their first or second attempt’, it is tempting to brag about these in the study report. We may also be tempted to highlight how many users gave the product a 5-star rating! Whilst we should have a positive mindset when presenting the case for a product’s safety and effectiveness, that does not mean we should go seeking out these positive ‘trends’.

Yes, the product is good, safe and effective – this is almost an unwritten assumption. You can start your validation report or summary report with a statement that the product is safe and effective in the hands of intended users, etc. However, the regulators are interested in the errors that people make when using the product, and how critical these errors are, so that they can judge for themselves the risks of the product going onto the market.

Ron Kaye, the former Head of the Human Factors division at CDRH in the FDA, who was the driving force behind much of the guidance we see today, said, “It’s a game of words not numbers”, and likened the human factors submission to summing up in front of a jury.

Myth 8: Critical errors kill (the product)

‘Criticality’ of errors is fundamental to the human factors process. However, just because you see critical use-errors in a study does not necessarily mean that your product is going to be rejected.

Any critical errors need to be investigated (root cause analysis). You may be able to argue that the study environment caused the error, or that the error is unavoidable in that class of device. The important point is to demonstrate that nothing more can be done to the instructions or the device to reduce these errors. Critical errors don’t signal the death of the device.

Myth 9: No errors = no worries

It seems obvious that if we don’t see any users making any mistakes in a study, then we are home and dry – surely! Unfortunately, seeing no use-errors may not be enough. The product may still have a design flaw which could lead to serious harm.

Let’s take the example of cyclists using a badly designed cycle path – if we asked 60 cyclists to each test the path, we probably wouldn’t see any of them actually crash, but that doesn’t mean it is an acceptable design. If we watched thousands of cyclists, we probably would see some accidents, and possibly some quite nasty ones.

Most human factors studies use relatively small sample sizes (validation studies require a minimum of 15 participants per distinct user group), so in a validation study you may test your device with 60 participants (assuming four user groups). This may not be enough participants to actually see errors but, returning to the cycle path, you may well see people swerve, brake or utter words of despair. More importantly, if you asked the cyclists the right questions, you would learn that an obstacle such as a badly placed bollard was causing a problem which could potentially be serious.
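
To put a number on how little ‘zero errors observed’ actually proves, statisticians use the ‘rule of three’: if no events are seen in n independent trials, the 95% upper confidence bound on the true event rate is roughly 3/n. Applying it to simulated-use data is purely our illustration here – the guidance does not ask for this calculation.

```python
# Illustrative only: the 'rule of three' approximation, applied (as an
# assumption of this sketch) to use-errors in a simulated-use study.

def upper_bound_95(n_participants: int) -> float:
    """Approximate 95% upper confidence bound on the true error rate,
    given that zero errors were observed in n_participants uses."""
    return 3.0 / n_participants

for n in (15, 60, 1000):
    print(f"Zero errors in {n:4d} users: the true rate could still be "
          f"up to ~{upper_bound_95(n):.1%}")
# Zero errors in   15 users: the true rate could still be up to ~20.0%
# Zero errors in   60 users: the true rate could still be up to ~5.0%
# Zero errors in 1000 users: the true rate could still be up to ~0.3%
```

In other words, a spotless 60-participant study is still statistically consistent with around one user in twenty having a problem in the real world – which is exactly why those probing questions matter.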

In just the same way, you need to show the regulators that you have asked in-depth questions about any confusion or difficulty with using your medical device. You need to explore whether the design is good enough and check that it cannot be improved.

Don’t assume that low-occurrence errors aren’t important, or that the regulators will excuse them because of their low numbers. Arguably, these are of most interest to the regulators, since they are the easiest ones to slip through the net.

Myth 10: Adherence (to the human factors process) = approval

In other parts of product submission and approval, it is all about following the right path, doing the necessary tests, and ensuring all those compliance boxes can be ticked. The human factors process does not work in quite the same way.

The HF guidance is good guidance. Having practised human factors in a lot of industries, I can say that it doesn’t ask you to do things for the sake of it, and its essence is very fair – it is all about demonstrating that you have understood the usability issues and done all you can to fix them.

But following the guidance does not guarantee approval. While the guidance should be followed (unless there is a good reason not to), it is there to help you to meet the higher level objective, which is to provide a narrative to the regulators.

Your narrative should argue that the usability flaws and risks have been identified and understood, that the design has been optimised in order to reduce the risks, and that the residual risks are outweighed by the benefits that will arise from the launch of the product.

Our human factors consulting services

Human factors and human factors studies are an essential part of most submissions. Due to the somewhat fuzzy nature of human factors, it can sometimes be hard to find the ‘rules of the game’. Hopefully these myths help to illustrate that whilst some of the ‘rules’ are set in stone, it is perhaps more important to understand why the rules are there in the first place – the referee is human after all!

Want to find out how we could support your human factors validation activities? Let’s talk.