10 human factors study myths: 6 to 10
This is part two of Rob’s 10 human factors study myths. Click here to read myths 1 to 5.
Myth 6: Numbers are needed
We have seen that people, not unreasonably, want to set numerical acceptance criteria, for example, ‘95% of the user population should be able to complete all the tasks without committing a use-error’. After all, this approach is fundamental to much of the objective testing that we already do: you set an acceptability threshold so you can objectively conclude whether your product has passed or failed.
However, this approach cannot be applied to human factors studies. There is no concept of an ‘acceptable failure rate’. If we applied this concept, it would imply that a few ‘casualties’ are acceptable, when clearly they are not. All serious use-errors made (or nearly made) have to be presented, explained and justified as ‘acceptable residual risk’.
Ron Kaye, the former head of the Human Factors division at the FDA’s CDRH, who was the driving force behind much of the guidance we see today, said, “It’s a game of words, not numbers”, and has likened the human factors submission to summing up in front of a jury.
Myth 7: Positivity is persuasive
If we see positives in a study – for example, ‘after training no errors were seen’, or ‘all participants successfully completed all steps on their first or second attempt’ – it is tempting to brag about these in the study report. We may also be tempted to highlight how many users gave the product a 5-star rating! Whilst we should have a positive mindset when presenting the case for a product’s safety and effectiveness, that does not mean we should be seeking out these positive ‘trends’.
Yes, the product is good, safe and effective – this is almost an unwritten assumption. You can start your validation report or summary report with a statement that the product is safe and effective in the hands of intended users. However, the regulators are interested in the errors that people make when using the product, and how critical those errors are, so that they can judge for themselves the risks of the product going onto the market.
Myth 8: Critical errors kill (the product)
‘Criticality’ of errors is fundamental to the human factors process. However, just because you see critical use-errors in a study does not necessarily mean that your product is going to be rejected.
Any critical errors need to be investigated (root cause analysis). You may be able to argue that the study environment caused the error, or that the error is unavoidable in that class of device. The important point is to demonstrate that nothing more can be done to the instructions or the device to reduce these errors. Critical errors don’t signal the death of the device.
Myth 9: No errors = no worries
It seems obvious that if we don’t see any users making mistakes in a study then we are home and dry – surely! Unfortunately, seeing no use-errors may not be enough. The product may still have a design flaw which could lead to serious harm.
Let’s take the example of the cyclists using a badly designed cycle path – if we asked 60 cyclists to each test the cycle path we probably wouldn’t see any of them actually crash, but that doesn’t mean it is an acceptable design. If we watched thousands of cyclists we probably would see some accidents, and possibly some quite nasty ones.
Most human factors studies use relatively small sample sizes (validation studies require 15 participants per distinct user group). In a validation study you may test your device with 60 participants (assuming four user groups). This may not be enough participants to actually see errors, but in the example of the cyclists you may see people swerve, brake or utter words of despair. More importantly, if you asked the cyclists the right questions, you would learn that the bollards were causing a problem which could be potentially serious.
In just the same way, you need to show the regulators that you have asked in-depth questions about any confusion or difficulty with using your device. You need to explore whether the design is good enough and check that it cannot be improved.
Don’t assume that low-occurrence errors aren’t important, or that the regulators will allow them because of their low numbers. Arguably these are of most interest to the regulators, since they are the easiest ones to slip through the net.
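The sample-size point above can be sketched numerically. If each participant independently commits a given use-error with some per-session probability, the chance of a study seeing zero occurrences is easy to work out. This is an illustrative back-of-the-envelope calculation only – the error rates below are hypothetical, not drawn from any guidance or real study.

```python
# Illustrative sketch: probability that a study of n participants sees
# zero occurrences of a use-error, assuming each participant
# independently commits the error with per-session probability p.
# The rates used below are hypothetical, chosen only to show the trend.

def p_zero_errors(p: float, n: int) -> float:
    """Chance that none of n participants commit an error of rate p."""
    return (1 - p) ** n

n = 60  # e.g. a validation study with four user groups of 15
for p in (0.05, 0.01, 0.001):
    print(f"error rate {p}: P(no errors in {n} users) = {p_zero_errors(p, n):.2f}")
```

Under these assumptions, a one-in-a-thousand error – still potentially serious once the product is used at market scale – would go unseen in the vast majority of 60-participant studies, which is why probing questions matter more than counting failures.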
Myth 10: Adherence (to the human factors process) = approval
In other parts of product submission and approval, it is all about following the right path, doing the necessary tests, and ensuring all those compliance boxes can be ticked. The human factors process does not work in quite the same way.
The HF guidance is good guidance. Having practised human factors in a number of industries, I know that it doesn’t ask you to do things for the sake of it, and its essence is very fair – it is all about demonstrating that you have understood the usability issues and done all you can to fix them.
But following the guidance does not guarantee approval. While the guidance should be followed (unless there is a good reason not to), it is there to help you to meet the higher level objective, which is to provide a narrative to the regulators.
Your narrative should argue that the usability flaws and risks have been identified and understood, that the design has been optimised in order to reduce the risks, and that the residual risks are outweighed by the benefits that will arise from the launch of the product.
Human factors and human factors studies are an essential part of most submissions. Due to the somewhat fuzzy nature of human factors, it can sometimes be hard to find the ‘rules of the game’. Hopefully these myths help to illustrate that, whilst some of the ‘rules’ are set in stone, it is perhaps more important to understand why the rules are there in the first place – the referee is human after all!