Getting products to market in our regulated industry can take what feels like a lifetime. While working on these long-term projects it’s important to keep abreast of new technology that can aid designers, engineers and researchers. I’ve pulled together four examples of developments that have captured our interest recently – some that we’ve seen at conferences and some that we’ve had the chance to trial on client projects.
Virtual reality (VR) and augmented reality (AR)
Virtual reality (VR) and augmented reality (AR) have experienced something of a resurgence of late, partly due to the reduced costs of the hardware involved and partly because of the increased availability of AR apps for smart devices.
For example, last year IKEA launched an app which allows users to superimpose digital models of their products into their home at scale using the camera and screen of their smart device. This is an incredibly useful tool for visualising how something will look in context before making a purchase.
During a presentation at the recent IDSA Medical Design Conference in Boston, a demonstration was given centred on a physical model of an infusion stand with a digitally superimposed console computer. Rather than using a mobile device, the design team used an AR headset to create a more immersive experience and to free the designer's hands to interact with the model. I was struck by how easy it would be to explore multiple concepts compared with traditional physical model-making techniques, and how this system could be used to evaluate ideas in context (say, at a hospital).
Personally, I don't yet see great utility in headsets compared with mobile devices for AR – the technology is still too cumbersome, both physically and in terms of the process required to deploy a model to the headset. I think the potential for this tool isn't limited to large-scale products – it's also useful on smaller products where an element of a user interface needs to be tested in context. For example, it might be possible to superimpose a number of concepts for buttons and flags onto a single block model of an auto-injector. Crucially, hybrid prototypes like this could be used to explore the timing aspect of user interfaces without the need to re-engineer a mechanism each time (e.g. an end-of-dose indicator).
Additive manufacturing – Carbon 3D
Back in the barn at Team, we've made use of 3D printing technology for many years to quickly produce prototype parts. However, the parts often require lots of finishing and can have poor material properties between the build layers. For many, the Holy Grail of additive manufacturing is to create a part straight from CAD, with perfect properties, without the need for any post-processing. We may be some way away from achieving that goal, but we're beginning to see some interesting innovations which point in that direction.
Carbon 3D, for example, do not see their printers as design tools but as manufacturing equipment designed to plug the perceived gap between injection moulding and short-run machining. Their technology creates parts with excellent mechanical properties thanks to their unique process of light-curing resin through the thickness of the material. They also offer elastic and reinforced materials, plus the ability to combine them in a single component (thereby mimicking co-moulding). Currently, however, post-processing remains an issue and limits applications to parts produced in the 100s and low 1000s.
Whilst this technology may not be appropriate for the high volumes associated with many drug-delivery devices, one could imagine that lower-volume applications such as surgical equipment could benefit from additive manufacturing technologies soon, although the success of this process is still to be proven.
Tools for data capture and analysis
I was intrigued by another IDSA conference presentation on new methods of advanced data capture used to optimise the human factors performance of medical devices. The methods in question made use of advances in sensor technology and data analysis to provide detailed and nuanced feedback on device use. The presenter contrasted this with current HFE research methods (including questionnaires and interviews), which have remained unchanged for 100 years. The testing methodologies being explored include 3D spatial tracking, high-performance eye-tracking, micro-facial expression analysis, EEG (electroencephalography) and cognitive workload analysis. Together, the presenter hoped, these and other techniques would form the basis of a “modern neuroscience-based HFE testing platform”.
3D spatial tracking involves the instrumentation of candidate devices to record motion at the sub-millimetre level. It was demonstrated how this technology could be used to track the micro-motion of a needle tip beneath a patient’s skin and we saw how device configuration and injection site yielded very different results.
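To illustrate the kind of analysis such sub-millimetre tracking enables, here is a minimal sketch (with entirely hypothetical position data – not from the presentation) of summarising a needle-tip track into total path length and net displacement:

```python
import math

def motion_summary(track):
    """Summarise a track of 3D tip positions (list of (x, y, z) tuples in mm).

    Returns (total path length, net displacement), both in mm: the former
    captures micro-motion along the way, the latter only start-to-end travel.
    """
    # Sum of Euclidean distances between consecutive samples
    path_length = sum(math.dist(p, q) for p, q in zip(track, track[1:]))
    # Straight-line distance from first to last sample
    net_displacement = math.dist(track[0], track[-1])
    return path_length, net_displacement

# Hypothetical sub-millimetre needle-tip track (mm)
track = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.2, 0.0), (0.0, 0.2, 0.1)]
path_mm, net_mm = motion_summary(track)
```

A large gap between path length and net displacement would indicate wobble at the injection site – exactly the sort of difference between device configurations the presenter described.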
Eye-tracking is a technology we have experience of using on a recent project here at Team; using a pair of glasses with an integrated camera, we were able to record a ‘participants’-eye-view’ during a series of formative studies centred on the use of a medical device and associated instruction manual (IFU). Combined with traditional interview-style data capture methods, analysis of the footage allowed us to better determine the root cause of use errors. ‘Heat maps’ of participants’ eye position on the IFU also allowed us to assess whether they had viewed critical content.

Micro-facial expression analysis involves videoing participants during the use of a medical device and then using an algorithm to track the relative distance between various landmarks on the face. The aim is to quantify the non-verbal expression of emotions during device use.
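The gaze ‘heat maps’ mentioned above boil down to binning fixation points into a grid of view counts. A minimal sketch, using hypothetical fixation coordinates on an A4-sized IFU page (the page dimensions and grid resolution are illustrative assumptions, not our actual analysis pipeline):

```python
def gaze_heatmap(fixations, page_w=210.0, page_h=297.0, nx=21, ny=30):
    """Bin gaze fixation points (x, y), in mm on the IFU page, into a
    coarse grid of view counts – a simple 'heat map'."""
    heat = [[0] * nx for _ in range(ny)]
    for x, y in fixations:
        col = min(int(x / page_w * nx), nx - 1)  # clamp edge points into grid
        row = min(int(y / page_h * ny), ny - 1)
        heat[row][col] += 1
    return heat

# Hypothetical fixations: three clustered on a warning box, one elsewhere
fixations = [(50, 40), (52, 41), (51, 39), (150, 250)]
heat = gaze_heatmap(fixations)

# Cells with at least one fixation – a crude measure of page coverage
cells_viewed = sum(1 for row in heat for c in row if c > 0)
```

Checking whether the grid cells covering critical content have non-zero counts is then a direct way to assess whether a participant viewed that content.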
Whilst these techniques have the potential to provide unique insights into usability issues, the presenter did not shy away from some of the challenges they present – most notably the expertise required to understand and interpret the data. I think the key to their successful use is having a very clear idea of what you want to know and interfering as little as possible with normal device use. One startling image showed a participant self-injecting with a device trailing wires back to a computer whilst being filmed, which I would argue is unlikely to elicit natural behaviour.

Both talks demonstrated how the barrier to entry for some of these new tools and techniques is dropping due to reduced costs of hardware, increased processing power of smart devices and developments in data handling and analysis. What's less clear at this stage is whether these new ways of working are actually better, and whether they'll result in better products. The key will undoubtedly be to understand when the use of these tools is appropriate.
Looking further ahead – I've been reading recently about the role that automation and Artificial Intelligence (AI) might play in the workplace of the future. The automation of ‘dull, dirty and dangerous’ tasks has been moving apace for several decades, but it's becoming apparent to those of us who feel safe in ‘highly skilled’ roles that the robots will soon be coming after our jobs too. Advances in AI and machine learning are leading to the creation of increasingly sophisticated algorithms that can diagnose an illness, decide whether to extend a loan, identify a legal precedent or optimise the design of a mechanical component.
MIT researchers, working with Columbia University, have recently unveiled a tool that operates in conjunction with CAD drafting software to generate product designs optimised for different metrics. They claim the tool will make it easier and more efficient for designers and engineers to explore the many compromises that come with designing new products.
As an example, they have demonstrated how the tool can be used to generate alternative geometries for the humble spanner based on optimisation for mass, stress and force/torque. It's easy to see from the visualisations produced by the software that the familiar spanner design represents the best trade-off between all three.
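The trade-off exploration at the heart of such tools can be sketched with a simple Pareto-front filter: keep only the designs that no other design beats on every metric at once. The spanner variants and their scores below are entirely hypothetical, purely to illustrate the idea:

```python
def pareto_front(designs):
    """Return the non-dominated designs.

    Each design is (name, mass_g, peak_stress_MPa, max_torque_Nm);
    lower mass and stress are better, higher torque capacity is better.
    """
    def dominates(a, b):
        # a dominates b if it is no worse on every metric and better on one
        no_worse = a[1] <= b[1] and a[2] <= b[2] and a[3] >= b[3]
        better = a[1] < b[1] or a[2] < b[2] or a[3] > b[3]
        return no_worse and better

    return [d for d in designs
            if not any(dominates(other, d) for other in designs)]

# Hypothetical spanner variants: (name, mass g, peak stress MPa, torque N·m)
candidates = [
    ("slim",     80, 310, 40),
    ("standard", 120, 220, 60),
    ("heavy",    200, 180, 62),
    ("poor",     210, 260, 35),   # beaten by "standard" on all three metrics
]
front = pareto_front(candidates)
```

Everything on the front represents a genuine compromise – lighter but more stressed, or heavier but stronger – which is exactly the set of trade-offs the MIT tool visualises for designers.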
Whilst I can see the benefit of using such a tool for balancing functional requirements, more intangible user requirements may be much more difficult for an algorithm to consider… for now.