ISO20072: the device developer’s perspective

BS EN ISO20072:2009 is a relatively new standard covering the vast majority of inhalers, which the standard refers to as ADDDs (aerosol drug delivery devices). While it is never mandatory to comply with a standard, it is generally thought ‘highly advisable’ to attempt to adhere to this one. Ignoring a standard where one exists can lead to awkward questions during the submission process, and the view within the industry is that adherence to ISO20072 ‘will likely be necessary for manufacturers seeking approval to market inhaler devices in the European Union (EU)…[and] the FDA has expressed support in general terms for the ISO process’ (Mitchell & Nerbrink, 2012)1.

This standard is part of the ‘risk based’ approach to design verification that is becoming more and more important in medical device design. Although some aspects of the described testing are mandatory, the standard provides far more flexibility for developers to dictate what and how they test. As ever, this relies on developers to justify their decisions.

The standard is also intended to separately verify device performance outside of the wider pharmaceutical testing for combination product performance. This has led to a significant amount of confusion within the industry and some highly vocal criticism of the standard, as can be read in the IPAC/EPAG document2.

This article is a view from the ‘front line’ of device development, and attempts to answer the key questions below honestly and practically. The answers may not solve all your problems – and may even conflict with the views of other experts – but this is always the case while a standard continues to ‘bed in’ and there is no industry consensus on how best to interpret it.

 
What is the Device Functionality Profile (DFP)?

The standard describes the DFP (Device Functionality Profile) as being ‘based on the outcome of the risk assessment’. We have found it more useful to consider the DFP as a statement of what your device needs to do in order to ‘work’. After all, the risk assessment is an analysis of how the device might not ‘work’.

Because of this, the DFP has only an oblique relationship with the product requirements. Important success criteria will probably be duplicated, such as the maximum user forces, but important component interactions and device states are also likely to be included. The bottom line is: if you can confirm that all items in the DFP have been achieved (either yes/no or within an acceptable range) the device should be ready to emit its dose in the correct fashion.

 
What about the relationship of the DFP with the risk assessments?

There are two key ways in which the DFP interacts with the risk assessments:

  • New functions (possibly unexpected or unwanted ones) or critical component states can be discovered within the risk assessment and added to the DFP. For instance, you could state in the DFP that an actuation lever must not be capable of being rotated into a discovered incorrect position.
  • New pre-conditions can be uncovered in the risk assessment and added to the list of tests. Two common examples would be compression tests (on the assumption that your customer treads or sits on the device) and ‘bulk transport tests’, which are not in the standard, but which regularly appear in risk assessments.

 
What about the relationship of the DFP with pharmaceutical testing?

The device-only functions described above are relatively straightforward to plan for. However, the standard also states that the DFP should include a ‘system verification test’, which ‘comprises either emitted mass or dose testing, as determined from the risk assessment’. This should be expected as, after all, the primary function of the device is to deliver the correct dose. This section in particular has led to much of the confusion with – and potential rejection of – the standard as it intrudes directly into the already prescribed practices for pharmaceutical performance testing of the product2.

Although it may be considered to be dodging the issue, we have generally found that this part of the standard should be left out of the device verification programme completely until the contradiction is resolved in later versions. Justification can be drafted that pharmaceutical testing of the product is carried out separately and that it covers this aspect of the standard.

 
Do all the device functions need to be in the DFP?

Due to the link to the risk assessments, all functions that are critical to the operation of the device should be included in the DFP. If the risk assessment shows that there is no unacceptable harm from a function failing, then the function need not be tested.

However, the justification for excluding a function will need to be carefully considered. We have found it more common to want to exclude functions that are likely to fail under the environmental pre-conditions (see more below). While understandable, this requires the development team to justify that the function is not critical.

 
Why are the environmental conditions so tough…what about electronics?

Although not particularly well documented, the intention of the standard is that you test the device in relation to the environmental limits of the label claim for the product. The stated conditions are included as a guide if the developer has yet to define the label claims.

However, although testing to the label claim is a good starting point, the standard’s link to the risk assessment means the developer will likely need to test the device at a wider range of temperatures than the label claim states. It may be considered an acceptable risk to receive a low dose after the inhaler has been left on the apocryphal ‘dashboard in a car in Florida’, but it won’t be an acceptable risk if the device could lock up completely, thereby permanently ceasing to function.

We have found that electronics pose a particular challenge for adherence to ISO20072. Moving the device from hot, humid conditions to cold, dry conditions — and back again — can lead to internal condensation and failure of these electronic functions. This is especially true where devices have been designed before ISO20072 was released.

 
How does one control the test environment to the required accuracy for these tests?

The standard states that ‘unless otherwise specified, test measurements shall be performed at standard atmospheric conditions’, which are 25°C and 60% RH with relatively tight tolerances (±2°C and ±5% RH). The design team needs to think about the ability of the test facility to reliably maintain these conditions for, potentially, a number of months for a typical ADDD verification programme. This presents a real challenge for the climate control (and monitoring) capabilities of the proposed test facility, and covers many aspects of laboratory climate control including:

  • Uniformity of the atmosphere across the lab – the prevention of ‘tropical hot-spots’ for example, or whether there is a good distribution of humidified air throughout the lab (rather than just a humidification ‘loop’ in the region of the humidifier).
  • The lab design, particularly air exchange with the outside – the effect of lab doors opening due to operator traffic, for example, and how well sealed the lab is to prevent leakage around windows, doors, light fittings and so on.
  • Fume hood extraction – many modern labs have extracting fume hoods, which can make climate control even more challenging. If any aspect of DFP assessment requires the use of a fume hood, then the design of the room and climate control system needs to be able to compensate for the volume of air being extracted from the room.

It is fair to say that a combination of these factors makes the maintenance of the required test environment incredibly challenging for many laboratories, especially those built to maintain environmental conditions set according to much less stringent boundaries (such as ISO11608 for liquid-based pen injector systems). Remedying this discrepancy potentially requires a great deal of investment in the test facility.
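
Where continuous monitoring data is available, it can also help demonstrate that the conditions were actually held over a months-long programme. As a minimal sketch – assuming a hypothetical CSV export from the lab’s monitoring system with timestamp, temperature and humidity columns – the following flags any logged excursions outside the 25°C ±2°C and 60% ±5% RH window:

```python
import csv

# Standard atmospheric conditions quoted in the standard (see above).
TEMP_TARGET, TEMP_TOL = 25.0, 2.0     # degrees C
RH_TARGET, RH_TOL = 60.0, 5.0         # % relative humidity

def find_excursions(log_path: str) -> list[dict]:
    """Return rows from a climate log whose temperature or relative humidity
    falls outside the tolerance window. Assumes a hypothetical CSV export with
    'timestamp', 'temp_c' and 'rh_pct' columns; adapt to your own monitoring
    system's format."""
    excursions = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            temp, rh = float(row["temp_c"]), float(row["rh_pct"])
            if abs(temp - TEMP_TARGET) > TEMP_TOL or abs(rh - RH_TARGET) > RH_TOL:
                excursions.append(row)
    return excursions

if __name__ == "__main__":
    for row in find_excursions("lab_climate_log.csv"):
        print(row["timestamp"], row["temp_c"], row["rh_pct"])
```

Any flagged excursions would then need to be assessed for their impact on the tests running at the time.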

 
How does one verify the function of internal mechanisms that are not visible?

This is a difficult question and the answer will depend on the device. We have solved the problem in two separate ways:

  • You can modify the device so that the mechanism can be seen (through the use of clear materials on case-work parts, or cut-aways). You will need to justify why these modifications do not affect the results of the verification testing.
  • You can imply the successful operation of the mechanism through its end effect, and by considering what observable effect may occur if the mechanism does not function. If the only observable effect is dose accuracy, you may have to ensure that the operation of the mechanism is tested adequately in the pharmaceutical testing, further complicating the link between the device testing and the pharmaceutical testing.

 
How many devices do I need to test?

As with any statistical methodology, there is no ‘correct’ answer to this. The key factor is whether the test produces measurable ‘variable’ results or pass/fail ‘discrete’ results. If you can construct the DFP to specify ‘variable’ results, you should be able to use sample sizes of 20-30 and achieve suitable data. If you have ‘discrete’ results, you will need sample sizes of 200-300 to achieve a similar level of confidence in the results.

Although it is not mandatory to use only ‘variable’ or only ‘discrete’ tests, you should typically try to keep the tests all of one type. If you have any ‘discrete’ results you will need to test hundreds of devices, whereas ‘variable’ results are typically more challenging and take more time to measure. If you are forced into a large sample size because of one particular ‘discrete’ result, this can lead to a very long (and expensive) test programme.
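
To illustrate why ‘discrete’ results push sample sizes into the hundreds, the sketch below applies the generic ‘success run’ (zero-failure) calculation, n ≥ ln(1 − C)/ln(R), for demonstrating a reliability R at confidence C with no observed failures. The reliability and confidence targets used here are assumptions for illustration only; the levels you actually need should come from your risk assessment and the sampling scheme defined in the standard.

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Minimum number of pass/fail ('discrete') samples needed to claim the
    given reliability at the given confidence when zero failures are observed
    (the classic 'success run' calculation: n >= ln(1 - C) / ln(R))."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

if __name__ == "__main__":
    # Illustrative targets only: the reliability and confidence you actually
    # need must come from your own risk assessment and statistical rationale.
    for reliability in (0.95, 0.99):
        n = zero_failure_sample_size(reliability, confidence=0.95)
        print(f"{reliability:.0%} reliability at 95% confidence -> n = {n}")
    # Prints n = 59 and n = 299: 'discrete' acceptance criteria quickly push
    # the programme into hundreds of devices.
```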

The variability of the test procedure is an important factor when deciding how many devices to test. The standard uses ‘k-factors’ to modify the acceptance criteria in the DFP so that they are valid for the chosen sample size. If a smaller sample size is selected, the standard effectively requires you to show a much smaller variation in results for them to be deemed acceptable. Smaller sample sizes may therefore fail due to variation in the test method rather than in the device. In this situation, Measurement Systems Analysis (or a similar technique) should be used as early as possible to determine the variability contributed by the apparatus and the operator, which you may need to account for by increasing the sample size.
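
To make the effect of the k-factor concrete, the sketch below computes the widely used one-sided normal tolerance-interval k-factor for a range of sample sizes. This generic calculation is not taken from the standard’s own tables; it is shown only to illustrate how the factor grows as the sample shrinks, and the coverage and confidence levels are assumptions.

```python
from math import sqrt
from scipy.stats import nct, norm

def one_sided_k_factor(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """One-sided normal tolerance-interval k-factor: the acceptance criterion
    becomes 'sample mean +/- k * sample standard deviation must stay inside the
    specification', so a larger k forces the results to show less variation."""
    delta = norm.ppf(coverage) * sqrt(n)               # non-centrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / sqrt(n)

if __name__ == "__main__":
    # Coverage and confidence levels here are illustrative assumptions,
    # not values taken from the standard's own tables.
    for n in (10, 20, 30, 100):
        print(f"n = {n:3d}  ->  k = {one_sided_k_factor(n):.2f}")
    # k falls as n grows: with only 10 devices the measured results must sit
    # much further inside the specification limits than with 100 devices.
```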

 
How does one deal with the requirements of the standard that are not part of the DFP test?

Sections 5.1 and 8 of the standard appear to be related to the bulk of the described tests only by virtue of their inclusion in the same document. They include other, more general requirements, and as such are a mix of specific device requirements (such as remaining dose indication), requirements more closely related to the complete product (such as secondary packaging markings), and requirements related to the product development process (such as justification for material selection).

These sections have an uncertain relationship with the DFP aspect of the standard. Some of the requirements (such as dose indication) are likely to form part of the DFP, unless you can justify why accurate dose indication is not critical functionality and has no associated risks. Some requirements only indicate the recommended standard for software or electronic testing and risk complicating the developer’s validation documentation.

In addition, several required areas (such as the IFU content) are likely to be completed after design verification.

Because of these factors, we have generally recommended that the majority of these requirements can be covered in a properly planned and documented project review after completion of the DFP testing, and in isolation from it. This allows the developers to ‘tick off’ that they have met these additional requirements with the minimum of cost.


References

  1. Mitchell, J P and Nerbrink, O, Comparison of ISO Standards for Device Performance; 20072 and 27427: A Critical Appraisal, Journal of Aerosol Medicine and Pulmonary Drug Delivery, August 2012, 25(4), pp. 209-216.
  2. International Pharmaceutical Aerosol Consortium on Regulation and Science (IPAC) and European Pharmaceutical Aerosol Group (EPAG), Justification of the Request for a Negative Vote on ISO DIS 20072, Aerosol Drug Delivery Device Design Verification – Requirements and Methods, 2008.
