Cybersecurity: a beginner’s guide to secure medical device design

21 Apr 2020

A connected medical device is any portable medical device with some form of data connection to a partnered smart device or networked application. These devices are wirelessly connected to a cloud application that performs administration, data collection, or big data analysis.

Connected devices can be passive data gatherers (e.g. a blood pressure monitor or glucometer) or active (e.g. a wireless pacemaker). If such devices are not secured, private data can suffer unauthorised access (passive device) or, in the worst case, a patient’s physical health can be threatened by unauthorised agents (active device).

This article peels back the lid of a typical connected device and walks through its software and system design. Points of vulnerability to attack are identified, with advice on how to design a secure system.

What is cybersecurity? And why is it now a ‘thing’?

There is no single agreed definition of cybersecurity. The best guidance is perhaps that given by Wikipedia: “Cybersecurity, … is the protection of computer systems from the theft and damage to their hardware, software or information, as well as from disruption or misdirection of the services they provide”.

Smart wireless ‘connected’ devices are becoming ubiquitous, against a backdrop of growth in 5G and IoT. The threat to the security of device data is accelerating and hard to ignore. The risk of attack is further compounded by a general lack of threat awareness (manufacturers and consumers) and immature regulation.

Meanwhile, increasing complexity, cost, and time-to-market pressure are driving a supplier market for common hardware platforms and highly integrated chipsets. Increasingly, connected devices are designed around these homogenised platforms, resulting in the endemic reuse of common hardware, software, and the toolsets used to build them.

Whilst open platforms and easily accessible software provide a lifeline to productivity in a fast-moving market, they come with risk. The consequence is a proliferation of common vulnerabilities and know-how, and easy access for hackers to inspect and modify the running of devices and to gain access to device operations or sensitive data.

Worse still, the rise in wireless connectivity massively increases the vulnerability of devices. Hacking used to be an activity requiring physical proximity, but it can now be done remotely, at far lower risk, and on a one-to-many scale.

What are the threats?

The worst-case scenario of a cybersecurity breach is physical harm to the end user or patient. Aside from patient wellbeing, a breach can have dire commercial consequences for a manufacturer, and consequences of trust and reputation for stakeholders such as doctors, medical bodies, or institutions. Increasingly, as regulations are introduced and mature, and end customer awareness improves, manufacturers or organisations without rigorous cybersecurity measures could find themselves excluded from key regions and markets.

What’s the scope?

For a typical simple connected data device, the user interacts via a screen and buttons (the man-machine interface, MMI) or via a remote ‘connected’ wireless application (e.g. a web page). These are the interaction points where a user exchanges or manages data with the device.

If we were to peel off the lid and look inside the device, we would find a typical embedded architecture. Aside from any specific medical sensors or controls, the electronic building blocks would be similar for any typical medical or IoT consumer device.

At the heart of the device is a highly integrated microprocessor on a single chip. This chip runs the application software and interfaces to the MMI and sensor-control components. It also provides connectivity to the internet (cloud) or a paired wireless device.

Where are the vulnerabilities?

At any point where data is entered, stored, manipulated, or moved, it is vulnerable to attack. How can we mitigate the risks at these attack points?

USER LOGIN

Vulnerabilities

The biggest source of vulnerability is the point of user entry: the login access points. The simplest breach is to obtain or guess a user’s login credentials. With a bit more knowledge and persistence, a hacker can potentially ‘brute force’ a secret password; modern computing power can easily iterate through millions of guesses in a short time.

Mitigations

There are established good practices for managing passwords and their authentication, covering process and encryption technologies:

• No default passwords.
• Password strength: the stronger the password, the higher the resilience to brute-force attack.
• Passwords should not be stored on the device as clear text, but as irreversible cryptographic hashes (e.g. salted SHA-256). This prevents a hacker from ‘peeling open’ a device and discovering user access details (a minimal sketch of hashing and lock-out follows this list).
• Limit the number of password entry retries and force time lock-outs between attempts.
• Expiration and (remote) revocation of passwords.
• Inactivity logout.
• Explicit user roles and privilege levels.
• Biometric authentication.
• AI tracking of suspicious or unusual user behaviour.
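
As a minimal sketch of two of the points above, assuming the credential check runs in a companion application or device service written in Python (helper names here are illustrative, not taken from any specific product), salted password hashing with a deliberately slow key-derivation function plus a retry lock-out might look like this:

```python
import hashlib
import hmac
import secrets
import time

PBKDF2_ITERATIONS = 600_000   # deliberately slow, to blunt brute-force guessing
MAX_ATTEMPTS = 5              # lock out after repeated failures
LOCKOUT_SECONDS = 300

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these are stored, never the clear-text password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ITERATIONS)
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(candidate, stored)

class LoginGuard:
    """Track failed attempts and enforce a time lock-out between retries."""
    def __init__(self) -> None:
        self.failures = 0
        self.locked_until = 0.0

    def attempt(self, password: str, salt: bytes, stored: bytes) -> bool:
        if time.monotonic() < self.locked_until:
            return False                      # still locked out, reject without checking
        if verify_password(password, salt, stored):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.locked_until = time.monotonic() + LOCKOUT_SECONDS
            self.failures = 0
        return False
```

Even if a hacker extracts the stored salt and hash, each password guess now costs a full key-derivation run, and the lock-out throttles online guessing.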

DATA AT REST

Vulnerabilities

Data at rest refers to any data or information stored with some permanence, which could reasonably be extracted or substituted, using standard equipment or intrusive engineering techniques. Examples include data constants, measurement results, intermediate formulae variables, calibration tables, executable firmware, etc.

Such sensitive data, residing in some form of memory storage, presents a serious breach risk of unauthorised data access or a means to tamper with device behaviour.

Mitigations

The simplest and most robust mitigation is to design a system that keeps as much rest data as possible on-chip, whilst locking down all external read access and debug ports. Data should be kept within the boundary of a chip package, with no means to read it out via a pin protocol. This presents an extremely difficult barrier to access without highly specialised knowledge and equipment. Given enough energy and funds, it is possible to peel open a chip and probe internal data via an electron microscope. Even then, chip technology exists (especially specialised crypto key-management chips) to defend against such attacks and destroy data at the point of intrusion.

The next level of mitigation is to ensure all rest data is authenticated and encrypted by the application program. Authentication is a form of signature or hashing check, verifying that the data set is intentional and trusted: that it has not been corrupted or inserted by an unauthorised agent. Encrypted data is unintelligible without access to a decryption key.
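
As a hedged illustration of encrypting and authenticating rest data in one step, the sketch below uses AES-GCM via the Python `cryptography` package; on a real embedded device the same scheme would typically be implemented with the chip vendor’s crypto peripheral or library, and the function names here are illustrative:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_record(key: bytes, record: bytes, context: bytes) -> bytes:
    """Encrypt and authenticate a data-at-rest record (e.g. a calibration table)."""
    nonce = os.urandom(12)                 # unique per record; never reuse with the same key
    ciphertext = AESGCM(key).encrypt(nonce, record, context)
    return nonce + ciphertext              # store the nonce alongside the ciphertext

def recover_record(key: bytes, blob: bytes, context: bytes) -> bytes:
    """Decrypt and verify; raises InvalidTag if the stored data has been tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, context)

# usage sketch: the 256-bit key would come from secure on-chip key storage, not from code
key = AESGCM.generate_key(bit_length=256)
blob = protect_record(key, b"calibration: 1.0023, -0.0041", b"cal-table-v1")
assert recover_record(key, blob, b"cal-table-v1") == b"calibration: 1.0023, -0.0041"
```

The associated-data field (`context` above) binds the record to its intended use, so a valid encrypted blob cannot be silently swapped into a different storage slot.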

DATA IN TRANSIT

Data in transit is transported within a device, or between the device and data ports or other computer systems. Within the device, data moving between electronic subsystems and the microprocessor chip may also be called ‘data in use’. Data may also be in transit within the full device ecosystem, for example streaming data between the device and an external network or cloud.

Vulnerabilities

Data in use traverses subsystem chips on industry-standard digital buses, such as I2C or SPI. This data can be easily probed or manipulated with standard engineering know-how. Data can be stolen, or sensor readings can be spoofed to alter device behaviour.

Data in transit on a network connection, wired or wireless, is vulnerable to traditional eavesdropping or tampering, or a full denial of service (DoS) attack. The full networking security mitigations presented by commercial smart devices, IoT, and Cloud solutions are beyond the scope of this article. However, a designer must be aware that mature cybersecurity technologies already exist in this area and employ them appropriately.

The most ubiquitous low-power wireless link, Bluetooth Low Energy, already provides pairing authentication and data-link encryption which, when used following the correct guidelines, can also be validated for medical device use.

Mitigations

Data should always be encrypted while in transit, preventing it from being intercepted, stolen, or manipulated (falsified). To prevent eavesdropping, the encryption must be ‘end to end’: between an electronic subsystem and the microprocessor chip, or between the device and a peer network machine’s processor. Anywhere a data analyser can be placed on a data path, it must return only encrypted, unintelligible data.

Data ‘end’ points should also be mutually authenticated to ensure they have not been substituted or faked, typically using digital certificates or, on an embedded device, a challenge-response protocol (proof of a shared secret that is never sent in the clear).
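
A minimal sketch of that challenge-response idea, assuming both ends already hold a provisioned shared secret (helper names are illustrative; where a mature protocol such as BLE pairing or TLS is available, it should be preferred):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Verifier side: a fresh random challenge prevents replay of old responses."""
    return secrets.token_bytes(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Peer side: prove knowledge of the secret without ever transmitting it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# usage sketch: both ends hold the same provisioned secret
secret = secrets.token_bytes(32)
challenge = make_challenge()            # device -> peer
response = respond(secret, challenge)   # peer -> device
assert verify(secret, challenge, response)
```

Running the exchange in both directions gives mutual authentication, and the fresh random challenge stops an eavesdropper from simply replaying an old response.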

ENCRYPTION KEY MANAGEMENT

Best practice encryption uses known, published ‘strong’ algorithms, such as AES, which require secret key(s) (akin to a password) known only to the data provider (encryptor) and data consumer (decryptor). The encrypted data is only as secure as the key length (akin to the number of password ‘guesses’ required) and the secrecy of the keys.

For example, NIST recommends an AES key length of 256 bits (AES-256), which will outstrip hacking computer power (continuous ‘guesses’) for the reasonably foreseeable lifetime of a device manufactured today.

The key itself then needs to be secured. Any key placed on a device needs to reside in memory that cannot be discovered or probed, or in a specialised key-store chip. The line of risk stretches from design through to manufacture and in-field support, and mitigations to maintain secrecy are both technical and procedural, such as minimising the number of people who come into contact with full clear keys.
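
One common technical mitigation, sketched below with the Python `cryptography` package (labels and function names are illustrative), is to keep a single root key inside protected storage and derive narrowly scoped working keys from it, one per purpose, so the root key itself never needs to leave the key store:

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(root_key: bytes, purpose: bytes) -> bytes:
    """Derive a 256-bit working key from the device root key, bound to one purpose."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,                 # 32 bytes = an AES-256 key
        salt=None,
        info=purpose,              # e.g. b"data-at-rest" or b"link-encryption"
    ).derive(root_key)

# usage sketch: the root key would live in a key-store chip or protected on-chip memory
root_key = bytes(32)                               # placeholder only; never hard-code real keys
storage_key = derive_key(root_key, b"data-at-rest")
link_key = derive_key(root_key, b"link-encryption")
assert storage_key != link_key
```

If one derived key is ever compromised, the damage is limited to that purpose, and keys can be rotated without re-provisioning the root.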

FIRMWARE

A medical device will contain application-specific executable code, which performs the device functions in close harmony with the electronics design. Its integrity is fundamental to the behaviour, safety, and security of the device. The executable code and data sets permanently emplaced in the device build are known as the ‘firmware’.

Vulnerabilities

Software will always contain bugs. A bug is a fault in computer code which causes it to run contrary to the intended design, resulting in unexpected behaviour or results. Bugs represent a significant cybersecurity risk: they have the potential to expose critical data, or a means of accessing or deriving it. With the best will in the world, bugs are always going to surface during a product’s lifetime, when tens of thousands of units could be in use.

The risk of errant software is further exacerbated by the increasing reuse of code and third-party libraries (SOUP, software of unknown provenance). Increasingly, in addition to the main application processor and code, a device may also have additional microprocessors and firmware for dedicated peripherals such as smart sensors or wireless comms protocol stacks.

Software development tools rely on debug channels into the application microprocessor. Third-party SOUP items may also support instrumented debug modes, ‘back doors’, and default passwords. Hackers could exploit any of these to gain full access to, and control of, a device. Alternatively, a hacker could seek to replace entire code units, which can be a surprisingly trivial exercise on known common hardware platforms. Once trojan software is installed, a device could be repurposed for unauthorised (unsafe) use, stored data can be accessed, or systems and users could be tricked into revealing data.

Finally, any accessible data port is a potential point of access to internal software and data. Using known or malformed protocol patterns, a hacker could provoke a run-time error that disables security measures or reveals data.

Mitigations

Software safety:
• Firmware packages should only be distributed in encrypted form, and stored encrypted if located in off-chip memory.
• Firmware should be authenticated prior to execution, to ensure it is approved software and has not been tampered with (a verification sketch follows the lists below).
• Remove all logical and physical debug channels at manufacture (e.g. JTAG).
• Test, test, and test software prior to deployment to discover as many bugs and weaknesses as possible (IEC 62304 provides a framework for focussing software test coverage).
• Engage the services of a third-party penetration test facility to uncover device vulnerabilities, and seek expert advice as early as possible in the development cycle.

Maintenance:
• Monitor and assess customer reported bugs.
• Monitor SOUP items for bugs and vulnerabilities, using supplier bulletins and known vulnerability databases.

Upgrade and revocation:
• Provide means to remotely revoke or disable a (connected) device.
• Provide means to securely upgrade any firmware item on the device (ideally remotely; upgrades requiring physical access could be prohibitively costly).
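
As a hedged sketch of the ‘authenticate prior to execution’ point above, the example below verifies an Ed25519 signature over a firmware image using the Python `cryptography` package; on a real device this check would sit in a boot loader, ideally backed by a hardware root of trust, and only the public key ever ships on the device:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def sign_firmware(private_key: Ed25519PrivateKey, image: bytes) -> bytes:
    """Build-server side: sign the firmware image; the private key never ships on devices."""
    return private_key.sign(image)

def firmware_is_trusted(public_key_bytes: bytes, image: bytes, signature: bytes) -> bool:
    """Device side: verify the image against the embedded public key before booting it."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# usage sketch
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
image = b"\x00firmware image bytes\x00"
signature = sign_firmware(private_key, image)
assert firmware_is_trusted(public_bytes, image, signature)
```

The same check protects remote upgrades: a downloaded image that fails verification is simply discarded, and revocation can be handled by rotating the signing key or refusing images below a minimum version.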

Conclusions

Cybersecurity resilience starts from the ground up, when considering the system design of a medical device. All sources of critical data, and their data flows, need to be analysed and designed for safety. Just like established risk and hazard analysis, this needs to be an ongoing activity through the full design cycle, manufacture, and in-field maintenance and support. Effective cybersecurity cannot be delivered as an afterthought or ‘bolt-on’.

As medical devices become connected and more complex, and software content continues to grow substantially, managing cybersecurity can become a formidable task. Fortunately, the anticipated 2020 (5G-accelerated) growth explosion in industrial, transportation, and construction IoT devices offers a lifeline to the complex development of connected medical devices. Most IoT and connected medical devices share a common architectural platform and are constructed from the same hardware and software components. The security support that manufacturers increasingly provide for IoT is directly transferable to medical device construction.

Establish a security architecture, then explicitly review and select a supplier system-on-chip platform which meets these needs, with a longevity to match your medical device’s life. This may also include the growing range of vertically integrated, pre-secured supplier offerings, such as mobile reference applications, cloud infrastructure, or chains of trust for device firmware upgrades.

The growth of software reuse and homogeneous hardware platforms is, however, a double-edged sword. The same information and development toolkits enabling developer productivity are also available to the hacker community. Designers must be prepared to counterbalance the hacker’s effort-reward calculation with time and effort spent securing a system. Cybersecurity is not free: it comes at a cost and must be included in development plans.

Once a device has been designed to be secure, it needs friendly testing before being released into the hands of potential hackers. Do not just test that individual security functions do what they should; a device needs objective security testing across all access points, using any and all tools and knowledge available to hackers. Again, with the growth of IoT and connected devices, the availability of testing frameworks, penetration test facilities, and technical security assessment services is on the increase.

A device can never be guaranteed 100% secure. Code size, and the number of permutations to inspect and test, is ever increasing; any data path can be probed or spoofed, and any chip can potentially be removed or replaced. Managing cybersecurity is an exercise in risk management: closing as many doors to the hacker as possible, and providing a level of risk low enough to be acceptable to customers and end users.
