ChatGPT: benefits and challenges of its use in healthcare

11 Apr 2023 | 6 min read

One of the most exciting recent developments in artificial intelligence (AI) is the release of ChatGPT (Chat Generative Pre-trained Transformer) by OpenAI in November 2022. This cutting-edge chatbot can answer almost any question and interact in a conversational way, thanks to its ability to process and understand natural language using deep learning. With 175 billion parameters and training on a massive dataset drawn from various sources, ChatGPT is just one example of the Large Language Models (LLMs) that are driving innovation in the field of AI.

Since its release, ChatGPT has gained immense popularity, prompting other tech giants such as Microsoft, Google and Meta to develop their own AI products. Microsoft has invested $10 billion in OpenAI and is using the partnership to launch Bing AI, an AI-powered search engine. Google’s AI chat service, Bard, functions similarly to ChatGPT but draws its information from the live web, and it already has a waiting list of users eager to try it. Meanwhile, Meta’s latest AI language model, LLaMA, is claimed to outperform ChatGPT despite having far fewer parameters. A leaked version of LLaMA has sparked an explosion of development, with versions of the model running on MacBooks and smartphones, making it more accessible than other LLMs.

ChatGPT and AI: improving the efficiency and accessibility of healthcare

ChatGPT and other generative AI applications have numerous potential benefits in the healthcare industry, particularly in improving the efficiency and accessibility of healthcare services. These include:

  • Research: they could be used to accelerate research and scientific publishing by analysing large volumes of data, conducting comprehensive literature reviews, and generating evidence and content. However, ChatGPT is static: it was trained on data only up to 2021, so it will not provide the latest references or the most up-to-date research.
  • Assist with clinical decisions: these models have access to far more data than any single trained healthcare professional could retain. AI models can use that knowledge to extract relevant information based on a patient’s symptoms and suggest a clinical decision in seconds. At the beginning of March, Glass Health announced the launch of Glass AI 2.0, which combines LLMs with a clinical knowledge database and can generate a differential diagnosis (DDx) or a clinical plan when given a diagnostic problem. However, such tools can make mistakes, and a skilled doctor still needs to review and confirm their decisions.
  • 24/7 medical assistance: providing instant responses to patients’ queries about their health, symptoms and medical conditions. AI technology can assist patients in self-diagnosis and can provide relevant health advice based on their symptoms. This can reduce the burden on healthcare professionals and improve the accessibility of healthcare services, particularly for patients who live in remote areas.
  • Patient monitoring and care management: AI technology can assist healthcare providers in monitoring and managing patients’ health. It can remind patients to take their medications, track their vital signs and provide recommendations for lifestyle changes.
  • Assist with mundane tasks: generative AI can help healthcare professionals with routine work such as summarising a patient’s interactions and medical records, and can help retrieve relevant patient data quickly and efficiently (see the sketch after this list).
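
To make the “mundane tasks” point concrete, below is a minimal sketch of clinical-note summarisation using the OpenAI Python library (the pre-v1 ChatCompletion interface that was current when this article was written). The model choice, prompt wording and sample note are illustrative assumptions rather than a production design; a real deployment would need de-identification, clinician review and the safeguards discussed in the next section.

```python
import os

import openai  # OpenAI Python library, pre-v1 "ChatCompletion" interface

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical, fully de-identified encounter note. A real system must
# strip all patient identifiers before any data leaves the provider's
# control (see the data privacy challenge in the next section).
note = (
    "Patient reports intermittent chest tightness on exertion for two "
    "weeks. History of hypertension; currently on lisinopril. Resting "
    "ECG unremarkable."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # favour deterministic, conservative output
    messages=[
        {
            "role": "system",
            "content": (
                "You are an assistant that summarises clinical notes "
                "into three concise bullet points for a clinician."
            ),
        },
        {"role": "user", "content": note},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Setting `temperature=0` keeps the summary as repeatable as possible, which matters more in a clinical context than creative phrasing; even then, the output is a draft for a clinician to verify, not a record to file automatically.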
[Image: phones displaying a digital health app]

Challenges in the application of generative AI in healthcare

Overall, the application of generative AI in healthcare has the potential to revolutionise the industry, improving patient outcomes and access to healthcare services, but it also raises several major challenges, such as:

  • Data privacy and security: the use of AI technologies in healthcare raises concerns about the privacy and security of patients’ sensitive health data. Healthcare providers need to ensure that patient data is securely stored and protected from unauthorised access. At the end of March, OpenAI announced a data breach in which users’ chat histories were exposed to other users. The breach was caused by a bug in an open-source library, and ChatGPT was taken offline while the bug was fixed. This highlights the importance of threat modelling and penetration testing in medical AI systems.
  • Bias and discrimination: large language models like ChatGPT are trained on vast amounts of data, which may contain biases or discriminatory language. This can lead to biased recommendations or inaccurate diagnoses. It’s essential to monitor and mitigate these biases in AI systems used in healthcare.
  • Legal and ethical considerations: the use of AI in healthcare raises legal and ethical questions, including issues of liability, informed consent and the responsible use of patient data. Healthcare providers need to ensure that their AI systems comply with relevant laws and regulations.
  • Human interaction: while AI technologies like ChatGPT can be useful in providing information and support, they cannot replace the human touch in healthcare. Patients may still need human interaction and empathy from healthcare providers, particularly in the case of serious or complex medical conditions.

“How about the ultimate healthcare science-fiction fantasy, where a machine in your doctor’s surgery would carefully listen to your symptoms and analyse your medical history? Can we dare to imagine a machine that has a mastery of every scrap of cutting-edge medical research? One that offers an accurate diagnosis and perfectly tailored treatment plan?”

Hannah Fry, Author of 'Hello World'

These thoughts, written by Hannah Fry in her book Hello World (published in 2018), no longer seem like a ‘healthcare science-fiction fantasy’; instead, they are a possible reality.

What is the future of LLMs in healthcare?

The impressive performance of ChatGPT and other LLMs, combined with the possibility of running them on commodity hardware, opens up endless possibilities for AI technology. This is just the beginning: with increased computing power, improved models and more data, these technologies will only get better.

However, there are also significant challenges to consider, including data privacy, bias and ethical considerations. Understanding the limitations of this technology is crucial, since its output will only be as accurate as the data it was trained on. LLMs can also ‘hallucinate’: they can generate responses that are plausible but entirely incorrect. This underscores the importance of human involvement and critical thinking, especially in the healthcare sector, where trust between physicians and patients is fundamental to the profession.

Join the conversation

Looking for industry insights? Click below to get our thoughts and opinions on the world of medical devices and healthcare.