Regulation by design: designing compliant AI health tools
In the fast-moving world of digital health, 2026 has already delivered some milestone moments.
In January 2026, OpenAI launched ChatGPT Health. The new direct-to-consumer tool offers a ‘dedicated, privacy-protected space’ within ChatGPT that integrates users’ uploaded personal health data to provide improved wellness guidance. ChatGPT Health is part of OpenAI’s vision of General Purpose AI (GPAI) as an ally to healthcare, positioning large language models (LLMs) as essential partners in clinical workflows. The disclaimer “not intended for use in the diagnosis or treatment of any health condition”, however, positions ChatGPT Health firmly as a wellness and lifestyle tool – a ‘safer’ choice given documented concerns that LLMs can still produce severely harmful medical advice at non-trivial rates.
Soon after, Amazon One Medical launched an AI-powered Health Assistant integrated directly into its primary care platform, giving members 24/7 personalised health guidance grounded in their medical records, with built-in escalation to human clinicians when needed. Amazon emphasised its HIPAA-compliant design, clinical oversight and the boundaries it draws around diagnosis and treatment. Large technology providers are gradually designing to regulatory and patient-safety constraints rather than operating around them.