Each week, an estimated 230 million people turn to ChatGPT with health-related questions, according to OpenAI. This surge in AI-driven health information seeking comes as OpenAI launched its ChatGPT Health product earlier this month. The central question now is whether the inherent risks of using AI for medical advice can be minimized enough to yield a net positive impact on public health.
The rise of AI chatbots for health information follows a trend of individuals seeking medical advice online, a practice often referred to as "Dr. Google." For two decades, searching symptoms online has been a common first step for those experiencing medical issues. However, large language models (LLMs) are increasingly becoming the tool of choice for many.
The implications of using AI for health advice are significant. While AI offers the potential for quick and accessible information, concerns remain about accuracy, data privacy, and the potential for misdiagnosis or inappropriate self-treatment. Experts emphasize the importance of verifying AI-generated health information with qualified medical professionals.
Meanwhile, in the United States, a battle is brewing over the regulation of artificial intelligence. Tensions escalated in late 2025 when, after Congress twice failed to pass legislation banning state AI laws, then-President Donald Trump signed an executive order aimed at preventing states from regulating the rapidly growing AI industry, according to reporting by Grace Huckins. The conflict centers on how much power individual states should have to implement their own AI regulations, versus a more unified federal approach, and it underscores the broader debate over the level of government oversight needed to balance innovation with potential risks.