Each week, an estimated 230 million people use ChatGPT to ask health-related questions, according to OpenAI. The surge in AI-driven health inquiries comes as OpenAI launched its ChatGPT Health product earlier this month, raising questions about the benefits and risks of using large language models (LLMs) for medical information.
For two decades, people experiencing new medical symptoms have commonly turned to the internet for information, a practice often referred to as "Dr. Google." Now LLMs are increasingly the destination for those queries, prompting debate over whether the inherent risks of using AI in this context can be adequately mitigated.
The chief risk of using AI for health information is that these models can generate inaccurate or misleading answers, and experts stress that the reliability and accuracy of AI-generated health advice must be carefully evaluated before anyone acts on it.
Meanwhile, the United States faces growing internal conflict over the regulation of artificial intelligence. In late 2025, tensions came to a head when Congress failed to pass a federal ban on state AI laws. On December 11, 2025, President Donald Trump signed an executive order aimed at restricting states from regulating the fast-growing AI industry, underscoring the continued absence of a unified approach to AI governance. The order was intended to prevent a patchwork of state-level regulations that could stifle innovation and create compliance burdens for AI companies. It drew immediate controversy: several states vowed to challenge the order in court, arguing that it overstepped federal authority and infringed on their right to protect their citizens from AI-related risks.