Each week, an estimated 230 million people use ChatGPT to ask health-related questions, according to OpenAI. That figure helps explain why OpenAI launched its ChatGPT Health product earlier this month. The central question surrounding the launch is whether the inherent risks of using AI for health-related queries can be mitigated enough to deliver a net benefit to users.
For the past two decades, people experiencing new medical symptoms have commonly turned to the internet for answers, a practice often referred to as "Dr. Google." Now, large language models (LLMs) are increasingly taking on that role. The shift raises familiar concerns about accuracy and privacy, along with the risk that users will misdiagnose themselves or pursue inappropriate self-treatment.
Grace Huckins reports on the escalating conflict over AI regulation in the United States, which reached a critical point in the final weeks of 2025. On December 11, after Congress had twice failed to pass a law preempting state AI legislation, President Donald Trump signed an executive order aimed at preventing states from regulating the rapidly growing AI industry. The move highlights the tension between fostering innovation and addressing the potential risks of AI technologies.
The executive order sparked immediate controversy, with some states vowing to challenge its legality. Supporters of state-level regulation argue that a single national standard would fail to address specific local concerns and leave residents unprotected. Proponents of federal oversight counter that a patchwork of state laws would create confusion and hinder the development of AI technologies. The legal and political battles over AI regulation are expected to continue, shaping how AI is developed and deployed in the US.