Doctors believe artificial intelligence has a role to play in healthcare, but some experts caution against relying solely on chatbots for medical advice. Dr. Sina Bari, a practicing surgeon and AI healthcare leader at data company iMerit, shared an example of how ChatGPT provided a patient with inaccurate medical information regarding a medication's potential side effects.
According to Dr. Bari, the patient presented a printout from ChatGPT claiming the medication had a 45% chance of causing pulmonary embolism. Further investigation revealed the statistic originated from a study focused on a specific subgroup of tuberculosis patients, making it irrelevant to the individual's case.
Despite these concerns, Dr. Bari expressed optimism about OpenAI's recent announcement of ChatGPT Health, a dedicated chatbot designed for health-related conversations. The new platform aims to offer users a more private environment where their messages are not used for training the AI model. "I think it's great," Dr. Bari said. "It is something that's already happening, so formalizing it so as to protect patient information and put some safeguards around it is going to make it all the more powerful for patients to use."
ChatGPT Health is expected to roll out in the coming weeks, offering users personalized guidance. The announcement comes amid growing interest in AI's potential to improve healthcare access and efficiency, but experts stress that AI-generated medical information should always be verified with qualified healthcare professionals. As the case of Dr. Bari's patient shows, unverified chatbot advice carries real risks, particularly around medication and treatment decisions. AI can be a valuable tool, but it is no substitute for the expertise and judgment of human doctors.