The Rise of 'Bixonimania': A Cautionary Tale of AI Misinformation in Medicine
Bixonimania: How AI Turned a Fake Illness Into a 'Real' Medical Condition With a Prevalence of One in 90,000 People
News 18
A fictional eye condition called 'bixonimania' was created by Swedish researcher Almira Osmanovic Thunström to demonstrate how AI can propagate medical misinformation. Despite clear indicators of its fictitious nature, major AI models treated it as real, raising concerns about the reliability of AI in health-related queries.
- 'Bixonimania' was invented as part of an experiment to test AI's susceptibility to misinformation.
- Major AI models propagated the fake condition, treating it as legitimate.
- The incident highlights the risks of relying on AI for medical advice.
- Patients are advised to consult qualified doctors rather than AI for health concerns.
- Stronger verification in AI systems is needed to prevent misinformation.
In a striking experiment, Almira Osmanovic Thunström, a researcher at the University of Gothenburg, created a fictitious eye condition named bixonimania to test how artificial intelligence (AI) can spread medical misinformation. The condition, purportedly caused by excessive screen time, was entirely fabricated and presented in two preprints uploaded in 2024 under a fictional researcher's name.

Despite clear indications of its fictitious nature, including humorous funding sources and acknowledgments, major AI models such as Google's Gemini and Microsoft's Copilot accepted bixonimania as a real condition. Google suggested it was linked to blue light exposure, while Perplexity AI estimated its prevalence at one in 90,000 individuals. Alarmingly, the fake papers were cited in a legitimate peer-reviewed journal, Cureus, until their retraction in March 2026, following inquiries from Nature.

Thunström's experiment serves as a crucial reminder of the dangers of relying on AI for medical information, as these systems may confidently provide inaccurate diagnoses. Patients are encouraged to consult qualified healthcare professionals and verify any medical information against trusted sources, as AI cannot replace the expertise of a real doctor.
The incident underscores the critical need for patients to verify medical information with healthcare professionals rather than relying solely on AI.


