Orthopaedic Surgery Resident Physician Oklahoma State University Medical Center
Disclosure(s):
Jason P. Den Haese Jr., D.O., M.S.: No financial relationships to disclose
Introduction: Patients are increasingly using online search engines for medical information. With the expansion of artificial intelligence in tools such as Google and ChatGPT, verifying the quality of the information presented for frequently asked questions (FAQs) is important. The purpose of this study is to compare the FAQs, and the quality of the resources presented, by ChatGPT and Google for Minimally Invasive Spine Surgery (MISS).
Methods: The top 20 FAQs for MISS, with their respective answers and sources, were obtained from Google and ChatGPT. FAQs were classified using the Rothwell classification. The transparency of FAQ answer sources was scored using the Journal of the American Medical Association (JAMA) benchmark criteria, and quality was analyzed using the Brief DISCERN score. The readability of FAQ answer sources was scored using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) formulas.
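The two readability measures named above are standard published formulas that map word, sentence, and syllable counts to a 0-100 ease score (FRE, higher is easier) or a U.S. school grade level (FKGL). A minimal sketch in Python (the counts in the usage comment are illustrative, not taken from the study's data):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Standard FRE formula: higher scores indicate easier-to-read text.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # FKGL maps the same inputs to an approximate U.S. school grade level.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative example: a 100-word passage with 5 sentences and 150 syllables.
fre = flesch_reading_ease(100, 5, 150)
fkgl = flesch_kincaid_grade(100, 5, 150)
```

In practice, tools that compute these scores differ mainly in how they count syllables; the formulas themselves are fixed.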
Results: ChatGPT provided more academic references than Google; however, 45% of the answer references provided by ChatGPT were fabricated website URLs. The most common Rothwell classification and subclassification for Google and ChatGPT were "Fact" (65.0% and 60.0%, respectively) and "Technical Details" (30.0% and 35.0%, respectively). Google sources were most commonly from medical practices (12/20, 60.0%), whereas ChatGPT's sources were most commonly academic (12/20, 60.0%). The mean Brief DISCERN scores for Google and ChatGPT were 19.35 and 20.90, respectively. The mean FRE and FKGL scores were 36.3 and 13.0 for Google and 52.9 and 9.7 for ChatGPT, respectively.
Conclusion: Caution should be used when referencing search engines for MISS information, particularly ChatGPT, which can fabricate references. Our study demonstrates that overall source quality is very poor and that the readability of these sources is likely too advanced for many Americans; however, ChatGPT offered information that a larger audience may better understand. Recognizing FAQs and improving the quality and readability of online information regarding MISS may improve patient understanding.