On 30 November 2022, ChatGPT was launched, free for all to use online. For those who are not aware, ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI) chatbot that generates detailed, natural, human-like text in response to the prompts a user enters. Since the launch of ChatGPT and other similar AI chatbots, they have been all over worldwide media outlets, with articles ranging from how they will revolutionise daily life through to concerns that they will render humans obsolete!
As far as healthcare is concerned, AI is considered a tool that can have a significant impact on medical diagnosis, clinical management, research and, of course, education. The Editors’ Choice article this month looked at the use of ChatGPT as an educational aid in preparing for the German ENT board examinations. The authors found that it is not quite as accurate a resource as expected, and that further refinement and development are needed. The same is probably true for its wider use in clinical practice, so we can all breathe a sigh of relief: we are not imminently being replaced by an iPad on wheels, expecting Terminators to take over, or about to find ourselves living in the Matrix. We are still, unfortunately, going to have to hit the library and the books hard and burn the midnight oil!
As always, many thanks to all the hard work from our contributors in putting this section together.
Nazia Munir and Hannah Cooper
The interactive, language-based artificial intelligence (AI) model ChatGPT is a powerful tool that can provide human-like answers, and it is being increasingly used to answer medical and non-medical questions. The authors of this study attempted to determine the accuracy and variance of ChatGPT’s responses to questions designed to prepare candidates for the German otorhinolaryngology board certification examination. They collected a dataset from an online platform, funded by the German Society of Otorhinolaryngology, Head and Neck Surgery, covering 15 otorhinolaryngology subspecialties. The results showed that ChatGPT answered 57% of questions correctly, despite the complexity of the questions. It was more successful at single-choice questions (63% correct) than at multiple-choice questions (34% correct). Interestingly, ChatGPT fared better with allergy questions (72% correct) than with questions on the legal aspects of otolaryngology (29% correct). The authors conclude that ChatGPT in its current form does not provide a noteworthy advantage to those preparing for the boards. From the results of this study, we can safely conclude that ChatGPT needs extensive further refinement to improve its accuracy!