
An algorithm passed the radiology board exam. Though still in their infancy, ChatGPT and other AI applications are changing science, healthcare, and the world we live in.

Recent articles in the journal Radiology report on the learning capabilities of two generations of the Chat Generative Pre-trained Transformer (ChatGPT). Artificial intelligence (AI) is already in use in many areas of healthcare, including radiology. Research has found that AI has high sensitivity in detecting complicated patterns in imaging studies, including MRI, ultrasound, CT scans, and X-rays, making it an ideal partner for the human radiologist. AI is prone to errors just as humans are, but it is not distracted by a cell phone, fatigue, or the need for a cup of coffee.

AI has been around a while, and just as it has been adapted for radiology, it is being adapted to business and other commercial functions. These applications are assistive, but they also carry the possibility of outmoding human workers in a number of roles. ChatGPT is an AI algorithm specifically trained to model conversation and dialogue.

Recent reports peg ChatGPT as the fastest-growing user application in history, a troubling statistic for some, as some of the same technologists who developed AI now warn that, left unregulated, it could pose a risk of human extinction. Much as Robert Oppenheimer, the scientific director of the Manhattan Project, remarked at the outset of the nuclear age that his project had "let the genie out of the bottle," AI carries the potential for even greater swings to help or harm humanity.

ChatGPT 3, a large language model algorithm, is trained on human scripts and dialogue. In February of this year, researchers tested ChatGPT with a 150-question multiple-choice test modeled on the Canadian Royal College and American Board of Radiology exams. The chatbot answered 69 percent of the questions correctly. In March 2023, ChatGPT 4 answered 81 percent of the questions correctly, illustrating an improved ability to reason and perform in a radiology setting.

While the chatbot scored well, some of its answers were inaccurate. The study authors noted the chatbot's present limitations and the need to fact-check its output. Continually advancing versions of the chatbot will return better results, along with heightened concern about its regulation in human life.

Providing aggressive legal service to Maryland patients injured by medical mistakes

Schochor, Staton, Goldberg and Cardea, P.A., offers experienced legal representation to patients and their families injured through medical malpractice. When you have questions about an injury after medical treatment, contact us or call 410-234-1000 to schedule a free consultation today.