Peter Steadman’s scientific contributions

Publications (1)


ChatGPT versus the Generic Surgical Science Exam (GSSE) – A.I. One step closer to the operating theatre
  • Article
  • Full-text available

June 2024 · 25 Reads · Journal of Surgical Education

John Maunder · Gayatri Bhagwat · Peter Steadman

Figure: ChatGPT score and minimum pass mark for GSSE questions.
Background: The advent of Artificial Intelligence (AI) has sparked a revolution in medical research. ChatGPT is a generative AI chatbot powered by a large language model (LLM), which allows it to process natural text input using extensive training data and output a human-like response. This study aimed to assess whether ChatGPT can pass the Generic Surgical Sciences Examination (GSSE).

Methods: We employed a deductive analytical approach and selected 100 questions from the Royal Australasian College of Surgeons (RACS) database, covering anatomy, pathology, and physiology. Questions were posed to ChatGPT in an open-ended format, and answers were adjudicated against the pre-existing RACS answers. Statistical analysis involved calculating confidence intervals and performing one-sided binomial tests.

Results: ChatGPT demonstrated an overall accuracy of 88%, surpassing the required pass mark of 63.8%. It also exceeded the individual pass marks for anatomy (85.29% vs. 54.9%), pathology (93.94% vs. 58%), and physiology (84.85% vs. 61%). The p-values indicated statistical significance, reinforcing the model's capability to pass the GSSE.

Conclusion: This study suggests that ChatGPT can pass the GSSE, with statistically significant performance metrics. While promising, this raises ethical and practical questions about AI's role in medical education and patient care. The results warrant further investigation into AI's potential and limitations in healthcare settings.
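The statistical analysis described above (one-sided binomial tests of the observed accuracy against each pass mark, plus confidence intervals for the observed proportions) can be illustrated with a short sketch. This is not the authors' code: it assumes SciPy is available, and the per-subject question counts (34/33/33) are back-calculated from the reported percentages, so treat them as illustrative assumptions rather than published data.

```python
# Minimal sketch of the kind of analysis the abstract describes:
# a one-sided binomial test of ChatGPT's observed accuracy against the
# GSSE pass mark, with a 95% Wilson confidence interval for the proportion.
from scipy.stats import binomtest

results = {
    # subject: (correct, total, pass_mark)
    # Counts are inferred from the reported percentages (e.g. 29/34 = 85.29%),
    # so they are assumptions for illustration only.
    "Overall":    (88, 100, 0.638),
    "Anatomy":    (29, 34, 0.549),
    "Pathology":  (31, 33, 0.580),
    "Physiology": (28, 33, 0.610),
}

for subject, (correct, total, pass_mark) in results.items():
    # One-sided test: is the true accuracy greater than the pass mark?
    test = binomtest(correct, total, p=pass_mark, alternative="greater")
    ci = test.proportion_ci(confidence_level=0.95, method="wilson")
    print(f"{subject}: {correct}/{total} = {correct/total:.1%}, "
          f"p = {test.pvalue:.4g}, 95% CI = ({ci.low:.3f}, {ci.high:.3f})")
```

With these assumed counts, each test yields a p-value well below 0.05, which is consistent with the abstract's claim of statistically significant performance above the pass marks.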
