Conference Paper

Enhancing Medical History Collection using LLMs

References

Conference Paper
This study introduces SocraHealth, an innovative method using Large Language Models (LLMs) for medical diagnostics. By engaging LLM-based agents in structured debates, SocraHealth not only refines diagnoses but also corrects historical record inaccuracies, utilizing patient data effectively. The case study, featuring GPT-4 and Bard across two experiments, showcases this approach's success in producing logical, hallucination-free debates. Demonstrating a significant advancement over traditional diagnostic techniques, SocraHealth highlights the transformative power of LLMs in healthcare, especially in enhancing diagnostic accuracy and rectifying past diagnostic errors.
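The abstract describes the debate mechanism only at a high level. A minimal sketch of such a structured two-agent exchange is given below; it is not the authors' implementation, and the `query_model` stub, the agent names, and the round structure are illustrative assumptions.

```python
# Minimal sketch of a structured two-agent diagnostic debate over a shared
# patient record. `query_model` is a hypothetical placeholder for a real LLM
# API call (e.g. to GPT-4 or Bard); here it only returns a canned string.

def query_model(agent_name: str, prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    return f"[{agent_name}'s response to: {prompt[:60]}...]"


def structured_debate(patient_record: dict, rounds: int = 2) -> list[str]:
    """Alternate turns between two agents that state a diagnosis, challenge
    the other agent's reasoning, and flag possible record inconsistencies."""
    transcript: list[str] = []
    context = f"Patient record: {patient_record}"
    for r in range(rounds):
        for agent in ("Agent A", "Agent B"):
            prompt = (
                f"{context}\n"
                f"Debate transcript so far: {transcript}\n"
                f"Round {r + 1}: state your current diagnosis, challenge the "
                f"other agent's reasoning, and note any record inconsistencies."
            )
            transcript.append(f"{agent}: {query_model(agent, prompt)}")
    return transcript


if __name__ == "__main__":
    record = {"age": 54, "symptoms": ["chest pain", "fatigue"],
              "history": ["hypertension"]}
    for turn in structured_debate(record):
        print(turn)
```

In practice the stub would be replaced by calls to the actual models, and the resulting transcript would be reviewed for diagnostic agreement and for corrections to the historical record.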
Article
In 1974, several studies were conducted on the validity of medical data recorded and computerized at family medical centers affiliated with the University of Western Ontario. Fifty-nine encounters were observed. An average of 2.54 somatic, emotional, or social problems were dealt with per encounter; the residents recorded an average of 1.51 problems and the observers 2.45. This difference was highly statistically significant (p < .001), while there were no statistically significant differences among the observers. The many questions this study raises may have a bearing on medical education, medical audit, research, medical computer systems, and perhaps even on quality of care, since problem solving is based on problem identification. Further studies and evaluation are needed.
Interviewing the patient
  • George Engel
  • William L. Morgan
LLM-empowered Chatbots for Psychiatrist and Patient Simulation: Application and Evaluation
  • S. Chen
  • M. Wu
  • K. Q. Zhu
  • K. Lan
  • Z. Zhang
  • J. Zhu
Constitutional AI: Harmlessness from AI feedback
  • Yuntao Bai
Active prompting with chain-of-thought for large language models
  • Shizhe Diao
RLAIF: Scaling reinforcement learning from human feedback with AI feedback
  • Harrison Lee