Large language models (LLMs) and other A.I. technologies are recent tools used to execute tasks like content creation and text and image generation. Before the invention of these technologies, hospitals kept medical records using different methods.
These methods are: Source-Oriented Medical Record (SOMR), Problem-Oriented Medical Record (POMR), and Electronic Health Record (EHR).
The SOMR evolved from the early Egyptian era up to the mid-20th century. It collected only limited sections of clinical reports and sometimes omitted important information. It was later superseded by the POMR.
The POMR was organized around patient problems and a structured medical history, which made it more systematic than the SOMR. The EHR system then came to light, offering a more comprehensive, digital approach to recording clinical information.
The integration of the EHR system with newer technologies like A.I. and LLMs should be carefully evaluated and regulated according to risk level, from low risk to unacceptably high risk.
Although LLMs make clinical diagnosis and information collection easier, these technologies should be used only to support human intelligence, as full reliance on them can be detrimental to clinical decision making.
Reports show that there have been problems associated with the use of LLMs, including:
- LLM-generated medical notes can be overly verbose, forcing consultants to summarize them and risking the loss of important details.
- They can erode a physician’s motivation and intuition, since documentation written by the physician stays grounded in the patient’s actual complaint.
- Absolute confidence in a machine’s output can lead to automation bias.
- There is a high risk of producing non-factual, confusing output that can impair a doctor’s decision making.
- Re-verifying machine-generated misinformation increases clinicians’ workload.
- Because future models may be trained on content generated by earlier LLMs, there is a high risk of a self-reinforcing feedback loop.
- Physicians risk losing their clinical reasoning skills, which could compromise patient care in the future if an LLM fails.
While I fully support supplementing patient care with these newer technologies, how trustworthy are they if left unregulated? Ultimately, patients and doctors should decide on the best method of care once sufficient regulatory practices are in place.