Artificial intelligence (AI) may radically alter the provision of emergency medicine (EM) over the coming decades. Before it does, we must consider this game-changing technology’s effect on emergency physicians and their patients. As we become increasingly dependent on AI, emergency physicians may lose their professional autonomy, decision-making abilities, and technical skills. Complex AI programs may become so embedded in the EM decision-making process that emergency physicians may not be able to explain or understand them, and patients may not be able to refuse their use or refute their findings, even when ethical dilemmas are present.1
Overreliance on AI clinical decision aids may lead to a decline in emergency physicians’ diagnostic and decision-making skills, potentially compromising patient care if the emergency physician does not recognize an erroneous AI response. AI programs’ ability to interpret medical imaging and cytopathology often exceeds human capacity, because these programs can perform repetitive, complex, or intricate tasks without fatigue. Advocates assert that “losing certain skills to AI, much like the advent of calculators and the internet, is not only inevitable but also beneficial to human progress.”1
However, the erosion of human expertise becomes problematic when declining diagnostic and decision-making skills lead to clinical errors if (or when) the technology fails.2 Educators find it challenging to instill essential critical-thinking skills when students rely on AI tools to solve problems for them.3 Emergency physicians may become human mechanics, performing procedures, giving medications, and admitting patients only when instructed to do so by the AI program. Many fear that a declining emphasis on critical thought and the basic skills that comprise the art of medicine not only will compromise patient care, but also will contribute to an anti-intellectual “dumbing down” of medical practice.4
AI has the potential to reduce the time emergency physicians spend on many repetitive tasks. AI can analyze patterns of patient care and recommend more efficient patient throughput. The emergency physician might spend the time saved with patients, resulting in better patient satisfaction. However, that same gain in efficiency might prompt the health care system to demand increased patient throughput, forcing emergency physicians to see more patients rather than spend more time with them.5
AI may suggest diagnoses for complex constellations of symptoms and findings, a potential boon to emergency physicians. But the danger is that, instead of consulting the literature for complex cases, the emergency physician will regard AI as an authoritative source. Emergency physicians may feel that they are giving up autonomy, both in how they choose to practice and in ceding complex and challenging analyses to AI. Rather than helping with burnout, AI may instead become a demanding taskmaster and an enigmatic diagnostic and treatment standard, leading to understandable resistance from emergency physicians.
As AI becomes involved in clinical EM decisions, patient autonomy and shared decision making might suffer. If emergency physicians rely solely on AI-generated suggestions based only on objective data, they may recommend treatments or interventions that are inconsistent with the patient’s values and preferences.
An example is shared decision making about hospitalization for chest pain with a moderate-risk HEART score. Currently, the emergency physician can calculate the HEART score and then take the data to the patient for a discussion in which the patient heavily influences the follow-up plan. Such shared decision making succeeds because the physician understands, and can share with the patient, how the data were applied and how the statistics and risks were generated. As AI models become more complex, clinicians may not be able to clearly explain why recommendations are being made, and patients may no longer understand the basis for their emergency physician’s recommendations well enough to make an informed decision.
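The transparency that makes that bedside conversation possible lies in how simply the HEART score is computed: five components, each worth 0 to 2 points. The following minimal Python sketch (the function name and input parameters are our own illustration, not part of any clinical software) applies the published point thresholds:

```python
# Illustrative sketch of the HEART score's transparency: each component is a
# simple 0-2 point assignment the physician can explain to the patient.
# History and ECG points are assigned by clinician judgment; the rest follow
# the published thresholds.

def heart_score(history_pts, ecg_pts, age, num_risk_factors,
                troponin_ratio, known_atherosclerosis=False):
    """Return (total score, risk band).

    history_pts, ecg_pts: clinician-assigned, 0-2 each.
    age: years. num_risk_factors: count of cardiac risk factors.
    troponin_ratio: measured troponin / upper limit of normal.
    """
    age_pts = 0 if age < 45 else (1 if age < 65 else 2)
    if known_atherosclerosis or num_risk_factors >= 3:
        risk_pts = 2
    elif num_risk_factors >= 1:
        risk_pts = 1
    else:
        risk_pts = 0
    trop_pts = 0 if troponin_ratio <= 1 else (1 if troponin_ratio <= 3 else 2)

    total = history_pts + ecg_pts + age_pts + risk_pts + trop_pts
    band = "low" if total <= 3 else ("moderate" if total <= 6 else "high")
    return total, band

# e.g., moderately suspicious history (2), nonspecific ECG changes (1),
# a 58-year-old (1) with two risk factors (1) and a normal troponin (0):
# total 5, "moderate" risk - every point is traceable and explainable.
```

Because each point is traceable to a single patient characteristic, the physician can walk the patient through exactly why the score is what it is; an opaque AI model offers no analogous step-by-step account.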
Implementation of new technology within the medical field forces consideration of how patients and physicians will interact with it. A rarely discussed but vital ethical issue is that patients who prefer interacting with a human rather than an algorithm should retain the right to refuse AI’s application in their care. Emergency physicians must provide patients with sufficient information (e.g., how AI will be involved, and its consequences and significance) so that they can decide whether they will allow AI to be part of their care.6 Such consent necessarily requires that AI cannot be so embedded in the EM process that its use cannot be refused; patients must be able to challenge or refuse an AI-generated recommendation. This helps ensure that the humanistic nature of medicine prevails and that EM care is tailored to patient preferences and values.
AI’s role in patient-care decisions involving ethical dilemmas, including those about the end of life, is unclear and problematic. In the early stages of AI development, and for decades to come, trained professionals, usually emergency physicians, will need to counsel patients and families. AI cannot replace physician input in the nuanced and complex ethical decisions that must be made. However, AI may help frame questions that guide physicians in determining therapies and predicting mortality. For example, in patients at high risk of death within six months, AI-triggered prompts helped reduce the use of chemotherapy by three percent.7 A study of AI-triggered palliative-care referrals found increased use of palliative-care consultations and a reduced hospital-readmission rate.8 AI will undoubtedly be useful in providing emergency physicians with ethical guidance, but it cannot make ethical decisions itself.
As clinical AI systems develop and are carefully introduced into EM, emergency-department patients will undoubtedly benefit from the breadth and depth of knowledge they provide to emergency physicians. Preserving ethical and high-quality EM practice will require understanding the AI systems’ limitations and keeping emergency-department patients well-informed.
Dr. Iserson is professor emeritus in the Department of Emergency Medicine at the University of Arizona, Tucson, Arizona.
Dr. Baker is an emergency medicine specialist practicing in Perrysburg, Ohio.
Dr. Bissmeyer is a fourth-year student at Kansas City University College of Osteopathic Medicine.
Dr. Derse is director of the Center for Bioethics and Medical Humanities and Professor of Bioethics and Emergency Medicine at the Medical College of Wisconsin.
Dr. Sauder is a board-certified emergency medicine physician in Dayton, Ohio.
Dr. Walters is an emergency medicine specialist in Royal Oak, Michigan.
- Iserson KV. Informed consent for artificial intelligence in emergency medicine: a practical guide. Am J Emerg Med. Published online ahead of print 2023. doi: 10.1016/j.ajem.2023.11.022.
- Tsipursky G. We will inevitably lose skills to AI, but do the benefits outweigh the risks? Entrepreneur website. Published July 26, 2023. Accessed December 8, 2023.
- Rinta-Kahila T, Penttinen E, Salovaara A, et al. The vicious circles of skill erosion: a case study of cognitive automation. J Assoc Information Syst. 2023;24(5):1378-1412.
- Mahendra S. Dangers of AI—dependence on AI. Artificial Intelligence + website. Published August 31, 2023. Accessed December 8, 2023.
- Faustinella F, Jacobs RJ. The decline of clinical skills: a challenge for medical schools. Int J Med Educ. 2018;9:195-197.
- Dam TR, Leaston JI, Hla DA, et al. Could AI cause burnout in medicine? Some concern that new technology could be more of a problem than a solution. Medpage Today website. Published July 29, 2023. Accessed December 8, 2023.
- Chenais G, Lagarde E, Gil-Jardiné C. Artificial intelligence in emergency medicine: viewpoint of current applications and foreseeable opportunities and challenges. J Med Internet Res. 2023;25:e40031. doi: 10.2196/40031.
- Manz CR, Zhang Y, Chen K, et al. Long-term effect of machine learning-triggered behavioral nudges on serious illness conversations and end-of-life outcomes among patients with cancer. JAMA Oncol. 2023;9:414-418.
- Wilson PM, Ramar P, Philpot LM, et al. Effect of an artificial intelligence decision support tool on palliative care referral in hospitalized patients: a randomized clinical trial. J Pain Symptom Manage. 2023;66:24-32.