
Support for diagnosis, fewer medical errors, and less time spent on administrative tasks are just some of the promises of artificial intelligence technologies in the healthcare sector. Yet according to a study published in March 2025 in JAMA Health Forum, using these technologies without a regulatory framework or established legal standards could, on the contrary, ‘heighten the potential for increased burnout and errors, ultimately undermining the very goals assistive AI seeks to achieve.’
This situation stems from doctors' ambivalent relationship with AI technologies. According to the study's authors, ‘they are expected to rely on AI to minimise medical errors, yet bear responsibility for determining when to override or defer to these systems.’ A case study on collaborative medical decision-making, for example, found that social perception assigned almost all responsibility to the human, even when the decision was assisted by AI outputs. Doctors therefore carry the additional burden of having to ‘determine when – and to what degree – they should incorporate AI inputs,’ which risks increasing both burnout and errors.
Doctors face two opposing risks of error when using AI-enabled technologies: over-reliance on AI advice, which can lead to false positives, and, conversely, distrust of AI results, which can lead to false negatives. The challenge is to strike a balance between AI-generated data and the knowledge drawn from their own medical experience and intuition. A further complexity stems from the very nature of AI, which ‘generates recommendations by identifying correlations and statistical patterns from large data sets,’ whereas doctors rely on deductive reasoning, experience and the patient's context. These differences require doctors to interpret AI outputs with caution to avoid medical errors.
The study's authors recommend that healthcare organisations implement standardised guidelines and practices to distribute responsibility more clearly and reduce doctors' mental load. They also suggest incorporating AI simulation training into medical education and on-site programmes, so that future doctors can practise, through simulations, ‘interpreting algorithmic outputs, balancing AI recommendations with clinical judgement, and recognising potential pitfalls’.
Image: Gerd Altmann via Pixabay