Assessing what makes a reasonable decision with AI
As the impact of artificial intelligence (AI) grows in our world, the University of Adelaide is exploring the role that technology can play in the health sphere, particularly in clinical decision-making and explanations.
The analytical review outlines one of the major challenges in health AI, explainability, and explores whether explanations of specific predictions for individuals are absolutely necessary to make a good decision.
"The field of explainability which focuses on individual-level explanations is a developing one," said Dr Melissa McCradden, of the University of Adelaide's Australian Institute for Machine Learning (AIML).
"We are optimistic about where the field can go, but with where we are right now, requiring prediction-level explanations for clinical decision-making is problematic."
Dr McCradden and her co-author Dr Ian Stedman, a lawyer and professor of public policy at York University in Toronto, Canada, argue that a good clinical decision is not only one that advances the goals of care but also one that is legally defensible.
"Clinicians must calibrate their judgement against a whole constellation of other factors, even if they are using an AI tool that is well validated and highly accurate," said Dr Stedman.
Dr McCradden, a Clinical Research Fellow in AI Ethics with The Hospital Research Foundation Group, AI Director with the Women's and Children's Health Network, and Adjunct Scientist with The Hospital for Sick Children, said there are two types of explainability: inherent explainability and post-hoc explainability.
Inherent explainability refers to understanding how the model as a whole functions, while post-hoc explainability refers to attempts to understand how a specific prediction was generated by the model.
"Some models are directly interpretable, meaning that the operations from inputs to outputs are easy to follow and clear, such as decision trees. Others are more opaque, meaning that the process from inputs to outputs is difficult or impossible to follow precisely, even for developers," said Dr McCradden.
"The issue is that in health AI, clinicians typically believe an explanation is what they are getting when they see something like a heatmap, or a prediction accompanied by the reasons the patient received this output. This is understandably what many clinicians want, but new evidence is showing that it might nudge them to make less accurate decisions when the AI tool is incorrect."
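For readers less familiar with the distinction, the short sketch below (not drawn from the paper) contrasts an inherently interpretable model with a crude post-hoc, prediction-level explanation. It uses scikit-learn on synthetic data; the feature names and the perturbation-based attribution are illustrative assumptions, standing in for methods such as saliency maps or SHAP.

    # Illustrative sketch only (not from the paper): contrasts inherent and
    # post-hoc explainability using scikit-learn and synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-in for clinical data: five hypothetical features, binary outcome.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    # 1. Inherent explainability: a shallow decision tree is directly interpretable,
    #    because the printed rules are the model itself.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=feature_names))

    # 2. Post-hoc, prediction-level explanation of an opaque model: perturb each
    #    feature of a single case and record how the predicted risk shifts.
    #    This crude local attribution stands in for tools such as saliency maps.
    opaque = GradientBoostingClassifier(random_state=0).fit(X, y)
    patient = X[0:1]
    baseline = opaque.predict_proba(patient)[0, 1]
    for i, name in enumerate(feature_names):
        perturbed = patient.copy()
        perturbed[0, i] = X[:, i].mean()  # replace the feature with the population mean
        print(f"{name}: shift in predicted risk = {baseline - opaque.predict_proba(perturbed)[0, 1]:+.3f}")

The tree's rules can be read end to end, whereas the second model's output can only be probed indirectly after the fact, which is the gap between inherent and post-hoc explainability that the authors describe.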
Their work builds on prior work by fellow AIML researcher Dr Lauren Oakden-Rayner, whose research on the limits of explainability methods highlights the field's nascency.
Dr McCradden and Dr Stedman argue explainability alone shouldn't serve as an essential part of ethical decision-making.
Clinicians are required to draw conclusions from evidence and understanding, placing the patient at the centre of the process instead of AI.
"Piling more weight onto the value ascribed to the AI tool's output further shifts the emphasis away from the patient: their wishes, their culture, their context," said Dr Stedman.
"Historically, reasonable judgements have been made on the basis of the totality of evidence and resources available to the clinician, contextualised in light of the patient's specific situation."
Dr McCradden and Dr Stedman concluded it is highly unlikely that an AI prediction would be the sole source of information by which a clinician makes a decision, particularly as these tools are never 100 per cent accurate.
"It will, for the foreseeable future, always be necessary to triangulate sources of evidence to point to a reasonable decision," said Dr McCradden.
"In this sense, physicians should consider what, specifically, the AI tool's output contributes to the overall clinical picture. But we always need to be grounded by the patient's wishes and best interests."
Dr McCradden is grateful for the funding support from The Hospital Research Foundation Group.
Media Contacts:
Dana Rawls, Manager, Communications, Australian Institute for Machine Learning, The University of Adelaide. Phone: +61 (8)8313 4343. Email: dana.rawls@adelaide.edu.au
Rhiannon Koch, Media Officer, The University of Adelaide. Phone: +61 (8)8313 4075. Mobile: +61 (0)481 619 997. Email: rhiannon.koch@adelaide.edu.au