As artificial intelligence (AI) gathers speed in transforming healthcare, it’s imperative to pick up the pace in addressing ethical issues.
These issues arise because AI interprets data through processes that are not transparent, making it difficult to understand, verify and trust AI outputs. The issues are magnified when autonomous AI systems “learn” from data and environmental cues independently of their programming. The result can be ethical lapses that harm individuals and organizations. To protect patients and themselves, healthcare organizations must be aware of these issues and their causes, and address them.
Risks in healthcare AI include:
• Bias in data sets
• Patient anonymity
• Patient confidentiality
• Predictions used to discriminate
• Undermined doctor-patient relationship
• Unclear ownership of and liability for results
Organizational leaders must acknowledge that their AI algorithms could be skewed by profiling on factors such as income and race, resulting in limited options and underestimated needs for some patients. They must also recognize that traditional data governance frameworks aren’t up to today’s tasks: the standard functions weren’t built for this new technological scope.
To achieve the vast benefits of AI in preventing, detecting and treating disease, injury, addiction, suicide and other physical and mental afflictions, leaders must be engaged in the development and implementation of algorithms and governance. They will reconcile the dual goals of improving health and generating profit, ensuring that algorithms are scrupulously objective. They will reshape their governance frameworks to embed key ethical principles across clinical, research, education and IT activities.
Read Ethical healthcare in the age of artificial intelligence for guidance on avoiding bias and on establishing and managing ethics-based healthcare for your patients.
Alicia McDonald Martinovich
Manager, Risk Advisory Services
+1 703 562 6668
Director, Digital Health & Informatics
+1 703 637 3088