Artificial Intelligence (AI) holds “enormous potential” for
improving the health of millions around the world if ethics and
human rights are at the heart of its design, deployment, and use,
the head of the UN health agency said on Monday.
“Like all new technology, artificial intelligence…can also be
misused and cause harm”, warned
Tedros Adhanom Ghebreyesus, Director-General of the World Health
Organization (WHO).
To regulate and govern AI, WHO published new guidance that
provides six principles to limit the risks and maximize the
opportunities that AI holds for health.
Governing AI
WHO’s Ethics and
governance of artificial intelligence for health report
points out that AI can be, and in some wealthy countries already
is being, used to improve the speed and accuracy of
diagnosis and screening for diseases; assist with clinical care;
strengthen health research and drug development; and support
diverse public health interventions, including outbreak response
and health systems management.
AI could also empower patients to take greater control of their
own health care and enable resource-poor countries to bridge
health service access gaps.
However, the report cautions against overestimating its benefits
for health, especially at the expense of core investments and
strategies required to achieve universal health coverage.
Challenges abound
WHO’s new report points out that opportunities and risks are
linked, and it cautions against the unethical collection and use
of health data, biases encoded in algorithms, and risks to
patient safety, cybersecurity and the environment.
Moreover, it warns that systems trained primarily on data
collected from individuals in high-income countries may not
perform well for individuals in low- and middle-income settings.
Against this backdrop, WHO maintains that AI systems must be
carefully designed to reflect the diversity of socio-economic and
health-care settings and be accompanied by digital skills
training and community engagement.
This is especially important for healthcare workers, who will
require digital literacy or retraining to contend with machines
that could challenge the decision-making and autonomy of
providers and patients.
Guiding principles
Because people must remain in control of health-care systems and
medical decisions, the first guiding principle is to protect
human autonomy.
As part of this first principle, AI designers should safeguard
privacy and confidentiality and obtain patients’ valid informed
consent through appropriate legal frameworks.
To promote human well-being and the public interest, the second
principle calls for AI designers to satisfy regulatory
requirements for safety, accuracy and efficacy, including
measures of quality control.
As part of transparency and understanding, the third principle
requires information to be published or documented before the AI
technology is designed or deployed.
The fourth principle is responsibility: although AI technologies
perform specific tasks, they must be used responsibly, under
suitable conditions and by appropriately trained people.
The fifth is to ensure inclusiveness and equity so that AI for
health is accessible to the largest possible number of people,
irrespective of age, gender, ethnicity or other characteristics
protected under human rights codes.
The sixth and final principle urges designers, developers and
users to transparently assess applications during actual use to
determine whether AI responds adequately and appropriately to
expectations and requirements.