WHO publishes guidance on Ethics and Governance of Artificial Intelligence for Health

The World Health Organization (WHO) has published this guide after 18 months of work with experts in ethics, digital technology, law, and human rights, as well as experts from health authorities in its member states.

Two years after work on the document began, the WHO's Science Division (Digital Health and Innovation, and Research for Health), supported by a leading group of 20 experts on ethics and Artificial Intelligence (AI), has published the final document: "Ethics & Governance of Artificial Intelligence for Health".

The six core principles identified by the WHO Expert Group are the following:

  • Protect autonomy: Humans should remain in full control of health-care systems and medical decisions. AI systems should be designed demonstrably and systematically to conform to the principles and human rights with which they cohere; more specifically, they should be designed to assist humans, whether medical providers or patients, in making informed decisions.
  • Promote human well-being, human safety, and the public interest: AI technologies should not harm people. They should satisfy regulatory requirements for safety, accuracy and efficacy before deployment, and measures should be in place to ensure quality control and quality improvement.
  • Ensure transparency, explainability, and intelligibility: AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and explainability of AI technology.
  • Foster responsibility and accountability: Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps ensure that health-care providers can use an AI technology responsibly.
  • Ensure inclusiveness and equity: Inclusiveness requires that AI used in health care is designed to encourage the widest possible appropriate, equitable use and access, irrespective of age, gender, income, ability or other characteristics.
  • Promote AI that is responsive and sustainable: Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used.

“Artificial intelligence has enormous potential for strengthening the delivery of health care and medicine and helping all countries achieve universal health coverage. This includes improved diagnosis and clinical care, enhancing health research and drug development and assisting with the deployment of different public health interventions, such as disease surveillance, outbreak response, and health systems management”, explains Dr. Soumya Swaminathan, WHO chief scientist.

In addition to the basic principles, the document provides recommendations for the governance of AI for health to benefit the population, both in the public and private sectors.

Read the WHO document at the following link:

https://www.who.int/publications/i/item/9789240029200#.YNmue-9sQAk.linkedin
