In brief
On 18 January 2024, the World Health Organization (WHO) published new guidance on the ethics and governance of large multi-modal models (LMMs) of artificial intelligence (AI), addressed to governments, technology companies and healthcare providers, with the aim of promoting the appropriate use of AI and protecting public health.
Key takeaways
The WHO notes that LMMs can be widely used in the health sector: for scientific research and drug development, for diagnosis and clinical care, for medical and nursing education, and for administrative tasks such as collecting and cataloguing medical examinations in electronic medical records. They can also be used by patients, for example to search for information on symptoms and treatment modalities.
However, the WHO also identifies potential risks arising from the use of AI. These relate mainly to the generation of false, inaccurate or incomplete information, which could harm those who rely on it to make decisions about their health, and to bias and distortion in AI-generated output when models are trained on poor-quality or biased data. The WHO has therefore issued a series of recommendations: governments, which are responsible for setting standards for the development and deployment of AI, should do so in ways that protect public health, while developers should involve potential users and stakeholders from the design phase so that LMMs improve the capacity of health systems and promote patients' interests.