
AI poses ethical challenges in health sector

The WHO has sounded a note of caution in its new advisory on the ethics and governance of AI, particularly the use of large multi-modal models (LMMs) in healthcare.


Dinesh C. Sharma
Science Commentator

ARTIFICIAL intelligence (AI) tools like ChatGPT are gaining popularity across sectors. Doctors, patients and healthcare agencies have started using such tools. The World Health Organisation (WHO) has sounded a note of caution in its new advisory on the ethics and governance of AI, particularly the use of large multi-modal models (LMMs) in healthcare.

LMMs like ChatGPT and Bard can accept data inputs such as text and video and generate outputs that are not limited to the type of data fed into them. These systems are designed to mimic human abilities and can even perform tasks they were not explicitly programmed for by learning and adapting. That very flexibility poses a grave risk: they can generate false, inaccurate, biased or incomplete information relating to health.

It is also now clear that AI systems can be trained by providing false or doctored data — relating to gender identity, ethnicity, race, etc — to generate harmful or biased content.

As the use of AI started picking up, the WHO in 2021 issued general guidance that recognised the potential of the new technology in the health sector, while setting out broad ethical principles. According to these principles, AI technologies must protect autonomy; promote human wellbeing, human safety and the public interest; ensure transparency, ‘explainability’ and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote technologies that are responsive and sustainable. Explainability means making sufficient information about the design and deployment of AI available in the public domain to enable public consultation on its use and possible misuse.

The new guidelines issued last week deal with the development and adoption of LMMs in the health sector. To understand how LMMs are used, it helps to look at the value chain of any AI technology. At every stage of this chain, critical decisions have to be made that can shape the future course of the technology and its consequences.

The chain starts with the development of an LMM tool. The developer could be a large technology firm, a university, a technology startup, a national health system or a public-private partnership. Who develops the model depends on the resources and capacity of these players and on the availability of data and expertise.

Subsequently, a provider, which could be a third party, may fine-tune and further train the LMM and offer it through a programming interface. The provider is responsible for integrating the LMM into a larger software system so that it becomes a service or an application.

The third major player in the AI value chain is the deployer, who makes the service available to the end user. In the health sector, the deployer could be a large hospital chain, a government health agency or a pharmaceutical company.

At each stage of this value chain — developer, provider and deployer — multiple ethical and governance issues are involved. Most of the time, the developer is a large tech company because such firms possess the computing power, technical skills and finances needed to build LMMs. LMMs can thus reinforce the dominance of tech giants, even as the algorithms and their potential risks remain opaque. The lack of corporate commitment to transparency is a major concern.

Regulatory and legal authorities are worried about the compliance of AI tools with existing legal and regulatory frameworks, international human rights obligations and national data protection laws. Governments and regulatory agencies were caught off guard when LMMs were announced. The European Union, for instance, had to revise its Artificial Intelligence Act in the last stages of drafting to cover LMMs.

Algorithms, according to the WHO report, might not comply with legal and regulatory frameworks because of the way data is collected to train LMMs, or managed and processed by the entities in the value chain. Studies have found that LMMs are prone to ‘hallucinating’ (giving false responses), which adds to the compliance risk. Another worry relates to liability for errors, misuse and harm as LMM use in the health sector gains traction. Like other technological applications, these systems are also vulnerable to cybersecurity risks that endanger patient information.

Five broad applications of LMMs for health have been identified in the WHO guidance: diagnosis and clinical care; investigating symptoms and treatment; clerical and administrative tasks, like generating summaries of patient visits; medical and nursing education; and scientific research and drug development. The use of AI, particularly LMMs, is already growing in the health sector, for instance in drafting replies to patients' messages based on their health records and visits. This helps reduce the time health workers spend on such tasks. One company is developing a medical LMM meant to answer questions, summarise findings from medical texts and synthesise patient data such as X-ray reports into final reports for clinicians.

Here the fear is that even specifically trained systems may not necessarily produce the correct responses. A larger fallout of AI tools in health would be a changed doctor-patient relationship, which has already worsened due to the increasing use of search engines by patients.

The way forward is not to discard LMMs and AI tools but to promote their transparent development and responsible use. To begin with, governments should establish a regulatory agency to assess and approve LMMs and applications intended for use in healthcare or medicine. Simultaneously, the effort should be to develop not-for-profit or public infrastructure, including computing power and public datasets, accessible to developers in the public, private and not-for-profit sectors. All ethical principles and human rights standards that apply to health services and products should be extended to AI technologies and tools. There should be mandatory post-release auditing and impact assessment by independent third parties whenever an LMM is deployed on a large scale for health and medical applications.

Let us not overestimate the benefits of AI and downplay its risks.

