The Indian Council of Medical Research (ICMR) recently released broad ethical guidelines on the use of artificial intelligence (AI) – the talk of the town of late – in healthcare. These guidelines delve into issues such as human autonomy, data privacy, biases, accountability and so on, and offer guiding principles to address them.
Mint explains the opportunities and risks of deploying AI in India’s health sector.
How disruptive could AI be for healthcare?
Very. It promises better diagnoses and more precise therapeutics, besides removing the drudgery of menial tasks such as scheduling appointments.
NITI Aayog in partnership with Microsoft and Forus Health is developing an algorithm for early detection of diabetes complications. Startups such as Qure.ai and DeepTek are working with hospitals and governments in tuberculosis (TB) screening using radiological tools, analysing computerised tomography (CT) scans for early detection of lung cancer, and so on.
There are also the likes of HealthPlix, which provides solutions for e-prescription generation, lab management, billing, and sending reminders to patients using AI-based apps. Wysa, a mental-health startup, uses an AI-based chatbot to help users manage their mental health, using cognitive behaviour therapy techniques and on-demand support.
Another high-potential area is epidemiology – using data for early detection to limit the spread of a communicable disease, which was leveraged by various authorities during the pandemic.
Indian healthcare is plagued with a lack of critical infrastructure such as proper diagnostic tools, especially in rural areas, and a shortage of doctors and health staff. AI-based solutions could help with these challenges.
How could things go wrong?
One of the most immediate challenges concerns informed consent. The straightforward patient-clinician relationship is set to change with the advent of AI. Doctors and other professionals may themselves not understand how a given AI system reached a decision, which complicates obtaining informed consent from patients.
A related concern is the availability of accurate and reliable data to train AI systems. Patients and users will need to be educated so that they are willing to share data for this purpose. As data is used at scale, its privacy also becomes a concern. AI systems will not just have to be transparent but also able to explain themselves in language a lay person can understand, so that people can make an informed decision about sharing their data.
A lack of diversity in training data could also produce an exclusionary algorithm. AI applications should be designed to include users of all ages, income levels and regions. And as more decisions are made by AI systems, ensuring accountability and assigning liability when something goes wrong will pose a further challenge.
What are the guiding principles suggested by ICMR?
The ICMR has laid out principles for the validation and deployment of AI technologies for all stakeholders (researchers, industry, sponsors, hospitals, clinicians, patients, regulators, etc). It has listed 10 governing principles for issues pertaining to AI: autonomy; safety and risk minimisation; trustworthiness; data privacy; accountability and liability; optimisation of data quality; accessibility, equity and inclusiveness; collaboration; non-discrimination and fairness; and validity.
These principles seek to keep humans in the decision-making process rather than concede all control to AI. As India is already updating its laws on data protection and information security in healthcare, the ICMR suggests that these should be binding on AI technologies as well.