Clear Cut Magazine

When the Algorithm Doesn’t Know Your Face: The WHO’s Push for AI That Works for Everyone


The WHO’s “AI for All” framework highlights the urgent need to eliminate bias in healthcare AI and ensure equitable, inclusive technology for underserved populations worldwide.


Global Health & Technology | Social Development

There is a village in rural Bihar, India, where the nearest physician is 40 kilometers away. A child develops a fever at midnight. What does her mother, with her old smartphone and weak internet connection, do? She uses an AI tool she found on a government website. The tool makes a recommendation. But the question that is not asked loudly enough remains: was that tool ever designed with any knowledge of her child? Of her child’s genetic makeup, her eating habits, her environment, the diseases that plague her region? Or was it designed, as so much in global health seems to be, in the image of someone else entirely?

The question is not an idle one. It is precisely the question the WHO is grappling with in its new framework, “AI for All: Bridging the Global Health Divide.” The framework may be the most important institutional recognition yet that AI in global health is a revolutionary force that could also create a new kind of inequality.

The Ghost in the Machine: What is Data Bias?

In order to understand the importance of the World Health Organization’s intervention, one must first understand the quiet violence of algorithmic bias. When programmers build a medical AI system, whether it diagnoses pneumonia from a chest X-ray, projects the likelihood of diabetes, or triages a patient remotely, the system is trained on data from thousands or even millions of past cases.

A growing body of research demonstrates how AI health technologies can exacerbate existing biases and even introduce new ones. In one of the most widely cited studies on the subject, researchers discovered racial bias in an algorithm used to allocate healthcare resources. The bias stemmed from the algorithm’s use of historical healthcare costs as a proxy for healthcare need, and those costs are lower for Black patients not because they are healthier, but because they are less likely to have received care in the first place.

Fixing this racial bias could have increased the proportion of Black patients receiving additional care from 17.7% to 46.5%, a figure that should stop the reader dead in their tracks. It is not an insignificant number. It is the difference between life and death, masquerading as arithmetic.
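For readers curious how a proxy can smuggle bias into a ranking, the mechanism can be sketched in a few lines. This is a toy illustration with invented numbers, not the actual algorithm or data from the study: two hypothetical patients have identical medical need, but one has lower historical spending because of reduced access to care.

```python
# Toy illustration (synthetic data, not from the cited study) of how ranking
# patients by historical cost, as a proxy for health need, encodes bias.

# Two hypothetical patients with identical underlying health need.
patients = [
    {"name": "patient_a", "need": 8, "historical_cost": 5000},  # well-insured, high past spending
    {"name": "patient_b", "need": 8, "historical_cost": 2000},  # under-served, low past spending
]

# A naive "risk score" that ranks patients by past cost rather than need.
def cost_proxy_score(patient):
    return patient["historical_cost"]

ranked = sorted(patients, key=cost_proxy_score, reverse=True)

# The proxy puts patient_a ahead of patient_b for extra care,
# even though both patients need exactly the same level of attention.
print([p["name"] for p in ranked])  # ['patient_a', 'patient_b']
```

The point of the sketch is that no variable for race appears anywhere in the code; the disparity enters entirely through a proxy variable shaped by unequal access.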

A World Trained on the Wrong Patients

The problem is compounded when the focus shifts southwards. Today’s health AI tools were largely built on data collected in wealthy countries, so they may not function as intended, or produce accurate results, when deployed in the Global South. The reason is not malice but economic gravity: wealthy countries generate more health data, conduct more research, and host more clinical trials. The Global South carries more of the world’s disease burden yet is least represented in the very tools meant to address it.

In much of the Global South, healthcare still relies on paper-based record keeping, leaving analytics decentralized and fragmented even in areas such as infectious disease surveillance. An AI tool trained largely on electronic health records from Boston or Berlin is, in effect, meeting a patient it was never designed to understand.

WHO Draws a Line in the Sand

The WHO’s “AI for All” framework makes equity its non-negotiable foundation, a requirement rather than an afterthought. It calls on developers to confront the biases already encoded in training data along lines of race, ethnicity, age, and gender, and on governments to close the digital divide. The framework also sets out specific requirements, including the need to train diagnostic algorithms on diverse data that accurately represents the Global South.

The alternative, deploying biased technology in countries with weak regulatory settings, is worse than inefficient. Nations with weak or non-existent data privacy laws risk having suboptimal algorithms folded into national healthcare policy with no oversight. The WHO, in effect, is trying to prevent the technology from becoming a form of digital colonialism.

The Equity-by-Design Argument

Critics of strict AI regulation argue that progress must be allowed to accelerate unimpeded. The WHO framework counters that genuine progress is only possible with equity, and that equity cannot be bolted on afterwards; it must be built into the design itself.

In practice, this means multidisciplinary teams that include people from the Global South and other marginalized communities. The STANDING Together initiative, for example, brought together over 350 stakeholders from 58 countries, who agreed on recommendations that include proper documentation of health datasets and the evaluation of AI tools across diverse populations. These recommendations are echoed in the WHO framework.

What This Means at the Grassroots

Let’s return to the mother in Bihar. What would it mean if the WHO framework actually worked for her? It would mean that the diagnostic tool on her phone had been trained, at least in part, on data from people like her daughter: people who live as she does and face the diseases common to her region. When the algorithm flags a potential problem, it will be drawing on more than records from a hospital in Hamburg; it will be drawing on knowledge of her own community.

Studies have shown AI systems, even well-intentioned ones, trained on data in which people of color must present with more severe symptoms than white patients to receive the same level of care, whether for heart surgery or a kidney transplant. These numbers are not mere statistics; they are the difference between a child being sent home and a child receiving treatment.

Voices of Caution and Hope

Not everyone shares this hope, however. Some experts caution that without enforcement, such frameworks are merely declarations, not solutions. Others warn that large multimodal AI systems can foster automation bias among healthcare providers, leading doctors to overlook mistakes or defer difficult decisions to a system that merely appears authoritative.

There is also the uncomfortable reality of the business side. When large tech companies own the lion’s share of health data, analytics, and algorithm development, decisions that should rest with health authorities and patients end up in corporate hands. Without directly confronting this reality of the global tech industry, the framework risks being rendered obsolete by the very entities it seeks to regulate.

Still, the WHO’s framework matters because it represents a global institutional acknowledgment that the status quo is no longer acceptable. It serves as a statement of intent for developers, policymakers, and funders worldwide: equity is not optional, and effective regulation and oversight are essential if AI is to benefit all people, especially underserved communities around the world.

Conclusion: The Algorithm Must Learn New Faces

At bottom, technology is simply a mirror. It reflects our values, our prejudices, our aspirations. For too long, the mirror of medical AI has reflected a small segment of humanity and called it universal. The WHO’s “AI for All” framework does not promise that this will change anytime soon. But it does declare, with great clarity and authority, that the algorithm has to learn to see new faces: the sun-scorched farmer in Malawi, the baby in Bihar, the elderly woman in rural Bolivia who has never seen a doctor but just maybe will one day.

That is not a small thing. That is, in fact, everything.

References

  1. https://www.brookings.edu/articles/health-and-ai-advancing-responsible-and-ethical-ai-for-all-communities/
  2. https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1643180/full
  3. https://www.weforum.org/stories/2024/09/racial-bias-healthcare-data-equity/
  4. https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/
  5. https://iris.who.int/server/api/core/bitstreams/d2913ae3-c8e0-4a46-b6ff-b4b121e936f4/content
  6. https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models
  7. https://pmc.ncbi.nlm.nih.gov/articles/PMC12214331/
  8. https://www.nature.com/articles/s41746-025-01534-0
  9. https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/CSW/63/official-meetings/Claudia%20Wells%20updated.pdf

Clear Cut Health Desk
New Delhi, UPDATED: March 19, 2026 05:00 IST
Written By: Tanmay J Urs
