Guest Articles

April 24

Mark Wien

AI in African Healthcare: The Good, the Bad and the RECKLESS

Artificial intelligence (AI) has gone mainstream in the past year, with revolutionary tools now available to almost everyone in every industry. The benefits speak for themselves — from improving efficiency and streamlining workflows, to using generative AI and large language models (LLMs) to strengthen industries and open new avenues for impact.

In healthcare, the potential for AI is huge. It offers opportunities to improve the cost and quality of care, and to soften the impact of physician shortages and the lack of specialists — problems that are particularly acute in Africa and other emerging economies. For instance, AI enables new tools that can improve diagnosis and screening by analyzing test results ranging from EKGs to x-rays to ultrasounds, quickly flagging abnormalities or risks that might otherwise go unnoticed — a process that would normally require a specialist physician, who may not be accessible, introducing delays. Thanks to AI, a simple chest x-ray combined with a basic patient history can now be used to predict heart disease faster and more accurately than before, making more advanced radiological testing (which is often unavailable in developing countries) less essential. AI also opens the door to new medical research, since it can scan and interpret data from countless sources in seconds.

However, unlike many other industries, healthcare deals with human lives, so the consequences of poor decisions or mistakes can be a lost life instead of lost money. This is why healthcare is governed by many standards, including both government regulations and voluntary industry standards, which — as frustrating as they may be to the providers bound by them — are designed to safeguard best practices and quality care. The same scrutiny applies to medical research, where new work must undergo peer review before publication, with cited sources themselves referenced and peer-reviewed.

But though the need for these standards and this scrutiny is universally recognized in the sector, it is currently being ignored when it comes to the use of AI in healthcare in emerging markets, where new solutions are being introduced into actual practice without proper safeguards. Even worse, the people applying AI to healthcare processes are often non-medical professionals and technologists, rushing to roll out these new tools without caution — and without acknowledging, or perhaps even understanding, the risks they are creating.

In developed markets, relying on unproven AI for anything related to patient care without any human verification would lead to lawsuits and other penalties if a patient faced any avoidable harm or complications from the treatment. In the U.S., for example, the existing rules and regulations — combined with the significant legal, professional and financial threats to providers if anything goes wrong — are effective deterrents that ensure widespread adherence to best practices. Unfortunately, in developing countries these consequences aren’t as clearly defined, leading to a greater tendency to hurriedly adopt new technologies and treatments without a full sense of their risks and downsides. Speeding toward widespread AI adoption is creating grave risks in these countries, whose health systems are already plagued by significant shortages of workers, resources, medications and more.


AI-Driven Healthcare in Africa: A Case of Malpractice

One of the most glaring examples of this issue is happening across Africa. In my work at my company PocketPatientMD, I have come across multiple companies attempting to roll out AI-powered health products and services. This includes using unproven and untested chatbots to provide actual medical advice, diagnosis and treatment guidance. This would be a serious offense in the U.S. and Europe, where greater oversight exists. It is simply malpractice: a doctor can lose their license for using an unproven or untested solution like these AI tools if it harms a patient. Yet regardless of these global industry norms and standards, many developers and companies in Africa are already offering AI chatbots — built on free platforms and APIs like OpenAI's — to provide medical recommendations and care, and supplying these tools to doctors, healthcare workers and health organizations, despite the fact that they have not been tested. Even worse, these developers and tech companies are presenting AI as a “proven” technology when that is very much not the case.

This heedless rush to adopt AI solutions not only puts the health of individual patients at risk, it can also impact public health more broadly through the dissemination of inaccurate but highly convincing disinformation. AI is only as good as the data being used to train it, and the use of statistically insignificant data from markets that lack more reliable data — combined with AI’s alarming tendency to “hallucinate” and confidently fabricate research findings — can easily result in flawed guidance that is hard for the public to differentiate from reliable health information. This is a risk the World Health Organization has publicly recognized.

This is a scary situation, and there needs to be oversight to address it. Just because a new technology is available doesn’t mean it is necessarily safe or ready for universal use. While international companies are aware of the privacy and security standards and best practices around patient care that require them to proceed with caution in their AI adoption, local businesses and health systems in developing countries often lack this awareness. And this doesn’t even touch upon the need for rules and regulations around patient privacy and data security in health-focused AI — rules establishing what types of information may be shared — since in some cases, AI tools have access to patients’ personal and identifiable information.


Why Africa’s Healthcare Industry Must Move Toward Better AI Practices

Despite these risks, AI can offer promising solutions in healthcare settings, and at PocketPatientMD we are already looking into how algorithms, generative AI and large language models can enhance our work. But anything we decide to incorporate will follow international best practices and standards. It is extremely important that other healthcare providers in Africa follow this lead. AI can have a huge impact and bridge significant gaps in healthcare capacity if used right. But while AI technology may be new and cool, developers don’t see the harm they’re causing when a patient is prescribed the wrong treatment or given improper medical advice by their products.

It is up to healthcare industry stakeholders themselves to step up to protect these patients. Any time AI is proposed for use in patient care, these tools must be subjected to thorough scrutiny from governments and/or international agencies, which must ensure that reliable standards are in place to govern their use — and to impose consequences for their misuse. And doctors should always ask the relevant non-medical professionals — including technology experts and lawyers — whether the tools they are considering are proven, tested and safe according to best practices and standards.

Doctors themselves must also learn more about the uses and risks of AI in healthcare: Being ignorant of a tool’s potential to harm patients doesn’t excuse doctors from responsibility if they fail to avoid that harm. By learning more about AI, doctors can also begin to overcome the distrust many health professionals feel toward these tools, which many fear may replace them. Regardless of the emergence of new technologies, a physician will always be needed to verify AI outputs and to see and treat patients, even when AI tools are utilized in their care. Doctors are central to patient care, and they need to have trust in the tools they use and a voice in the development and implementation of those tools — something that is also in the patient’s best interest.

Health-focused AI companies should encourage this learning and engagement among doctors, who tend to be late adopters of technology and who may comprise a large part of the market for AI products and tools. When a new technology is introduced by non-medical professionals who don’t take patient safety and other physician priorities into account, any failures create further setbacks and distrust among physicians — making it all the more important that the technology is introduced only when it is ready. Ironically, by slowing down and approaching the process more cautiously and deliberately, health-focused AI companies may ultimately speed the adoption of their products in emerging markets. It is also important to show doctors and health organizations how the responsible use of AI can benefit them financially, increasing their income and profits while reducing inefficiencies and waste — and thus providing them with an additional incentive to carefully introduce these tools into their practice.

AI is here to stay, and as we move further into 2024, its impacts and benefits will only grow. However, it is important for countries in Africa and other emerging regions to continue to develop and follow best practices and standards, as these are meant to protect patients and improve care. The African Union is trying to take the lead in establishing these standards, but ultimately this process will require individual countries to pass their own laws — though consortiums or large organizations like the UN can also help. Accreditation organizations might also help meet this need, if they can establish credible global certificates or voluntary standards. Going further, there also need to be consequences for those AI developers who are preying upon overburdened doctors or uninformed health workers by offering solutions that are unproven, untested and potentially unsafe.

I hope the industry will do better in the coming year, and I hope that in African countries and other developing regions this issue will be addressed with the seriousness it deserves. Without stronger standards — and consequences for disregarding them — we risk rushing into a dangerous future where quality of care diminishes, costs and waste increase, and health outcomes grow substantially worse across the world’s most vulnerable health systems.


Mark Wien is Founder and CEO of PocketPatientMD.

Photo courtesy of Ivan Samkov.




Health Care, Technology
artificial intelligence, data, healthcare technology, public health, research