Delivering responsible AI in the healthcare and life sciences industry


The COVID-19 pandemic revealed disturbing data about health inequity. In 2020, the National Institutes of Health (NIH) published a report stating that Black Americans died from COVID-19 at higher rates than White Americans, even though they make up a smaller percentage of the population. According to the NIH, these disparities were due to limited access to care, inadequacies in public policy and a disproportionate burden of comorbidities, including cardiovascular disease, diabetes and lung diseases.

The NIH further stated that between 47.5 million and 51.6 million Americans cannot afford to go to a doctor. Historically underserved communities are especially likely to turn to a generative transformer, particularly one embedded unknowingly into a search engine, for medical advice. It is not inconceivable that individuals would go to a popular search engine with an embedded AI agent and query, “My dad can’t afford the heart medication that was prescribed to him anymore. What is available over the counter that may work instead?”

According to researchers at Long Island University, ChatGPT answered medication-related questions inaccurately 75% of the time, and according to CNN, the chatbot sometimes furnished dangerous advice, such as approving the combination of two medications that could cause serious adverse reactions.

Because generative transformers do not understand meaning and can produce erroneous outputs, historically underserved communities that use this technology in place of professional help may be harmed at far greater rates than others.

How can we proactively invest in AI for more equitable and trustworthy outcomes?

With today’s new generative AI products, trust, security and regulatory issues remain top concerns for government healthcare officials and C-suite leaders representing biopharmaceutical companies, health systems, medical device manufacturers and other organizations. Using generative AI requires AI governance, including conversations around appropriate use cases and guardrails around safety and trust (see the US Blueprint for an AI Bill of Rights, the EU AI Act and the White House AI Executive Order).

Curating AI responsibly is a sociotechnical challenge that requires a holistic approach. There are many elements required to earn people’s trust, including making sure that your AI model is accurate, auditable, explainable, fair and protective of people’s data privacy. And institutional innovation can play a role to help.

Institutional innovation: A historical note

Institutional change is often preceded by a cataclysmic event. Consider the evolution of the US Food and Drug Administration, whose primary role is to make sure that food, drugs and cosmetics are safe for public use. While this regulatory body’s roots can be traced back to 1848, monitoring drugs for safety was not a direct concern until 1937—the year of the Elixir Sulfanilamide disaster.

Created by a respected Tennessee pharmaceutical firm, Elixir Sulfanilamide was a liquid medication touted as a dramatic cure for strep throat. As was common for the times, the drug was not tested for toxicity before it went to market. This turned out to be a deadly mistake, as the elixir contained diethylene glycol, a toxic chemical used in antifreeze. Over 100 people died from taking the poisonous elixir, which led to the passage of the 1938 Federal Food, Drug and Cosmetic Act requiring drugs to be labeled with adequate directions for safe usage. This major milestone in FDA history made sure that physicians and their patients could fully trust in the strength, quality and safety of medications—an assurance we take for granted today.

Similarly, institutional innovation is required to ensure equitable outcomes from AI.

5 key steps to make sure generative AI supports the communities that it serves

The use of generative AI in the healthcare and life sciences (HCLS) field requires the same kind of institutional innovation that followed the Elixir Sulfanilamide disaster. The following recommendations can help make sure that all AI solutions achieve more equitable and just outcomes for vulnerable populations:

  1. Operationalize principles for trust and transparency. Fairness, explainability and transparency are big words, but what do they mean in terms of functional and non-functional requirements for your AI models? You can say to the world that your AI models are fair, but you must make sure that you train and audit your AI models to serve the most historically underserved populations. To earn the trust of the communities it serves, AI must have proven, repeatable, explainable and trusted outputs that perform better than a human.
  2. Appoint individuals to be accountable for equitable outcomes from the use of AI in your organization. Then give them power and resources to perform the hard work. Verify that these domain experts have a fully funded mandate to do the work because without accountability, there is no trust. Someone must have the power, mindset and resources to do the work necessary for governance.
  3. Empower domain experts to curate and maintain trusted sources of data that are used to train models. These trusted sources of data can offer content grounding for products that use large language models (LLMs) to provide variations on language for answers that come directly from a trusted source (like an ontology or semantic search). 
  4. Mandate that outputs be auditable and explainable. For example, some organizations are investing in generative AI that offers medical advice to patients or doctors. To encourage institutional change and protect all populations, these HCLS organizations should be subject to audits to ensure accountability and quality control. Outputs for these high-risk models should offer test-retest reliability. Outputs should be 100% accurate and detail data sources along with evidence.
  5. Require transparency. As HCLS organizations integrate generative AI into patient care (for example, in the form of automated patient intake when checking into a US hospital or helping a patient understand what would happen during a clinical trial), they should inform patients that a generative AI model is in use. Organizations should also offer interpretable metadata to patients that details the accountability and accuracy of that model, the source of the training data for that model and the audit results of that model. The metadata should also show how a user can opt out of using that model (and get the same service elsewhere). As organizations use and reuse synthetically generated text in a healthcare environment, people should be informed of what data has been synthetically generated and what has not.
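The content-grounding idea in step 3 can be sketched in code. The following Python is a minimal, illustrative sketch, not a production retrieval system: a small curated lookup table stands in for a trusted ontology or semantic index, and the system declines to answer when no trusted grounding is found. All names here (`TRUSTED_SNIPPETS`, `retrieve_grounding`, `answer`) are hypothetical.

```python
# Minimal sketch of content grounding: a model may rephrase an answer, but
# its substance must come from a curated, trusted source that experts maintain.

TRUSTED_SNIPPETS = {
    "drug interactions": "Always consult a pharmacist before combining medications.",
    "clinical trials": "A clinical trial tests whether a treatment is safe and effective.",
}

def retrieve_grounding(question):
    """Return (topic, trusted text) whose topic words all appear in the question."""
    q = question.lower()
    for topic, text in TRUSTED_SNIPPETS.items():
        if all(word in q for word in topic.split()):
            return topic, text
    return None  # no trusted grounding found

def answer(question):
    grounding = retrieve_grounding(question)
    if grounding is None:
        # Decline rather than let a model guess about high-risk medical topics.
        return "I can't answer that reliably. Please consult a clinician."
    topic, text = grounding
    # In a real system, an LLM would vary the phrasing of `text`;
    # the citation to the trusted source stays attached either way.
    return f"{text} (source: curated entry '{topic}')"

print(answer("What should I know about drug interactions?"))
print(answer("Is this elixir safe?"))
```

The key design choice is that the trusted source, not the model, is the system of record: the model only varies language, and questions outside the curated corpus are refused.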
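One way to make the interpretable metadata in step 5 concrete is to publish a machine-readable disclosure alongside the patient-facing service. The sketch below is an assumption, not a published standard; every field name is illustrative.

```python
# Illustrative "model disclosure" record covering the items step 5 calls for:
# accountability, accuracy, training-data provenance, audit results, opt-out,
# and labeling of synthetic content. Field names and values are hypothetical.
import json

model_disclosure = {
    "model_name": "intake-assistant",               # hypothetical model
    "uses_generative_ai": True,
    "accountable_owner": "AI Governance Office",
    "reported_accuracy": {"metric": "task accuracy", "value": 0.97},
    "training_data_sources": ["curated clinical ontology"],
    "last_audit": {"date": "2024-01-15", "result": "passed"},
    "synthetic_content_labeled": True,
    "opt_out": "Ask staff for a human-led intake; the same service is available.",
}

# A patient portal or intake kiosk could surface this record on request.
print(json.dumps(model_disclosure, indent=2))
```

Serializing the disclosure as JSON keeps it both human-readable and auditable by downstream tooling.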

We believe that we can and must learn from the FDA to bring institutional innovation to how we transform our operations with AI. The journey to earning people’s trust starts with making systemic changes that make sure AI better reflects the communities it serves.

Learn how to weave responsible AI governance into the fabric of your business

The post Delivering responsible AI in the healthcare and life sciences industry appeared first on IBM Blog.
