RAISE Health Newsletter
 
Issue 12 | March 25, 2025

In this issue...

Read about RAISE Health’s seed grant recipients, computer vision that can help triage patients in the emergency department, and an ethical framework for biomedical AI.

Feature: Inaugural seed grant recipients announced

“By fostering research and initiatives that emphasize transparency, fairness, and accountability, we can harness the transformative power of AI to improve patient outcomes, accelerate new discoveries, and enhance the quality of care for all communities,” said Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University.

The projects range from creating reliable datasets for training AI models to refining methods for assessing how disease affects patients with multiple health conditions.

Read the announcement and learn about the projects.

Feature: AI catalogs interactions between healthy and cancerous cells

In a recent study, Sylvia Plevritis, PhD, chair of the Department of Biomedical Data Science, and her team developed lab models of lung cancer, then used AI to analyze them, identifying noncancerous cells and how they were organized within and around the tumor cells.

As they collect more data, the team plans to create catalogs of maps that correspond to different cell states for a variety of cancers. “Then we can begin to see whether certain spatial motifs are shared between cancer types, regardless of where they originate in the body,” Plevritis said. “That could reveal universal rules of tumor behavior and guide the design of more broadly effective treatments.”

Read the study and an article about it.

Feature: Machine learning algorithm increases diagnostic accuracy, study finds

In a study of nearly 600 people — some healthy, others with infections such as COVID-19 or with autoimmune diseases including lupus and Type 1 diabetes — the algorithm, called Mal-ID (short for “machine learning for immunological diagnosis”), was remarkably successful. It identified which patients had had certain viruses or diseases, or had received certain vaccines, based only on their B and T cells, two types of immune cells. B cell receptors recognize free-floating pathogens, whereas T cell receptors recognize cells in the body infected with pathogens.

Mal-ID may also help researchers identify new therapeutic targets for many conditions.

Read the study and an article about it.

Feature: AI predicts hospital admission based on short videos

The best predictions occurred when the AI analyzed both the clinical information and the video clips, which were shot on a mobile phone and lasted no longer than 10 seconds, the study said. The findings demonstrate the potential of computer vision algorithms to support triage in the emergency department.

“One possible explanation is that video data implicitly encodes certain biometric parameters, such as respiratory rate and heart rate, and includes markers of patient distress and alterations in breathing patterns and uncomfortable movements,” the investigators wrote.

The senior author of the study is Lawrence Hofmann, MD, professor of radiology at Stanford Medicine.

Read the study.

Feature: A conversation with insitro CEO Daphne Koller

How is AI being used in drug discovery? In the most recent episode of The Minor Consult podcast, Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University, was joined by Daphne Koller, PhD, founder and CEO of insitro, for a conversation about how machine learning algorithms are accelerating the design of new therapies.

Listen to or watch their full conversation.

RAISE Health Symposium — Register now for virtual participation!

Join us virtually to hear from world-class experts as we take stock of AI’s rapid evolution in biomedicine and discuss real challenges — and solutions — to ensure its safe and responsible use.

Register here

AI de-jargonator

Explaining AI jargon, one concept at a time

Illustration by Emily Moskal

Explainable AI

Explainable AI refers to artificial intelligence systems designed to make their decision-making processes transparent and understandable to humans, rather than functioning as mysterious “black boxes.” These explainable systems allow users to see why the AI reached specific conclusions, with the goal of building trust and enabling people to identify and correct potential errors or biases in the models’ reasoning.

For example, consider an AI model that predicts whether a patient is likely to be readmitted to the hospital within the next 30 days. An explainable version of this model wouldn’t just give the prediction — it would also highlight the key factors that influenced the outcome, such as recent lab results, previous hospital visits, or certain risk factors in the patient’s medical history.
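
To make this idea concrete, here is a minimal sketch of that kind of explanation in Python, using a simple logistic regression trained on synthetic data with scikit-learn. The feature names, the data, and the model are hypothetical illustrations, not the system from any study in this newsletter; each feature’s signed contribution to the model’s score serves as the “why” behind the prediction.

# A minimal, hypothetical sketch of explainable AI: a 30-day readmission
# classifier that reports each feature's contribution to its prediction.
# All feature names and data here are illustrative, not from a real model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["abnormal_lab_result", "prior_visits_12mo",
                 "age", "chronic_condition_count"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic labels: risk driven mostly by prior visits and lab results.
y = (1.5 * X[:, 1] + 1.0 * X[:, 0] + 0.3 * X[:, 3]
     + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(patient):
    """Return the predicted probability and each feature's
    contribution to the model's score, largest first."""
    z = scaler.transform(patient.reshape(1, -1))
    prob = model.predict_proba(z)[0, 1]
    contributions = model.coef_[0] * z[0]  # per-feature logit terms
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: -abs(pair[1]))
    return prob, ranked

prob, ranked = explain(X[0])
print(f"Predicted readmission probability: {prob:.2f}")
for name, value in ranked:
    print(f"  {name:24s} {value:+.2f}")

A clinician reading this output could see at a glance whether, say, prior visits or an abnormal lab result drove the score. Production systems typically rely on more sophisticated attribution methods, but the principle is the same.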

INFOGRAPHIC: Scholars propose ethical framework for mitigating AI risks in biomedicine

A consortium of scientists and scholars is producing a series of papers on the best ways to manage AI risks in biomedicine. Most recently, they published a paper in Nature Machine Intelligence that proposes an ethical framework (see Figure 1). The goal is to help biomedical researchers account for and protect against unintended negative consequences of working with AI.

The ongoing project of developing the framework stems from a need to keep pace with rapidly advancing technology, which pushes into new ethical territory faster than institutions can create protective guardrails and regulations, according to David Magnus, the senior author of the paper and the Thomas A. Raffin Professor in Medicine and Biomedical Ethics at Stanford Medicine.

Image courtesy of Quinn Waeiss

Read the study or an article about it.

Share our newsletter with your community.

A joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.