RAISE Health Newsletter
 

Issue 11 | Feb. 25, 2025

In this issue...

 

Meet our first faculty champion for AI medical education, hear from Google’s chief health officer, and learn about the promise of AI built on multimodal data.

Feature: Making a case for more long-term health data to evaluate large language models

To address this, Stanford HAI researchers developed EHRSHOT, INSPECT, and MedAlign — three new de-identified longitudinal electronic health record (EHR) datasets. These datasets — freely available to researchers — provide patient data that spans extended time periods, enabling more rigorous evaluation of AI models among diverse populations and health systems.

   

By making high-quality, long-term EHR data accessible, the researchers hope to improve the safety, fairness, and reliability of AI in health care. With better datasets, researchers can develop better AI that enhances patient care without reinforcing disparities.

   

Visit the datasets section of our Resource Hub to learn more about EHRSHOT and INSPECT.

   

Read the full article

Feature: Stanford-designed AI tool helps predict cancer prognoses and treatment response

For example, the model correctly predicts disease-specific survival 75% of the time, compared with 64% for traditional models. It is also better at identifying which patients will benefit from immunotherapy, achieving 77% accuracy, versus 61% for standard tests.

   

Trained on 50 million medical images and more than 1 billion pathology-related texts, the model demonstrates the potential of AI trained on multimodal datasets — those combining medical images, text, and other data for deeper clinical insights.

   

Read the full story

On Jan. 15, Jonathan Chen, MD, PhD, assumed the role of the Stanford School of Medicine’s inaugural faculty champion for AI in medical education. Chen will lead the implementation of the school’s strategic initiative to integrate artificial intelligence concepts and competencies into Stanford Medicine’s medical education curriculum.

  

What is your mandate for this position?

Define, develop, and disseminate the curriculum needed to train (and retrain) a generation of our health care workforce to safely and effectively integrate AI into their clinical practice and improve patient care, building community consensus around responsible AI along the way.

  

Why do you think it’s important for AI to be a part of a medical student’s education?

While much “AI” of prior years was overhyped, the arrival of usable large language model chatbots and other generative AI systems has many qualities of a disruptive technology. Although there will still be false starts and overpromises, I liken this to the arrival of the internet. Particularly now that many, including our own researchers, have found that automated chatbots can outscore medical students and practicing physicians on medical knowledge and reasoning exams, our entire structure of medical education and assessment has been turned on its head and needs to be rethought.

  

Who would be teaching these classes?

The exciting opportunity and challenge at hand is the inherently interdisciplinary nature of the subject.

  

We’ll need to overcome some culture shock so that the medical community can mutually learn from other disciplines such as computer science and engineering, while also teaching the teachers to enable our medical community to bring their wisdom, experience, and judgment on where AI tools can, should, and will make a difference.

  

When can medical students expect to take courses on AI?

Part of the reason I was brought on is that there is currently no AI at all in the formal standard medical student curriculum. Much of the content and expertise already exists on our campus, but work is needed to organize and assimilate it into existing curricula and make it broadly accessible so that Stanford can lead the nation and the world in guiding and training the broader community in this rapidly evolving area.

  

Can you provide some examples of what the students would be learning about?

As much as everyone uses the internet, we all should learn about the capabilities, limitations, and implications of emerging AI technologies.

  

It surprises me how many still have not even tried using a large language model AI chatbot, so there is some foundational learning to do around how these systems started as highly advanced auto-complete tools yet exhibit emergent properties that create the illusion of intelligence, capturing everyone’s imagination. Practical learning topics would include how to use and “prompt” these systems to make them useful, while understanding limitations such as confabulations and hallucinations that produce inadvertent and difficult-to-recognize misinformation.

  

We’ve arrived at a bizarre moment in history where human-versus-computer-generated, real-versus-fabricated information cannot be reliably distinguished, requiring everyone to be savvy to what is reliable information and believable media.

  

Would there be educational opportunities for faculty?

Yes, there are many existing workshops and seminars on our campus and beyond. I’ve been on tour around the country for the past year doing many grand rounds and conference presentations to get broader communities up to speed on a rapidly moving topic. More significantly, we are likely to find that we as a community can learn a lot from each other. The revolutionary element of emerging generative AI systems is that, with more natural and intuitive “chat” interfaces, anyone can try them out, discover capabilities, and teach others about them, without having to wait for a programmer or data scientist to unpack it all.

Spotlight: A conversation with Google’s chief health officer, Karen DeSalvo

This episode is the fourth in a series that delves into AI and its implications for biomedical innovation.

    

Watch and listen to the conversation

Think Health: AI for Healthy Communities

The discussions focused on how communities in America’s Heartland can use and benefit from today’s AI advances and develop regional capacity to meaningfully participate in the health AI revolution.

Read the article

Save the date: 2025 RAISE Health Symposium

RAISE Health’s second symposium will take place at Stanford Medicine on June 2, 2025. This year’s event aims to build on last year’s inaugural gathering by focusing more deeply on practical challenges and actionable solutions to ensure the safe and responsible use of AI in biomedicine. Stay tuned for upcoming event details.

AI de-jargonator

Explaining AI jargon, one concept at a time

Illustration by Emily Moskal

Multimodal data

 

Multimodal data refers to information of multiple types, such as text, images, audio, and video, used together to train AI models. Each of these modalities can provide unique insights and details about a subject or phenomenon.

For example, in health care, multimodal data might include:

  • Clinical notes (text) from physicians
  • Medical imaging (images) from X-rays or MRIs
  • Genomic data (structured data) from genetic tests
  • Patient demographics (structured data) like age and gender

Combining these different types of data can provide a more comprehensive understanding of a patient's health, leading to better diagnosis and treatment options. Analyzing multimodal data often requires specialized algorithms and techniques to effectively integrate and interpret the diverse information.
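One common integration technique is "early fusion": each modality is converted to a numeric feature vector, and the vectors are concatenated into a single input for a downstream model. The sketch below illustrates the idea only; the feature choices, scaling constants, and function names are hypothetical toy examples, not a real clinical pipeline.

```python
def embed_text(note: str) -> list[float]:
    # Toy text features: note length and word count, scaled to roughly [0, 1].
    return [min(len(note) / 1000, 1.0), min(len(note.split()) / 200, 1.0)]

def embed_image(pixel_intensities: list[float]) -> list[float]:
    # Toy image features: mean and max intensity of a (pretend) scan.
    return [sum(pixel_intensities) / len(pixel_intensities), max(pixel_intensities)]

def embed_structured(age_years: int, variant_count: int) -> list[float]:
    # Toy structured features: normalized age and genomic variant count.
    return [age_years / 120, min(variant_count / 50, 1.0)]

def fuse(note: str, pixels: list[float], age: int, variants: int) -> list[float]:
    # Early fusion: concatenate per-modality vectors into one feature vector
    # that a single model can consume.
    return embed_text(note) + embed_image(pixels) + embed_structured(age, variants)

features = fuse("Patient reports mild chest pain.", [0.1, 0.4, 0.9], 62, 7)
print(len(features))  # one 6-dimensional fused vector
```

In practice, each `embed_*` step would be a learned encoder (for example, a language model for notes and a vision model for scans), but the fused vector plays the same role: one representation that lets the model reason across modalities at once.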

Share our newsletter with your community.

A joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.