RAISE Health Newsletter
 

Issue 7 | Oct. 15, 2024

In this issue...

The summary paper from the 2024 RAISE Health Symposium, the complexities of adjusting for race in clinical algorithms, the “FURM” test that AI-guided workflows must pass before use in clinical settings, an upcoming seminar on bridging the gap between medical foundation models and clinics, and what “model deterioration” means.

Feature: Takeaways from RAISE Health Symposium now online

Read the full paper

What do you think people will learn from reading this summary paper from the RAISE Health Symposium?

They’ll see where the biggest opportunities and challenges lie for AI in medicine. It’s a great snapshot of what’s happening now, what’s needed to address the challenges and capitalize on the opportunities, and how we can responsibly develop these tools to improve care.

What findings in the paper stood out to you?

The need for collaboration is clear. Successful AI initiatives cannot be just tech-driven — they must involve clinicians, patients, ethicists, and engineers working together. Another key point is how essential it is to address bias and data quality at the beginning of the development process, when we are deciding what problem to solve. If we don’t, we risk amplifying disparities instead of reducing them.

Some of the challenges outlined in the paper seem insurmountable — what do you see as some immediate next steps to ensure AI is responsibly deployed in biomedicine?

It starts with building trust. We need to put clear guidelines and best practices in place, keep an open dialogue across the health and tech sectors, and make transparency the norm. Small, intentional steps — like ensuring diverse datasets and prioritizing explainability — can help set a solid foundation.

Who is ultimately responsible for ensuring the guidelines and guardrails for using AI in health are implemented?

Responsibility is shared. Health care organizations, technology companies, and regulators all play a role, but it is the leaders in these organizations who must champion ethical AI practices and ensure that they are followed. Clinicians and health system executives, in particular, have a critical role in ensuring that AI tools are used to support — not replace — clinical judgment. Establishing a culture of accountability and committing to ongoing evaluation and monitoring are key to ensuring these systems continue to benefit our patients.

Stanford Center for Digital Health wants to hear from you about AI

The Stanford Center for Digital Health is conducting a national survey to gather health care workers’ perspectives about the use of artificial intelligence in medicine. We encourage all health care professionals to participate. Your insight is critical as AI technologies continue to evolve and show potential for improving patient outcomes, reducing administrative burdens, and enhancing care delivery. The survey is anonymous and takes only a few minutes to complete.

Take the survey

Policy brief explores complexities of accounting for race in clinical algorithms

Read the policy brief

Researchers develop framework for evaluating AI in health care

Stanford Medicine researchers have developed a tool for evaluating whether AI-guided workflows will be useful in clinical settings. The Fair, Useful, and Reliable AI Model (FURM) testing and evaluation mechanism is designed to gauge the ethical, financial, clinical, and workforce implications of AI models in health care. In an article published last month in NEJM Catalyst Innovations in Care Delivery, the researchers summarize FURM assessments of six AI-guided tools proposed for use at Stanford Health Care. Two were subsequently approved for implementation. Since February, FURM assessments have been required for every AI system proposed for deployment at Stanford Health Care, the researchers say.

Read the article

HAI seminar: Closing the gap between medical foundation models and clinics

Sheng Wang, PhD, an assistant professor in the School of Computer Science and Engineering at the University of Washington, will give a seminar on gaps in medical foundation models that must be closed for such models to be useful in clinics. Hosted by Stanford’s Institute for Human-Centered AI, the free event is scheduled from 12 to 1:15 p.m. on Nov. 6 and can be attended in person or via Zoom.

A former postdoctoral scholar at Stanford Medicine, Wang will address three gaps — unmatched patient information, privacy, and constraints of graphics processing units — and the models that can help close them. The talk will conclude with a vision of “everything everywhere all at once,” in which medical foundation models and generative AI benefit every patient in every clinic simultaneously.

Register for the seminar

A conversation with The Washington Post on how AI is transforming health care

Lloyd Minor, dean of the School of Medicine and vice president for medical affairs at Stanford University, recently sat down with Yun-Hee Kim, technology editor with The Washington Post, for a wide-ranging discussion about AI’s growing influence in health care. From care delivery to drug development to mental health, Minor expressed his excitement about the promise and potential of AI and the need to safely, responsibly, and equitably develop and deploy AI tools for health care.

Watch the conversation

AI De-jargonator

Illustration by Emily Moskal

Model deterioration — or model degradation — refers to the decline in a predictive model’s performance or accuracy over time, typically because the data the model encounters in practice drifts away from the data it was trained on. In health care, the phenomenon is particularly concerning because it can quietly erode the effectiveness of models used for diagnosing diseases, predicting outcomes, or recommending treatment plans.


For instance, if a predictive model for breast cancer screening is trained on data that mainly includes older patients with certain characteristics, but over time the patient population changes to include younger patients with different characteristics, the algorithm may not perform as well for the new patient population.


To prevent model deterioration, algorithms should be monitored regularly and periodically updated and retrained on data that reflects the current patient population.
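
For readers who want a concrete picture of what that monitoring can look like, here is a minimal sketch in Python. It is purely illustrative — not a Stanford Medicine or RAISE Health tool — and assumes a deployed scikit-learn-style classifier, a batch of recent labeled cases, and a baseline AUROC recorded at deployment; the function name and tolerance are hypothetical choices.

    # Illustrative sketch only: re-score a deployed classifier on recent,
    # labeled cases and flag deterioration when performance drops below the
    # level measured at deployment. Model, data, and threshold are assumptions.
    from sklearn.metrics import roc_auc_score

    def check_for_deterioration(model, recent_cases, recent_labels,
                                baseline_auc, tolerance=0.05):
        # Score the latest batch of cases with the deployed model
        predicted_risk = model.predict_proba(recent_cases)[:, 1]
        current_auc = roc_auc_score(recent_labels, predicted_risk)

        # Flag deterioration if performance slipped more than the tolerance
        deteriorated = (baseline_auc - current_auc) > tolerance
        return current_auc, deteriorated

A flagged drop would prompt a closer look at how the patient population or data have shifted and, if needed, retraining on data that reflects the current population — the remedy described above.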

Share our newsletter with your community.

RAISE Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.