RAISE Health Newsletter

Issue 5 | August 6, 2024

In this issue...

Learn about a therapeutic antibody made 25 times more potent with machine learning, a framework for regulating large language models in psychotherapy, how AI developers and regulators can better develop sound AI policies in medicine, and more.

New AI approach optimizes antibody drugs

Biochemistry professor Peter S. Kim co-led research on a new machine learning-based method for designing antibody drugs. The method was shown to improve an FDA-approved SARS-CoV-2 antibody that had been discontinued due to ineffectiveness. Photo: Steve Fisch

Image: Varun Shaker

Read the full story

Toward responsible development and evaluation of LLMs in psychotherapy

In a policy brief, Stanford University researchers and their colleagues sound a cautionary note about the potential of large language models, such as OpenAI’s GPT-4, to support, augment, and automate psychotherapy practices.

Policymakers have the responsibility to ensure that mental health practitioners and product developers evaluate these innovations carefully, taking into account their potential limitations, ethical considerations, and risks, the authors assert.

In the brief, they propose a framework for evaluating and reporting on whether AI applications are ready for clinical deployment in behavioral-health contexts based on safety, confidentiality, privacy, equity, effectiveness, and implementation concerns.

Read the full policy brief

Curt Langlotz, MD, PhD, associate director of HAI, led the workshop and highlighted some of the day’s conversation during a Q&A. The following is an excerpt:

What do regulators need to know about AI?

The Food and Drug Administration is our primary federal regulator of clinical AI. They already know a lot about AI and are doing a fine job under difficult constraints, balancing safety and innovation using a 50-year-old regulatory regime designed at the time of paper records and fax machines.

I would emphasize the challenges faced by the potential purchasers of AI algorithms right now. In my specialty, radiology, there are over 600 FDA-cleared algorithms, and over 100 companies selling AI products to radiologists. We know that these algorithms don’t generalize well to new populations. So, many potential customers are having a difficult time determining whether a given AI product will work in their practice. We need more transparency about the data on which these products were trained.

What do AI developers need to better understand about regulation?

Developers have a tendency to think of regulations as a problem to overcome. But in many ways, we are fortunate that health AI is already a regulated industry with a neutral party ensuring that we build safe and effective systems. Lately, we have seen in other industries how a lack of standards can undermine public trust in AI.

We also should do more to eliminate the wasted effort that occurs when developers aren’t aware of the rigorous evaluations that regulators expect. If we applied the required rigor from the start, we would avoid the need to re-run experiments later.

Read the full Q&A

How insights from neuroscience can inform better AI development

What does neuroscience have to say about how AI could become as flexible, efficient, and resilient as the human brain?

That is the question addressed in a recent episode of From Our Neurons to Yours, a podcast of Stanford’s Wu Tsai Neurosciences Institute. Surya Ganguli, PhD, associate professor of applied physics and a member of the institute, speaks about what the new generation of powerful AI tools, such as OpenAI’s DALL-E and Anthropic’s Claude, might teach us about our own biological intelligence, and vice versa.

Ganguli’s lab produced some of the first diffusion models, which are at the foundation of today’s AI revolution, and is now researching how complex emergent properties arise from biological and artificial neural networks.

Listen to the full episode

RAISE Health education dashboard, AIMI highlight

If you want to grow your knowledge about AI and its use in medicine, the RAISE Health website has educational resources for you. The education dashboard offers a variety of resources to support your learning journey, from basic definitions and explainer videos to recorded lectures and in-depth coursework.

Featured class: AIMI NextGen Tech Talks

AIMI NextGen Tech Talks is a live webinar series for high school students interested in exploring the intersection of AI, medicine, and health. The series gives attendees the opportunity to gain insights from distinguished experts as they share their professional journeys in shaping the future of health care through technology. Participants can also interact directly with the speakers during a live Q&A session.

Date: Monday, August 26, 2024, 5-5:45 p.m. Pacific time

Format: Live webinar presentation and Q&A

Registration: Free and open to all ages

AI De-jargonator

Illustration by Emily Moskal

Imagine that leaders at a group of hospitals want to develop an AI model to detect early signs of a disease from medical images. They each have lots of patient data but can't share it due to privacy concerns.

With federated learning, instead of pooling the data in one place, each hospital trains its own AI model using its own data. Then, the hospitals share only what the AI has learned (not the actual data) with a central server. The server combines these updates to improve the AI model and sends the improved model back to the hospitals. This process repeats, each time enhancing the model without sharing sensitive patient information.

This collaborative approach helps maintain data privacy and security while allowing the AI model to be improved.
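
The round trip described above can be sketched in a few lines of code. This is a minimal illustration only, assuming synthetic data and a simple logistic-regression model standing in for each hospital's AI; the "hospitals," datasets, and function names are all invented for the example, not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical "hospitals," each holding its own private dataset
# (imaging features X, disease labels y) that never leaves the site.
TRUE_W = np.array([1.5, -2.0, 0.5])  # hidden relationship the model should learn

def make_site_data(n_patients):
    X = rng.normal(size=(n_patients, TRUE_W.size))
    y = (X @ TRUE_W > 0).astype(float)
    return X, y

sites = [make_site_data(n) for n in (200, 150, 300)]

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """Train locally on one site's data; only the weights leave the hospital."""
    w = w_global.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # logistic predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on the log-loss
    return w

# Federated averaging: the server combines weight updates, never raw data.
w_global = np.zeros(TRUE_W.size)
for _ in range(20):  # communication rounds
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    sizes = [len(y) for _, y in sites]
    w_global = np.average(local_weights, axis=0, weights=sizes)
```

Weighting each site's contribution by its number of patients mirrors the standard federated-averaging aggregation rule. In practice the local models are usually neural networks and the communication channel is secured, but the pooling-free round trip is the same.
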

 
 
 
 
   
 
 
 

Share our newsletter with your community.


A joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.