RAISE Health Newsletter

Issue 6 | September 9, 2024

In this issue...

Read about the new RAISE Health Seed Grants Program, AI for training peer counselors, machine learning for curbing the growth of drug resistance in microbes, the ethical considerations of using AI in clinical trials, and more.

Deadline for RAISE Health seed grant proposals: Sept. 30

Learn more and apply

HAI awards Hoffman-Yee Research Grants

Of the many teams that applied, six interdisciplinary research teams from across Stanford were awarded Hoffman-Yee Research Grants in 2024; four of the teams included researchers from the School of Medicine. Each team will receive $500,000 in the first year, with the opportunity for as much as $2 million more over the following two years.

Administered by the Stanford Institute for Human-Centered Artificial Intelligence, the grants support work on significant scientific, technical, or societal challenges that require a bold, interdisciplinary approach. The research projects span key areas of the institute’s focus: understanding the human and societal impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence.

The grants are made possible by a gift from philanthropists and Stanford alumni Reid Hoffman and Michelle Yee.

Read the announcement

Applying machine learning to curb antibiotic resistance

Read the full story

Using AI to train peer counselors

In a new working paper, researchers from Stanford University, Carnegie Mellon University, and Georgia Institute of Technology present an AI-based model that offers feedback to novice peer counselors to improve their ability to help people with mental illness.

The project is rooted in a unique partnership between Stanford Medicine psychologist Bruce Arnow, PhD, and Stanford computer scientist Diyi Yang, PhD. They are co-authors of the paper, which received support from the Stanford Institute for Human-Centered AI.

“Interdisciplinary collaboration was essential in undertaking this project,” Arnow said. “AI has enormous potential to help improve both the quality and efficiency of psychotherapy training, but the mental health community is not well-equipped to develop an AI-assisted training model, and the computer science community is not grounded in counseling intervention skills. Forming a team with both disciplines enabled us to progress in this exciting new area of investigation.”

Read the full story

In the following Q&A, he discusses the paper’s findings and what patients should consider before agreeing to participate in a clinical trial of AI.

What kind of ethical considerations go into planning a clinical trial?

The National Institutes of Health has published seven principles for guiding ethical clinical trials: social and clinical value, scientific validity, fair subject selection, a favorable risk-benefit ratio for participants, independent review, informed consent for all participants, and respect for all participants.

Do these considerations change in trials that involve AI?

We investigated how well these seven principles generalize to trials of AI and the unique ethical concerns that emerged for researchers.

Our study revealed several ethical challenges unique to clinical trials of AI, including difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across patient subgroups, and addressing the complexities inherent in the data-use terms of informed consent.

Can you give an example of how one of these challenges might play out?

For a principle like fair subject selection, consider a trial of an autonomous AI tool designed to expand disease-screening access for populations that, for socioeconomic reasons, don’t currently have adequate access to screening. This scenario raises two significant ethical questions. First: Is it acceptable to use a biased AI tool? Using AI to expand screening creates a tension between reducing inequity of access and deploying AI trained on biased data; by definition, populations without access are underrepresented in the current data. Second: Is it acceptable to provide a different type of care to these populations? More vulnerable populations would be screened by an AI tool rather than by the physicians who screen better-resourced populations.

What questions do you think patients should be asking before they agree to participate in a clinical trial involving AI?

While our research focused on considerations for those designing trials, an issue of growing importance for patients will be informed consent: balancing the unknown risks of AI screening against the known risks of untreated or unscreened disease. Risk-benefit ratios are likely to differ depending on whether AI is compared with current screening methods or with no screening at all.

Educating faculty on the equitable application of AI

Tina Hernandez-Boussard, PhD, MPH, professor of medicine, of biomedical data science, and of surgery, grew up in Bishop, California, raising livestock and competing in rodeo events. As AI becomes more prominent in health care, she notes, it could skew diagnoses if it’s not trained on medical data from diverse patient populations. That is especially worrisome in rural areas like Bishop, which have marginalized populations.

These areas often operate as “health care deserts” that place huge burdens on the providers who serve them. For such providers, the allure of AI to save time and money could mean it’s deployed for the very populations most likely to be underrepresented in its training data.

Turning things around demands a new approach, one for which Hernandez-Boussard is uniquely qualified. In her latest role as associate dean of research for the School of Medicine, she is focusing on educating faculty across the university on the equitable application of AI.

Read the full story

How to put AI tools into the hands of primary care physicians

Watch the episode

Want to learn more about AI?

RAISE Health aims to create engaging and informative content about the use of AI in health and medicine to share with the community and feature on the initiative’s recently launched Education Dashboard.

Your response to this short survey will help identify the most relevant and impactful topics. Please take a few moments to answer the following questions.

Take the survey

ICYMI: Understanding the Future of Medicine: Generative AI Workshop

AI De-jargonator

Illustration by Emily Moskal

Diffusion models are a type of artificial intelligence that can create new images by learning from existing ones. They start with a completely random pattern, like a static-filled TV screen, and slowly transform it into a clear, detailed image by following patterns they’ve learned from other pictures. This step-by-step process helps the AI “imagine” and create realistic images from scratch. In health care, diffusion models have exciting potential. For example, they can help create detailed images for medical research and training, like generating high-quality scans or images that mimic real medical conditions. These models could also be used to predict how certain conditions might look in medical imaging, assisting doctors in early diagnosis or planning treatment.
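The step-by-step denoising idea above can be sketched in a few lines of code. This is a minimal toy illustration, not a real diffusion model: the `CLEAN` target and the `predict_noise` "oracle" are stand-ins for what an actual model learns from millions of images with a trained neural network.

```python
# Toy sketch of reverse diffusion: start from pure noise (a "static-filled
# TV screen") and repeatedly remove a little predicted noise until a clean
# "image" (here, just a short list of pixel values) emerges.
import random

CLEAN = [0.1, 0.5, 0.9, 0.5, 0.1]  # stand-in for a learned clean image

def predict_noise(noisy):
    # A real diffusion model learns to predict noise with a neural network;
    # this toy oracle simply measures the gap from the clean target.
    return [n - c for n, c in zip(noisy, CLEAN)]

def denoise(steps=50):
    random.seed(0)
    x = [random.gauss(0, 1) for _ in CLEAN]  # start from random static
    for _ in range(steps):
        noise = predict_noise(x)
        # Remove a small fraction of the predicted noise at each step.
        x = [xi - 0.1 * ni for xi, ni in zip(x, noise)]
    return x

result = denoise()
```

Because each step removes only a fraction of the noise, the sample drifts gradually from static toward the learned pattern, which is the essence of the iterative process described above.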

Share our newsletter with your community.

RAISE Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.