RAISE Health Newsletter
 

Issue 4 | July 9, 2024

In this issue...

 

Learn about a project to develop computer vision for tracking patient health and about AI- versus human-written summaries of medical records, as well as how advances in spatial intelligence could lead to better clinical outcomes and how AI could provide a more accurate picture of a person’s mental health.

Autonomous patient monitoring in the ICU

Research highlights

Customizable pathology tool increases speed and accuracy of diagnosis

 

Pathologists have trained an AI tool to provide customizable assistance when identifying cells that might indicate diseases, such as cancer or endometritis.

  

The tool, called nuclei.io, was developed by Stanford Medicine researchers James Zou, PhD, and Thomas Montine, MD, PhD. A paper describing it was recently published in Nature Biomedical Engineering.

  

Researchers found that nuclei.io not only sped up pathologists’ work, but also improved the accuracy of their diagnoses and decreased the frequency with which they had to request additional images from a patient sample.

Green boxes highlight plasma cells — an indicator of infection — in a sample of the tissue lining the uterus. Zou lab and Montine lab

  

Read the full story


AI can outperform humans in writing medical summaries

 

Doctors rated AI generally better than humans at summarizing medical records, according to a recent study by researchers at the Stanford School of Engineering and the Stanford School of Medicine, along with their colleagues. The findings, published in Nature Medicine, are important because summarizing medical records is difficult, highly consequential, and time-consuming work.

  

For the study, the researchers adapted eight large language models, or LLMs, to clinical text and tested their summarization skills against those of human medical experts. The results show “the potential of LLMs to integrate into the clinical workflow and reduce documentation burden,” said the study’s senior author, Akshay Chaudhari, PhD, assistant professor of radiology.

  

Read about the study


Beyond ‘How often do you feel blue?’

 

Self-reporting is how most psychiatric disorders are diagnosed and monitored. Yet this approach provides only subjective impressions at brief points in time, usually recorded in environments outside a person’s daily life, such as a psychiatrist’s office.

  

Today, researchers at Stanford Medicine are developing artificial intelligence tools not only to provide a more accurate picture of a person’s mental well-being but also to flag those in need of help and to guide providers in choosing treatments. The stakes are certainly high, with concerns about privacy, safety, and bias, but AI is opening up unprecedented possibilities in psychiatry.

  

Read the full story

Illustration by Juan Bernabeu

TED talk: Spatial intelligence

Spatial intelligence in AI allows computers to act on visual information, just as humans do, said Fei-Fei Li, PhD, professor of computer science and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), during a TED talk in April.

  

One promising application of this technology is health care, she said. “My lab has been taking some of the first steps in applying AI to tackle challenges that impact patient outcome and medical staff burnout,” Li said. She described how her Stanford Medicine collaborators are piloting smart sensors that can detect when, for example, patients are at risk of falling or physicians fail to wash their hands properly.

  

“Imagine an autonomous robot transporting medical supplies while caretakers focus on our patients, or augmented reality guiding surgeons to do safer, faster, and less invasive operations,” she said.

  

Watch the full TED talk

In case you missed it…

AI dejargonator

Illustration by Emily Moskal

A distribution shift in AI model development occurs when the data a model encounters in real-world use differs from the data it was trained on. Here’s a simple way to think about it:

  

Imagine you train a model to recognize different types of fruit using pictures of apples, bananas, and oranges taken with a specific camera. The model learns to identify these fruits based on the characteristics in those pictures.

Image generated by AI

Now, if you start using the model with pictures taken by a different camera or in different lighting conditions, the characteristics of the fruit images might look slightly different. This change is a distribution shift because the new data (pictures) has a different distribution (appearance) than the training data.

  

In short, a distribution shift is when the type or quality of the data your model sees in the real world is different from what it saw during training, which can affect the model’s performance.
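The fruit example can be sketched in a few lines of code. The sketch below is purely illustrative and not from any study described in this newsletter: it trains a simple nearest-centroid "fruit" classifier on one synthetic color feature, then evaluates it on test images shifted as if taken under much brighter lighting. All class names, feature values, and the size of the shift are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from "camera A": two fruit classes described by a single
# synthetic color-intensity feature (values chosen for illustration).
apples = rng.normal(loc=0.2, scale=0.05, size=500)   # darker tones
bananas = rng.normal(loc=0.8, scale=0.05, size=500)  # brighter tones
X_train = np.concatenate([apples, bananas])
y_train = np.array([0] * 500 + [1] * 500)

# "Training": a nearest-centroid classifier learns one mean per class.
centroids = np.array([X_train[y_train == c].mean() for c in (0, 1)])

def predict(x):
    # Assign each sample to the class with the closest centroid.
    return np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)

# Test data drawn from the SAME distribution as training.
X_test = np.concatenate([rng.normal(0.2, 0.05, 200),
                         rng.normal(0.8, 0.05, 200)])
y_test = np.array([0] * 200 + [1] * 200)
acc_in = (predict(X_test) == y_test).mean()
print("in-distribution accuracy:", acc_in)

# Distribution shift: "camera B" is brighter, adding 0.5 to every value,
# so apples now look like the bananas the model saw during training.
acc_shifted = (predict(X_test + 0.5) == y_test).mean()
print("shifted accuracy:", acc_shifted)
```

Nothing about the model changed between the two evaluations; only the data did, and accuracy drops sharply on the shifted images. That is the practical danger of a distribution shift: a model that looked reliable during development can quietly degrade once deployment conditions differ from the training conditions.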

Share our newsletter with your community.


RAISE Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.