RAISE Health Newsletter

Issue 2 | May 7, 2024

In this issue...

Get the scoop on the newly released AI Index from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), learn more about research that uses generative AI as a foundation for new antibiotics, and listen to a recent conversation between School of Medicine dean Lloyd Minor and FDA commissioner Robert Califf on regulating AI.

Don’t miss: RAISE Health’s Inaugural Symposium

On May 14, Stanford Medicine and HAI will host the inaugural RAISE Health symposium on responsible AI innovation in health and medicine.

Register today to secure a spot at our livestreamed event, where leaders across sectors will address crucial issues to ensure AI upholds the highest ethical, clinical and research standards.

If you have any questions, the RAISE Health team is here to help. Email us at: [email protected].

In the news: The AI Index

HAI recently published its seventh annual AI Index, a comprehensive report on the state of AI around the world. Among its many features, this year’s index examines responsible AI and includes a new chapter dedicated to science and medicine.

What is the AI Index?

Now in its seventh year, the AI Index tracks, collates, distills and visualizes data related to AI. Its mission is to provide unbiased, rigorously vetted, broadly sourced data so that policymakers, researchers, executives, journalists and the general public can develop a more thorough and nuanced understanding of the complex field of AI. The index is produced by HAI and guided by a steering committee of leaders from academia, industry and government across multiple disciplines.

Science and medicine has its own chapter this year – can you tell us why that’s significant?

In previous years, information about AI progress in science and medicine was scattered across other chapters. In the 2022 report, for example, we highlighted scientific discoveries in the technical performance chapter, while health-related information appeared in the economy and ethics chapters. This year we saw so many advances in AI for science and medicine that we knew the topic needed its own chapter.

What stood out for you?

2023 was an incredible year for AI. First, we saw bigger, more sophisticated models with more multimodal capabilities. We also saw a continued trend of industry dominating the field: industry released 51 notable AI systems, academia released 15, and government was barely on the chart. A big reason for this could be model training costs. In 2017, it cost about $1,000 to train a transformer model [the transformer is the deep learning architecture underlying most modern natural language processing]. In 2023, it cost $78 million to train GPT-4 and about $190 million to train Gemini Ultra. In only a handful of years, costs have risen to the point where only a few organizations can afford to create new models.

What are some key takeaways for health and medicine from the AI Index’s chapter on responsible AI?

Much of the content in the responsible AI chapter also applies to health and medicine. For example, the AI Index highlights the lack of agreement on the benchmarks AI developers use to evaluate models, including for truthfulness, potential bias, and generation of inappropriate or harmful content. Comparing model capabilities in a standardized way plays an important role in enhancing transparency, which is especially critical when we apply AI to high-stakes applications such as health care.

AIMI Symposium 2024

The Center for Artificial Intelligence in Medicine and Imaging is hosting its annual symposium on Wednesday, May 15, in person and online. This year’s gathering will highlight cutting-edge research and methods, showcase current and emerging real-world clinical applications, and address critical issues related to fairness and societal impact.

Learn more about the event

How do we ensure AI creates real value for medical professionals?

Absent standardized guidelines for AI in medicine, and with a growing number of AI tools entering the market, researchers are racing to define how these tools should be assessed, not just for quality control but to ensure they deliver real value to medical professionals. Below are two notable papers that tackle this issue in care delivery and medical education.

In response to the presidential executive order on AI, researchers examine how health systems can adopt generative AI responsibly and derive the most value from it.

Read the article

What are the potential opportunities and limitations of generative AI in medical education? Scientists identify possible applications and challenges of generative AI in education and use them to guide areas for future exploration.

Read the article

Want more studies on health AI tools?

AI in action

How AI improves physician and nurse collaboration: A new AI model helps physicians and nurses at Stanford Hospital work together to enhance patient care.

Research Highlights

Stanford Medicine researchers have devised a new AI model, dubbed SyntheMol (for "synthesizing molecules"), that creates recipes chemists can use to synthesize drugs capable of treating antibiotic-resistant bacteria. The next step: testing whether the drugs work in a living body.

Read the article

Scientists at Stanford Medicine have developed a noninvasive imaging method to create a cell-by-cell reconstruction of skin or other tissue without taking a biopsy. The method could transform how pathologists diagnose and monitor disease in the future.

Read the article


In the spotlight

Share our newsletter with your community.

RAISE Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education and patient care.

To unsubscribe from future emails, CLICK HERE.