RAISE Health Newsletter
 

Issue 3 | June 6, 2024

In this issue...

In mid-May, Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) played host to an unofficial “artificial intelligence health week” as four separate AI events converged on Stanford’s campus. The inaugural RAISE Health Symposium, held May 14, convened stakeholders in academia, industry, health care, government, and advocacy to address responsible AI use in health and medicine. It was followed by the fourth annual Artificial Intelligence in Medical Imaging (AIMI) Symposium as well as two private forums: the first convening of the Coalition for Health AI (CHAI) and an event hosted by HAI that included policymakers examining AI’s implications for the future of health policy. This issue features a recap of the RAISE Health event, including key takeaways from the discussions.

RAISE Health Symposium: In three minutes

Did you miss the event and want a quick overview? Here is a snapshot of the many insightful panels and discussions that took place.


Dive deeper: Recap and recordings

The inaugural RAISE Health Symposium, co-hosted by Stanford Medicine and HAI, brought thousands together in person and online to discuss the responsible use of AI in biomedical research, education, and patient care. You can read a full recap of the event here.

  

Speakers explored what it means to bring AI into the fold of medicine, including the opportunities and challenges ahead. From accelerating fundamental discovery and drug development to improving care delivery, AI’s potential warrants excitement, according to the speakers. But we should be equally concerned about the potential for missteps and misuse, they said. Having checks throughout the AI innovation life cycle will be imperative to ensure the technology is safe, fair, equitable and, ultimately, useful for those who would benefit from it.

  

The following day, AIMI held a public symposium that focused on AI-powered medical innovations, how to evaluate AI technology in health and medicine, and the importance of cross-sector collaboration.

  

Watch a recording of the RAISE Health Symposium here.

Watch a recording of the AIMI Symposium here.

Event photo highlight

[Photos from the RAISE Health Symposium]


Photo credit: Steve Fisch

  

“We believe this is a technology to augment and enhance humanity. We need computer scientists to work with multiple stakeholders — from doctors and ethicists…to security experts and more — to develop and deploy [AI] responsibly.” — Fei-Fei Li, PhD, professor of computer science and HAI co-director

  

“The internet brought us access to information; generative AI is bringing and enhancing our access to knowledge — and that has implications for everything we do. There are going to be important applications and implications for how we educate the next generation of physicians…the next generation of biomedical scientists…[and] the next generation of managers and leaders. There’s no better place to start that dialogue and those discussions than right here through the work being done at Stanford and in so many other places.” — Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University

Top takeaways

Start with the user

 

Those developing AI must have a healthy obsession with the end user. In the past, digital transformation efforts in medicine, such as electronic health records, didn’t give enough attention to the needs of patients or providers. We must learn from the past and strive to put patients and care providers at the center of AI-driven innovation.


Transparency + inclusion = trust

 

Especially in medicine, AI adoption will move at the speed of trust. Earning that trust will require transparency on multiple fronts: who is represented in the data used to train an algorithm, what exactly the algorithm is intended to do, and how patient data is and will continue to be used to help the algorithm learn and improve.

  

Public-private partnerships are key

 

The combined knowledge, expertise, and resources of academia, government, and industry are critical to AI’s success in health and medicine. Government can provide much-needed clarity, standards, and incentives for cooperation; academia brings the multidisciplinary knowledge essential to responsible AI development; and industry brings technology expertise, resources, and the tools necessary to scale.

Why host a symposium focused on the responsible use of AI in health now?

The public launch of ChatGPT really opened people’s eyes to the technology’s potential to impact our society, for better and worse. In the biomedical sector, where the stakes are always high, we saw a burning need to convene a discussion about how to navigate this emerging frontier. We recognize that no one group has all the answers, nor should it. Charting a path is going to require many different stakeholders coming together to ask and debate important questions, such as “What role should AI have in our health?” and “Who is ultimately responsible for ensuring that AI aligns with and protects our interests?” These are challenging topics, and I have no illusions about solving them overnight, but they are essential to informing future policies and practices governing AI’s use in health and medicine. That’s why this convening and future ones are so important.

  

What was your biggest takeaway from the day?

First, the excitement was palpable. Everyone was energized by the conversations. What struck me, and frankly what I found reassuring, is that leaders from across various sectors have a genuine interest in getting this right. We have all witnessed how past tech revolutions carried with them many unintended consequences. Leaders are feeling a sense of urgency to anticipate problems well before they reveal themselves. I’m encouraged by that. To that end, another key takeaway was a repeated call for partnerships. There was consensus among our guest speakers that industry, academia, government, and advocacy groups must make a concerted effort to work together. From pooling the resources necessary to develop AI models to ensuring vital perspectives are reflected in their development, partnerships will be essential to this work.

  

What comes next?

Through RAISE Health, Stanford Medicine and HAI will continue to act as conveners of these kinds of discussions. We look forward to announcing future opportunities to build on the significant ground we covered in May. In the near term, we are working on a summary paper that will synthesize what we learned at our event, including insights from a series of private working groups hosted that afternoon, which dug deeper into issues concerning AI’s use in research, education, and patient care. We plan to publish the paper later this summer.

Stay tuned...

Share our newsletter with your community.


A joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.

To unsubscribe from future emails, CLICK HERE.