On Jan. 15, Jonathan Chen, MD, PhD, assumed the role of the Stanford School of Medicine’s inaugural faculty champion for AI in medical education. Chen will lead the implementation of the school’s strategic initiative to integrate artificial intelligence concepts and competencies into Stanford Medicine’s medical education curriculum.
What is your mandate for this position?
Define, develop, and disseminate the curriculum needed to train (and retrain) a generation of our health care workforce to safely and effectively integrate AI into their clinical practice and improve patient care.
Why do you think it’s important for AI to be a part of a medical student’s education?
While much “AI” of prior years was overhyped, the arrival of usable large language model chatbots and other generative AI systems has many qualities of a disruptive technology. Although there will still be false starts and overpromises, I liken this to the arrival of the internet. Particularly now that many, including our own researchers, have found that automated chatbots can outscore medical students and practicing physicians on medical knowledge and reasoning exams, our entire structure of medical education and assessment has been turned on its head and needs to be rethought.
Who would be teaching these classes?
The exciting opportunity and challenge at hand is the inherently interdisciplinary nature of the subject.
We’ll need to overcome some culture shock so that the medical community can mutually learn from other disciplines such as computer science and engineering, while also teaching the teachers to enable our medical community to bring their wisdom, experience, and judgment on where AI tools can, should, and will make a difference.
When can medical students expect to take courses on AI?
Part of the reason I was brought on is that there is currently no AI at all in the formal standard medical student curriculum. Much of the content and expertise already exists on our campus, but work is needed to organize it, assimilate it into existing curricula, and make it broadly accessible so that Stanford can lead the nation and the world in guiding and training the broader community in this rapidly evolving area.
Can you provide some examples of what the students would be learning about?
As much as everyone uses the internet, we all should learn about the capabilities, limitations, and implications of emerging AI technologies.
It surprises me how many still have not even tried using a large language model AI chatbot, so there is some foundational learning to do around how these started as highly advanced auto-complete systems but developed emergent properties that create the illusion of intelligence and have captured everyone’s imagination. Practical learning topics would include how to use and “prompt” these systems to make them useful, while understanding their limitations, such as confabulations and hallucinations that produce inadvertent and difficult-to-recognize misinformation.
We’ve arrived at a bizarre moment in history where human-generated versus computer-generated and real versus fabricated information cannot be reliably distinguished, requiring everyone to be savvy about what counts as reliable information and believable media.
Would there be educational opportunities for faculty?
Yes, there are many existing workshops and seminars on our campus and beyond. I’ve been on tour around the country for the past year giving grand rounds and conference presentations to bring broader communities up to speed on a rapidly moving topic. More significantly, we are likely to find that we as a community can learn a lot from each other. The revolutionary element of emerging generative AI systems is that, with more natural and intuitive “chat” interfaces, anyone can try them out, discover capabilities, and teach others about them, without having to wait for a programmer or data scientist to unpack them.