In this issue: the summary paper from the 2024 RAISE Health Symposium, the complexities of adjusting for race in clinical algorithms, the “FURM” test that AI-guided workflows must pass before use in clinical settings, an upcoming seminar on bridging gaps between medical foundation models and clinics, and what “model deterioration” means.
A report summarizing key takeaways from the RAISE Health Symposium earlier this year has been posted online. The report highlights comments from participants in a series of working groups held on May 14. More than 60 experts in health care, research, academia, technology, public policy, and advocacy discussed issues ranging from equitable access to resources for AI development to the importance of human-centered design and critical ethical issues facing the biomedical field. The report also outlines steps that need to be taken now to ensure the successful integration of AI in biomedicine over the next decade and beyond.
Read the full paper
Curt Langlotz is the senior associate vice provost for research; a professor of radiology (thoracic imaging), of medicine (biomedical informatics research), and of biomedical data science; and a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
The Stanford Institute for Human-Centered AI held a workshop in May to tackle regulatory challenges introduced by the rapid integration of AI into health care. More than 50 policymakers, academics, health care providers, software developers, ethicists, and patient advocates participated in the discussion, which is summarized in “Pathways to Governing AI Technologies in Healthcare.”
What do you think people will learn from reading this summary paper from the RAISE Health Symposium?
They’ll see where the biggest opportunities and challenges lie for AI in medicine. It’s a great snapshot of what’s happening now, what’s needed to address the challenges and capitalize on the opportunities, and how we can responsibly develop these tools to improve care.
What findings in the paper stood out to you?
The need for collaboration is clear. Successful AI initiatives cannot be just tech-driven — they must involve clinicians, patients, ethicists, and engineers working together. Another key point is how essential it is to address bias and data quality at the beginning of the development process, when we are deciding what problem to solve. If we don’t, we risk amplifying disparities instead of reducing them.
Some of the challenges outlined in the paper seem insurmountable — what do you see as some immediate next steps to ensure AI is responsibly deployed in biomedicine?
It starts with building trust. We need to put clear guidelines and best practices in place, keep an open dialogue across the health and tech sectors, and make transparency the norm. Small, intentional steps — like ensuring diverse datasets and prioritizing explainability — can help set a solid foundation.
Who is ultimately responsible for ensuring the guidelines and guardrails for using AI in health are implemented?
Responsibility is shared. Health care organizations, technology companies, and regulators all play a role, but it is the leaders in these organizations who must champion ethical AI practices and ensure that they are followed. Clinicians and health system executives, in particular, have a critical role in ensuring that AI tools are used to support, not replace, clinical judgment. Establishing a culture of accountability and committing to ongoing evaluation and monitoring are key to ensuring these systems continue to benefit our patients.
The Stanford Center for Digital Health is conducting a national survey to gather health care workers’ perspectives about the use of artificial intelligence in medicine. We encourage all health care professionals to participate. Your insight is critical as AI technologies continue to evolve and show potential for improving patient outcomes, reducing administrative burdens, and enhancing care delivery. The survey is anonymous and takes only a few minutes to complete.
Take the survey
Chronic kidney disease affects more than 1 in 7 adults in the United States, and Black patients are at least three times more likely than non-Hispanic white patients to progress to kidney failure. For a long time, two of the most widely adopted clinical algorithms for evaluating the severity of chronic kidney disease incorporated a Black or non-Black race variable. But in 2021, a new clinical algorithm excluded the variable based on the understanding that it could propagate racial bias in decision making.
In this brief, Stanford Medicine researchers present the first assessment of this new equation’s effect on care decisions for patients with chronic kidney disease in the Stanford Health Care system. Their findings underscore the need for health equity research and highlight the limitations of employing technical “fixes” to address deep-seated health inequities.
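For readers who want the mechanics: the 2021 race-free equation referenced here is the refit CKD-EPI creatinine equation (Inker et al., NEJM 2021). Below is a minimal Python sketch of that equation; the coefficients follow the published refit, but the function name and sample values are illustrative assumptions, and the code is not intended for clinical use.

def egfr_ckd_epi_2021(scr_mg_dl, age_years, female):
    """2021 CKD-EPI creatinine equation (race-free refit); illustrative only."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine cutoff
    alpha = -0.241 if female else -0.302  # sex-specific exponent
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * (1.012 if female else 1.0)  # mL/min/1.73 m^2

# The superseded 2009 CKD-EPI equation applied an additional 1.159
# multiplier for Black patients; the 2021 refit removed that race
# coefficient entirely.
print(round(egfr_ckd_epi_2021(1.1, 60, female=True), 1))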
Read the policy brief
Stanford Medicine researchers have developed a tool for evaluating whether AI-guided workflows will be useful in clinical settings. The Fair, Useful, and Reliable AI Model (FURM) testing and evaluation mechanism is designed to gauge the ethical, financial, clinical, and workforce implications of AI models in health care. In an article published last month in NEJM Catalyst Innovations in Care Delivery, the researchers summarize FURM assessments of six AI-guided tools proposed for use at Stanford Health Care. Two were subsequently approved for implementation. Since February, FURM assessments have been required for every AI system proposed for deployment at Stanford Health Care, the researchers say.
Read the article
Sheng Wang, PhD, an assistant professor in the School of Computer Science and Engineering at the University of Washington, will give a seminar on the gaps in medical foundation models that must be closed for such models to be useful in clinics. Hosted by the Stanford Institute for Human-Centered AI, the free event is scheduled from 12:00 to 1:15 p.m. on Nov. 6 and can be attended in person or via Zoom.
A former postdoctoral scholar at Stanford Medicine, Wang will address three gaps — unmatched patient information, privacy, and constraints of graphics processing units — and the models that can help close them. The talk will conclude with a vision of “everything everywhere all at once,” in which medical foundation models and generative AI benefit every patient in every clinic simultaneously.
Register for the seminar
Lloyd Minor, dean of the School of Medicine and vice president for medical affairs at Stanford University, recently sat down with Yun-Hee Kim, technology editor with The Washington Post, for a wide-ranging discussion about AI’s growing influence in health care. From care delivery to drug development to mental health, Minor expressed his excitement about the promise and potential of AI and the need to safely, responsibly, and equitably develop and deploy AI tools for health care.
Watch the conversation
Illustration by Emily Moskal
Model deterioration, also called model degradation, refers to the decline in a predictive model's performance or accuracy over time, typically due to changes in the underlying data distribution or other factors. In health care, the phenomenon is particularly concerning because it can quietly undermine the effectiveness of models used to diagnose disease, predict outcomes, or recommend treatment plans.
For instance, if a predictive model for breast cancer screening is trained mainly on data from older patients with certain characteristics, but the patient population later shifts toward younger patients with different characteristics, the model may not perform as well for the new population.
To prevent model deterioration, deployed models should be monitored continuously and periodically retrained on data that reflects the current patient population.
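As a concrete illustration (a minimal sketch, not from any cited work; the class name, metric, and thresholds are hypothetical), a deployment team might track a model's performance on recent cases and flag it for retraining once it drifts too far below its validation baseline:

class DeteriorationMonitor:
    """Flags a deployed model for retraining when recent performance
    drops too far below the baseline measured at deployment."""

    def __init__(self, baseline_auc, tolerance=0.05):
        self.baseline_auc = baseline_auc  # e.g., AUC on the validation set
        self.tolerance = tolerance        # allowed drop before flagging

    def needs_retraining(self, recent_auc):
        # Compare performance on a recent window of cases to the baseline.
        return (self.baseline_auc - recent_auc) > self.tolerance

monitor = DeteriorationMonitor(baseline_auc=0.88)
print(monitor.needs_retraining(recent_auc=0.86))  # False: within tolerance
print(monitor.needs_retraining(recent_auc=0.80))  # True: retrain on current data

In practice the recent metric would be computed on a rolling window of labeled cases, and a flagged model would trigger review and retraining rather than automatic replacement.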
A joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.