Read about the new RAISE Health Seed Grants Program, AI for training peer counselors, machine learning for curbing the growth of drug resistance in microbes, the ethical considerations of using AI in clinical trials, and more.
The 2024 RAISE Health Seed Grants Program is accepting proposals until Sept. 30. The grants will support research and education projects designed to advance responsible AI innovation in medicine. Of particular interest are research projects that advance evaluation methods for fair algorithms in health care or investigate the ethical or legal implications of AI in health care, as well as education programs that help patients, care providers, and researchers navigate AI advances.
“The goal of these projects is to build greater trust in AI, which is essential for broader adoption of AI technologies in medicine. Projects should also demonstrate the responsible integration of AI to ensure that all communities benefit from these innovations,” said Lloyd Minor, MD, Dean of the School of Medicine and Vice President for Medical Affairs at Stanford.
The 2024 RAISE Health Seed Grants Program expects to offer five one-year grants of up to $100,000 each.
Learn more and apply
Have questions about your proposal? Join us for an informational webinar on September 16, 2024, from noon to 1 p.m.
Registration is required.
Among the many teams that applied, six interdisciplinary research teams from across Stanford were awarded Hoffman-Yee Research Grants in 2024; four of the teams included researchers from the School of Medicine. Each team will receive $500,000 in the first year, with the opportunity for as much as $2 million more over the following two years.
Administered by the Stanford Institute for Human-Centered Artificial Intelligence, the grants support work on significant scientific, technical, or societal challenges that require a bold, interdisciplinary approach. The research projects span key areas of the institute’s focus: understanding the human and societal impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence.
The grants are made possible by a gift from philanthropists and Stanford alumni Reid Hoffman and Michelle Yee.
Read the announcement
Jonathan Chen, MD, PhD, assistant professor of medicine, and Mary K. Goldstein, MD, professor emerita of health policy, are harnessing AI to combat drug-resistant bacteria, a problem driven in part by the overprescription of antibiotics.
With a grant from the National Institutes of Health, Chen is leading a project to build personalized “antibiograms” (charts summarizing which antibiotics are effective against different types of bacteria) using machine-learning tools that predict antibiotic susceptibility in individual patients based on patterns learned from large collections of prior examples.
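The idea can be sketched with a toy example. Assuming, hypothetically, that each prior culture result records a care setting, an organism, an antibiotic, and whether the isolate was susceptible, a personalized antibiogram reduces to tallying susceptibility rates among prior cases that match the patient’s context. (The actual models in the study use far richer clinical features and learned weights; the records and field names below are purely illustrative.)

```python
from collections import defaultdict

# Hypothetical prior culture results: (ward, organism, antibiotic, susceptible?).
# Illustrative stand-ins, not data from the actual study.
PRIOR_RESULTS = [
    ("ICU", "E. coli", "ciprofloxacin", False),
    ("ICU", "E. coli", "ciprofloxacin", False),
    ("ICU", "E. coli", "ceftriaxone", True),
    ("clinic", "E. coli", "ciprofloxacin", True),
    ("clinic", "E. coli", "ceftriaxone", True),
]

def personalized_antibiogram(ward, organism):
    """Estimate per-antibiotic susceptibility rates for one patient context."""
    counts = defaultdict(lambda: [0, 0])   # antibiotic -> [susceptible, total]
    for w, org, abx, susceptible in PRIOR_RESULTS:
        if w == ward and org == organism:
            counts[abx][1] += 1
            if susceptible:
                counts[abx][0] += 1
    return {abx: s / n for abx, (s, n) in counts.items()}
```

Even this crude tally captures the core intuition: the same organism can show very different resistance patterns in an ICU than in an outpatient clinic, so prescribing guidance should be conditioned on the patient’s context rather than on a hospital-wide average.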
“We can use the vast amounts of data provided by electronic medical records to create predictive models to optimize the accuracy and consistency of the current ‘educated guesswork’ of empiric antibiotic prescribing,” Chen wrote in the May 2022 issue of the Journal of Antimicrobial Chemotherapy.
Read the full story
In a new working paper, researchers from Stanford University, Carnegie Mellon University, and Georgia Institute of Technology present an AI-based model that offers feedback to novice peer counselors to improve their ability to help people with mental illness.
The project is rooted in a unique partnership between Stanford Medicine psychologist Bruce Arnow, PhD, and Stanford computer scientist Diyi Yang, PhD. They are co-authors of the paper, which received support from the Stanford Institute for Human-Centered AI.
“Interdisciplinary collaboration was essential in undertaking this project,” Arnow said. “AI has enormous potential to help improve both the quality and efficiency of psychotherapy training, but the mental health community is not well-equipped to develop an AI-assisted training model, and the computer science community is not grounded in counseling intervention skills. Forming a team with both disciplines enabled us to progress in this exciting new area of investigation.”
Read the full story
Q&A with Danton Char on the ethics of implementing clinical trials of AI
Danton Char, MD, associate professor of anesthesiology, perioperative and pain medicine, recently co-authored a paper in JAMA with postdoctoral scholar Alaa Youssef, PhD, on ethical issues that arise in clinical trials of AI.
The Stanford Institute for Human-Centered AI held a workshop in May to tackle regulatory challenges introduced by the rapid integration of AI into health care. More than 50 policymakers, academics, health care providers, software developers, ethicists, and patient advocates participated in the discussion, which is summarized in “Pathways to Governing AI Technologies in Healthcare.”
In the following Q&A, he discusses the paper’s findings and what patients should consider before agreeing to participate in a clinical trial of AI.
What kind of ethical considerations go into planning a clinical trial?
The National Institutes of Health has published seven principles for guiding ethical clinical trials. Those principles are social and clinical value, scientific validity, fair subject selection, a favorable risk-benefit ratio for participants, independent review, informed consent for all participants, and respect for all participants.
Do these considerations change in trials that involve AI?
We investigated the generalizability of these seven principles to trials of AI and the unique ethical concerns that emerged for researchers.
Our study’s key themes revealed several ethical challenges unique to clinical trials of AI. These included difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios in various patient subgroups, and addressing the complexities inherent in the data-use terms of informed consent.
Can you give an example of how one of these challenges might play out?
For a principle like fair subject selection, consider a trial that uses an autonomous AI tool designed to expand disease-screening access for populations that – for socio-economic reasons – don’t currently have adequate access to screening. This scenario presents two significant ethical questions. The first: Is it OK to use a biased AI tool? Using AI to expand screening presents a tension between reducing access inequity and employing AI with biased training data – by definition, populations without access are under-represented in the current data. Second: Is it OK to provide a different type of care to these populations? More vulnerable populations would be getting screened by an AI tool instead of the physician screening that better-resourced populations have access to.
What questions do you think patients should be asking before they agree to participate in a clinical trial involving AI?
While our research was focused more on considerations for those designing trials, an issue of growing importance for patients will be informed consent and balancing the unknown risks of AI screening against the known risks of untreated or unscreened-for disease. Risk-benefit ratios are likely to be different when AI risks are compared with current screening methods or the absence of screening.
Tina Hernandez-Boussard, PhD, MPH, professor of medicine, of biomedical data science, and of surgery, grew up in Bishop, California, raising livestock and competing in rodeo events. Hernandez-Boussard notes that as AI becomes more prominent in health care, it could skew diagnoses if it’s not trained on medical data from diverse patient populations. She said this is especially worrisome in rural areas like Bishop and among marginalized populations.
These areas often operate as “health care deserts” that put huge burdens on the providers who serve them. For such providers, the allure of AI to save time and money could mean it’s deployed for populations most likely to be underrepresented in the training data.
Turning things around demands a new approach — one for which Hernandez-Boussard is uniquely qualified. In her latest role as associate dean of research for the School of Medicine, she is focusing on educating faculty across the university on the equitable application of AI.
Read the full story
In this episode of Stanford Engineering’s The Future of Everything, host Russ Altman, MD, PhD, talks with Steven Lin, MD, clinical professor of medicine, who explains how AI could improve health care logistics, optimize patient care, and significantly reduce the clerical burdens that not only cost the U.S. health care system billions of dollars a year but also keep physicians from spending more time with their patients.
Watch the episode
RAISE Health aims to create engaging and informative content about the use of AI in health and medicine to share with the community and feature on the initiative’s recently launched Education Dashboard.
Your response to this short survey will help identify the most relevant and impactful topics. Please take a few moments to answer the following questions.
Take the survey
This Sept. 5 event was the first in a series of annual workshops bringing together leading Stanford faculty, collaborators, and other experts to discuss the implications of rapidly evolving technologies in medicine and beyond.
The inaugural in-person workshop was a focused and interactive experience designed to deepen participants’ understanding of generative AI. The workshop also encouraged participants to use generative AI as a model for evaluating the current capabilities, future possibilities, and potential applications of new technologies in health care.
Faculty shared their expertise, strategies, and diverse perspectives on the risks and benefits associated with generative AI in health care.
RAISE Health faculty research council member David Magnus, PhD, shared his insights on the ethical issues posed by deploying generative AI in health care.
Illustration by Emily Moskal
Diffusion models are a type of artificial intelligence that can create new images by learning from existing ones. They start with a completely random pattern, like a static-filled TV screen, and slowly transform it into a clear, detailed image by following patterns they’ve learned from other pictures. This step-by-step process helps the AI “imagine” and create realistic images from scratch. In health care, diffusion models have exciting potential. For example, they can help create detailed images for medical research and training, like generating high-quality scans or images that mimic real medical conditions. These models could also be used to predict how certain conditions might look in medical imaging, assisting doctors in early diagnosis or planning treatment.
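The step-by-step denoising process can be illustrated with a deliberately tiny sketch. Here a single “pixel” starts as pure noise and is nudged, over many steps, toward a target value that stands in for the statistics a trained model would learn from real images. Everything in this sketch is a hypothetical stand-in: actual diffusion models predict the denoised value with a learned neural network over full images, not a fixed constant.

```python
import random

def reverse_diffusion(steps=50, seed=0):
    """Toy one-pixel sketch of the reverse (denoising) diffusion process."""
    rng = random.Random(seed)
    target = 0.8                 # stand-in for the "clean" value a trained model predicts
    x = rng.gauss(0.0, 1.0)      # start from pure noise, like a static-filled screen
    for t in range(steps, 0, -1):
        predicted = target       # a real model would predict this with a neural network
        # Move a fraction of the way toward the prediction, re-adding a little
        # noise that shrinks as the process nears the final image.
        x += (predicted - x) / t + rng.gauss(0.0, 0.1) * (t / steps)
    return x
```

Run with different seeds, the loop always ends up near the target despite the random starting point, which is the essence of how diffusion models turn static into structured output.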
RAISE Health is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of AI across biomedical research, education, and patient care.