As AI matures, its ability to process information like humans, but at warp speed, makes it well positioned to synthesize large amounts of information. This ability is particularly suited to medicine and healthcare, where clinicians must use their knowledge of existing and ever-changing medical research to interpret patients’ symptoms.
“This is going to be the biggest wave to hit medicine,” says Asha Zimmerman, MD, a transplant surgeon at Dartmouth Hitchcock Medical Center (DHMC) and assistant professor of surgery at the Geisel School of Medicine at Dartmouth (Geisel), who is working on his own applications for AI in medicine.
Already, two in three physicians report using AI in their practice, according to an American Medical Association survey. They use it to help take visit notes, draft discharge summaries and care plans, and summarize medical research and standards of care.
But how will the use of AI change for physicians, and how is Dartmouth Health contributing?
What is AI?
The term AI or “artificial intelligence” was coined in 1956 when researchers gathered at Dartmouth College to determine what it would take for machines to simulate human intelligence.
As a term, “AI” doesn’t refer to a specific technology for achieving that goal.
Rather, it describes computing that emulates human thinking: technology that can be trained to learn patterns and make inferences from vast datasets without being explicitly programmed to do so.
Parallels between how AI works and how doctors diagnose
The AI process is similar to how doctors make diagnoses.
A doctor identifies patterns in a patient’s symptoms and vital signs, comparing those to patterns learned in training, from the latest research, and over time from diagnosing hundreds of patients.
“It’s not very far-fetched that, if these models train correctly on a large amount of data, on the body of current knowledge of medicine, they could mimic what your typical physician would do in the same situation,” says Saeed Hassanpour, PhD, professor of biomedical data science, computer science, and epidemiology at Geisel, and the director of the Center for Precision Health & Artificial Intelligence (CPHAI) at Dartmouth College.
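To make that pattern-learning idea concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library. The symptom data and diagnosis labels are invented for illustration; the point is that no diagnostic rules are programmed anywhere, yet the model learns to map symptom patterns to diagnoses from labeled examples alone, much as described above.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: each row is one patient, and the columns are
# [fever, cough, chest_pain], with 1 = present and 0 = absent.
symptoms = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [0, 1, 0],
]
# Invented labels a clinician might have assigned to those patients.
diagnoses = ["flu", "pneumonia", "angina", "cold"]

# No diagnostic rules are written anywhere; the model infers the
# symptom-to-diagnosis patterns from the labeled examples alone.
model = DecisionTreeClassifier().fit(symptoms, diagnoses)

# A new patient with fever and cough but no chest pain:
print(model.predict([[1, 1, 0]]))  # -> ['flu']
```

A real diagnostic model would be trained on vastly more data and features, but the principle is the same one the analogy describes: patterns learned from prior cases, applied to a new one.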
AI's expanding role in healthcare
In the future, AI technologies are likely to take on more than a doctor's administrative tasks.
As large language models (think ChatGPT) are refined, AI technology is getting better at handling complex tasks, inferring next steps unprompted, and interpreting human interactions.
Could we one day see AI replacing clinicians on the front lines of medicine?
“That’s the direction we’re heading,” explains Hassanpour, one of the staunchest advocates for deploying AI tools in healthcare.
But Hassanpour also thinks it’s unlikely that medicine will ever become an entirely digitized endeavor without a human in the loop.
What would a "Dr. AI" look like?
AI will not take over every task for a clinician, experts say.
“When we talk about replacing somebody with a generalized AI, we are thinking about their job as if it were a simple task, a single task, that they do repeatedly. And that’s not what a clinician, or most any healthcare provider, does,” says Brandon Hill, MS, a machine learning specialist and cofounder of the Center for AI Research in Orthopaedics at DHMC.
Hassanpour agrees that there likely won’t be a “self-governing, independent AI model that works on its own.” Instead, future patients might not interact directly with a human, but they may still benefit from what Hassanpour calls a “multiagent-based approach.”
“Currently most of these [AI] models can be looked at as tools,” he says, explaining that, so far, validated AI-driven technologies in medicine tend to be highly specialized, such as visual systems trained specifically to detect cancerous cells from images or language models that transcribe patient interactions and turn them into appointment notes. “They’re very narrow,” he adds. “And that’s different from how medicine is being practiced: You narrow the domain until you arrive at a certain conclusion.”
As models improve, experts could assemble these narrowly focused AI tools into a more comprehensive, multiagent approach. “These agents can work together to make a diagnosis,” Hassanpour says, with a manager or coordinator directing and delegating tasks to them.
In this scenario, the physician becomes the supervisor, verifying the AI team’s conclusions rather than going through the process of making an inference “from scratch,” Hassanpour says. And given current technology, he adds, this model could be built within a few years.
Zimmerman has a similar vision and, along with Thomas D’Angelo, a data analyst in the transplant department at DHMC, is already building an AI-powered platform to make it a reality. Called “Vox Cura,” the platform is in the clinical-trial planning stages after winning an award from the Dartmouth Innovation Accelerator for Digital Health last fall.
The Vox Cura chat application is trained to ask questions, just as a physician would, to gather information from a patient. It then offers a likely diagnosis along with a few other possibilities. The goal is to deploy the tool to rural and remote populations in northern New England and around the world, bringing health information to areas without doctors and reducing barriers and costs. The app can’t prescribe treatment, but it can give patients a starting point.
What are some of the concerns around doctors using AI to diagnose and treat patients?
Zimmerman says one concern about using AI-driven tools to make life-or-death diagnoses and treatment plans is that generative AI tools built on large language models have been known to “hallucinate,” perceiving patterns that don’t exist and then confidently producing inaccurate information. If you ask ChatGPT why the sky is blue, the consequences of a wrong answer are probably low, but in medicine, they could be dire.
But technology is improving, Zimmerman says, and that’s one justification for the multiagent-based approach that Hassanpour supports, whereby if one model hallucinates, others can essentially outvote its conclusions.
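As a concrete illustration of that safeguard, here is a minimal, hypothetical sketch in Python. The “agent” functions are made up, standing in for narrow diagnostic models; a coordinator delegates the case to each and takes a majority vote, so a single hallucinating agent is outvoted, and the result is still flagged for physician review. This illustrates the concept only and does not describe any Dartmouth system.

```python
from collections import Counter

# Made-up stand-ins for narrow, specialized diagnostic models; in a real
# multiagent system these would be separate AI models (imaging, labs, history).
def imaging_agent(case):
    return "pneumonia"

def lab_agent(case):
    return "pneumonia"

def history_agent(case):
    # Suppose this agent "hallucinates" an unsupported conclusion.
    return "pulmonary embolism"

AGENTS = [imaging_agent, lab_agent, history_agent]

def coordinator(case):
    """Delegate the case to each narrow agent, then take a majority vote.

    A single hallucinating agent is outvoted by the others, and the result
    is flagged for a human physician to verify rather than acted on directly.
    """
    votes = Counter(agent(case) for agent in AGENTS)
    diagnosis, count = votes.most_common(1)[0]
    return {
        "suggested_diagnosis": diagnosis,
        "agreement": f"{count}/{len(AGENTS)}",
        "needs_physician_review": True,  # the human stays in the loop
    }

print(coordinator({"symptoms": ["fever", "productive cough"]}))
# -> pneumonia wins 2/3: the dissenting answer is outvoted, and a
#    physician-supervisor verifies the suggestion before any action.
```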
Another concern is that AI-driven tools have learned to take shortcuts, relying on incidental signals in their data rather than medically relevant ones. In a study co-authored by Hill, Dartmouth Health researchers investigated the mechanism behind these shortcuts by asking AI models to predict whether patients eat refried beans or drink beer based solely on X-rays of their knees. The models performed shockingly well.
“A knee should have nothing to do with beer or beans,” says study senior author Peter Schilling, MD, MS, an orthopaedic surgeon at DHMC and an assistant professor of orthopaedics at Geisel. Instead, he says, the AI tool "has some understanding of where the image is taken, and something about the averages of demographics within that area, so then it can leverage those little hints to draw conclusions.” And it does so even though those hints are essentially meaningless.
The regulatory impact on AI use
Another factor likely to hold AI-powered tools back from deployment across the healthcare industry is the regulatory process for new medical tools, Zimmerman says.
Typically, the U.S. Food and Drug Administration (FDA) approves diagnostics narrowly, determining their utility for specific diagnoses. Under current procedures, he explains, a generalized “Dr. AI” tool would therefore be difficult to vet.
But even with these and other limitations, some AI-powered tools have already shown they can play doctor quite well. For example, Therabot, a generative AI chatbot developed by a Dartmouth team to provide therapy, was shown in an 8-week clinical trial to meaningfully reduce psychological symptoms in users with depression, anxiety, or an eating disorder.
Why clinicians are remaining part of the conversation
As AI tools become increasingly accurate in emulating clinicians, Hill recommends continuing to involve humans. “Most AI is trained based on past human diagnoses. That means, in many cases, the AI can only be as good as a second doctor in the room.”
Moreover, while large technology companies often lead the latest advances in AI, even in the medical sphere, Zimmerman says it’s imperative that clinicians be part of the conversation to ensure that AI use in healthcare is ethical, best serves patients, and avoids egregious errors that those outside of healthcare might not anticipate.
“My hope is that more people start to look for ways to be leaders in this field, because it’s important to have a voice at the beginning,” he says.
Institutionally, Geisel and Dartmouth Health are taking strides so that future healthcare leaders are well-versed in AI, both through initiatives like CPHAI and through curricula. For example, training in both the mechanisms behind AI and AI-powered tools is now embedded in medical students’ preclinical curriculum at Geisel.
“Traditionally, physicians have received no formal education in AI and machine learning, unless it’s been part of a niche specialty or their research,” says Thomas Thesen, PhD, associate professor of medical education and of computer science at Geisel and a faculty leader in developing the school’s AI curriculum. Geisel has been among the first institutions to add AI training to its preclinical coursework. Thesen and other Geisel faculty are also using an AI-generated patient actor to help medical students become better clinical communicators.
The importance of keeping humanity in healthcare
The upshot of all these developments, Hill says, is that the AI revolution is likely to be iterative. “It doesn’t replace humans in the way that we think about, where it does somebody’s entire job. It means people can do more with less, that’s all it comes down to. But that’s what computers have been all along,” he says.
“The role of the physician has changed through technology,” Thesen says. “We don’t require physicians to store so much information in their heads anymore. Vast amounts of information are just a click away and physicians can look up information much quicker, much more reliably than before.”
Now, with the addition of AI, he says, more than ever, “the role of a physician is to be human.”
This article first ran under the title “Dr AI: Could Artificial Intelligence Replace Clinicians” in the Fall 2025 Issue of Vitals Magazine and has been edited for this website.


