Artificial intelligence is making impressive strides in healthcare, and there is ever-growing interest in its potential to revolutionize the field.
Radiology has become a focal point of this conversation, driving speculation over whether AI will eventually replace radiologists entirely. In 2016, the pioneering computer scientist Geoffrey Hinton infamously predicted that AI would surpass radiologists within 5 years.
But are these fears warranted? In this article, we’ll cover the uses of AI in radiology and, if you’re interested in a career in the field, whether you should be worried about your future job prospects.
Can Artificial Intelligence Replace Radiologists?
So, can artificial intelligence replace radiologists?
The short answer is no, not anytime soon. Let’s go through a few of the reasons why.
How Does Image Recognition Work?
First, let’s quickly go through the basics of how image recognition works.
AI excels at pattern recognition, which forms the basis for image recognition models. These models rely on artificial neural networks, typically a specific type called a convolutional neural network (CNN).
CNNs consist of multiple layers arranged sequentially, and each layer can be thought of as recognizing specific features of an image. (A convolution is a mathematical operation that, in this context, acts like a small “filter” sliding across the input image to detect these features.)
Early layers typically start with simpler elements, such as lines and shapes, while deeper layers detect increasingly complex patterns, such as objects or faces. The network learns which patterns matter by training on a vast dataset of labeled images; once trained, it uses those learned patterns to predict the content of a new input image.
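To make the layered structure concrete, here is a minimal sketch in PyTorch. It is purely illustrative (the architecture, sizes, and labels are arbitrary, not a clinical model), but it shows how stacked convolutional “filters” feed into a final classifier:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Illustrative CNN: early conv layers pick up simple features
    (edges, lines); deeper layers combine them into complex patterns."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # early layer: edges/lines
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: shapes/textures
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # each conv filter slides across the image
        x = torch.flatten(x, 1)
        return self.classifier(x)  # scores for each label learned in training

model = TinyCNN()
xray = torch.randn(1, 1, 224, 224)  # stand-in for a 224x224 grayscale image
logits = model(xray)
print(logits.shape)                 # torch.Size([1, 2])
```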
For those interested in more details, here is a great video explaining the basics of CNNs. Also, note that large language models like ChatGPT are not based on CNNs but on a different type of neural network called a Transformer.
Current Image Recognition Models Are Task-Specific
In radiology, convolutional neural networks have had success in specific applications, such as detecting large vessel occlusions in acute stroke and identifying pulmonary nodules. Commercial software packages using such models are already available for purchase by hospitals.
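To see what “task-specific” means in practice, here is a deliberately simplified sketch; every detector name and probability below is invented, standing in for a separately trained single-pathology model:

```python
import random

# Each hypothetical detector answers exactly one yes/no question about a scan.
def detect_large_vessel_occlusion(scan):
    return random.random()  # placeholder for P(LVO); a real model runs a CNN here

def detect_pulmonary_nodule(scan):
    return random.random()  # placeholder for P(nodule)

detectors = {
    "large_vessel_occlusion": detect_large_vessel_occlusion,
    "pulmonary_nodule": detect_pulmonary_nodule,
    # Covering every condition that can appear on imaging would mean
    # hundreds of entries here, each a separately trained model.
}

def screen(scan):
    """Run every task-specific detector on the same scan."""
    return {name: detector(scan) for name, detector in detectors.items()}

print(screen(scan="fake-ct-study"))
```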
However, all of these models are highly specialized and “task-specific,” meaning each is trained to detect a single pathology. Stroke, for example, is one of hundreds of different brain conditions that can present on imaging—so even if you could train a model for every condition, that would require running hundreds of models simultaneously. Additionally, many conditions present similarly on imaging, which means…
Radiology Requires Clinical Knowledge
…Radiology demands more than pattern recognition—it also requires clinical knowledge. Radiologists interpret imaging findings within the broader clinical context, integrating them with the patient’s medical history and current condition to provide accurate differential diagnoses.
For example, lung opacities are a frequent finding on chest x-rays, and while a task-specific AI model can be trained to detect them, it would be unable to determine the underlying cause. Are the opacities due to atelectasis, infection, pulmonary edema, or even hemorrhage?
Differentiating between these requires medical knowledge and clinical reasoning. Moreover, radiologists also use clinical judgment to recommend correlating imaging findings with specific lab tests or suggest further imaging when necessary. These decision-making processes go far beyond mere pattern recognition.
What About ChatGPT?
As previously noted, the AI models currently used in radiology are narrow in scope and useful only for highly specific tasks. To fully replace a radiologist, however, an AI model would need to accurately interpret thousands of imaging findings across the entire body, perform clinical reasoning, and integrate other inputs, such as lab values and the patient’s medical history.
These hypothetical models fall under the category of “generalized” medical artificial intelligence.
Recent advancements in large language models (LLMs), like ChatGPT, can be considered steps toward generalized AI. These models are increasingly “multimodal,” sometimes called vision-language models (VLMs), because they can now process images and videos alongside text.
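As a quick illustration, sending an image alongside a text prompt to a multimodal model might look like the following with the OpenAI Python client (the model name, file, and prompt are illustrative, and this is emphatically not a clinical workflow):

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("chest_xray.png", "rb") as f:  # illustrative local image file
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # example multimodal (vision-language) model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe any abnormalities in this chest x-ray."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```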
So how do they fare on radiological tasks?
While they perform well on radiology board exams and can identify imaging modalities and anatomic regions, their actual diagnostic accuracy remains quite limited. One study of pediatric images showed that these models correctly diagnosed only 27.8% of cases, and another reported accuracy below 50%. A third recent study highlighted the potential risks of relying on GPT-4V:
“The hypothetical impact of the non-radiologist relying on GPT-4V would have been positive in 2% (nine of 450), neutral in 53.3% (240 of 450), and negative in 43.3% (195 of 450) of images.”
Moreover, cases the models do get correct in these studies may be influenced by biases from the provided history. For example, if given an abdominal CT scan and a history of weight loss and painless hematuria, the model might suggest a cancer diagnosis based solely on textual cues without analyzing the image itself.
VLMs are also vulnerable to “data leakage.” If images used in model evaluation are publicly accessible online, it is possible those images were already used in the model’s training dataset, which compromises evaluation integrity. Finally, just like LLMs, VLMs are also prone to hallucinations and repeating misinformation.
Overall, these findings show that current AI models are far from capable of making diagnoses at the level of a human radiologist.
Radiologists Do More Than Generate Reports
So far, we have discussed challenges AI faces in making radiological diagnoses—but radiologists also do much more than simply generate reports! They engage in frequent communication with other physicians, discussing imaging findings on the phone or in person to help guide clinical decisions. They also play a vital role in tumor boards, actively contributing to the multidisciplinary management of complex cases.
In certain subspecialties, like breast imaging, radiologists often speak directly with patients to discuss sensitive topics, such as abnormal mammogram results. Since these conversations involve personal and emotional information, it is likely that patients will always prefer to receive their medical diagnoses from a human.
And while it goes without saying that interventional radiology cannot be replaced by AI, diagnostic radiologists also perform plenty of procedures, such as image-guided biopsies.
Legal and Ethical Issues
There are a number of legal and ethical challenges that need to be solved before AI could feasibly replace radiologists in clinical settings. Currently, radiologists are held liable for diagnostic errors; therefore, they can be targets in malpractice suits.
But who is responsible in the case of an AI error—the manufacturer, the user or owner of the AI system, or both?
If we consider the case of a truly autonomous AI without human oversight, any errors it makes could feasibly fall under two different legal theories: malpractice and product liability.
Malpractice law is well-established, and to bring a valid malpractice claim, the plaintiff must prove, among other things, that a doctor-patient relationship existed and that the doctor failed to meet the accepted standard of care, i.e., the treatment that another doctor in the same specialty would provide under similar circumstances.
But what does a doctor-patient relationship even look like if the doctor is not a human but an AI system? And what is the standard of care—is it that of a human radiologist or of other AI systems? And if the latter, does this mean human radiologists would, in turn, be held to the same standard as AI?
If product liability is applied instead, legal responsibility would fall on the AI system’s developer. However, courts have been hesitant to extend product liability law to software as opposed to tangible goods. Furthermore, given that machine learning systems inherently evolve and change over time, is it reasonable to hold developers liable for errors that arise from changes beyond their control?
Unlike human radiologists, who are legally required to hold a medical license, AI currently lacks a formal licensing framework. Regulatory bodies like the FDA would need to approve AI systems and establish clear guidelines for their use before such systems could make independent diagnoses.
Many deep learning systems also face the “black box problem.” While we may have a general understanding of how AI models work, the specific processes and decisions they use to come to their conclusions often remain opaque. This lack of transparency may complicate the principle of informed consent and raise ethical and safety concerns. Are we really comfortable making life-altering decisions based on a process we don’t fully understand?
Potential Bias
Finally, just like humans, AI systems suffer from biases. In one infamous example, when Amazon tried to train a model to predict the best candidates to hire from a stack of resumes, the system learned to favor men, penalizing resumes that included the word “women’s.”
Specific to radiology, studies have shown that AI models can underdiagnose pathology in already underserved patient populations, perpetuating or even amplifying existing human biases. Such biases pose a risk of exacerbating existing healthcare disparities, making it imperative to address this issue before AI systems can be widely adopted in clinical settings.
How AI Is Helping Radiologists
Instead of replacing radiologists, AI may help them in numerous ways and make them more efficient.
1 | Reporting Negative Studies
First, AI models capable of confidently identifying negative studies and auto-generating reports could alleviate the workload on radiologists, allowing them to focus on complex and positive cases. This is a practical way to help mitigate the current radiologist workforce shortage while maintaining high-quality patient care.
This kind of “AI triage” would also help meet the growing demand for imaging, which radiologists already struggle to keep up with. Software that auto-generates reports for negative chest x-rays has already received regulatory approval in European markets, demonstrating the feasibility of such solutions.
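Conceptually, this kind of triage reduces to a confidence threshold. A simplified sketch, where the model interface, threshold, and report text are all hypothetical:

```python
NEGATIVE_THRESHOLD = 0.99  # hypothetical cutoff; would require rigorous validation

def triage(study, model):
    """Auto-report only when the model is highly confident the study is normal."""
    p_normal = model.predict_normal(study)  # hypothetical model interface
    if p_normal >= NEGATIVE_THRESHOLD:
        return {"route": "auto_report",
                "report": "No acute cardiopulmonary abnormality."}
    # Anything less certain goes to a human radiologist.
    return {"route": "radiologist_worklist", "p_normal": p_normal}
```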
2 | Enhancing Image Analysis
A further potential use of AI is enhancing image analysis. Measuring anatomical structures or pathological features on radiological images is essential for diagnosis, monitoring treatment progress, and planning interventions.
Currently, these measurements are often performed manually by radiologists. An AI system capable of detecting and measuring relevant structures could save time, reduce errors associated with manual measurement, and ensure consistency by applying the same process across subsequent exams.
Another important task is segmentation, which involves partitioning an image into distinct regions or structures. Examples include separating a tumor from healthy tissue and delineating organs. Convolutional neural networks have performed particularly well at automating this process.
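Segmentation and measurement fit together naturally: once a structure has been segmented, measurements fall out of the mask. A minimal sketch using scikit-image, with a synthetic binary mask standing in for a segmentation model’s output:

```python
import numpy as np
from skimage import measure

# Stand-in for one slice of a segmentation model's binary output mask.
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 90:150] = True  # a fake "lesion"

pixel_spacing_mm = 0.7  # would come from the scan's DICOM metadata

labeled = measure.label(mask)  # separate distinct regions
for region in measure.regionprops(labeled):
    area_mm2 = region.area * pixel_spacing_mm ** 2
    diameter_mm = region.equivalent_diameter * pixel_spacing_mm
    print(f"area: {area_mm2:.1f} mm^2, equivalent diameter: {diameter_mm:.1f} mm")
```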
Finally, an emerging field called radiomics focuses on the extraction and quantitative analysis of imaging data. By analyzing features such as texture, shape, and intensity patterns, radiomic techniques may reveal imaging characteristics and patterns imperceptible to the human eye. These methods are largely powered by machine learning and AI.
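For a taste of what a radiomic feature looks like, the snippet below computes two classic texture features from a gray-level co-occurrence matrix using scikit-image; a real radiomics pipeline would extract hundreds of such features from an actual region of interest rather than random pixels:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for a grayscale region of interest cropped from a scan.
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix: how often pairs of gray levels appear side by side.
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

print("contrast:   ", graycoprops(glcm, "contrast")[0, 0])
print("homogeneity:", graycoprops(glcm, "homogeneity")[0, 0])
```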
3 | Non-Interpretive AI
AI may also help with non-interpretive tasks—those unrelated to directly interpreting images. An example is having AI prioritize studies with a higher likelihood of positive findings, ensuring that the radiologist reviews them first. Additionally, AI could help with operational challenges, such as optimizing patient scheduling to maximize scanner utilization and efficiency.
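Study prioritization, for instance, can be as simple as sorting the worklist by a model’s estimated probability of a positive finding (the studies and probabilities below are made up):

```python
# Hypothetical worklist entries with model-estimated probabilities.
worklist = [
    {"study_id": "CT-1041", "p_positive": 0.08},
    {"study_id": "CT-1042", "p_positive": 0.91},
    {"study_id": "CT-1043", "p_positive": 0.42},
]

# Most-likely-positive studies float to the top for the radiologist.
prioritized = sorted(worklist, key=lambda s: s["p_positive"], reverse=True)
for study in prioritized:
    print(study["study_id"], study["p_positive"])
```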
4 | Large Language Models
Finally, LLMs like ChatGPT are being explored for a variety of uses in radiology. One potential use is auto-generating the report impression based on the content of the report body. Another is creating more “patient-friendly” versions of radiology reports, translating medical language into jargon-free explanations.
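A sketch of the second idea using the OpenAI Python client (the model choice and prompts are illustrative, and any real deployment would need safeguards against the hallucination issues discussed earlier):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report = (
    "IMPRESSION: 1. No acute cardiopulmonary abnormality. "
    "2. Stable 4 mm right upper lobe pulmonary nodule."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Rewrite radiology reports in plain, jargon-free language "
                    "for patients. Do not add or remove findings."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```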
The Future of Radiology
While it’s impossible to predict exactly what the job landscape will look like in 100 years, it seems likely that radiologists’ roles are secure for the foreseeable future. By the time AI has the potential to fully replace radiologists, it is probable that many other professions will have already been impacted, and society as a whole will have undergone significant shifts.
What is more certain, however, is that AI will continue to play an increasingly prominent role in radiology, enhancing workflows, boosting efficiency, and increasing throughput.
Rather than replacing radiologists, AI will likely serve as a powerful tool that supports their work, allowing them to focus on more complex tasks while automating routine processes. Some of the non-interpretive tasks mentioned above are already being implemented in hospitals, and adoption should continue to grow over the next few years.
So while AI will undoubtedly change the field of radiology, radiologists will continue to play an essential role, with technology augmenting—not replacing—their expertise. It’s an exciting time to enter the field!
Radiology ranks in the top 10 most competitive specialties to match into. Are you curious about entering the field? Read our guides on How to Become a Radiologist, Radiology Career Pros & Cons, and 10 Radiology Subspecialty Career Paths Explained.