During a recent conversation on The Executive Outlook, Shree Periakaruppan, Head of Data Engineering and Analytics at American College of Radiology (ACR), offered a rare inside look at how AI in radiology is being used in the real world—not just in pilot decks.
On the surface, ACR looks like a professional society. Behind the scenes, it operates like a data and AI organization, running one of the largest medical imaging data hubs in the US, validating AI, and supporting radiologists who interpret hundreds of millions of studies every year.
Across the discussion, Shree returns to one core message: AI in radiology only creates real value when it runs on a strong data platform, is governed properly, and is measured against business and patient outcomes—not just model accuracy.
Inside ACR: A data-rich engine behind radiology
ACR is best known as a membership-based organization, representing around 41,000 radiologists, radiation oncologists and medical physicists. Its mission is to serve patients and society by helping its members advance the practice, science and profession of radiological care.
One of ACR’s most critical roles is accreditation. In the US, CT, MRI and other imaging scanners must be accredited every three years under an FDA mandate. ACR is a key player in providing accreditation across the country.
Beyond accreditation, ACR is deeply involved in clinical research, AI validation in radiology, and continuing medical education. It also operates a huge medical imaging data hub, collecting data from more than 39,000 facilities, around 5 terabytes of new imaging data every month, with total assets approaching a petabyte.
On paper, ACR looks like a professional society. In practice, it’s a data-rich, tech-driven organization at the heart of medical imaging innovation.
AI in radiology: already here, not a future bet
In many parts of healthcare, AI still feels experimental. Radiology is several steps ahead.
While some hospital departments might see AI adoption near 20%, Shree notes that around 86% of radiology sites in the US now use some form of AI, up from just over 50% a few years ago. Adoption is no longer a future ambition—it’s a day-to-day reality.
She explains two broad categories of AI in radiology:
- Interruptive (imperative) AI – keeps a human in the loop. For example, an AI model flags potential intracranial hemorrhage and pushes those cases to the top of the radiologist’s worklist so urgent patients are seen faster.
- Non-interruptive (non-imperative) AI – works in the background. AI organizes patient data from the EHR, pulls relevant prior studies, sets up “hanging protocols” for side-by-side comparisons, and adjusts viewing settings such as contrast and orientation.
These tasks are repetitive and time-consuming when done manually. AI in radiology doesn’t replace the radiologist; it removes friction so they can focus on interpreting images and making clinical decisions.
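The interruptive pattern described above, where AI-flagged urgent cases jump to the top of the radiologist's worklist, can be sketched in a few lines. This is an illustrative model only: the study fields and the flag name are assumptions, not ACR's or any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Study:
    """A single imaging study waiting to be read (illustrative fields)."""
    study_id: str
    received: datetime
    ai_flagged_urgent: bool = False  # e.g. suspected intracranial hemorrhage

def prioritize(worklist: list[Study]) -> list[Study]:
    """AI-flagged cases first; within each group, oldest study first."""
    return sorted(worklist, key=lambda s: (not s.ai_flagged_urgent, s.received))

# A flagged study received later still outranks unflagged earlier ones:
queue = prioritize([
    Study("a", datetime(2024, 1, 1, 9)),
    Study("b", datetime(2024, 1, 1, 10), ai_flagged_urgent=True),
    Study("c", datetime(2024, 1, 1, 8)),
])
```

The key detail is that the sort key demotes rather than replaces human judgment: the AI only reorders the queue, and every study is still read by a radiologist.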
How leaders decide what to automate with AI
For Shree, the question “Where should we use AI?” starts with a simple lens: time and effort.
There are only about 40,000 radiologists in the US reading roughly 600 million imaging studies a year. They already handle hundreds of studies a day. Adding more manual work on top of that makes no sense.
So she asks:
- Is the task monotonous and repetitive?
- Does it consume a lot of highly skilled human time?
- Will automation give physicians time back to read more studies or handle complex cases?
If the answer is yes, it’s a strong candidate for AI in radiology.
Shree also places AI in the context of three major human revolutions:
- Electricity and the industrial revolution—automating physical labor.
- Computers and the web—augmenting mental processing and access to information.
- AI—blending brain and physical work, reshaping how industries operate.
Seen this way, AI isn’t a gadget. It’s the next wave of how work gets done.
Accuracy, fear and why AI governance matters
Whenever AI enters clinical workflows, fear and skepticism follow, especially around accuracy and safety. For Shree, the only way to handle that is through data, facts and governance.
ACR is heavily invested in AI governance in radiology. When AI runs at a site, ACR doesn’t simply accept a vendor’s claims. It:
- Receives the AI output from the manufacturer
- Receives the radiology report from the site
- Uses generative AI to parse the radiologist’s report and extract what the human concluded
- Compares the AI result against the radiologist’s finding to detect concordance or discordance
- Feeds those facts back to the site
Over time, this monitoring builds a clear picture of where AI performs well, where it struggles, and how often it aligns with human experts.
Shree Periakaruppan contrasts this with traditional software. In classic systems, you build, test, deploy and mostly “forget”—as long as inputs stay stable, behavior is predictable. With AI in radiology, the incoming data can shift over time. A model that once performed well may degrade as data patterns change.
That’s why governance and monitoring often take more effort than the initial build. Deployment isn’t the finish line; it’s the start of a new lifecycle of continuous oversight.
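One way continuous oversight can work in practice is a rolling concordance rate that raises a flag when agreement with radiologists drops. The sketch below is a hypothetical monitor, not ACR's system; the window size and threshold are illustrative choices.

```python
from collections import deque

class DriftMonitor:
    """Track a rolling concordance rate and flag possible model degradation.

    window and threshold are illustrative defaults, not clinical guidance.
    """
    def __init__(self, window: int = 500, threshold: float = 0.90):
        self.results: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, concordant: bool) -> bool:
        """Record one study; return True if the model needs human review."""
        self.results.append(concordant)
        rate = sum(self.results) / len(self.results)
        # Only alert once the window is full, to avoid noisy early flags.
        return len(self.results) == self.results.maxlen and rate < self.threshold
```

Because the window slides, the monitor reacts to recent data shifts rather than being diluted by months of historical agreement, which is exactly the failure mode Shree describes.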
Building the data lake that made AI in radiology possible
Underneath ACR’s AI work sits a vital foundation: its data platform.
A key part of that story begins with the Neiman Health Policy Institute, a division of ACR that handles claims data. They receive US-wide claims data from CMS and, a few years ago, added around 300 terabytes of private payer data from a third-party provider.
Traditionally, this team:
- Stored about 30 terabytes of claims data on-premises
- Ran analytics in SAS on a central server
- Had 15 researchers remoting into that box to run 15-year longitudinal cohort analyses
A typical cohort query—such as “all women in a certain age range with a specific condition over 15 years”—took 8–10 hours. Researchers would start it in the evening and only see results the next morning.
When the extra 300 terabytes of data arrived, that approach hit its limit. Storing and querying that volume on-prem just wasn’t viable.
The team briefly explored scaling SAS on a Hadoop-like architecture with master–slave clusters. But the numbers didn’t work:
- Around $500,000 in infrastructure
- Roughly $10,000 per user in licensing
- Full responsibility for maintaining it all—heavy for a non-profit
So they stepped back and focused on people, process, then technology.
The solution they landed on:
- Move to an AWS-based data lake
- Use Athena and AWS tools to create cohorts directly over the data lake
- Only move the resulting 1–2 GB cohort into SAS for further statistical analysis
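The pattern behind that architecture is query pushdown: run the heavy filter inside the lake's SQL engine and move only the small cohort out for statistics. The sketch below uses Python's built-in sqlite3 as a stand-in for Athena; the table and column names are invented for illustration and are not ACR's claims schema.

```python
import sqlite3

# sqlite3 stands in for Athena over the data lake; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (patient_id TEXT, sex TEXT, age INTEGER,"
    " dx_code TEXT, year INTEGER)"
)
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?, ?, ?)",
    [
        ("p1", "F", 52, "C50", 2012),
        ("p2", "M", 60, "C50", 2015),
        ("p3", "F", 48, "I10", 2010),
        ("p4", "F", 55, "C50", 2020),
    ],
)

# The heavy filter runs in the query engine; only the cohort comes back.
cohort = conn.execute(
    """SELECT DISTINCT patient_id FROM claims
       WHERE sex = 'F' AND age BETWEEN 40 AND 65
         AND dx_code = 'C50' AND year BETWEEN 2008 AND 2022
       ORDER BY patient_id"""
).fetchall()
```

Against 300 TB, the same shape of query scans columnar files in place and returns a 1–2 GB result, which is what collapsed ACR's 8-hour jobs to minutes.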
To prove the change, an engineer took a cohort query that previously ran in SAS and executed it on Athena.
- Old world: 8 hours
- New world: 1 minute 30 seconds
After that, there was no going back.
The data lake:
- Allowed ACR to onboard the full 300 TB of new data
- Reduced typical query times from overnight to a few minutes
- Became the foundation for ACR’s broader data strategy and scalable AI in radiology
Why radiology had a head start: standardized data
One reason AI in radiology has advanced so quickly is that the data foundation was already strong.
The DICOM standard for medical imaging was established decades ago. That means:
- Imaging devices across vendors follow a well-defined structure
- Data coming off scanners is relatively consistent compared with many other healthcare domains
When you combine a robust data platform like ACR’s data lake with a standardized format like DICOM, it becomes much easier to build, validate and scale AI safely and repeatably.
For Shree, the high adoption of AI in radiology boils down to:
- Need – a huge workload and a limited number of radiologists
- Foundation – standardized data that AI can reliably learn from
The real AI challenge: trust, not tech
When asked about the biggest challenge in designing AI infrastructure, Shree is clear: it’s not the technology.
There are plenty of smart people and mature tools that can make almost anything work technically. The harder problems are:
- People—getting clinicians, researchers and stakeholders to trust and use AI
- Process—embedding AI into real workflows instead of treating it as a separate experiment
- Trust—proving over time that the system behaves reliably and improves care
Every AI solution has to start with a simple question: “What do we need to do for people to trust this system?”
Without that, even the best technical solution stays on the shelf.
Measuring value: KPIs that actually matter
For Shree, AI governance and value measurement must stay business-aligned, not just technical.
In ACR’s context, that means focusing on:
- Time to diagnosis
- Operational efficiency and turnaround time
- Patient outcomes and safety
- Cost savings and resource optimization
She simplifies this into a framework any organization can use. Your AI KPIs should tie back to at least one of:
- Increase the top line—revenue or growth
- Improve the bottom line—cutting cost and waste
- Improve customer or patient satisfaction—which often feeds into revenue and retention
If a metric doesn’t clearly support one of these, it’s probably not the right KPI to track AI value. In radiology, the strongest outcome is usually patient care—saving lives, reducing delays and making access more equitable.
Red flags: where AI projects go wrong
Asked about a key red flag to watch for before things unravel, Shree highlights two:
- Unclear ROI—If a project doesn’t have a clearly defined return—money, time, outcomes or satisfaction—that’s a warning sign. Teams can spend months without knowing what success looks like.
- Weak workflow integration—AI is not “build once, deploy, forget”. It has a full lifecycle—from development to deployment, monitoring, governance and retraining. If AI isn’t wired into everyday workflows with feedback loops, it either quietly fails or introduces risk.
For Shree, successful AI in radiology is never just about the model. It’s about end-to-end integration, clear value and continuous oversight.
Final thoughts: platform, governance and purpose
Across ACR’s work in accreditation, research, data and AI, Shree’s view stays grounded:
- A strong data platform is the base for any serious AI in radiology
- Standardized data (like DICOM) gives radiology a major advantage
- AI governance and monitoring are non-negotiable if you want trust
- KPIs must link to revenue, cost, satisfaction or—most importantly in healthcare—patient outcomes
Above all, AI in radiology isn’t about replacing radiologists. It’s about giving them time back, reducing friction and making sure patients get the right diagnosis and care faster.
In Shree’s words and worldview, AI will sit alongside electricity and computing as a defining human revolution. But its real impact will depend on something deeply human: clear purpose, strong foundations and earned trust.
Watch the full video conversation here to explore more insights and hear the complete discussion.