John Lynn explains why AI in Healthcare will scale through trust, workflow redesign, patient acceptance, and practical leadership, not hype alone.

During a recent conversation on The Executive Outlook, Isha Taneja spoke with John Lynn, Founder of Healthcare IT Today and one of the most respected voices in healthcare technology. His experience spans more than two decades, but what makes his perspective valuable is not just longevity. It is clarity. John has watched healthcare move from paper-driven processes to electronic records, from fragmented tools to broader platforms, and now into a new phase where AI in Healthcare is beginning to reshape both operations and care.
What stood out immediately was how practical his thinking remains. John does not speak about technology as something to admire from a distance. He speaks about it as something that must prove its value inside real workflows. He is interested in what helps organizations function better, what reduces friction for clinicians, and what makes adoption possible in environments where trust matters deeply. In a space crowded with predictions and product noise, his view felt grounded in the reality of how healthcare actually changes.
That mindset has shaped his journey from the beginning. He came into healthcare through technology. He started in higher education IT and then moved into healthcare through electronic medical record implementation at UNLV. What he learned there still shapes how he sees the market today. At first, the instinct was to replicate paper-based workflows in digital form. But the real breakthrough came when the organization realized technology should not just copy old processes. It should redesign them. That early experience gave him a lasting belief that technology matters most when it helps people do things they previously thought were impossible, or at least far more difficult. That same belief now informs the way he thinks about AI in Healthcare.
He also shared one of the hardest moments of his career. During COVID, Healthcare IT Today faced a sudden shock as conferences stopped and marketing budgets froze. For a media business built around industry presence, it was a deeply challenging period. Yet even here, his story reflects the larger theme of the conversation. Crisis created reinvention. A bold idea to conduct one hundred interviews in one hundred days eventually helped video become a core part of the business. What began as a difficult response to disruption became a lasting source of strength. It was a reminder that transformation often becomes visible only after leaders decide to move through uncertainty rather than wait for it to pass.
One of John’s sharpest observations is that healthcare is not lacking AI innovation. If anything, it is facing the opposite problem. There are too many promising tools, too many vendors, and too many narrowly focused solutions entering the market at once. That sounds like abundance, but for enterprise leaders it can feel more like fragmentation.
He explains this through a pattern healthcare has seen before. Before electronic health records matured into broader enterprise platforms, hospitals often used separate systems for pharmacy, laboratory, revenue cycle, and records. Over time, integrated platforms won because leaders needed simplicity, continuity, and fewer disconnected systems. AI in Healthcare, he argues, is now in that earlier phase. There are countless point solutions that do one thing very well, but there is no single platform that can yet handle everything a health system might want across clinical AI, radiology AI, and revenue cycle management.
That creates a real executive dilemma. CIOs know they need an AI plan. They know standing still is not a strategy. But they also know the future platform landscape is still evolving. They are being pushed to act before the market has fully organized itself. He captures that tension clearly. Leaders must decide what to do today while also thinking about which vendors or platforms may still matter two or three years from now. In other words, the challenge is not deciding whether AI matters. The challenge is deciding how to move without overcommitting too early.
Perhaps the most important insight John offers is that AI adoption is not being slowed only by technical concerns. It is being slowed by human psychology.
He draws on a framework that explains why people resist AI. First, people value autonomy. They want control over decisions and outcomes. Second, people often face an asymmetry of information. Something may actually be safer or more effective than they believe, but they have not yet internalized that reality. Third, people are far more tolerant of human failure than machine failure. A person can make a mistake and still be forgiven as human. A machine makes a mistake and suddenly the technology itself feels unacceptable.
In healthcare, these barriers become even stronger. Doctors want agency. They want to trust their own judgment. They may resist the idea that AI could outperform them in some situations, even when that possibility is becoming more real. He is careful not to oversimplify this point. He does not say AI is always better than clinicians. In some cases it is not. But he does point out that AI is getting better quickly, and that creates a new type of tension. Technical progress may be moving faster than emotional acceptance. Leaders who ignore that reality will struggle to scale AI in Healthcare, no matter how advanced the tools become.
When John talks about impact, he does not start with abstract futurism. He starts with use cases that reduce friction right now.
The first is the AI scribe, or ambient clinical voice technology. For him, this is one of the most transformative developments healthcare has seen in years. A clinician can enter the room, record the interaction with patient consent, and focus on the patient rather than documentation. When the visit ends, the notes are already created. Increasingly, the technology can also support coding and surface prompts on what else the doctor may need to consider. John sees this not as a small administrative convenience, but as a major shift in clinical workflow. It removes operational minutiae from doctors and gives them more time and attention for care itself.
The second major area is revenue cycle management. Here, he makes a particularly useful point. AI changes the economics of effort. He gives the example of low balances, where the amount owed is so small that it often costs more for a human to chase payment than the balance is worth. In a traditional operating model, those amounts are often ignored. But with AI, the outreach becomes inexpensive enough to justify the effort. Calls can be made, messages can be sent, and follow-up can happen at a lower cost. What was once uneconomical becomes viable. He also points to prior authorization as one of the most interesting areas for improvement inside revenue cycle, especially because it remains such a frustrating and inefficient process across the system. This is one of the clearest examples of how AI in Healthcare is not just making systems smarter. It is making previously impractical work finally worth doing.
One of the more striking parts of the conversation comes when the focus shifts from operational value to patient acceptance. John’s answer is honest and slightly uncomfortable. In many cases, he does not think acceptance will come from perfect enthusiasm. It will come because the technology becomes part of how service is delivered.
He notes that AI bots calling patients have already become so realistic that many people, especially older individuals, may not realize they are speaking with a bot at all. Some of the recordings, he says, are convincing enough that people genuinely believe they are interacting with a person. For some, that may feel unsettling. But John’s view is pragmatic. If patients get the service they need, many will accept the interaction even if it is powered by AI.
He also makes an important generational point. Right now, older patients may still prefer a phone call. Younger people may prefer text, chat, or another digital format altogether. The interesting part is that the technology can adapt to those preferences. It can call, message, or communicate through different channels while still achieving the same outcome. Over time, the interface may change, but the delivery model will remain AI-enabled. He goes even further. In many situations, he does not think people will have much of a real choice. If the bot is the way the system reaches out and the patient does not engage, they may simply miss the service or care they need. That is a powerful reminder that patient adoption will not always be a matter of preference alone. It will often be shaped by how health systems choose to operate.
He also offers a prediction that feels both practical and forward-looking. He believes healthcare is entering the year of the robot.
He is not describing science fiction. He is describing machines that can move through hospitals and take on time-consuming operational work. Transport is one example. Nurses often spend valuable time moving items between the pharmacy, the laboratory, and the patient's floor. A capable humanoid robot can take on that work if it can use an elevator, navigate security, and move between departments. That may sound simple, but in a hospital setting it solves a real productivity problem.
He also points to robots used for telehealth. Instead of a static screen, these systems can move into a room, position themselves, and support a more flexible remote interaction between clinicians and patients. Another use case is supply management, where robots can help ensure the right materials are stocked in the right place at the right time. What makes these examples compelling is not novelty. It is that they remove repetitive workload from already stretched teams. Once again, John’s logic is consistent. The strongest use of AI in Healthcare is often not the most glamorous. It is the use that quietly frees people to focus on higher value work.
One of the smartest parts of John’s perspective is that he does not limit innovation to the most obvious categories. He finds excitement in areas that often go unnoticed but are rife with operational drag.
Credentialing is a good example. It is time-consuming, documentation-heavy, and filled with repetitive steps involving providers, state bodies, medical societies, and insurers. For healthcare organizations trying to onboard clinicians quickly, it can become a major bottleneck. He sees this as a perfect use case for AI. Systems can request missing documents, gather public data, navigate difficult legacy portals, and present information back to the clinician for verification. That does not just save time. It shortens the path between hiring and productivity.
He also highlights chart summarization. Medical records contain enormous amounts of information, but not every detail matters equally to every specialist. An orthopaedic surgeon, for example, may not need the same details emphasized as a clinician in another speciality would. AI can surface the most relevant details for the right clinician at the right moment. He also points to radiology AI as one of the most progressive areas in the field, moving quickly and raising important questions about how the speciality may evolve. Together, these examples show that AI in Healthcare is not limited to one headline use case. It is spreading across administrative, operational, and clinical layers at the same time.
When asked about jobs, he responds with balance rather than panic. He openly acknowledges that some roles will disappear, especially repetitive tasks that people often disliked in the first place. But he places that in a broader historical context. Every major technological shift has replaced certain kinds of work while creating new forms of value and new responsibilities.
He uses familiar comparisons to make the point. The move from horses to cars changed labor. The rise of word processing changed labor. AI will do the same. Some jobs will be automated. Others will emerge around coordinating, monitoring, and validating AI systems. And there are still areas where people will continue to want human oversight and human care. He is direct about that. He is not ready for AI to perform a knee replacement or set a broken bone on its own. The future may include more automation, but it still includes a strong place for clinical judgment, patient trust, and human responsibility. In many ways, the more AI enters healthcare, the more important human coordination may become.
He closes with advice that feels especially useful for executives trying to navigate a fast moving market. Start doing. Build a network. Do not be afraid to fail thoughtfully.
The first point matters because leaders cannot understand AI only by reading about it. They have to use it. They have to experiment with it in work and in everyday life. He even mentions something as ordinary as planning a vacation with AI as a way to understand its strengths and limitations more clearly. Experience creates judgment. Without doing, there is no real learning.
The second point is about network quality. In a noisy market, trusted peers help separate what is real from what is performative. They help leaders identify what is effective, what is immature, and what deserves closer attention. That kind of network is not optional anymore. It is part of how executives stay informed without being overwhelmed.
The third point may be the most important. He argues that doing nothing is also a risk. In healthcare, caution matters because lives are involved. But complete hesitation carries its own cost. The better culture is not one where mistakes never happen. It is one where leaders experiment carefully, learn quickly, and recover fast when something does not work. That mindset may be what ultimately separates organizations that talk about AI in Healthcare from those that actually turn it into meaningful progress.
What John makes clear is that the future of AI in Healthcare will not be decided by hype alone. It will be shaped by trust, workflow design, patient adaptation, economic practicality, and leadership that is willing to move before the market feels fully settled. That is what made this conversation so valuable. It was not just about what AI can do. It was about what it takes to make AI work in the real world.
Because in the end, AI alone will not create impact. The thoughtful application of AI within systems, workflows, and decisions makes the real difference.
Conversations like this, shared through The Executive Outlook, are meant to bring that clarity forward. And for organizations ready to move beyond experimentation into real execution, working with partners like Complere Infosystem can help turn that intent into solutions that work where it matters most.
