Ken Johnston on Building Trust through AI Ethics and Governance 

Ken Johnston - Vice President of Data, Analytics, and AI @Envorso
In this special edition of The Executive Outlook, we had the privilege of speaking with Ken Johnston, Vice President of Data, Analytics, and AI at Envorso—a global voice in cloud, data science, and ethical AI. With over three decades of experience across Microsoft, Bing, Ford, and large-scale enterprise systems, Ken has been part of the very fabric that shaped how organizations think about responsible technology.
What makes Ken’s story stand out isn’t just the scale of his experience but the rare blend of curiosity and caution that defines his approach to technology. In a world chasing speed, Ken has always been one step ahead—asking not just what AI can do, but what it should do. When he looked back, his path to AI wasn’t planned. “I wasn’t always working on AI,” he recalled with a smile. “I started as a database developer—obsessed with structure, order, and precision. I became a certified Microsoft DBA and thought that was my path.” But curiosity has a way of redirecting even the most structured careers. His next transition—into software testing—opened new doors. He specialized in stress and performance testing, setting up vast systems to simulate real-world loads. “I used to run scale tests overnight,” he said. “By morning, I’d have megabytes of log files to analyze—and back then, that was big data.”
Prefer to listen on the go? Tune in to the full podcast episode on Spotify below:
It was during this phase that his fascination with data truly took root. The thrill of seeing patterns emerge from chaos, of uncovering meaning buried inside numbers, became the heartbeat of his career. “I’ve always followed the data,” he said. “Every role I took pulled me deeper—from structured databases to massive-scale observability in the cloud.” The turning point came after shipping Office 2010. Ken joined the Bing team, and what he saw there would redefine his career. “When I joined Bing, I realized how small my world had been. I went from managing megabytes to petabytes, even exabytes. That’s when I understood the true power of unstructured data and AI.” It wasn’t just the size that amazed him—it was freedom. “In databases, you search for precision,” he explained. “With AI, you search for possibilities. That’s a completely different mindset.”
He laughed as he recalled the nickname his peers gave him back then—Big Data Surfer. “It fits,” he said. “I was just surfing wave after wave of data. Structured, unstructured—I didn’t care. I just wanted to see what it could teach us.” That same curiosity still drives him today. When asked what excites him about the shift from structured to unstructured data, he leaned forward with infectious energy. “Reports only answer questions you’ve already thought of,” he said. “Dashboards give you slightly more—some filters, some variation. But unstructured data? That’s where the magic happens. It lets you uncover the questions you never imagined.” He often turns to graph models to explain this power. “Graphs help us find relationships we didn’t know existed,” he said. “It’s not just about connecting dots—it’s about revealing the dots you didn’t even know were there.” Ken’s stories are filled with examples of how creative data thinking drives real insight. He recalled one famous case: analysts once combined corporate jet flight data with merger and acquisition rumors, predicting deals before they were publicly announced. “Who would think to join those two data sets?” he laughed. “But that’s exactly the kind of curiosity that defines good data science—finding unexpected relationships that change how we see the world.”
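The jet-tracking anecdote above can be sketched in a few lines. This is purely illustrative—the data, company names, and field layout are invented—but it shows the core move Ken describes: linking two otherwise unrelated data sets through a shared node (here, a company and a city) to surface a relationship neither set reveals on its own.

```python
# Hypothetical flight logs: (jet owner, destination city). All data is invented.
flights = [("AcmeCorp", "Redmond"), ("AcmeCorp", "Detroit"), ("Globex", "Austin")]

# Hypothetical M&A rumor mentions: (rumored acquirer, city of the rumored target's HQ).
rumors = [("AcmeCorp", "Detroit"), ("Initech", "Boston")]

# Build a simple graph: each company node linked to the cities its jet has visited.
flight_edges = {}
for company, city in flights:
    flight_edges.setdefault(company, set()).add(city)

# A company whose jet visits the same city a rumor points to is a lead worth checking.
leads = [(company, city) for company, city in rumors
         if city in flight_edges.get(company, set())]

print(leads)  # [('AcmeCorp', 'Detroit')]
```

In a real pipeline the nodes and edges would come from messy, heterogeneous sources, but the principle is the same: the insight lives in the join, not in either data set alone.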

Watch the full conversation on YouTube by clicking the link below:

As the conversation turned to AI’s current wave, Ken’s tone shifted—equal parts excitement and realism. “I’ve been working in AI since long before OpenAI or ChatGPT existed,” he said. “Back then, we had to build everything—annotating data, structuring metadata, manually adding context. Today, the models discover the structure for you. It’s extraordinary.” He paused. “But that also means we’ve lost some of our rigor. The models are fast, but we don’t always understand why they behave the way they do. That’s where governance becomes critical.” Ken believes that the failure of many AI initiatives has little to do with technology—and everything to do with strategy. “Most AI projects fail because they don’t start with clarity,” he said. “They launch pilots with no alignment to business goals. Leaders say, ‘Let’s do AI,’ just to prove they’re keeping up—but they never define what success looks like.”
He’s seen the pattern repeat across industries. “It’s not an AI problem,” he explained. “It’s a leadership problem. Success comes when you align metrics, define outcomes, and create accountability. The same principles that drove cloud and software success apply to AI—but we’re forgetting them in the hype.” Ken’s experience leading data-driven organizations has taught him that clarity and curiosity must coexist. “If you reward the wrong metrics, you’ll get the wrong behavior,” he said. “In one case, a team was rewarded only for closing deals, not for product usage. They hit targets—but adoption was near zero. Data doesn’t lie; it just tells you what you asked it to.” When we asked about ethical AI, his answer was both structured and deeply personal. “Ethics isn’t a checkbox,” he began. “It’s a compass. It tells you not just what you can do, but what you should do.”
He referenced one of his favorite frameworks—the Microsoft Responsible AI principles, which emphasize fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. “Those six principles are timeless,” he said. “They apply whether you’re training a chatbot or designing a financial model.” To illustrate how ethics and governance intersect, Ken shared a story that still echoes across the tech world. Years ago, Microsoft discovered that an early facial recognition model performed poorly for darker-skinned women on dark backgrounds and lighter-skinned women on bright backgrounds. “The bias wasn’t intentional,” he explained. “It was statistical. Our training data simply didn’t include enough diversity. We didn’t measure what we collected—and bias crept in through omission, not intent.” That realization became a turning point for him. “Bias hides in the data you don’t collect,” he said. “Fairness starts with asking who’s missing from your model.” He offered another example—this time about privacy. When Microsoft launched gesture recognition for Xbox, employees volunteered to train the models. But evolving privacy laws forced the company to later delete all that data. “It was the right call,” he said firmly. “Your biometric data—your face, your voice, your gestures—those belong to you, not a corporation. Ethics and law finally caught up with technology.”
Over time, Ken’s focus has shifted from building systems to building trust. “AI governance is the foundation of everything,” he said. “You can’t innovate without trust. If your customers or your employees don’t trust the systems you build, it doesn’t matter how advanced they are.” He also pointed out that responsible AI isn’t just about ethics—it’s also about observability and optimization. “As models grow, token usage and performance fluctuate,” he said. “If you’re not monitoring cost, latency, and accuracy in real time, you’re not governing AI—you’re guessing.” For Ken, AI observability is the missing bridge between innovation and accountability. His fascination with AI failures stems from the same belief. “I love studying AI mistakes,” he said, laughing. “There’s an entire database of AI incidents out there—I’ve read almost all of them. Because every failure teaches us something. Every bug has a story.” He told one story about Apple’s credit card model, which unintentionally gave women lower credit limits than men—even when their financial profiles were identical. “Apple didn’t include gender as a feature, but their model still learned gender bias through shopping patterns,” he explained. “That’s how subtle bias can be. You think you’ve removed it—but it finds a way back in.”
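The observability Ken describes—monitoring cost, latency, and accuracy rather than guessing—can be sketched in miniature. The class, field names, and per-token price below are all assumptions for illustration, not a real product or API; the point is simply that every model call gets measured as it happens.

```python
import time

class ModelMonitor:
    """Minimal sketch: record cost, latency, and correctness per model call."""

    def __init__(self, cost_per_token=0.00001):  # assumed illustrative price
        self.cost_per_token = cost_per_token
        self.records = []

    def observe(self, call):
        """Run a model call and log its latency, token cost, and correctness."""
        start = time.perf_counter()
        output, tokens_used, was_correct = call()
        latency = time.perf_counter() - start
        self.records.append({
            "latency_s": latency,
            "cost_usd": tokens_used * self.cost_per_token,
            "correct": was_correct,
        })
        return output

    def summary(self):
        """Aggregate the governance view: volume, spend, and accuracy."""
        n = len(self.records)
        return {
            "calls": n,
            "total_cost_usd": round(sum(r["cost_usd"] for r in self.records), 6),
            "accuracy": sum(r["correct"] for r in self.records) / n,
        }

# Usage with stubbed calls standing in for a real inference API:
monitor = ModelMonitor()
monitor.observe(lambda: ("answer A", 120, True))
monitor.observe(lambda: ("answer B", 300, False))
print(monitor.summary())
```

A production system would stream these records to a dashboard and alert on drift, but even this toy version makes the contrast concrete: with the log, governance is measurement; without it, it is guesswork.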
These lessons, he said, reveal the true cost of poor AI governance: “It’s not just a regulatory risk. It’s a brand risk. It’s trust lost at scale.” Ken’s stories are full of humility and humor, but beneath them lies a serious mission—to make AI safe, accountable, and human-centric. “I don’t want innovation to stop because of fear,” he said. “AI governance isn’t about slowing down progress. It’s about protecting it.” As we wrapped up, he reflected on where the field was heading. “Every company today wants to move fast with AI,” he said. “But speed without reflection is a shortcut to chaos. We need leaders who understand that governance isn’t red tape—it’s the runway that makes flight possible.” Ken Johnston’s journey—from database developer to global advocate for ethical AI—is a reminder that technology alone doesn’t shape the future. People do. His story embodies the very principle he teaches: AI’s greatest power lies not in replacing human intelligence but in amplifying human responsibility.

Click here to discover more life stories and insights from leaders shaping the future of data and tech.

Editor Bio

Isha Taneja

I’m Isha Taneja, serving as the Editor-in-Chief at "The Executive Outlook." Here, I interview industry leaders to share their personal opinions and provide valuable insights to the industry. Additionally, I am the CEO of Complere Infosystem, where I work with data to help businesses make smart decisions. Based in India, I leverage the latest technology to transform complex data into simple and actionable insights, ensuring companies utilize their data effectively.
In my free time, I enjoy writing blog posts to share my knowledge, aiming to make complex topics easy to understand for everyone.
