Authentic Intelligence: Context, Explainability, Bias, and Human-in-the-Loop

with Keshavan Seshadri

About This Episode

Keshavan Seshadri, a Senior ML Engineer at Prudential Financial, makes the case that intelligence without transparency is not intelligence at all. The conversation covers explainability requirements for AI systems, how bias creeps into models and what it takes to detect it, and why human-in-the-loop design is not a limitation but a feature that strengthens AI deployments.

Key Insights

Explainability in AI is not a nice-to-have; it is a prerequisite for any deployment where decisions affect people's lives or livelihoods. When healthcare AI systems make recommendations that clinicians cannot understand, those clinicians cannot defend them to patients, cannot override them when appropriate, and cannot improve them when they fail.

Bias in machine learning models is a systemic issue that requires ongoing monitoring, not a one-time audit. Bias emerges from training data, can amplify during deployment, and changes as the population using the system evolves. Organizations that treat bias detection as a launch criterion rather than a continuous practice will continue to experience failures.

Human-in-the-loop approaches strengthen AI systems by introducing accountability at the points where models are most likely to fail. Rather than viewing human involvement as a necessary evil, effective organizations recognize it as a design feature that improves outcomes and maintains trustworthiness.

Topics Explored

The episode covers AI explainability techniques, algorithmic bias detection methods, human-in-the-loop AI system design, model transparency and accountability practices, responsible AI deployment frameworks, and the engineering practices that separate trustworthy AI from black-box automation. Discussion includes specific techniques for making models interpretable and strategies for ongoing bias monitoring.

About the Guest

Keshavan Seshadri is a Senior ML Engineer at Prudential Financial, where he works on building machine learning systems that are transparent, fair, and operationally reliable. His technical depth informs a practical perspective on responsible AI.

Questions This Episode Answers

Why does AI explainability matter in healthcare?

Clinicians and patients need to understand why an AI system made a particular recommendation before they can trust or act on it. Explainability is not just a technical feature; it is a requirement for clinical adoption, regulatory compliance, and patient consent. Black-box models that cannot explain their reasoning face justified resistance from healthcare professionals.
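The episode does not name a specific technique, but one minimal way to make a recommendation inspectable is per-feature contribution for a linear model, where each input's effect on the score is simply weight times value. The feature names, weights, and patient values below are illustrative assumptions, not from the episode:

```python
def explain_linear(weights, features):
    """Return a linear model's score and each feature's contribution to it.

    For a linear model, contribution = weight * value, so a clinician can see
    exactly which inputs drove the recommendation and by how much.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical risk model and patient record (illustrative values only).
weights = {"age": 0.02, "blood_pressure": 0.01, "prior_admissions": 0.5}
patient = {"age": 60, "blood_pressure": 130, "prior_admissions": 2}

score, contribs = explain_linear(weights, patient)
# contribs shows, per feature, what pushed the score up or down.
```

For nonlinear models the same idea generalizes to attribution methods (e.g., Shapley-value-based explanations), but the principle is identical: each recommendation ships with a breakdown a clinician can interrogate.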

How do you detect and mitigate bias in AI models?

Bias detection requires systematic evaluation across demographic groups, clinical populations, and data sources throughout the model lifecycle. Mitigation is not a one-time fix but an ongoing process of monitoring, testing, and adjusting as populations and data distributions change. Responsible AI development treats bias detection as a core engineering practice.
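The systematic evaluation across groups described above can be sketched as a group-wise metric comparison run on every evaluation cycle, not just at launch. The group labels, predictions, and the use of selection rate as the monitored metric are illustrative assumptions:

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-prediction rate per group; records are (group, prediction) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in records:
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap between any two groups' rates -- a simple fairness signal
    to track over time as populations and data distributions shift."""
    return max(rates.values()) - min(rates.values())

# Illustrative batch of model outputs tagged by demographic group.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(records)
gap = max_disparity(rates)
```

In practice the same comparison would be computed for several metrics (error rate, false-negative rate, calibration) and alerted on when the gap crosses a threshold, which is what makes bias detection a continuous engineering practice rather than a one-time audit.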

What is the case for keeping humans in the loop with AI?

Human oversight ensures that AI recommendations are evaluated against clinical context, patient preferences, and situational factors that models cannot fully capture. The human-in-the-loop approach also provides a critical safety net for catching errors before they reach patients. Removing human oversight to gain efficiency creates risks that far outweigh the benefits.
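The safety-net pattern described above can be sketched as confidence-based routing: predictions the model is unsure about are deferred to a human reviewer instead of being acted on automatically. The threshold value and labels here are illustrative assumptions:

```python
def route(prediction, confidence, threshold=0.9):
    """Send low-confidence predictions to human review instead of auto-acting.

    Returns a (disposition, prediction) pair so downstream systems know
    whether a human must sign off before the recommendation reaches a patient.
    """
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

# High-confidence output proceeds; uncertain output is held for a clinician.
assert route("approve", 0.95) == ("auto", "approve")
assert route("approve", 0.60) == ("human_review", "approve")
```

The design choice is that the human is placed precisely where the model is most likely to fail (its low-confidence region), which is the sense in which human-in-the-loop is a feature rather than a limitation.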

"Authentic intelligence requires transparency. If a model cannot explain why it made a recommendation, clinicians have every right to reject it."

Keshavan Seshadri, Senior ML Engineer, Prudential Financial, on The Signal Room Podcast

Listen & Subscribe

Catch every episode of The Signal Room on your preferred platform.

YouTube · Apple Podcasts · Spotify

About the Host

Chris Hutchins is the Founder and CEO of Hutchins Data Strategy Consultants, where he helps healthcare organizations unlock the value of their data and AI investments through practical, responsible strategies. With deep experience integrating data, analytics, and AI across complex healthcare systems, he hosts The Signal Room to surface the leadership decisions, ethical questions, and operational realities that shape healthcare's data-driven future.