Authentic Intelligence: Context, Explainability, Bias, and Human-in-the-Loop

with Keshavan Seshadri

Episode 9 December 24, 2025 28 min

Show Notes

Keshavan Seshadri, a Senior ML Engineer at Prudential Financial, makes the case that intelligence without transparency is not intelligence at all. The conversation covers explainability requirements for AI systems, how bias creeps into models and what it takes to detect it, and why human-in-the-loop design is not a limitation but a feature that strengthens AI deployments.

Explainability in AI is not a nice-to-have; it is a prerequisite for any deployment where decisions affect people's lives. Bias in machine learning models is a systemic issue that requires ongoing monitoring, not a one-time audit. Organizations that treat bias detection as a launch criterion rather than a continuous practice will continue to experience failures.

Topics covered: AI explainability requirements, algorithmic bias detection and monitoring, human-in-the-loop design principles, model transparency for clinical trust, and why authentic intelligence demands that humans remain accountable for AI-driven decisions.