Data Quality and AI Strategy: Garbage In, Gen AI Out

with Danette McGilvray

About This Episode

Danette McGilvray, a leading voice in data quality, delivers a clear message: AI strategy without data quality is a house built on sand. The conversation covers practical frameworks for assessing and improving data quality, why generative AI amplifies data problems rather than solving them, and what healthcare organizations must get right before investing in advanced analytics.

Key Insights

Generative AI does not fix bad data; it amplifies it at scale and with confidence. When garbage data enters generative models, the output appears sophisticated and authoritative while remaining fundamentally unreliable, creating a false sense of trust that masks the underlying quality problems.

Data quality must be treated as a strategic investment, not a cleanup project that happens after AI deployment fails. Healthcare organizations that succeed invest in data governance long before they invest in machine learning models, treating data quality as foundational rather than preparatory.

Measuring data quality requires frameworks that connect data integrity to business and clinical outcomes, not just technical accuracy metrics. A field might be technically complete while containing clinically dangerous values that no statistical audit would catch.
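The gap between technical completeness and clinical validity can be illustrated with a minimal sketch. This is not a framework from the episode; the metric definitions, the heart-rate example, and the plausibility range are hypothetical illustrations of the general point.

```python
def completeness(values):
    """Fraction of entries that are present (non-missing)."""
    return sum(v is not None for v in values) / len(values)

def clinical_validity(values, low, high):
    """Fraction of present entries that fall inside a plausible clinical range."""
    present = [v for v in values if v is not None]
    if not present:
        return 0.0
    return sum(low <= v <= high for v in present) / len(present)

# Hypothetical heart-rate readings in beats per minute: the field is fully
# populated, yet two values (0 and 950) are clinically impossible.
heart_rates = [72, 88, 0, 65, 950, 79]

print(completeness(heart_rates))                # 1.0 -- "complete" by a technical audit
print(clinical_validity(heart_rates, 30, 250))  # ~0.67 -- dangerous values slip through
```

A completeness-only audit scores this field perfectly, which is exactly the failure mode described above: the metric must be tied to what is clinically plausible, not just to whether a value exists.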

Topics Explored

The episode covers data quality frameworks, AI strategy foundations, generative AI data risks, healthcare data governance approaches, data integrity measurement, and the relationship between data quality investment and AI return on value. Discussion includes practical steps for assessing current data quality and building a roadmap for improvement.

About the Guest

Danette McGilvray is a recognized data quality expert and author who has spent her career building frameworks for measuring and improving data quality. Her work directly informs how organizations prepare their data foundations for AI readiness.

Questions This Episode Answers

Why does data quality matter for AI strategy?

Data quality determines AI output quality. Generative AI systems trained on incomplete, inaccurate, or inconsistent data produce outputs that are confidently wrong, a particularly dangerous outcome in healthcare where decisions affect patient safety. Investing in data quality before deploying AI is not conservative; it is responsible.

What happens when AI is built on bad data?

AI systems built on poor-quality data amplify existing errors at scale and with a veneer of authority that makes them harder to catch. In healthcare, this means misdiagnoses, incorrect treatment recommendations, and compliance violations that would not have occurred with human-only workflows. Generative AI does not just give garbage out; it gives confidently wrong answers at scale.

How should healthcare organizations approach data governance?

Data governance should be treated as a continuous capability rather than a one-time project. Effective governance requires clear ownership, measurable quality standards, and ongoing monitoring that adapts as data sources and AI applications evolve.

"When the inputs are garbage, generative AI does not just give you garbage out. It gives you confidently wrong answers at scale, and in healthcare, that is a patient safety crisis."

Danette McGilvray, Data Quality Expert and Author, on The Signal Room Podcast

Listen & Subscribe

Catch every episode of The Signal Room on your preferred platform.

YouTube · Apple Podcasts · Spotify

About the Host

Chris Hutchins is the Founder and CEO of Hutchins Data Strategy Consultants, where he helps healthcare organizations unlock the value of their data and AI investments through practical, responsible strategies. With deep experience integrating data, analytics, and AI across complex healthcare systems, he hosts The Signal Room to surface the leadership decisions, ethical questions, and operational realities that shape healthcare's data-driven future.