About This Episode
Danette McGilvray, a leading voice in data quality, delivers a clear message: AI strategy without data quality is a house built on sand. The conversation covers practical frameworks for assessing and improving data quality, why generative AI amplifies data problems rather than solving them, and what healthcare organizations must get right before investing in advanced analytics.
Key Insights
Generative AI does not fix bad data; it amplifies it at scale and with confidence. When garbage data enters generative models, the output appears sophisticated and authoritative while remaining fundamentally unreliable. The result is misplaced trust in outputs that mask underlying quality problems.
Data quality must be treated as a strategic investment, not a cleanup project that happens after AI deployment fails. Healthcare organizations that succeed invest in data governance long before they invest in machine learning models, treating data quality as foundational rather than preparatory.
Measuring data quality requires frameworks that connect data integrity to business and clinical outcomes, not just technical accuracy metrics. A field might be technically complete while containing clinically dangerous values that no statistical audit would catch.
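The completeness-versus-validity gap described above can be shown in a toy sketch. This is an illustrative example only, not from the episode; the field name and plausible range are hypothetical stand-ins for whatever clinical rules an organization would actually define.

```python
# Toy illustration: a field can be fully "complete" yet clinically invalid.
# "heart_rate" and the 30-250 bpm range are hypothetical examples.

records = [
    {"patient_id": 1, "heart_rate": 72},
    {"patient_id": 2, "heart_rate": 0},    # present, but clinically impossible
    {"patient_id": 3, "heart_rate": 450},  # present, but clinically impossible
    {"patient_id": 4, "heart_rate": 88},
]

def completeness(rows, field):
    """Share of rows where the field is populated at all."""
    return sum(r.get(field) is not None for r in rows) / len(rows)

def validity(rows, field, low, high):
    """Share of rows whose value falls in a clinically plausible range."""
    return sum(
        r.get(field) is not None and low <= r[field] <= high for r in rows
    ) / len(rows)

print(completeness(records, "heart_rate"))       # 1.0 -- passes a completeness audit
print(validity(records, "heart_rate", 30, 250))  # 0.5 -- half the values are dangerous
```

A purely technical audit that checks only the first metric would report this field as perfect, which is exactly the failure mode the episode warns about.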
Topics Explored
The episode covers data quality frameworks, AI strategy foundations, generative AI data risks, healthcare data governance approaches, data integrity measurement, and the relationship between data quality investment and AI return on value. Discussion includes practical steps for assessing current data quality and building a roadmap for improvement.
About the Guest
Danette McGilvray is a recognized data quality expert and author who has spent her career building frameworks for measuring and improving data quality. Her work directly informs how organizations prepare their data foundations for AI readiness.
Questions This Episode Answers
Why does data quality matter for AI strategy?
Data quality determines AI output quality. Generative AI systems trained on incomplete, inaccurate, or inconsistent data produce outputs that are confidently wrong, a particularly dangerous outcome in healthcare where decisions affect patient safety. Investing in data quality before deploying AI is not conservative; it is responsible.
What happens when AI is built on bad data?
AI systems built on poor-quality data amplify existing errors at scale and with a veneer of authority that makes them harder to catch. In healthcare, this means misdiagnoses, incorrect treatment recommendations, and compliance violations that would not have occurred with human-only workflows. Generative AI does not simply pass garbage through; it returns confidently wrong answers at scale.
How should healthcare organizations approach data governance?
Data governance should be treated as a continuous capability rather than a one-time project. Effective governance requires clear ownership, measurable quality standards, and ongoing monitoring that adapts as data sources and AI applications evolve.