The AI Shutdown is Here: Why Most Projects Will Fail by 2026

with Andre Samokish

About This Episode

Andre Samokish, Privacy and AI Governance Manager at Anaplan and an IAPP-certified professional holding the AIGP and CIPM credentials, joins the show to unpack why the majority of enterprise AI projects are stalling or failing outright. The conversation moves past surface-level AI hype to examine the structural governance gaps, privacy blind spots, and organizational misalignments that quietly kill AI programs before they ever reach production.

Key Insights

Privacy cannot be bolted onto an AI system after deployment. Organizations that treat privacy as a compliance checkbox rather than an architectural requirement are setting their AI programs up to fail. The conversation explores how separating privacy engineering from AI development creates fractures that surface as regulatory exposure, data misuse, and eroded stakeholder trust.

Topics Explored

The episode covers AI project failure rates and their root causes, privacy-by-design in AI governance, the role of the AIGP and CIPM frameworks in operationalizing responsible AI, organizational accountability structures for AI programs, and why governance without enforcement is theater.

About the Guest

Andre Samokish is the Privacy and AI Governance Manager at Anaplan, where he leads efforts to embed privacy and governance into AI programs at scale. Holding the IAPP's AIGP and CIPM certifications, he brings a rigorous, frameworks-driven approach to operationalizing responsible AI in complex enterprise environments.

Questions This Episode Answers

Why are most enterprise AI projects failing?

The majority of enterprise AI initiatives are stalling because organizations treat governance, privacy, and accountability as afterthoughts rather than foundational requirements. Without structural alignment between privacy engineering, AI development, and operational leadership, projects fail to move beyond pilot stages.

What does privacy-by-design mean for AI governance?

Privacy-by-design means embedding privacy considerations into the architecture of AI systems from the start, not retrofitting compliance controls after deployment. It requires treating privacy as an engineering discipline integrated into every stage of the AI lifecycle.

Why is governance without enforcement ineffective?

Governance frameworks that lack enforcement mechanisms become organizational theater. Without accountability structures that connect governance requirements to operational consequences, policies exist on paper but fail to change behavior or protect against the risks they were designed to address.

Listen & Subscribe

Catch every episode of The Signal Room on your preferred platform.

YouTube · Apple Podcasts · Spotify

About the Host

Chris Hutchins is the Founder and CEO of Hutchins Data Strategy Consultants, where he helps healthcare organizations unlock the value of their data and AI investments through practical, responsible strategies. With deep experience integrating data, analytics, and AI across complex healthcare systems, he hosts The Signal Room to surface the leadership decisions, ethical questions, and operational realities that shape healthcare's data-driven future.