About This Episode
Susie Brannigan, VP of Healthcare Innovation, argues that AI governance in healthcare cannot be an afterthought or a compliance exercise. It must be human-centered from the start, built around transparency, clinical accountability, and the needs of the people affected by AI-driven decisions. The conversation covers governance frameworks, stakeholder engagement, and what non-negotiable principles look like in practice.
Key Insights
AI governance that begins with compliance requirements rather than human impact will always be reactive and insufficient. Healthcare organizations that lead with "what do we need to comply with?" rather than "what does responsible AI look like?" end up with checkbox governance that offers no actual protection.
Transparency in AI governance means far more than publishing a policy document; it requires ongoing communication with every stakeholder affected by AI-driven decisions. Patients need to know when AI is involved in their care. Clinicians need to understand what the system does. Administrators need visibility into outcomes and risks.
Clinical accountability must be designed into governance frameworks, not bolted on after deployment. When algorithms make recommendations that affect patient safety, somebody must be accountable for those recommendations. Governance must clarify that accountability chain.
Topics Explored
The episode covers human-centered AI governance framework design, healthcare innovation leadership, clinical accountability mechanisms, stakeholder engagement strategies in AI policy, and the strategic value of principled AI governance. Discussion includes how to build governance that empowers innovation while protecting patients and clinicians.
About the Guest
Susie Brannigan is VP of Healthcare Innovation with experience leading governance and innovation initiatives across healthcare organizations. Her focus is on ensuring that AI adoption serves the needs of patients and clinicians first.
Questions This Episode Answers
What does human-centered AI governance look like in healthcare?
Human-centered AI governance starts with the people affected by AI decisions, including patients, clinicians, and communities, and designs oversight mechanisms around their needs and rights. It requires governance structures that include clinical voices, not just technical ones, and creates accountability for AI outcomes at every level. Governance must be treated as a living practice rather than a compliance checkbox.
Why is AI governance non-negotiable in healthcare?
Healthcare AI decisions directly affect patient safety, clinical outcomes, and health equity. Without robust governance, AI systems can perpetuate biases, create accountability gaps, and make consequential errors without adequate oversight or recourse. The stakes in healthcare make ungoverned AI deployment ethically indefensible.
How should healthcare organizations structure AI oversight?
AI oversight should be distributed across clinical, technical, ethical, and operational stakeholders rather than siloed within IT or data science teams. Effective structures include AI ethics committees with clinical representation, clear escalation paths for AI-related concerns, and regular audits of AI system performance across patient populations.