The adoption of artificial intelligence (AI) technologies across the Australian health and aged care sector presents enormous promise, but it also brings significant responsibility for boards.
As a continuation of what I wrote recently about risk literacy and director capability, it’s time we turn our attention to how executive teams and boards work together to strengthen governance around AI.
AI is already influencing clinical diagnostics, streamlining administrative tasks and enabling more culturally safe, patient-centred care across both public and private systems. But with this comes a complex risk landscape. And that risk must be proactively governed by those at the board table, many of whom may not yet fully understand the capabilities or consequences of AI.
The strategic governance challenge
Boards are ultimately accountable for the responsible use of AI across their organisations.
The interplay between innovation and regulation is becoming a defining feature of contemporary governance. With Australian regulation shifting and further changes expected for high-risk settings, it has become evident that technology has outpaced policy and oversight structures, creating unknown or unaddressed risks at the board table.
An important dimension that boards and their Audit and Risk Committee (ARC) need to appreciate is that AI in healthcare is not a “set and forget” system.
AI models, even those performing optimally at the outset, can drift over time, silently moving off-course and potentially leading to unintended clinical, ethical, or operational outcomes. Regular monitoring, validation, and adjustment of AI systems are therefore critical to ensuring ongoing accuracy and alignment with clinical standards and ethical expectations.
Boards should consider:
- How AI tools are procured, validated, and continuously monitored, especially in public sector contexts where additional data ownership and procurement scrutiny applies;
- Whether internal systems are capable of identifying emerging AI risks, including escalation pathways and the appointment of an AI or digital governance lead;
- How ethical, cultural, and clinical risks are managed in real time, with AI itself used to audit and enhance oversight;
- Whether your board has the right skills mix to provide stewardship over innovation, AI, and data governance.
Without strong governance, organisations face reputational damage, legal exposure, ethical breaches, and workforce disengagement, each a material board-level risk.
What should boards and ARCs do?
Integrate AI into risk governance structures:
AI-specific risks must be captured either as standalone risks or embedded in existing monitored risks within the corporate risk register, clearly categorised by strategic, compliance, operational, reputational, and financial dimensions.
The Audit and Risk Committee should have defined oversight, supported by directors or advisors with relevant digital or AI governance experience.
Too often, boards lack the right skill mix in digital literacy, technology, and AI, or rely on a single person for that expertise. If this expertise is not currently available, consider how you might establish an advisory function to support responsible AI.
Internal audit functions should routinely test the effectiveness of AI systems, data handling, and regulatory compliance. Notably, the audit process itself is now often supported by AI tools.
Adopt a responsible AI framework:
Boards should work with their management or executive team to develop or adopt a Responsible AI Governance Framework, consistent with evolving national and international guidance (e.g., Australian Privacy Principles, OAIC’s guidance on AI, and the proposed Commonwealth guardrails for high-risk health settings).
This framework should guide decision-making on procurement, implementation, and continuous monitoring, particularly concerning:
- Vulnerable and priority populations, including Indigenous Data Sovereignty;
- Transparency and explainability of AI tools;
- Consent and data privacy;
- Ethical considerations in care delivery.
Build board and workforce preparedness:
AI is not merely a technological evolution; it represents a significant cultural shift toward data-driven insight and decision-making.
Boards should support executive teams in preparing the workforce through targeted education, change management, and transition planning.
Just as social media changed how we communicate with customers and consumers, and cybersecurity demanded rapid upskilling, AI calls for a comparable organisational shift.
Likewise, directors must commit to ongoing professional development to remain informed about risks, opportunities, and governance responsibilities.
Engaging external expertise can also strengthen the board’s capabilities; for example, I have advised boards on implementing AI by conducting pilot projects using high-quality data sets in low-risk settings, thus building internal capacity and confidence.
The Australian context
In 2024 alone, we have seen a marked increase in AI reviews and consultations, including:
- Senate Select Committee: Adopting Artificial Intelligence in Australia;
- DISR: Consultation on mandatory AI guardrails;
- DoHAC: Safe and Responsible AI in Health Care interim response;
- TGA: Clarifying and strengthening AI regulation;
- A review of privacy and surveillance frameworks.
New AI use cases are emerging across LHDs, PHNs and private providers. These developments signify a critical inflection point.
Boards that adopt a proactive governance approach, continuously monitor AI performance, and invest in internal capacity building will be best positioned to harness AI’s value safely and ethically.
Conversely, those who are reactive may find themselves vulnerable, facing not only risks but also a loss of clinical, consumer, and community trust.
Key questions for health directors to ask:
- Is AI captured in your organisation’s risk register?
- Do you have a data or AI governance lead at the board or executive level?
- Does your ARC include or consult digital and AI expertise?
- Are your procurement and assurance processes assessing AI safety, fairness and transparency?
- Is your workforce AI-ready, and is your board?
- Have you explored basic AI education tools (e.g. AICD webinars, trusted podcasts, audiobooks)?
Final reflections
AI presents a generational opportunity to advance healthcare quality, safety, and efficiency, helping to counteract the growing challenge of delivering affordable healthcare.
The potential benefits must be matched by robust, strategic, and values-aligned governance.
Whether overseeing public hospitals, NGO health services or private health entities, board directors have a critical role in establishing the parameters and safeguards within the organisation.
This moment calls for more than awareness; it demands accountability and active governance of AI at the board table.
Dr Emily Kirkpatrick is managing director of EKology Group and senior clinical lecturer at the Australian Institute for Machine Learning.
This article was first published on LinkedIn. You can read the original here.