Goldilocks and the three healthcare AI regulation reviews



If only making AI regulation for healthcare were as easy as making the porridge ‘just right’


Submissions to the Department of Industry, Science and Resources (DISR) consultation on Safe and responsible AI in Australia from the AMA and the MSIA this week demonstrate just how confusing and problematic the regulation of AI in healthcare might become over the next couple of years.

Not the fault of either organisation.

Both are doing their best in what must be a pretty confusing and demanding period, with key healthcare stakeholder groups having to provide rapid-fire input into three concurrent AI consultation processes run by three different government departments – this week the DISR (as above), and next week the Department of Health and Aged Care (Safe and Responsible Artificial Intelligence in Health Care Legislation and Regulation Review) and the TGA with Clarifying and strengthening the regulation of Artificial Intelligence.

It’s not clear how much each department is sharing notes, although it’s very clear that, as far as healthcare is concerned, the overlap between the consultations is substantial.

While the AMA and the MSIA come from quite different stakeholder positions in healthcare, both were in sync on the idea that healthcare, being a uniquely complex and high-risk sector, is going to need a lot of thinking beyond the more generic, industry-wide ideas on regulating AI in Australia that the DISR has been seeking advice on.

In this respect, submissions to DoHAC’s Healthcare AI and Legislation review are due next Monday, and TGA AI regulation submissions are due on the same day.

The AMA’s major recommendation to the DISR review is the creation of a dedicated government body to oversee AI regulation in healthcare.

“The AMA calls upon the government to implement dedicated mandatory AI standards for healthcare, and to establish a dedicated governance body consisting of practising clinicians, medical professionals, consumers and technology developers for the active oversight of AI application in healthcare,” says the AMA submission.

While the MSIA didn’t go as far as recommending a dedicated new body in its submission, a key argument it makes is that regulation of AI in healthcare will be most efficiently developed by carefully considering it in the context of the sector’s existing detailed clinical and risk governance frameworks. In other words, like the AMA, the MSIA feels that the DISR and the government need to view healthcare as a special case when it comes to AI.

The AMA submission states that “significant existing regulation in healthcare provides a strong framework to embed new provisions for the challenges of AI in healthcare and ensure patient safety, privacy and ethical standards”.

The MSIA put it a little more directly.

“Healthcare is a special case,” it says in its submission.

“It is more mature than other industries in its approach to the responsible use of AI in Health. This is because it has been balancing risks, including the adoption or non-adoption of technology which has been such a predominant force for positive change to healthcare outcomes.” 

In the MSIA submission, CEO Emma Hossack warns the DISR about the dangers of healthcare AI regulation being framed by some of the broader generic regulatory buckets being developed for the technology, both in Australia and overseas.

She points to how easily regulation around general-purpose AI can create perverse outcomes if it isn’t developed with the nuance needed to deal with the complexities of healthcare.

“A technology agnostic approach is essential,” she says.

 “There may be GPAI which will have inestimable benefits in some applications, such as triaging children in emergency wards to expedite testing, but which could be equally successful in selecting substances for nefarious purposes.”

Ms Hossack refers the DISR to the important work that is being done by the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH), which following extensive consultation produced the National Policy Roadmap for AI in Healthcare.

The key elements of the DISR consultation include a proposed definition of high-risk AI, ten proposed mandatory guardrails and three regulatory options for mandating those guardrails.

The three regulatory approaches suggested by the DISR are: 

  • Option 1: Adopting the guardrails within existing regulatory frameworks as needed
  • Option 2: Introducing new framework legislation to adapt existing regulatory frameworks across the economy
  • Option 3: Introducing a new cross-economy AI-specific law (for example, an Australian AI Act).

Neither the MSIA nor the AMA believes the 10 suggested DISR guardrails are fit for purpose in the case of healthcare, based on their assessment that healthcare is a unique sector when it comes to weighing AI risk against the enormous productivity potential the technology holds for the sector.

AI looks like it can make an enormous short-term impact in helping solve our worsening healthcare workforce crisis if managed correctly.

The MSIA does, however, pick a favoured regulatory approach of the three suggested by the DISR, which Ms Hossack quaintly frames by referencing Goldilocks and the three bears.

“The key is to ensure that the regulation has the ‘Goldilocks’ recipe – not too hard, not too soft – but just right,” she says.

“As usual, the devil is in the detail. Current technical, professional, legal, and administrative regulation has served Australia well. [But] the added complexity of specific AI regulation needs consideration in this context.”

Ms Hossack’s not too hot, not too cold AI regulatory porridge option is number two above (right in the middle just like baby bear): introducing new framework legislation to adapt existing regulatory frameworks across the economy.

She says option number one won’t work because it only extends “the existing regimes of specific frameworks [which] does not provide the clarity which consumers rightly require and puts the onus of compliance with complex regulation on the parties most familiar with the detail”.

She says option three is confusing. 

“It purports to maintain existing regimes and carve them out, but that in and of itself would make it complex for say a health consumer which would then need to navigate the TGA or other requirements. A one size fits all solution doesn’t work because all the issues are different and nuanced so their solution must reflect that,” says Ms Hossack.

Confusing?

Three different AI regulatory consultations by three different government departments, all on the same timeline for consultation and reporting, which so far don’t seem to be giving anyone any hints on how they might compare notes between their reviews.

At least Ms Hossack has reduced her arguments down to the very simple process of making porridge, “just right”.
