Call for an urgent ‘pause’ on generative AI use



Australian policy is ill-equipped to support AI in healthcare, experts say.


Generative AI should not be used in clinical settings until more work is done to ensure it is safe from both a privacy and a patient health perspective, one of Australia’s leading AI experts has warned.

Professor Enrico Coiera, professor of medical informatics at Macquarie University and director of the Australian Alliance for AI in Healthcare, told Health Services Daily there were still too many unanswered questions around the use of generative AI tools like ChatGPT.

He believes Australia’s healthcare sector should hit a collective “pause button” on its use for now.

“We don’t know if it’s safe or not. It might be fantastic,” he said.

“But somebody’s got to test it to make sure it is. Because if it’s not, it may give you the wrong information. Usually the wrong information will get ignored, the doctor will know better.

“But you can imagine that eventually something would come that would harm a patient.”

Currently, data entered into generative AI tools like ChatGPT is sent to overseas servers, where it can be shared without regulation. Professor Coiera said one solution might be the development of a secure Australia-based server that could safeguard patients’ privacy.

And using the technology to make clinical decisions about a patient’s care was the other major unknown, he said.

“I think a pause is a good point, until we understand the privacy and safety concerns, until we have some clear remedies for those,” he said.

“I’m not saying they’re bad. I’m not saying we shouldn’t use it. It’s very easy for these two messages to get confused. These are very exciting technologies, they will bring benefits, we just need to do it in a rational way.”

Professor Coiera pointed out that other AI technologies, such as those embedded in medical devices, undergo stringent testing and regulation by the Therapeutic Goods Administration (TGA) before being approved for clinical use.

“There’s a standard we apply for everything else we do and we should apply that standard here – it’s pretty simple,” he told HSD.

Writing in a Perspective in this week’s Medical Journal of Australia (MJA), Professor Coiera and colleagues warned that it is the “unintended consequences of these technologies that we are truly unprepared for”.

“It was hard to imagine in the early innocent days of social media, which brought us the Arab Spring, just how quickly it would be weaponised,” they wrote.

“Algorithmic manipulation has turned social media into a tool for propagating false information, enough to swing the results of elections, create a global antivaccination movement, and fashion echo chambers that increasingly polarise society and mute real discourse.”

And the fallout from the release of ChatGPT was just as swift.

“Within two months of the release of ChatGPT, scientific journals were forced to issue policies on ‘non-human authors’ and whether AI can be used to help write articles,” the authors wrote.

“Universities and schools have banned its use in classrooms and educators scramble for new ways to assess students, including returning to pen and paper in exams.

“ChatGPT is apparently performing surprisingly well on questions found in medical exams. The major unintended consequences of generative models are still to be revealed.”

While the rise of artificial intelligence seems to be an inescapable part of the 21st century, the authors said the Australian healthcare system was inadequately prepared for it.  

“With AI’s many opportunities and risks, one would think the national gaze would be firmly fixed on it,” they wrote.

“However, Australia lags most developed nations in its engagement with AI in health care and has done so for many years.”

In a healthcare setting, the role of generative AI has begun to take shape. For example, Microsoft and Epic Systems have partnered to use OpenAI’s GPT-4 language model to draft responses from health professionals to patient messages, and to analyse and identify trends in medical records.
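To make that drafting workflow concrete, the sketch below shows roughly what such an integration could look like using OpenAI’s public Python client. It is illustrative only: Epic’s actual integration is proprietary, and the model name, prompts and review step shown here are assumptions, not details from the partnership.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Hypothetical patient-portal message. In a real deployment this text would
# leave the clinic for an overseas server, which is exactly the privacy
# concern Professor Coiera raises above.
patient_message = "My blood pressure readings this week were around 150/95. Should I be worried?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a reply to a patient portal message. A clinician "
                "will review and edit the draft before anything is sent."
            ),
        },
        {"role": "user", "content": patient_message},
    ],
)

draft_reply = response.choices[0].message.content  # a draft for review, not advice
print(draft_reply)

The key design point is that the model output is only a draft: a clinician reviews and edits it before it reaches the patient, keeping a human in the loop while the safety questions raised above remain open.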


The MJA authors noted that policy action to date has been rudimentary and “low priority”, with Australia having no “national framework for an AI-ready workforce, or overall regulation of safety, industry development or targeted research investment”.

Further, they believe current policy focuses on “limited safety regulation of AI embedded in clinical devices and avoidance of general-purpose technologies such as ChatGPT”. This may be illustrated by the halving of funding for AI in the recent federal budget.  

The authors emphasised the need for policies which facilitate the production and modification of AI in Australia, rather than merely adopting technology produced overseas. Without such policies, the “nation is exposed to new risks and misses one of the most significant industrial revolutions of our times”.  

“Australia’s $1.4 billion clinical trials sector will face international competition from those who use AI to identify, enrol and monitor patients more effectively and at a lower cost. Our health response to climate change will depend heavily on digital health and AI for mitigation and adaptation”, the authors wrote.

“Further, AI requires local customisation to support local practices and reflect diverse populations or health service differences. Without local capability, paying to modify clinical AI will likely become a huge burden on our health system.”  

The RACGP said in a statement to HSD that “GPs must be involved in the development and integration of AI-based solutions in primary care, to ensure that solutions are fit for purpose”, and “encourages efforts to ensure that the sector is appropriately regulated”.

The body said it was “committed to support GPs to develop the skills needed to work with AI as is required”.  

The AMA has advised “caution with this new technology in the healthcare space”, underlining that “ethical principles and appropriate mechanisms will have to apply”.

A spokesperson for the Medical Board of Australia has stated that the use of generative AI is “an area regulators will need to consider into the future”.  

Neither the Department of Health and Aged Care nor the Australian Digital Health Agency (ADHA) has a formal position on generative AI so far. However, the ADHA has indicated it is staying close to the AI discussion led by the Department of Industry, Science and Resources (DISR) and the Digital Transformation Agency (DTA) under Minister Ed Husic, and is keen to start working with industry to help bridge the gap between policy and the implementation of AI applications in the health sector.

Funding for AI development under DISR was halved in May’s federal budget.

The Australian Alliance for AI in Healthcare – comprising more than 100 organisations across academia, industry, peak bodies and health service providers – has produced a roadmap for the future of AI in healthcare.

The highest community priority identified is for AI to be “safe for patients and developed and used ethically”.  

The roadmap proposes that a National AI Healthcare Strategy be developed within the next three years to “provide strategic and national governance and leadership”.

Other recommendations include developing accredited training programs for specialist AI health professionals, establishing a program to ensure Australia can “take advantage of AI to manage future crises”, and supporting the creation and translation of new technologies through existing funding mechanisms.
