New radiology AI outperforms OpenAI and Google Gemini


The new large language model can generate reports, detect and localise radiological findings, and hold open-ended conversations about X-rays.


A new radiology-specific language model, Harrison.rad.1, was launched for sector-wide use today and is already outperforming models from OpenAI and Google, according to its creator, Harrison.ai.

The model is being made available to selected healthcare professionals, industry partners and regulators across the globe.

The dialogue-based tool offers functions including report generation, detection and localisation of radiological findings, and open-ended conversation about X-rays.

Its reports can provide reasoning based on clinical patient history and context.

According to Harrison.ai, the model’s priorities are accuracy and clinical safety.

The AI tech company has already successfully launched its radiology solution Annalise.ai, which has been approved for clinical use in more than 40 countries.

The new product is set apart from others in the space as it was trained on real-world clinical data from millions of radiology images, reports and studies.

“The dataset is further annotated at scale by a large team of medical specialists to provide Harrison.rad.1 with clinically accurate training signals,” said the team.

“This makes it the most capable specialised vision language model to date in radiology.”

Harrison.ai said the “highly regulated nature of healthcare” had limited the incorporation of AI to date, but that this model changes the game.

Harrison.ai co-founder and CEO Dr Aengus Tran said the new model was built on solid foundations.

“AI’s promise rests on its foundations – the quality of the data, rigour of its modelling and its ethical development and use,” he said.

“Based on these parameters, the Harrison.rad.1 model is groundbreaking.”


Dr Tran said the model was already scoring twice as high as other major large language models on the Fellowship of the Royal College of Radiologists (FRCR) 2B exam.

According to the company, only 40-59% of human radiologists pass the FRCR exam on the first attempt.

“Harrison.rad.1 performed on par with accredited and experienced radiologists at 51.4 out of 60, while other competing models such as OpenAI’s GPT-4o, Anthropic’s Claude-3.5-sonnet, Google’s Gemini-1.5 Pro and Microsoft’s LLaVA-Med scored below 30 on average,” said Harrison.ai.

“Additionally, when assessing Harrison.rad.1 using the VQA-Rad benchmark, a dataset of clinically generated visual questions and answers on radiological images, Harrison.rad.1 achieved an impressive 82% accuracy on closed questions, outperforming other leading foundational models.

“Similarly, when evaluated on RadBench, a comprehensive and clinically relevant open-source dataset developed by Harrison.ai, the model achieved an accuracy of 73%, the highest among its peers.”
