Transcriber: Gisele Cristina Ribeiro Reviewer: Lisa Thompson

Artificial intelligence has often been depicted as villainous robots ready to take over the world. But I’m here to tell you that AI can actually save lives and improve health care for millions of patients around the world. AI is helping us personalize the delivery of care, make hospitals more efficient, and improve access to health care by providing accurate decision-making tools.
AI is the process of educating a computer model using complex and large data sets. The model learns from this data in a training process to build its ability to make decisions or predict outcomes when presented with new data. We are talking about having access to a computer model that knows, based on the experience of thousands of other patients, whether a treatment is likely to work and what works best for that patient based on their individual conditions.
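The train-then-predict cycle described above can be illustrated with a minimal sketch. The data, features, and nearest-neighbour rule here are all hypothetical toys, not the speaker’s actual models: the point is only that the model learns from past patients and is then asked about a new one it has never seen.

```python
# Toy training set: each record is (blood_marker, tumour_size_mm) paired with
# whether a hypothetical treatment worked for that past patient.
training_data = [
    ((1.2, 8.0), "responded"),
    ((1.1, 6.5), "responded"),
    ((3.4, 22.0), "did_not_respond"),
    ((3.9, 25.0), "did_not_respond"),
]

def predict(new_patient):
    """1-nearest-neighbour: copy the outcome of the most similar past patient."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda rec: dist(rec[0], new_patient))
    return nearest[1]

print(predict((1.0, 7.0)))   # resembles the patients who responded
```

Real clinical models are of course far larger and are validated rigorously, but the shape is the same: historical data in, a decision rule out.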
No two of you in this room or, in fact, anywhere in the world are alike. But AI models are helping our doctors learn from patients with similar conditions or even similar genetic information and make highly informed decisions about their diagnosis and their treatment options. I want to talk about how we are starting to use AI for delivering care to cancer patients.
Cancer diagnosis can be immensely complicated, both for the doctors in making decisions about diagnosing a primary or secondary cancer, as well as for the patients, in understanding the risks and success rates of the treatment options. But we are developing AI models that can help streamline this process by taking information from a number of sources. This involves feeding an AI model data from the patient’s blood tests, X-ray images of the suspected lesions, as well as genetic information from a tissue biopsy.
The trained AI model can rapidly consolidate this information and provide highly accurate predictions of the patient’s diagnosis, the treatment options most likely to succeed, as well as the prognosis. Let’s talk about Peter, who is a cancer patient. He’s gone through comprehensive clinical assessment, imaging and various other diagnostic workups, but not even the best doctors in town can tell him where his cancer’s primary site is, meaning he can’t get a treatment specific for his cancer, and his chances of surviving another five years are less than ten percent.
But our team right here in Brisbane has developed a tool using AI and patients’ genetic information that can accurately identify Peter’s primary cancer site and empower doctors to give Peter a treatment that we know is going to work for him. These types of models can be scaled up to deliver precision health care. This means using an AI model to understand whether a certain population is more susceptible to a certain disease and whether they would respond more favorably to certain health care interventions.
AI is giving us the ability to have a much more refined and detailed understanding of human health than we’ve ever had before. But there is a catch to the immense promise of AI being implemented into routine clinical practice. Our existing regulatory frameworks aren’t designed for AI software intended for diagnosing, treating or managing disease, also known as AI-based software as a medical device.
They are designed for physical medical devices, like surgical implants, or for most software, which has the same output every time the patient or clinician uses it. Traditional software is static, in the sense that the developers release a version of the software and, no matter how many times you use it, it will always produce the same output for the same data. AI software, on the other hand, behaves completely differently to most software in health care because of its intrinsic ability to learn and evolve over time, ideally becoming more intelligent as suited to the environment it’s being used in.
Our existing regulatory frameworks rely on the static and reproducible nature of software to prove that it is safe to be implemented into routine clinical practice. So our regulatory authorities’ solution has been to lock the learning potential of these algorithms before they are implemented into clinical practice. This means that the model can no longer learn from its environment and new data, which limits its potential to improve its functionality or its accuracy, you know, the whole point of AI.
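What “locking” does to a learning system can be sketched in a few lines. The model below is a hypothetical toy (a threshold learned as a running mean), not any real regulated device: the locked copy keeps its approval-time behaviour even as new data arrives, while the adaptive copy continues to learn.

```python
import copy

class ThresholdModel:
    """Toy model: flags a test value as abnormal above a learned threshold."""
    def __init__(self):
        self.threshold = 0.0
        self._values = []

    def learn(self, value):
        # Running mean of everything seen so far stands in for "training".
        self._values.append(value)
        self.threshold = sum(self._values) / len(self._values)

    def is_abnormal(self, value):
        return value > self.threshold

adaptive = ThresholdModel()
for v in [1.0, 2.0, 3.0]:
    adaptive.learn(v)

locked = copy.deepcopy(adaptive)   # the regulatory "lock": no further learning

for v in [8.0, 9.0]:               # newer, more up-to-date data
    adaptive.learn(v)

print(locked.threshold, adaptive.threshold)   # the locked copy stays at 2.0
```

The locked copy is easier to certify precisely because it never changes, which is the trade-off the talk is describing.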
And, at times, this can even be harmful for the patients, because the AI model is no longer trained on the most up-to-date data and can potentially lead to a wrong diagnosis. But the good news is that there are emerging regulatory frameworks being proposed that, if implemented right, can be a game changer. Our regulatory authorities are proposing more transparent reporting mechanisms, so that developers can disclose how their models would learn and evolve over time.
And this will be combined with ongoing, real-time monitoring to make sure that the predicted changes actually occur and that the software adapts to make more accurate predictions and improve health care outcomes. We also need to make sure that the training data used for these algorithms are representative of the entire human population. Let’s look at a mobile-based diagnostic software that we are developing right here in Brisbane that uses AI to detect skin cancer from the images that you’ve taken on your iPhone.
If this model has been trained on a predominantly Caucasian population, how well do you think it would do on an African American or an Asian patient? Our AI developers have a huge responsibility to guard against data bias and to make sure that their models are trained on diverse and robust data sets, representative of the entire population, you know, not just white males. But at times, we understand that this is not entirely possible.
Skin cancer does, in fact, disproportionately affect the Caucasian population because of genetic differences, and, as a result, there are much larger data sets available for those patients. But this means that we need to build into our AI models a functionality so that, for low-confidence results, for an Asian patient, for example, the model is capable of saying “I don’t know” or “This is my best guess based on a skewed training population.” But, unfortunately, this functionality doesn’t exist yet, and it urgently needs to be mandated by our regulators.
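The “I don’t know” behaviour described above is sometimes called abstention, and a minimal sketch of it is straightforward. The threshold, labels, and probabilities here are hypothetical: the model only commits to a diagnosis when its confidence clears a bar, and otherwise abstains and flags that the patient may be under-represented in the training data.

```python
# Hypothetical confidence bar below which the model refuses to commit.
CONFIDENCE_THRESHOLD = 0.85

def report(label, probability):
    """Return a committed prediction or an explicit abstention."""
    if probability >= CONFIDENCE_THRESHOLD:
        return f"Prediction: {label} (confidence {probability:.0%})"
    return (f"I don't know. Best guess: {label} ({probability:.0%}), "
            "but this patient may be under-represented in the training data.")

print(report("melanoma", 0.97))
print(report("melanoma", 0.55))
```

The hard part in practice is not the threshold itself but calibrating the probabilities so that they honestly reflect how far a new patient sits from the training population.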
To successfully implement AI in health care, we need to establish new regulatory frameworks in consultation with AI developers, health care practitioners, policy advisers, as well as the patients themselves, to bring the best out of AI. Improved regulatory frameworks can ensure that diverse and robust tools are developed that are compliant and adaptive and can serve the whole population equally. If we get this right, we can transform the delivery of health care, promoting personalized health and well-being advice.
I’m excited to be at the forefront of translating this amazing technology into health care and using it to improve millions of lives around the world.