An unimaginable amount of data is collected in healthcare, far more than the human brain can grasp. AI is expected to play an essential role in analyzing this medical data. But what are the conditions for a reliable diagnosis?

By: Eline te Velde

To make a diagnosis, a great deal of data is collected about the patient. So much data is now gathered in healthcare that it is no longer comprehensible to the human brain. Dr. Meike Nauta, senior data scientist at Datacation, explains that this represents an excellent opportunity for AI. At the same time, she emphasizes the importance of explainable AI and human oversight.

AI is very suitable for analyzing medical images, says Nauta. “X-rays are still relatively simple. But CT scans are more challenging, because you are dealing with 3D images.” Nevertheless, Datacation has successfully developed an algorithm for the University Medical Center Utrecht (UMC Utrecht) that helps medical specialists detect pancreatic cancer recurrences on CT scans. “At first, the tumor is often easy for a doctor to recognize. But after surgery there is a chance that the tumor will come back, and on a CT scan, scar tissue from the operation and a recurring tumor are difficult to tell apart. AI can support the human eye by recognizing a possible recurrence at an early stage.”

Shortcuts

AI learns to recognize patterns from data and can therefore sometimes make a diagnosis more accurately than a doctor. But there is a danger that the algorithm takes a shortcut: it latches onto a spurious pattern in the data and draws an incorrect conclusion.

Nauta explains this with an example: “For a suspected hip fracture, the patient normally makes an appointment and two X-rays are taken from different angles. But sometimes someone comes in through the emergency department and those standard images cannot be taken. In that case, a single image is taken from a different angle, and the edge of the bed is often visible in it. The model learned to associate the presence of the bed edge with a fracture, because fractures were more common in those images.”
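The bed-edge anecdote can be reproduced on toy data. The sketch below (all data and feature names are invented for illustration; this is not the UMC Utrecht model) trains a small logistic regression where a spurious "bed edge" feature correlates almost perfectly with the label. The model ends up leaning on the spurious feature far more than on the true signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# True but noisy "fracture" signal the model *should* learn from.
fracture_signal = rng.normal(0, 1, n)
labels = (fracture_signal + rng.normal(0, 1.5, n) > 0).astype(float)

# Spurious feature: "bed edge visible". It tracks the label closely,
# because emergency patients (more fractures) are imaged in a bed.
bed_edge = (labels + rng.normal(0, 0.3, n) > 0.5).astype(float)

X = np.column_stack([fracture_signal, bed_edge])

# Minimal logistic regression trained with gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - labels)) / n
    b -= 0.5 * np.mean(p - labels)

# The weight on the spurious bed-edge feature dwarfs the weight
# on the genuine fracture signal: the model took a shortcut.
print("fracture weight:", w[0], " bed-edge weight:", w[1])
```

Nothing in the training procedure distinguishes a genuine cause from a coincidental correlate, which is exactly why such shortcuts only surface once the model is inspected or deployed.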

A model can be checked for shortcuts, but not all of them can be prevented in advance. “Some biases are very obvious, for example the distinction between men and women, or skin color. But often there is a shortcut we did not think of in advance, and that you only discover once the model is running. You want to be able to adjust the model, to perform an operation on the brain of the AI, as it were,” says Nauta.

“You want to be able to adjust the model, perform an operation on the AI's brain, as it were.”

Explainable AI

“AI consists of millions of numbers,” Nauta explains. “As a developer you can see them, but I don't understand what all those numbers mean: I cannot give them human meaning. With explainable AI we can translate those numbers into something we can explain, and then we can understand the model.” She advocates AI models that provide insight into their decisions and their reasoning. Insight into the reasoning is essential for finding shortcuts and adjusting the model where necessary. Without it, the AI becomes a so-called black box, and it is impossible to check which patterns it has learned.
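A minimal illustration of "translating the numbers": for a linear model, each weight times its input is that feature's contribution to the final score, which is one of the simplest forms of explanation. The weights, region names, and intensities below are entirely made up; real explainable-AI methods for deep models (saliency maps, prototypes) are far more involved, but the idea is the same.

```python
import numpy as np

# A trained model is "just numbers": a tiny hypothetical linear scan
# classifier with one learned weight per image region (values invented).
weights = np.array([0.10, -0.05, 2.30, 0.02])
regions = ["background", "bone edge", "scar/tumor area", "bed frame"]

scan = np.array([0.4, 1.2, 0.9, 0.0])  # hypothetical region intensities

# Attribution: each region's contribution to the model's score.
contributions = weights * scan
score = contributions.sum()

# Ranking contributions shows *where* the model is looking.
for name, c in sorted(zip(regions, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {c:+.2f}")
```

If the top contribution came from "bed frame" rather than the tumor region, the shortcut from the previous section would be exposed immediately, which is exactly the kind of check a black-box model does not allow.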

“AI consists of millions of numbers, which you can see as a developer, but I don't understand what all those numbers mean: I cannot give them human meaning.”

In addition, it is fascinating to see what a model has learned. “If we can extract the insights that AI learns, we can learn a lot from them and advance science. But this is only possible if a model is explainable,” says Nauta. The combination of artificial and human intelligence, hybrid intelligence, is very powerful and makes us 'smarter': human intuition, creativity and collaboration seamlessly complement the speed, memory and pattern recognition of artificial intelligence.

Data

Developments in AI are happening very quickly, and it sometimes seems as if it works like magic. But we tend to forget that a model must be fed with data in order to learn. “In the past, a lot of images had to be labeled manually. Suppose you are looking for a tumor: a doctor then had to mark in every image exactly where the tumor was. That data was then used to train the AI. More and more smart tools are being developed, so that hopefully manual labeling will no longer be needed in the future.”

Diagnosing diseases requires a lot of data, and this is not always available. There are now AI methods that generate additional data for a model to learn from: for example, you can ask a model to create scans or photos of an imaginary patient with a certain disease. The exchange of data between hospitals is another important development. “The collaboration between hospitals in the Netherlands is promising. Sharing data in the medical world is of course complicated due to patient privacy, but there are more and more methods to train an AI model on different datasets without the data leaving the hospital. The more knowledge we bundle, the smarter the models become,” says Nauta.
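Training on several hospitals' data without the data leaving each site can be sketched with federated averaging: each site trains locally and only model weights are exchanged and averaged. The example below is a bare-bones illustration with invented data; production systems add many rounds, secure aggregation, and privacy safeguards that this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=50):
    """One hospital trains on its own data; only weights leave the site."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

# Two "hospitals" with private datasets from the same underlying task.
true_w = np.array([1.5, -2.0])
def make_data(n):
    X = rng.normal(size=(n, 2))
    y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)
    return X, y

hospital_a, hospital_b = make_data(200), make_data(200)

w_global = np.zeros(2)
for _ in range(5):  # each round: local training, then averaging
    w_a = local_update(w_global.copy(), *hospital_a)
    w_b = local_update(w_global.copy(), *hospital_b)
    w_global = (w_a + w_b) / 2  # server averages; raw data is never shared

print("global weights:", w_global)
```

The averaged model benefits from both datasets even though neither hospital ever sees the other's patient records, which is the "bundling knowledge" Nauta describes.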
