
In the previous post, we discussed the ways AI can assist healthcare professionals in reviewing colonoscopy videos. Deep learning networks, a significant breakthrough in medical imaging, can analyse colonoscopy videos with remarkable precision, identifying polyps and adenomas that might otherwise be missed. But how exactly does AI detect polyps and adenomas in colonoscopy videos? In this post, we explain how AI models, and in particular deep learning networks, reach their “decision” when asked to answer questions such as “Is there a polyp in this colonoscopy video?” or to perform tasks such as “Identify the polyp in this image”. Let’s dive into the fascinating world of AI and medical imaging to find out.

Imagine a child learning how to ride a bike. It all starts with exposure to examples and guidance: the child learns by observing others and receiving instructions. Then it’s time to practise on their own and receive feedback: the child refines their technique through repeated attempts and falls. Over time, the child achieves proficiency and autonomy and can apply the learned skill independently, riding smoothly.

Learning in AI parallels the child’s learning in several ways. The AI model is exposed to examples and guidance in the form of labelled data, where the input can be an image, a video, or text, and the output (the label) is the answer to the question or task. During training, the model practises and incorporates feedback by adjusting its parameters based on the errors it makes compared to the given labels. Finally, when the model achieves satisfactory performance, it can be deployed in real-world settings and perform the task it was trained for consistently and reliably.
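To make this parallel concrete, here is a minimal sketch of a single feedback step, written in Python with PyTorch purely as an assumed framework; the toy model, frame size, and label are illustrative and not the models used in practice.

```python
import torch
import torch.nn as nn

# Toy stand-in for a polyp-detection model (assumed architecture, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

frame = torch.rand(1, 3, 64, 64)   # the example: one (dummy) video frame
label = torch.tensor([[1.0]])      # the guidance: expert annotation, 1 = polyp present

prediction = model(frame)          # the model attempts the task
loss = loss_fn(prediction, label)  # the feedback: how far off was the prediction?
loss.backward()                    # trace the error back to every parameter
optimizer.step()                   # adjust the parameters slightly to reduce the error
optimizer.zero_grad()              # reset, ready for the next example
```

Repeating this step over many labelled examples is, in essence, the “practice and feedback” phase of training.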

To explain how a deep learning network learns and performs different tasks, it is important to understand what deep learning networks are. Deep learning is a type of machine learning that involves neural networks with many layers—hence the term “deep.” These networks are designed to mimic the human brain’s structure and function, learning from vast amounts of data to recognize patterns and make decisions.
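As a rough illustration of what “many layers” looks like in code, the sketch below stacks a handful of simple layers (again assuming PyTorch; real medical imaging networks are far larger, but the idea of depth through stacking is the same):

```python
import torch.nn as nn

# A deliberately small "deep" network: the data flows through a stack of layers,
# each one transforming the output of the previous layer. Depth = number of stacked layers.
deep_network = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),  # layer 1
    nn.Linear(64, 32), nn.ReLU(),   # layer 2
    nn.Linear(32, 16), nn.ReLU(),   # layer 3
    nn.Linear(16, 1),               # output layer: a single decision score
)
```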

In the context of colonoscopy, deep learning networks are trained to analyse video footage and identify signs of polyps and adenomas. The process begins with training the network on a large dataset of colonoscopy videos. These videos are annotated by experts, who mark the presence of polyps and adenomas, and this annotated data serves as the learning foundation for the network. By processing thousands of examples, the network learns to distinguish between normal and abnormal tissue based on features such as shape, colour, and texture. To achieve this, the best-performing models employ mathematical operations called convolutions. Networks built around these operations, Convolutional Neural Networks (CNNs), are a type of deep learning network that is particularly effective for image and video analysis. CNNs consist of multiple layers, including convolutional layers, pooling layers, and fully connected layers. Here’s a simplified breakdown of how CNNs detect polyps and adenomas:
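For a sense of what annotated data can look like in code, here is a hypothetical sketch (the class name, data layout, and labels are assumptions for illustration, not the actual ONCOSCREEN dataset): each video frame is paired with an expert-provided label.

```python
import torch
from torch.utils.data import Dataset

class AnnotatedFrameDataset(Dataset):
    """Hypothetical dataset pairing colonoscopy video frames with expert labels."""

    def __init__(self, frames, labels):
        # frames: list of tensors with shape (3, height, width), one per video frame
        # labels: list of 0/1 flags from expert annotation (1 = polyp/adenoma visible)
        self.frames = frames
        self.labels = labels

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        return self.frames[idx], torch.tensor(self.labels[idx], dtype=torch.float32)
```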

First, there are the convolutional layers, which apply filters to the input video frames, creating feature maps that highlight specific visual features. For example, a filter might highlight edges or textures that are characteristic of polyps. After each convolutional layer, a pooling layer reduces the dimensionality of the feature maps, retaining the most important information while reducing computational complexity. This helps the network focus on the most relevant features. The combination of convolutional and pooling layers is repeated several times in the model, producing feature maps of increasing abstraction and complexity. Finally, at the end of the model come the fully connected layers. Here the network combines all the extracted features to make a final prediction, analysing the relationships between the different features to determine whether a polyp or adenoma is present.
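Putting the three layer types together, a minimal CNN classifier could look like the sketch below. This is a simplified, assumed architecture for illustration; real polyp detectors are much deeper and typically also localise the finding within the frame.

```python
import torch
import torch.nn as nn

class TinyPolypCNN(nn.Module):
    """Illustrative CNN: convolution and pooling repeated, then fully connected layers."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # filters highlight edges and textures
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling keeps the strongest responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper filters capture more abstract shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),  # fully connected layers combine the extracted features
            nn.ReLU(),
            nn.Linear(64, 1),             # single score: evidence that a polyp/adenoma is present
        )

    def forward(self, frame):
        return self.classifier(self.features(frame))

# For a 64x64 input frame, the two pooling steps reduce the feature maps to 16x16.
model = TinyPolypCNN()
score = model(torch.rand(1, 3, 64, 64))
```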

Once trained, the deep learning network can analyse colonoscopy videos in real time. As the colonoscope moves through the colon, the network processes each video frame, continuously scanning for signs of polyps and adenomas. When it detects a potential abnormality, it highlights the area on the screen, alerting the endoscopist to examine it more closely.
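Conceptually, real-time use simply means running the trained model on every frame as it arrives. A minimal sketch of that loop is shown below; the frame source, the 0.5 threshold, and the highlight_on_screen function are assumptions for illustration rather than part of any real product.

```python
import torch

ALERT_THRESHOLD = 0.5  # assumed cut-off; in practice this is tuned during validation

def highlight_on_screen(frame, probability):
    # Hypothetical stand-in for the on-screen overlay that alerts the endoscopist.
    print(f"Possible polyp detected (confidence {probability:.2f}), please inspect.")

@torch.no_grad()  # inference only: no parameters are updated during the procedure
def analyse_stream(model, frame_stream):
    model.eval()
    for frame in frame_stream:  # one tensor of shape (1, 3, H, W) per incoming video frame
        probability = torch.sigmoid(model(frame)).item()
        if probability >= ALERT_THRESHOLD:
            highlight_on_screen(frame, probability)
```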

Understanding how a deep neural network, or any AI model, is trained is crucial for trustworthiness, because it enables transparency and accountability in the model’s decision-making. We hope that after reading this article you have a clearer picture of how AI models operate and feel more confident about their integration into medical applications, where accuracy and reliability are of the utmost importance.

In ONCOSCREEN, AINIGMA Technologies is delivering polyp detection AI models to enrich the ONCO-AICO educational platform. Feel free to contact them with any questions on how these models work or other applications of AI in healthcare.