ADAPTIVE MULTI-SCALE EDGE DETECTION USING GRAPH-BASED LEARNING FOR ENHANCED MEDICAL IMAGING
Abstract
Accurate edge detection in medical imaging, particularly in COVID-19 CT scans, is critical for diagnosing infections and assessing their severity. Traditional edge detection methods often struggle with the complexity and variability of medical image data, leading to less reliable diagnostic outcomes. This research introduces an adaptive graph-based multi-scale edge detection model that significantly improves the accuracy and reliability of edge delineation in COVID-19 CT scans. The proposed model employs Adaptive Graph Convolutional Networks (AGCNs) integrated with a dynamic multi-scale analysis approach, allowing it to process image data across multiple scales and to capture both global structures and fine details. The graph-based approach dynamically adjusts connectivity and edge weights according to the local context of the image features, enabling superior detection of subtle and complex patterns. Experimental results show that the proposed model achieves an accuracy of 0.995, a precision of 0.920, a recall of 0.970, and a Dice Similarity Coefficient (DSC) of 0.945. These metrics significantly surpass those of traditional combinations such as Sobel + GCN and Canny + GCN, and even outperform the advanced CDSE-UNet model referenced in earlier studies.
Qualitatively, the edges detected by the proposed model are sharper and more consistent with the clinical annotations provided by medical experts. The novelty of this research lies in its integration of graph-based learning with a multi-scale approach tailored specifically to medical imaging, providing a robust framework that adapts to varying imaging conditions and requirements. This model not only sets a new standard for medical image edge detection but also opens avenues for further research and application in other areas of medical imaging.
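The two core ideas named above — graph convolutions whose edge weights adapt to local feature similarity, and edge maps aggregated across multiple scales — can be illustrated with a minimal sketch. This is not the paper's AGCN implementation (the abstract does not specify it); the Gaussian similarity kernel, the 4-neighbourhood pixel graph, and the box-filter smoothing used here are illustrative assumptions only.

```python
import numpy as np


def adaptive_graph_conv(features, coords, sigma=1.0):
    """One illustrative adaptive graph-convolution step on a pixel graph.

    Edge weights are recomputed from pairwise feature similarity
    (Gaussian kernel), so the effective connectivity adapts to the
    local image content rather than being fixed in advance.
    features: (n, f) per-node feature vectors; coords: (n, 2) grid positions.
    """
    diff = features[:, None, :] - features[None, :, :]
    w = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    # Restrict edges to the 4-neighbourhood of the pixel grid
    # (Manhattan distance 1); all other weights are zeroed.
    d = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)
    w = np.where(d == 1, w, 0.0)
    # Row-normalise the weights and aggregate neighbour features.
    deg = w.sum(axis=1, keepdims=True) + 1e-8
    return (w / deg) @ features


def multiscale_edges(img, scales=(1, 2)):
    """Combine gradient-magnitude edge maps computed at several
    smoothing scales (simple box filters stand in for the model's
    multi-scale analysis); the per-pixel maximum fuses the scales."""
    maps = []
    for s in scales:
        k = 2 * s + 1
        pad = np.pad(img, s, mode="edge")
        sm = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                sm[i, j] = pad[i:i + k, j:j + k].mean()
        gy, gx = np.gradient(sm)
        maps.append(np.hypot(gx, gy))
    return np.maximum.reduce(maps)
```

In this toy setting, a fine scale preserves thin boundaries while a coarse scale suppresses noise, and the content-dependent graph weights smooth within regions more than across edges — the same qualitative behaviour the abstract attributes to the full model.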