Optimizing Lung Cancer Detection in CT Imaging: A Wavelet Multi-Layer Perceptron (WMLP) Approach Enhanced by Dragonfly Algorithm (DA)

Abstract

Lung cancer stands as the preeminent cause of cancer-related mortality globally. Prompt and precise diagnosis, coupled with effective treatment, is imperative to reduce the fatality rates associated with this formidable disease. This study introduces a cutting-edge deep learning framework for the classification of lung cancer from CT scan imagery. The research encompasses a suite of image pre-processing strategies, notably Canny edge detection and wavelet transformations, which precede the extraction of salient features and subsequent classification via a Multi-Layer Perceptron (MLP). The optimization process is further refined using the Dragonfly Algorithm (DA). The methodology put forth has attained an impressive training and testing accuracy of 99.82%, underscoring its efficacy and reliability in the accurate diagnosis of lung cancer.

Share and Cite:

Jamshidi, B., Ghorbani, N. and Rostamy-Malkhalifeh, M. (2025) Optimizing Lung Cancer Detection in CT Imaging: A Wavelet Multi-Layer Perceptron (WMLP) Approach Enhanced by Dragonfly Algorithm (DA). Open Journal of Medical Imaging, 15, 106-136. doi: 10.4236/ojmi.2025.153010.

1. Introduction

Lung cancer remains the most lethal malignancy worldwide, with mortality rates surpassing those of all other cancers combined [1]. Early-stage detection is critical, as it significantly improves the five-year survival rate from a dismal 5% in late-stage diagnoses to over 50% [2]. The advent of advanced screening technologies promises to substantially improve patient prognoses.

The field of medical imaging has been revolutionized by recent strides in deep learning, yielding significant enhancements in the detection and classification of lung cancer from CT images. Innovations such as the 3D Convolutional Neural Network (CNN) approach by Diviya et al. (2024) and the LCD-Capsule Network by Bushara et al. (2023) have demonstrated the potential of these models to transform early detection and diagnosis [3] [4].

X-ray and computed tomography (CT) scans are pivotal in lung cancer diagnostics, offering high-resolution imagery that outperforms traditional radiography in detecting small and low-contrast pulmonary nodules [5]-[7]. However, the subjective interpretation of these images and the labor-intensive nature of their analysis pose significant challenges, often leading to diagnostic variability and reduced efficiency. The integration of computer vision technology into medical image processing has mitigated these issues, providing reliable support to medical professionals through automated detection algorithms [8].

Machine learning, particularly in the context of medical diagnostics, is an area of burgeoning research. The study of gene expression patterns in relation to disease progression has profound implications for both biological and clinical sciences [9]. Recent contributions to this domain include the deep neural network by Prasad et al. (2023), which achieved notable accuracy in lung cancer classification [10], and the CapsNet-based detection system by Shafi et al. (2022), which effectively retained object characteristics [11]. Comprehensive reviews and studies have further delineated the state-of-the-art in lung cancer segmentation, detection, and classification, emphasizing the importance of deep learning methods in advancing the field [12]-[14].

This study aims to automate lung cancer classification from CT scan images through deep learning techniques. Given the high resolution and detailed imagery provided by CT scans, they are invaluable in clinical lung cancer screening and diagnosis. Yet, manual analysis is subject to variability and is resource-intensive. By automating this process, we aim to alleviate the burden on radiologists and enhance diagnostic precision.

Our methodology encompasses image pre-processing, feature extraction, and classification. We utilize Canny edge detection and wavelet transforms for pre-processing, which improve image quality and aid in extracting salient features. These features—mean, standard deviation, energy, and entropy—are then used to train a Multi-Layer Perceptron (MLP). The MLP’s hyperparameters are optimized using the Dragonfly Algorithm (DA), inspired by the swarming behavior of dragonflies.

The efficacy of our approach is evidenced by the MLP’s training and testing accuracy of 99.82%, underscoring the potential of deep learning in lung cancer diagnostics. Our findings advocate for the integration of sophisticated deep learning models and optimization algorithms to bolster the accuracy and reliability of classification systems.

In summary, our comprehensive deep learning-based approach presents a promising avenue for the development of automated diagnostic systems to support early and precise lung cancer detection. Future endeavors will focus on dataset expansion, the exploration of advanced neural network architectures, and the incorporation of novel image pre-processing techniques to further refine classification accuracy and robustness.

2. Literature Review

2.1. Advancements in Wavelet Transform (WT) for Medical Imaging

Wavelet Transforms (WT) have emerged as a critical tool in lung cancer detection, providing sophisticated image analysis capabilities. Ziyad et al. introduced a novel lung extraction method employing a discrete wavelet transform coupled with adaptive thresholding and clustering techniques, achieving high segmentation accuracy [15]. Farheen et al. leveraged a two-dimensional discrete wavelet transform for intricate texture analysis in lung tumor segmentation, enhancing detection precision [16]. Additionally, a notable study introduced a cancer classification system utilizing wavelet decomposition and Convolutional Neural Networks (CNNs), surpassing traditional methods like Support Vector Machines (SVMs) with an accuracy rate of 99.5% [17]. These studies collectively affirm the transformative role of wavelet-based methodologies in advancing lung cancer diagnostics.

Zhao et al. (2024) developed WIA-LD2ND, a self-supervised low-dose CT denoising technique using wavelet-based image alignment to improve CT image quality. This method is particularly beneficial for enhancing the signal-to-noise ratio in low-dose CT scans, crucial for precise feature extraction and classification [18].

Nizami et al. (2024) proposed an automatic lung area segmentation method in CT scans using wavelet frames combined with K-means clustering. Their approach utilizes optimal wavelet packet frames for clustering coefficients, resulting in accurate lung region segmentation, a vital precursor to feature extraction [19]. Furthermore, Nazir et al. (2023) developed a lung segmentation algorithm using image fusion through Laplacian Pyramid decomposition, akin to wavelet transforms. Their technique demonstrated high accuracy and could be amalgamated with wavelet-based feature extraction to boost performance [20].

2.2. Progress in Multi-Layer Perceptron (MLP) Applications

The Multi-Layer Perceptron (MLP) continues to be a cornerstone in the development of predictive models for lung cancer risk assessment. Dritsas et al. leveraged machine learning techniques, including the MLP, to develop robust models capable of identifying individuals at an increased risk of lung cancer, highlighting the MLP’s predictive power [21]. In parallel, Rajalakshmi and Maguteeswaran employed an MLP-based model to automate the detection of lung adenocarcinoma infiltration from histopathology images, confirming the algorithm’s diagnostic accuracy [22]. Singh and Gupta analyzed the performance of various machine learning methods, including the MLP, for lung cancer detection and classification, with the MLP achieving an impressive accuracy of 88.55% [23]. Additionally, the VER-Net model—a hybrid transfer learning framework—incorporated an MLP in its architecture for lung cancer detection through CT scan images, attaining significant performance metrics [24].

Recent studies further reinforce the MLP’s significance in medical image analysis. Bikku’s research on a multi-layered deep learning perceptron approach for health risk prediction showcases the MLP’s efficacy in classifying and analyzing medical data, outperforming traditional classification methods [25]. Hayase and Karakida explored the potential of MLP-Mixer as a wide and sparse MLP, revealing new avenues for enhancing MLP architectures to achieve better performance [26]. A comparative study between MLP and Jordan recurrent neural networks for signal classification demonstrated the MLP’s robustness in handling various non-healthy scenarios [27]. Furthermore, advancements in MLP point cloud processing have been evaluated for 3D object classification and segmentation, indicating the MLP’s versatility beyond medical applications [28].

2.3. Innovations in Optimization with the Dragonfly Algorithm (DA)

The Dragonfly Algorithm (DA), pioneered by Seyedali Mirjalili, emerges as a state-of-the-art metaheuristic optimization technique, inspired by the complex swarming behavior observed in dragonflies [29]. This algorithm has been skillfully applied to refine neural network models for a variety of applications, with notable success in lung cancer detection. A prominent study demonstrated the DA’s utility in medical diagnostics through the implementation of two-level filtering in conjunction with a convolutional neural network (CNN). The DA-optimized CNN exhibited significant improvements in diagnostic accuracy, with recall and precision metrics soaring to 98%, thereby validating the DA’s effectiveness in medical image analysis and its crucial contribution to advancing computer-aided diagnosis systems [22].

Furthermore, a systematic review by Hosseini et al. shed light on the expanding application of deep learning in lung cancer diagnosis, particularly highlighting the DA’s role in enhancing model performance. This comprehensive analysis of various deep learning models, including those optimized with the DA, underscores the algorithm’s flexibility and efficiency [30]. Additionally, Javed et al.’s extensive review focuses on the utilization of deep learning techniques in lung cancer diagnosis, emphasizing the DA’s contributions to model enhancement. This review meticulously examines a range of deep learning models, spotlighting the DA’s adaptability and operational effectiveness [31].

Collectively, these studies highlight the DA’s integral role in propelling the field of computer-aided diagnosis systems forward, particularly in the precise detection and classification of lung cancer from CT images. The integration of the DA into deep learning frameworks heralds the advent of more accurate diagnostic tools, which are paramount for the early detection and efficacious treatment of lung cancer.

3. Materials and Methods

3.1. Data Acquisition and Pre-Processing

The dataset for this study was meticulously curated to encompass a comprehensive range of lung cancer manifestations. It includes a total of 1097 CT scan images categorized into 561 malignant, 120 benign, and 416 normal cases. This dataset was sourced from Kaggle and is designed to facilitate robust training and evaluation of the proposed Wavelet Multi-Layer Perceptron (WMLP) and Dragonfly Algorithm (DA) framework (see Table 1).

Table 1. Dataset description.

Dataset            Number of CT-Images
Benign Cases       120
Malignant Cases    561
Normal Cases       416

Each CT image in the dataset is pre-processed using advanced techniques such as Canny edge detection and wavelet transformations to enhance feature extraction and improve classification accuracy. The dataset’s diversity and quality are pivotal in achieving the high accuracy rates reported in this study.

3.2. Edge Detection Technique

Edge detection is a fundamental technique in image processing, essential for delineating image boundaries and facilitating detailed analysis. In our study, we employ the Canny edge detection algorithm, celebrated for its robust multi-stage process that ensures the clarity and precision of detected edges. Initially, the algorithm converts the image to grayscale, reducing it to a monochromatic scale to simplify edge detection. This is followed by Gaussian blurring, which diminishes noise interference and prevents the formation of false edges. Subsequently, gradient calculation is performed to determine edge strength and direction, where significant intensity changes indicate potential edges. Non-maximum suppression is then applied to refine these edges, retaining only the most distinct ones. The algorithm further employs a double thresholding strategy to classify edges as strong, weak, or non-edges, with edge tracking by hysteresis completing the process by preserving coherent edge structures.

To bolster the robustness of feature representation, we utilized multiple threshold values—(50, 150), (100, 200), and (150, 250)—to capture a spectrum of edge details. This comprehensive strategy enables the extraction of an extensive feature set from the CT scan images, which is vital for the precise classification of lung cancer.

As shown in Figures 1-3, the Canny edge detection algorithm outputs for benign, normal, and malignant cases illustrate the effectiveness of this technique in highlighting critical structural details.

The field of edge detection continues to evolve, with recent advancements offering new opportunities for refinement. Zhang et al. (2024) introduced a pioneering integration of DenseNet with Convolutional Neural Networks (CNNs) for lung cancer diagnosis, enhancing the pre-processing of CT scan images through data fusion and mobile edge computing [32]. El-Zaart and El-Arwadi (2015) developed a novel finite difference method-based operator for edge detection, potentially complementing the Canny algorithm [33]. Noviana et al. (2017) explored axial segmentation using the Canny method with morphological operations, providing insights into edge detection refinement [34]. Additionally, Bhatt et al. (2012) focused on the segmentation of multiple contours from CT scan images, which is relevant to lung cancer detection [35]. Preetha et al. (2012) conducted a comparative analysis of edge detection techniques, offering a broader perspective on their efficacy [36]. Lastly, Bandyopadhyay (2012) examined alternative edge detection methodologies from CT images of the lung, which could enhance our existing process [37].

Figure 1. Canny edge detection output for the benign case.

Figure 2. Canny edge detection output for the normal case.

Figure 3. Canny edge detection output for the malignant case.

3.3. Wavelet Transform (WT)

The utilization of wavelet transforms in image processing, particularly for the decomposition of images into distinct frequency components, has become a pivotal technique in medical imaging. This method enables multi-resolution analysis, offering a granular representation of images across various scales. The inherent capacity of wavelet transforms to reveal patterns and features that remain obscure in the spatial domain is especially beneficial in the analysis of CT scan images for lung cancer detection.

In the present study, the application of the Haar wavelet transform to CT scan images facilitates the separation into approximation and detail coefficients. This decomposition is instrumental in capturing vital information at multiple scales and orientations, which is essential for the subsequent feature extraction process. The extracted statistical features—mean, standard deviation, energy, and entropy—constitute a comprehensive feature set that encapsulates the image content. The Haar wavelet transform is particularly valued for its straightforwardness and effectiveness in simultaneously capturing temporal and frequency information. Decomposing the image into approximation and detail components simplifies the analysis of structural details and textures, which are critical for precise classification.
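The single-level Haar decomposition into approximation and detail coefficients can be sketched in pure NumPy as below. This is a minimal illustration equivalent in spirit to `pywt.dwt2(image, 'haar')`; the function name is ours, and sub-band labelling conventions vary between libraries.

```python
import numpy as np

def haar_dwt2(image: np.ndarray):
    """Single-level 2D Haar wavelet decomposition (minimal sketch).

    Returns the approximation (cA) and horizontal, vertical, and
    diagonal detail coefficients (cH, cV, cD), each half the input
    size. Assumes even image dimensions (e.g. 128 x 128).
    """
    x = image.astype(float)
    # Pairwise averages (low-pass) and differences (high-pass) along columns
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Repeat along rows to form the four sub-bands
    cA = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # approximation
    cH = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)  # horizontal detail
    cV = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)  # vertical detail
    cD = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)  # diagonal detail
    return cA, cH, cV, cD
```

For a 128 × 128 CT slice this yields four 64 × 64 coefficient matrices, from which the statistical features are then computed.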

As shown in Figure 4, the wavelet transform output for a malignant case highlights the detailed structural information that can be extracted. Similarly, Figure 5 and Figure 6 illustrate the outputs for normal and benign cases, respectively.

Figure 4. Wavelet transform output for the malignant case.

Figure 5. Wavelet transform output for the normal case.

Figure 6. Wavelet transform output for the benign case.

Recent advancements in this field include the work of Wang et al. (2024), who introduced a multilevel attention U-net segmentation algorithm for lung cancer based on CT images. This algorithm integrates wavelet transforms to enhance feature extraction, demonstrating the ongoing innovation in medical image processing [38].

Additionally, a study focusing on lung CT image enhancement, utilizing a total variational frame and wavelet transform, has shed light on methods to optimize image quality. Such enhancement is crucial for the detection of features that are vital for accurate classification [39].

3.4. Feature Extraction from CT Images

Feature extraction in the realm of lung cancer CT scan images has seen significant improvements with the introduction of deep learning model architectures. Indumathi and Vasuki (2023) proposed a novel Markov likelihood grasshopper classification (MLGC) model that combines marker-controlled segmentation with likelihood estimation between features for the classification of nodules in CT. This model employs the Grasshopper Optimization Algorithm (GOA) for feature optimization, which is then applied over a Boltzmann machine to derive classification results. The MLGC model has demonstrated a high accuracy value of 99.5%, outperforming existing models such as AlexNet, GoogleNet, and VGG-16 [40].

Fu et al. (2024) presented an attention-enhanced graph convolutional network method for predicting high-risk factors in lung cancer using thin CT scans. Their approach emphasizes the structural and relational data aspects, offering a fresh perspective on feature extraction [41].

Nazir et al. (2023) developed a method for efficient pre-processing and segmentation in lung cancer detection using fused CT images. They highlight the critical role of pre-processing in improving feature extraction quality, essential for precise classification [20].

Researchers have conducted a thorough comparative study on global and local feature extraction for lung cancer detection using CT scan images, underscoring the advantages of combining both feature types for robust classification [42].

Dritsas et al. (2023) have developed a deep learning model architecture that enhances segmentation and feature extraction in lung CT images. Their approach has been pivotal in improving the detection of lung tumors at early stages, which is crucial for increasing the survival rate of patients with lung cancer [43].

Savitha and Jidesh (2023) introduced a holistic deep learning approach for the identification and classification of sub-solid lung nodules in computed tomographic scans. Their methodology provides a comprehensive analysis of nodule characteristics, contributing to the precision of lung cancer diagnosis [44].

In this study, feature extraction is a critical step in the image classification process. Several statistical features were extracted from the wavelet-transformed images. These features included mean, which represents the average pixel intensity within the coefficient matrix; standard deviation, which measures the variability or spread of pixel intensities; energy, calculated as the sum of squared values of the coefficients, indicating the signal strength within the image; and entropy, which represents the randomness or complexity of the image data, calculated from the histogram of pixel intensities. These features provided a detailed and comprehensive representation of the image content, aiding in effective classification. The mean provides a basic understanding of the overall intensity of the image, while the standard deviation offers insights into the contrast and variations within the image. Energy measures the overall intensity of the high-frequency components, which is useful for detecting detailed textures and patterns. Entropy quantifies the amount of information and randomness in the image, which can help distinguish between different types of tissue structures.
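The four statistics can be computed per coefficient matrix as in the NumPy sketch below. The 256-bin histogram used for the entropy estimate is our assumption, since the paper does not fix a bin count.

```python
import numpy as np

def wavelet_band_features(coeffs: np.ndarray) -> dict:
    """Mean, standard deviation, energy, and entropy of one
    wavelet coefficient matrix, as described above."""
    c = coeffs.ravel().astype(float)
    hist, _ = np.histogram(c, bins=256)       # intensity histogram
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins so log2 is defined
    return {
        "mean": float(c.mean()),              # average intensity
        "std": float(c.std()),                # spread / contrast
        "energy": float(np.sum(c ** 2)),      # sum of squared coefficients
        "entropy": float(-np.sum(p * np.log2(p))),  # randomness / complexity
    }
```

Applying this to each of the four sub-bands produces the compact feature vector that supplements the flattened image input.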

3.5. Wavelet Multi-Layer Perceptron Architecture

The classification model used in this study was a Multi-Layer Perceptron (MLP). The MLP architecture included an input layer with 128 × 128 nodes, corresponding to the size of the flattened images, a hidden layer with 100 nodes to capture complex patterns in the data, and an output layer with 3 nodes corresponding to the three categories: benign, malignant, and normal. The ReLU (Rectified Linear Unit) activation function was used in the hidden layer to introduce non-linearity, and the softmax activation function was used in the output layer to convert the output logits into probabilities. The ReLU activation function helps in avoiding the vanishing gradient problem and allows the model to learn more complex patterns by introducing non-linearity. The softmax activation function in the output layer ensures that the output probabilities sum up to one, making it easier to interpret the model’s predictions. The architecture was designed to balance complexity and computational efficiency, ensuring that the model could effectively learn from the high-dimensional input data while maintaining a manageable number of parameters.

The MLP model was trained using stochastic gradient descent with a learning rate of 0.01. The dataset was split into training (70%) and testing (30%) sets. The training data was divided into batches of 256 images to facilitate efficient training. For each batch, the input data was passed through the MLP to obtain the output probabilities. The cross-entropy loss between the predicted probabilities and the true labels was calculated, and the gradients of the loss with respect to the model parameters were computed using backpropagation. The model parameters were updated using the computed gradients to minimize the loss. This training process was repeated for 100 epochs to ensure convergence. The model’s performance was evaluated using accuracy and a confusion matrix on the test dataset. The use of mini-batches helps stabilize the training process and allows for more efficient computation of gradients. The cross-entropy loss function is particularly suitable for classification tasks, as it measures the difference between the predicted and true probability distributions. Backpropagation ensures that the model learns by updating the weights in the direction that minimizes the loss, thereby improving the model’s accuracy over time.
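A minimal PyTorch sketch of this architecture and one training step is given below. The class and helper names are ours, not from the paper; note that `nn.CrossEntropyLoss` applies log-softmax internally, so the network emits raw logits and softmax is applied only when probabilities are needed at inference time.

```python
import torch
from torch import nn

class WMLP(nn.Module):
    """MLP described above: 128*128 inputs, one hidden layer of
    100 ReLU units, and 3 output logits (benign/malignant/normal)."""

    def __init__(self, num_features=128 * 128, hidden=100, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # Flatten (batch, 1, 128, 128) images to (batch, 16384)
        return self.net(x.flatten(1))

def train_step(model, optimizer, loss_fn, images, labels):
    """One mini-batch update: forward pass, cross-entropy loss,
    backpropagation, and an SGD parameter step."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the study's configuration this step would be driven by `torch.optim.SGD(model.parameters(), lr=0.01)` over batches of 256 images for 100 epochs.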

The parameters and values utilized in this study for the Multi-Layer Perceptron (MLP) model are comprehensively summarized in Table 2. This table provides a detailed overview of the key components and their respective configurations, including the random seed for reproducibility, the training batch size, the number of epochs, the hidden layer size, the number of classes, and the number of input features. Additionally, it specifies the device used (GPU if available), the optimizer (Stochastic Gradient Descent with a learning rate of 0.01), and the loss function (Cross Entropy). These parameters were chosen to ensure the model’s robustness and efficiency in handling the high-dimensional input data.

Table 2. Key components and their tunings used in a Multi-Layer Perceptron (MLP).

Component            Tuning
Optimizer            SGD
Learning Rate        0.01
Batch Size           256
Random Seed          1
Num Epochs           100
Loss Function        Cross entropy
Hidden Layer Size    100
Num Classes          3
Num Features         128 × 128
Device               cuda:0 (if available)

3.6. Hyperparameter Optimization Using the Dragonfly Algorithm

The Dragonfly Algorithm (DA) is a nature-inspired optimization algorithm that mimics the static and dynamic swarming behaviors of dragonflies. In this study, the DA was employed to optimize the hyperparameters of the MLP model, including the learning rate and the number of neurons in the hidden layer. The DA involved initializing a population of dragonflies, evaluating their fitness based on classification accuracy, and iteratively updating their positions and velocities to find the optimal solution. The DA provided an efficient means of exploring the hyperparameter space, leading to improved model performance. The algorithm leverages the swarming behavior of dragonflies to explore the solution space effectively, balancing exploration and exploitation. By mimicking the attraction towards food sources and repulsion from enemies, the DA ensures that the population converges towards the optimal solution while avoiding local minima. The optimization process involves updating the positions and velocities of the dragonflies based on the influence of neighboring dragonflies and the global best solution, ensuring a thorough exploration of the hyperparameter space. The parameters and values used in this study for the DA are summarized in Table 3. This table provides an overview of the key components and their respective configurations.

Table 3. Key components and their tunings used in the Dragonfly Algorithm (DA).

Component                        Tuning
Objective Function               Rastrigin Function / Custom Objective Function
Lower Bound (lb)                 −5.12 / [0.0001, 10]
Upper Bound (ub)                 5.12 / [0.1, 200]
Dimensions (dim)                 10 / 2
Population Size                  30 / 10
Maximum Iterations (max_iter)    100 / 2
Velocity Initialization          Zeros
Population Initialization        Uniform Distribution
Best Solution                    Initialized to First Individual
Best Fitness                     Initialized to Infinity

The table above provides a detailed summary of the essential components and their respective configurations utilized in the Dragonfly Algorithm (DA) for this study. The parameters encompass the objective function, search space boundaries, dimensions, population size, and the maximum number of iterations. The velocities are initialized to zeros, and the population is drawn from a uniform distribution. The initial best solution is set to the first individual in the population, with the best fitness initialized to infinity. These configurations facilitate a thorough exploration of the hyperparameter space, thereby optimizing the performance of the MLP model.
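The five swarming behaviors can be sketched as a compact minimization routine in NumPy. This is a deliberately simplified illustration: the behavior weights below are our assumptions with fixed values, whereas the published DA adapts them across iterations and adds Lévy flights for dragonflies with no neighbors; the enemy term here is reduced to a plain repulsion.

```python
import numpy as np

def dragonfly_optimize(objective, lb, ub, dim, pop_size=10, max_iter=50, seed=1):
    """Minimal Dragonfly Algorithm sketch (minimisation)."""
    rng = np.random.default_rng(seed)
    lb = np.full(dim, lb, float) if np.isscalar(lb) else np.asarray(lb, float)
    ub = np.full(dim, ub, float) if np.isscalar(ub) else np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop_size, dim))   # uniform initialisation
    V = np.zeros((pop_size, dim))                   # step vectors start at zero
    fitness = np.array([objective(x) for x in X])
    best = X[np.argmin(fitness)].copy()             # food source
    best_fit = fitness.min()                        # best fitness so far
    s, a, c, f, e, w = 0.01, 0.05, 0.3, 0.8, 0.1, 0.6  # behaviour weights (assumed)
    for _ in range(max_iter):
        worst = X[np.argmax(fitness)]               # enemy
        centre = X.mean(axis=0)
        mean_V = V.mean(axis=0)
        for i in range(pop_size):
            S = X[i] - centre          # separation: push away from neighbours
            A = mean_V                 # alignment: match neighbours' velocity
            C = centre - X[i]          # cohesion: pull toward the swarm centre
            F = best - X[i]            # attraction toward the food source
            E = X[i] - worst           # repulsion from the enemy (simplified)
            V[i] = w * V[i] + s * S + a * A + c * C + f * F + e * E
            X[i] = np.clip(X[i] + V[i], lb, ub)     # stay inside the bounds
        fitness = np.array([objective(x) for x in X])
        if fitness.min() < best_fit:
            best_fit = fitness.min()
            best = X[np.argmin(fitness)].copy()
    return best, best_fit
```

For example, `dragonfly_optimize(lambda x: float(np.sum(x ** 2)), -5.12, 5.12, dim=2)` minimizes the sphere function over the same bounds used for the Rastrigin test in Table 3.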

4. Proposed Model

4.1. Integration of WMLP and DA for Lung Cancer Detection

In this section, we present our proposed model for lung cancer detection, which integrates a Wavelet-based Multi-Layer Perceptron (WMLP) with the Dragonfly Algorithm (DA) for optimization.

4.1.1. Wavelet-Based Multi-Layer Perceptron (WMLP)

The WMLP is designed to leverage the advantages of wavelet transforms in feature extraction. The wavelet transform decomposes the input images into different frequency components, capturing both spatial and frequency information. This decomposition results in four sets of coefficients: approximation (cA), horizontal detail (cH), vertical detail (cV), and diagonal detail (cD). These coefficients are then used to extract statistical features such as mean, standard deviation, energy, and entropy.

Following wavelet-based pre-processing, the images are flattened and used as input to a Multi-Layer Perceptron (MLP) with one hidden layer. The MLP consists of an input layer that corresponds to the flattened image dimensions, a hidden layer with 100 neurons, and an output layer with 3 neurons representing the classes: benign, malignant, and normal. The MLP uses the ReLU activation function in the hidden layer and the softmax function in the output layer.

4.1.2. Dragonfly Algorithm (DA)

The Dragonfly Algorithm (DA) is employed to optimize the hyperparameters of the Wavelet-based Multi-Layer Perceptron (WMLP), specifically the learning rate and the number of hidden neurons. The DA mimics the static and dynamic swarming behaviors of dragonflies, which are characterized by five main factors: separation, alignment, cohesion, attraction to food sources, and distraction from enemies.

The objective function for the DA is defined as the negative validation accuracy of the WMLP. The DA iteratively adjusts the hyperparameters to minimize this objective function, thereby maximizing the validation accuracy. The optimized hyperparameters are then used to train the final WMLP model.
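This objective can be sketched as below. So that the snippet is runnable on its own, the validation step is replaced by a toy surrogate that merely peaks near lr = 0.01 and 100 hidden neurons; in the actual pipeline `validation_accuracy` would train and evaluate the WMLP. All names here are illustrative, and the bounds referenced in the comments follow Table 3.

```python
import numpy as np

def validation_accuracy(lr: float, n_hidden: int) -> float:
    """Stand-in for 'train the WMLP and return validation accuracy'.
    A real implementation would train the network; this surrogate
    just peaks near lr=0.01, n_hidden=100 so the sketch runs."""
    return 1.0 / (1.0 + 100 * (np.log10(lr) + 2) ** 2
                  + ((n_hidden - 100) / 100) ** 2)

def da_objective(x: np.ndarray) -> float:
    """DA objective: negative validation accuracy, as described above.
    x = [learning_rate, n_hidden], matching the custom-objective
    bounds [0.0001, 0.1] and [10, 200] in Table 3."""
    lr = float(x[0])
    n_hidden = int(round(x[1]))   # neuron counts must be integers
    return -validation_accuracy(lr, n_hidden)
```

Minimizing `da_objective` with the DA is then equivalent to maximizing validation accuracy over the hyperparameter space.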

4.1.3. Integration and Training

The integration of WMLP and DA involves the following steps:

  • Step 1: Wavelet Transform: Apply the Haar wavelet transform to the input images to obtain the wavelet coefficients (cA, cH, cV, cD).

  • Step 2: Feature Extraction: Extract statistical features such as mean, standard deviation, energy, and entropy from the wavelet coefficients.

  • Step 3: Initial Training: Train the WMLP with initial hyperparameters.

  • Step 4: Optimization: Use the DA to optimize the learning rate and the number of hidden neurons.

  • Step 5: Final Training: Retrain the WMLP with the optimized hyperparameters obtained from the DA.

  • Step 6: Evaluation: Evaluate the model on the test dataset using metrics such as accuracy, precision, recall, F1-score, and AUC.

The proposed model aims to improve the accuracy and robustness of lung cancer detection by combining the powerful feature extraction capabilities of wavelet transforms with the optimization capabilities of the Dragonfly Algorithm.

4.2. Model Architecture and Configuration

The model utilized in this study is a Multi-Layer Perceptron (MLP) designed to classify lung cancer images into benign, malignant, and normal categories. The architecture comprises an input layer with 16,384 neurons, corresponding to the 128 × 128 pixel grayscale images, followed by a hidden layer with 100 neurons, and an output layer with 3 neurons representing the three classes.

The ReLU activation function is employed in the hidden layer, while the output layer utilizes the softmax activation function.

The hyperparameters used in this study are detailed in Table 2. These include a learning rate of 0.01, a batch size of 256, and a total of 100 epochs. The model was trained using the Stochastic Gradient Descent (SGD) optimizer and the Cross-Entropy loss function.

Data pre-processing involved resizing the images to 128 × 128 pixels and normalizing the pixel values to the range [0, 1]. The dataset was divided into training and validation sets with a 70-30 split. Training was conducted on an NVIDIA GTX 1080 Ti GPU, which significantly accelerated the process.

The model’s performance was evaluated using metrics such as accuracy, precision, recall, F1-score, and AUC-ROC, providing a comprehensive assessment of its classification capabilities.

The implementation was carried out using the PyTorch library, which facilitated the construction and training of the neural network.

Additionally, the Dragonfly Algorithm (DA) was employed to optimize the hyperparameters of the MLP model, including the learning rate and the number of neurons in the hidden layer. The DA involved initializing a population of dragonflies, evaluating their fitness based on classification accuracy, and iteratively updating their positions and velocities to find the optimal solution. This approach provided an efficient means of exploring the hyperparameter space, leading to improved model performance. The parameters and values used in this study for the DA are summarized in Table 3. This table provides an overview of the key components and their respective configurations.

5. Results and Discussion

5.1. Experimental Setup and Design

The dataset used in this study consists of images categorized into three classes: Benign, Malignant, and Normal. The images were sourced from Kaggle and pre-processed by resizing to 128 × 128 pixels and converting to grayscale.

Pre-processing techniques, namely Canny edge detection and wavelet transforms, were applied to enrich the feature representation of the dataset. The images were normalized and transformed using the following steps:

1) Canny Edge Detection: Applied with thresholds of (50, 150), (100, 200), and (150, 250).

2) Wavelet Transform: Performed using the Haar wavelet to extract features, including mean, standard deviation, energy, and entropy.

The neural network architecture employed in this study is a Multi-Layer Perceptron (MLP) with 100 hidden layers, each containing 128 neurons. The ReLU activation function was used, and the model was trained using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01, a batch size of 256, and 100 epochs.
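Taken literally, the architecture described (100 hidden layers of 128 ReLU neurons each) could be sketched in PyTorch as below. The input dimension of 20 is an assumption standing in for the actual wavelet feature vector, and a stack this deep is unusually hard to train with plain SGD; the sketch simply follows the text as written.

```python
import torch
import torch.nn as nn

def build_wmlp(in_features: int = 20, hidden_layers: int = 100,
               hidden_units: int = 128, n_classes: int = 3) -> nn.Sequential:
    """MLP as described in the text: `hidden_layers` ReLU layers of
    `hidden_units` neurons each. `in_features` (the wavelet feature
    dimension) is an assumption; adjust it to the actual feature vector."""
    layers = [nn.Linear(in_features, hidden_units), nn.ReLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(hidden_units, hidden_units), nn.ReLU()]
    layers.append(nn.Linear(hidden_units, n_classes))  # 3 output classes
    return nn.Sequential(*layers)

model = build_wmlp()
logits = model(torch.randn(4, 20))   # forward pass on a batch of 4 feature vectors
```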

The training process involved minimizing the cross-entropy loss function, and the model’s performance was evaluated using accuracy, AUC, and other relevant metrics. The dataset was split into training and validation sets in a 70/30 ratio.

The Dragonfly Algorithm was utilized to optimize the model’s hyperparameters. The objective function aimed to maximize the validation accuracy, with a population size of 30 and 100 iterations.

5.2. Results

The model’s performance was assessed using various metrics, including accuracy, precision, recall, F1-score, and AUC. Confusion matrices and ROC curves were plotted to visualize the results.

The training accuracy achieved was 99.82%, and the test accuracy was also 99.82%, based on the classification results and confusion matrix. The AUC scores for each class were calculated, and the results are as follows (Table 4):

Table 4. AUC scores for each class (benign, malignant, and normal cases) in the proposed model.

| Class | AUC Score |
| --- | --- |
| Benign | 1.0 |
| Malignant | 1.0 |
| Normal | 1.0 |

Confusion matrices and ROC curves were plotted to visualize the classification performance (see Figures 7-10). The confusion matrix (Figure 7) shows the number of true positives, true negatives, false positives, and false negatives for each class. The ROC curves illustrate the trade-off between the true positive rate and false positive rate for each class. Specifically, three ROC curves were plotted for Class 0 (benign cases) (Figure 8), Class 1 (malignant cases) (Figure 9), and Class 2 (normal cases) (Figure 10). The false positive rate for each class was 0.0, and the true positive rate for each class was 1.0, indicating perfect classification performance.
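The per-class AUC values of 1.0 follow directly from perfect separation of the class scores. A minimal NumPy sketch of the pairwise (Mann-Whitney) definition of AUC, on made-up scores, illustrates this:

```python
import numpy as np

def auc_score(y_true, scores):
    """Pairwise AUC: the probability that a randomly chosen positive
    receives a higher score than a randomly chosen negative (ties count
    0.5). Equivalent to the area under the ROC curve."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfect separation, as in Table 4, yields AUC = 1.0:
perfect = auc_score([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
# One inverted pair out of four yields AUC = 0.75:
mixed = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```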

5.3. Data Processing and Augmentation

5.3.1. Data Processing

Data Cleaning: The dataset was initially cleaned by removing any duplicate entries and handling missing values. Images were converted to grayscale to reduce computational complexity and ensure uniformity across the dataset.

Data Transformation: Images were resized to 128 × 128 pixels using the transforms.Resize function from the torchvision library. This standardization ensured consistent input dimensions for the neural network.

Normalization was then applied using transforms.Normalize((0.5,), (0.5,)), mapping the [0, 1] pixel values to the range [−1, 1], which helps accelerate the convergence of neural network training.
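The Normalize((0.5,), (0.5,)) step computes (x − 0.5)/0.5 per pixel; a quick NumPy check, assuming inputs already scaled to [0, 1], confirms the resulting [−1, 1] range:

```python
import numpy as np

def normalize(x, mean=0.5, std=0.5):
    # Same arithmetic as torchvision's transforms.Normalize((0.5,), (0.5,)).
    return (x - mean) / std

img = np.linspace(0.0, 1.0, 5)   # pixel values already scaled to [0, 1]
out = normalize(img)             # values now span [-1, 1]
```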

Feature Engineering: Wavelet transforms were applied to the images to extract multi-resolution features. The pywt.dwt2 function was used to decompose each image into approximation and detail coefficients (cA, cH, cV, cD). Statistical features such as mean, standard deviation, energy, and entropy were then calculated from these coefficients to serve as inputs for the model.
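These features can be sketched as follows. To keep the example self-contained, a single-level Haar decomposition is implemented directly in NumPy rather than calling pywt.dwt2; the sub-band naming convention and the entropy definition (Shannon entropy of the normalized coefficient energies) are assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT for an even-sized image, arranged like
    the (cA, (cH, cV, cD)) output of pywt.dwt2(img, 'haar')."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row-wise lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row-wise highpass
    cA = (a[0::2] + a[1::2]) / np.sqrt(2)            # approximation
    cH = (a[0::2] - a[1::2]) / np.sqrt(2)            # horizontal detail
    cV = (d[0::2] + d[1::2]) / np.sqrt(2)            # vertical detail
    cD = (d[0::2] - d[1::2]) / np.sqrt(2)            # diagonal detail
    return cA, (cH, cV, cD)

def coeff_features(c):
    """Mean, standard deviation, energy, and entropy of one sub-band."""
    c = np.abs(c).ravel()
    energy = np.sum(c ** 2)
    p = c ** 2 / energy                       # normalized energy distribution
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [c.mean(), c.std(), energy, entropy]

img = np.random.default_rng(1).random((128, 128))
cA, (cH, cV, cD) = haar_dwt2(img)
features = sum((coeff_features(c) for c in (cA, cH, cV, cD)), [])
```

Because the Haar transform is orthonormal, the total energy of the four sub-bands equals that of the input image, which is a convenient sanity check.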

Figure 7. Confusion matrix.

Figure 8. Class 0: the ROC curve of the benign cases.

Figure 9. Class 1: the ROC curve of the malignant cases.

Figure 10. Class 2: the ROC curve of the normal cases.

5.3.2. Data Augmentation

Pre-processing techniques such as Canny edge detection and wavelet transforms were employed to enhance the dataset. Canny edge detection was performed with multiple threshold values to generate edge-detected versions of the images, which helps in capturing different levels of detail and edges.

Wavelet transforms were used to decompose images into different frequency components, providing a richer set of features for the model.

The augmentation techniques were implemented using OpenCV for Canny edge detection and PyWavelets for wavelet transforms. Display helper functions were used to apply the multiple Canny threshold pairs and the wavelet transform, and to visualize and save the augmented images.

Code snippets for these implementations are provided below:

Listing 1. Canny edge detection and display function.

The impact of data augmentation on model performance was significant. Models trained with augmented data showed improved accuracy and robustness. For instance, the training accuracy increased from 90% to 100%, and the test accuracy improved from 88% to 100% after applying data augmentation techniques.

The confusion matrix and ROC curves further demonstrated the effectiveness of the augmented data in enhancing the model’s ability to distinguish between different classes.

5.4. Evaluation Metrics

In this research, we meticulously evaluate a comprehensive suite of metrics to validate the performance of the Wavelet Multi-Layer Perceptron (WMLP), optimized by the Dragonfly Algorithm (DA), for the detection and classification of lung cancer from CT scan images. These metrics are pivotal in providing a holistic assessment of the model’s diagnostic capabilities, considering the intricacies of prediction accuracy and the types of errors that may occur.

Accuracy: As a fundamental indicator of the model’s overall performance, accuracy quantifies the proportion of true results, encompassing both true positives (TP) and true negatives (TN), within the entire dataset. It reflects the model’s general effectiveness in lung cancer detection [45]. The accuracy is calculated using Equation (1).

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (1)

Precision: Precision, or the positive predictive value, is critical in medical diagnostics as it measures the ratio of true positives to the combined total of true positives and false positives. A high precision rate is indicative of the model’s reliability in minimizing false-positive diagnoses, thereby preventing unnecessary medical interventions [31]. The precision is defined in Equation 2.

Precision = TP / (TP + FP)   (2)

Recall (Sensitivity): The recall metric is essential for ensuring comprehensive cancer detection, as it calculates the model’s ability to correctly identify all actual cases of lung cancer, represented by the ratio of true positives to the sum of true positives and false negatives [46]. The recall is given by Equation 3.

Recall = TP / (TP + FN)   (3)

F1-Score: The F1 score harmonizes precision and recall, providing a single measure that balances both metrics. It is particularly useful when seeking a model that achieves an equilibrium between identifying all relevant instances and maintaining a low false-positive rate [47]. The F1 score is calculated using Equation 4.

F1-Score = 2 × (Precision × Recall) / (Precision + Recall)   (4)
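Equations (1)-(4) can be computed directly from the confusion-matrix counts; a minimal sketch follows (the example counts are made up):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1 from confusion-matrix counts,
    following Equations (1)-(4)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts, not the paper's results:
m = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```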

The aforementioned metrics were computed using a distinct test dataset, separate from the training data, to guarantee an impartial evaluation of the model’s performance. The WMLP, augmented with DA optimization, exhibited exemplary scores across all metrics, affirming its robustness and suitability for clinical application in the precise diagnosis of lung cancer [48].

The results of our proposed model are illustrated in several key figures. The learning rate over epochs is depicted in Figure 11, providing insights into the training dynamics. Figure 12 and Figure 13 show the training and validation accuracy and loss before applying the Dragonfly Algorithm. After applying the Dragonfly Algorithm, the training accuracy and loss improved significantly, as shown in Figure 14.

5.5. Training and Validation Procedures

The training and validation of the Wavelet Multi-Layer Perceptron (WMLP) model were conducted systematically to ensure high performance and reliability. The dataset was divided into training and validation sets in a 70% to 30% ratio, providing a substantial amount of data for training while preserving a significant portion for validation to monitor the model’s performance.

During the training phase, the WMLP model’s weights were updated using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01. The training spanned 100 epochs, with each epoch consisting of multiple iterations over mini-batches of 256 images. The cross-entropy loss function was utilized to measure the discrepancy between the predicted and true labels, guiding the optimization process.

To prevent overfitting, early stopping and regularization techniques were employed. Early stopping involved monitoring the validation loss and halting training when no improvement was observed, thus preventing the model from overfitting to the training data. Additionally, dropout and L2 regularization were applied to enhance the model’s generalization capabilities.
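The early-stopping rule can be sketched as a small helper that tracks the validation loss with a patience budget; the patience value below is an assumption, as the text does not state one.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience`
    consecutive epochs. The patience value is an assumption; the text
    does not specify one."""

    def __init__(self, patience: int = 5, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Simulated validation losses: improvement, then a plateau.
stopper = EarlyStopping(patience=3)
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.70, 0.73]
stop_epoch = next(i for i, v in enumerate(losses) if stopper.step(v))
```

With a patience of 3, training halts at the third epoch without improvement over the best loss of 0.7.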

The model’s performance was evaluated after each epoch using metrics such as accuracy, precision, recall, F1-score, and AUC. These metrics provided a comprehensive assessment of the model’s classification capabilities, ensuring robust and reliable results.

Figure 11. Learning rate over epochs.

Figure 12. Training and validation accuracy before DA.

Figure 13. Training and validation loss before DA.

Figure 14. Training accuracy and loss after DA.

5.6. Comparative Analysis of Model Performance

To evaluate the effectiveness of the proposed Wavelet Multi-Layer Perceptron (WMLP) model, a comparative analysis was conducted against several baseline models and state-of-the-art approaches. The performance of each model is summarized in Table 5.

Table 5. Performance comparison of different models.

| Article | Year | Model | Accuracy | Precision | Recall | F1-Score | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [49] | 2024 | VGG19 & ANN TL | 91.26 | 0.91 | 0.91 | 0.91 | 0.91 |
| [5] | 2018 | CNN | 84.02 | - | - | - | - |
| [50] | 2019 | Modified AlexNet (MAN) | 96.80 | - | - | 96.87 | - |
| [51] | 2017 | CNN | 97.62 | - | - | - | 95.05 |
| [52] | 2019 | MLP | 0.88 | 0.86 | 0.86 | 0.89 | - |
| [53] | 2018 | MLP | 0.98 | - | - | - | - |
| [54] | 2018 | Stacked Autoencoder & Softmax | 96.09 | - | - | - | - |
| [55] | 2018 | Deep Autoencoder | 91.20 | - | - | - | - |
| [56] | 2017 | CNN | 98.05 | - | - | - | - |
| [57] | 2018 | MV-KBC | 91.06 | - | - | - | 95.73 |
| [58] | 2017 | ResNet | 89.09 | - | - | - | - |
| [59] | 2017 | DBN | 95 | - | - | - | 93 |
| [60] | 2018 | CNN | 94.06 | - | - | - | 98 |
| [61] | 2024 | CNN, ResNet-50, Inception V3, Xception | 92 | - | - | 91.72 | 98.21 |
| [62] | 2024 | Weighted CNN | 85.02 | 86.35 | 85.57 | 85.95 | - |
| [63] | 2022 | CNN AlexNet (SGD Optimizer) | 97.25 | - | - | - | - |
| Proposed model | 2024 | Wavelet-MLP (WMLP) Optimised with DA | 99.82 | 0.99 | 0.99 | 0.99 | 0.99 |

As shown in Table 5, the proposed WMLP model outperforms all other models across various metrics, including accuracy, precision, recall, F1-score, and AUC. The WMLP model achieves an impressive accuracy of 99.82%, significantly higher than the other models. This demonstrates the effectiveness of the WMLP model in accurately diagnosing lung abnormalities.

The predicted labels for benign cases, as shown in Figure 15, highlight the model’s accuracy in these instances. Additionally, the scattered box plot in Figure 16 provides a comparative analysis of the model performance across different configurations, illustrating the robustness and superiority of the proposed WMLP model.

The results indicate that the proposed WMLP model significantly outperforms the baseline MLP and CNN models listed in Table 5 across all evaluation metrics. The incorporation of wavelet transformations enhances feature extraction, capturing more nuanced details in the CT images. Additionally, the Dragonfly Algorithm optimization contributes to the superior performance by fine-tuning the model’s hyperparameters.

The WMLP model achieved an accuracy of 99.82%, a precision of 0.99, a recall of 0.99, an F1-score of 0.99, and an AUC of 0.99, demonstrating its robustness and reliability in lung cancer classification. These results underscore the efficacy of the proposed approach in accurately diagnosing lung cancer from CT images.

Figure 15. Predicted labels for benign cases.

Figure 16. Scattered box plot of model performance.

Recent studies have also explored various deep learning models for medical image classification. For instance, a study by Zhang et al. (2023) utilized a CNN-based approach for lung cancer detection, achieving an accuracy of 95.67% [64]. Similarly, Li et al. (2022) implemented an SVM model with an RBF kernel for breast cancer classification, reporting an accuracy of 89.45% [65]. These comparisons highlight the advancements and effectiveness of the proposed WMLP model.

Future work could further explore the integration of additional pre-processing techniques and optimization algorithms to enhance the model’s performance. Moreover, validating the model on larger and more diverse datasets will be crucial to ensure its generalizability and applicability in clinical settings.

5.7. Discussion on the Efficacy of the Proposed Model

The efficacy of the proposed WMLP model is evident from its superior performance metrics compared to other models. The use of wavelet transformations allows for more effective feature extraction, which is crucial for accurately identifying patterns in CT images. This method captures both spatial and frequency information, which enhances the model’s ability to distinguish between benign, malignant, and normal cases.

The Dragonfly Algorithm (DA) plays a significant role in optimizing the model’s hyperparameters, leading to improved performance. By fine-tuning parameters such as the learning rate and the number of hidden-layer neurons, the DA ensures that the model is both accurate and efficient. This optimization process is particularly important in medical image analysis, where precision is critical.

Comparative studies with recent models further validate the proposed approach. For example, Zhang et al. (2023) achieved a 95.67% accuracy with a CNN-based model for lung cancer detection [64]. Traditional machine learning models, such as Support Vector Machines (SVMs), while effective in some imaging applications, often require extensive manual feature engineering and typically underperform in comparison to deep learning methods when applied to complex medical images like lung CT scans. In contrast, the proposed WMLP model, which integrates wavelet-based pre-processing with a tuned MLP, achieved a substantially higher accuracy of 99.82%.

Future research should focus on expanding the dataset to include more diverse and larger samples, which will help in validating the model’s robustness across different populations. Additionally, exploring other pre-processing techniques and optimization algorithms could further enhance the model’s performance. Clinical trials and real-world applications will be essential to confirm the model’s practical utility in diagnostic workflows.

5.8. Limitations and Future Research Directions

Despite the promising results achieved by the proposed deep learning framework, several limitations must be acknowledged. Firstly, the dataset utilized in this study, while diverse, is relatively small in comparison to the vast array of CT images available globally. This limitation may affect the generalizability of the model to broader, more varied populations. Future research should aim to validate the model on larger, more heterogeneous datasets to ensure its robustness and applicability across different demographic groups.

Secondly, the pre-processing techniques employed, such as Canny edge detection and wavelet transformations, although effective, may not capture all the nuanced features present in CT images. Exploring alternative or additional pre-processing methods could potentially enhance feature extraction and improve classification accuracy further.

Moreover, the current study focuses solely on the classification of lung cancer into benign, malignant, and normal categories. Future research could expand this framework to include more detailed sub-classifications of lung cancer types, which could provide more granular insights and aid in more tailored treatment planning.

Another limitation is the computational complexity associated with the Dragonfly Algorithm (DA) optimization. While DA has proven effective in enhancing the model’s performance, it is computationally intensive, which may limit its practicality in real-time clinical settings. Future work could explore more efficient optimization algorithms or hybrid approaches that balance accuracy and computational efficiency.

Lastly, the study’s reliance on retrospective data means that prospective validation in clinical settings is necessary to confirm the model’s real-world applicability. Future research should include clinical trials to evaluate the model’s performance in a live clinical environment, ensuring its readiness for integration into routine diagnostic workflows.

In conclusion, while the proposed framework demonstrates significant potential in improving lung cancer detection, addressing these limitations through future research will be crucial in advancing its development and ensuring its practical utility in clinical practice.

6. Conclusions

In this study, we introduced a novel deep learning framework for lung cancer detection from CT images, leveraging a Wavelet Multi-Layer Perceptron (WMLP) approach enhanced by the Dragonfly Algorithm (DA). The proposed methodology, incorporating advanced image pre-processing techniques such as Canny edge detection and wavelet transformations, demonstrated remarkable accuracy in classifying lung cancer, achieving a training and testing accuracy of 99.82%.

Despite these promising results, several limitations were identified. The relatively small and homogeneous dataset used in this study may limit the generalizability of the model. Future research should focus on validating the model with larger and more diverse datasets to ensure its robustness across different populations. Additionally, exploring alternative pre-processing methods could further enhance feature extraction and classification accuracy.

The computational complexity of the DA optimization presents another challenge, suggesting the need for more efficient algorithms or hybrid approaches to balance accuracy and computational demands. Furthermore, expanding the classification framework to include more detailed subclassifications of lung cancer could provide deeper insights and support more personalized treatment plans.

Finally, the reliance on retrospective data highlights the necessity for prospective validation in clinical settings. Future studies should include clinical trials to evaluate the model’s performance in real-world environments, ensuring its readiness for integration into routine diagnostic workflows.

In summary, while the proposed framework shows significant potential in enhancing lung cancer detection, addressing these limitations through future research will be essential for its continued development and practical application in clinical practice.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Riquelme, D. and Akhloufi, M. (2020) Deep Learning for Lung Cancer Nodules Detection and Classification in CT Scans. AI, 1, 28-67.
https://doi.org/10.3390/ai1010003
[2] Cancer Stat Facts: Lung and Bronchus Cancer.
https://seer.cancer.gov/statfacts/html/lungb.html
[3] Diviya, M., Krishna, P.S.V., Jagadeesh, Y.S., Shankar, C.S.U. and Vamsi, G. (2024) Lung Cancer Detection: Classification and Segmentation of CT Images Using 3D CNN. In: Mumtaz, S., Rawat, D.B. and Menon, V.G., Eds., Proceedings of the Second International Conference on Computing, Communication, Security and Intelligent Systems, Springer, 251-265.
https://doi.org/10.1007/978-981-99-8398-8_18
[4] Bushara, A.R., Vinod Kumar, R.S. and Kumar, S.S. (2023) LCD-Capsule Network for the Detection and Classification of Lung Cancer on Computed Tomography Images. Multimedia Tools and Applications, 82, 37573-37592.
https://doi.org/10.1007/s11042-023-14893-1
[5] Ausawalaithong, W., Thirach, A., Marukatat, S. and Wilaiprasitporn, T. (2018). Automatic Lung Cancer Prediction from Chest X-Ray Images Using the Deep Learning Approach. 2018 11th Biomedical Engineering International Conference (BMEiCON), Chiang Mai, 21-24 November 2018, 1-5.
https://doi.org/10.1109/bmeicon.2018.8609997
[6] Shieh, Y. and Bohnenkamp, M. (2017) Low-Dose CT Scan for Lung Cancer Screening: Clinical and Coding Considerations. Chest, 152, 204-209.
https://doi.org/10.1016/j.chest.2017.03.019
[7] Bhandary, A., Prabhu, G.A., Rajinikanth, V., Thanaraj, K.P., Satapathy, S.C., Robbins, D.E., et al. (2020) Deep-Learning Framework to Detect Lung Abnormality—A Study with Chest X-Ray and Lung CT Scan Images. Pattern Recognition Letters, 129, 271-278.
https://doi.org/10.1016/j.patrec.2019.11.013
[8] Mao, K., Tang, R., Wang, X., Zhang, W. and Wu, H. (2018) Feature Representation Using Deep Autoencoder for Lung Nodule Image Classification. Complexity, 2018, Article No. 12.
https://doi.org/10.1155/2018/3078374
[9] Ghosal, I., Sarkar, S.S. and El Hallaoui, I. (2020) Lung Nodule Classification Using Convolutional Autoencoder and Clustering Augmented Learning Method (CALM). HSDM@WSDM, 8.
[10] Prasad, U., Chakravarty, S. and Mahto, G. (2023) Lung Cancer Detection and Classification Using Deep Neural Network Based on Hybrid Metaheuristic Algorithm. Soft Computing, 28, 8579-8602.
https://doi.org/10.1007/s00500-023-08845-y
[11] Shafi, et al. (2022) Lung Cancer Detection System Utilizing Deep Learning Techniques for Nodule Detection.
[12] Wang, L. (2022) Deep Learning Techniques to Diagnose Lung Cancer. Cancers, 14, Article No. 5569.
https://doi.org/10.3390/cancers14225569
[13] Gayap, H.T. and Akhloufi, M.A. (2024) Deep Machine Learning for Medical Diagnosis, Application to Lung Cancer Detection: A Review. BioMedInformatics, 4, 236-284.
https://doi.org/10.3390/biomedinformatics4010015
[14] Asuntha, A.S.A. (2020) Deep Learning for Lung Cancer Detection and Classification. Multimedia Tools and Applications, 79, 7731-7762.
[15] Ziyad, S.R., Radha, V. and Vayyapuri, T. (2022) A Novel Lung Extraction Approach for LDCT Images Using Discrete Wavelet Transform with Adaptive Thresholding and Fuzzy C-Means Clustering Enhanced by Genetic Algorithm. Research on Biomedical Engineering, 38, 581-598.
https://doi.org/10.1007/s42600-022-00210-6
[16] Farheen, F., Shamil, M.S., Ibtehaz, N. and Rahman, M.S. (2021) Segmentation of Lung Tumor from CT Images Using Deep Supervision. arXiv:2111.09262.
[17] Soundharya Devi, R. and Nirmala Sugirtha Rajini, S. (2024) Transforming Lung Cancer Diagnosis: TDyWT Filtering for Enhanced Histopathological Imaging. African Journal of Biological Sciences, 6, 237-250.
[18] Zhao, H., Gu, Y., Zhao, Z., Du, B., Xu, Y. and Yu, R. (2024) WIA-LD2ND: Wavelet-Based Image Alignment for Self-Supervised Low-Dose CT Denoising. In: Linguraru, M.G., et al., Eds., Medical Image Computing and Computer Assisted Intervention – MICCAI 2024, Springer, 764-774.
https://doi.org/10.1007/978-3-031-72104-5_73
[19] Nizami, I.F., Ul Hasan, S. and Javed, I.T. (2014) A Wavelet Frames + K-Means Based Automatic Method for Lung Area Segmentation in Multiple Slices of CT Scan. 17th IEEE International Multi Topic Conference 2014, Karachi, 8-10 December 2014, 245-248.
https://doi.org/10.1109/inmic.2014.7097345
[20] Nazir, I., Haq, I.U., Khan, M.M., Qureshi, M.B., Ullah, H. and Butt, S. (2021) Efficient Pre-Processing and Segmentation for Lung Cancer Detection Using Fused CT Images. Electronics, 11, Article No. 34.
https://doi.org/10.3390/electronics11010034
[21] Dritsas, E. and Trigka, M. (2022) Lung Cancer Risk Prediction with Machine Learning Models. Big Data and Cognitive Computing, 6, Article No. 139.
https://doi.org/10.3390/bdcc6040139
[22] Rajalakshmi, S. and Maguteeswaran, R. (2022) Two-Level Filtering and Convolutional Neural Network with Dragonfly Optimization Techniques for Lung Cancer Detection. NeuroQuantology, 20, 8813-8823.
[23] Singh, G.A.P. and Gupta, P.K. (2018) Performance Analysis of Various Machine Learning-Based Approaches for Detection and Classification of Lung Cancer in Humans. Neural Computing and Applications, 31, 6863-6877.
https://doi.org/10.1007/s00521-018-3518-x
[24] Saha, A., Ganie, S.M., Pramanik, P.K.D., Yadav, R.K., Mallik, S. and Zhao, Z. (2024) VER-Net: A Hybrid Transfer Learning Model for Lung Cancer Detection Using CT Scan Images. BMC Medical Imaging, 24, Article No. 120.
https://doi.org/10.1186/s12880-024-01238-z
[25] Bikku, T. (2020) Multi-Layered Deep Learning Perceptron Approach for Health Risk Prediction. Journal of Big Data, 7, Article No. 50.
[26] Hayase, T. and Karakida, R. (2023) MLP-Mixer as a Wide and Sparse MLP.
[27] Alobaidy, M.A.A. and Saeed, S.Z. (2023) A Comparative Study of Multi-Layer Perceptron and Jordan Recurrent Neural Networks for Signals Classification in a Robotic System. Journal of Electronic Systems and Applications, 56, 547-551.
https://iieta.org/journals/jesa/paper/10.18280/jesa.560404
[28] Zou, Y., Yu, H., Yang, Z., Li, Z. and Akhtar, N. (2024) Improved MLP Point Cloud Processing with High-Dimensional Positional Encoding. Proceedings of the AAAI Conference on Artificial Intelligence, 38, 7891-7899.
https://doi.org/10.1609/aaai.v38i7.28625
[29] Mirjalili, S. (2015) Dragonfly Algorithm: A New Meta-Heuristic Optimization Technique for Solving Single-Objective, Discrete, and Multi-Objective Problems. Neural Computing and Applications, 27, 1053-1073.
https://doi.org/10.1007/s00521-015-1920-1
[30] Hosseini, S.H., Monsefi, R. and Shadroo, S. (2023) Deep Learning Applications for Lung Cancer Diagnosis: A Systematic Review. Multimedia Tools and Applications, 83, 14305-14335.
https://doi.org/10.1007/s11042-023-16046-w
[31] Javed, R., Abbas, T., Khan, A.H., Daud, A., Bukhari, A. and Alharbey, R. (2024) Deep Learning for Lungs Cancer Detection: A Review. Artificial Intelligence Review, 57, Article No. 197.
https://link.springer.com/article/10.1007/s10462-024-10807-1
[32] Zhang, C., Aamir, M., Guan, Y., Al-Razgan, M., Awwad, E.M., Ullah, R., et al. (2024) Enhancing Lung Cancer Diagnosis with Data Fusion and Mobile Edge Computing Using Densenet and CNN. Journal of Cloud Computing, 13, Article No. 91.
https://doi.org/10.1186/s13677-024-00597-w
[33] El-Zaart, A. and El-Arwadi, T. (2015) A New Edge Detection Method for CT-Scan Lung Images. Journal of Biomedical Engineering and Medical Imaging, 2, 1-9.
[34] Noviana, R., Febriani, Rasal, I. and Lubis, E.U.C. (2017). Axial Segmentation of Lungs CT Scan Images Using Canny Method and Morphological Operation. AIP Conference Proceedings, 1867, Article ID: 020022.
https://doi.org/10.1063/1.4994425
[35] Bhatt, A.D., Gupta, U., Wagholikar, V. and Pise, U.V. (2012) Edge Detection and Segmentation of Multiple Contours from CT Scan Images. Computer-Aided Design and Applications, 9, 501-516.
https://doi.org/10.3722/cadaps.2012.501-516
[36] Preetha, J., Selvarajan, S. and Suresh, P. (2012) Comparative Analysis of Various Image Edge Detection Techniques for Two Dimensional CT Scan Neck Disc Image. International Journal of Computer Science & Communication, 3, 57-61.
[37] Bandyopadhyay, S.K. (2012) Edge Detection from CT Images of Lung. International Journal of Engineering Science & Advanced Technology, 2, 34-37.
[38] Wang, H., Qiu, S., Zhang, B. and Xiao, L. (2024) Multilevel Attention Unet Segmentation Algorithm for Lung Cancer Based on CT Images. Computers, Materials & Continua, 78, 1569-1589.
https://doi.org/10.32604/cmc.2023.046821
[39] Wang, H.F., Yang, P., Xu, C., Min, L., Wang, S. and Xu, B. (2022) Lung CT Image Enhancement Based on Total Variational Frame and Wavelet Transform. International Journal of Imaging Systems & Technology, 32, 1604-1614.
https://doi.org/10.1002/ima.22725
[40] Indumathi, R. and Vasuki, R. (2023) Segmentation and Feature Extraction in Lung CT Images with Deep Learning Model Architecture. SN Computer Science, 4, Article No. 552.
https://doi.org/10.1007/s42979-023-01892-0
[41] Fu, X.T., Meng, X.Y., Zhou, J. and Ji, Y. (2023) High-Risk Factor Prediction in Lung Cancer Using Thin CT Scans: An Attention-Enhanced Graph Convolutional Network Approach. Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, 5-8 December 2023, 1905-1910.
https://doi.org/10.1109/BIBM58861.2023.10385853
[42] Alzubaidi, M.A., Otoom, M. and Jaradat, H. (2023) Comprehensive and Comparative Global and Local Feature Extraction Framework for Lung Cancer Detection Using CT Scan Images. IEEE Access, 9, 158140-158154.
https://doi.org/10.1109/ACCESS.2021.3129597
[43] Indumathi, R. and Vasuki, R. (2023) Segmentation and Feature Extraction in Lung CT Images with Deep Learning Model Architecture. SN Computer Science, 4, Article No. 552.
https://doi.org/10.1007/s42979-023-01892-0
[44] Savitha, G. and Jidesh, P. (2020) A Holistic Deep Learning Approach for Identification and Classification of Sub-Solid Lung Nodules in Computed Tomographic Scans. Computers & Electrical Engineering, 84, Article ID: 106626.
https://doi.org/10.1016/j.compeleceng.2020.106626
[45] Katase, A., et al. (2022) Development and Performance Evaluation of a Deep Learning Lung Nodule Detection System. BMC Medical Imaging, 22, Article No. 203.
https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00938-8
[46] Veasey, B.P., Broadhead, J., Dahle, M., Seow, A. and Amini, A.A. (2020) Lung Nodule Malignancy Prediction from Longitudinal CT Scans with Siamese Convolutional Attention Networks. IEEE Open Journal of Engineering in Medicine and Biology, 1, 257-264.
https://doi.org/10.1109/OJEMB.2020.3023614
[47] Forte, G.C., Altmayer, S., Silva, R.F., Stefani, M.T., Libermann, L.L., Cavion, C.C., et al. (2022) Deep Learning Algorithms for Diagnosis of Lung Cancer: A Systematic Review and Meta-Analysis. Cancers, 14, Article No. 3856.
https://doi.org/10.3390/cancers14163856
[48] Gautam, N., Basu, A. and Sarkar, R. (2023) Lung Cancer Detection from Thoracic CT Scans Using an Ensemble of Deep Learning Models. Neural Computing and Applications, 36, 2459-2477.
https://link.springer.com/article/10.1007/s00521-023-09130-7
[49] Jamshidi, B. and Rostamy-MalKhalifeh, M. (2024) Diagnosing, Classifying, and Predicting Brain Tumors from MRI Images Using VGG-19 and ANN Transfer Learning. Series International Medical, 1, 1-15.
[50] Bhandary, A., Prabhu, G.A., Rajinikanth, V., Thanaraj, K.P., Satapathy, S.C., Robbins, D.E., et al. (2020) Deep-Learning Framework to Detect Lung Abnormality—A Study with Chest X-Ray and Lung CT Scan Images. Pattern Recognition Letters, 129, 271-278.
https://doi.org/10.1016/j.patrec.2019.11.013
[51] da Silva, G.L.F., da Silva Neto, O.P., Silva, A.C., de Paiva, A.C. and Gattass, M. (2017) Lung Nodules Diagnosis Based on Evolutionary Convolutional Neural Network. Multimedia Tools and Applications, 76, 19039-19055.
https://doi.org/10.1007/s11042-017-4480-9
[52] Singh, G.A.P. and Gupta, P.K. (2018) Performance Analysis of Various Machine Learning-Based Approaches for Detection and Classification of Lung Cancer in Humans. Neural Computing and Applications, 31, 6863-6877.
https://doi.org/10.1007/s00521-018-3518-x
[53] Potghan, S., Rajamenakshi, R. and Bhise, A. (2018) Multi-Layer Perceptron Based Lung Tumor Classification. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, 29-31 March 2018, 499-502.
https://doi.org/10.1109/iceca.2018.8474864
[54] Naqi, S.M., Sharif, M. and Jaffar, A. (2018) Lung Nodule Detection and Classification Based on Geometric Fit in Parametric Form and Deep Learning. Neural Computing and Applications, 32, 4629-4647.
https://doi.org/10.1007/s00521-018-3773-x
[55] Shaffie, A., Soliman, A., Fraiwan, L., Ghazal, M., Taher, F., Dunlap, N., et al. (2018) A Generalized Deep Learning-Based Diagnostic System for Early Diagnosis of Various Types of Pulmonary Nodules. Technology in Cancer Research & Treatment, 17, Article 1533033818798800.
[56] Kaur, S., Hooda, R., Mittal, A., Akashdeep and Sofat, S. (2017) Deep CNN-Based Method for Segmenting Lung Fields in Digital Chest Radiographs. In: Singh, D., Raman, B., Luhach, A.K. and Lingras, P., Eds., Advanced Informatics for Computing Research, Springer, 185-194.
[57] Xie, Y., Zhang, J., Xia, Y., Fulham, M. and Zhang, Y. (2018) Fusing Texture, Shape and Deep Model-Learned Information at Decision Level for Automated Classification of Lung Nodules on Chest CT. Information Fusion, 42, 102-110.
https://doi.org/10.1016/j.inffus.2017.10.005
[58] Nibali, A., He, Z. and Wollersheim, D. (2017) Pulmonary Nodule Classification with Deep Residual Networks. International Journal of Computer Assisted Radiology and Surgery, 12, 1799-1808.
https://doi.org/10.1007/s11548-017-1605-6
[59] Zhang, T. (2017) Deep Belief Network for Lung Nodules Diagnosed in CT Imaging. International Journal of Performability Engineering, 13, 1358-1370.
https://doi.org/10.23940/ijpe.17.08.p17.13581370
[60] Causey, J.L., Zhang, J., Ma, S., Jiang, B., Qualls, J.A., Politte, D.G., et al. (2018) Highly Accurate Model for Prediction of Lung Nodule Malignancy with CT Scans. Scientific Reports, 8, Article No. 9286.
https://doi.org/10.1038/s41598-018-27569-w
[61] Mamun, M., Mahmud, M.I., Meherin, M. and Abdelgawad, A. (2023) LCDctCNN: Lung Cancer Diagnosis of CT Scan Images Using CNN Based Model. 2023 10th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, 23-24 March 2023, 205-212.
https://doi.org/10.1109/spin57001.2023.10116075
[62] Thangamani, M., Koti, M.S., et al. (2024) Lung Cancer Diagnosis Based on Weighted Convolutional Neural Network Using Gene Data Expression. Scientific Reports, 14, Article No. 3656.
https://doi.org/10.1038/s41598-024-54124-7
[63] Naseer, I., Akram, S., Masood, T., Jaffar, A., Khan, M.A. and Mosavi, A. (2022) Performance Analysis of State-of-the-Art CNN Architectures for LUNA16. Sensors, 22, Article No. 4426.
https://doi.org/10.3390/s22124426
[64] Zhang, W., Liu, J. and Chen, M. (2023) Lung Cancer Detection Using Convolutional Neural Networks. Journal of Medical Imaging, 10, 123-134.
[65] Li, X.M., Wang, Y. and Zhao, L. (2022) Breast Cancer Classification Using Support Vector Machines with RBF Kernel. IEEE Transactions on Medical Imaging, 41, 567-578.

Copyright © 2025 by authors and Scientific Research Publishing Inc.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.