A Study on Diagnostic Assist Systems of Chronic Obstructive Pulmonary Disease from Medical Images by Deep Learning

Abstract

In this paper, we propose a new diagnostic assist system for medical images using deep learning algorithms. Specifically, we aim to develop a diagnostic support system for the very early stage of chronic obstructive pulmonary disease (COPD) based on CT images. COPD is a disease that develops mainly from long-term smoking, and a large population of latent, undiagnosed patients is thought to exist. If COPD is discovered in this very early period and living conditions are improved, subsequent progression to severe disease can be avoided in many cases, so a system that assists the diagnosis performed by professional radiologists is needed. We show some experimental results obtained with the constructed system.

Share and Cite:

Kimura, T., Kawakami, T., Kikuchi, A., Ooev, R., Akiyama, M. and Horikoshi, H. (2018) A Study on Diagnostic Assist Systems of Chronic Obstructive Pulmonary Disease from Medical Images by Deep Learning. Journal of Computer and Communications, 6, 21-31. doi: 10.4236/jcc.2018.61003.

1. Introduction

Chronic Obstructive Pulmonary Disease (COPD) is one of the new lifestyle-related diseases added in 2013 to the target diseases of “Health Japan 21”, planned by the Ministry of Health, Labor and Welfare [1]. COPD, also known as “tobacco disease”, is a chronic inflammatory pulmonary disease caused by long-term inhalation exposure to harmful substances, mainly tobacco smoke. The condition is a respiratory disease that results in progressive and irreversible airflow obstruction. Moreover, it is known to cause dyspnea on exertion and impaired exercise capacity, degrading the patient’s quality of life and threatening life prognosis. COPD prevalence in Japan is estimated to be 8.6%, and the number of patients nationwide is estimated at about 5.3 million. However, according to a Ministry of Health, Labor and Welfare survey of medical institutions, only about 260 thousand of these patients are actually receiving treatment, so roughly 95% of patients are considered untreated. In areas like Hokkaido, where the smoking rate is high, countermeasures are particularly necessary.

The diagnosis of COPD is generally performed by a respiratory examination called a spirometry test. However, this examination cannot detect the disease unless it has advanced to a severe stage. Therefore, chest CT imaging has recently been applied to detect early-stage COPD [1]. There are two types of COPD: emphysematous COPD and non-emphysematous COPD. In emphysematous COPD, an emphysematous lesion develops in the alveolus, and the alveolar wall is destroyed at that site. Therefore, in chest CT imaging diagnosis, it is possible to distinguish between an emphysematous lesion and normal lung by examining this difference. In particular, very small lesion structures of about several mm, considered to be the smallest unit of an emphysematous lesion, can be detected using high resolution CT (HRCT). This has made it possible to find a preliminary COPD group among patients who have no subjective symptoms or show no abnormality in spirometry examination. However, the current image analysis method does not report a finding until the amount of alveolar-wall destruction exceeds a predetermined threshold value. Therefore, very early COPD, in which lesions are just beginning to appear in otherwise normal lung, is hard to detect.

In addition, medical image data and diagnostic information keep increasing with the enhancement of medical inspection equipment and the need for multi-modality diagnosis, so the diagnostic load on radiologists is growing.

Therefore, in this study, we attempt to construct a diagnosis support system for the earlier stages of COPD. Using the deep learning method, the system systematically provides diagnostically useful information to radiologists. We aim to alleviate the diagnostic burden on radiologists and to significantly increase the number of patients diagnosed by using this system.

2. Medical Image Diagnosis Assist System Using Deep Learning

The effectiveness of medical image recognition using artificial neural networks (ANN) has been reported for some time. However, because ANN algorithms were not yet well developed, the recognition accuracy for complicated medical images was not very high, so general image processing technologies such as SIFT were adopted to compute feature quantities of medical images. Based on the extracted feature information, recognition mechanisms were constructed with pattern matching techniques [2]-[6]. However, in the image recognition contest ILSVRC2012 (ImageNet Large Scale Visual Recognition Challenge) [7], Hinton et al. achieved a markedly higher recognition rate with a deep learning method based on a deep convolutional neural network (DCNN) [8] [9]. Various image recognition tasks have since been solved with similar architectures [10]. Much research on medical image recognition is also now based on deep learning algorithms [11].

The authors have also verified that a DCNN can learn the region classification task and the organ extraction task on whole-body CT images with high accuracy. In addition, we are developing a system that detects abnormalities such as tumors, nodules, and pleural effusion in lung CT images with considerably high accuracy. Several diagnostic support systems aimed at lung cancer screening have been reported for chest CT images, as have studies on the classification of diffuse lung diseases [12]. However, research on diagnostic support for COPD using deep learning has not yet been conducted, and this study addresses that gap.

3. Chronic Obstructive Pulmonary Disease (COPD)

According to the COPD guidelines of the Japan Respiratory Society, COPD is diagnosed when the following two conditions are satisfied [1]. First, the FEV1/FVC ratio (FEV1%) is less than 70% in spirometry testing. FEV1/FVC (FEV1%) is the ratio of FEV1 to FVC. Forced vital capacity (FVC) is the volume of air, measured in liters, that can forcibly be blown out after full inspiration; it is the most basic maneuver in spirometry tests. Forced expiratory volume in 1 second (FEV1) is the volume of air that can forcibly be blown out in one second after full inspiration. In obstructive diseases such as COPD, FEV1 is diminished because of increased airway resistance to expiratory flow. Second, other diseases that can cause airflow obstruction, such as asthma and bronchiectasis, are excluded.
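The spirometry criterion above can be expressed as a small check. This is a minimal sketch; the function names are ours, not part of the guidelines.

```python
def fev1_percent(fev1_l: float, fvc_l: float) -> float:
    """FEV1% = FEV1/FVC expressed as a percentage (volumes in liters)."""
    return 100.0 * fev1_l / fvc_l

def airflow_obstruction(fev1_l: float, fvc_l: float) -> bool:
    """COPD criterion from the guidelines: FEV1/FVC below 70%."""
    return fev1_percent(fev1_l, fvc_l) < 70.0

# Example: FEV1 = 1.8 L, FVC = 3.0 L -> FEV1% = 60%, obstruction present.
print(airflow_obstruction(1.8, 3.0))  # True
```

Note that a positive check only satisfies the first condition; the differential diagnosis (second condition) still requires clinical judgment.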

Disease staging of COPD is classified into four stages based on the ratio to the standard FEV1 in the GOLD guidelines [13]: stage I is mild airflow obstruction, stage II is moderate airflow obstruction, stage III is advanced airflow obstruction, and stage IV is extremely advanced airflow obstruction, as shown in Table 1.

Table 1. The GOLD guidelines for severity stages of COPD.

The spirometry test is indispensable for the diagnosis of COPD, but image inspections such as chest X-ray and chest CT scans are also used as auxiliary tests.

Chest X-ray is useful for excluding other diseases and for diagnosing lesions of advanced COPD, but it is difficult to detect small COPD lesions. Meanwhile, high resolution CT (HRCT) can now also detect airway lesions, so it is useful for early diagnosis of COPD. HRCT can detect lesions of a few mm in size, the smallest unit of pulmonary emphysema. At an emphysematous site in COPD, the connective lung tissue is destroyed and large air pockets develop that replace lung tissue. This destruction of the connective tissue of the lungs leads to emphysema, which then contributes to poor airflow and, finally, poor absorption and release of respiratory gases. A high-resolution CT scan of the chest shows the distribution of emphysema throughout the lungs and can also be useful to exclude other lung diseases. In detail, the enlarged air space generated by the destruction of the pulmonary alveolar region is observed as a low attenuation area (LAA), shown in Figure 1. The LAA is useful for detecting emphysematous change since it correlates to some degree with pathological morphological changes of lung tissue. Analysis of the LAA makes it possible to capture early-stage COPD that shows no abnormality in subjective symptoms or in the FEV1% value.

4. COPD Diagnosis Assist System

In this study, we identify the LAA region in chest CT scan images and automatically estimate the stage of COPD, aiming to develop a system that can be used for diagnosis support.

4.1. Evaluation of Emphysema Predominant Type COPD by Goddard Method

Figure 1. Low attenuation area in the lung.

Goddard’s classification [14] is a visual evaluation method for COPD focusing on the spread of emphysematous lesions in lung CT scan images. An emphysematous lesion site appears as a low absorption region (LAA) in lung CT. In general, the CT value of the normal lung field is about −700 HU, and the emphysematous lesion site is set to about −910 HU or less. Wang and colleagues reported that the severity of disease in spirometry tests correlates most accurately when the LAA is set below −950 HU [15]. In the Goddard method, the emphysematous lesion is therefore defined as the LAA of −950 HU or less. The ratio of this LAA area to the whole lung field is evaluated and scored. The Goddard classification for findings of pulmonary emphysema in HRCT images is as follows:

• Goddard classification-1 point: Scattered emphysematous lesions 1 cm or less in diameter.

• Goddard classification-2 points: Large size LAA due to the fusion of emphysematous lesions.

• Goddard classification-3 points: LAA occupies an even larger area by the more pronounced fusion of the emphysematous lesions.

• Goddard classification-4 points: Most of the lung is occupied by emphysematous lesions and only a small amount of normal lung remains.

Based on the Goddard classification, a visual evaluation of pulmonary emphysema is performed. In this evaluation, a total of six representative lung fields are evaluated and scored. That is, three levels are taken: the upper lung field (the level of the upper edge of the aortic arch), the middle lung field (the level of the tracheal bifurcation), and the lower lung field (the level near the upper edge of the diaphragm); the right and left lung fields at each level are evaluated. A score of 0 to 4 points is given depending on the ratio of the LAA area to the area of each lung field. The total score is the sum of the six lung fields’ scores (maximum 24 points). A total of less than 8 points is mild, 8 to less than 16 points is moderate, and 16 points or more is severe. Table 2 shows the visual evaluation of pulmonary emphysema.
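The aggregation over the six lung fields can be sketched as follows. The severity bands come from the text (<8 mild, 8 to <16 moderate, ≥16 severe); the function name is ours.

```python
def goddard_severity(field_scores):
    """Total Goddard score over the six lung fields (three levels x
    left/right), each field scored 0-4 points, so the maximum is 24.
    Returns the total and its severity band."""
    assert len(field_scores) == 6 and all(0 <= s <= 4 for s in field_scores)
    total = sum(field_scores)
    if total < 8:
        return total, "mild"
    if total < 16:
        return total, "moderate"
    return total, "severe"

print(goddard_severity([1, 1, 2, 0, 1, 2]))  # (7, 'mild')
```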

In a semi-quantitative evaluation method like the Goddard method, variation between different observers and variation across multiple evaluations by the same observer can be a problem for accurate diagnosis. To address this variation, a method is adopted in which a fixed threshold is applied to the CT value and the LAA area is discriminated automatically. In addition to evaluating the ratio of the LAA area to the total lung field area, methods that compare the average CT value, or the peak value of the CT value histogram, against a predetermined threshold are also used. In this study, we follow the Goddard classification, but 0 points are given when the LAA region is less than 5% of the target lung field.
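The automatic LAA discrimination described above reduces to thresholding CT values inside the lung field. A minimal sketch with NumPy, assuming the slice is already converted to HU and a boolean lung-field mask is available:

```python
import numpy as np

LAA_THRESHOLD_HU = -950  # emphysematous lesion threshold used in the Goddard method

def laa_ratio(hu_slice: np.ndarray, lung_mask: np.ndarray) -> float:
    """Fraction of lung-field pixels whose CT value is at or below the
    LAA threshold; this ratio is what the per-field Goddard score is
    based on."""
    lung_pixels = hu_slice[lung_mask]
    return float(np.mean(lung_pixels <= LAA_THRESHOLD_HU))

# Toy 2x2 slice: 3 of 4 lung pixels fall below the threshold.
hu = np.array([[-980.0, -960.0], [-900.0, -970.0]])
mask = np.ones_like(hu, dtype=bool)
print(laa_ratio(hu, mask))  # 0.75
```

A ratio below 5% would then score 0 points for that field, per the modified rule adopted in this study.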

Table 2. Visual evaluation of pulmonary emphysema.

4.2. The Proposed Emphysema Predominant Type COPD Diagnosis Assist System

The outline of the emphysema predominant type COPD diagnosis assist system in this research is as follows.

1) Using the CT image of the whole body as input, a set of lung CT images is extracted by the classification system based on deep learning. This extraction is performed by the learned deep convolution neural network (DCNN).

2) Three representative lung CT images, i.e., the upper lung field image at the level of the upper edge of the aortic arch, the middle lung field image at the level of the tracheal bifurcation, and the lower lung field image at the level of the upper edge of the diaphragm, are extracted from the chest CT images. This extraction is also executed by a DCNN trained on the representative lung images.

3) After identifying the lung field area in the three extracted representative images, the CT value of each internal pixel is checked and the distribution of CT values is calculated. In the conventional method, the LAA region is roughly identified by a predetermined threshold such as −950 HU or less. However, because early disease progression cannot be observed while CT values remain above the threshold, we use the full CT value distribution for a more refined evaluation.

4) A stage classifier for early COPD is constructed by SVM machine learning on the CT value distribution calculated in 3). For SVM training, the evaluation score based on the Goddard method is used as the correct label.

5) In order to measure the degree of progression of the disease, the difference distribution of CT values between past and current CT scan images of the same patient is estimated. This difference distribution is calculated from the outputs of processes 1) to 3). The SVM also learns this difference distribution of CT values for prognosis prediction.

4.3. Deep Learning Method of COPD Diagnosis Assist System

The DICOM data of the CT scan images is input directly to our diagnosis assist system. DICOM is a standard format for handling medical images uniformly; the DICOM standard specifies not only a file format but also a communication protocol used for remote diagnosis. In addition to image information, DICOM files contain many tags that record patient information, examination information, and the like. Since each pixel of the image data is represented with 16-bit gradation, DICOM has higher expressive capability than JPEG or PNG image data.
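DICOM stores raw 16-bit pixel values that must be mapped to Hounsfield units via the standard RescaleSlope and RescaleIntercept tags before thresholds such as −950 HU can be applied (libraries such as pydicom expose these tags directly). The arithmetic itself is a sketch; the default intercept of −1024 is a common CT value but varies per scanner.

```python
def to_hounsfield(raw: int, slope: float = 1.0, intercept: float = -1024.0) -> float:
    """Convert a stored 16-bit pixel value to Hounsfield units using the
    DICOM rescale tags (RescaleSlope / RescaleIntercept)."""
    return raw * slope + intercept

# With the defaults, a stored value of 74 maps to -950 HU,
# the LAA threshold used in this paper.
print(to_hounsfield(74))  # -950.0
```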

A learning data set of CT images tagged with classification teacher labels is prepared for the DCNN training corresponding to procedures 1) and 2) above. The data set is a set of whole-body CT images created by assigning teacher labels to the image data of 53 individuals. About 300 to 500 CT images are scanned per individual. CT images of the same person scanned on different dates are treated as separate individuals. For learning process 1), the names of the organs are labeled. For learning process 2), each of the three representative types of lung CT image, and all others, are labeled respectively.

The eleven-layer DCNN called AlexNet [8] is adopted as the learning network structure, as shown in Table 3. The rectified linear unit (ReLU) is used as the activation function in the convolution and fully connected layers, and the identification value is calculated by the softmax function over the output-layer units, whose number corresponds to the number of classes to be classified. Although this DCNN is not very deep and has a basic structure, it is confirmed that this structure demonstrates sufficient performance; the reason is that there are few classification classes and the learning data are continuous.
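The two functions named above behave as follows. This is a minimal NumPy sketch of the activations only, not of the network itself; the max-subtraction in the softmax is a standard numerical-stability step that does not change the result.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit, used in the convolution and fully
    connected layers: negative inputs are clamped to zero."""
    return np.maximum(0.0, x)

def softmax(z):
    """Softmax over the output units, yielding class probabilities
    that sum to one."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])  # raw outputs for a 3-class example
probs = softmax(logits)
print(int(probs.argmax()))  # 0: the class with the largest logit wins
```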

The lung field image extraction in process 1) uses the organ extraction system developed in our previous study. This DCNN-based system classifies and extracts the image group in which the designated organs (heart, lung, liver, stomach, spleen, or kidney) appear from among the image data of one individual. Since multiple organs are captured on one image of the chest and abdomen, it is necessary to discriminate whether a specific organ exists or not. Therefore, the number of units in the output layer is set to 7, the number of organ classes, and the output value of each unit is judged against a threshold value. Figure 2 shows the learning curve for the training data. It can be confirmed that organs are correctly detected with an accuracy of 99.9% or more after learning, and that the trained DCNN-based system detects organs with an accuracy of 95.24% on unknown test data.
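The per-unit thresholding described above is a multi-label decision rather than a single argmax, since one slice can contain several organs at once. A sketch, assuming the paper's 7-unit output layer; the seventh class name ("other") and the 0.5 threshold are our illustrative assumptions.

```python
# Six organs are named in the text; the seventh label is our assumption.
ORGANS = ["heart", "lung", "liver", "stomach", "spleen", "kidney", "other"]

def detect_organs(outputs, threshold=0.5):
    """Multi-label decision: each of the 7 output units is thresholded
    independently instead of taking a single argmax, so several organs
    can be reported for one slice."""
    return [name for name, y in zip(ORGANS, outputs) if y >= threshold]

print(detect_organs([0.9, 0.8, 0.7, 0.1, 0.2, 0.05, 0.0]))
# ['heart', 'lung', 'liver']
```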

Figure 2. Learning curves of the extraction system for training data.

Table 3. Network structure of the DCNN for learning process 1) and 2).

The representative lung image extraction process is also learned by a DCNN using AlexNet. The training CT images are tagged as one of the three representative lung images or as other lung field images, for four-class classification; the output layer is therefore composed of 4 units. As shown in Figure 3, this DCNN also achieves a learning accuracy of 99% or more on the training data, and test images are detected with an accuracy of 96%.

The CT value distribution in process 3) is calculated by aggregating the CT values within a certain range in the lung field area. The lung field area is extracted by general image processing techniques. As shown in Figure 4, the outside of the body is removed with a mask for the representative lung field CT image, and then the CT value distribution is calculated. The CT values of the LAA are also easily visualized. On the basis of this result, the pathological condition can be scored by the Goddard method.
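The masked distribution computation can be sketched with a NumPy histogram. The HU range and bin count here are our illustrative choices, not values from the paper.

```python
import numpy as np

def ct_value_distribution(hu_slice, lung_mask, lo=-1100.0, hi=-500.0, bins=60):
    """Normalized histogram of CT values inside the lung-field mask.
    The normalized bin counts form the feature vector handed to the
    stage classifier in the next step."""
    values = hu_slice[lung_mask]
    hist, edges = np.histogram(values, bins=bins, range=(lo, hi))
    return hist / hist.sum(), edges

hu = np.full((4, 4), -700.0)   # normal lung parenchyma around -700 HU
hu[0, :2] = -960.0             # a small emphysematous (LAA) patch
mask = np.ones_like(hu, dtype=bool)
hist, edges = ct_value_distribution(hu, mask)
print(round(float(hist.sum()), 6))  # 1.0 after normalization
```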

Figure 3. Learning curves of the classification system for three representative lung CT images.

Figure 4. The distribution of CT values of a representative lung CT image.

In step 4), the CT value distribution obtained in 3) is learned by a kernel SVM to classify the COPD stage. Here, we focus only on early-stage COPD, evaluated as a total score of less than 8 points by the Goddard method. The cases are classified into three classes: 0 points in total, 1 to 3 points, and 4 to 7 points. The radial basis function is used as the SVM kernel. As a result of learning, it is confirmed that classification can be performed with very high accuracy on test data.
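An RBF-kernel SVM over histogram features can be sketched with scikit-learn. The synthetic distributions below are stand-ins of our own making (Gaussian bumps whose mass shifts toward the LAA range for higher Goddard bands), not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
BINS = np.linspace(-1100, -500, 40)  # HU bin centres (illustrative)

def fake_hist(shift_hu):
    """Synthetic CT-value distribution: a bump centred at (-700 - shift_hu)
    HU, so a larger shift moves mass toward the -950 HU LAA range."""
    h = np.exp(-0.5 * ((BINS + 700 + shift_hu) / 60.0) ** 2)
    h = np.clip(h + rng.normal(0.0, 0.005, h.shape), 0.0, None)
    return h / h.sum()

# Three classes standing in for the Goddard bands: 0, 1-3, and 4-7 points.
X = np.array([fake_hist(s) for s in [0.0] * 10 + [125.0] * 10 + [250.0] * 10])
y = np.array([0] * 10 + [1] * 10 + [2] * 10)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([fake_hist(250.0)]))  # a shifted sample lands in the worst band
```

In the actual system, the labels come from radiologists' Goddard scoring rather than synthetic shifts.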

The difference distribution of CT values between past and current CT scan images of the same patient is estimated in step 5), and the degree of disease progression is measured from this difference. Figure 5 shows a resultant difference distribution; worsening of the disease appears as a change in the CT value distribution. The SVM also learns this difference distribution of CT values for prognosis prediction.
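The difference distribution itself is simply the bin-wise difference of two normalized histograms. A minimal sketch with a coarse three-bin example of our own; mass moving into the low-HU bin suggests worsening.

```python
import numpy as np

def progression_signal(hist_past, hist_now):
    """Bin-wise difference of normalized CT-value histograms between two
    scans of the same patient. Positive values in low-HU bins indicate
    mass moving toward the LAA range."""
    return np.asarray(hist_now) - np.asarray(hist_past)

# Coarse illustrative bins: [below -950 HU, -950..-800 HU, above -800 HU]
past = np.array([0.05, 0.15, 0.80])
now = np.array([0.12, 0.18, 0.70])
diff = progression_signal(past, now)
print(bool(diff[0] > 0))  # True: more mass below -950 HU in the newer scan
```

Because both inputs are normalized, the difference sums to zero; only its shape carries the progression signal that the SVM learns.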

5. Conclusion

In this paper, we proposed a new diagnostic assist system for the early stage of COPD using a deep learning method, which reduces the diagnostic burden on radiologists and can systematically provide useful information for diagnosis. The system learns whole-body CT scan images end-to-end and presents diagnosis support information. By handling the CT value distribution in the lung field directly, our system can analyze with higher accuracy than the conventional method of discriminating the LAA region with a general threshold value. As future work, our system should be evaluated by radiologists and then improved by reflecting the doctors' evaluations.

Figure 5. The difference distribution of CT values between the past and the current CT images.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] The Japanese Respiratory Society (2004) Guidelines for the Diagnosis and Treatment of COPD (Chronic Obstructive Pulmonary Disease). 2nd Edition.
[2] Teramoto, A., Fujita, H., Takahashi, K., Yamamuro, O., Tamaki, T., Nishio, M. and Kobayashi, T. (2014) Hybrid Method for the Detection of Pulmonary Nodules Using Positron Emission Tomography/Computed Tomography: A Preliminary Study. Int. J. CARS, 9, 59-69. https://doi.org/10.1007/s11548-013-0910-y
[3] Teramoto, A., Adachi, H., Tsujimoto, M., Fujita, H., Takahashi, K., Yamamuro, O., Tamaki, T. and Nishio, M. (2015) Automated Detection of Lung Tumors in PET/CT Images Using Active Contour Filter. Proc. of SPIE Medical Imaging 2015: Computer-Aided Diagnosis, 9414, 94142V-1-94142V-6.
[4] American Cancer Society (2015) Cancer Facts and Figures 2015.
[5] Ide, M. and Suzuki, Y. (2005) Is Whole-Body FDG-PET Valuable for Health Screening? Eur. J. Nucl. Med. Mol. Imaging, 32, 339-341. https://doi.org/10.1007/s00259-005-1774-3
[6] Cui, Y., Zhao, B., Akhurst, T.J., Yan, J. and Schwartz, L.H. (2008) CT-Guided, Automated Detection of Lung Tumors on PET Images. Proc. of SPIE Medical Imaging 2008: Computer-Aided Diagnosis, 6915, 69152N-1-69152N-6. https://doi.org/10.1117/12.770549
[7] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C. and Fei-Fei, L. (2015) ImageNet Large Scale Visual Recognition Challenge. Int. J. Comp. Vis., 115, 211-252. https://doi.org/10.1007/s11263-015-0816-y
[8] Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012) Imagenet Classification with Deep Convolutional Neural Networks. Adv. Neur. In., 25, 1106-1114.
[9] LeCun, Y., Bengio, Y. and Hinton, G.E. (2015) Deep Learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539
[10] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S. and Darrell, T. (2014) Caffe: Convolutional Architecture for Fast Feature Embedding. ACM Conference on Multimedia, 675-678. https://doi.org/10.1145/2647868.2654889
[11] Chen, J., Chen, J., Ding, H.Y., Pan, Q.S., Hong, W.D., Xu, G., et al. (2015) Use of an Artificial Neural Network to Construct a Model of Predicting Deep Fungal Infection in Lung Cancer Patients. Asian Pac J Cancer Prev., 16, 5095-5099. https://doi.org/10.7314/APJCP.2015.16.12.5095
[12] Suzuki, S., Iida, N., Shouno, H. and Kido, S. (2016) Architecture Design of Deep Convolutional Neural Network for Diffuse Lung Disease Using Representation Separation Information. Proc. of the Int. Conf. on Parallel and Distributed Processing Techniques and Applications 2016 (PDPTA’16), 1, 387-393.
[13] Vestbo, J. (2013) Diagnosis and Assessment. Global Strategy for the Diagnosis, Management, and Prevention of Chronic Obstructive Pulmonary Disease. Global Initiative for Chronic Obstructive Lung Disease. 9-17.
[14] Goddard, P.R., et al. (1982) Computed Tomography in Pulmonary Emphysema. Clin. Radiol, 33, 379-387. https://doi.org/10.1016/S0009-9260(82)80301-2
[15] Song, Y., Cai, W., Huang, H., Wang, X., Zhou, Y., Fulham, M. and Feng, D. (2014) Lesion Detection and Characterization with Context Driven Approximation in Thoracic FDG PET-CT Images of NSCLC Studies. IEEE Trans. Med Imag, 33, 408-421. https://doi.org/10.1109/TMI.2013.2285931

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.