Advancements in AI Research for Digestive System Diseases

Abstract

With the “boom” of AI, researchers have made significant progress in assisting clinical disease diagnosis, prediction, and treatment. This article provides an overview of models built with both traditional machine learning and deep learning methods, as well as research progress on robotics in digestive system diseases, aiming to provide references for further studies. Scholars at home and abroad have developed applications that allow users to upload images of stool samples, which are then analyzed against big data to score bowel preparation, thereby improving its quality. In some gastrointestinal diseases, such as Helicobacter pylori (Hp) infection, Barrett’s esophagus and esophageal cancer, chronic atrophic gastritis and gastric cancer, and inflammatory bowel disease (IBD), artificial intelligence has shown diagnostic capability comparable to that of professional endoscopists, and some applications achieve real-time diagnosis. In liver, gallbladder, and pancreatic diseases, artificial intelligence can assist preoperative diagnosis from imaging or pathology, support robotic remote operation during surgery, and predict postoperative risk. Different scholars have compared various algorithm networks for different diseases to identify the best-performing models; on this basis, methods such as the MCA attention mechanism, feature selection, gradient descent, and ensemble models can further improve diagnostic performance. In the future, AI may not only help patients self-manage single or multiple diseases in a standardized and reasonable manner, but also predict and treat digestive system diseases at the genetic level.

Share and Cite:

Feng, Q., Li, J., Tan, X. and Zhang, Q. (2023) Advancements in AI Research for Digestive System Diseases. Journal of Biosciences and Medicines, 11, 173-202. doi: 10.4236/jbm.2023.1112016.

1. Introduction

The three pillars of artificial intelligence (AI) are data, computing power, and algorithms. Data serves as the “feed” for AI algorithms and is ubiquitous in the era of big data. The rise of “health data science” has stimulated interdisciplinary thinking and methods, creating a knowledge-sharing system among computer science, biostatistics, epidemiology, and clinical medicine, which has propelled the development of AI in the field of medicine [1]. Algorithms act as the driving force behind AI, playing a pivotal role in clinical diagnosis, prediction, and treatment. AI algorithms are mainly divided into traditional machine learning algorithms and deep learning (DL) algorithms. Traditional machine learning algorithms include linear regression, support vector machines (SVM), random forests, decision trees, and logistic regression, while deep learning is primarily realized in deep neural networks (DNN), recurrent neural networks (RNN), and convolutional neural networks (CNN). Furthermore, the combination of AI with 5G technology has also advanced the progress of robotics in the field of surgery [2]. This article provides an overview of models built using both traditional machine learning methods and deep learning methods, as well as research progress on robotics in digestive system diseases, aiming to provide references for further studies (Picture 1).
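As a minimal illustration of the two algorithm families named above, the sketch below fits a logistic regression (traditional machine learning) and a small multilayer perceptron (a simple deep model) to the same synthetic tabular data; the dataset, feature count, and hyperparameters are arbitrary stand-ins, not drawn from any study cited here.

```python
# Minimal comparison of a traditional ML classifier and a small neural
# network on synthetic "tabular clinical" data. Purely illustrative:
# the features and labels are randomly generated, not real patient data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)       # traditional ML
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                   random_state=0).fit(X_tr, y_tr)           # small neural net

print(f"logistic regression accuracy: {lr.score(X_te, y_te):.2f}")
print(f"neural network accuracy:      {nn.score(X_te, y_te):.2f}")
```

On tabular data of this kind the two families often perform similarly; deep models pull ahead mainly on raw images and signals such as the endoscopic and radiological data discussed below.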

2. Research Progress of AI in Bowel Preparation

With increasing age, the detection rate of polyps exhibits an upward trend. In addition to high-risk populations, research has found that individuals aged 40 and above in the general population also require colonoscopy [3] [4].

Picture 1. Research applications in digestive system diseases.

However, as many as one-fourth of the screened population experience inadequate bowel preparation during the colonoscopy procedure [5]. To prevent missed diagnoses and misdiagnoses during the examination, healthcare professionals often employ written and verbal education to improve the quality of patients’ bowel preparation. With the emergence of AI technology, smartphone applications have become a novel educational tool, providing convenience and assurance for patients’ bowel cleansing preparations [6]. Furthermore, the accuracy of AI continues to improve as Internet of Things data platforms are established and sample data increases [7].

In recent years, scholars both domestically and internationally have developed applications that apply big data analysis to uploaded stool images to generate a bowel preparation score. Medical professionals can then provide improvement suggestions based on the score to help patients enhance the quality of their bowel preparation. A review compared seven studies of smartphone applications used for bowel preparation before colonoscopy. Six of the studies demonstrated that smartphone applications were able to improve the quality of bowel preparation, and five found that their use significantly increased patient satisfaction during the perioperative period of colonoscopy [8]. Further research has been conducted in this field. van der Zander et al. [9] developed a smartphone application called Prepit (Ferring B.V.), which consists of six parts: colonoscopy examination date, educational tools, bowel preparation schedule, low-fiber diet examples, visually assisted preparation instructions, and examples of clear liquid intake. They provided oral and written education to 86 patients, while the remaining 87 patients followed the steps outlined in the smartphone application for bowel cleansing preparation. The results indicated that the smartphone application significantly improved the quality of bowel preparation, particularly in the right colon; however, no significant difference was observed in patient satisfaction. Cho et al. [10] reported that the application group had significantly higher BBPS scores and satisfaction levels than the control group, consistent with the conclusions reached by Madhav [6]. Furthermore, some applications developed at home and abroad include an intelligent automatic scoring function for bowel preparation.
Researchers such as Wen Jing [11] and Xun Linjuan [12] found that AI-assisted educational applications significantly improved the quality of bowel preparation in various segments of the intestine. Sheng Ruli et al. [13] further found that smartphone apps equipped with an AI-assisted bowel preparation scoring system could effectively improve the quality of bowel preparation in overweight patients. However, Jung et al. [14] reached the opposite conclusion using the OBPS score: the average OBPS score for the non-application group vs. the application group was 2.79 ± 2.064 vs. 2.53 ± 1.264 (p = 0.950), although patients in the application group used a relatively lower dosage of polyethylene glycol (PEG) (3713.2 ± 405.8 vs. 3979.2 ± 102.06, p = 0.001). Because of the limited number of patient cases in some studies, insufficient sample sizes may affect the statistical reliability of the results. Additionally, factors such as patient age, proficiency with the app, smartphone system, and examination equipment can also affect the accuracy of bowel scores and polyp detection rates. Therefore, further research is needed to develop a universally applicable and freely downloadable application.
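For readers unfamiliar with how such group comparisons are reported, the sketch below recomputes a Welch t-test from the summary statistics quoted above. The per-group sample sizes are hypothetical (the excerpt does not state them), and the original study may have used a different test, so the resulting p-value is illustrative only.

```python
# Welch's t-test from reported summary statistics (mean, SD).
# NOTE: nobs1/nobs2 are hypothetical placeholders, not the study's actual n.
from scipy.stats import ttest_ind_from_stats

# OBPS score: non-application group vs. application group (values from the text)
t, p = ttest_ind_from_stats(mean1=2.79, std1=2.064, nobs1=50,
                            mean2=2.53, std2=1.264, nobs2=50,
                            equal_var=False)   # Welch's (unequal-variance) variant
print(f"t = {t:.2f}, p = {p:.3f}")
```

With moderate group sizes, a mean difference of 0.26 against standard deviations of 1.3 to 2.1 is far from significance, which matches the qualitative finding that the app did not change the OBPS score.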

3. Research Progress of AI in Gastrointestinal Diseases

3.1. Hp Infection

Deep learning technology has attracted attention and research from scholars both domestically and internationally for the diagnosis and classification of Hp infection, as it can use imaging, biological, and clinical data to train models for the automatic detection and localization of Hp infection.

Nakashima et al. [15] developed an AI system using two novel laser image-enhanced endoscopy (IEE) modes, BLI-bright and linked color imaging (LCI). Compared with white light imaging (WLI), BLI-bright and LCI showed significantly higher AUC values, while there was no difference in diagnostic time among the three AI systems. The same group also developed a computer-assisted diagnosis (CAD) system that builds deep learning models from endoscopic still images captured with WLI and LCI. The results showed that LCI-CAD outperformed WLI-CAD by 9.2% in non-infection cases, by 5.0% in current-infection cases, and by 5.0% in post-eradication cases; the diagnostic accuracy of endoscopists using LCI images was consistent with this [16]. Brazilian scholars Gonçalves et al. [17] reported that CNN models can detect Hp infection and the inflammation spectrum in gastric mucosal biopsy pathology, and they provide the DeepHP database. A study on digital pathology (DP) found that virtual slide images obtained by scanning Warthin-Starry (W-S) silver-stained tissue with a 20× scanner can reliably identify Hp, with an F1 score of 0.829 for the AI classifier [18]. Additionally, Song Xiaobin et al. [19] proposed the feasibility of constructing an Hp tongue-image classification model with the AlexNet convolutional neural network by analyzing the relationship between Helicobacter pylori and characteristic tongue-image information such as tongue color and shape and coating color and moisture. However, this remains a concept, and no specific research results have yet been reported by other scholars.
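The AUC comparisons above can be illustrated with a small simulation: two hypothetical scoring systems are applied to the same cases, with "mode B" given better class separation than "mode A" (loosely mimicking the reported LCI-vs-WLI gap). All scores below are simulated, not study data.

```python
# Illustrative ROC-AUC comparison of two hypothetical imaging modes.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                  # simulated Hp ground truth
score_a = y * 0.6 + rng.normal(0, 0.6, size=500)  # weaker separation ("WLI-like")
score_b = y * 1.2 + rng.normal(0, 0.6, size=500)  # stronger separation ("LCI-like")

print(f"AUC, mode A: {roc_auc_score(y, score_a):.3f}")
print(f"AUC, mode B: {roc_auc_score(y, score_b):.3f}")
```

AUC summarizes ranking quality across all decision thresholds, which is why it is the metric of choice when comparing imaging modes rather than a single operating point.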

3.2. Barrett’s Esophagus and Esophageal Cancer

Barrett’s esophagus (BE) is a precancerous condition for esophageal adenocarcinoma (EAC), with a cancer risk of up to 0.5%. The progression to EAC typically passes through normal esophageal epithelium, metaplasia with or without intestinal metaplasia (IM), low-grade dysplasia, high-grade dysplasia, and finally EAC [20]. To accurately distinguish between BE and early-stage EAC, Alanna et al. [21] developed a real-time deep learning AI system. High et al. [22] demonstrated that, in differentiating normal tissue from Barrett’s esophagus, a dual pre-training model trained sequentially on the ImageNet and HyperKvasir databases was superior to a single pre-training model trained solely on ImageNet. Additionally, Shahriar et al. [23] developed a ResNet101 model with good sensitivity and specificity for predicting the grade of dysplasia in Barrett’s esophagus: 81.3% sensitivity and 100% specificity for low-grade dysplasia, and sensitivities and specificities exceeding 90% for nondysplastic Barrett’s esophagus and high-grade dysplasia. Furthermore, Dutch researchers including Manon et al. [24] developed a classifier based on hematoxylin and eosin (H & E) staining images and a model based on mass spectrometry imaging (MSI), both of which could predict the grade of dysplasia in Barrett’s esophagus. The H & E-based classifier performed well in differentiating tissue types, while the MSI-based model more accurately differentiated dysplastic grade and progression risk. In the diagnosis of BE, the gastroesophageal junction (GEJ) and squamocolumnar junction (SCJ) are of particular importance, and a distance of ≥1 cm between the two is highly indicative of the presence of BE.
One study used a fully convolutional neural network (FCN) to automatically identify the extent of the GEJ and SCJ in endoscopic images, enabling targeted pathological biopsy of suspicious areas; its segmentation of the BE extent matched the accuracy of expert manual assessment [25]. Regarding follow-up after BE treatment, Sharib et al. [26] generated depth maps with a deep learning-based depth estimator network and achieved 3D esophageal reconstruction from 2D endoscopic images. This AI system could automatically measure the Prague C & M criteria, quantify the Barrett’s epithelium area (BEA), and assess the extent of islands. For 131 BE patients, the system constructed 3D esophageal models for pre- and post-treatment comparison, enabling effective evaluation of treatment outcomes and improved follow-up.

Esophageal cancer, apart from adenocarcinoma, is predominantly squamous cell carcinoma, which accounts for over 90% of cases. In a study of esophageal squamous cell carcinoma (ESCC), Meng et al. [27] compared the diagnostic performance of endoscopists with different levels of experience using a CAD system in combined WLI and NBI modes. Before referencing the CAD system, the accuracy (91.0% vs. 78.3%, p < 0.001), sensitivity (90.0% vs. 76.1%, p < 0.001), and specificity (93.0% vs. 82.5%, p = 0.002) of expert endoscopists were significantly higher than those of non-experts. After referencing the CAD system, the gaps between non-experts and experts in accuracy (88.2% vs. 93.2%, p < 0.001), sensitivity (87.6% vs. 92.3%, p = 0.013), and specificity (89.5% vs. 94.7%, p = 0.124) narrowed markedly. Zhao et al. [28] diagnosed early esophageal cancer and benign esophageal lesions by constructing an Inception V3 image classification system; AI-NBI diagnosed faster than doctors, with sensitivity, specificity, and accuracy consistent with theirs.
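The accuracy, sensitivity, and specificity figures quoted throughout this review all derive from a 2×2 confusion matrix; the short helper below shows the arithmetic on made-up counts, not data from any cited study.

```python
# Deriving diagnostic metrics from confusion-matrix counts (illustrative).
def diagnostic_metrics(tp, fp, fn, tn):
    """Return (accuracy, sensitivity, specificity) from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=90, fp=7, fn=10, tn=93)
# accuracy = 183/200 = 0.915, sensitivity = 90/100 = 0.900, specificity = 93/100 = 0.930
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

Sensitivity governs missed diagnoses and specificity governs false alarms, which is why studies in this section report both alongside overall accuracy.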

3.3. Chronic Atrophic Gastritis and Gastric Cancer

Chronic atrophic gastritis (CAG) and gastric intestinal metaplasia (GIM) are precancerous conditions of gastric cancer [29]. Zhang et al. [30] compared the diagnoses of a deep learning model with those of three experts and found that the CNN system had accuracies of 0.93, 0.95, and 0.99 for mild, moderate, and severe atrophic gastritis, respectively; that is, the CNN detected moderate and severe atrophic gastritis better than mild atrophic gastritis. This conclusion was consistent with the findings of Zhao et al.’s U-Net model [31]. Wu et al. [32] conducted experiments using 4167 electronic gastroscopy images, including early gastric cancer, chronic superficial gastritis, gastric ulcers, gastric polyps, and normal images. They trained a CNN model and verified its excellent performance in distinguishing early gastric cancer from benign images; the model also accurately located early gastric cancer and supported real-time recognition in videos. Goto et al. [33] designed an AI classifier based on the EfficientNet-B1 deep learning model to differentiate between intramucosal and submucosal cancers. They tested 200 case images with the AI classifier alone, endoscopists alone, and a method combining AI with endoscopic experts; the measured accuracy, sensitivity, specificity, and F1 score were (77% vs 72.6% vs 78.0%), (76% vs 53.6% vs 76.0%), (78% vs 91.6% vs 80.0%), and (0.768 vs 0.662 vs 0.776), respectively. The results indicated that collaboration between artificial intelligence and endoscopic experts can improve the diagnosis of early gastric cancer infiltration depth. Additionally, Tang et al. [34] found that an NBI AI system for diagnosing early gastric cancer (EGC) outperformed both senior and junior endoscopists.

Artificial intelligence has been widely applied in preoperative predictive assessment, intraoperative treatment, and postoperative rehabilitation for curative resection of early gastric cancer. Bang et al. [35] used machine learning (ML) models to predict the likelihood of curative resection for undifferentiated early gastric cancer (U-EGC) from variables identified prior to endoscopic submucosal dissection (ESD), such as patient age, gender, endoscopic lesion size, morphology, and presence of ulceration. Of the 18 models they developed, an extreme gradient boosting classifier achieved the best performance, with an F1 score of 95.7%. In a retrospective study, Kuroda et al. [36] compared two types of robotic gastrectomy: ultrasonic shears-assisted and conventional forceps-assisted. The console time in the ultrasonic shears group (310 minutes [interquartile range (IQR), 253 - 369 minutes]) and the console time for gastrectomy (222 minutes [IQR, 177 - 266 minutes]) were significantly shorter than in the conventional forceps group (332 minutes [IQR, 294 - 429 minutes], p = 0.022, and 247 minutes [IQR, 208 - 321 minutes], p = 0.004, respectively). The ultrasonic shears group also had less blood loss (20 mL [IQR, 10 - 40 mL] vs. 30 mL [IQR, 16 - 80 mL]; p = 0.014). Shen et al. [37] studied 32 patients undergoing elective robotic gastrectomy and explored the role of the “diurnal light and nocturnal darkness” theory in postoperative recovery using an artificial intelligence-based heart rate variability monitoring device. They found that maintaining the “diurnal light and nocturnal darkness” state helped restore autonomic circadian rhythms, alleviate postoperative inflammation, and promote organ function recovery in gastric cancer patients.
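A gradient-boosting prediction workflow of the kind described above can be sketched as follows. sklearn's GradientBoostingClassifier stands in for the study's extreme gradient boosting classifier, and the pre-ESD variables and outcome labels are entirely synthetic.

```python
# Hedged sketch: gradient boosting on synthetic pre-ESD variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n = 800
# hypothetical predictors: age, lesion size (mm), morphology code, ulceration
X = np.column_stack([rng.integers(40, 85, n), rng.normal(15, 6, n),
                     rng.integers(0, 3, n), rng.integers(0, 2, n)])
# synthetic "curative resection" label driven by lesion size and ulceration
y = ((X[:, 1] < 20) & (X[:, 3] == 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
print(f"F1 on held-out data: {f1_score(y_te, clf.predict(X_te)):.3f}")
```

F1 (the harmonic mean of precision and recall) is the metric the study reports, and it is well suited when the positive class (curative resection) is the clinically important one.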

3.4. IBD

AI has shown new advantages and significant potential in gastrointestinal endoscopy examinations, demonstrating excellent performance in both endoscopy and biopsy samples. In addition, it has achieved good results in the diagnosis, treatment, and prediction of gastrointestinal diseases [38] [39] . For instance, in terms of diagnosis, endoscopy is crucial for evaluating inflammatory bowel disease (IBD). Some studies have utilized an ensemble learning method based on fine-tuned ResNet architecture to construct a “meta-model”. Compared to a single ResNet model, the combined model has improved the IBD detection performance of endoscopic imaging in distinguishing positive (pathological) samples from negative (healthy) samples (P vs N), distinguishing ulcerative colitis from Crohn’s disease samples (UC vs CD), and distinguishing ulcerative colitis from negative (healthy) samples (UC vs N) [40] .
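The ensemble "meta-model" idea of combining several fine-tuned networks reduces, in its simplest form, to soft voting: averaging the class probabilities of the member models. The sketch below uses random vectors in place of real ResNet outputs.

```python
# Soft-voting ensemble over mock per-model softmax outputs (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_models, n_images, n_classes = 3, 4, 3   # classes: e.g. UC / CD / healthy

# per-model class probabilities, shape (n_models, n_images, n_classes)
logits = rng.normal(size=(n_models, n_images, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

ensemble_probs = probs.mean(axis=0)            # average the members' softmax
ensemble_pred = ensemble_probs.argmax(axis=-1) # final class per image
print("ensemble predictions:", ensemble_pred)
```

Averaging tends to cancel the uncorrelated errors of individual members, which is one plausible reason the combined model outperformed a single ResNet in the cited work.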

Additionally, capsule endoscopy (CE) is also an accurate clinical tool for diagnosing and monitoring Crohn’s disease (CD). Research has used the EfficientNet-B5 network to evaluate the accuracy of identifying intestinal strictures in Crohn’s disease patients, achieving an accuracy rate of approximately 80% and effectively distinguishing strictures from different grades of ulcers [41] . There have also been studies using CNN algorithms to automatically grade the severity of ulcers captured in capsule endoscopy images [42] . Moreover, Mascarenhas et al. [43] developed and tested a CNN-based model by collecting a large number of CE images, discovering that deep learning algorithms can be used to detect and differentiate small bowel lesions with moderate to high bleeding potential under the Saurin classification.

In terms of treatment and prediction, Charilaou et al. [39] used conventional logistic regression (cLR) as a reference model and compared it with more complex ML models. They constructed multiple inpatient mortality (IM) prediction models and surgery prediction models, converting the best-performing QLattice model (a symbolic regression equation) into a web-based calculator (IM-IBD calculator), which validated well in stratifying the risk of inpatient mortality and predicting the surgery subgroups. Other studies have used machine learning to process large numbers of predictive factors for predicting complications in pediatric Crohn’s disease [44].

3.5. Colorectal Cancer (CRC)

In the diagnosis and staging of colorectal cancer, two applications of artificial intelligence are computer-aided detection (CADe) and computer-aided diagnosis or differentiation (CADx). CADe is used for detecting lesions, while CADx characterizes the detected lesions through real-time diagnosis of tissue using optical biopsy, which utilizes the properties of light [45] . In the early identification and differential diagnosis of colorectal cancer, Kudo et al. [46] trained an EndoBRAIN system using 69,142 endoscopic images. This system analyzes cell nuclei, crypt structures, and microvasculature in endoscopic images to identify colonic tumors. Comparative testing revealed that EndoBRAIN had higher accuracy and sensitivity in both staining and NBI modes compared to resident endoscopists and experts. Additionally, a study based on deep learning demonstrated that an AI CAD system can assist inexperienced endoscopists in accurately predicting the histopathology of colorectal polyps, maintaining diagnostic accuracy of over 80% regardless of polyp size, location, and morphology [47] .

Furthermore, combining AI with medical imaging can help radiologists determine the T stage, molecular subtype, adjuvant therapy, and prognosis of patients with colorectal cancer (CRC) more efficiently and accurately [48]. For instance, Wang et al. [49] retrospectively trained and validated an AI-assisted imaging diagnosis system to identify positive circumferential resection margin (CRM) status using 12,258 high-resolution pelvic MRI T2-weighted images from 240 rectal cancer patients. The system employed Faster R-CNN, a region-based convolutional neural network method, and achieved an accuracy of 0.932 in determining CRM status, with an automatic image recognition time of only 0.2 seconds. In another study using the Faster R-CNN approach, Tong et al. [50] trained on 28,080 MRI images to identify metastatic lymph nodes, achieving good performance with an AUC of 0.912.

In colorectal cancer surgery, Kitaguchi et al. [51] retrospectively annotated over 82 million frames for phase and action classification tasks, and 4000 frames for tool segmentation tasks, from 300 laparoscopic colorectal surgery (LCRS) videos. This led to the LapSig300 model, which covers nine surgical phases (P1 - P9), three actions (dissection, exposure, and other actions), and five target tools (grasper, T1; dissecting point, T2; linear cutter, T3; Maryland dissector, T4; scissors, T5). The overall accuracy of the multi-phase classification model was 81.0%, with a mean intersection over union (mIoU) of 51.2% for tools T1 - T5. Although LapSig300 achieved high accuracy in phase, action, and tool recognition, recognition accuracy can still be improved, and the available dataset is not yet large enough, necessitating further research. Furthermore, Ichimasa et al. [52] proposed a novel analysis method for T2 colorectal cancer (CRC) resected by endoscopic full-thickness resection (EFTR). This simple, non-invasive Random Forest (RF) model was developed and validated specifically for T2 CRC, following a new predictive tool for lymph node metastasis (LNM) in T1 CRC. However, because EFTR is technically immature, the RF model requires further validation for predicting LNM in T2 CRC patients (Table 1).
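The mIoU metric reported for the tool segmentation task can be computed as below on tiny synthetic label masks; the helper and masks are illustrative only, not the LapSig300 evaluation code.

```python
# Mean intersection-over-union (mIoU) for integer-labelled segmentation masks.
import numpy as np

def miou(pred, target, n_classes):
    """Mean IoU over classes present in either mask."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:                       # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 1]])
target = np.array([[0, 0, 1], [0, 1, 1], [2, 1, 1]])
# per-class IoU: 1.0, 0.8, 0.5 -> mIoU = 2.3/3 ≈ 0.767
print(f"mIoU = {miou(pred, target, n_classes=3):.3f}")
```

Because IoU penalizes both over- and under-segmentation, a tool mIoU of 51.2% indicates substantial boundary error even when classification accuracy is high.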

4. Advancements in AI Research in Biliary Diseases

4.1. Gallstone, Cholecystitis, and Cholecystocolic Fistula

In 2022, the first human clinical trial on single-arm robotic-assisted cholecystectomy was conducted by Capital Medical University Affiliated Hospital and other institutions, and the surgical procedure was successful. The surgical incision was only 2.5 cm, which not only reduced surgical trauma but also alleviated patient pain [53] . Rasa et al. [54] conducted a retrospective study on 40 cases of single-port robotic cholecystectomy (SPRC), and the results showed a median operative time of 93.5 minutes, with an average time of 101.2 ± 27.0 minutes and an average hospital stay of 1.4 ± 0.6 days. Fourteen patients (35.0%) experienced Clavien-Dindo grade I complications, including five cases (12.5%) related to wound problems. Tschuor et al. [55] reviewed 26 patients who underwent robotic-assisted cholecystectomy, with an intraoperative blood loss of 50 mL (range: 0 - 500 mL). Only mild postoperative complications (Clavien-Dindo ≤ II within 90 days) were observed, without any reports of major complications or death.

Cholecystocolic fistula (CCF) is a rare complication of biliary disease, with a preoperative imaging detection rate of less than 8%. Krzeczowski et al. [56] reported the successful management of a CCF patient using the da Vinci® Xi surgical system: the procedure went smoothly, the patient was discharged on the first postoperative day, and no complications were observed during six months of follow-up. Another reported case involved a CCF discovered during robotic cholecystectomy, in which the fistula was identified intraoperatively during dissection and the operation was completed robotically; however, no report is available on this patient’s postoperative condition and prognosis. Because robotic CCF surgery cases remain few and its potential advantages over laparoscopic surgery need further validation, more research is required.

4.2. Cholangiocarcinoma and Gallbladder Cancer

Table 1. Research applications of AI in gastrointestinal diseases.

Wang et al. [57] divided 65 eligible patients with extrahepatic cholangiocarcinoma (ECC) into two groups: Group A (patients with postoperative pathological stage Tis, T1, or T2 ECC) and Group B (patients with postoperative pathological stage T3 or T4 ECC). They then used the MaZda software to delineate the regions of interest (ROIs) on MRI images and analyzed the texture features of these regions. Finally, the selected texture features were incorporated into a binary logistic regression model using the Enter method to establish a prediction model. The results showed that this model had certain value in predicting the presence of extrahepatic bile duct invasion in ECC, with a sensitivity of 86.0% and specificity of 86.4% for predicting stage T3 or above. However, further prospective experiments are still needed to validate this finding. Similarly, Yao et al. [58] used the MaZda software to establish a PSO-SVM radiomics model to predict the degree of differentiation (DD) and lymph node metastasis (LNM) in patients with ECC. The results showed that the average AUC of the model for DD in the training group and test group was 0.8505 and 0.8461, respectively, while the average AUC for LNM was 0.9036 and 0.8800, respectively. Another study also demonstrated that machine learning using MRI and CT radiomic features can effectively differentiate combined hepatocellular-cholangiocarcinoma (cHCC-CC) from cholangiocarcinoma (CC) and hepatocellular carcinoma (HCC), with good predictive performance [59].
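A radiomics pipeline of the kind above, texture features from ROIs fed to an SVM and scored by AUC, can be sketched as follows. The features are random stand-ins for MaZda texture descriptors, and sklearn's default SVC replaces the PSO-tuned SVM of the study.

```python
# Hedged sketch: SVM on mock "texture features", evaluated by AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 10))   # mock texture features per ROI (synthetic)
# synthetic binary label (e.g. "LNM present") driven by two features + noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=3))
svm.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

Feature standardization before the SVM matters here because texture descriptors typically arrive on very different scales; PSO in the cited study additionally tuned the SVM hyperparameters, which this sketch omits.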

In recent years, robotic surgery has been widely used in the treatment of cholangiocarcinoma in hepatobiliary surgery. Chang et al. [60] reported a case series of 34 patients with hilar cholangiocarcinoma, including 13 cases of Bismuth type I, 3 cases of Bismuth type IIIa, 9 cases of Bismuth type IIIb, and 9 cases of Bismuth type IV, treated with robotic surgery. In addition, Yin et al. [61] performed a robotic-assisted radical resection for a patient with Bismuth type IIIb hilar cholangiocarcinoma. Shi et al. [62] also performed a robot-assisted radical resection for a patient with hilar cholangiocarcinoma, and the surgery was successful with approximately 50 ml intraoperative blood loss. The patient had no postoperative complications and was discharged six days after the surgery. These studies suggest that robotic surgery has shown good efficacy in the treatment of cholangiocarcinoma.

Gallbladder cancer is the most common malignancy of the biliary tract with a poor prognosis. One study compared the clinical outcomes of laparoscopic extended cholecystectomy (LEC) with open extended cholecystectomy (OEC). The results showed that LEC was comparable to OEC in terms of short-term clinical outcomes. There were no statistically significant differences between the LEC and OEC groups in terms of operation time (p = 0.134), intraoperative blood loss (p = 0.467), postoperative morbidity (p = 0.227), or mortality (p = 0.289). In terms of long-term outcomes, the 3-year disease-free survival rate (43.1% vs 57.2%, p = 0.684) and overall survival rate (62.8% vs 75.0%, p = 0.619) were similar between the OEC and LEC groups [63] .

5. Research Progress on AI in Liver Diseases

5.1. Liver Fibrosis

Methods for the diagnosis of liver fibrosis include invasive liver biopsy and non-invasive imaging, serological diagnostic models, and transient elastography. With the development of artificial intelligence in pathology and radiology, we can utilize technologies such as artificial intelligence to extract information that is difficult to perceive or identify with the naked eye. This is of great significance for the pathological analysis of liver biopsy tissue sections and other non-invasive detection methods. Additionally, artificial intelligence also plays a supportive role and has research value in prognostic assessment and surgical procedures for patients with liver fibrosis [64] [65] .

In research on ultrasound elastography for diagnosing liver fibrosis, Xie et al. [66] used convolutional neural networks to analyze and extract 11 ultrasound image features from 100 liver fibrosis patients. They compared four classic models (AlexNet, VGGNet-16, VGGNet-19, and GoogLeNet) and found that GoogLeNet had better recognition accuracy than the others; by controlling variables such as batch size, learning rate, and number of iterations, they verified the network’s performance and obtained the best recognition effect. Fu et al. [67] employed three classifiers: an SVM, a sparse representation classifier, and a deep learning classifier based on the LeNet-5 neural network. They trained, validated, and tested these on ultrasound images of 354 patients who underwent liver resection surgery, performing automated classification with a binary task (S0/S1/S2 vs. S3/S4) and two further tasks (S0/S1 vs. S2/S3/S4 and S0 vs. S1/S2/S3/S4). Accuracy was around 90% for the binary classification and around 80% for the other tasks, with the deep learning classifier slightly more accurate than the two traditional models.

In the CT imaging-based diagnosis of liver fibrosis, Wu et al. [68] demonstrated that multi-slice CT (MSCT) based on artificial intelligence (AI) algorithms provided a new approach for clinical diagnosis of liver cirrhosis and fibrosis. However, further in-depth research is still needed in multiple centers and large hospitals. Furthermore, related studies reported the construction of a liver fibrosis staging network (LFS network) based on contrast-enhanced portal venous phase CT images, which achieved an accuracy of approximately 85.2% in diagnosing significant fibrosis (F2 - F4), advanced fibrosis (F3 - F4), and cirrhosis (F4) [69] .

5.2. Fatty Liver

Artificial intelligence can effectively identify patients with non-alcoholic steatohepatitis (NASH) and advanced fibrosis, as well as accurately assess the severity of non-alcoholic fatty liver disease (NAFLD) [70] . Okanoue et al. [71] developed a novel non-invasive system called NASH-Scope, which consists of 11 features. After training and validation on 446 patients with NAFLD and non-NAFLD, the results showed that NASH-Scope achieved an area under the curve (AUC) and sensitivity both exceeding 90%, and was able to accurately distinguish between NAFLD and non-NAFLD cases. Zamanian et al. [72] applied a deep learning algorithm based on B-mode images to classify ultrasound images of patients with fatty liver disease, achieving a high accuracy rate of 98.64%. Another study suggested that using the CNN model Inception v3 in B-mode ultrasound imaging significantly improves the evaluation of hepatic steatosis [73] .

5.3. Hepatocellular Carcinoma

Hepatocellular carcinoma (HCC) accounts for 75%-85% of primary liver cancer cases, and its risk factors vary by region; in China, chronic hepatitis B virus infection and/or exposure to aflatoxin are the main contributing factors [74]. With the rapid development of artificial intelligence (AI) in medical imaging, AI models based on CT and MR images have been widely applied in HCC for automatic segmentation, lesion detection, characterization, risk stratification, treatment response prediction, and automated classification of liver nodules [75]. Scodellaro et al. [76] proposed a novel AI-based pipeline that uses convolutional neural networks to produce virtual hematoxylin-and-eosin-stained images and applies color- and texture-based AI algorithms to automatically identify regions with different HCC progression features, such as steatosis, fibrosis, and cirrhosis; compared with manual segmentation by histopathologists, the AI approach achieved an accuracy above 90%. Xu et al. [77] constructed a CT-image-based diagnostic model using Support Vector Machines (SVM) that performed well in distinguishing HCC from intrahepatic cholangiocarcinoma (ICCA). Additionally, Rela et al. [78] applied several classification techniques, including SVM, K-Nearest Neighbors (KNN), Naïve Bayes (NB), Decision Tree (DT), Composite, and Discriminant classifiers, to liver CT images of 68 patients to differentiate hepatocellular carcinoma from liver abscess (LA). The SVM classifier outperformed the others in accuracy and specificity, trailing only the Discriminant Analysis classifier slightly in sensitivity; overall, the SVM classifier performed best.
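A multi-classifier comparison of the kind reported by Rela et al. can be sketched with scikit-learn. The features below are synthetic stand-ins, not CT radiomics, and the four models are a subset of the classifiers named above; the point is how accuracy, sensitivity, and specificity fall out of one confusion matrix per model.

```python
# Hedged sketch of a classifier bake-off: SVM, KNN, naive Bayes, and a
# decision tree scored on accuracy, sensitivity, and specificity.
# Synthetic binary data stands in for the HCC-vs-abscess CT features.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

def metrics(model):
    tn, fp, fn, tp = confusion_matrix(
        y_te, model.fit(X_tr, y_tr).predict(X_te)).ravel()
    return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),   # recall on positives
            "specificity": tn / (tn + fp)}   # recall on negatives

results = {name: metrics(m) for name, m in {
    "SVM": SVC(), "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(), "DT": DecisionTreeClassifier(random_state=2)}.items()}
for name, m in results.items():
    print(name, m)
```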
Relevant studies have shown that gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced MRI can be used to evaluate the degree of differentiation of HCC [79]. Because CT and MRI perform poorly on lesions smaller than 1.0 cm and biopsy remains the gold standard for diagnosing HCC, Chen et al. [80] developed a diagnostic tool for pathological image classification of HCC. They evaluated several architectures, including ResNet-34, ResNet-50, and DenseNet, and selected ResNet-34 as the benchmark architecture for the AI model because of its superior performance. The model achieved sensitivities, specificities, and accuracies all above 98% on the validation and test sets, and demonstrated more stable performance than the experts. Chen et al. also applied transfer learning to image classification of colorectal cancer and invasive ductal carcinoma of the breast, where the algorithm's performance was slightly lower than for HCC.

With the rapid development of artificial intelligence, a growing number of clinical trials are exploring the feasibility of robot-assisted liver resection. Tanaka et al. [81] found no significant differences between robot-assisted liver resection (RALR) and laparoscopic liver resection (LLR) in blood loss, transfusion rate, postoperative complication rate, mortality, or length of hospital stay. However, Liu et al. [82] reported that, compared with laparoscopic major liver resection, robot-assisted major liver resection involved less blood loss (118.9 ± 99.1 ml vs. 197.0 ± 186.3 ml, P = 0.002); although operative time was longer (255.5 ± 56.3 min vs. 206.8 ± 69.2 min, P < 0.001), this gap gradually narrowed as surgeon experience increased. Individual cases have also been reported abroad. Machado [83] described a successful extensive hepatocellular carcinoma resection using the Glissonian approach in a 77-year-old man; the surgery went smoothly, and the patient was discharged on postoperative day 8. Varshney [84] reported a 70-year-old man with multifocal hepatocellular carcinoma who underwent robot-assisted total right hepatectomy, with minimal blood loss (400 ml) but a longer operative time (520 min). These findings suggest that robot-assisted liver resection has the potential to provide more precise operation while reducing certain surgical risks.
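The RALR-vs-LLR comparisons above rest on two-sample tests of summary statistics (mean ± SD). As a worked sketch, a Welch t-test can be run directly from the reported blood-loss figures; note that the group sizes below are assumed for illustration and are not taken from [82], so the resulting P value will only approximate the published one.

```python
# Welch's two-sample t-test from summary statistics (mean, SD, n).
# Means/SDs are the blood-loss values quoted above; the sample sizes
# (nobs1, nobs2) are assumptions for illustration, not from the paper.
from scipy.stats import ttest_ind_from_stats

# Blood loss (ml): robotic 118.9 +/- 99.1 vs laparoscopic 197.0 +/- 186.3
t, p = ttest_ind_from_stats(mean1=118.9, std1=99.1, nobs1=80,   # assumed n
                            mean2=197.0, std2=186.3, nobs2=80,  # assumed n
                            equal_var=False)                    # Welch's test
print(f"t={t:.2f}, p={p:.4f}")
```

With groups of roughly this size, the difference is significant at conventional thresholds, consistent with the reported P = 0.002.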

6. Research Progress of AI in Pancreatic Diseases

The auxiliary diagnosis of pancreatic diseases relies mainly on imaging and pathological images, with commonly used examinations including CT, MRI, MRCP, and EUS. Several studies have used deep learning algorithms to improve diagnostic performance. For example, Jiawen et al. [85] developed the DeepCT-PDAC model from contrast-enhanced CT using deep learning to predict the overall survival (OS) of patients with pancreatic ductal adenocarcinoma (PDAC) before and after surgery. Another study compared four networks, NLLS, GRU, CNN, and U-Net, through quantitative analysis (SSIM and nRMSE scores), qualitative analysis (parameter comparison), and Bland-Altman (consistency) analysis; GRU performed best, and the authors proposed combining a GRU with an attention layer to analyze concentration curves from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) while ensuring stable estimation of the extended parameters. This MRI technique can be used for non-invasive detection of various diseases, including pancreatic cancer [86]. Furthermore, Vilas-Boas et al. [87] developed a deep learning algorithm on endoscopic ultrasound (EUS) images of pancreatic cysts that automatically identifies mucinous pancreatic cysts with an accuracy above 98%. A DCNN system using EUS-FNA stained histopathological images to differentiate pancreatic cell clusters has also been reported, with discrimination performance comparable to that of pathologists [88].
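Two of the comparison tools mentioned above, nRMSE and Bland-Altman analysis, reduce to a few lines of NumPy. The sketch below uses a simulated parameter map and one common normalization convention (RMSE over the reference's dynamic range); the cited study may normalize differently.

```python
# Minimal numpy sketch: normalised RMSE between a reference and an estimated
# parameter map, plus Bland-Altman bias and 95% limits of agreement.
# Data are simulated; normalisation convention is an assumption.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0.5, 1.5, size=500)           # "true" parameters
estimate = reference + rng.normal(0, 0.05, size=500)  # network's estimates

# nRMSE: RMSE scaled by the dynamic range of the reference
rmse = np.sqrt(np.mean((estimate - reference) ** 2))
nrmse = rmse / (reference.max() - reference.min())

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = estimate - reference
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(), bias + 1.96 * diff.std()
print(f"nRMSE={nrmse:.3f}  bias={bias:.4f}  LoA=({loa_low:.3f}, {loa_high:.3f})")
```

In a network comparison, the model whose estimates give the lowest nRMSE and the narrowest limits of agreement against the reference would be preferred.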

7. Conclusions

Artificial intelligence has made significant progress in its application to digestive system diseases. Different scholars have compared and analyzed various algorithm networks for different diseases to find the best-performing models. On this basis, methods such as the MCA attention mechanism, feature selection, gradient descent, and ensemble models can be introduced to further improve the diagnostic performance of the models. As data platforms are established, model accuracy will also gradually improve. However, achieving both high accuracy and broad applicability will require further effort in the development of artificial intelligence in medicine.
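Of the improvement routes named above, ensembling is the simplest to illustrate: majority voting over heterogeneous classifiers. The scikit-learn sketch below runs on synthetic data and makes no claim that any cited study used this exact setup.

```python
# Hedged sketch of a hard-voting ensemble over three heterogeneous base
# classifiers; synthetic data only, purely to illustrate the technique.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=3)),
                ("nb", GaussianNB())],
    voting="hard")  # majority vote over the three base models
acc = ensemble.fit(X_tr, y_tr).score(X_te, y_te)
print(f"ensemble accuracy={acc:.3f}")
```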

As far as this review is concerned, the sources and quality of the data used to build the AI models are uneven, which may affect their accuracy; standardization and normalization are lacking, so universally applicable, highly accurate models cannot yet be provided. In the future, AI may not only help patients self-manage single or multiple diseases and monitor and manage them in a standardized, reasonable manner, but also predict and treat digestive system diseases at the genetic level.

Support

This article received no funding support.

NOTES

*Corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Wang, M., Wang, F.L. and Zhang, L.X. (2022) Pay attention to Improving the Data Literacy of Clinicians and Cultivating Clinical Data Scientists for the Future. Chinese Journal of Internal Medicine, 61, 243-245.
https://doi.org/10.34133/2022/9832564
[2] Geng, Y.L. (2022) Application of Service Robots Based on Artificial Intelligence and 5G Technology. Yangtze River Information and Communications, 35, 235-237.
[3] Meng, T., Chen, X., Lu, J.H., et al. (2020) Comparative Study on the Detection Rate of Colon Polyps in the General Population and High-Risk Groups. Chinese Electronic Journal of Gastrointestinal Endoscopy, 7, 21-25.
[4] Kim, J., Dobson, B., Ng Liet Hing, C., et al. (2020) Increasing Rate of Colorectal Cancer in Younger Patients: A Review of Colonoscopy Findings in Patients under 50 at a Tertiary Institution. ANZ Journal of Surgery, 90, 2484-2489.
https://doi.org/10.1111/ans.16060
[5] Ye, Z., Chen, J., Xuan, Z., Gao, M. and Yang, H. (2020) Educational Video Improves Bowel Preparation in Patients Undergoing Colonoscopy: A Systematic Review and Meta-Analysis. Annals of Palliative Medicine, 9, 671-680.
https://doi.org/10.21037/apm.2020.03.33
[6] Desai, M., Nutalapati, V., Bansal, A., et al. (2019) Use of Smartphone Applications to Improve Quality of Bowel Preparation for Colonoscopy: A Systematic Review and Meta-Analysis. Endoscopy International Open, 7, E216-E224.
https://doi.org/10.1055/a-0796-6423
[7] Zhao, Z., Li, M., Liu, P., Yu, J. and Zhao, H. (2022) Efficacy of Digestive Endoscope Based on Artificial Intelligence System in Diagnosing Early Esophageal Carcinoma. Computational and Mathematical Methods in Medicine, 2022, Article ID: 9018939.
https://doi.org/10.1155/2022/9018939
[8] Aksan, F., Tanriverdi, L.H., Figueredo, C.J., Barrera, L.C., Hasham, A. and Jariwala, S.P. (2023) The Impact of Smartphone Applications on Bowel Preparation, Compliance with Appointments, Cost-Effectiveness, and Patients’ Quality of Life for the Colonoscopy Process: A Scoping Review. Saudi Journal of Gastroenterology, 29, 71-87.
https://doi.org/10.4103/sjg.sjg_207_22
[9] van der Zander, Q.E.W., Reumkens, A., van de Valk, B., Winkens, B., Masclee, A.A.M. and de Ridder, R.J.J. (2021) Effects of a Personalized Smartphone App on Bowel Preparation Quality: Randomized Controlled Trial. JMIR Mhealth Uhealth, 9, e26703.
https://doi.org/10.2196/26703
[10] Cho, J., Lee, S., Shin, J.A., Kim, J.H. and Lee, H.S. (2017) The Impact of Patient Education with a Smartphone Application on the Quality of Bowel Preparation for Screening Colonoscopy. Clinical Endoscopy, 50, 479-485.
https://doi.org/10.5946/ce.2017.025
[11] Wen, J., Liu, C.H., Lu, N.L., Yang, D.H., Zhang, Y.Y., Gao, Y.X., Yang, Q. and Huang, J. (2022) The Impact of Smartphone Applications on the Quality of Bowel Preparation. Journal of Gastroenterology and Hepatology, 31, 1022-1026.
[12] Xun, L.J., Wu, K., Zhou, S., Shi, Y., Song, R.M., Zhuang, Y., et al. (2022) Optimization and Evaluation of Bowel Preparation Education Program for Colonoscopy. Journal of Nursing, 37, 80-82+86.
[13] Sheng, R.L. (2022) Establishment and Preliminary Verification of Artificial Intelligence-Assisted Bowel Preparation Application System. Xinxiang Medical College, Xinxiang.
[14] Jung, J.W., Park, J., Jeon, G.J., et al. (2017) The Effectiveness of Personalized Bowel Preparation Using a Smartphone Camera Application: A Randomized Pilot Study. Gastroenterology Research and Practice, 2017, Article ID: 4898914.
https://doi.org/10.1155/2017/4898914
[15] Nakashima, H., Kawahira, H., Kawachi, H. and Sakaki, N. (2018) Artificial Intelligence Diagnosis of Helicobacter pylori Infection Using Blue Laser Imaging-Bright and Linked Color Imaging: A Single-Center Prospective Study. Annals of Gastroenterology, 31, 462-468.
https://doi.org/10.20524/aog.2018.0269
[16] Nakashima, H., Kawahira, H., Kawachi, H. and Sakaki, N. (2020) Endoscopic Three-Categorical Diagnosis of Helicobacter pylori Infection Using Linked Color Imaging and Deep Learning: A Single-Center Prospective Study (with Video). Gastric Cancer, 23, 1033-1040.
https://doi.org/10.1007/s10120-020-01077-1
[17] Gonçalves, W.G.E., Santos, M.H.P.D., Brito, L.M., et al. (2022) DeepHP: A New Gastric Mucosa Histopathology Dataset for Helicobacter pylori Infection Diagnosis. International Journal of Molecular Sciences, 23, Article No. 14581.
https://doi.org/10.3390/ijms232314581
[18] Liscia, D.S., D’Andrea, M., Biletta, E., et al. (2022) Use of Digital Pathology and Artificial Intelligence for the Diagnosis of Helicobacter pylori in Gastric Biopsies. Pathologica, 114, 295-303.
https://doi.org/10.32074/1591-951X-751
[19] Song, X.B., Li, Y., et al. (2021) Feasibility Study on Identification of Helicobacter pylori Positive Tongue Image by Alexnet Convolution Neural Network. Shandong Journal of Traditional Chinese Medicine, 40, 235-238.
[20] Li, H.S. and Chu, C.L. (2023) Research Progress on the Role of Intestinal Metaplasia in the Progression of Barrett’s Esophagus to Esophageal Adenocarcinoma. Chinese Journal of Digestion, 31, 41-47.
https://doi.org/10.11569/wcjd.v31.i2.41
[21] Gao, J.W., Lin, J.X., Liu, L., et al. (2022) Research on Establishing a Barrett’s Esophageal Endoscopic Image Classification Model Based on Deep Learning Secondary Pre-Training. China Digital Medicine, 17, 54-58.
[22] Faghani, S., Codipilly, D.C., Vogelsang, D., et al. (2022) Development of a Deep Learning Model for the Histologic Diagnosis of Dysplasia in Barrett’s Esophagus. Gastrointestinal Endoscopy, 96, 918-925.e3.
https://doi.org/10.1016/j.gie.2022.06.013
[23] Pan, W., Li, X., Wang, W., et al. (2021) Identification of Barrett’s Esophagus in Endoscopic Images Using Deep Learning. BMC Gastroenterology, 21, Article No. 479.
https://doi.org/10.1186/s12876-021-02055-2
[24] Ebigbo, A., Mendel, R., Probst, A., et al. (2020) Real-Time Use of Artificial Intelligence in the Evaluation of Cancer in Barrett’s Oesophagus. Gut, 69, 615-616.
https://doi.org/10.1136/gutjnl-2019-319460
[25] Beuque, M., Martin-Lorenzo, M., Balluff, B., et al. (2021) Machine Learning for Grading and Prognosis of Esophageal Dysplasia Using Mass Spectrometry and Histological Imaging. Computers in Biology and Medicine, 138, Article ID: 104918.
https://doi.org/10.1016/j.compbiomed.2021.104918
[26] Ali, S., Bailey, A., Ash, S., et al. (2021) A Pilot Study on Automatic Three-Dimensional Quantification of Barrett’s Esophagus for Risk Stratification and Therapy Monitoring. Gastroenterology, 161, 865-878.e8.
https://doi.org/10.1053/j.gastro.2021.05.059
[27] Meng, Q.-Q., Gao, Y., Lin, H., Wang, T.-J., Zhang, Y.-R., Feng, J., et al. (2022) Application of an Artificial Intelligence System for Endoscopic Diagnosis of Superficial Esophageal Squamous Cell Carcinoma. World Journal of Gastroenterology, 37, 5483-5493.
https://doi.org/10.3748/wjg.v28.i37.5483
[28] Zhao, Z.T., Li, M., Liu, P., Yu, J.F. and Zhao, H. (2022) Efficacy of Digestive Endoscope Based on Artificial Intelligence System in Diagnosing Early Esophageal Carcinoma. Computational and Mathematical Methods in Medicine, 2022, Article ID: 9018939.
https://doi.org/10.1155/2022/9018939
[29] Huang, S.G., Liang, L.J., Zhu, C.P. and Lai, W.G. (2023) Management Strategies for Gastric Precancerous States and Precancerous Lesions. Biomedical Translation, 1, 55-60.
[30] Zhang, Y.Q., Li, F.X., Yuan, F.Q., Zhang, K., Huo, L.J., Dong, Z.C., et al. (2020) Diagnosing Chronic Atrophic Gastritis by Gastroscopy Using Artificial Intelligence. Digestive and Liver Disease: Official Journal of the Italian Society of Gastroenterology and the Italian Association for the Study of the Liver, 52, 566-572.
https://doi.org/10.1016/j.dld.2019.12.146
[31] Zhao, Q.C. and Chi, T.Y. (2022) Application and Research of Chronic Atrophic Gastritis Model Based on U-Net Deep Learning. Journal of Gastroenterology and Hepatology, 31, 656-661.
[32] Wu, H.B., Yao, X.Y., Zeng, L.S., Huang, F. and Chen, L. (2021) Application of Artificial Intelligence Technology Based on Convolutional Neural Network in the Identification of Early Gastric Cancer. Journal of the Third Military Medical University, 43, 1735-1742.
[33] Atsushi, G., Naoto, K., Jun, N., Ryo, O., Koichi, H., Shinichi, H., et al. (2022) Cooperation between Artificial Intelligence and Endoscopists for Diagnosing Invasion Depth of Early Gastric Cancer. Gastric Cancer: Official Journal of the International Gastric Cancer Association and the Japanese Gastric Cancer Association, 26, 116-122.
https://doi.org/10.1007/s10120-022-01330-9
[34] Tang, D.H., Ni, M.H., Zheng, C., Ding, X.W., Zhang, N.N., Yang, T., et al. (2022) A deep Learning-Based Model Improves Diagnosis of Early Gastric Cancer under Narrow Band Imaging Endoscopy. Surgical Endoscopy, 36, 7800-7810.
https://doi.org/10.1007/s00464-022-09319-2
[35] Bang, C.S., Ahn, J.Y., Kim, J.H., Kim, Y.I., Choi, I.J. and Shin, W.G. (2021) Establishing Machine Learning Models to Predict Curative Resection in Early Gastric Cancer with Undifferentiated Histology: Development and Usability Study. Journal of Medical Internet Research, 23, e25053.
https://doi.org/10.2196/preprints.25053
[36] Kuroda, K., Kubo, N., Sakurai, K., Tamamori, Y., Hasegawa, T., Yonemitsu, K., et al. (2022) Comparison of Short-Term Surgical Outcomes of Two Types of Robotic Gastrectomy for Gastric Cancer: Ultrasonic Shears Method versus the Maryland Bipolar Forceps Method. Journal of Gastrointestinal Surgery: Official Journal of the Society for Surgery of the Alimentary Tract, 27, 222-232.
https://doi.org/10.1007/s11605-022-05527-2
[37] Shen, D.L., Deng, Z.M., Wang, G., Cheng, H., Zhang, X.C. and Jiang, Z.W. (2022) Exploring the Clinical Significance of the “Day Essence and Night Darkness” Theory in Postoperative Rehabilitation of Gastric Cancer Based on Artificial Intelligence Heart Rate Variability Monitoring Equipment. Liaoning Journal of Traditional Chinese Medicine, 49, 1-5.
[38] Chen, L. and Li, D.C. (2021) Artificial Intelligence and Inflammatory Bowel Disease. World Chinese Journal of Digestion, 29, 684-689.
https://doi.org/10.11569/wcjd.v29.i13.684
[39] Charilaou, P., Mohapatra, S., Doukas, S., Kohli, M., Radadiya, D., Devani, K., et al. (2022) Predicting Inpatient Mortality in Patients with Inflammatory Bowel Disease: A Machine Learning Approach. Journal of Gastroenterology and Hepatology, 38, 241-250.
https://doi.org/10.1111/jgh.16029
[40] Chierici, M., Puica, N., Pozzi, M., Capistrano, A., Donzella, M.D., Colangelo, A., Osmani, V. and Jurman, G. (2022) Automatically Detecting Crohn’s Disease and Ulcerative Colitis from Endoscopic Imaging. BMC Medical Informatics and Decision Making, 22, Article No. 300.
https://doi.org/10.1186/s12911-022-02043-w
[41] Klang, E., Grinman, A., Soffer, S., et al. (2020) Automated Detection of Crohn’s Disease Intestinal Strictures on Capsule Endoscopy Images Using Deep Neural Networks. Journal of Crohn’s & Colitis, 15, 749-756.
[42] Barash, Y., Azaria, L., Soffer, S., Margalit, Y.R., Shlomi, O., BenHorin, S., Eliakim, R., Klang, E. and Kopylov, U. (2021) Ulcer Severity Grading in Video Capsule Images of Patients with Crohn’s Disease: An Ordinal Neural Network Solution. Gastrointestinal Endoscopy, 93, 187-192.
https://doi.org/10.1016/j.gie.2020.05.066
[43] Mascarenhas, S.M.J., Afonso, J., Ribeiro, T., Ferreira, J., Cardoso, H., Andrade, A.P., et al. (2021) Deep Learning and Capsule Endoscopy: Automatic Identification and Differentiation of Small Bowel Lesions with Distinct Haemorrhagic Potential Using a Convolutional Neural Network. BMJ Open Gastroenterology, 8, e000753.
https://doi.org/10.1136/bmjgast-2021-000753
[44] Ungaro, R.C., Hu, L.Y., Ji, J.Y., et al. (2020) Machine Learning Identifies Novel Blood Protein Predictors of Penetrating and Stricturing Complications in Newly Diagnosed Paediatric Crohn’s Disease. Alimentary Pharmacology & Therapeutics, 53, 281-290.
[45] Roshan, A. and Byrne, M.F. (2022) Artificial Intelligence in Colorectal Cancer Screening. CMAJ: Canadian Medical Association Journal, 194, E1481-E1484.
https://doi.org/10.1503/cmaj.220034
[46] Kudo, S.E., Misawa, M., Mori, Y., et al. (2020) Artificial Intelligence-Assisted System Improves Endoscopic Identification of Colorectal Neoplasms. Clinical Gastroenterology and Hepatology, 18, 1874-1881.e2.
https://doi.org/10.1016/j.cgh.2019.09.009
[47] Song, E.M., Park, B., Ha, C.A., et al. (2020) Endoscopic diagnosis and treatment planning for colorectal polyps using a deep-learning model. Scientific Reports, 10, 30.
https://doi.org/10.1038/s41598-019-56697-0
[48] Zhang, H.M. and Gao, J.Y. (2022) Research Status and Prospects of Artificial Intelligence Technology Based on Medical Imaging in Colorectal Cancer. Journal of Lanzhou University (Medical Edition), 48, 1-4.
[49] Wang, D., Xu, J., Zhang, Z., et al. (2020) Evaluation of Rectal Cancer Circumferential Resection Margin Using Faster Region-Based Convolutional Neural Network in High-Resolution Magnetic Resonance Images. Diseases of the Colon & Rectum, 63, 143-151.
https://doi.org/10.1097/DCR.0000000000001519
[50] Lu, Y., Yu, Q., Gao, Y., et al. (2018) Identification of Metastatic Lymph Nodes in MR Imaging with Faster Region-Based Convolutional Neural Networks. Cancer Research, 78, 5135-5143.
https://doi.org/10.1158/0008-5472.CAN-18-0494
[51] Kitaguchi, D., Takeshita, N., Matsuzaki, H., et al. (2020) Automated Laparoscopic Colorectal Surgery Workflow Recognition Using Artificial Intelligence: Experimental Research. International Journal of Surgery, 79, 88-94.
https://doi.org/10.1016/j.ijsu.2020.05.015
[52] Ichimasa, K., Nakahara, K., Kudo, S., Misawa, M., Bretthauer, M., Shimada, S., et al. (2022) Novel “Resect and Analysis” Approach for T2 Colorectal Cancer with Use of Artificial Intelligence. Gastrointestinal Endoscopy, 96, 665-672.E1.
https://doi.org/10.1016/j.gie.2022.04.1305
[53] Wang, D., Liu, Y., Zhang, Y.X., Song, X.J., Zhao, Y.P., Guo, W., Li, X. and Zhang, Z.T. (2022) Report on the First Human Clinical Trial of Single-Arm Robot-Assisted Cholecystectomy in China. Journal of Robotic Surgery (Chinese and English), 3, 518-524.
[54] Rasa, H.K. and Erdemir, A. (2022) Our Initial Single Port Robotic Cholecystectomy Experience: A Feasible and Safe Option for Benign Gallbladder Diseases. World Journal of Gastrointestinal Endoscopy, 14, 769-776.
https://doi.org/10.4253/wjge.v14.i12.769
[55] Tschuor, C., Lyman, W.B., Passeri, M., et al. (2021) Robotic-Assisted Completion Cholecystectomy: A Safe and Effective Approach to A Challenging Surgical Scenario—A Single Center Retrospective Cohort Study. The International Journal of Medical Robotics and Computer Assisted Surgery, 17, e2312.
https://doi.org/10.1002/rcs.2312
[56] Krzeczowski, R.M., Grossman Verner, H.M., Figueroa, B. and Burris, J. (2022) Robotic Diagnosis and Management of Acute Cholecystocolonic Fistula. Cureus, 14, e24101.
https://doi.org/10.7759/cureus.24101
[57] Wang, L.M., Shu, J., Huang, X.Q., Yang, C.M., Guo, Q.X. and Su, S. (2021) The Value of MRI Texture Analysis in Predicting Extrabiliary Invasion of Extrahepatic Cholangiocarcinoma. Sichuan Medicine, 42, 1-5.
[58] Yao, X., Huang, X., Yang, C., et al. (2020) A Novel Approach to Assessing Differentiation Degree and Lymph Node Metastasis of Extrahepatic Cholangiocarcinoma: Prediction Using a Radiomics-Based Particle Swarm Optimization and Support Vector Machine Model. JMIR Medical Informatics, 8, e23578.
https://doi.org/10.2196/23578
[59] Liu, X., Khalvati, F., Namdar, K., et al. (2021) Can Machine Learning Radiomics Provide Pre-Operative Differentiation of Combined Hepatocellular Cholangiocarcinoma from Hepatocellular Carcinoma and Cholangiocarcinoma to Inform Optimal Treatment Planning? European Radiology, 31, 244-255.
https://doi.org/10.1007/s00330-020-07119-7
[60] Chang, Z.Y., Zhao, G.D., Chou, S. and Liu, R. (2019) Series Report of 34 Cases of Robotic Hilar Cholangiocarcinoma Surgery. Abdominal Surgery Science, 32, 330-334, 360.
[61] Yin, X.Y. (2020) Robot-Assisted Hilar Cholangiocarcinoma Type IIIB Radical Resection. Journal of Digestive Oncology (Electronic Edition), 12, 296-298.
[62] Shi, G.J. and Li, L. (2021) Robot-Assisted Radical Resection of Hilar Cholangiocarcinoma. Electronic Journal of Clinical General Surgery, 9, 1-2.
[63] Yang, J., Li, E., Wang, C., et al. (2022) Robotic versus Open Extended Cholecystectomy for T1a-T3 Gallbladder Cancer: A Matched Comparison. Frontiers in Surgery, 9, Article ID: 1039828.
https://doi.org/10.3389/fsurg.2022.1039828
[64] Lu, L.G., You, H., Xie, W.F. and Jia, J.D. (2019) Consensus on Diagnosis and Treatment of Liver Fibrosis (2019 Years). Miscellaneous Clinical Hepatobiliary Diseases, 35, 2163-2172.
[65] Li, Q.J., Guo, Q.Y., Chen, H.B. and Zhang, R.G. (2018) Research Progress in Liver Fibrosis Based on Texture Analysis and Deep Learning. Radiology Practice, 33, 997-1001.
[66] Xie, Y., Chen, S., Jia, D., et al. (2022) Artificial Intelligence-Based Feature Analysis of Ultrasound Images of Liver Fibrosis. Computational Intelligence and Neuroscience, 2022, Article ID: 2859987.
https://doi.org/10.1155/2022/2859987
[67] Fu, T.T., Yao, Z., Ding, H., Xu, Z.T., Yang, M.R., Yu, J.H. and Wang, W.P. (2019) Analysis of the Value of Computer-Aided Diagnosis of Liver Fibrosis Process in Patients with Chronic Hepatitis B. Chinese Medical Journal, 99, 491-495.
[68] Wu, L., Ning, B., Yang, J., et al. (2022) Diagnosis of Liver Cirrhosis and Liver Fibrosis by Artificial Intelligence Algorithm-Based Multislice Spiral Computed Tomography. Computational and Mathematical Methods in Medicine, 2022, Article ID: 1217003.
https://doi.org/10.1155/2022/1217003
[69] Yin, Y., Yakar, D., Dierckx, R.A., et al. (2021) Liver Fibrosis Staging by Deep Learning: A Visual-Based Explanation of Diagnostic Decisions of the Model. European Radiology, 31, 9620-9627.
https://doi.org/10.1007/s00330-021-08046-x
[70] Aggarwal, P. and Alkhouri, N. (2021) Artificial Intelligence in Nonalcoholic Fatty Liver Disease: A New Frontier in Diagnosis and Treatment. Clinical Liver Disease, 17, 392-397.
https://doi.org/10.1002/cld.1071
[71] Okanoue, T., Shima, T., Mitsumoto, Y., et al. (2021) Artificial Intelligence/Neural Network System for the Screening of Nonalcoholic Fatty Liver Disease and Nonalcoholic Steatohepatitis. Hepatology Research, 51, 554-569.
https://doi.org/10.1111/hepr.13628
[72] Zamanian, H., Mostaar, A., Azadeh, P., et al. (2021) Implementation of Combinational Deep Learning Algorithm for Non-Alcoholic Fatty Liver Classification in Ultrasound Images. Journal of Biomedical Physics & Engineering, 11, 73-84.
https://doi.org/10.31661/jbpe.v0i0.2009-1180
[73] Rhyou, S.Y. and Yoo, J.C. (2021) Cascaded Deep Learning Neural Network for Automated Liver Steatosis Diagnosis Using Ultrasound Images. Sensors, 21, Article No. 5304.
https://doi.org/10.3390/s21165304
[74] Sung, H., Ferlay, J., Siegel, R.L., et al. (2021) Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA: A Cancer Journal for Clinicians, 71, 209-249.
https://doi.org/10.3322/caac.21660
[75] Laino, M.E., Viganò, L., Ammirabile, A., et al. (2022) The Added Value of Artificial Intelligence to LI-RADS Categorization: A Systematic Review. European Journal of Radiology, 2022, Article ID: 110251.
https://doi.org/10.1016/j.ejrad.2022.110251
[76] Scodellaro, R., Inverso, D., Panzeri, D., et al. (2022) AI-Based Pipelines for the Automated Recognition of Hepatocellular Carcinoma and the Semantic Segmentation of Virtually Stained Liver Biopsies. Biophysical Journal, 121, 136a.
https://doi.org/10.1016/j.bpj.2021.11.2061
[77] Xu, X., Mao, Y., Tang, Y., et al. (2022) Classification of Hepatocellular Carcinoma and Intrahepatic Cholangiocarcinoma Based on Radiomic Analysis. Computational and Mathematical Methods in Medicine, 2022, Article ID: 5334095.
https://doi.org/10.1155/2022/5334095
[78] Rela, M., Rao, S.N. and Patil, R.R. (2022) Performance Analysis of Liver Tumor Classification Using Machine Learning Algorithms. International Journal of Advanced Technology and Engineering Exploration, 9, 143-154.
https://doi.org/10.19101/IJATEE.2021.87465
[79] Zhou, J., Wu, P. and Xu, Y.C. (2022) Patients with Hepatocellular Carcinoma of Different Degrees of Differentiation Analysis of Gd-EOB-DTPA Enhanced MRI Examination Results. Liver, 10, 1129-1131+1152.
[80] Chen, W.M., Fu, M., Zhang, C.J., et al. (2022) Deep Learning-Based Universal Expert-Level Recognizing Pathological Images of Hepatocellular Carcinoma and Beyond. Frontiers in Medicine (Lausanne), 9, Article ID: 853261.
https://doi.org/10.3389/fmed.2022.853261
[81] Tanaka, S., Kubo, S. and Ishizawa, T. (2023) Positioning of Minimally Invasive Liver Surgery for Hepatocellular Carcinoma: From Laparoscopic to Robot-Assisted Liver Resection. Cancers (Basel), 15, Article No. 488.
https://doi.org/10.3390/cancers15020488
[82] Liu, L., Wang, Y., Wu, T., et al. (2022) Robotic versus Laparoscopic Major Hepatectomy for Hepatocellular Carcinoma: Short-Term Outcomes from a Single Institution. BMC Surgery, 22, Article No. 432.
https://doi.org/10.1186/s12893-022-01882-8
[83] Machado, M.A., Mattos, B.H. and Makdissi, F.F. (2023) Robotic Left Trisectionectomy with Glissonian Approach (with Video). Journal of Gastrointestinal Surgery, 27, 842-844.
https://doi.org/10.1007/s11605-023-05587-y
[84] Varshney, P. and Varshney, V.K. (2023) Total Robotic Right Hepatectomy for Multifocal Hepatocellular Carcinoma Using Vessel Sealer. Annals of Hepato-Biliary-Pancreatic Surgery, 27, 95-101.
https://doi.org/10.14701/ahbps.22-036
[85] Varshney, P. and Varshney, V.K. (2023) Total Robotic Right Hepatectomy for Multifocal Hepatocellular Carcinoma Using Vessel Sealer. Annals of Hepato-Biliary-Pancreatic Surgery, 27, 95-101.
https://doi.org/10.14701/ahbps.22-036
[86] Ottens, T., Barbieri, S., Orton, M.R., et al. (2022) Deep Learning DCE-MRI Parameter Estimation: Application in Pancreatic Cancer. Medical Image Analysis, 80, Article ID: 102512.
https://doi.org/10.1016/j.media.2022.102512
[87] Vilas-Boas, F., Ribeiro, T., Afonso, J., et al. (2022) Deep Learning for Automatic Differentiation of Mucinous versus Non-Mucinous Pancreatic Cystic Lesions: A Pilot Study. Diagnostics (Basel), 12, Article No. 2041.
https://doi.org/10.3390/diagnostics12092041
[88] Zhang, S., Zhou, Y., Tang, D., et al. (2022) A Deep Learning-Based Segmentation System for Rapid Onsite Cytologic Pathology Evaluation of Pancreatic Masses: A Retrospective, Multicenter, Diagnostic Study. EBioMedicine, 80, Article ID: 104022.
https://doi.org/10.1016/j.ebiom.2022.104022

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.