The Role of Digital Technology and Artificial Intelligence in Diagnosing Medical Images: A Systematic Review

The provision of up-to-date medical information on digital technology and AI systems in journals, clinical practice, and textbooks informing radiologists about patient care has resulted in faster, more reliable, and cheaper image interpretation. This study reviews 27 articles regarding the application of digital technology and artificial intelligence (AI) in radiological scholarship, looking at the incorporation of electronic health records, digital radiology imaging databases, IT environments, and machine learning, the latter of which has emerged as the most popular AI approach in modern medicine. This article examines the emerging picture surrounding picture archiving and communication systems in the implementation phase of AI technologies. It explores the most appropriate clinical requirements for the use of AI systems in practice. Continued development in the integration of automated systems, probing the use of information systems, databases, and records, should result in further progress in radiological theory and practice.

Difficulty in properly viewing huge volumes of imaging data may lead to misinterpretation. For instance, notorious laterality errors arise from mistakes made by a physician about body orientation, or from labelling errors by a radiologist. Indeed, errors made by a radiologist can lead to delayed or missed diagnoses, which can cause unfavourable patient outcomes. Thus, the application of AI in radiology enables faster, more reliable, and cheaper image interpretation, utilizing electronic health records and digital imaging databases [14]. The present review aims to determine the role of AI and other technologies in the daily practice of radiologists, from historical and current perspectives, providing an overview for practicing radiologists and acting as a type of roadmap for this technological territory that can aid practitioners in future advancements.

Materials and Methods
A systematic review design based on the PRISMA guidelines by Moher et al. [15] was used to examine the role of AI and other technologies in validating medical diagnoses in radiology. It examined the currently available state of the art to suggest advancements for future practice.
This review restricted its search to English-language studies on human subjects. However, no restriction was placed on the earliest publication date; studies published before December 2020, the date on which the last search was conducted, were included. Medical images of various modalities, such as mammography or mastography, X-radiation (x-ray), Magnetic Resonance Imaging (MRI), Ultrasound (US), and Computed Tomography (CT), were all incorporated as radiological images. Studies examining AI that combine medical images and other types of clinical data were included. Articles based on clinical settings were included in this review, informing understanding about the use of AI in various radiological imaging and modality settings. No limits were placed on the target population, the intended context for using the model, or the disease outcome of interest. Editorials, letters, comments, conference abstracts and proceedings, and review articles were also included. This review exhaustively searched the PubMed, Cochrane, MEDLINE, and EMBASE databases to identify original research articles examining the role of AI and machine learning in investigating medical images for diagnostic decision-making. Also, owing to limited data on the topic, a grey literature search was conducted using a customized Google search. The complete list of inclusion and exclusion criteria is shown in Table 1. The keywords used were "Artificial Intelligence OR Machine learning OR Deep learning" AND "Medical Imaging OR radiological studies" AND "Diagnosis OR Accuracy OR Performance" (Table 2). Both published and unpublished studies were searched rigorously before conducting the review to prevent reviewer selection bias and publication bias.
Initially, after searching all databases, studies were deduplicated using RefWorks. After deduplication, titles and abstracts were read and screened (1st screening) based on the inclusion and exclusion criteria (Table 1). Full texts of the included articles were then obtained and screened (2nd screening) using the same criteria. Two independent reviewers assessed the eligibility of the screened titles and abstracts, and later the full texts of the search results; non-consensus was resolved by a third reviewer. A Microsoft Excel sheet was prepared to document the included and excluded studies. Articles included after the 2nd screening were then screened (3rd screening) for quality using the STROBE checklist [16]. Again, the screening was carried out by two independent reviewers to ensure that the included studies had internal and external validity. The article search procedure is shown in Table 3.
A pre-developed and standardized proforma was used during the data extraction process. The first author's name, year of publication, study type, methodology, and final outcome of each study were extracted and summarized in a table, which was then used for data analysis. The outcomes of the studies were based on complex mathematical and statistical AI models, including deep learning.

Search Results
The search process yielded a total of 4348 studies, as shown in the PRISMA flow diagram (Figure 1). After deduplication, the yield was reduced to 3109 articles. Following screening of titles and abstracts, this was further decreased to 787 articles, and after the 2nd stage of the screening process, 31 articles remained. The removal of 756 studies was based on the following criteria: the studies did not report the desired outcome, the radiological aspect was not adequately discussed, or the setting was not clinical. Following quality analysis, 27 articles were finally included in the review. The data extracted from these articles are shown in Table 4.

Inception of AI in Radiology
In the recent past, digital images and advanced image processing techniques were not readily available, even though significant outcomes were accomplished using unique applications of early computers [17]. A paradigm shift occurred in computer science during the 1980s, through the invention of the microchip, which allowed scientists to develop systems that could support radiologists in ways previously unimaginable [5]. Following this development, various approaches have been proposed, some having widespread success, such as hypertext, rule-based and case-based reasoning, Bayesian networks [8], and artificial neural networks (ANNs). ANNs, first described in 1943, have held a dominant position among AI techniques used in modern medicine [18]. The incorporation of ANNs has been one of the most effective and prolific applications of AI in radiological history [6]. ANNs are analogues of neurons in the human brain, comprising an amassed network of highly interconnected computing procedures that perform parallel computations for data processing, with weighted connections between each node of the network [2]. This weighting results in the activation of other artificial neurons in the network, based on mathematical formulas [19]. The combined weighting of connections, as a product of the whole artificial neuronal structure, represents the knowledge base of the system. Supervised learning can be used to train ANNs, through a comparison of expected and actual outcomes. Unsupervised or semi-supervised learning can also train ANNs, based on generic techniques such as clustering, anomaly detection, and association, which are enhanced with each case to assure authentic diagnoses [20]. ANNs are capable of extrapolating the input data of simple cases to handle more difficult scenarios. Additionally, a system can be updated with personalized expert knowledge, where both individual observations and images can be utilized as inputs [2] [3].
This process allows the continuous progression of a system, which learns in a way that corresponds with human cognitive development, but with the benefit of a permanent memory function [21]. AI-based computer-aided detection (CAD) programs are the most common application of ANNs in medical imaging. Images are investigated using ANN software to emphasize areas of risk, stimulating additional investigation by a radiologist. ANNs have been successfully applied to the characterization of cancerous tissue, leading to increased reading sensitivity, specificity, and accuracy in the detection of cancerous lesions and their recurrence.
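The weighted-connection and activation mechanism described above can be sketched as a minimal feed-forward pass. The layer sizes, random weights, and sigmoid activation below are illustrative assumptions, not details drawn from any study in this review.

```python
import numpy as np

def sigmoid(x):
    # Activation function: squashes a weighted sum into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    # Each layer's output is the activation of the weighted sum of the
    # previous layer's outputs, mirroring how weighted connections
    # between artificial neurons propagate a signal through the network.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
# Hypothetical toy network: 4 input features -> 3 hidden units -> 1 output
weights = [rng.normal(size=(3, 4)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]

score = forward(np.array([0.2, 0.5, 0.1, 0.9]), weights, biases)
print(score)  # a single value in (0, 1), e.g. an illustrative risk score
```

In a trained CAD system the weights would be learned from labeled examples rather than drawn at random; the knowledge of the system resides entirely in those weights.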

Evolution of AI in Radiology: Current Concepts
The segmentation, staging, and diagnosis of disease are covered by the term "characterization" [18]. These activities are achieved by quantifying the radiological aspects of an abnormality, including data about internal texture and the size and extent of findings. Humans are generally unable to account for the many wide-ranging qualitative characteristics found in routine medical imaging examinations [11]. This is exacerbated by inevitable differences between individual interpretations and patient characteristics. By contrast, AI leverages big data, processing a large number of quantitative features collectively using iterative techniques [13].
In the past decade, there have been a number of significant innovations in medical imaging using deep learning techniques for image classification. Deep learning involves computational models that rapidly learn data characteristics across multiple levels of abstraction. The dominance of deep learning in medical imaging studies has increased as the amount of available data has grown and as the hardware in current computers has become more powerful. Indeed, the accuracy rates of ANNs now surpass human radiologist performance in narrow tasks, such as detecting lung nodule features [22]. Machine learning also permits reliable automated detection of lung nodules in CT scans and pneumonia in chest x-rays [23]. Modeling, or regression, can predict the behavior of pre-cancerous lesions on CT scans, preventing superfluous invasive examinations such as biopsy [24]. This has significant potential in population screening for cancer, especially in countries where radiologists may be overworked or lack specialized training.
The demonstration of technological capabilities and the identification of novel systems in radiology is the first step in devising a strategy for practice, but one which may pose a threat to practitioners of other disciplines. For example, other medical practitioners may spend more time with certain patients than radiologists and so might choose to purchase particular AI technologies, thereby automatically competing with radiologists for resources [25]. By contrast, in a typical current practical scenario, the use of AI could be highly effective if integrated into the work of many professions and that of their respective colleagues. For example, even the simple analysis of electronic health records may improve the success rate of radiologists [26]. Thus, a strategic objective for radiologists could be to distinguish themselves by creating AI that performs hybrid work through collective intelligence software [27]. Intelligence augmentation is a topical buzzword that often accompanies discourse on AI, as at a recent World Economic Forum (2017). Intelligence augmentation in radiology concerns higher levels of diagnostic accuracy achieved through the amalgamation of human radiologists and AI, forming a hybrid intelligence [28]. It is effective because clusters of AI agents and humans working collectively make more accurate predictions than AI techniques or humans working independently [26]. This process of evaluation may or may not hold for radiological diagnosis, however, which arguably requires greater scrutiny and validation in peer-reviewed articles. Patient safety standards must also be considered in these systems, while judicial transparency is created through the observation of a human radiologist, providing legal accountability [28]. Also, precision diagnostics will require additional research into AI approaches.
Machine learning may become a leading tool in the future to extrapolate large data sets derived from imaging to evaluate cancer genes, behavior, and response to treatment, enabling radiologists to discover associations between pathogenesis and imaging features of tumors [27]. Also, precision diagnostics can plausibly be integrated for degenerative and chronic diseases, including coronary heart disease and Alzheimer's disease, as well as for any disease with imaging and genetic biomarker associations [29].
While significant progress has been made, machine learning technology is currently some way from successful integration into radiological practice. While technologies have been publicly lauded and promoted in specialist disciplines, they have often failed to achieve their potential in the implementation phase; indeed, the birth and death of technologies is often labelled the "hype cycle". One major issue, for instance, is that computing systems are not yet fast enough to render outcomes within a clinically appropriate time frame for urgent diagnoses and emergencies, especially if the required technologies are not widely available in medical institutions [29].
It is important to consider aspects such as picture archiving and communication systems (PACS), electronic health records, IT environments, and radiology information systems in the implementation phase of AI [30]. Increasing technological sophistication depends on the acquisition and improvement of hardware, as well as communication between departments and hospitals. As the technology becomes indispensable, protecting data storage and cloud platforms will become increasingly important [31]. AI applications provide an essential approach to the extraction of latent information from images, and are not hindered by geographical distance from the information source. Current advancements in deep learning and big data have laid the ground for the field of radiomics, which uses hundreds of abstract mathematical characteristics of images that can be characterized and interpreted using AI techniques, and easily integrated with other data, such as therapeutic outcomes and genomic findings.
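As a rough illustration of the kind of quantitative image characteristics radiomics works with, the sketch below computes a few simple first-order features from a synthetic region of interest. The feature set, bin count, and `first_order_features` helper are hypothetical choices; real radiomics pipelines extract hundreds of validated descriptors.

```python
import numpy as np

def first_order_features(img):
    # A few first-order quantitative descriptors of an image region;
    # radiomics pipelines extract many more, including texture features.
    flat = img.ravel().astype(float)
    hist, _ = np.histogram(flat, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(flat.mean()),
        "variance": float(flat.var()),
        "skewness": float(((flat - flat.mean()) ** 3).mean() / flat.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Hypothetical 64x64 region of interest with synthetic intensities
rng = np.random.default_rng(3)
roi = rng.normal(loc=100.0, scale=15.0, size=(64, 64))
features = first_order_features(roi)
print(sorted(features))
```

Feature vectors like this, computed per lesion, are what would then be correlated with therapeutic outcomes or genomic findings.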
In recent years, there has been an increase in reported workload, following hospitals' current reliance on imaging. Thus, the use of an AI system may be beneficial in reducing the daily workload of the radiology department and keeping up to date with hospital services [22]. Another crucial role of AI may be to screen for normal plain films and review abnormal films in cancer screening.
Similarly, malignancy can be identified using question-specific AI techniques. However, one study found the use of AI technologies to be one of the most crucial challenges for users in the initial phases [1] [22]. Nevertheless, AI tools give health professionals independence in the reporting of simple scans. AI tools may well be developed as a way for doctors to incorporate a type of "autopilot" into their daily activities, which may be built upon to produce more complex integrated systems. In the context of neuroimaging, considerations about AI solutions have emerged in various MRI projects, such as the large-scale Human Connectome Project, which aims to map brain connectivity [7] [21].

Future Developments
One of the most central and complicated aspects of AI systems is dealing with the data source itself. Extensive medical data is required to allow moderately easy retrieval and access [12]. Millions of medical images are produced each year, since one in four patients receives a CT examination and one in ten receives an MRI examination [10]. Progress in digital health system protocols worldwide has ensured that medical images are handled electronically in a systematic way, permitting appropriate practices to be adopted in developing and emerging countries.
AI-assisted patient cohort selection identifies objects through segmentation techniques [34]. This ensures that a well-defined set of criteria is observed in the training data. Furthermore, it assists in mitigating unwanted variation in the data arising from data acquisition principles and imaging protocols, including actual imaging and contrast agent administration across institutions [18].
The substandard performance of a number of automated and semi-automated AI segmentation programs has perhaps restricted their use in the curation of data [20]. Complications with machine learning have emerged with respect to rare diseases, where automated labeling algorithms are not effective. This issue is exacerbated when human readers lack prior exposure and so are unable to verify unidentified diseases. However, unsupervised learning is an approach that allows automated data curation [32]. Recent developments in unsupervised learning include various auto-encoders and generative adversarial networks; these are highly effective, as discriminative features are learned despite a lack of comprehensive labeling [33].
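The auto-encoder idea mentioned above, learning features from unlabeled data by minimizing reconstruction error, can be sketched with a minimal linear auto-encoder. The toy dimensions, learning rate, and synthetic data below are illustrative assumptions only; practical systems use deep nonlinear networks.

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 unlabeled synthetic samples of 16 features each stand in for images
X = rng.normal(size=(200, 16))

# Linear auto-encoder: compress 16 features into a 4-dimensional code,
# trained purely on reconstruction error, so no labels are needed.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

mse_before = np.mean((X @ W_enc @ W_dec - X) ** 2)
lr = 0.05
for _ in range(1000):
    code = X @ W_enc          # encode
    X_hat = code @ W_dec      # decode (reconstruct)
    err = X_hat - X           # reconstruction error
    # Gradient descent on mean squared reconstruction error
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse_after = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(mse_before > mse_after)  # reconstruction improves without any labels
```

The learned code vectors can then serve as features for clustering or anomaly detection, which is the sense in which unsupervised learning supports automated data curation.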
Unsupervised domain adaptation has been explored recently through the use of adversarial networks for segmenting brain MRI, permitting accuracy and generalizability closer to supervised learning methods [23]. Sparse auto-encoders have likewise been explored [24]. Unparalleled open access to labeled medical imaging data is offered by the Cancer Imaging Archive, which allows rapid AI model prototyping and hence reduces the need for comprehensive data curation procedures.
The use of patient data raises ethical concerns with respect to the training of AI systems. Data are not securely connected to state-of-the-art AI systems in medical institutions [28]. In the US, however, the Health Insurance Portability and Accountability Act of 1996 has enforced a stricter privacy policy through compliant storage systems. These systems share only the trained model, allowing multiple actors to work collaboratively with AI models irrespective of their input data sets [29].
A decentralized federated learning approach has been used in other contributions to the field. In this system, data remain local during training, but the combination of local updates permits the training of a shared model. Live copies of the shared model yield the desired inferences while reducing privacy concerns and data sharing [33]. Deep learning networks can also be trained on encrypted data, without exposing decryption keys, to assure comprehensive confidentiality in the overall process. All these solutions create sustainable data for an AI ecosystem without undermining privacy and compliance, even though such systems are only in an early phase of development [33].
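The federated scheme described above, in which data stay local and only model updates are combined, can be sketched in simplified form. The linear model, synthetic hospital data, and plain weight averaging below are illustrative assumptions rather than any specific published system.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(w, X, y, lr=0.1, steps=50):
    # Train a local copy of the shared linear model on one site's data.
    # Raw patient data (X, y) never leaves the site; only the updated
    # weights are sent back to the coordinating server.
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)  # least-squares gradient
        w -= lr * grad
    return w

# Three hypothetical hospitals share the same underlying relationship
true_w = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + 0.01 * rng.normal(size=100)
    sites.append((X, y))

shared_w = np.zeros(3)
for _ in range(10):
    # Each site trains locally; the server only averages the weights
    local_ws = [local_update(shared_w, X, y) for X, y in sites]
    shared_w = np.mean(local_ws, axis=0)

print(np.round(shared_w, 2))  # converges toward [1.0, -2.0, 0.5]
```

The averaged model approaches what centralized training on the pooled data would learn, which is the appeal of the approach for privacy-constrained medical imaging.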
The advancement of radiological imaging and diagnosis through AI is an important development in medical practice. However, the full benefit may lie in the collaboration of AI with humans. Using AI systems, radiologists can help promote and improve the diagnostic capability of radiological modalities, and in return this can help advance the field. Future studies need to focus on the application of AI in clinical diagnosis using radiological images. Radiologists' suggestions and requirements should be considered in the process of AI advancement in radiology, to enable smoother implementation and strengthen clinical practice.
This study may be limited by language bias, since only data in English were considered. It also had to include grey literature owing to the limited quantity of relevant studies; for example, none of the included studies was a randomized controlled trial. However, as grey literature was also searched, publication bias was reduced. Finally, the included studies had high heterogeneity, so a meta-analysis was not carried out; the results were synthesized in a narrative manner, and the limitations associated with narrative review therefore apply. However, by preventing reviewer selection bias and through extensive referencing, the bias related to narrative review was minimized.

Conclusion
The integration of automated systems through information systems and AI is an

Funding Statement
The study is self-funded.