Publications have proliferated, particularly during the COVID-19 pandemic. As of February 10, 2022, the Web of Science Core Collection contained 241,998 COVID-19 publications; of these, 22,457 were review articles and 147,381 were original articles. The United States of America (n = 67,026) and the People's Republic of China (n = 23,929) contributed the most pandemic publications. This body of research is enormous and continues to grow rapidly. Efforts to synthesize published evidence are becoming increasingly impractical because of this deluge, and unsynthesized information in human-language text is difficult to use in clinical practice.
Systematic reviews locate, assess, and synthesize relevant research findings on a health topic, making the information easily accessible to decision-makers [1]. A systematic review summarizes the literature on a clinical topic according to predefined eligibility criteria. A single review can take over 1,000 hours of highly skilled manual labor to complete [2]. From formulation of the research question to publication, authors took an average of 28 months to complete a review (n = 14, range 18-46 months) [3]. From protocol registration to publication, systematic reviews registered with PROSPERO took an average of 67 weeks [4].
The time required to complete a review is highly variable. It depends on the review's topic, the authors' experience, the methods used (e.g., the number of attempts to collect unpublished data), the number of included papers, and the editorial team's support. However, authors may estimate the time required by considering the tasks involved and the time needed to finish each one. These tasks include the literature search, study selection, critical appraisal of the literature, data extraction, data analysis, text analysis, content writing, journal selection, and reference and document management [5]. However thorough they are, systematic review methods cannot keep up with the volume of published information. The manual processes used to write systematic reviews are unsustainable, and once published, reviews quickly become obsolete.
Artificial intelligence (AI) is the "science and engineering of making intelligent machines, especially intelligent computer programs" [6]. It is the ability of a machine to perform cognitive functions (i.e., reasoning, perceiving, decision-making, problem-solving) [7]. AI aims to imitate human-like behavior [8]. For example, radiologists take approximately 6.5 minutes on average to read a CT scan, whereas an AI system requires only 2.73 seconds [9]. Because humans speak around 150 words per minute but write only about 40 words per minute on average, efficient voice recognition allows devices such as computers to transcribe speech into machine-readable text [10]. Covidence, a systematic review platform, reduces the time to write systematic reviews by up to 30% [11].
Researchers have developed methods to semi-automate systematic review writing through the use of artificial intelligence. Machine learning (ML) and natural language processing (NLP) are the approaches most commonly used [12]. ML employs computer algorithms that improve over time through repeated exposure to data. ML is conceptually similar to logistic regression, which is frequently employed in epidemiology: it analyzes enormous volumes of data using statistical modeling [13]. Without being explicitly programmed, the model makes predictions or decisions. For example, an ML model can calculate the likelihood that an article is relevant and should be included in the systematic review. NLP, on the other hand, analyzes vast amounts of text. The computer understands the articles' contents by analyzing human-language text, then extracts information and insights from the articles and organizes them [14]. In contrast to purely syntactic text processing, NLP can isolate and analyze the underlying semantic meaning. Text classification and data extraction are the two most common NLP technologies used in systematic reviews [15,16]. Text classification arranges documents based on predefined criteria [17]; for example, it can use titles and abstracts to identify randomized controlled trials. Data extraction identifies specific words, numbers, or combinations that match a variable of interest. For instance, NLP could extract numerical heart rate values from abstracts to determine the influence of facemasks on heart rate during exercise.
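To make the data-extraction idea concrete, the sketch below shows a minimal rule-based extractor in Python. It uses a regular expression to pull numeric heart-rate values (the facemask example above) out of abstract text. The sentences and the pattern are illustrative assumptions, not part of any published tool; production NLP systems are considerably more sophisticated.

```python
import re

# Hypothetical abstract sentences (illustrative only).
sentences = [
    "Mean heart rate was 142 bpm while wearing a surgical mask.",
    "Without a mask, heart rate averaged 138 beats per minute.",
    "Participants exercised for 20 minutes on a treadmill.",
]

# A number followed by a heart-rate unit ("bpm" or "beats per minute").
HR_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(?:bpm|beats per minute)",
                        re.IGNORECASE)

def extract_heart_rates(texts):
    """Return every numeric heart-rate value found in the given texts."""
    values = []
    for text in texts:
        values.extend(float(m.group(1)) for m in HR_PATTERN.finditer(text))
    return values

print(extract_heart_rates(sentences))  # -> [142.0, 138.0]
```

Note that "20 minutes" in the third sentence is correctly ignored because the pattern requires a heart-rate unit after the number; deciding which units and phrasings count as matches is exactly the kind of rule that real extraction pipelines learn or encode at scale.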
The Systematic Review Toolbox is a web-based catalog of tools that support various tasks in systematic review writing. The comprehensive toolbox list is available at http://systematicreviewtools.com/index.php [4]. Sample tools used to semi-automate systematic review writing are listed below (Table 1) [18-26].
Table 1: The systematic review toolbox.
AI has the potential to outperform humans in systematic review writing. However, full automation without human intervention is far from reality. Even if machines could think virtually infinitely fast compared with humans, they could not fully automate systematic review writing, because AI cannot comprehend even the most basic aspects of the real world. AI can, however, facilitate systematic review writing through semi-automation. It can make the processes involved more efficient, but it cannot replace humans in ensuring the validity of results, applying results to real-life scenarios, and problem-solving. Humans remain involved in systematic review writing rather than being replaced; human judgment is still necessary, especially for generating research questions and for analyzing and solving problems [27].