Under some conditions, there are relatively standardized sets of contextual information that will be broadly useful for judgments, such as PICO (Population, Intervention, Comparator, Outcome) and worksheets such as Risk of Bias scores for precisely focused systematic reviews of RCTs. The Oxford Centre for Evidence-Based Medicine maintains a list of useful worksheets for critical appraisal of various genres of research, including qualitative and case studies.
The fourth meeting of the International Collaboration for the Automation of Systematic Reviews (ICASR) was held 5–6 November 2019 in The Hague, the Netherlands. ICASR is an interdisciplinary group whose goal is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. The group seeks to facilitate the development and acceptance of automated techniques for systematic reviews. In 2019, the major themes discussed were the transferability of automation tools (i.e., tools developed for other purposes that might be used by systematic reviewers), the automated recognition of study design in multiple disciplines and applications, and approaches for the evaluation of automation tools.
Systematic reviews answer specific questions based on primary literature. However, systematic reviews on the same topic frequently disagree, and there are no approaches for understanding why at a glance. Our goal is to provide a visual summary that could be useful to researchers, policy makers, and health care professionals in understanding why health controversies persist in the expert literature over time. We present a case study of a single controversy in public health, around the question: "Is reducing dietary salt beneficial at a population level?" We define and visualize three new constructs: the overall evidence base, the evidence synthesized by systematic reviews (the inclusion network), and the unused evidence (isolated nodes). Our network visualization shows at a glance what evidence has been synthesized by each systematic review. Visualizing the temporal evolution of the network captures two key moments when new scientific opinions emerged, both associated with a turn to new sets of evidence that had little to no overlap with previously reviewed evidence. Limited overlap between the evidence reviewed was also found for systematic reviews published in the same year. Future work will focus on understanding the reasons for limited overlap and automating this methodology for medical literature databases.
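The inclusion-network idea above can be made concrete with a small sketch: treat each systematic review as the set of primary studies it synthesized, measure pairwise overlap between those sets, and flag evidence that no review has used. All review IDs, study IDs, and set contents below are invented for illustration.

```python
from itertools import combinations

# Each systematic review maps to the set of primary studies it synthesized
# (the inclusion network, flattened to one study set per review).
# These IDs are hypothetical.
reviews = {
    "SR-A": {"s1", "s2", "s3", "s4"},
    "SR-B": {"s3", "s4", "s5", "s6"},
    "SR-C": {"s7", "s8", "s9"},   # a review that turned to new evidence
}

# The overall evidence base: all candidate primary studies on the question.
evidence_base = {f"s{i}" for i in range(1, 11)}

def jaccard(a, b):
    """Overlap between two evidence sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Pairwise overlap between reviews: low values flag reviews built on
# largely non-overlapping evidence, one candidate explanation for
# persistent disagreement between reviews of the same question.
for (r1, set1), (r2, set2) in combinations(reviews.items(), 2):
    print(f"{r1} vs {r2}: overlap {jaccard(set1, set2):.2f}")

# Isolated nodes: evidence never synthesized by any review.
used = set().union(*reviews.values())
isolated = evidence_base - used
print("unused evidence:", sorted(isolated))
```

A real implementation would build these sets from reference lists in a medical literature database and plot the bipartite review-to-study network; the set arithmetic, however, is the core of the overlap measurement.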
A systematic review is a type of literature review designed to synthesize all available evidence on a given question. Systematic reviews require significant time and effort, which has led to the continuing development of computer support. This paper seeks to identify the gaps and opportunities for computer support. By interviewing experienced systematic reviewers from diverse fields, we identify the technical problems and challenges reviewers face in conducting a systematic review and their current uses of computer support. We propose potential research directions for how computer support could help to speed the systematic review process while retaining or improving review quality.
Systematic reviews provide more than just a summary of the research literature related to a particular topic or question; rather, they offer clear and compelling answers to questions related to the "who," "why," and "when" of studies. In this chapter, the authors draw on their experiences with systematic reviews (one as an editor of a highly regarded educational research journal, the other as a researcher and review author) to trace the growing popularity of systematic reviews in education literature and to pose a series of challenges to aspiring review authors to motivate and enliven their work. In particular, the authors stress the importance of melding scientific and rigorous review procedures with "stylish" academic writing that engages its audience through effective storytelling, attention to context (the people, places, policies, and practices represented in the studies under review), and clear implications for research and practice.
In the limit, systematic reviews do sophisticated computations over these quantitative "fixed" values of evidence level to reach an overall conclusion about the weight of evidence behind a single claim. These are extremely valuable! But they do presuppose a level of "fixedness" and consensus over what constitutes certainty or strength of evidence, which may or may not exist!
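A minimal sketch of what "computations over fixed values of evidence level" can look like: assign each study design a numeric weight and take an evidence-weighted mean of the effect estimates. The level-to-weight table and the studies below are entirely hypothetical; real frameworks (e.g., GRADE) are more nuanced and partly qualitative, which is exactly the "fixedness" assumption the note questions.

```python
# Hypothetical mapping from study design to evidence weight.
# Real appraisal frameworks do not reduce to a single number this cleanly.
LEVEL_WEIGHTS = {
    "RCT": 1.0,
    "cohort": 0.6,
    "case-control": 0.4,
    "case-report": 0.2,
}

# Hypothetical studies: (design, effect estimate).
studies = [
    ("RCT", 0.8),
    ("RCT", 0.5),
    ("cohort", -0.2),
    ("case-report", 1.5),
]

def weighted_effect(studies):
    """Evidence-weighted mean effect across studies."""
    total = sum(LEVEL_WEIGHTS[design] for design, _ in studies)
    return sum(LEVEL_WEIGHTS[design] * effect for design, effect in studies) / total

print(f"weighted effect: {weighted_effect(studies):.3f}")
```

Note that the conclusion is only as stable as the weight table: change `LEVEL_WEIGHTS` and the "overall weight of evidence" changes with it, which is the consensus problem the note points at.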
As far as the systematic review goes, it is not necessarily exhaustive: screening was also done manually, first by title, so there is a risk of false negatives when the relevant terms do not appear in the title. Not sure whether Google Scholar did query expansion or soft matching at the time; that might help.
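The soft-matching idea mentioned above can be sketched with the standard library: score each candidate title against a query and keep anything above a similarity threshold, rather than requiring an exact term match. The query, titles, and threshold below are invented; a real screening pipeline would need tuned thresholds and evaluation against a labelled set.

```python
from difflib import SequenceMatcher

def title_similarity(a, b):
    """Normalized character-level similarity between two titles (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical screening query and candidate titles.
query = "effects of dietary salt reduction on blood pressure"
candidates = [
    "Effects of Dietary Salt Reduction on Blood Pressure",
    "Dietary sodium reduction and blood pressure outcomes",
    "A trial of exercise interventions in adolescents",
]

THRESHOLD = 0.6  # hypothetical cut-off; would need tuning in practice
for title in candidates:
    score = title_similarity(query, title)
    decision = "keep" if score >= THRESHOLD else "drop"
    print(f"{score:.2f} {decision}  {title}")
```

Character-level similarity is a crude proxy: it catches case and minor wording differences but not synonymy (e.g., "salt" vs. "sodium" only partially matches), which is where query expansion would come in.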
Despite recognition that database search alone is inadequate even within the health sciences, it appears that reviewers in fields that have adopted systematic review are choosing to rely primarily, or only, on database search for information retrieval. This commentary reminds readers of factors that call into question the appropriateness of default reliance on database searches particularly as systematic review is adapted for use in new and lower consensus fields. It then discusses alternative methods for information retrieval that require development, formalisation, and evaluation. Our goals are to encourage reviewers to reflect critically and transparently on their choice of information retrieval methods and to encourage investment in research on alternatives.