
Chapter 22: Prospective approaches to accumulating evidence

James Thomas, Lisa M Askie, Jesse A Berlin, Julian H Elliott, Davina Ghersi, Mark Simmonds, Yemisi Takwoingi, Jayne F Tierney, Julian PT Higgins

Key Points:
  • Cochrane Reviews should reflect the state of current knowledge, but maintaining their currency is a challenge due to resource limitations. It is difficult to know when a given review might become out of date, but tools are available to assist in identifying when a review might need updating.
  • Living systematic reviews are systematic reviews that are continually updated, with new evidence being incorporated as soon as it becomes available. They are useful in rapidly evolving fields where research is published frequently. New technologies and improved processes for data storage and reuse are being developed to facilitate the rapid identification and synthesis of new evidence.
  • A prospective meta-analysis is a meta-analysis of studies (usually randomized trials) that were identified or even collectively planned to be eligible for the meta-analysis before the results of the studies became known. They are usually undertaken by a collaborative group including authors of the studies to be included, and they usually collect and analyse individual participant data.
  • Formal sequential statistical methods are discouraged for standard updated meta-analyses in most circumstances for Cochrane Reviews. They should not be used for the main analyses, or to draw main conclusions. Sequential methods may, however, be used in the context of a prospectively planned series of randomized trials.

This chapter should be cited as: Thomas J, Askie LM, Berlin JA, Elliott JH, Ghersi D, Simmonds M, Takwoingi Y, Tierney JF, Higgins JPT. Chapter 22: Prospective approaches to accumulating evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

22.1 Introduction

Iain Chalmers' vision of "a library of trial overviews which will be updated when new data become available" (Chalmers 1986) became the mission and founding purpose of Cochrane. Thousands of systematic reviews are now published in the Cochrane Database of Systematic Reviews, presenting critical summaries of the evidence. However, maintaining the currency of these reviews through periodic updates, consistent with Chalmers' vision, has been a challenge. Moreover, as the global community of researchers has begun to see research in a cumulative fashion, rather than in terms of individual studies, the idea of 'prospective' meta-analyses has emerged. A prospective meta-analysis (PMA) begins with the idea that future studies will be integrated within a systematic review and works backwards to plan a programme of trials with the explicit purpose of their future integration.

The first part of this chapter covers methods for keeping abreast of the accumulating evidence to help a review team understand when a systematic review might need updating (see Section 22.2). This includes the processes that can be put into place to monitor relevant publications, and algorithms that have been proposed to determine whether or when it is appropriate to revisit the review to incorporate new findings. We outline a vision for regularly updated reviews, known as 'living' systematic reviews, which are continually updated, with new evidence being identified and incorporated as soon as it becomes available.

While evidence surveillance and living systematic reviews may require some modifications to review processes, and can dramatically improve the delivery time and currency of updates, they are nevertheless essentially following a retrospective model of reviewing the existing evidence base. The retrospective nature of most systematic reviews poses an inevitable challenge, in that the choice of what types of evidence to include may be influenced by authors' knowledge of the context and findings of the available studies. This might introduce bias into any aspect of the review's eligibility criteria including the selection of a target population, the nature of the intervention(s), choice of comparator and the outcomes to be assessed. The best way to overcome this problem is to identify evidence entirely prospectively, that is, before the results of the studies are known. Section 22.3 describes such prospectively planned meta-analyses.

Finally, Section 22.4 addresses concerns about the regular repeating of statistical tests in meta-analyses as they are updated over time. Cochrane actively discourages use of the notion of statistical significance in favour of reporting estimates and confidence intervals, so such concerns should not arise. Nevertheless, sequential approaches are an established method in randomized trials, and may play a role in a prospectively planned series of trials in a prospective meta-analysis.

22.2 Evidence surveillance: active monitoring of the accumulating evidence

22.2.1 Maintaining the currency of systematic reviews

Cochrane Reviews were conceived with the vision that they be kept up to date. For many years, a policy was in place of updating each Cochrane Review at least every two years. This policy was not closely followed due to a range of problems including: a lack of resources; the need to balance starting new reviews with maintaining older ones; the rapidly growing volume of research in some areas of health care and the paucity of new evidence in others; and challenges in knowing at any given point in time whether a systematic review was out of date and therefore perhaps giving misleading, and potentially harmful, advice.

Maintaining the currency of systematic reviews by incorporating new evidence is important in many cases. For example, one study suggested that while the conclusions of most reviews might be valid for five or more years, the findings of 23% might be out of date within two years, and 7% were outdated at the time of their publication (Shojania et al 2007). Systematic reviews in rapidly evolving fields are particularly at risk of becoming out of date, leading to the development of a range of methods for identifying when a systematic review might need to be updated.

22.2.2 Signals for updating

Strategies for prioritizing updates, and for updating only reviews that warrant it, have been developed (Martínez García et al 2017) (see Chapter 2, Section 2.4.1). A multi-component tool was proposed by Takwoingi and colleagues in 2013 (Takwoingi et al 2013). Garner and colleagues have refined this tool and described a staged process that starts by assessing the extent to which the review is up to date (including relevance of the question, impact of the review and implementation of appropriate and up-to-date methods), then examines whether relevant new evidence or new systematic review methodology are available, and then assesses the potential impact of updating the review in terms of whether the findings are likely to change (Garner et al 2016). For a detailed discussion of updating Cochrane Reviews, see online Chapter IV.

Information about the availability of new (or newly identified) evidence may come from a variety of sources and employ a diverse range of approaches (Garner et al 2016), including:

  • re-running the full search strategies in the original review;
  • using an abbreviated search strategy;
  • using literature notification services;
  • developing machine-learning algorithms based on study reports identified for the original review;
  • tracking studies in clinical trials (and other) registries;
  • checking studies included in related systematic reviews; and
  • other formal surveillance methods.

Searches of bibliographic databases may be streamlined by using literature notification services ('alerts'), whereby searches are run automatically at regular intervals, with potentially relevant new research being provided ('pushed') to the review authors (see Chapter 4, Section 4.4.9). Alternatively, it may be possible to run automatic searches via an application programming interface (API). Unfortunately, only some databases offer notification services and, of those that do not, only some offer an open API that allows review authors to set up their own automated searches. Thus, this approach is most useful when the studies likely to be relevant to the review are those indexed in systems that will work within a 'push' model (typically, large mainstream biomedical databases such as MEDLINE). A further key challenge, which is lessening over time, is that trials and other registries, websites and other unpublished sources typically require manual searches, so it is inappropriate to rely entirely on 'push' services to identify all new evidence. See Section 22.2.4 for further information on technological approaches to improve this.
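As an illustration of this kind of automated API search, the minimal sketch below queries the public NCBI E-utilities esearch endpoint for PubMed records added during a given date window. The query string, date range and result limit are placeholder assumptions, not a recommended search strategy.

```python
# Minimal sketch of an automated 'push'-style search via an open API.
# Assumes the public NCBI E-utilities esearch endpoint for PubMed; the query
# string and date window are placeholders to be adapted for a real review.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_new_pmids(query, mindate, maxdate):
    """Return PubMed IDs for records added within a given date window."""
    params = {
        "db": "pubmed",
        "term": query,
        "datetype": "edat",      # Entrez date: when the record was added
        "mindate": mindate,      # format YYYY/MM/DD
        "maxdate": maxdate,
        "retmax": 500,
        "retmode": "json",
    }
    response = requests.get(EUTILS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    pmids = fetch_new_pmids(
        query="(exercise[tiab]) AND (falls[tiab]) AND randomized controlled trial[pt]",
        mindate="2022/01/01",
        maxdate="2022/01/31",
    )
    print(f"{len(pmids)} potentially relevant new records to screen")
```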

Statistical methods have been proposed to assess the extent to which new evidence might affect the findings of a systematic review. Sample size calculations can incorporate the result of a current meta-analysis, thus providing information about how additional studies of a particular sample size could have an impact on the results of an updated meta-analysis (Sutton et al 2007, Roloff et al 2013). These methods demonstrate in many cases that new evidence may have very little impact on a random-effects meta-analysis if there is heterogeneity across studies, and they require assumptions that the future studies will be similar to the existing studies. Their practical use in deciding whether to update a systematic review may therefore be limited.
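As a simple illustration of the underlying idea (not an implementation of the cited methods), the sketch below assumes a fixed-effect model and a normally distributed effect estimate, and approximates the power of an updated meta-analysis to detect a given effect after one hypothetical new study is added. All numbers are invented.

```python
# Illustrative sketch: approximate power of an updated fixed-effect
# meta-analysis after adding one hypothetical new study.
from scipy.stats import norm

def updated_power(se_current, se_new_study, true_effect, alpha=0.05):
    """Power of the updated pooled estimate to detect `true_effect`."""
    info_current = 1 / se_current**2        # statistical information = inverse variance
    info_new = 1 / se_new_study**2
    se_updated = (info_current + info_new) ** -0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - abs(true_effect) / se_updated)

# Current pooled log risk ratio has SE = 0.20; a new trial expected to yield
# SE = 0.25 raises the power to detect logRR = 0.18 only modestly.
print(round(updated_power(0.20, 0.25, 0.18), 2))
```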

As part of their development of the aforementioned tool, Takwoingi and colleagues created a prediction equation based on findings from a sample of 65 updated Cochrane Reviews (Takwoingi et al 2013). They collated a list of numerical 'signals' as candidate predictors of changing conclusions on updating (including, for example, heterogeneity statistics in the original meta-analysis, presence of a large new study, and various measures of the amount of data in the new studies versus the original meta-analysis). Their prediction equation involved two of these signals: the ratio of statistical information (inverse variance) in the new versus the original studies, and the number of new studies. Further work is required to develop ways to operationalize this approach efficiently, as it requires detailed knowledge of the new evidence; once this is in place, much of the effort to perform the update has already been expended.
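The sketch below simply computes these two signals from extracted standard errors; it does not reproduce the published prediction equation, whose coefficients are reported in Takwoingi et al (2013). The standard errors shown are invented.

```python
# Sketch of the two 'signals' described above, assuming effect-estimate
# standard errors have been extracted for the original and new studies.
def update_signals(se_original, se_new):
    """Ratio of statistical information (inverse variance) in the new versus
    the original studies, and the number of new studies."""
    info_original = sum(1 / se**2 for se in se_original)
    info_new = sum(1 / se**2 for se in se_new)
    return info_new / info_original, len(se_new)

ratio, n_new = update_signals(se_original=[0.30, 0.25, 0.40], se_new=[0.20])
print(f"information ratio = {ratio:.2f}, new studies = {n_new}")
```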

22.2.3 'Living' systematic reviews

A 'living' systematic review (LSR) is a systematic review that is continually updated, with new (or newly identified) evidence incorporated as soon as it becomes available (Elliott et al 2014, Elliott et al 2017). Such regular and frequent updating has been suggested for reviews of high priority to decision makers, when certainty in the existing evidence is low or very low, and when there is likely to be new research evidence (Elliott et al 2017).

Continual surveillance for new research evidence is undertaken by frequent searches (e.g. monthly), and new information is incorporated into the review in a timely manner (e.g. within a month of its identification). Ongoing developments in technology, which we overview in Section 22.2.4, are making this frequency of searching and updating increasingly feasible. An important issue when setting up an LSR is that the search methods and anticipated frequency of review updates are made explicit in the review protocol. This transparency is helpful for end-users, giving them the opportunity to plan downstream decisions around the expected dates of new versions, and reducing the need for others to plan or undertake review updates. The maintenance of LSRs offers the possibility for decision makers to update their processes in line with evidence updates from the LSR; for instance, facilitating 'living' guidelines (Akl et al 2017), although ongoing challenges include the clear communication to authors, editors and users on what has changed when evidence is updated, and how to implement frequently updated guidelines. Practical guidance on initiating and maintaining LSRs has been developed by the Living Evidence Network.

22.2.4 Technologies to support evidence surveillance

Moving towards more regular updates of reviews may yield benefits in terms of their currency (Elliott et al 2014), but streamlining the necessary increase in searching is required if they are not to consume more resources than traditional approaches. Fortunately, new developments in data and computer science offer some potential for reductions in manual effort through automation. (For an overview of a range of these technologies see Chapter 4, Section 4.6.6.2.)

New systems (such as the Epistemonikos database, which contains the results of regular searches of multiple datasets) offer potential reductions in the number of databases that individuals need to search, as well as reducing duplication of effort across review teams. In addition, the growth in interest in open access publications has led to the creation of large datasets of open access bibliographic records, such as OpenCitations, CrossRef and Microsoft Academic. As these datasets continue to grow to contain all relevant records in their respective areas, they may also reduce the need for author teams to search as many different sources as they currently need to.

Undertaking regular searches also requires the regular screening of retrieved records for eligibility. Once the review has been set up and initial searches screened, subsequent updates can reduce manual screening effort using automation tools that 'learn' the review's eligibility criteria based on previous screening decisions by the review authors. Automation tools that are built on large numbers of records for more generic use are also available, such as Cochrane's RCT Classifier, which can be used to filter studies that are unlikely to be randomized trials from a set of records (Thomas et al 2017). Cochrane has also developed Cochrane Crowd, which crowdsources decisions classifying studies as randomized trials (see Chapter 4, Section 4.6.6.2).
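The sketch below is a generic illustration of a review-specific screening classifier of this kind (it is not Cochrane's RCT Classifier): a model trained on previous include/exclude decisions ranks newly retrieved records so that manual screening can prioritize the most likely includes. The records and decisions are toy data.

```python
# Illustrative sketch of a review-specific screening classifier that 'learns'
# from previous screening decisions and ranks new records for manual review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Titles from the original review and their screening decisions (toy examples).
previous_records = [
    "Randomised trial of exercise for falls prevention in older adults",
    "Qualitative study of patient views on exercise programmes",
    "Randomized controlled trial of balance training in the elderly",
    "Editorial: why falls matter",
]
previous_decisions = [1, 0, 1, 0]   # 1 = included at screening, 0 = excluded

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(previous_records, previous_decisions)

# Rank newly retrieved records by predicted relevance before manual screening.
new_records = ["A cluster randomised trial of tai chi to prevent falls"]
scores = model.predict_proba(new_records)[:, 1]
for record, score in sorted(zip(new_records, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {record}")
```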

Later stages of the review process can also be assisted using new technologies. These include risk-of-bias assessment, the extraction of structured data from tables in PDF files, information extraction from reports (such as identifying the number of participants in a study and characteristics of the intervention) and even the writing of review results. These technologies are less well advanced than those used for study identification.

These diverse tools aim to reduce manual effort at specific points in the standard systematic review process. However, Cochrane is also setting up systems that aim to change the study selection process quite substantially, as depicted in Figure 22.2.a. These developments begin with the prospective identification of relevant evidence, outside of the context of any given review, including bibliographic and trial registry records, through centralized routine searches of appropriate sources. These records flow through a 'pipeline' which classifies the records in detail using a combination of machine learning and human effort (including Cochrane Crowd). First, the type of study is determined and, if it is likely to be a randomized trial, then the record proceeds to be classified in terms of its review topic and its PICO elements using terms from the Cochrane Linked Data ontology. Finally, relevant data are extracted from the full text report. The viability of such a system depends upon its accuracy, which is contingent on human decisions being consistent and correct. For this reason, the early focus on randomized trials is appropriate, as a clear and widely understood definition exists for this type of study. Overall, the accuracy of Cochrane Crowd for identification of randomized trials exceeds 99%; and the machine learning system is similarly calibrated to achieve over 99% recall (Wallace et al 2017, Marshall et al 2018).

Setting up such a system for centralized study discovery is yielding benefits through economies of scale. For example, in the past the same decisions about the same studies have been made multiple times across different reviews because previously there was no way of sharing these decisions between reviews. Duplication in manual effort is being reduced substantially by ensuring that decisions made about a given record (e.g. whether or not it describes a randomized trial) are only made once. These decisions are then reflected in the inclusion of studies in the Cochrane Register of Studies, which can then be searched more efficiently for future reviews. The system benefits further from its scale by learning that if a record is relevant for one review, it is unlikely to be relevant for reviews with quite different eligibility criteria. Ultimately, the aim is for randomized trials to be identified for reviews through a single search of their PICO classifications in the central database, with new studies for existing reviews being identified automatically.

Figure 22.2.a Evidence Pipeline

22.3 Prospectively planned meta-analysis

22.3.1 What is a prospective meta-analysis?

A properly conducted systematic review defines the question to be addressed in advance of the identification of potentially eligible trials. Systematic reviews are by nature, however, retrospective because the trials included are usually identified after the trials have been completed and the results reported. A prospective meta-analysis (PMA) is a systematic review and meta-analysis of studies that are identified, evaluated and determined to be eligible for the meta-analysis before the relevant results of any of those studies become known. Most experience of PMA comes from their application to randomized trials. In this section we focus on PMAs of trials, although most of the same considerations will also apply to systematic reviews of other types of studies.

PMA can help to overcome some of the problems of retrospective meta-analyses of individual participant data or of aggregate data by enabling:

  1. hypotheses to be specified without prior knowledge of the results of individual trials (including hypotheses underlying subgroup analyses);
  2. selection criteria to be applied to trials prospectively; and
  3. analysis methods to be chosen before the results of individual trials are known, avoiding potential difficulties in interpretation arising from data-dependent decisions.

PMAs are usually initiated when trials have already started recruiting, and are carried out by collaborative groups including representatives from each of the participating trials. They have tended to involve collecting individual participant data (IPD), such that they have many features in common with retrospective IPD meta-analyses (see also Chapter 26).

If initiated early enough, PMA provides an opportunity for trial design, data collection and other trial processes to be standardized across the eligible ongoing trials. For example, the investigators may agree to use the same instrument to measure a particular outcome, and to measure the outcome at the same time-points in each trial. In a Cochrane Review of interventions for preventing obesity in children, for example, the variety and unreliability of some of the outcome measures made it difficult to combine data across trials (Summerbell et al 2005). A PMA of this question proposed a set of shared standards so that some of the issues raised by lack of standardization could be addressed (Steinbeck et al 2006).

PMAs based on IPD have been conducted by trialists in cardiovascular disease (Simes 1995, WHO-ISH Blood Pressure Lowering Treatment Trialists' Collaboration 1998), childhood leukaemia (Shuster and Gieser 1996, Valsecchi and Masera 1996), childhood and adolescent obesity (Steinbeck et al 2006, Askie et al 2010) and neonatology (Askie et al 2018). There are areas such as infectious diseases, however, where the opportunity to use PMA has largely been missed (Ioannidis and Lau 1999).

Where resources are limited, it may still be possible to undertake a prospective systematic review and meta-analysis based on aggregate data, rather than IPD, as we discuss in Section 22.3.6. In practice, these are often initiated at a later stage during the course of the trials, so there is less opportunity to standardize the conduct of the trials. However, it is possible to harmonize data for inclusion in meta-analysis.

22.3.1.1 What is the difference between a prospective meta-analysis and a large multicentre trial?

PMAs based on IPD are similar to multicentre clinical trials and have similar advantages, including increased sample size, increased diversity of treatment settings and populations, and the ability to examine heterogeneity of intervention effects across multiple settings. However, whereas traditional multicentre trials implement a single protocol across all sites to reduce variability in trial conduct among centres, PMAs allow investigators greater flexibility in how their trial is conducted. Sites can follow a local protocol appropriate to local circumstances, with the local protocol being aligned with elements of a PMA protocol that are common to all included trials.

PMAs may be an attractive alternative when a single, adequately sized trial is infeasible for practical or political reasons (Simes 1987, Probstfield and Applegate 1998). They may also be useful when two or more trials addressing the same question are started with the investigators unaware of the existence of the other trial(s): once these similar trials are identified, investigators can plan prospectively to combine their results in a meta-analysis.

Diversity in the design of the included trials is a potentially desirable feature of PMA as it may improve generalizability. For example, FICSIT (Frailty and Injuries: Cooperative Studies of Intervention Techniques) was a pre-planned meta-analysis of eight trials of exercise-based interventions in a frail elderly population (Schechtman and Ory 2001). The eight FICSIT sites defined their own interventions using site-specific endpoints and evaluations and differing entry criteria (except that all participants were elderly).

22.3.1.2 Negotiating collaboration

As with retrospective IPD meta-analyses, negotiating and establishing a strong collaboration with the participating trialists is essential to the success of a PMA (see Chapter 26, Sections 26.1.3 and 26.2.1). The collaboration usually has a steering group or secretariat that manages the project on a day-to-day basis. Because the collaboration must be formed before the results of any trial are known, an important focus of a PMA's collaborative efforts is often on reaching agreement on trial population, design and data collection methods for each of the participating trials. Ideally, the collaborative group will agree on a core common protocol and data items (including operational definitions) that will be collected across all trials. While individual trials can include local protocol amendments or additional data items, the investigators should ensure that these will not compromise the core common protocol elements.

It is advisable for the collaborative group to obtain an explicit (and signed) collaboration agreement from each of the trial groups. This should also encourage substantive contributions by the individual investigators, ensure 'buy-in' to the concept of the PMA, and facilitate input into the protocol.

22.3.1.3 Confidentiality of individual participant data and results

Confidentiality issues regarding data anonymity and security are similar to those for IPD meta-analyses (see Chapter 26, Section 26.2.4). Specific issues for PMA include planning how to deal with trials as they reach completion and publish their results, and how to manage issues relating to data and safety monitoring, including the impact of interim analyses of individual trials in the PMA, or possibly a pooled interim analysis of the PMA (see also Section 22.3.5).

22.3.2 Writing a protocol for a prospective meta-analysis

All PMAs should be registered on PROSPERO or a similar registry, and have a publicly available protocol. For an example protocol, see the NeOProM Collaboration protocol (Askie et al 2011). Developing a protocol for a PMA is conceptually similar to the process for a systematic review with a traditional meta-analysis component (Moher et al 2015). However, some considerations are unique to a PMA, as follows.

Objectives, eligibility and outcomes As for any systematic review or meta-analysis, the protocol for a PMA should specify its objectives and eligibility criteria for inclusion of the trials (including trial design, participants, interventions and comparators). In addition, it should specify which outcomes will be measured by all trials in the PMA, and when and how these should be measured. Additionally, details of subgroup analysis variables should be specified.

Search methods Just as for a retrospective systematic review, a systematic search should be performed to identify all eligible ongoing trials, in order to maximize precision. The protocol should describe in detail the efforts made to identify ongoing or planned trials, or to identify trialists with a common interest in developing a PMA, including how potential collaborators have been (or will be) located and approached to participate.

Trial details Details of trials already identified for inclusion should be listed in the protocol, including their trial registration identifiers, the anticipated number of participants and timelines for each participating trial. The protocol should state whether a signed agreement to collaborate has been obtained from the appropriate representative of each trial (e.g. the sponsor or principal investigator). The protocol should include a statement that, at the time of inclusion in the PMA, no trial results related to the PMA research question were known to anyone outside each trial's own data monitoring committee. If eligible trials are identified but not included in the PMA because their results related to the PMA research question are already known, the PMA protocol should outline how these data will be dealt with. For example, sensitivity analyses including data from these trials might be planned. The protocol should describe actions to be taken if subsequent trials are located while the PMA is in progress.

Data collection and analysis The protocol should outline the plans for the collection and analyses of data in a similar way to that of a standard, aggregate data meta-analysis or an IPD meta-analysis. Details of overall sample size and power calculations, interim analyses (if applicable) and subgroup analyses should be provided. For a prospectively planned series of trials, a sequential approach to the meta-analysis may be reasonable (see Section 22.4).

In an IPD-PMA, the protocol should describe what will happen if the investigators of some trials within the PMA are unable (or unwilling) to provide participant-level data. Would the PMA secretariat, for instance, accept appropriate summary data? The protocol should specify whether there is an intention to update the PMA data at regular intervals via ongoing cycles of data collection (e.g. five yearly). A detailed statistical analysis plan should be agreed and made public before the receipt or analysis of any data to be included in the PMA.

Management and co-ordination The PMA protocol should outline details of the project management structure (including any committees, see Section 22.3.1.2), the procedures for data management (how data are to be collected, the format required, when data will be required to be submitted, quality assurance procedures, etc; see Chapter 26, Section 26.2), and who will be responsible for the statistical analyses.

Publication policy It is important to have an authorship policy in place for the PMA (e.g. specifying that publications will be in the group name, but also including a list of individual authors), and a policy on manuscript preparation (e.g. formation of a writing committee, opportunities to comment on draft papers).

A unique issue that arises within the context of the PMA (which would generally not arise for a multicentre trial or a retrospective IPD meta-analysis) is whether or not individual trials should publish their own results separately and, if so, the timing of those publications. In addition to contributing to the PMA, it is likely that investigators will prefer trial-specific publications to appear before the combined PMA results are published. It is recommended that PMA publication(s) clearly indicate the sources of the included data and refer to prior publications of the individual included trials.

22.3.3 Data collection in a prospective meta-analysis

Participating trials in a PMA usually agree to supply individual participant data once their individual trials are completed and published. As trialists prospectively decide which data they will collect and in what format, the need to re-define and re-code supplied data should be less problematic than is often the case with a retrospective IPD meta-analysis.

Once data are received by the PMA secretariat, they should be rigorously checked using the same procedures as for IPD meta-analyses, including checking for missing or duplicated data, conducting data plausibility checks, assessing patterns of randomization, and ensuring the data supplied are up to date (see Chapter 26, Section 26.3). Data queries will be resolved by direct consultation with the individual trialists before being included in the final combined dataset for analysis.

22.3.4 Data analysis in prospective meta-analysis

Most PMAs will use similar analysis methods to those employed in retrospective IPD meta-analyses (see Chapter 26, Section 26.4). The use of participant-level data also permits more statistically powerful investigations of whether intervention effects vary according to participant characteristics, and in some cases allows prognostic modelling.

22.3.5 Interim analysis and data monitoring in prospective meta-analysis

Individual clinical trials frequently include a plan for interim analyses of data, particularly to monitor the safety of the interventions. PMA offers a unique opportunity to perform these interim analyses using data contributed by all trials. Under the auspices of an over-arching data safety monitoring committee (DSMC) for the PMA, available data may be combined from all trials for an interim analysis, or assessed separately by each trial and the results then shared among the DSMCs of all the participating trials.

The ability to perform combined interim analyses raises some ethical issues. Is it, for instance, appropriate to continue randomization within individual trials if an overall net benefit of an intervention has been demonstrated in the combined analysis? When results are not known in the subgroups of clinical interest, or for less common endpoints, should the investigators continue to proceed with the PMA to obtain further information regarding overall net clinical benefit? If each trial has its own DSMC, then communication among committees would be beneficial in this situation, as recommended by Hillman and Louis (Hillman and Louis 2003). This would be helpful, for example, in deciding whether or not to close an individual trial early because of evidence of efficacy from the combined interim data. It could be argued that knowledge of emerging, concerning, combined safety data from all participating trials might actually reduce the chances of spurious early stopping of an individual trial. It would be helpful, therefore, for the individual trial DSMCs within the PMA to adopt a mutual agreement that individual trials should not be stopped until the aims of the PMA, with respect to subgroups and uncommon endpoints (or 'net clinical benefit'), are achieved.

Another possible option might be to consider limiting enrolment in the continuing trials to participants in a particular subgroup of interest if such a decision makes clinical and statistical sense. In any case, it might be appropriate to apply the concepts of sequential meta-analysis methodology, as discussed in Section 22.4, to derive stringent stopping rules for the PMA as individual trial results become available.

22.3.6 Prospective approaches based on aggregate data: the Framework for Adaptive Meta-analysis (FAME)

The Framework for Adaptive Meta-analysis (FAME) is a combination of 'traditional' and prospective elements that is suitable for aggregate data (rather than IPD) meta-analysis and is responsive to emerging trial results. In the FAME approach, all methods are defined in a publicly available systematic review protocol, ideally before all trial results are known. The approach aims to take all eligible trials into account, including those that have been completed (and analysed) and those that are still to complete or report (Tierney et al 2017). FAME can be used to anticipate the earliest opportunity for a reliable aggregate data meta-analysis, which may be well in advance of all relevant results becoming available. The key steps of FAME are as follows.

1) Start the systematic review process whilst most trials are ongoing or yet to report

This makes it possible to plan the objectives, eligibility criteria, outcomes and analyses with little or no knowledge of eligible trial results, and also to anticipate the emergence of trial results so that completion of the review and meta-analysis can be aligned accordingly.

2) Search comprehensively for published, unpublished and ongoing eligible trials

This ensures that the meta-analysis planning is based on all potential trial data and that results can be placed in the context of all the current and likely future evidence. Conference proceedings, study registers and investigator networks are therefore important sources of information. Although unpublished and ongoing studies should be examined for any systematic review, evidence suggests that this is not standard practice (Page et al 2016).

3) Liaise with trialists to develop and maintain a detailed understanding of these trials

Liaising with trialists provides information on how trials are progressing and when results are likely to be available, but it also provides information on trial design, conduct and analysis, bringing greater clarity to eligibility screening and accuracy to risk-of-bias assessments (Vale et al 2013).

4) Predict if and when sufficient results will be available for reliable and robust meta-analysis (typically using aggregate data)

The information from steps 2 and 3 about how results will emerge over time allows a prospective assessment of the feasibility and timing of a reliable meta-analysis. A first indicator of reliability is that the projected number of participants or events that would be available for the meta-analysis constitutes an 'optimal information size' (Pogue and Yusuf 1997). In other words, they would provide sufficient power to detect realistic effects of the intervention under investigation, on the basis of standard methods of sample size calculation. A second indicator of reliability is that the anticipated participants or events would comprise a substantial proportion of the total eligible ('relative information size'). This serves to minimize the likelihood of reporting or other data availability biases. Such predictions and decisions for FAME should be outlined in the systematic review protocol.
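To make these two indicators concrete, the sketch below computes an illustrative optimal information size for a binary outcome using a standard two-proportion sample size formula, and compares it with assumed numbers of anticipated and total eligible participants. All values are invented.

```python
# Sketch of the two reliability indicators described above, under standard
# sample-size assumptions for a binary outcome (values are illustrative).
from scipy.stats import norm

def optimal_information_size(p_control, rrr, alpha=0.05, power=0.90):
    """Total participants needed, treating the meta-analysis as a single trial."""
    p_experimental = p_control * (1 - rrr)
    p_bar = (p_control + p_experimental) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_per_group = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p_control - p_experimental) ** 2
    return 2 * n_per_group

ois = optimal_information_size(p_control=0.30, rrr=0.20)
anticipated, total_eligible = 4200, 5600       # assumed projections for the trials
print(f"optimal information size ~ {ois:.0f} participants")
print(f"anticipated participants cover {anticipated / ois:.0%} of the OIS")
print(f"relative information size = {anticipated / total_eligible:.0%} of all eligible participants")
```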

5) Conduct meta-analysis and interpret results, taking account of available and unavailable data

Interpretation should consider how representative the actual data obtained are, and the potential impact of the results of unpublished or ongoing trials that were not included. This is in addition to the direction and precision of the meta-analysis result and consistency of effects across trials, as is standard.

6) Assess the value of updating the systematic review and meta-analysis in the future

If the results of a meta-analysis are not deemed definitive, it is important to ascertain whether there is likely to be value in updating with trial results that will emerge in the future and, if so, whether aggregate data will suffice or IPD might be needed.

FAME has been used to evaluate reliably the effects of prostate cancer interventions well in advance of all trial results being available (Vale et al 2016, Rydzewska et al 2017). In these reviews, collaboration with trial investigators provided access to pre-publication results, expediting the review process further and allowing publication in the same time frame as key trial results, increasing the visibility and potential impact of both. It also enabled access to additional outcome, subgroup and toxicity analyses, which allowed a more consistent and thorough analysis than is often possible with aggregate data. Such an approach requires a suitable non-disclosure agreement between the review authors and the trial authors.

Additionally, FAME could be used in the living systematic review context (Crequit et al 2016, Elliott et al 2017, Nikolakopoulou et al 2018), either to provide a suitable baseline meta-analysis, or to predict when a living update might be definitive. Combining multiple FAME reviews in a network meta-analysis (Vale et al 2018) offers an alternative to living network meta-analysis for the timely synthesis of competing treatments (Crequit et al 2016, Nikolakopoulou et al 2018).

22.4 Statistical analysis of accumulating evidence

22.4.1 Statistical issues arising from repeating meta-analyses

In any prospective or updated systematic review the body of evidence may grow over time, and meta-analyses may be repeated with the addition of new studies. If each meta-analysis is interpreted through the use of a statistical test of significance (e.g. categorizing a finding as 'statistically significant' if the P value is less than 0.05 or 'not statistically significant' otherwise), then on each occasion the conclusion has a 5% chance of being wrong if the null hypothesis (that there is no difference between experimental and comparator interventions on average) is true. Such an incorrect conclusion is often called a type I error. If significance tests are repeated each time a meta-analysis is updated with new studies, then the probability that at least one of the repeated meta-analyses will produce a P value lower than 0.05 under the null hypothesis (i.e. the probability of a type I error) is somewhat higher than 5% (Berkey et al 1996). This has led some researchers to be concerned about the statistical methods they were using when meta-analyses are repeated over time, for fear they were leading to spurious findings.
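A small simulation illustrates the inflation described here: studies with no true effect are added one at a time, a fixed-effect meta-analysis is re-tested after each addition, and the proportion of meta-analyses that are 'significant' at least once is recorded. The study characteristics are arbitrary.

```python
# Simulation sketch of type I error inflation from repeatedly testing an
# updated fixed-effect meta-analysis at the 5% level.
import numpy as np

rng = np.random.default_rng(1)
n_meta, max_studies, se = 10_000, 20, 0.2
false_positive_anywhere = 0

for _ in range(n_meta):
    effects = rng.normal(0.0, se, max_studies)   # true effect is zero
    weights = np.full(max_studies, 1 / se**2)
    cum_est = np.cumsum(effects * weights) / np.cumsum(weights)
    cum_se = 1 / np.sqrt(np.cumsum(weights))
    z = np.abs(cum_est / cum_se)
    if np.any(z > 1.96):                         # 'significant' at any update
        false_positive_anywhere += 1

print("Type I error with a single test: ~5%")
print(f"Type I error across {max_studies} repeated updates: "
      f"{false_positive_anywhere / n_meta:.1%}")
```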

A related concern is that we may wish to determine when there is enough evidence in the meta-analysis to be able to say that the question is sufficiently well answered. Traditionally, 'enough evidence' has been interpreted as information with enough statistical power (e.g. 80% or 90% power) to detect a specific magnitude of effect using a significance test. This requires that attention be paid to type II error, which is the chance that a true (non-null) effect will fail to be picked up by the test. When meta-analyses are repeated over time, statistical power may be expected to increase as new studies are added. Nonetheless, just as type I error is not controlled across repeated analyses, neither is type II error.

Statistical methods for meta-analysis have been proposed to address these concerns. They are known as sequential approaches, and are derived from methods commonly used in clinical trials. The appropriateness of applying sequential methods in the context of a systematic review has been hotly debated. We describe the main methods in brief in Section 22.4.2, and in Section 22.4.3 we explain that the use of sequential methods is explicitly discouraged in the context of a Cochrane Review, but may be reasonable in the context of a PMA.

22.4.2 Sequential statistical methods for meta-analysis

Interim analyses are often performed in randomized trials, so that the trial can be stopped early if there is convincing evidence that the intervention is beneficial or harmful. Sequential methods have been developed that aim to control type I and II errors in the context of a clinical trial. These methods have been adapted for prospectively adding studies to a meta-analysis, rather than prospectively adding participants to a trial.

The main methods involve pre-specification of a stopping rule. The stopping rule is informed by considerations of (i) type I error; (ii) type II error; (iii) a clinically important magnitude of effect; and (iv) the desired properties of the stopping rule (e.g. whether it is particularly important to avoid stopping too soon). To control type II error, it is necessary to quantify the amount of information that has accumulated to date. This can be measured using sample size (number of participants) or using statistical information (i.e. the sum of the inverse-variance weights in the meta-analysis).

Implementation of the stopping rule can be done in several ways. One possibility is to perform a statistical test in the usual way but to lower the threshold for interpreting the result as statistically significant. This penalization of the type I error rate at each analysis may be viewed as 'spending' (or distributing) proportions of the error over the repeated analyses. The amount of penalization is specified to create the stopping rule, and is referred to as an 'alpha spending function' (because alpha is often used as shorthand for the acceptable type I error rate).
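The sketch below evaluates two standard spending-function forms (the Lan-DeMets approximations to O'Brien-Fleming-type and Pocock-type designs) at five equally spaced updates, showing how much of a total 5% type I error is spent at each look. Deriving the corresponding adjusted critical values also requires the joint distribution of the repeated test statistics, which dedicated software handles; this sketch only shows the spending itself.

```python
# Sketch of two standard alpha-spending functions, showing how much of a
# total 5% type I error is 'spent' by each interim update.
import numpy as np
from scipy.stats import norm

def obrien_fleming_spend(t, alpha=0.05):
    """Cumulative alpha spent at information fraction t (O'Brien-Fleming type)."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def pocock_spend(t, alpha=0.05):
    """Cumulative alpha spent at information fraction t (Pocock type)."""
    return alpha * np.log(1 + (np.e - 1) * t)

fractions = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # five equally spaced updates
for name, fn in [("O'Brien-Fleming", obrien_fleming_spend), ("Pocock", pocock_spend)]:
    cumulative = fn(fractions)
    incremental = np.diff(np.concatenate([[0.0], cumulative]))
    print(name, np.round(incremental, 4))
```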

An alternative way of implementing a stopping rule is to plot the path of the accumulating evidence. Specifically, the plot is a scatter plot of a cumulative measure of effect magnitude (one convenient option is the sum of the study effect estimates times their meta-analytic weights) against a cumulative measure of statistical information (a convenient choice is the sum of the meta-analytic weights) at each update. The plotted points are compared with a plot 'boundary', which is determined uniquely by the four pre-specified considerations of a stopping rule noted above. A conclusive result is deemed to be achieved if a point in the plot falls outside the boundary. For meta-analysis, a rectangular boundary has been recommended, as this reduces the chance of crossing a boundary very early on; this also produces a scheme that is equivalent to the most popular alpha-spending approach proposed by O'Brien and Fleming (O'Brien and Fleming 1979). Additional stopping boundaries can be added to test for futility, so the updating process can be stopped if it is unlikely that a meaningful effect will be found.

Methods translate directly from sequential clinical trials to a sequential fixed-effect meta-analysis. Random-effects meta-analyses are more problematic. For sequential methods based on statistical weights, the between-study variation (heterogeneity) is naturally incorporated. For methods based on sample size, adjustments can be made to the target sample size to reflect the impact of between-study variation. Either way, there are important technical issues with the methods because between-study variation impacts on the results of a random-effects meta-analysis and it is impossible to anticipate how much between-study variation there will be in the accumulating evidence. Whereas it would be natural to expect that adding studies to a meta-analysis increases precision, this is not necessarily the case under a random-effects model. Specifically, if a new set of studies is added to a meta-analysis amongst which there is substantially more heterogeneity than in the previous studies, then the estimated between-study variance will go up, and the confidence interval for the new totality of studies may become wider rather than narrower. Possibilities to reduce the impact of this include: (i) using a fixed value (a prior guess) for the amount of between-study heterogeneity throughout the sequential scheme; and (ii) using a high estimate of the amount of heterogeneity during the early stages of the sequential scheme.
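The following sketch illustrates this point with invented data: a DerSimonian-Laird random-effects meta-analysis of four homogeneous studies is updated with two highly discrepant new studies, and the confidence interval widens rather than narrows.

```python
# Sketch: under a DerSimonian-Laird random-effects model, adding studies that
# are much more heterogeneous than the existing ones can widen the CI.
import numpy as np

def random_effects_ci(estimates, ses):
    y, v = np.asarray(estimates), np.asarray(ses) ** 2
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # DerSimonian-Laird estimate
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = 1 / np.sqrt(np.sum(w_star))
    return pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

original = ([0.10, 0.12, 0.08, 0.11], [0.05, 0.06, 0.05, 0.06])
added    = ([0.60, -0.40], [0.06, 0.06])           # new, highly discrepant studies
print("CI before update:", np.round(random_effects_ci(*original), 3))
print("CI after update: ", np.round(random_effects_ci(original[0] + added[0],
                                                      original[1] + added[1]), 3))
```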

Sequential approaches can be inverted to produce a series of confidence intervals, one for each update, which reflects the sequential scheme. This allows representation of the results in a conventional forest plot. The interpretation of these confidence intervals is that we can be 95% confident that all confidence intervals in the entire series of adjusted confidence intervals (across all updates) contain the true intervention effect. The adjusted confidence interval excludes the null value only if a stopping boundary is crossed. This is a somewhat technical interpretation that is unlikely to be helpful in the interpretation of results within any particular update of a review.

There are several choices to make when deciding on a sequential approach to meta-analysis. Two particular sets of choices have been articulated in papers by Wetterslev, Thorlund, Brok and colleagues, and by Whitehead, Higgins and colleagues.

The first group refer to their methods as 'trial sequential analysis' (TSA). They use the principle of alpha spending and articulate the desirable total amount of information in terms of sample size (Wetterslev et al 2008, Brok et al 2009, Thorlund et al 2009). This sample size is calculated in the same way as if the meta-analysis were a single clinical trial, by setting a desired type I error, an assumed effect size, and the desired statistical power to detect that effect. They recommended that the sample size be adjusted for heterogeneity, using either some pre-specified estimate of heterogeneity or the best current estimate of heterogeneity in the meta-analysis. The adjustment is generally made using a statistic called D2, which produces a larger required sample size, although the more widely used I2 statistic may be used instead (Wetterslev et al 2009).
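As a minimal sketch of the heterogeneity adjustment described here (assuming an adjustment factor of 1/(1 - D2); an I2-based factor would be analogous), the required information size computed as for a single trial is inflated as follows. The numbers are illustrative.

```python
# Sketch of the heterogeneity adjustment: the required information size
# computed as for a single trial is inflated by 1/(1 - D2) (or 1/(1 - I2)).
def adjusted_information_size(unadjusted_ris, diversity):
    """Inflate the required information size for between-study heterogeneity."""
    return unadjusted_ris / (1 - diversity)

ris = 2300                        # participants needed if this were one trial
print(round(adjusted_information_size(ris, diversity=0.25)))   # D2 = 25% -> ~3067
```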

Whitehead and Higgins implemented a boundaries approach and represent information using statistical information (specifically, the sum of the meta-analytic weights) (Whitehead 1997, Higgins et al 2011). As noted, this implicitly adjusts for heterogeneity because as heterogeneity increases, the information contained in the meta-analysis decreases. In this approach, the cumulative information can decrease between updates as well as increase (i.e. the path can go backwards in relation to the boundary). These authors suggest a parallel Bayesian approach to updating the estimate of between-study heterogeneity, starting with an informative prior distribution, to reduce the chance that the path will go backwards (Higgins et al 2011). If the prior estimate of heterogeneity is suitably large, the method can account for underestimation of heterogeneity early in the updating process.

22.4.3 Using sequential approaches to meta-analysis in Cochrane Reviews

Formal sequential meta-analysis approaches are discouraged for updated meta-analyses in most circumstances within the Cochrane context. They should not be used for the main analyses, or to draw main conclusions. This is for the following reasons.

  1. The results of each meta-analysis, conducted at any point in time, indicate the current best evidence of the estimated intervention effect and its accompanying uncertainty. These results need to stand on their own merit. Decision makers should use the currently available evidence, and their decisions should not be influenced by previous meta-analyses or plans for future updates.
  2. Cochrane Review authors should interpret evidence on the basis of the estimated magnitude of the effect of intervention and its uncertainty (usually quantified using a confidence interval) and not on the basis of statistical significance (see Chapter 15, Section 15.3.1). In particular, Cochrane Review authors should not draw binary interpretations of intervention effects as present or absent, based on defining results as 'significant' or 'non-significant' (see Chapter 15, Section 15.3.2).
  3. There are important differences between the context of an individual trial and the context of a meta-analysis. Whereas a trialist is in control of recruitment of further participants, the meta-analyst (except in the context of a prospective meta-analysis) has no control over designing or affecting trials that are eligible for the meta-analysis, so it would be impossible to construct a set of workable stopping rules which require a pre-planned set of interim analyses. Conversely, planned adjustments for future updates may be unnecessary if new evidence does not appear.
  4. A meta-analysis will not usually relate to a single decision or single decision maker, so that a sequential adjustment will not capture the complexity of the decision-making process. Furthermore, Cochrane summarizes evidence for the benefit of multiple end users including patients, health professionals, policy decision makers and guideline developers. Different decision makers may choose to use the evidence differently and reach different decisions based on different priorities and contexts. They might not agree with sequential adjustments or stopping rules set up by review authors.
  5. Heterogeneity is prevalent in meta-analyses and random-effects models are commonly used when heterogeneity is present. Sequential methods have important methodological limitations when heterogeneity is present.

It remains important for review authors to avoid over-optimistic conclusions being drawn from a small number of studies. Review authors need to be particularly careful not to over-interpret promising findings when there is very little evidence. Such findings could be due to chance, to bias, or to use of meta-analytic methods that have poor properties when there are few studies (see Chapter 10, Section 10.10.4), and might be overturned at later updates of the review. Evaluating the confidence in the body of evidence, for example using the GRADE framework, should highlight when there is insufficient information (i.e. too much imprecision) for firm conclusions to be drawn.

Sequential approaches to meta-analysis may be used in Cochrane Reviews in two situations.

1. Sequential methods may be used in the context of a prospectively planned series of clinical trials, when the primary analysis is a meta-analysis of the findings across trials, as discussed in Section 22.3. In this case, the meta-analysts are in control of the production of new data and crossing a boundary in a sequential scheme would indicate that no further data need to be collected.

2. Sequential methods may be performed as secondary analyses in Cochrane Reviews, to provide an additional interpretation of the data from a specific perspective. If sequential approaches are to be applied, then (i) they must be planned prospectively (and not retrospectively), with a full analysis plan provided in the protocol; and (ii) the assumptions underlying the sequential design must be clearly conveyed and justified, including the parameters determining the design such as the clinically important effect size, assumptions about heterogeneity, and both the type I and type II error rates.

22.5 Chapter information

Authors: James Thomas, Lisa M Askie, Jesse A Berlin, Julian H Elliott, Davina Ghersi, Mark Simmonds, Yemisi Takwoingi, Jayne F Tierney, Julian PT Higgins

Acknowledgements: The following contributed to the policy on use of sequential approaches to meta-analysis: Christopher Schmid, Stephen Senn, Jonathan Sterne, Elena Kulinskaya, Martin Posch, Kit Roes and Joanne McKenzie.

Funding: JFT's work is funded by the UK Medical Research Council (MC_UU_12023/24). JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. JHE is supported by a Career Development Fellowship from the Australian National Health and Medical Research Council (APP1126434). Development of Cochrane's Evidence Pipeline and RCT Classifier was supported by Cochrane's Game Changer Initiative and the Australian National Health and Medical Research Council through a Partnership Project Grant (APP1114605). JPTH is a member of the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. JPTH received funding from National Institute for Health Research Senior Investigator award NF-SI-0617-10145. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

22.6 References

Akl EA, Meerpohl JJ, Elliott J, Kahale LA, Schünemann HJ, Living Systematic Review Network. Living systematic reviews: 4. Living guideline recommendations. Journal of Clinical Epidemiology 2017; 91: 47-53.

Askie LM, Baur LA, Campbell K, Daniels L, Hesketh K, Magarey A, Mihrshahi S, Rissel C, Simes RJ, Taylor B, Taylor R, Voysey M, Wen LM, on behalf of the EPOCH Collaboration. The Early Prevention of Obesity in CHildren (EPOCH) Collaboration - an Individual Patient Data Prospective Meta-Analysis [study protocol]. BMC Public Health 2010; 10: 728.

Askie LM, Brocklehurst P, Darlow BA, Finer N, Schmidt B, Tarnow-Mordi W. NeOProM: Neonatal Oxygenation Prospective Meta-analysis Collaboration study protocol. BMC Pediatrics 2011; 11: 6.

Askie LM, Darlow BA, Finer N, et al. Association between oxygen saturation targeting and death or disability in extremely preterm infants in the neonatal oxygenation prospective meta-analysis collaboration. JAMA 2018; 319: 2190-2201.

Berkey CS, Mosteller F, Lau J, Antman EM. Uncertainty of the time of first significance in random effects cumulative meta-analysis. Controlled Clinical Trials 1996; 17: 357-371.

Brok J, Thorlund K, Wetterslev J, Gluud C. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses. International Journal of Epidemiology 2009; 38: 287-298.

Chalmers I. Electronic publications for updating controlled trial reviews. The Lancet 1986; 328: 287.

Crequit P, Trinquart L, Yavchitz A, Ravaud P. Wasted research when systematic reviews fail to provide a complete and up-to-date evidence synthesis: the example of lung cancer. BMC Medicine 2016; 14: 8.

Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JPT, Mavergames C, Gruen RL. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Medicine 2014; 11: e1001603.

Elliott JH, Synnot A, Turner T, Simmonds M, Akl EA, McDonald S, Salanti G, Meerpohl J, MacLehose H, Hilton J, Tovey D, Shemilt I, Thomas J, Living Systematic Review Network. Living systematic review: 1. Introduction-the why, what, when, and how. Journal of Clinical Epidemiology 2017; 91: 23-30.

Garner P, Hopewell S, Chandler J, MacLehose H, Schünemann HJ, Akl EA, Beyene J, Chang S, Churchill R, Dearness K, Guyatt G, Lefebvre C, Liles B, Marshall R, Martinez Garcia L, Mavergames C, Nasser M, Qaseem A, Sampson M, Soares-Weiser K, Takwoingi Y, Thabane L, Trivella M, Tugwell P, Welsh E, Wilson EC, Schünemann HJ, Panel for Updating Guidance for Systematic Reviews (PUGs). When and how to update systematic reviews: consensus and checklist. BMJ 2016; 354: i3507.

Higgins JPT, Whitehead A, Simmonds M. Sequential methods for random-effects meta-analysis. Statistics in Medicine 2011; 30: 903-921.

Hillman DW, Louis TA. DSMB case report: decision making when a similar clinical trial is stopped early. Controlled Clinical Trials 2003; 24: 85-91.

Ioannidis JPA, Lau J. State of the evidence: current status and prospects of meta-analysis in infectious diseases. Clinical Infectious Diseases 1999; 29: 1178-1185.

Marshall IJ, Noel-Storr A, Kuiper J, Thomas J, Wallace BC. Machine learning for identifying Randomized Controlled Trials: An evaluation and practitioner's guide. Research Synthesis Methods 2018; 9: 602-614.

Martínez García L, Pardo-Hernandez H, Superchi C, Niño de Guzman E, Ballesteros M, Ibargoyen Roteta N, McFarlane E, Posso M, Roqué IFM, Rotaeche Del Campo R, Sanabria AJ, Selva A, Solà I, Vernooij RWM, Alonso-Coello P. Methodological systematic review identifies major limitations in prioritization processes for updating. Journal of Clinical Epidemiology 2017; 86: 11-24.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews 2015; 4: 1.

Nikolakopoulou A, Mavridis D, Furukawa TA, Cipriani A, Tricco AC, Straus SE, Siontis GCM, Egger M, Salanti G. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study. BMJ 2018; 360: k585.

O'Brien PC, Fleming TR. A multiple testing procedure for clinical trials. Biometrics 1979; 35: 549-556.

Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, Catalá-López F, Li L, Reid EK, Sarkis-Onofre R, Moher D. Epidemiology and reporting characteristics of systematic reviews of biomedical research: A cross-sectional study. PLoS Medicine 2016; 13: e1002028.

Pogue JM, Yusuf S. Cumulating evidence from randomized trials: utilizing sequential monitoring boundaries for cumulative meta-analysis. Controlled Clinical Trials 1997; 18: 580-593; discussion 661-586.

Probstfield J, Applegate WB. Prospective meta-analysis: Ahoy! A clinical trial? Journal of the American Geriatrics Society 1998; 43: 452-453.

Roloff V, Higgins JPT, Sutton AJ. Planning future studies based on the conditional power of a meta-analysis. Statistics in Medicine 2013; 32: 11-24.

Rydzewska LHM, Burdett S, Vale CL, Clarke NW, Fizazi K, Kheoh T, Mason MD, Miladinovic B, James ND, Parmar MKB, Spears MR, Sweeney CJ, Sydes MR, Tran N, Tierney JF, STOPCaP Abiraterone Collaborators. Adding abiraterone to androgen deprivation therapy in men with metastatic hormone-sensitive prostate cancer: A systematic review and meta-analysis. European Journal of Cancer 2017; 84: 88-101.

Schechtman K, Ory M. The effects of exercise on the quality of life of frail older adults: a preplanned meta-analysis of the FICSIT trials. Annals of Behavioural Medicine 2001; 23: 186-197.

Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine 2007; 147: 224-233.

Shuster JJ, Gieser PW. Meta-analysis and prospective meta-analysis in childhood leukemia clinical research. Annals of Oncology 1996; 7: 1009-1014.

Simes RJ. Confronting publication bias: a cohort design for meta-analysis. Statistics in Medicine 1987; 6: 11-29.

Simes RJ. Prospective meta-analysis of cholesterol-lowering studies: the Prospective Pravastatin Pooling (PPP) Project and the Cholesterol Treatment Trialists' (CTT) Collaboration. American Journal of Cardiology 1995; 76: 122c-126c.

Steinbeck KS, Baur LA, Morris AM, Ghersi D. A proposed protocol for the development of a register of trials of weight management of childhood overweight and obesity. International Journal of Obesity 2006; 30: 2-5.

Summerbell CD, Waters E, Edmunds LD, Kelly S, Brown T, Campbell KJ. Interventions for preventing obesity in children. Cochrane Database of Systematic Reviews 2005; 3: CD001871.

Sutton AJ, Cooper NJ, Jones DR, Lambert PC, Thompson JR, Abrams KR. Evidence-based sample size calculations based upon updated meta-analysis. Statistics in Medicine 2007; 26: 2479-2500.

Takwoingi Y, Hopewell S, Tovey D, Sutton AJ. A multicomponent decision tool for prioritising the updating of systematic reviews. BMJ 2013; 347: f7191.

Thomas J, Noel-Storr A, Marshall I, Wallace B, McDonald S, Mavergames C, Glasziou P, Shemilt I, Synnot A, Turner T, Elliott J, Living Systematic Review Network. Living systematic reviews: 2. Combining human and machine effort. Journal of Clinical Epidemiology 2017; 91: 31-37.

Thorlund K, Devereaux PJ, Wetterslev J, Guyatt G, Ioannidis JPA, Thabane L, Gluud LL, Als-Nielsen B, Gluud C. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses? International Journal of Epidemiology 2009; 38: 276-286.

Tierney J, Vale CL, Burdett S, Fisher D, Rydzewska L, Parmar MKB. Timely and reliable evaluation of the effects of interventions: a framework for adaptive meta-analysis (FAME). Trials 2017; 18.

Vale CL, Tierney JF, Burdett S. Can trial quality be reliably assessed from published reports of cancer trials: evaluation of risk of bias assessments in systematic reviews. BMJ 2013; 346: f1798.

Vale CL, Burdett S, Rydzewska LHM, Albiges L, Clarke NW, Fisher D, Fizazi K, Gravis G, James ND, Mason MD, Parmar MKB, Sweeney CJ, Sydes MR, Tombal B, Tierney JF, STOpCaP Steering Group. Addition of docetaxel or bisphosphonates to standard of care in men with localised or metastatic, hormone-sensitive prostate cancer: a systematic review and meta-analyses of aggregate data. Lancet Oncology 2016; 17: 243-256.

Vale CL, Fisher DJ, White IR, Carpenter JR, Burdett S, Clarke NW, Fizazi K, Gravis G, James ND, Mason MD, Parmar MKB, Rydzewska LH, Sweeney CJ, Spears MR, Sydes MR, Tierney JF. What is the optimal systemic treatment of men with metastatic, hormone-naive prostate cancer? A STOPCAP systematic review and network meta-analysis. Annals of Oncology 2018; 29: 1249-1257.

Valsecchi MG, Masera G. A new challenge in clinical research in childhood ALL: the prospective meta-analysis strategy for intergroup collaboration. Annals of Oncology 1996; 7: 1005-1008.

Wallace BC, Noel-Storr A, Marshall IJ, Cohen AM, Smalheiser NR, Thomas J. Identifying reports of randomized controlled trials (RCTs) via a hybrid machine learning and crowdsourcing approach. Journal of the American Medical Informatics Association 2017; 24: 1165-1168.

Wetterslev J, Thorlund K, Brok J, Gluud C. Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis. Journal of Clinical Epidemiology 2008; 61: 64-75.

Wetterslev J, Thorlund K, Brok J, Gluud C. Estimating required information size by quantifying diversity in random-effects model meta-analyses. BMC Medical Research Methodology 2009; 9: 86.

Whitehead A. A prospectively planned cumulative meta-analysis applied to a series of concurrent clinical trials. Statistics in Medicine 1997; 16: 2901-2913.

WHO-ISH Blood Pressure Lowering Treatment Trialists' Collaboration. Protocol for prospective collaborative overviews of major randomised trials of blood-pressure-lowering treatments. Journal of Hypertension 1998; 16: 127-137.
