Systematic reviews of adverse events have substantial problems with the reproducibility of data extraction
Author: 小柯机器人  Published: 2022/5/15 13:57:07

The team of Sunita Vohra at the University of Alberta, Canada, studied the validity of data extraction in evidence synthesis practice for adverse events. The findings were published in The BMJ (British Medical Journal) on 10 May 2022.

To investigate the validity of data extraction in systematic reviews of adverse events and the effect of data extraction errors on the results, and to develop a classification framework for data extraction errors to support further methodological research, the team conducted a reproducibility study of eligible systematic reviews retrieved from PubMed and published between 1 January 2015 and 1 January 2020. Metadata from the randomised controlled trials were extracted from the systematic reviews by four authors, who then referred to the original data sources (eg, the full texts and ClinicalTrials.gov) to reproduce the data used in these meta-analyses.

Systematic reviews were eligible if they were based on randomised controlled trials of healthcare interventions that reported safety as the exclusive outcome, included at least one pairwise meta-analysis with five or more randomised controlled trials, and provided a 2×2 table of event counts and sample sizes for the intervention and control arms of each trial in the meta-analysis. The primary outcome was data extraction errors, summarised at three levels: the study level, the meta-analysis level, and the systematic review level. The potential effect of these errors on the results was investigated further.
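The eligibility criteria hinge on each trial contributing a 2×2 table (events and sample sizes in the intervention and control arms) that is then pooled in a pairwise meta-analysis. As a rough illustration only, and not the authors' code, the sketch below shows how such tables might be represented and pooled; the trial figures and the choice of a Mantel-Haenszel fixed-effect odds ratio are assumptions made for illustration, as the paper does not prescribe a pooling model.

```python
# Minimal sketch: one 2x2 table per RCT, pooled with a Mantel-Haenszel
# fixed-effect odds ratio. All numbers are hypothetical.
# Each tuple: (events_intervention, n_intervention, events_control, n_control)
trials = [
    (12, 200, 8, 200),
    (30, 500, 22, 480),
    (5, 150, 9, 160),
    (18, 300, 14, 310),
    (7, 120, 6, 125),
]

def mantel_haenszel_or(tables):
    """Pooled odds ratio across 2x2 tables (Mantel-Haenszel, fixed effect)."""
    num = den = 0.0
    for a, n1, c, n0 in tables:
        b, d = n1 - a, n0 - c      # non-events in intervention and control arms
        n = n1 + n0                # total sample size of the trial
        num += a * d / n           # contribution to the OR numerator
        den += b * c / n           # contribution to the OR denominator
    return num / den

print(f"Pooled OR = {mantel_haenszel_or(trials):.2f}")
```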

The study included 201 systematic reviews and 829 pairwise meta-analyses involving 10386 randomised controlled trials. Data extraction could not be reproduced in 1762 (17.0%) of the 10386 trials. In 554 (66.8%) of the 829 meta-analyses, at least one randomised controlled trial had data extraction errors; 171 (85.1%) of the 201 systematic reviews contained at least one meta-analysis with data extraction errors.

The most common types of data extraction error were numerical errors (49.2%, 867/1762) and ambiguous errors (29.9%, 526/1762), the latter mainly caused by ambiguous definitions of the outcomes. These were followed by three other categories: zero assumption errors, misidentification errors, and mismatching errors. The team analysed the impact of these errors in 288 meta-analyses.

Data extraction errors changed the direction of the effect in 10 (3.5%) of the 288 meta-analyses and changed the significance of the P value in 19 (6.6%) of the 288 meta-analyses. Meta-analyses with two or more different types of error were more susceptible to these changes than those with only one type of error (28.2% v 10.4% for moderate changes and 12.8% v 3.2% for large changes; both differences were statistically significant).
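To make the reported change of effect direction concrete, the hypothetical sketch below re-pools the same kind of 2×2 tables after a single control-arm event count is mis-extracted. All figures are invented for illustration and are not data from the study.

```python
# Hypothetical illustration (not data from the study) of how one numerical
# extraction error can flip the direction of a pooled effect. The "extracted"
# table for trial 3 records 19 control events where the source reported 9.
def pooled_or(tables):
    # Mantel-Haenszel fixed-effect odds ratio over (a, n1, c, n0) 2x2 tables
    num = sum(a * (n0 - c) / (n1 + n0) for a, n1, c, n0 in tables)
    den = sum((n1 - a) * c / (n1 + n0) for a, n1, c, n0 in tables)
    return num / den

correct   = [(12, 200, 8, 200), (30, 500, 22, 480), (5, 150, 9, 160)]
extracted = [(12, 200, 8, 200), (30, 500, 22, 480), (5, 150, 19, 160)]

print(f"OR from source data:     {pooled_or(correct):.2f}")    # about 1.20: more events with intervention
print(f"OR with extraction error: {pooled_or(extracted):.2f}")  # about 0.95: direction reversed
```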

The results suggest that systematic reviews of adverse events potentially have serious problems with the reproducibility of data extraction, and these errors can mislead the conclusions. Implementation guidelines are urgently needed to help authors of future systematic reviews improve the validity of data extraction.

Appendix: original English abstract

Title: Validity of data extraction in evidence synthesis practice of adverse events: reproducibility study

Author: Chang Xu, Tianqi Yu, Luis Furuya-Kanamori, Lifeng Lin, Liliane Zorzela, Xiaoqin Zhou, Hanming Dai, Yoon Loke, Sunita Vohra

Issue&Volume: 2022/05/10

Abstract:

Objectives To investigate the validity of data extraction in systematic reviews of adverse events, the effect of data extraction errors on the results, and to develop a classification framework for data extraction errors to support further methodological research.

Design Reproducibility study.

Data sources PubMed was searched for eligible systematic reviews published between 1 January 2015 and 1 January 2020. Metadata from the randomised controlled trials were extracted from the systematic reviews by four authors. The original data sources (eg, full text and ClinicalTrials.gov) were then referred to by the same authors to reproduce the data used in these meta-analyses.

Eligibility criteria for selecting studies Systematic reviews were included when based on randomised controlled trials for healthcare interventions that reported safety as the exclusive outcome, with at least one pair meta-analysis that included five or more randomised controlled trials and with a 2×2 table of data for event counts and sample sizes in intervention and control arms available for each trial in the meta-analysis.

Main outcome measures The primary outcome was data extraction errors summarised at three levels: study level, meta-analysis level, and systematic review level. The potential effect of such errors on the results was further investigated.

Results 201 systematic reviews and 829 pairwise meta-analyses involving 10386 randomised controlled trials were included. Data extraction could not be reproduced in 1762 (17.0%) of 10386 trials. In 554 (66.8%) of 829 meta-analyses, at least one randomised controlled trial had data extraction errors; 171 (85.1%) of 201 systematic reviews had at least one meta-analysis with data extraction errors. The most common types of data extraction errors were numerical errors (49.2%, 867/1762) and ambiguous errors (29.9%, 526/1762), mainly caused by ambiguous definitions of the outcomes. These categories were followed by three others: zero assumption errors, misidentification, and mismatching errors. The impact of these errors was analysed on 288 meta-analyses. Data extraction errors led to 10 (3.5%) of 288 meta-analyses changing the direction of the effect and 19 (6.6%) of 288 meta-analyses changing the significance of the P value. Meta-analyses that had two or more different types of errors were more susceptible to these changes than those with only one type of error (for moderate changes, 11 (28.2%) of 39 v 26 (10.4%) of 249, P=0.002; for large changes, 5 (12.8%) of 39 v 8 (3.2%) of 249, P=0.01).

Conclusion Systematic reviews of adverse events potentially have serious issues in terms of the reproducibility of the data extraction, and these errors can mislead the conclusions. Implementation guidelines are urgently required to help authors of future systematic reviews improve the validity of data extraction.

DOI: 10.1136/bmj-2021-069155

Source: https://www.bmj.com/content/377/bmj-2021-069155

Journal information

BMJ-British Medical Journal: founded in 1840, published by the BMJ Publishing Group. Latest IF: 27.604
Official website: http://www.bmj.com/
Submission link: https://mc.manuscriptcentral.com/bmj