Most studies comparing the performance of medical-imaging artificial intelligence with that of clinicians are unreliable
Author: 小柯机器人  Published: 2020/3/28 21:03:58

A group led by Myura Nagendran at Imperial College London, UK, has conducted a systematic review of studies comparing the performance of artificial intelligence with that of clinicians. The findings were published in The BMJ on March 25, 2020.

Medical imaging has seen growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that, when fed raw data, they develop their own representations needed for pattern recognition. The algorithm learns for itself which features of an image matter for classification, rather than being told by humans which features to use.
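The idea that a CNN learns its own filters from raw pixels can be illustrated with a minimal, pure-Python sketch (not from the study under review; the images, labels, learning rate, and training loop are all invented for illustration). A single 3×3 convolutional filter, initialized blank, is trained by gradient descent to separate a raw image containing a vertical edge from a featureless one, with no hand-specified features:

```python
import math

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a 2D image with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(3) for b in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

def predict(img, kernel):
    """Sum-pool the feature map, then squash the score to a probability."""
    score = sum(v for row in conv2d(img, kernel) for v in row)
    return 1 / (1 + math.exp(-score))

# Invented toy data: a 5x5 image with a vertical edge (label 1) and a
# featureless flat image (label 0).
edge = [[0, 0, 0, 1, 1] for _ in range(5)]
flat = [[0.5] * 5 for _ in range(5)]
data = [(edge, 1.0), (flat, 0.0)]

# The filter starts blank: nobody tells it to look for edges.
kernel = [[0.0] * 3 for _ in range(3)]
lr = 0.01
for _ in range(300):
    for img, label in data:
        p = predict(img, kernel)
        # Cross-entropy gradient w.r.t. each filter weight is (p - label)
        # times the sum of the raw pixels that weight touches.
        for a in range(3):
            for b in range(3):
                g = (p - label) * sum(img[i + a][j + b]
                                      for i in range(3) for j in range(3))
                kernel[a][b] -= lr * g

# After training, the filter responds to the edge image but not the flat one.
assert predict(edge, kernel) > 0.8 > 0.2 > predict(flat, kernel)
```

Real diagnostic CNNs differ only in scale: many stacked filters, nonlinearities, and millions of labelled images, but the same principle of learning the discriminative pixel patterns directly from the data.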

Many studies compare the diagnostic performance of deep learning algorithms in medical imaging with that of clinicians. To systematically assess the design, reporting standards, risk of bias, and claims of these studies, the group searched Medline, Embase, and other large databases for relevant studies published between 2010 and June 2019 and conducted a systematic review.

The group found 10 records of deep learning randomised clinical trials, of which two had been published and eight were ongoing. Of the 81 non-randomised studies identified, only nine were prospective and just six were tested in a real-world clinical setting. The median number of medical experts in the comparator groups was only four.

Full access to all datasets and code was severely limited. The overall risk of bias was high in 58 of the 81 studies, which also adhered poorly to reporting standards. The abstracts of 61 studies stated that the performance of artificial intelligence was not inferior to that of clinicians, yet only 31 studies (38%) stated that further prospective studies or trials were needed.

In summary, few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised studies are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small.

Appendix: original English text

Title: Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies in medical imaging

Author: Myura Nagendran, Yang Chen, Christopher A Lovejoy, Anthony C Gordon, Matthieu Komorowski, Hugh Harvey, Eric J Topol, John P A Ioannidis, Gary S Collins, Mahiben Maruthappu

Issue&Volume: 2020/03/25

Abstract:

Objective To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians.

Design Systematic review.

Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019.

Eligibility criteria for selecting studies Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with a contemporary group of one or more expert clinicians. Medical imaging has seen a growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that when CNNs are fed with raw data, they develop their own representations needed for pattern recognition. The algorithm learns for itself the features of an image that are important for classification rather than being told by humans which features to use. The selected studies aimed to use medical imaging for predicting absolute risk of existing disease or classification into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs tagged with a label such as pneumothorax or no pneumothorax and the CNN learning which pixel patterns suggest pneumothorax.

Review methods Adherence to reporting standards was assessed by using CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed by using the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies.

Results Only 10 records were found for deep learning randomised clinical trials, two of which have been published (with low risk of bias, except for lack of blinding, and high adherence to reporting standards) and eight are ongoing. Of 81 non-randomised clinical trials identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). 61 of 81 studies stated in their abstract that performance of artificial intelligence was at least comparable to (or better than) that of clinicians. Only 31 of 81 studies (38%) stated that further prospective studies or trials were required.

Conclusions Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.

DOI: 10.1136/bmj.m689

Source: https://www.bmj.com/content/368/bmj.m689

Journal information

BMJ-British Medical Journal: founded in 1840 and published by the BMJ Publishing Group. Latest impact factor: 27.604
Official website: http://www.bmj.com/
Submission link: https://mc.manuscriptcentral.com/bmj