Recently, Professor Constantin Pape and his group at an institute of computer science in Germany developed a segmentation tool for microscopy. The study was published on February 12, 2025 in the leading international journal Nature Methods.
The team presented μSAM (Segment Anything for Microscopy), a tool for segmentation and tracking in multidimensional microscopy data. It is built on Segment Anything, a vision foundation model for image segmentation. The group extended it by fine-tuning generalist models for light and electron microscopy, which clearly improved segmentation quality across a wide range of imaging conditions.
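The claim of "improved segmentation quality" is typically quantified by comparing predicted masks against ground-truth annotations with intersection over union (IoU). The helper below is an illustrative sketch of that metric, not code from μSAM; the function name and the toy masks are made up for the example.

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return inter / union

# Toy example: two overlapping 3x3 "cell" masks on a 6x6 image.
pred = np.zeros((6, 6), dtype=bool)
pred[1:4, 1:4] = True    # predicted cell
target = np.zeros((6, 6), dtype=bool)
target[2:5, 2:5] = True  # ground-truth cell, shifted by one pixel
print(round(mask_iou(pred, target), 3))  # 4 px overlap / 14 px union -> 0.286
```

Averaging such per-object scores over a benchmark is one common way fine-tuned and off-the-shelf models are compared.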
The researchers also implemented interactive and automatic segmentation in a napari plugin, which can speed up diverse segmentation tasks and provides a unified annotation solution across different microscopy modalities. Their work constitutes the application of vision foundation models in microscopy, laying the groundwork for solving image analysis tasks in this field with a small set of powerful deep learning models.
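In interactive annotation of this kind, a user clicks a point inside an object and the tool returns a full mask for it. μSAM does this with a fine-tuned vision transformer; the sketch below instead uses simple region growing from a seed pixel, purely to illustrate the point-prompt-to-mask idea. The function name and threshold parameter are hypothetical.

```python
from collections import deque

import numpy as np

def segment_from_point(image: np.ndarray, seed: tuple, threshold: float) -> np.ndarray:
    """Grow a binary mask from a clicked seed pixel over 4-connected
    neighbors whose intensity exceeds `threshold` (toy stand-in for a
    learned point-prompted segmentation model)."""
    mask = np.zeros(image.shape, dtype=bool)
    if image[seed] <= threshold:
        return mask  # click landed on background: empty mask
    queue = deque([seed])
    mask[seed] = True
    h, w = image.shape
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and image[nr, nc] > threshold:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy image with two bright "cells"; a click in the left one selects only it.
img = np.zeros((5, 8))
img[1:4, 1:3] = 1.0  # left cell, 3x2 pixels
img[1:4, 5:7] = 1.0  # right cell
mask = segment_from_point(img, (2, 1), threshold=0.5)
print(int(mask.sum()))  # 6: only the left cell is segmented
```

A real promptable model generalizes this idea: instead of thresholded connectivity, a neural network maps the image plus the click location to a mask.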
Despite the many tools developed for this purpose, accurate segmentation of objects in microscopy images has remained a bottleneck for many researchers.
Appendix: original English abstract
Title: Segment Anything for Microscopy
Author: Archit, Anwai, Freckmann, Luca, Nair, Sushmita, Khalid, Nabeel, Hilt, Paul, Rajashekar, Vikas, Freitag, Marei, Teuber, Carolin, Buckley, Genevieve, von Haaren, Sebastian, Gupta, Sagnik, Dengel, Andreas, Ahmed, Sheraz, Pape, Constantin
Issue&Volume: 2025-02-12
Abstract: Accurate segmentation of objects in microscopy images remains a bottleneck for many researchers despite the number of tools developed for this purpose. Here, we present Segment Anything for Microscopy (μSAM), a tool for segmentation and tracking in multidimensional microscopy data. It is based on Segment Anything, a vision foundation model for image segmentation. We extend it by fine-tuning generalist models for light and electron microscopy that clearly improve segmentation quality for a wide range of imaging conditions. We also implement interactive and automatic segmentation in a napari plugin that can speed up diverse segmentation tasks and provides a unified solution for microscopy annotation across different microscopy modalities. Our work constitutes the application of vision foundation models in microscopy, laying the groundwork for solving image analysis tasks in this domain with a small set of powerful deep learning models.
DOI: 10.1038/s41592-024-02580-4
Source: https://www.nature.com/articles/s41592-024-02580-4
Nature Methods: founded in 2004; published by Springer Nature. Latest IF: 47.99
Official website: https://www.nature.com/nmeth/
Submission link: https://mts-nmeth.nature.com/cgi-bin/main.plex