Scientists link point tracking to pose dynamics to parse behavior
Author: 小柯机器人 (Xiaoke Robot)    Published: 2024/7/18 13:44:09

Recently, Sandeep Robert Datta and collaborators at Harvard Medical School developed a new method that parses behavior by linking point tracking to pose dynamics. The study was published online on July 12, 2024 in the leading international journal Nature Methods.

The researchers introduce Keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ("syllables") from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior, and at classifying either solitary or social behaviors in accordance with human annotations.
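
To make the generative-model idea concrete, the sketch below simulates the kind of structure the summary describes: a sticky Markov chain over syllables, per-syllable autoregressive dynamics in a low-dimensional pose space, and separate observation noise on the keypoints. It is our own simplified NumPy illustration under those assumptions, not the published model or the keypoint-moseq package API; all variable names and dimensions are made up for the example.

```python
# Minimal, self-contained sketch of an SLDS-like generative structure:
# discrete syllables -> per-syllable autoregressive pose dynamics -> noisy keypoints.
# Simplified illustration only; not the published Keypoint-MoSeq model.
import numpy as np

rng = np.random.default_rng(0)
n_syllables, pose_dim, n_keypoints, T = 3, 4, 8, 500

# Markov chain over syllables with "sticky" self-transitions (sub-second dwell times).
trans = np.full((n_syllables, n_syllables), 0.02 / (n_syllables - 1))
np.fill_diagonal(trans, 0.98)

# One linear autoregressive map per syllable governing pose dynamics.
A = [np.eye(pose_dim) + 0.05 * rng.standard_normal((pose_dim, pose_dim))
     for _ in range(n_syllables)]

# Fixed linear map from the low-dimensional pose space to 2D keypoint coordinates.
C = rng.standard_normal((2 * n_keypoints, pose_dim))

z = np.zeros(T, dtype=int)            # discrete syllable labels
x = np.zeros((T, pose_dim))           # continuous pose trajectory
y = np.zeros((T, 2 * n_keypoints))    # observed keypoints (flattened x/y coordinates)

for t in range(1, T):
    z[t] = rng.choice(n_syllables, p=trans[z[t - 1]])
    # Pose evolves under the current syllable's dynamics plus process noise...
    x[t] = A[z[t]] @ x[t - 1] + 0.1 * rng.standard_normal(pose_dim)
    # ...while keypoints add observation noise (tracking jitter) on top.
    y[t] = C @ x[t] + 0.5 * rng.standard_normal(2 * n_keypoints)

print("syllable transitions at frames:", np.flatnonzero(np.diff(z)) + 1)
```

Because tracking jitter enters such a model only through the observation-noise term, inference can attribute it to noise rather than to a syllable change, which is the intuition behind the robustness reported above.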

Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq therefore renders the modular structure of behavior accessible through standard video recordings.

By way of background, keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions.
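
The jitter problem can be seen in a toy example of our own (not from the paper): a one-dimensional "keypoint" trace with a single true action transition plus occasional tracking glitches. Naively thresholding frame-to-frame changes flags many spurious transitions, while even a crude noise-suppression step such as a median filter, used here only as a stand-in for an explicit noise model, removes most of them.

```python
# Toy illustration of tracking jitter being mistaken for action transitions.
import numpy as np

rng = np.random.default_rng(1)
T = 400
trace = np.where(np.arange(T) < 200, 0.0, 5.0)        # one real transition at frame 200
trace += 0.1 * rng.standard_normal(T)                 # low-level measurement noise
spikes = rng.choice(T, size=15, replace=False)
trace[spikes] += rng.choice([-3.0, 3.0], size=15)     # occasional tracking glitches

def detected_transitions(x, thresh=2.0):
    """Frames where the trace jumps by more than `thresh` between consecutive frames."""
    return np.flatnonzero(np.abs(np.diff(x)) > thresh) + 1

def median_filter(x, w=5):
    """Simple sliding-window median filter (edges padded by reflection)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

print("raw trace:      ", len(detected_transitions(trace)), "detected transitions")
print("median-filtered:", len(detected_transitions(median_filter(trace))), "detected transitions")
```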

Appendix: original English text

Title: Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics

Author: Caleb Weinreb, Jonah E. Pearl, Sherry Lin, Mohammed Abdal Monium Osman, Libby Zhang, Sidharth Annapragada, Eli Conlin, Red Hoffmann, Sofia Makowska, Winthrop F. Gillis, Maya Jay, Shaokai Ye, Alexander Mathis, Mackenzie W. Mathis, Talmo Pereira, Scott W. Linderman, Sandeep Robert Datta

Issue&Volume: 2024-07-12

Abstract: Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules (‘syllables’) from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq, therefore, renders accessible the modular structure of behavior through standard video recordings.

DOI: 10.1038/s41592-024-02318-2

Source: https://www.nature.com/articles/s41592-024-02318-2

Journal information

Nature Methods (《自然—方法学》): founded in 2004 and published by the Springer Nature group. Latest IF: 47.99
Official website: https://www.nature.com/nmeth/
Submission link: https://mts-nmeth.nature.com/cgi-bin/main.plex