Integrated photonic neural network with on-chip backpropagation training
Author: 小柯机器人 (Xiaoke Robot) | Published: 2026/3/20 16:17:41

Recently, Farshid Ashtiani's team at Nokia Bell Labs in the United States investigated an integrated photonic neural network with on-chip backpropagation training. The work was published in Nature on 18 March 2026.

The robust and repeatable performance of scalable integrated photonic neural networks (PNNs) depends strongly on the quality of their training. Gradient-based backpropagation has become the mainstream algorithm for training digital neural networks thanks to its scalability, versatility, and efficient implementation, so there is strong interest in realizing backpropagation all-optically on photonic platforms. At present, owing to the lack of a scalable on-chip activation gradient, training PNNs has relied either on digital computers to run backpropagation, whose performance degrades under inevitable device-to-device and environmental variations, or on gradient-free algorithms that cannot fully exploit the advantages of backpropagation training.
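For readers unfamiliar with the algorithm at issue, the gradient-descent backpropagation the article refers to can be sketched in its ordinary digital form. The following is a textbook illustration, not the paper's photonic implementation: a tiny two-layer network learning XOR, a classic nonlinear classification task. The layer sizes, learning rate, and task are illustrative choices only.

```python
import numpy as np

# Minimal digital sketch of gradient-descent backpropagation
# (NOT the paper's on-chip photonic implementation).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(pred):
    return float(np.mean((pred - y) ** 2))

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

lr = 1.0
for _ in range(5000):
    # Forward pass: two linear layers with sigmoid activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule; sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates of all weights and biases.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

final_loss = mse(out)
```

The reported chip performs both the forward activations and the activation-gradient step of this loop optically, whereas here everything runs on a digital computer, which is exactly the dependence the paper set out to remove.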

The team demonstrated, for the first time, end-to-end on-chip gradient-descent backpropagation training of an integrated photonic deep neural network. All linear and nonlinear computations are performed on a single photonic chip, enabling scalable and robust training despite considerable yet typical fabrication-induced device variations. In two nonlinear data classification tasks, the chip matched the reference digital model in both accuracy (over 90%) and robustness, without using a digital computer. Combining the advantages of backpropagation training with photonic neural networks opens a path to generalizing across PNN architectures for future scalable and reliable photonic computing systems.

Appendix: original English abstract

Title: Integrated photonic neural network with on-chip backpropagation training

Authors: Ashtiani, Farshid; Idjadi, Mohamad Hossein; Kim, Kwangwoong

Issue&Volume: 2026-03-18

Abstract: The robust and repeatable performance of scalable integrated photonic neural networks (PNNs) [1-3] strongly depends on the quality of their training. Gradient-based backpropagation is the mainstream algorithm for training digital neural networks thanks to its scalability, versatility and implementation efficiency [4]. Consequently, there is an interest in implementing it within a photonic platform in an all-optical manner. At present, owing to the lack of a scalable on-chip activation gradient [5], training PNNs has relied on digital computers to run backpropagation, whose performance is reduced in the presence of inevitable device-to-device and environmental variations, or on gradient-free algorithms that do not fully benefit from the versatility of backpropagation training. Here we report the demonstration of an integrated photonic deep neural network, trained end-to-end with on-chip gradient-descent backpropagation. All linear and nonlinear computations are performed on a single photonic chip, leading to scalable and robust training, despite the considerable yet typical fabrication-induced device variations. In two nonlinear data classification tasks, chip performance matches that of the reference digital model in accuracy (over 90%) and robustness without using a digital computer. Integrating the advantages of backpropagation training with PNNs allows for generalization to various PNN architectures for future scalable and reliable photonic computing systems.

DOI: 10.1038/s41586-026-10262-8

Source: https://www.nature.com/articles/s41586-026-10262-8

Journal information

Nature: founded in 1869, published by Springer Nature. Latest impact factor: 69.504
官方网址:http://www.nature.com/
投稿链接:http://www.nature.com/authors/submit_manuscript.html