Comment on “All-optical machine learning using diffractive deep neural networks”
Lin et al. (Reports, 7 September 2018, p. 1004) reported a remarkable proposal that employs a passive, strictly linear optical setup to perform pattern classification. However, interpreting the multilayer diffractive setup as a deep neural network, and advocating it as an all-optical deep learning framework, are not well justified; they mischaracterize the system by overlooking its defining characteristics of perfect linearity and strict passivity.
1Ambow Research Institute, Ambow Education Group, Beijing 100088, China; 2Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou 510630, China; 3Center for Biomedical Informatics, College of Medicine, Texas A&M University, Houston, TX, USA
Compiled July 22, 2019
Corresponding authors: Xiuqing Wei (firstname.lastname@example.org), Yanlong Sun (email@example.com), Hongbin Wang (firstname.lastname@example.org)
Lin et al. [1] proposed a combination of methods for creating a computer-generated volumetric hologram (CGVH) made of multiple planar diffractive elements, and for using such a hologram to scatter and directionally focus each of a multitude of pattern-imprinted coherent light fields into a designated spatial region on an image sensor, effectively realizing pattern recognition and classification. Their all-optical, multi-planed setup bears a certain resemblance to the multilayered structure of a deep neural network (DNN) [2], but that is about as far as the similarity goes.
It is a mischaracterization to interpret the CGVH construct as a DNN when its functionality is strictly limited to linear transformations of the input light field, rendering it unable to perform any task of statistical inference or prediction beyond the capacity of a single-layer perceptron [2, 3]. Apart from the glaring absence of nonlinear activations, the passive CGVH setup also lacks the parameter tunability needed to support neural-network learning, except, perhaps, for the spacings between diffractive elements.
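The single-layer limit has a textbook illustration: the XOR pattern, which no linear classifier can separate, yet a two-layer network with a nonlinear activation handles trivially. The NumPy sketch below is purely illustrative (the weights are hand-picked, not learned, and the problem is not Lin et al.'s):

```python
import numpy as np

# XOR: the classic pattern beyond any single-layer (linear) perceptron
# (Minsky & Papert), but trivial for a two-layer network with ReLU.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

relu = lambda z: np.maximum(z, 0.0)

# Hidden layer: h1 = relu(x1 + x2 - 0.5), h2 = relu(x1 + x2 - 1.5)
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
h = relu(X @ W1.T + b1)

# Output layer: h1 - 3*h2 yields scores 0, 0.5, 0.5, 0; threshold at 0.25
pred = (h @ np.array([1.0, -3.0]) > 0.25).astype(int)
assert (pred == y).all()          # the nonlinear network solves XOR exactly

# Best *linear* fit (least squares) is the constant 0.5: MSE stuck at 0.25,
# i.e., no linear map can separate the XOR classes at all.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(4)], y, rcond=None)
mse = np.mean((np.c_[X, np.ones(4)] @ w - y) ** 2)
assert np.isclose(mse, 0.25)
```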
As such, the authors' claim that their CGVH provides "an all-optical deep learning framework in which the neural network is physically formed by multiple layers of diffractive surfaces" is confusing and misleading. It overstresses the superficial similarity between a multi-planed optical diffractive setup and a multilayer neural network, and it glosses over a wide range of technical challenges in implementing a truly all-optical machine learning mechanism. A CGVH optical network remains strictly linear: the linearity severely limits the achievable computations, but it lends convenience and mathematical rigor to analyses of the possible functionalities and performance limitations. A multilayer neural network, by contrast, requires some nonlinearity to prevent its layers from collapsing into a single linear map; that quintessential nonlinearity routinely defies rigorous mathematical analysis but affords Turing-complete computations.
To a broader audience familiar with linear optics, Lin et al.'s CGVH setup and working principles are reminiscent of volume holograms [4], volume optics [5], and optical mode converters [8, 6, 7]. The richly developed linear theory of wave optics and mechanics can therefore be applied to analyze the performance of a CGVH-based pattern classifier, using a rigorous theory of light propagation and scattering [4]. Duly noting the all-important linearity is not merely a scholastic preference or a rhetorical option; it has significant theoretical and practical ramifications. It would be inexcusable, and a disservice, to willfully neglect the vast literature and results on the physics and mathematics of linear systems.
Without loss of generality, and without underestimating its information and communication capacities, a CGVH optical network can be considered as consisting of $L+2$ layers of planar elements, indexed by $l \in \{0, 1, \ldots, L+1\}$, $L \ge 1$, where each panel of diffractive elements contains no more than $n$ resolvable pixels to transmit, receive, or modulate a light field, such that each panel is completely characterized by an $n$-dimensional complex-valued vector. The $0$-th layer under a coherent illumination generates an input amplitude image field represented by an $n$-dimensional complex-valued vector $u_0 \in \mathbb{C}^n$, $\|u_0\|_2 \le 1$, while the $(L+1)$-th layer is an image detector that does no better than reporting an $n$-dimensional real-valued vector of optical intensities $v \in \mathbb{R}^n$, $v_i = |u_{L+1,i}|^2$, subject to unavoidable noise. For each $l \in \{1, \ldots, L\}$, the $l$-th panel of diffractive elements is characterized by a diagonal matrix $D_l = \mathrm{diag}(d_{l,1}, \ldots, d_{l,n})$, with $d_{l,i}$, $|d_{l,i}| \le 1$, representing the absorption and phase delay effected by the $i$-th pixel, $i \in \{1, \ldots, n\}$.
The free-space wave propagation between the $(l-1)$-th and the $l$-th planes, $l \in \{1, \ldots, L+1\}$, can be described by a linear operator $P_l = F^{-1} \Lambda_l F$, with $F$ denoting the unitary matrix of the Fourier transform, which turns a real-space image $u$ into a spatial-frequency image $\tilde{u} = Fu$, $\Lambda_l = \mathrm{diag}\!\left(e^{i k_z(\kappa) z_l}\right)$, $\kappa$ indexing a plane wave with an associated phase velocity $k_z(\kappa)$ along the optical axis normal to the planes, and $z_l$ being the spacing between the two planes in question. Naturally, $F^{-1} = F^{\dagger}$ represents the inverse Fourier transform. Then, for each $l \in \{1, \ldots, L\}$, the amplitude field incident on the $l$-th panel becomes $P_l u_{l-1}$, which is pixel-wise modulated by the $l$-th panel of diffractive elements and turned into $u_l = D_l P_l u_{l-1}$. The cascade of optical diffractions continues until an amplitude field $u_L$ exits from the $L$-th and last panel of diffractive elements, which finally propagates to the detector and becomes $u_{L+1} = P_{L+1} u_L$.
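The cascade just described is straightforward to simulate: each propagator $P_l$ is an FFT, a diagonal spectral phase, and an inverse FFT, and each panel $D_l$ is a pointwise multiplication. The NumPy sketch below uses an illustrative 1-D geometry with arbitrary units and random panels (none of Lin et al.'s actual parameters), and checks the passivity of the setup, i.e., that the detected power never exceeds the input power:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 512, 5                 # pixels per panel, number of diffractive panels
wavelength, dx = 1.0, 1.0     # illustrative units, not a real geometry

def propagate(u, z):
    """Angular-spectrum free-space step: F^{-1} diag(e^{i k_z z}) F."""
    fx = np.fft.fftfreq(u.size, d=dx)
    # emath.sqrt goes complex for evanescent components (decaying waves)
    kz = 2 * np.pi * np.emath.sqrt((1 / wavelength) ** 2 - fx ** 2)
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * z))

# Passive panels: per-pixel transmittance <= 1 plus an arbitrary phase delay
panels = [np.exp(1j * rng.uniform(0, 2 * np.pi, n)) *
          rng.uniform(0.5, 1.0, n) for _ in range(L)]

u = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # coherent input field u_0
u /= np.linalg.norm(u)
power_in = np.sum(np.abs(u) ** 2)

for d in panels:                  # cascade: propagate, then modulate
    u = d * propagate(u, z=50.0)
u = propagate(u, z=50.0)          # final hop to the detector plane
intensity = np.abs(u) ** 2        # what the detector reports

assert np.sum(intensity) <= power_in + 1e-9   # strict passivity: no gain
```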
A crucially important fact is that the entire optical setup can be described by a single matrix $T = P_{L+1} D_L P_L \cdots D_1 P_1$, regardless of $L$, even as $L$ approaches infinity and the setup becomes a continuous volume hologram. $T$ is a contraction operator, with all of its singular values upper-bounded by $1$. Indeed, not only do the linear transformations of any series of passive optical elements give rise to a single contraction operator $T$, but any given contraction operator as a wave-field transformation can also be realized through a series of free-space propagations and pointwise amplitude modulations [9]. Moreover, any linear optical device can be considered as an optical mode converter [8]. Linearity is key both to the possibility of lumping multiple steps of operations into a single matrix of transformation and to the amenability to rigorous mathematical analyses using matrix algebra. It is regrettable that reference [1] failed to seize upon and exploit this opportunity. One of the most important consequences of strict linearity in a CGVH-based all-optical pattern classifier is that the pattern discrimination power (PDP) becomes severely limited, being no better than a Euclidean distance discriminator, as will be proven rigorously below via an inequality on vector norms.
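Lumping the cascade into a single contraction is easy to verify numerically: build the unitary DFT matrix, the propagators, and the diagonal panels explicitly, multiply them out, and inspect the singular values. A small NumPy sketch (toy dimensions and arbitrary spacings, chosen only to keep the explicit matrices tractable):

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 16, 4                      # small enough to build T explicitly

F = np.fft.fft(np.eye(n), axis=0, norm="ortho")   # unitary DFT matrix
Finv = F.conj().T                                 # F^{-1} = F^dagger

def prop_matrix(z):
    """P = F^{-1} diag(e^{i k_z z}) F with unit-modulus spectral phases."""
    fx = np.fft.fftfreq(n)
    kz = 2 * np.pi * np.sqrt(1.0 - fx ** 2)       # all propagating here
    return Finv @ np.diag(np.exp(1j * kz * z)) @ F

T = prop_matrix(10.0)             # P_1
for _ in range(L):
    # lump another panel (|d| <= 1) and free-space hop into T
    d = np.exp(1j * rng.uniform(0, 2 * np.pi, n)) * rng.uniform(0, 1, n)
    T = prop_matrix(10.0) @ np.diag(d) @ T

s = np.linalg.svd(T, compute_uv=False)
assert s.max() <= 1 + 1e-9        # T is a contraction, regardless of L
```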
Consider two differently patterned input images $x$ and $x'$ producing amplitude fields $u = Tx$ and $u' = Tx'$ on the detector plane, which square into intensity images $v$ and $v'$, with $v_i = |u_i|^2$ and $v'_i = |u'_i|^2$, and induce proportional electric signals with the addition of unavoidable noise. In Lin et al.'s proposal, each intensity image is projected into an $m$-dimensional vector, $m \ll n$, by partially integrating the image within each of $m$ designated spatial areas. The PDP of the all-optical pattern classifier derives from the difference between the $m$-dimensional vectors due to different patterns, which is well characterized and upper-bounded by the total variation distance (TVD) $\tfrac{1}{2} \sum_i |v_i - v'_i|$. The TVD is in turn upper-bounded as
$$\frac{1}{2} \sum_i |v_i - v'_i| \;\le\; \frac{1}{2}\, \|u - u'\|_2 \left( \|u\|_2 + \|u'\|_2 \right) \;\le\; \|x - x'\|_2, \tag{1}$$
where $\|\cdot\|_2$ denotes the $\ell_2$ norm, whose square corresponds to the total light power of an optical image, and the last step follows from the contraction property of $T$ together with the normalization $\|x\|_2 = \|x'\|_2 = 1$ of the image fields. Therefore, the PDP of an all-optical pattern classifier, devoid of any nonlinear activation that resembles a biological or artificial neuron, does not go beyond that of the classical Euclidean distance algorithms [10, 11]. When two different images are so similar that the distance $\|x - x'\|_2$ is small and below a certain noise level, the system of Lin et al. will have a hard time telling $x$ and $x'$ apart, no matter how obviously different they are to human eyes, or how easily they might be distinguished by a bona fide deep neural network with nonlinear activation functions.
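Bound (1) is quick to check numerically. The sketch below uses a random matrix normalized to unit spectral norm as a stand-in for $T$ (an assumption for illustration only, not Lin et al.'s trained hologram), and confirms that the detected-intensity TVD never exceeds the Euclidean distance between the normalized inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

# Random contraction T: any passive linear optical system can be so modeled
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T = A / np.linalg.svd(A, compute_uv=False).max()   # spectral norm -> 1

for _ in range(1000):
    x  = rng.normal(size=n) + 1j * rng.normal(size=n)
    xp = x + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    x, xp = x / np.linalg.norm(x), xp / np.linalg.norm(xp)

    v, vp = np.abs(T @ x) ** 2, np.abs(T @ xp) ** 2
    tvd = 0.5 * np.abs(v - vp).sum()               # detected-intensity TVD
    assert tvd <= np.linalg.norm(x - xp) + 1e-9    # bound (1) holds
```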
By contrast, a canonical DNN has at least one hidden layer that implements a nonlinear activation function. A true optical DNN would require nonlinear optical interactions [12]. Nonlinearity could potentially enable a DNN to escape the TVD bound on the PDP in (1). While deep learning in the presence of nonlinear activation functions, or more fundamentally the input-output behavior of a typical nonlinear network, is not as well understood mathematically as a linear system, it is generally believed that nonlinearity endows a DNN with computational power for better performance in learning and prediction, specifically in noise suppression and pattern discrimination. Distributed nonlinear activations in a DNN have the potential to regulate signals and suppress noise and detrimental interferences, much like distributed signal regeneration in a long-haul communications network [13, 14]. In other fields such as quantum computing, it is known that weak and distributed nonlinear amplitude evolution can amplify a small difference between initial states into drastically different and easily distinguishable output results [15].
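A two-line example makes the contrast with bound (1) concrete: a contraction can never make the detected intensities of two nearly identical normalized fields differ by more than their input distance, whereas even a crude pointwise nonlinearity can blow a sub-noise difference up to order unity. The hard limiter below is a toy stand-in for an optical nonlinearity, chosen purely for illustration:

```python
import numpy as np

x  = np.array([ 0.01, 1.0]); x  /= np.linalg.norm(x)
xp = np.array([-0.01, 1.0]); xp /= np.linalg.norm(xp)
d_in = np.linalg.norm(x - xp)        # ~0.02: below any realistic noise floor

def g(u):
    """Toy hard-limiting nonlinearity: keeps the sign, discards amplitude."""
    return np.sign(u)

d_out = np.linalg.norm(g(x) - g(xp)) # = 2.0: the tiny sign flip is amplified
assert d_out > 50 * d_in             # far beyond what any contraction allows
```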
In closing, despite these shortcomings, Lin et al.'s report still represents a significant contribution to interdisciplinary research at the intersection of many scientific and technological fields, including volume optics, linear transformations, 3D printing, and, of course, pattern recognition and classification as the authors originally proposed. The reported numerical simulations and experimental tests indicate the potential of coherent light diffraction through a volume hologram to serve as a linear classifier in pattern recognition applications. The optical multi-planed setup demonstrated an efficient implementation of a complicated linear transformation, namely a sophisticated optical mode converter. Following Lin et al.'s trailblazing, the interested scientific and engineering communities will no doubt get to work on both the theoretical fundamentals and the practical engineering. But the first order of business is to place the CGVH setup in the right technical context, recognize its characteristic linearity, and take advantage of the vast literature and immense accumulated knowledge in the related fields.
Disclosure statement The authors have no potential financial or non-financial conflicts of interest.
Notes on contributors All authors contributed equally in researching, collating, and writing.
[1] X. Lin et al., "All-optical machine learning using diffractive deep neural networks," Science, vol. 361, pp. 1004-1008 (2018).
[2] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
[3] M. L. Minsky and S. A. Papert, Perceptrons (MIT Press, 1969).
[4] J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, 2005).
[5] T. D. Gerke and R. Piestun, "Aperiodic volume optics," Nature Photon., vol. 4, pp. 188-193 (2010).
[6] J.-F. Morizur et al., "Programmable unitary spatial mode manipulation," J. Opt. Soc. Am. A, vol. 27, no. 11, pp. 2524-2531 (2010).
[7] G. Labroille et al., "Efficient and mode selective spatial mode multiplexer based on multi-plane light conversion," Opt. Exp., vol. 22, no. 13, pp. 15599-15607 (2014).
[8] D. A. B. Miller, "All linear optical devices are mode converters," Opt. Exp., vol. 20, no. 21, pp. 23985-23993 (2012).
[9] Z. I. Borevich and S. L. Krupetskii, "Subgroups of the unitary group that contain the group of diagonal matrices," J. Soviet Math., vol. 17, pp. 1951-1959 (1981).
[10] D. Michie, D. J. Spiegelhalter, and C. C. Taylor (Eds.), Machine Learning, Neural and Statistical Classification (Ellis Horwood Ltd., 1994).
[11] C. M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).
[12] Y. Shen et al., "Deep learning with coherent nanophotonic circuits," Nature Photon., vol. 11, pp. 441-446 (2017).
[13] O. Leclerc, B. Lavigne, D. Chiaroni, and E. Desurvire, "All-optical regeneration: principles and WDM implementation," Ch. 15 in I. Kaminow and T. Li (Eds.), Optical Fiber Telecommunications IV A: Components (Academic Press, 2002).
[14] J.-C. Simon et al., "All-optical regeneration techniques," Ann. Telecomm., vol. 58, no. 11-12, pp. 1708-1724 (2003).
[15] D. S. Abrams and S. Lloyd, "Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and #P problems," Phys. Rev. Lett., vol. 81, no. 18, pp. 3992-3995 (1998).