Concept and Experimental Demonstration of Optical IM/DD End-to-End System Optimization using a Generative Model

We perform an experimental end-to-end transceiver optimization via deep learning, using a generative adversarial network to approximate the test-bed channel. Previously, such optimization was only possible under the prior assumption of an explicit, simplified channel model. © 2020 The Authors


1Optical Networks Group, Dept. of Electronic & Electrical Engineering, UCL, WC1E 7JE London, U.K.
2Nokia Bell Labs, 70435 Stuttgart, Germany
3Communications Engineering Lab, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany


(060.4510) Optical communications, (060.2330) Fiber optics communications, (060.4080) Modulation

1 Introduction

The optimization of a complete communication system via deep learning has attracted great interest since the introduction of the idea in [1]. The application of such end-to-end neural network-based autoencoders is of particular importance in communication scenarios where the optimum transmitter-receiver pair is not known, or is computationally prohibitive to implement. An example is optical fibre communication based on intensity modulation and direct detection (IM/DD), where the joint effects of chromatic dispersion, introducing intersymbol interference (ISI), and square-law detection render the channel nonlinear with memory. The absence of optimal, computationally feasible, processing algorithms for IM/DD systems prompted the introduction of end-to-end deep learning in [2], where it was shown experimentally that a simple feed-forward neural network (FFNN) scheme can outperform pulse-amplitude modulation (PAM) systems with simple feed-forward linear equalizers. Moreover, sequence processing by a recurrent neural network (RNN) was used in [3] to tailor the autoencoder to the dispersive channel properties. This led to an improved system performance with lower computational complexity than PAM transmission with receivers using nonlinear processing [3] or maximum likelihood sequence detection [4].
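To make concrete why the IM/DD channel is nonlinear with memory, the toy NumPy sketch below chains a linear ISI filter (a stand-in for chromatic dispersion), square-law photodetection, and additive receiver noise. It is our own illustration: the filter taps, PAM levels and SNR are arbitrary assumptions, not the test-bed's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def imdd_channel(symbols, taps, snr_db=20.0, rng=rng):
    """Toy IM/DD link: linear ISI (dispersion proxy), square-law
    photodetection, then additive Gaussian receiver noise."""
    # Dispersion spreads energy over neighbouring samples -> FIR filter.
    dispersed = np.convolve(symbols, taps, mode="same")
    # Direct detection measures optical power: a square-law nonlinearity.
    detected = dispersed ** 2
    # Thermal/TIA noise, scaled from the requested SNR.
    noise_p = detected.var() / 10 ** (snr_db / 10)
    return detected + rng.normal(0.0, np.sqrt(noise_p), detected.shape)

# 4-level intensity amplitudes standing in for transmitter outputs.
tx = rng.choice([0.0, 1.0, 2.0, 3.0], size=1000)
rx = imdd_channel(tx, taps=np.array([0.1, 0.8, 0.1]))
```

Because the square law acts after the ISI filter, neighbouring symbols mix nonlinearly, which is precisely what makes a simple linear equalizer suboptimal here.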

The framework of learnable communication systems considers the channel as a part of the deep neural network. The application of autoencoders has thus been limited to scenarios where the channel model is known and differentiable. In addition, when applied to an actual system, a transceiver representation learned under the prior assumption of a specific model can result in sub-optimal performance [5, Sec. IV], [2, Sec. VI-C]. Fine-tuning the receiver part of the autoencoder with data collected from measurements was used in [5]. However, optimizing the transmitter without explicit knowledge of the underlying channel model remains an open problem. To circumvent using a model, a reinforcement learning method was proposed and verified for simple memoryless channels [6]. Alternatively, a generative model of the channel can be obtained via a generative adversarial network (GAN) and used for gradient backpropagation during end-to-end optimization [7, 8, 9].

Figure 1: Schematic of the conditional GAN structure for training of the generative model.

We employ a GAN to acquire a model for our experimental IM/DD test-bed. We perform iterative steps of training the generative model on experimental data and use it to optimize the transceiver. To the best of our knowledge, this is the first end-to-end optimization of an optical communication system via deep learning based on measured data.

2 Generative Adversarial Network Design for the Optical Autoencoder

Generative adversarial networks are an effective tool for learning generative models [10]. A GAN typically employs two artificial neural networks (ANNs) with competing objective functions, which are trained iteratively. The schematic of the GAN we used is shown in Fig. 1. The generator network aims at translating its input into a (possibly high-dimensional) output sample, mimicking some ground truth distribution of the data. The discriminator acts as a binary classifier between real (ground truth) and fake (generated) samples.

Table 1: Generator (bold) and discriminator ANNs.

Applied to communication systems, a GAN can be used to train a generative model which mimics the function of the channel by approximating its conditional probability distribution. In the framework of autoencoder design, the model can then be used in the end-to-end system optimization. In this work we employ a simple FFNN autoencoder. The transmitter maps the input messages , each of which carries  bits, into blocks (symbols) of  samples, which we denote as . After propagation through the channel, the symbols are fed to the receiver as . Since ISI in optical IM/DD stems from both preceding and succeeding symbols, the goal of the generative model is to approximate , where , an odd integer, is the modeled symbol memory and . To mimic the probability distribution, the generator takes as input the concatenation of a vector of uniformly-distributed random samples with . It transforms this into the fake symbol , where . The discriminator is fed first with the real and then the fake symbol , both conditioned on , producing the output probability vectors  or , respectively, where . Table 1 shows the layers of the two ANNs, trained in a supervised manner. The sets of parameters  (discriminator) and  (generator) are iteratively updated via stochastic gradient descent (SGD), using the Adam algorithm, aimed at minimizing the losses


over a batch of  elements from the training set, where  and  are the labels for real and fake symbols, respectively, and  is the cross entropy. Note that the objective of the discriminator is to classify  and  correctly. By minimizing , the generator learns representations that are statistically similar to . We train the GAN over  steps, each consisting of 4 consecutive discriminator updates with a learning rate of , followed by a generator update with the rate gradually reduced every 200 steps from  to .
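The training objective above can be sketched in plain NumPy. Everything in this snippet is an illustrative assumption of ours, not the paper's trained model: the layer sizes, random weights and the fixed discriminator outputs exist only to show how the real/fake labels enter the two cross-entropy losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- the paper's exact dimensions are not reproduced here.
n, W, z_dim = 6, 3, 10   # samples per symbol, symbol memory, noise length

# Hypothetical two-layer FFNN generator; weights would be learned in practice.
w1 = rng.normal(0, 0.1, (z_dim + W * n, 32)); b1 = np.zeros(32)
w2 = rng.normal(0, 0.1, (32, n));             b2 = np.zeros(n)

def generator(z, condition):
    """Map (uniform noise, conditioning symbols) to one fake received symbol."""
    h = np.maximum(0.0, np.concatenate([z, condition]) @ w1 + b1)  # ReLU
    return h @ w2 + b2

def bce(p, label):
    """Binary cross entropy between a sigmoid output p and a 0/1 label."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)).mean())

z = rng.uniform(0.0, 1.0, z_dim)      # uniformly-distributed noise input
cond = rng.normal(0.0, 1.0, W * n)    # W transmitted symbols, flattened
fake = generator(z, cond)             # one fake received symbol, shape (n,)

# Hypothetical discriminator outputs (probability "real") over a batch.
d_real = np.array([0.9, 0.8, 0.95])   # D applied to measured symbols
d_fake = np.array([0.2, 0.1, 0.3])    # D applied to generated symbols

l_real, l_fake = 1.0, 0.0             # labels for real and fake symbols
loss_d = bce(d_real, l_real) + bce(d_fake, l_fake)  # discriminator objective
loss_g = bce(d_fake, l_real)     # generator wants fakes classified as real
```

In the experiment the two losses are minimized alternately with Adam; here the discriminator outputs are fixed numbers purely to make the role of the labels visible.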

3 End-to-End Optimization Algorithm and Experimental Performance Results

Figure 2: Schematic of the IM/DD experiment, showing the optimization via the generative channel model. LPF: low-pass filter, DAC/ADC: digital-to-analog/analog-to-digital converter, MZM: Mach-Zehnder modulator, SSMF: standard single mode fibre, TIA: trans-impedance amplifier.
Figure 3: (Left) Flow chart of the transmission and optimization algorithm. (Right) Experimental BER as a function of the optimization iteration. Insets: a) error probabilities at ; b) 2D t-SNE representation of the waveforms at the output of the transmitter ANN at ; c) error probabilities at .

The schematic of the IM/DD test-bed is shown in Fig. 2, which we describe together with the end-to-end system optimization method, shown in Fig. 3 (left). At iteration  the transmitter and receiver ANNs are initialized with parameters trained offline, following [2, Sec. III-D], using the IM/DD channel model of [3].  random sequences of messages ,  (3 bits), are generated and encoded by the transmitter into symbol ( samples) sequences . These are filtered by a 32 GHz LPF and applied to an 84 GSa/s DAC, resulting in a 42 Gb/s data rate. The waveform is fed to an MZM, biased at the quadrature point, modulating a 1550 nm laser. The signal is launched at 1 dBm power into an un-amplified 20 km span of SSMF. The power of the received waveform is directly detected by an AC-coupled PIN+TIA and sampled in real time by an ADC. After scaling and offset correction, the digital signal is utilized in two ways. First, it is fed to the receiver for processing, symbol decision and BER calculation using an optimized bit mapping (see [4, Sec. III-C]). Second, the transmitted and received sequences are used for GAN training, followed by transceiver optimization. Part of the data ( elements), grouped as  and , was used for the single step of transceiver learning within this algorithm iteration. The remaining part, assuming , was structured as , with , , and . The generative model is first trained via the procedure in Sec. 2. Each row from  and  was used to condition the GAN and provide the ground truth symbol, respectively. Next, the transceiver learning is performed, minimizing the cross entropy , with  and  rows from  and , via SGD using Adam with  learning rate. To enable gradient backpropagation, the generator model is applied in lieu of the test-bed, enabling the update of the transceiver parameters () in a single process. This completes an iteration of the optimization algorithm. The transmitter and receiver representations are updated and we repeat the transmission, GAN training and optimization.
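The structuring of the measured sequences into conditioning windows and ground-truth symbols can be sketched as follows. This is a minimal illustration with assumed sizes; `conditioning_windows` is a hypothetical helper of ours, not code from the experiment.

```python
import numpy as np

def conditioning_windows(x, y, W):
    """Pair each received symbol y[k] with the window of W transmitted
    symbols centred on it, since ISI in IM/DD comes from both the
    preceding and the succeeding neighbours."""
    assert W % 2 == 1, "symbol memory W must be odd"
    half = W // 2
    cond, target = [], []
    for k in range(half, len(x) - half):
        cond.append(x[k - half:k + half + 1].ravel())  # flattened window
        target.append(y[k])                            # ground-truth symbol
    return np.array(cond), np.array(target)

rng = np.random.default_rng(2)
n = 6                                   # samples per symbol (assumed)
x = rng.normal(size=(50, n))            # transmitted symbol sequence
y = rng.normal(size=(50, n))            # received symbol sequence
X, Y = conditioning_windows(x, y, W=3)  # X: (48, 18), Y: (48, 6)
```

Each row of `X` conditions the GAN while the matching row of `Y` serves as the ground-truth symbol for the discriminator.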
GAN training required 500 transmissions of different data sequences, which was the time-limiting step in our setup, and required stable link parameters to ensure the model validity. Thus, as a proof of concept, we performed 10 optimization iterations, which already resulted in a performance gain. The BER of the system during optimization is shown in Fig. 3 (right), with a monotonic reduction observed at each iteration. This improvement is illustrated via insets a) and c), showing the error probabilities at  and , respectively. No errors exceeding the 1% threshold occur at , for which b) depicts the modulation constellation in 2D via t-SNE dimensionality reduction [1]. Compared to the previously possible receiver-only optimization on measured data, our method increases the Q-factor by  dB.
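For reference, converting a measured BER into an equivalent Q-factor in dB follows the usual Gaussian-noise relation BER = Q(q), i.e. q = -Φ⁻¹(BER). The stdlib helper below is ours, and the example BER values are arbitrary, not the paper's measurements.

```python
from math import log10
from statistics import NormalDist

def q_factor_db(ber):
    """Equivalent Q-factor in dB for a given BER under Gaussian
    noise statistics: q = -Phi^{-1}(BER), reported as 20*log10(q)."""
    q = -NormalDist().inv_cdf(ber)
    return 20 * log10(q)

# A Q-factor gain between two hypothetical BER measurements:
gain_db = q_factor_db(1e-3) - q_factor_db(2e-3)
```

For example, a BER of 1e-3 corresponds to a Q-factor of roughly 9.8 dB, so halving the BER from 2e-3 to 1e-3 yields a gain of about 0.6 dB.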

4 Conclusions

We experimentally implemented deep learning of a transceiver based on measured data, an important step towards practical end-to-end optimized transmission. Instead of using an explicit channel model, a generative model of the test-bed was trained to approximate its conditional distribution. It enabled gradient backpropagation in an iteration of the end-to-end system learning. We observed a monotonic decrease in BER on each step of the optimization.

Work under the EU Marie Skłodowska-Curie project COIN (676448/H2020-MSCA-ITN-2015) & UK EPSRC TRANSNET.


  1. T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Trans. Cogn. Commun. Netw. 3(4), 563-575 (2017).
  2. B. Karanov et al., “End-to-end deep learning of optical fiber communications,” J. Lightw. Technol. 36(20), 4843-4855 (2018).
  3. B. Karanov et al., “End-to-end optimized transmission over dispersive intensity-modulated channels using bidirectional recurrent neural networks,” Opt. Express 27(14), 19650-19663 (2019).
  4. B. Karanov et al., “Deep learning for communication over dispersive nonlinear channels: performance and comparison with classical digital signal processing,” in Proc. of 57th Allerton Conference on Communication, Control and Computing, pp. 1-8 (2019).
  5. S. Dörner et al., “Deep learning-based communication over the air,” IEEE J. Sel. Topics Signal Process. 12(1), 132-143 (2018).
  6. F. Ait Aoudia and J. Hoydis, “End-to-end learning of communications systems without a channel model,” in Proc. of 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 298-303 (2018).
  7. T. O’Shea et al., “Approximating the void: Learning stochastic channel models from observation with variational generative adversarial networks,” in Proc. of 2019 International Conference on Computing, Networking and Communications, pp. 681-686 (2019).
  8. H. Ye et al., “Channel agnostic end-to-end learning based communication systems with conditional GAN,” in Proc. of GLOBECOM (2018).
  9. A. Smith and J. Downey, “A communication channel density estimating generative adversarial network,” NASA Technical Report, 2019. Accessed: Oct. 2, 2019. [Online].
  10. I. Goodfellow et al., “Generative adversarial nets,” in Proc. of Conference on Neural Information Processing Systems (NIPS), pp. 1-9 (2014).