Spatially Coupled Codes and Optical Fiber Communications: An Ideal Match?


In this paper, we highlight the class of spatially coupled codes and discuss their applicability to long-haul and submarine optical communication systems. We first demonstrate how to optimize irregular spatially coupled LDPC codes for their use in optical communications with limited decoding hardware complexity and then present simulation results with an FPGA-based decoder where we show that very low error rates can be achieved and that conventional block-based LDPC codes can be outperformed. In the second part of the paper, we focus on the combination of spatially coupled LDPC codes with different demodulators and detectors, important for future systems with adaptive modulation and for varying channel characteristics. We demonstrate that SC codes can be employed as universal, channel-agnostic coding schemes.


Index Terms — Error correction codes, low-density parity-check codes, spatial coupling, optical communications

1 Introduction


Modern high-speed optical communication systems require high-performing Forward Error Correction (FEC) implementations that support throughputs of 100 Gbit/s or multiples thereof, that have low power consumption, that realize Net Coding Gains (NCGs) close to the theoretical limits at a target Bit Error Rate (BER) of 10^-15, and that are preferably adapted to the peculiarities of the optical channel.

Especially with the advent of coherent transmission schemes and the utilization of high-resolution Analog-to-Digital Converters, soft-decision decoding has become an attractive means of reliably increasing the transmission reach of lightwave systems. Currently, there are two popular classes of codes for soft-decision decoding that are attractive for implementation in optical receivers at decoding throughputs of 100 Gbit/s and above: Low-Density Parity-Check (LDPC) codes and Block Turbo Codes (BTCs). The latter can be decoded with a highly parallelizable, rapidly converging soft-decision decoding algorithm and usually have a large minimum distance, but require large block lengths of more than 10^5 bits to realize codes with small overheads, leading to decoding latencies that can be detrimental in certain applications. With overheads of more than 15% to 20%, these codes no longer perform well, at least under hard-decision decoding [1]. LDPC codes are well understood and are suited to realize codes with lengths of a few 10^4 bits and overheads above 15%.

Recently, the class of Spatially Coupled (SC) codes [2] has gained widespread interest due to the fact that these codes are asymptotically capacity-achieving, have appealing encoding and decoding complexity, and show outstanding practical decoding performance. SC codes extend existing coding schemes by a superimposed convolutional structure. The technique of spatial coupling can be applied to most existing codes; the most popular, however, are LDPC codes [2] and BTCs [3]. The latter have found use in optical communications (staircase codes) and show outstanding performance, operating within 0.56 dB of the capacity of the hard-decision AWGN channel.

In this paper, we discuss the use of SC codes in optical communications and especially focus on SC-LDPC codes. We summarize some recent advances and design guidelines for SC-LDPC codes and show by means of a Field-Programmable Gate Array (FPGA)-based decoding platform that large gains at low bit error rates can be realized with relatively small codes when compared with state-of-the-art LDPC codes. The aim of this paper is to show that SC-LDPC codes are mature channel codes that are viable candidates for future optical communication systems with large NCGs. Furthermore, their universality makes them attractive for flexible transceivers with adaptive modulation.

2 LDPC & Spatially Coupled LDPC Codes

An LDPC code is defined by the null space of a sparse parity-check matrix H of size m × n, i.e., the code contains all binary code words x of length n such that Hx^T = 0: C = {x ∈ {0,1}^n : Hx^T = 0}.

Each row of H is considered to be a check node, while each column of H is usually termed variable node. We say that the variable degree (or variable node degree) of a code is regular with degree v if the number of “1”s in each column of H is constant and amounts to v. We say that the check degree (or check node degree) of a code is regular with degree c if the number of “1”s in each row of H is constant and amounts to c. The class of irregular LDPC codes has the property that the number of “1”s in each column and/or row is not constant. The degree profile of an irregular LDPC code indicates the fraction of columns/rows of a certain degree. More precisely, v_i represents the fraction of columns of H with i “1”s (e.g., if v_3 = 0.5, half the columns of H have three “1”s). Note that the sum of all v_i has to equal 1. Similarly, c_i represents the fraction of rows (i.e., checks) with i “1”s.
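As a small, self-contained illustration of these definitions (a toy 3 × 6 matrix invented for this sketch, not a code from this paper), the following Python snippet checks codeword membership and computes the degree profile:

```python
# Toy parity-check matrix H with m = 3 checks (rows) and n = 6 variable
# nodes (columns); this example matrix is made up for illustration only.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def is_codeword(H, x):
    # x is a codeword iff every check (row of H) sums to 0 modulo 2.
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

def degree_profile(H):
    # v[i]: fraction of columns with i ones; c[i]: fraction of rows with i ones.
    n, m = len(H[0]), len(H)
    col_deg = [sum(row[j] for row in H) for j in range(n)]
    row_deg = [sum(row) for row in H]
    v = {d: col_deg.count(d) / n for d in set(col_deg)}
    c = {d: row_deg.count(d) / m for d in set(row_deg)}
    return v, c
```

For this toy matrix, half the columns have degree 1 and half have degree 2 (so the code is irregular in the variable degrees), while all rows have degree 3 (regular check degree).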

LDPC codes form an important class of codes in optical communications [4]. LDPC codes with soft-decision decoding are currently being deployed in systems operating at 100 Gbit/s, e.g., utilizing 16 decoding iterations [5]. Modern high-performance FEC systems in optical communications are sometimes constructed using a soft-decision LDPC inner code, which reduces the BER to an intermediate level, and a hard-decision algebraic outer cleanup code, which pushes the system BER to levels below the target of 10^-15 [6]. The outer cleanup code is used to combat the error floor that is present in most LDPC codes. Note that the implementation of a coding system with an outer cleanup code requires a thorough understanding of the LDPC code and a properly designed interleaver between the LDPC and the outer code. Recently, there has been some interest in avoiding the use of an outer cleanup code and using only soft-decision LDPC codes with very low error floors, leading to coding schemes with less rate loss and less latency. With increasing computational resources, it is now also feasible to evaluate very low target BERs of LDPC codes and to optimize the codes to have error floors below the system’s target BER [7]. Although the internal data flow of an LDPC decoder may be larger by more than an order of magnitude [8] than that of a BTC, several techniques can be used to lower the data flow, e.g., layered decoding [9] and min-sum decoding; with min-sum decoding, the check node output can be stored in compressed form, requiring only the two smallest input magnitudes, the index of the minimum, and one sign bit per edge.
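The min-sum compression mentioned above can be sketched as follows; this is a generic scaled min-sum check node update (the scaling factor 0.75 is an assumed example value, not necessarily the one used in any deployed decoder):

```python
def min_sum_check_update(llrs, scale=0.75):
    # Scaled min-sum check node update: each outgoing message is the
    # product of the signs of all OTHER incoming LLRs times the minimum
    # magnitude among the others, scaled by a constant factor.
    # Compact state per check node: two smallest magnitudes (m1, m2),
    # the index of the minimum, and one sign bit per edge.
    mags = [abs(l) for l in llrs]
    signs = [1 if l >= 0 else -1 for l in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    i1 = min(range(len(mags)), key=lambda i: mags[i])       # argmin
    m1 = mags[i1]
    m2 = min(mags[i] for i in range(len(mags)) if i != i1)  # 2nd smallest
    out = []
    for i in range(len(llrs)):
        mag = m2 if i == i1 else m1          # exclude the edge's own input
        out.append(scale * total_sign * signs[i] * mag)
    return out
```

Note that `total_sign * signs[i]` equals the sign product over all inputs except input i, since each sign squared is 1; this is what makes the compact (m1, m2, argmin, signs) representation sufficient.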

SC-LDPC codes were introduced more than a decade ago [10]² but their outstanding properties have only been fully appreciated recently, when Lentmaier et al. noticed [11] that the estimated decoding performance of a certain class of terminated protograph-based SC-LDPC codes with a simple message passing decoder approaches the performance of the underlying code ensemble under Maximum Likelihood (ML) decoding as the coupling length grows. This was subsequently proven rigorously in [2, 12], provided that certain particular conditions on the code structure are fulfilled.

A left-terminated SC-LDPC code is basically an LDPC code with a structured, infinitely extended parity-check matrix

    H =  [ H_0(1)                                ]
         [ H_1(2)     H_0(2)                     ]
         [   ...      H_1(3)     H_0(3)          ]
         [ H_μ(μ+1)     ...        ...     ...   ]
         [            H_μ(μ+2)     ...     ...   ]

with the H_i(t) being sparse binary parity-check matrices and μ denoting the syndrome former memory of the code. Every code word x of the code has to fulfill Hx^T = 0. One advantage of SC-LDPC codes is that the infinitely long code words can conveniently be decoded with acceptable latency using a simple windowed decoder [13]. In practice, in order to construct codes of finite length, e.g., to adhere to certain framing structures in the communication system at hand, the infinitely extended matrix is terminated, resulting in a finite-length code. One example of termination is zero-termination, where the matrix is cut off after L spatial positions: if the H_i(t) are of size m_S × n_S, this results in a code of length L·n_S and a parity-check matrix of size (L+μ)·m_S × L·n_S. Note that this termination leads to a rate loss, which can however be kept small if L is chosen large enough. For a discussion of termination schemes, we refer the interested reader to [4, 14].

SC codes are now emerging in various applications. Two examples of SC product codes are the staircase code [8] and the braided BCH codes [15], both intended for hard-decision decoding in optical communications. SC-LDPC codes may also be viable for pragmatic coded modulation schemes [16, 14].

In order to simplify the design of hardware, we first drop the time dependency and only consider the time-invariant (left-terminated) parity-check matrix with H_i(t) = H_i for all t and all i ∈ {0, …, μ}, which is attractive for implementation as the sub-matrices can easily be reused in the encoder and decoder hardware. For this time-invariant construction, an upper bound on the minimum distance of the code is given in [4, Eq. (7)]; the bound grows quadratically with the size S of the sub-matrices H_i.

To construct codes with large enough minimum distances, we therefore maximize the size S of the sub-matrices H_i, which has a quadratic influence on the bound. In order to keep the complexity of the so-constructed code small, we restrict ourselves to small values of the syndrome former memory, with either μ = 1 or μ = 2. We call such codes weakly coupled codes [17].

3 Rapidly Converging SC-LDPC Codes

In the past, irregular block LDPC codes have been used to design codes that perform very well at low SNRs, but these schemes sometimes suffer from relatively high error floors, requiring the use of an outer code that leads to inherent rate losses. In the case of SC-LDPC codes, we can instead use the irregularity to control the propagation speed of the decoding wave of a windowed decoder, i.e., we can minimize the number of iterations that are necessary until a windowed decoder can advance by one step [18]. To simplify the code construction and to illustrate the concept, we only use the most simple form of irregularity and construct slightly irregular SC-LDPC codes with degree-3 and additionally either degree-4 or degree-6 variable nodes. We avoid degree-2 variable nodes due to their potentially detrimental effect on the error floor. Also, in contrast to block LDPC codes, degree-2 nodes are not of the same importance for SC-LDPC codes. We vary the fraction α of degree-4 or degree-6 nodes between 0 and 1 and select the check node degrees such that a rate-4/5 (25% overhead) code is constructed. We perform full density evolution using the irregular version of Kudekar’s ensemble [2] for random spatial coupling over an AWGN channel and measure the SNR required to advance the decoding wave by one step within a given number of iterations.

The density evolution results are shown in Fig. 1 for a varying fraction α of higher-degree variable nodes and a varying number of iterations per window position. We can see that using additional degree-4 (besides degree-3) variable nodes does not lead to noteworthy gains, which is why we focus on additional degree-6 nodes in this paper. The convergence speed improves by selecting a proper value of α, leading to a smaller required SNR; the best selection depends, however, on the number of iterations. As we intend to construct low-complexity decoders with few iterations, the optimum in this case is achieved with α = 0.2 (20% of degree-6 variable nodes, 80% of degree-3 variable nodes). We can see that by proper selection of α, we can obtain codes that have an improved decoding convergence; however, depending on the selection, a worse convergence behavior than for the regular case can also result. We also observe that if we want a code that operates extremely close to capacity, the optimum value of α is larger than for the more practical case, where the optimum lies at α = 0.2. Note that although we use Kudekar’s ensemble for density evolution, the codes we construct in the next section are generated from protographs, similar to those in [11], as these exhibit a better finite-length performance.
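The notion of a decoding wave that advances faster or slower depending on the channel quality can be reproduced with a compact density evolution sketch. For tractability, the snippet below uses the Binary Erasure Channel and a regular (3,6) coupled ensemble with coupling window w = 3 (following the recursion of [2]), rather than the AWGN channel and the irregular rate-4/5 ensembles optimized in this section:

```python
def wave_iterations(eps, dv=3, dc=6, w=3, L=40, max_iters=20000):
    # BEC density evolution for the (dv, dc, w, L) coupled ensemble [2]:
    # x[i] is the erasure probability of edge messages at position i.
    # Returns the number of iterations until the whole chain is decoded
    # (None if the cap is hit). More iterations = slower decoding wave.
    x = [eps] * L
    for it in range(1, max_iters + 1):
        x_new = []
        for i in range(L):
            acc = 0.0
            for k in range(w):
                # average variable-to-check erasure entering check i+k;
                # positions outside [0, L) are shortened (erasure 0)
                z = sum(x[i + k - j] for j in range(w)
                        if 0 <= i + k - j < L) / w
                acc += 1.0 - (1.0 - z) ** (dc - 1)
            x_new.append(eps * (acc / w) ** (dv - 1))
        x = x_new
        if max(x) < 1e-8:
            return it
    return None
```

Running this for erasure rates between the uncoupled BP threshold (≈ 0.429 for the (3,6) ensemble) and the saturated threshold (≈ 0.488) shows the wave effect: decoding succeeds, but the closer the channel is to the threshold, the more iterations the wave needs per position, which is exactly the quantity the irregularity optimization above targets.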

Figure 1: Required SNR to operate a windowed decoder with a fixed number of iterations per segment for slightly irregular, weakly coupled SC codes with degree-3 and additionally degree-4 (dashed lines) or degree-6 (solid lines) variable nodes (results obtained by density evolution).

3.1 FPGA-based Verification

In order to verify the performance of the rapidly converging weakly coupled SC-LDPC codes, we use a Field-Programmable Gate Array (FPGA) platform, whose high-level diagram is illustrated in Fig. 2 [17]. This platform is similar to other platforms reported in the literature [7] and consists of three parts: a Gaussian noise generator, an FEC decoder, and an error-detecting circuit. The Gaussian noise generator generates Gaussian distributed Log-Likelihood Ratios (LLRs), stemming from BPSK transmission over an AWGN channel, using uniform random number generators and the Box-Muller transform. These are then fed to the LDPC decoder after quantization to 15 levels. The LDPC decoder is based on the layered decoding algorithm [9] and uses a scaled min-sum check node computation rule with a constant scaling factor.
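A floating-point sketch of this LLR generation chain is shown below; it is an illustration of the principle (Box-Muller transform plus uniform 15-level quantization), not the fixed-point design of the FPGA platform:

```python
import math
import random

def awgn_llrs(n_bits, snr_db, levels=15, seed=42):
    # BPSK (+1 transmitted, i.e., the all-zero code word) over an AWGN
    # channel with noise variance sigma^2 = 10^(-snr_db/10).
    # The exact channel LLR for y = x + n is L = 2*y / sigma^2.
    rng = random.Random(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)
    sigma = math.sqrt(sigma2)
    llrs = []
    for _ in range(n_bits):
        # Box-Muller: two uniform samples -> one standard normal sample
        u1 = 1.0 - rng.random()      # in (0, 1], avoids log(0)
        u2 = rng.random()
        n = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        llrs.append(2.0 * (1.0 + sigma * n) / sigma2)
    # Uniform quantization to 15 symmetric levels (integers -7..+7),
    # i.e., a 4-bit signed representation including the sign.
    half = levels // 2
    step = max(abs(l) for l in llrs) / half
    return [max(-half, min(half, round(l / step))) for l in llrs]
```

With the all-zero code word, most quantized LLRs come out positive at reasonable SNRs, and the quantizer's outer levels ±7 are hit by the largest-magnitude samples.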

The windowed decoder that is implemented can be sub-divided into three steps. In the first step, a new sub-block of quantized LLRs is received from the random number generator and put into the vacant position of the decoder’s LLR memory. Decoding then takes place by considering W consecutive copies of the sub-matrix structure: the windowed decoder operates on an equivalent matrix spanning W sub-blocks, which it processes before shifting in new values. In order to maximize the hardware utilization within a window, we use two parallel decoding engines that operate on non-overlapping portions of that matrix: the first engine processes a contiguous range of check nodes at the beginning of the window, while the second engine processes, in parallel, the corresponding range offset by half the window size. Note that only a single iteration is carried out per engine and window shift to guarantee the required throughput, so that each sub-block effectively accumulates two iterations per window position (due to the use of two engines).
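The two-engine scheduling can be sketched abstractly as follows. This toy model (window size W, chain length L, both assumed symbols, not the exact FPGA parameters) only tracks which sub-blocks each engine touches at each window shift:

```python
def window_schedule(W, L, engines=2):
    # For each window shift t, the decoder window covers sub-blocks
    # [t, t+W). Each engine is assigned a contiguous, non-overlapping
    # slice of the window, so the window halves are processed in parallel.
    per = W // engines
    return [[(t + e * per, t + (e + 1) * per) for e in range(engines)]
            for t in range(L - W + 1)]

def iterations_per_block(W, L, engines=2):
    # How many single-iteration passes each sub-block accumulates while
    # it resides in the sliding window (blocks near the chain ends get
    # fewer passes -- a boundary effect of windowed decoding).
    counts = [0] * L
    for slices in window_schedule(W, L, engines):
        for a, b in slices:
            for i in range(a, b):
                counts[i] += 1
    return counts
```

For W = 4 and L = 6 the per-block pass counts are [1, 2, 3, 3, 2, 1]: interior sub-blocks are visited once per shift for as long as they stay inside the window, while the terminated edges see fewer passes.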

Figure 2: High-level schematic of the FPGA evaluation platform.


The output of the LDPC decoder is connected to the BER evaluation unit, which counts the bit errors and reports the error positions. We use Virtex-7 FPGAs, allowing for a throughput of several Gbit/s, to evaluate the BER performance of several coding schemes of rate 4/5, i.e., of 25% coding overhead. We select this particular rate due to its importance in today’s Dense Wavelength Division Multiplex (DWDM) systems. Current and future 100 Gbit/s (with QPSK) or 200 Gbit/s (with 16-QAM) systems are often operated in 50 GHz channels whose exploitable bandwidth is noticeably smaller due to Reconfigurable Add-Drop Multiplexers with non-flat frequency characteristic. With almost rectangular pulse shapes (root-raised cosine with small roll-off) and today’s generation of Digital-to-Analog Converters, symbol rates of 32 GBaud can be realized. With dual-polarization QPSK transmission, gross bit rates of 128 Gbit/s result. Assuming signaling and protocol overheads of 3 Gbit/s, this leads to a code that adds 25 Gbit/s parity overhead to a 100 Gbit/s net data rate (i.e., a code of rate 100/125 = 4/5). We compare three codes:

  • As reference, we consider a regular block QC-LDPC code with constant variable and check node degrees. The code is a quasi-cyclic code of girth 10, constructed from cyclically shifted identity matrices and decoded with a fixed number of row-layered iterations.

  • SC-LDPC Code A is the rapidly converging irregular code from Sec. 3 with syndrome former memory μ = 1, variable node degrees 3 and 6 (with fraction α = 0.2 of degree-6 nodes), and constant check node degree.

  • SC-LDPC Code B is a regular code with a larger syndrome former memory of μ = 2. The size of its sub-matrices is identical to that of SC-LDPC Code A.

Both SC codes are constructed from cyclic permutation matrices and are terminated after a fixed number of sub-blocks. The simulation results are shown in Fig. 3. The block code, whose matrix has been optimized for low error floors, is outperformed by both SC-LDPC codes. SC-LDPC Code A offers a coding gain of around 0.3 dB at low BERs compared to the conventional block LDPC code, but an error floor starts to manifest itself. This error floor is not due to trapping sets, but due to a few uncorrected bits remaining after windowed decoding, which can be recovered with a few-error-correcting outer code. Code B has a BER curve that starts to decay at worse channel qualities, but the two BER curves cross at low error rates. For the next simulated point, we did not observe any bit errors, and hence we conjecture a lower error floor than for Code A. Note that no special measures have been taken to combat an error floor: only a plain scaled min-sum decoder has been used. With the block code, post-processing [7] may be necessary to combat the error floor.

Another advantage of SC-LDPC codes is that they are future-proof: while the block code does not benefit from further decoding iterations, as its performance is already close to its decoding threshold, the scaling behavior of the SC-LDPC code allows carrying out further iterations to achieve still larger coding gains, as the gap to the decoding threshold is still non-negligible. This makes these codes attractive for standardization.

Figure 3: Simulation results with FPGA-based windowed decoding using two decoder instances.

4 SC-LDPC Codes for Modulation and Detection

As future optical networks tend to become increasingly flexible and elastic, transceivers that integrate a certain amount of flexibility with respect to coding and modulation formats are required. Especially the modulation format is expected to change when transceivers are designed for long-haul or short-haul applications, where the latter require high spectral efficiencies (e.g., data center interconnects). In this section, we show that SC codes are perfectly suited to be combined with varying modulation formats due to their universality properties [12]. We combine SC-LDPC codes with a modulator and use density evolution to show how the detector front-end influences the performance of the codes. In conventional (block) LDPC code design, usually the code needs to be “matched” to the transfer curve of the detection front-end [19]. If the code is not well matched to the front-end, a performance loss occurs. If the detector front-end has highly varying characteristics, due to, e.g., varying modulation formats or channels, several codes would need to be implemented and selected depending on the conditions, which is not feasible in optical networks, where feedback is usually difficult to realize and where different codes cannot be implemented due to hardware constraints.

In contrast to many block LDPC codes, spatially coupled LDPC codes can converge below the pinch-off in the EXIT chart due to the effect of threshold saturation [2]. Hence, even if the code is not well matched to the demodulator/detector from a traditional point of view, we can hope to successfully decode. We can hence use a single code which is universally good in all scenarios, and the code design can stay agnostic to the channel/detector behavior. In order to illustrate the concept, we model the detector by a linear EXIT characteristic

    I_E(I_A) = c + s · (I_A − 1/2),

where s controls the slope of the characteristic and c describes the mutual information of the communication channel. The slope models the effect of, e.g., different modulation formats, different bit labelings in higher-order modulation, and different detectors. We assume that the output of the detector can be modeled using a Binary Erasure Channel (BEC); we therefore also use BEC message passing. We compare two different code approaches. First, we use the spatially coupled ensemble presented in [2] with the density evolution equation extended by iterative detection, where the recursion is expressed in terms of the node-perspective degree distribution polynomial L(x), the edge-perspective degree distribution λ(x), and the erasure probability of edge messages at each spatial position and iteration. Additionally, we generate protograph-based codes and employ Multi-Edge-Type (MET) density evolution [11] including iterative detection. We consider two code families of rate 4/5: the first family is the rapidly converging code from Sec. 3 with variable node degrees 3 and 6 and α = 0.2, evaluated both in Kudekar’s ensemble and in the protograph ensemble. The second code is a regular code, for which we likewise use Kudekar’s ensemble and a protograph ensemble [16].
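The machinery of BEC density evolution with an iterated detector can be sketched compactly. The snippet below is a toy model, not the DE of this section: it assumes the linear characteristic I_E(I_A) = c + s·(I_A − 1/2) stated above and uses a regular (3,6) rate-1/2 ensemble instead of the rate-4/5 codes, with w = 1 recovering the uncoupled block code:

```python
def decodes(c, s, w=3, L=24, dv=3, dc=6, iters=3000, tol=1e-9):
    # BEC density evolution for a (dv, dc) ensemble coupled over a chain
    # of length L with window w, iterating against a detector whose EXIT
    # characteristic is assumed linear: I_E(I_A) = c + s*(I_A - 1/2).
    x = [1.0] * L
    for _ in range(iters):
        y = []
        for i in range(L):
            acc = 0.0
            for k in range(w):
                z = sum(x[i + k - j] for j in range(w)
                        if 0 <= i + k - j < L) / w
                acc += 1.0 - (1.0 - z) ** (dc - 1)
            y.append(acc / w)
        x_new = []
        for i in range(L):
            ia = 1.0 - y[i] ** dv                       # decoder feedback
            ie = min(1.0, max(0.0, c + s * (ia - 0.5))) # detector output MI
            x_new.append((1.0 - ie) * y[i] ** (dv - 1)) # erasure update
        done = max(abs(a - b) for a, b in zip(x, x_new)) < tol
        x = x_new
        if done:
            break
    return max(x) < 1e-6

def threshold(s, w, steps=12):
    # Bisect the smallest channel mutual information c that still decodes.
    lo, hi = 0.3, 0.9
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if decodes(mid, s, w=w):
            hi = mid
        else:
            lo = mid
    return hi
```

For s = 0 the detector is a plain BEC and the known (3,6) thresholds are recovered (coupled ≈ 1 − 0.488 and uncoupled ≈ 1 − 0.429 in mutual information); sweeping s then lets one explore how the coupled threshold behaves relative to the uncoupled one, in the spirit of Fig. 4.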

Figure 4 shows the DE results, where solid lines show the decoding thresholds for Kudekar’s ensemble and dashed lines those for the protograph-based ensemble. All SC codes have decoding thresholds close to the theoretical limit, and the decoding threshold is almost independent of the slope s of the detector characteristic. A regular block LDPC code, in contrast, has a highly varying threshold for different slopes s. The flat threshold behavior of the SC-LDPC codes indicates a universal, channel-agnostic behavior; even an optimized irregular block LDPC code will only be good for a single slope parameter [20]. In order to improve the decoding threshold, we may deliberately select a precoder that has an EXIT characteristic with a non-zero slope s; however, as the inset of Fig. 4 shows, the slope affects the decoding speed, i.e., the number of iterations required to advance the decoding wave by one step, so that the complexity will grow alongside. For the rapidly converging code of Sec. 3, the decoding speed is further increased, partially compensating this effect.

Figure 4: Decoding thresholds of different SC-LDPC codes and of a regular block code for detector characteristics with varying slope s.

We have presented an example of such a system with differential detection that is adapted to a channel with varying phase noise in [21]. There, a single spatially coupled code was able to outperform two different LDPC codes, each optimized for a different channel characteristic.

5 Conclusions

In this paper, we have highlighted Spatially Coupled (SC) LDPC codes as potential candidates for future lightwave transmission systems. We have optimized SC-LDPC codes for convergence speed and shown by means of an FPGA-based simulation that very low error rates can be obtained. Finally, we have shown that SC-LDPC codes are good candidates when employed in a system with iterative decoding and detection: a single code can be used under various channel conditions.


  1. footnotetext: Parts of this work were supported by the German Government in the frame of the CELTIC+/BMBF project SASER-SaveNet.
  2. Originally, these codes were called LDPC convolutional codes. The term “spatially coupled” was introduced [2] to denote the more general phenomenon of coupling several independent code(word)s by a superimposed, convolutional-like structure.


  1. J. Justesen, “Performance of product codes and related structures with iterated decoding,” IEEE Trans. Commun., vol. 59, no. 2, pp. 407–415, Feb. 2011.
  2. S. Kudekar, T. Richardson, and R. Urbanke, “Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC,” IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 803–834, Feb. 2011.
  3. L. M. Zhang and F. R. Kschischang, “Staircase codes with 6% to 33% overhead,” J. Lightw. Technol., vol. 32, no. 10, May 2014.
  4. A. Leven and L. Schmalen, “Status and recent advances on forward error correction technologies for lightwave systems,” J. Lightw. Technol., vol. 32, no. 16, Aug. 2014.
  5. E. Yamazaki et al., “Fast optical channel recovery in field demonstration of 100-GBit/s Ethernet over OTN using real-time DSP,” Optics Express, vol. 19, no. 14, pp. 13 179–13 184, Jul. 2011.
  6. Y. Miyata, K. Kubo, H. Yoshida, and T. Mizuochi, “Proposal for frame structure of optical channel transport unit employing LDPC codes for 100 Gb/s FEC,” in Proc. OFC/NFOEC, paper NThB2, 2009.
  7. Z. Zhang, L. Dolecek, B. Nikolic, V. Anantharam, and M. Wainwright, “Investigation of error floors of structured low-density parity-check codes by hardware emulation,” in Proc. GLOBECOM, 2006.
  8. B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, “Staircase codes: FEC for 100 Gb/s OTN,” J. Lightw. Technol., vol. 30, no. 1, pp. 110–117, 2012.
  9. D. Hocevar, “A reduced complexity decoder architecture via layered decoding of LDPC codes,” in Proc. IEEE SiPS, 2004.
  10. A. J. Felström and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix,” IEEE Trans. Inf. Theory, vol. 45, no. 6, pp. 2181–2191, Jun. 1999.
  11. M. Lentmaier, D. G. M. Mitchell, G. P. Fettweis, and D. J. Costello, Jr., “Asymptotically regular LDPC codes with linear distance growth and thresholds close to capacity,” in Proc. ITA, Jan. 2010.
  12. S. Kudekar, T. Richardson, and R. Urbanke, “Spatially coupled ensembles universally achieve capacity under belief propagation,” arXiv:1201.2999, Tech. Rep., 2012.
  13. A. R. Iyengar, M. Papaleo, P. H. Siegel, J. K. Wolf, A. Vanelli-Coralli, and G. E. Corazza, “Windowed decoding of protograph-based LDPC convolutional codes over erasure channels,” IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2303–2320, April 2012.
  14. C. Häger, A. Graell i Amat, F. Brännström, A. Alvarado, and E. Agrell, “Comparison of terminated and tailbiting spatially coupled LDPC codes with optimized bit mapping for PM-64-QAM,” in Proc. ECOC, Cannes, France, 2014, paper Th.1.3.1.
  15. Y.-Y. Jian, H. D. Pfister, K. R. Narayanan, R. Rao, and R. Mazareh, “Iterative hard-decision decoding of braided BCH codes for high-speed optical communication,” in Proc. GLOBECOM, Atlanta, USA, 2013.
  16. L. Schmalen and S. ten Brink, “Combining spatially coupled LDPC codes with modulation and detection,” in Proc. ITG SCC, Munich, Germany, 2013.
  17. L. Schmalen, V. Aref, J. Cho, D. Suikat, D. Rösener, and A. Leven, “Spatially coupled soft-decision error correction for future lightwave systems,” J. Lightw. Technol., vol. 33, no. 5, Mar. 2015.
  18. V. Aref, L. Schmalen, and S. ten Brink, “On the convergence speed of spatially coupled LDPC ensembles,” in Proc. Allerton Conference on Communications, Control, and Computing, Oct. 2013, arXiv:1307.3780.
  19. S. ten Brink, G. Kramer, and A. Ashikhmin, “Design of low-density parity-check codes for modulation and detection,” IEEE Trans. Commun., vol. 52, no. 4, pp. 670–678, 2004.
  20. D. Pflüger, G. Bauch, F. Hauske, and Y. Zhao, “Design of LDPC codes for hybrid 10 Gbps/100 Gbps optical systems with optional differential modulation,” in Proc. ITG SCC, 2013.
  21. L. Schmalen, S. ten Brink, and A. Leven, “Spatially-coupled LDPC protograph codes for universal phase slip-tolerant differential decoding,” in Proc. OFC, Los Angeles, CA, USA, Mar. 2015, paper Th3E.6.