Use of a d=1 Constraint During LDPC Decoding in a Bliss Scheme
Abstract
Bliss schemes, which combine a run length limited (RLL) codec with an LDPC codec, generate LDPC parity bits over a systematic sequence of RLL channel bits that is inherently redundant because it satisfies e.g. a d=1 minimum run length constraint. That is, subsequences containing a run of length 1, viz. 101 and 010, cannot occur. We propose to use this redundancy during LDPC decoding in a Bliss scheme by introducing additional constraint nodes in the factor graph used by the LDPC decoder. The messages sent from these new nodes to the variable or codeword bit nodes exert a “force” on the soft-bit vector coming out of the LDPC decoding that gives it a tendency to comply with the constraint. This way, we can significantly reduce the probability of decoding error.
I Introduction
Bliss schemes [1], also called modified concatenation schemes by Fan [2], place a modulation encoder and decoder as an outer pair, and a systematic ECC encoder and decoder as an inner pair, around the storage channel. This way, the modulation decoder follows the ECC decoder, rather than preceding it, thus avoiding error propagation by the modulation decoder at the input of the ECC decoder, as occurs in the standard concatenation scheme [2]. As shown in Fig. 1, the parity bits generated from the sequence of modulation encoded channel bits are encoded with a second modulation encoder, where both modulation encoders are of the run length limited (RLL) type [3].
The so-called precoder in Fig. 1 performs an integration modulo 2 in order to convert differential bits, also called NRZI bits within the context of RLL coding, to so-called unipolar bits, which are indicative of the type of run on the storage medium. The channel SISO detector is shared between the systematic and the parity part of the LDPC codeword and produces soft-decision values for the unipolar bits. The equalizer filter shortens the channel impulse response. The SISO RLL decoder operates on these soft-decision unipolar bits produced by the channel SISO detector. Hence, the number of states of the RLL SISO decoder is twice the number expected if differential bits were used, as the state incorporates an additional polarity bit. (The use of a unipolar RLL SISO as in Fig. 1 gives a moderate bit error performance improvement over a channel SISO that outputs soft differential bits combined with an RLL SISO with half the number of states. Even better bit error performance is possible by combining the channel SISO and the RLL SISO into one joint SISO whose state space is the product of the state spaces of the channel SISO and the RLL SISO.)
Bliss schemes have pros and cons in comparison with a standard concatenation scheme:

Pro: In the Bliss scheme, the bit error rate (BER) at the input of the LDPC decoder, at least for the dominating systematic part, is the bare BER after bit detection, which has not been multiplied by the error propagation factor of the RLL SISO decoder.

Con: The error correction capability of the LDPC code is characterized by its code rate (and secondarily by its block length). The RLL encoding of the systematic part increases the number of bits into the ECC parity generator. For instance, for the same LDPC code rate with a rate 2/3 RLL code, as applies for the d=1 case, the parity generator produces 1.5 times more parity bits, due to the RLL encoding of the systematic bit sequence. This additional amount of parity lowers the net rate of the Bliss scheme.
In order to mitigate the rate loss of Bliss schemes due to the parity generation over redundant data as mentioned above, Immink [7] introduced a compression codec in Bliss’ scheme. For storage channels with jitter noise on the transitions between run lengths, Zhang et al. [8] reduced the error propagation of the compression decoder for a d=1 RLL code by the use of Gray labeling. In this contribution, we take a different approach. We accept the redundancy of the systematic channel bit sequence, and modify the LDPC decoder to exploit this redundancy during its decoding iterations, in order to improve its error correction performance.
Our interest is in the use of the same RLL codes by both modulation encoders, and of low density parity check (LDPC) codes [4] with a high code rate as ECC. In particular, in near-field optical storage as envisaged for the fourth generation of optical recording, RLL constraints with a minimum run length constraint of d=1 are still quite popular, as evidenced by the recent RLL code design of Coene et al. [5]. In [6], we discussed a stitching technique to connect a systematic part of a codeword to its adjacent parity part.
II Min-Sum LDPC Decoding Revisited
The min-sum LDPC decoding algorithm [9] is a simplification of the sum-product algorithm that uses minimum and summation operations instead of multiplication and summation operations. Both algorithms are special cases of the message passing algorithm [10], which passes messages along the edges of a so-called factor graph. These messages convey soft-decision information, e.g. in the form of log-likelihood ratios (LLR). The min-sum algorithm always uses a log-likelihood representation of the messages. For a binary random variable with probability $p_0$ ($p_1$) of taking on the value 0 (1), the log-likelihood equals $\ln(p_0/p_1)$. With a properly chosen scaling factor after the minimum operation, the performance loss of the min-sum algorithm w.r.t. the sum-product algorithm is minimal [11] [12]. The choice of the codeword bit node or “variable node” degrees in the factor graph [10] of the LDPC code seems to have only a minor influence for high LDPC code rates.
Let $N$ be the LDPC codeword length and $M$ be the number of parity check equations. (Our method may also apply to other, similar codes with a low density parity check matrix, such as repeat-accumulate (RA) codes.) The parity check matrix $H$ consists of $M$ rows and $N$ columns with elements from the binary Galois field GF(2). For a parity check equation with index $m$, define the set $\mathcal{N}(m)$ of codeword bit positions that it checks, i.e.

$\mathcal{N}(m) = \{\, n : H_{m,n} = 1 \,\}.$
Similarly, for a codeword bit position $n$, define the set $\mathcal{M}(n)$ of indices of parity check equations that check the given bit position $n$, i.e.

$\mathcal{M}(n) = \{\, m : H_{m,n} = 1 \,\}.$
The factor graph associated with the parity check matrix $H$ has as its set of vertices the union of the set of $N$ bit nodes and the set of $M$ parity check nodes. The set of edges consists of all edges $(n, m)$ for which $H_{m,n} = 1$. For ease of discussion, from hereon we assume that the LDPC code is regular, which means that all sets $\mathcal{N}(m)$ have the same size $d_c$ (the check node degree) and that all sets $\mathcal{M}(n)$ have the same size $d_v$ (the variable node degree).
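As a small illustration, the index sets $\mathcal{N}(m)$ and $\mathcal{M}(n)$ can be read off directly from the parity check matrix. The following sketch uses a toy (3, 6) matrix of our own choosing, not a code from this paper; N[m] collects the bit positions checked by equation m, and M[n] the equations that check bit n.

```python
# Sketch: deriving the index sets N(m) and M(n) from a small, illustrative
# parity check matrix H (rows = check equations, columns = codeword bits).

def index_sets(H):
    """Return (N, M): N[m] = bit positions checked by equation m,
    M[n] = indices of check equations involving bit position n."""
    rows, cols = len(H), len(H[0])
    N = [[n for n in range(cols) if H[m][n] == 1] for m in range(rows)]
    M = [[m for m in range(rows) if H[m][n] == 1] for n in range(cols)]
    return N, M

# Toy (3, 6) parity check matrix; every row has 3 ones (d_c = 3).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
N, M = index_sets(H)
# N[0] -> [0, 1, 3]   (check 0 involves bits 0, 1 and 3)
# M[1] -> [0, 1]      (bit 1 is checked by equations 0 and 1)
```

For a regular code, every row of N has size $d_c$ and every row of M has size $d_v$.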
In general, the maximum likelihood or hard-decision estimate of a binary random variable is determined by the sign of its log-likelihood value. The absolute value of a log-likelihood value is a measure of the reliability of the hard-decision estimate. A log-likelihood value of zero corresponds to an erasure (no information).
We now discuss the various operators involved in the min-sum algorithm. The operator VAR that is applied inside the codeword bit nodes or “variable nodes” combines a number of sources of information (log-likelihood messages) about the bit associated with a variable node. These sources of information are assumed to be statistically independent. (Due to the presence of cycles in the factor graph, this is only an approximation.) The assumption of statistical independence translates into a sum operator on the input log-likelihoods of a variable node.
The operator CHK that is used inside the check nodes approximates the log-likelihood value of the exclusive-or of its presumably statistically independent input variables.
(1)  $\mathrm{VAR}(\lambda_1, \ldots, \lambda_J) = \sum_{j=1}^{J} \lambda_j$

(2)  $\mathrm{CHK}(\lambda_1, \ldots, \lambda_J) = \alpha \cdot \Big( \prod_{j=1}^{J} \mathrm{sgn}(\lambda_j) \Big) \cdot \min_{1 \le j \le J} |\lambda_j|$

where $\alpha$ is the aforementioned scaling factor and $\mathrm{sgn}(x) = +1$ for $x \ge 0$ and $-1$ for $x < 0$.
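A minimal sketch of the VAR and CHK operators of Eqs. (1) and (2); the value alpha = 0.75 is a typical choice from the min-sum literature, not a value fixed by this paper.

```python
# Sketch of the two min-sum operators in the log-likelihood domain.

def VAR(llrs):
    """Variable node operator, Eq. (1): independent sources of information
    about a bit simply add up in the log-likelihood domain."""
    return sum(llrs)

def CHK(llrs, alpha=0.75):
    """Check node operator, Eq. (2), min-sum approximation: the sign is the
    product of the input signs (exclusive-or rule) and the magnitude is the
    smallest input magnitude, post-multiplied by a scaling factor alpha."""
    sign = 1.0
    for l in llrs:
        sign = -sign if l < 0 else sign
    return alpha * sign * min(abs(l) for l in llrs)

# Example: VAR([2.0, -0.5, 1.0]) -> 2.5
# CHK([2.0, -0.5, 1.0]) -> -0.375  (one negative input, min |.| = 0.5, x 0.75)
```

Note that a zero input to CHK yields a zero output, matching the erasure interpretation of a zero log-likelihood.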
For a given iteration of the min-sum algorithm, we define the following variables. As usual in the message passing algorithm, messages are sent along the edges of the factor graph.

– $\mu_n$: the decoder input message into variable node $n$.

– $\lambda_{n\to m}$: the message sent from variable node $n$ to check node $m$. It is obtained as the function VAR of the message $\mu_n$ and the last received messages of all check nodes $m' \in \mathcal{M}(n) \setminus \{m\}$,

(3)  $\lambda_{n\to m} = \mathrm{VAR}\big(\mu_n, \{\, \lambda_{m'\to n} : m' \in \mathcal{M}(n) \setminus \{m\} \,\}\big)$

– $\lambda_{m\to n}$: the message sent from check node $m$ to variable node $n$. It is obtained as the function CHK of the last received messages of all variable nodes $n' \in \mathcal{N}(m) \setminus \{n\}$,

(4)  $\lambda_{m\to n} = \mathrm{CHK}\big(\{\, \lambda_{n'\to m} : n' \in \mathcal{N}(m) \setminus \{n\} \,\}\big)$

– $\lambda_n$: the decoder output messages. Unlike the messages $\lambda_{n\to m}$, the decoder output message $\lambda_n$ uses all available information in a variable node $n$. It is obtained as the function VAR of the message $\mu_n$ and the last received messages of all check nodes $m \in \mathcal{M}(n)$,

(5)  $\lambda_n = \mathrm{VAR}\big(\mu_n, \{\, \lambda_{m\to n} : m \in \mathcal{M}(n) \,\}\big)$
A classical implementation of the min-sum algorithm stores all received messages. During the first half-iteration, all messages are sent from the variable nodes to the check nodes. During the second half-iteration, all messages are sent from the check nodes to the variable nodes. A constant number of iterations can be used. The decoder output messages need not be evaluated for all iterations, but only for the final iteration.
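The classical two-half-iteration (flooding) schedule described above can be sketched as follows. The toy parity check matrix and the input LLRs are illustrative choices of ours, not the codes or data of this paper.

```python
# Sketch of a classical (flooding) min-sum decoder: all variable-to-check
# messages first, then all check-to-variable messages, for a fixed number
# of iterations; decoder outputs are evaluated once at the end.

def minsum_decode(H, mu, iters=16, alpha=0.75):
    rows, cols = len(H), len(H[0])
    N = [[n for n in range(cols) if H[m][n]] for m in range(rows)]
    M = [[m for m in range(rows) if H[m][n]] for n in range(cols)]
    c2v = {(m, n): 0.0 for m in range(rows) for n in N[m]}  # check -> variable
    for _ in range(iters):
        # First half-iteration: variable -> check messages, Eq. (3).
        v2c = {(n, m): mu[n] + sum(c2v[(m2, n)] for m2 in M[n] if m2 != m)
               for n in range(cols) for m in M[n]}
        # Second half-iteration: check -> variable messages, Eq. (4).
        for m in range(rows):
            for n in N[m]:
                others = [v2c[(n2, m)] for n2 in N[m] if n2 != n]
                sign = 1.0
                for l in others:
                    sign = -sign if l < 0 else sign
                c2v[(m, n)] = alpha * sign * min(abs(l) for l in others)
    # Decoder output, Eq. (5): all available information per bit.
    out = [mu[n] + sum(c2v[(m, n)] for m in M[n]) for n in range(cols)]
    return [0 if l >= 0 else 1 for l in out], out

# Toy (3, 6) parity check matrix; all-zero codeword sent, bit 3 received
# weakly wrong (negative LLR with small magnitude).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
mu = [2.0, 2.0, 2.0, -0.5, 2.0, 2.0]
bits, _ = minsum_decode(H, mu)
# bits -> [0, 0, 0, 0, 0, 0]: the weak error on bit 3 is corrected.
```

The convention here is that a positive log-likelihood corresponds to the hard decision 0, consistent with $\ln(p_0/p_1)$.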
III Constraint Nodes in the Factor Graph
Our approach is to modify the LDPC decoder so that it can use the knowledge that the subsequences 101 and 010 do not occur in the systematic channel bit sequence. Here, e.g. 010 denotes the subsequence with the values 0, 1, 0 in three consecutive bit positions. This is achieved by the addition of so-called constraint nodes to the LDPC code’s factor graph. As stated before, our aim is to improve the error correction capability of the LDPC decoder. (Note that a d=1 constraint by itself does not imply a nontrivial Hamming distance; for instance, the d=1 constrained sequences 0011 and 0111 evidently have Hamming distance 1. However, for the intermediate range of target bit error rates after LDPC decoding in our storage application, where we assume that e.g. an outer RS (or BCH) code is used to achieve the ultra-low bit error rates typical of storage applications, a high minimum distance of the LDPC code is not required for good or even superior error correction performance anyway.)
In the definition of a new, modified factor graph for the LDPC decoder in a Bliss scheme, it is very convenient that the d=1 constraint is a local constraint [10]. That means that the constraint involves only a fixed, small number of codeword bits (codeword bit nodes). This sparseness of the connectivity matrix between the nodes is essential for efficient and effective operation of the modified LDPC decoder. This is all the more true the smaller the value of d is. For this reason, and for the practical importance of their applications [5], from hereon we concentrate on the d=1 based Bliss schemes.
The new factor graph with the constraint nodes is shown in Fig. 3. In general, the degree of a constraint node equals 3, as these nodes need to be able to detect the presence of the subsequences 101 and 010 of length 3; each constraint node is therefore connected to three consecutive systematic bit positions. Define the following additional variables in the LDPC decoding algorithm.

– $\nu_{n\to k}$: the input message to constraint node $k$ sent from variable node $n$.

– $\nu_{k\to n}$: the output message of constraint node $k$ sent to variable node $n$.

Here, the index $n$ of the variable node (LDPC codeword bit node) is implicitly restricted to the systematic bit positions, as there is no constraint on the parity part of an LDPC codeword.
As a general principle in message passing algorithms, the output message along a certain arc from a node is only allowed to depend on the (most recently) received input messages via all other arcs into that node. Hence, for the constraint node with index $k$, and most recently received log-likelihood messages $\lambda_1$, $\lambda_2$ and $\lambda_3$, we need to specify three output functions $f_1$, $f_2$ and $f_3$ in order to generate as many log-likelihood output messages $\nu_1$, $\nu_2$ and $\nu_3$, such that

$\nu_1 = f_1(\lambda_2, \lambda_3), \quad \nu_2 = f_2(\lambda_1, \lambda_3), \quad \nu_3 = f_3(\lambda_1, \lambda_2).$
Observe that, if one of the input log-likelihood values of a constraint node is zero, there is no indication that the constraint is violated, as it is not possible to conclude that there is a violation from the knowledge of fewer than two hard-decision estimates. Then, the output log-likelihood messages are to be zero. The output messages do not need to exert a force on the solution of the LDPC decoder in that case. On the contrary, if the input log-likelihoods have large absolute values, and their hard-decision values indicate a violation of the constraint, the output values should also have large absolute values. However, if the hard-decision values indicate compliance with the constraint, the output log-likelihoods should be zero. Hence, in the spirit of the min-sum algorithm we choose to let

$|\nu_1| = \min(|\lambda_2|, |\lambda_3|), \quad |\nu_2| = \min(|\lambda_1|, |\lambda_3|), \quad |\nu_3| = \min(|\lambda_1|, |\lambda_2|)$

whenever the hard decisions on the respective inputs indicate a possible violation of the constraint, and $\nu_i = 0$ otherwise.
Similar to Eq. (2), one can use a scaling factor to post-multiply the above minimum values.
Also, the sign of $\nu_1$ is not allowed to depend on $\lambda_1$, etc. The signs are chosen such that they enforce the disappearance of violations of the d=1 constraint in the decoded LDPC codeword; see Tables I, II and III. As stated before, a zero entry in these tables applies when there is no possible violation of the constraint.
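A sketch of a complete degree-3 constraint node under these rules, using the convention that a positive log-likelihood favors bit value 0. The sign logic mirrors the intent of the tables (a nonzero output only when the hard decisions on the other two inputs leave a violation of 101 or 010 possible), and beta is the optional scaling factor; its default value here is an illustrative choice.

```python
# Sketch of a degree-3 constraint node for the d=1 constraint (forbidden
# patterns 101 and 010). Convention: positive LLR <=> bit value 0.

def constraint_node(l1, l2, l3, beta=1.0):
    """Return the output messages (nu1, nu2, nu3).

    nu_i is zero unless the hard decisions on the other two inputs make a
    violation of the constraint possible; otherwise its sign pushes bit i
    away from the violating value, and its magnitude is the smaller of the
    other two input magnitudes (optionally scaled by beta)."""
    def sgn(x):
        return 1.0 if x >= 0 else -1.0

    # Outer bits: a violation (101 or 010) is possible only if the other
    # two bits differ; the compliant value of the outer bit then equals
    # its direct neighbor's value.
    nu1 = 0.0 if sgn(l2) == sgn(l3) else sgn(l2) * beta * min(abs(l2), abs(l3))
    nu3 = 0.0 if sgn(l1) == sgn(l2) else sgn(l2) * beta * min(abs(l1), abs(l2))
    # Middle bit: a violation is possible only if the outer bits agree;
    # the middle bit is then pushed towards the outer bits' value.
    nu2 = 0.0 if sgn(l1) != sgn(l3) else sgn(l1) * beta * min(abs(l1), abs(l3))
    return nu1, nu2, nu3

# Strongly detected 101 pattern (negative LLR <=> bit 1): all three outputs
# push the bits away from the forbidden pattern.
# constraint_node(-2.0, 3.0, -1.0) -> (1.0, -1.0, 2.0)
```

A zero input automatically yields a zero output magnitude, consistent with the erasure argument above.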
IV Min-Sum LDPC Decoding with Constraint Nodes in the Factor Graph
With reference to the above definitions of log-likelihood messages, one iteration of the extension of the min-sum LDPC decoding algorithm with constraint nodes in the factor graph is given by the following equations. For a systematic bit position $n$, let $\mathcal{K}(n)$ denote the set of constraint nodes connected to variable node $n$ (empty for parity positions):

(6)
$\lambda_{n\to m} = \mu_n + \sum_{m' \in \mathcal{M}(n)\setminus\{m\}} \lambda_{m'\to n} + \sum_{k \in \mathcal{K}(n)} \nu_{k\to n},$
$\nu_{n\to k} = \mu_n + \sum_{m \in \mathcal{M}(n)} \lambda_{m\to n} + \sum_{k' \in \mathcal{K}(n)\setminus\{k\}} \nu_{k'\to n},$
$\lambda_{m\to n} = \mathrm{CHK}\big(\{\, \lambda_{n'\to m} : n' \in \mathcal{N}(m)\setminus\{n\} \,\}\big),$
$\nu_{k\to n}$ as specified in Section III,
$\lambda_n = \mu_n + \sum_{m \in \mathcal{M}(n)} \lambda_{m\to n} + \sum_{k \in \mathcal{K}(n)} \nu_{k\to n}.$
V Simulation Results
We experimented with two pseudo-random regular LDPC codes. The codeword bit nodes or variable nodes all had degree . The shorter LDPC code had a code rate of and a code length of . The bit error rates after LDPC decoding obtained from simulations using this shorter code are shown in Figs. 4 and 5. In Fig. 4, the peak signal-to-noise ratio (PSNR) of white additive noise at the channel output is varied for a fixed channel bit length of 53 nm, scaled to the numerical aperture and wavelength of Blu-ray disc [13]. The user capacity of the disc, scaled to the physics of Blu-ray disc, then equals 30.5 GB. The simulations used a model of the optical storage read-out channel based on the channel modulation transfer function from the Braat-Hopkins formalism [14]. In Fig. 5, the channel bit length is varied, at a fixed PSNR of 35 dB. The RLL code of [5] is used, which has code rate 2/3.
The number of LDPC iterations was set to 16. We use the schedule of Yeo et al. [15], wherein a single check node or constraint node update is followed by variable node updates of the connected variable nodes, as a kind of “mini decoding iteration.” We process the check nodes and the constraint nodes in sequence. An LDPC decoding iteration is complete when all check nodes and all constraint nodes have undergone an update. This update schedule roughly halves the number of iterations that need to be performed in a decoder. Up to half a million LDPC codewords have been simulated per point of the graphs.
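A sketch of such a serial schedule, in which each check node update is immediately followed by a refresh of the connected variable nodes. For brevity the constraint nodes, which the full decoder processes in the same sweep, are omitted, and the toy parity check matrix is again an illustrative choice of ours.

```python
# Sketch of a serial (layered) min-sum schedule in the spirit of Yeo et
# al. [15]: a running posterior L[n] per bit is updated immediately after
# every check node, which roughly halves the required iteration count.

def layered_minsum(H, mu, iters=8, alpha=0.75):
    rows, cols = len(H), len(H[0])
    N = [[n for n in range(cols) if H[m][n]] for m in range(rows)]
    L = list(mu)                            # running posterior per bit
    c2v = {(m, n): 0.0 for m in range(rows) for n in N[m]}
    for _ in range(iters):
        for m in range(rows):               # process check nodes in sequence
            # Variable-to-check messages: subtract this check's old message.
            t = {n: L[n] - c2v[(m, n)] for n in N[m]}
            for n in N[m]:
                others = [t[n2] for n2 in N[m] if n2 != n]
                sign = 1.0
                for l in others:
                    sign = -sign if l < 0 else sign
                c2v[(m, n)] = alpha * sign * min(abs(l) for l in others)
                L[n] = t[n] + c2v[(m, n)]   # refresh the connected variable
    return [0 if l >= 0 else 1 for l in L]

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
# Weakly wrong bit 3 is corrected, as in the flooding example.
decoded = layered_minsum(H, [2.0, 2.0, 2.0, -0.5, 2.0, 2.0])
```

In the full decoder, the constraint nodes would simply be appended to the per-iteration sweep, each followed by the same variable refresh of its three connected systematic bit nodes.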
In all graphs presented in this section, the top curve shows the BER after bit detection. The middle curve shows the BER after standard LDPC decoding. The bottom curve shows the BER with decoding using the additional RLL-constraint nodes in the factor graph.
The longer LDPC code has a code rate of and a code length of . The BER simulation results for this longer code are shown in Figs. 6 and 7.
For a physical sector size of the storage medium of approx. 1 Mbit, the shorter LDPC code length allows combination with a 10-bit outer RS code of about maximal length. The longer LDPC code allows combination with an 8-bit outer RS code of about maximal length.
VI Conclusion
We conclude that the use of a d=1 constraint during the decoding of an LDPC-based Bliss scheme brings an advantage in PSNR of around 0.25 dB. Using this technique, the channel bit length of a simulated model of an optical storage channel can be decreased by ca. 0.6%. This suggests that the use of the d=1 constraint during LDPC decoding can enable an increase in storage density of around 0.6%.
Acknowledgment
These results were obtained during a joint project between Philips Research and Sony. The author is indebted to his former colleague Haibin Zhang, now with TNO Telecom, for the construction of the LDPC codes used in this research. Furthermore, the author would like to acknowledge stimulating discussions with his former colleagues Stan Baggen, Wim Coene and Bin Yin, as well as with the project members from Sony, Seiji Kobayashi, Toshi Horigome, Makoto Noda and Hiroyuki Yamagishi.
References
 [1] W. G. Bliss, “Circuitry for performing error correction calculations on baseband encoded data to eliminate error propagation,” IBM Techn. Discl. Bul., vol. 23, pp. 4633-4634, 1981.
 [2] J. L. Fan, Constrained Coding and Soft Iterative Decoding, Norwell, MA: Kluwer Academic Publishers, 2001, pp. 144-145.
 [3] K. A. S. Immink, Codes for Mass Data Storage Systems, 2nd ed., Eindhoven, The Netherlands: Shannon Foundation Publishers, 2004, pp. 51-64.
 [4] R. G. Gallager, Low-Density Parity-Check Codes, Cambridge, MA: MIT Press, 1963.
 [5] W. M. J. Coene, A. Hekstra, B. Yin, H. Yamagishi, M. Noda, A. Nakaoki, and T. Horigome, “A new d=1, k=10 soft-decodable RLL code with r=2 RMTR-constraint and a 2-to-3 PCWA mapping for DC-control,” in Proc. of SPIE, vol. 6282, Optical Data Storage 2006, R. Katayoma, R. Schlesinga, editors.
 [6] A. P. Hekstra, W. M. J. Coene, R. J. W. Debets, “A Stitching Technique for Bliss Schemes,” submitted to IEEE Trans. on Magn., May 2007.
 [7] K. A. S. Immink, “A practical method for approaching the channel capacity of constrained channels,” IEEE Trans. Inform. Theory, vol. 43, no. 5, pp. 1389-1399, Sept. 1997.
 [8] H. Zhang, A. P. Hekstra, W. M. J. Coene and B. Yin, “Performance investigation of soft-decodable RLL codes in high density optical recording,” IEEE Trans. on Magn., pp. 879-887, Aug. 2006.
 [9] M. Fossorier, M. Mihaljevic, H. Imai, “Reduced complexity iterative decoding of low-density parity check codes based on belief propagation,” IEEE Trans. on Commun., vol. 47, no. 5, pp. 673-680, May 1999.
 [10] F. R. Kschischang, B. J. Frey, H.-A. Loeliger, “Factor Graphs and the Sum-Product Algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, Feb. 2001.
 [11] J. Chen, M. Fossorier, “Near optimum universal belief propagation based decoding of low density parity check codes,” IEEE Trans. on Commun., vol. 50, no. 3, pp. 406-414, March 2002.
 [12] J. Heo, “Analysis of scaling soft information on low density parity check code,” Electron. Lett., vol. 39, no. 2, pp. 219-221, Jan. 2003.
 [13] T. Narahara, S. Kobayashi, M. Hattori, Y. Shimpuku, G. van den Enden, J. Kahlman, M. van Dijk, R. van Woudenberg, “Optical disc system for digital video recording,” Proc. ISOM/ODS 1999, Hawaii, Jul. 1999, SPIE vol. 3864, pp. 50-52.
 [14] J. Braat, Principles of Optical Disc Systems, Chapter 2, “Readout of Optical Discs,” Adam Hilger, Ltd., 1985.
 [15] E. Yeo, B. Nikolic, V. Anantharam, “Architectures and implementations of low-density parity check decoding algorithms,” The 2002 45th Midwest Symp. on Circuits and Systems, vol. 3, Aug. 2002, pp. III-437-III-440.
TABLE I. Sign of the constraint node output ν1 as a function of the signs of the inputs λ2 and λ3 (a positive log-likelihood favors bit value 0; an entry 0 denotes an erasure output).

sgn(λ2)  sgn(λ3) | sgn(ν1)
   +        +    |    0
   +        -    |    +
   -        +    |    -
   -        -    |    0

TABLE II. Sign of the constraint node output ν2 as a function of the signs of the inputs λ1 and λ3.

sgn(λ1)  sgn(λ3) | sgn(ν2)
   +        +    |    +
   +        -    |    0
   -        +    |    0
   -        -    |    -

TABLE III. Sign of the constraint node output ν3 as a function of the signs of the inputs λ1 and λ2.

sgn(λ1)  sgn(λ2) | sgn(ν3)
   +        +    |    0
   +        -    |    -
   -        +    |    +
   -        -    |    0
Andries P. Hekstra (M’00) was born in Breda, the Netherlands, in 1961. He received the “Ingenieur” degree in Electrical Engineering from Eindhoven University, the Netherlands, summa cum laude, in 1985, with a specialization in multi-user information theory. In 1985-86 he was a Young Graduate Trainee at the European Space Agency in Darmstadt, Germany, where he worked on spread spectrum telemetry systems. Subsequently, he was a Ph.D. student at the Electrical Engineering Department of Cornell University, Ithaca, USA, where he studied abstractions of VLSI packing problems. In 1990 he joined KPN Research in Leidschendam, the Netherlands, and in 1994 he finished his Ph.D. degree at Eindhoven University of Technology with his professor from Cornell University as second promotor. From 1995 to 2000 he investigated automatic assessment of video and speech quality using models of human perception and cognition. During 2001-2006, he worked at Philips Research, Eindhoven, mainly on error correction for optical storage systems. Since Sept. 2006, he has worked on applied communication and information theory at large within the research department of NXP Semiconductors, Eindhoven.