# Optimization of Signal-to-Noise-and-Distortion Ratio for Dynamic Range Limited Nonlinearities

## Abstract

Many components used in signal processing and communication applications, such as power amplifiers and analog-to-digital converters, are nonlinear and have a finite dynamic range. The nonlinearity associated with these devices distorts the input, which can degrade the overall system performance. The signal-to-noise-and-distortion ratio (SNDR) is a common metric to quantify this performance degradation. One way to mitigate nonlinear distortions is to maximize the SNDR. In this paper, we analyze how to maximize the SNDR of the nonlinearities in optical wireless communication (OWC) systems. Specifically, we answer the question of how to optimally predistort a double-sided memoryless nonlinearity that has both a “turn-on” value and a maximum “saturation” value. We show that the SNDR-maximizing response given the constraints is a double-sided limiter with a certain linear gain and a certain bias value. Both the gain and the bias are functions of the probability density function (PDF) of the input signal and the noise power. We also find a lower bound on the nonlinear system capacity, which is given by the SNDR, and an upper bound determined by the dynamic signal-to-noise ratio (DSNR). An application of the results herein is the design of predistortion linearization for nonlinear devices like light emitting diodes (LEDs).

**Index Terms**: Nonlinear distortion, dynamic range, clipping, predistortion, optical wireless communication.

## 1 Introduction

In addition to being nonlinear, many components in a signal processing or communication system have a dynamic range constraint. For example, light emitting diodes (LEDs) are dynamic range constrained devices that appear in intensity modulation (IM) and direct detection (DD) based optical wireless communication (OWC) systems [1][2]. To drive an LED, the input electric signal must be positive and exceed the turn-on voltage of the device. On the other hand, the signal is also limited by the saturation point or maximum permissible value of the LED. Thus, the dynamic range constraint can be modeled as two-sided clipping. The same situation may happen in other applications such as digital audio processing [3].

Both nonlinearity and clipping result in distortions which may cause system performance degradation. SNDR is a commonly used metric to quantify the distortion that is uncorrelated to the signal [4]-[7]. Previous work in this area mainly concentrated on a family of amplitude-limited nonlinearities that is common in radio frequency (RF) system design involving nonlinear components such as power amplifiers (PAs) and mixers.

Different from the previous work, our study discusses the class of nonlinearities with a two-sided dynamic range constraint that is more commonly found in optical and acoustic systems. Authors in [8]-[12] illustrated the impact of LED nonlinearity and clipping noise in OWC systems. Some predistortion strategies were proposed in [13]-[15]. However, to the best of our knowledge, the optimal nonlinear mapping under the two-sided dynamic range constraint has not been studied.

There are two major differences from the amplitude-limited nonlinearity. First, the signal will be subject to turn-on clipping and saturation clipping to meet the dynamic range constraint. Second, DC biasing must be used to shift the signal to an appropriate level to minimize distortion. In this paper, we will show that the ideal linearizer that maximizes the SNDR is a double-sided limiter that has an affine response. The parameters of the response can be calculated from the distribution of the input signal and the noise power.

In addition to deriving the SNDR-optimal predistorter, we also relate a lower bound on channel capacity to the SNDR, further motivating the SNDR considerations. Finally, we employ another common distortion metric, the dynamic signal-to-noise ratio (DSNR), to provide an upper bound on the capacity of the double-sided clipping channel.

The remainder of this paper is organized as follows: Section II introduces the system model for dynamic range limited nonlinearity and the corresponding SNDR definition. In Section III, we derive the optimal nonlinear mapping that maximizes the SNDR and illustrate some examples. In Section IV, we relate the SNDR to the capacity of the nonlinear channel. Finally, Section V concludes the paper. The detailed proofs of this paper are deferred to the Appendices.

## 2 System Model and SNDR Definition

### 2.1 System Model

Let us consider a system modeled by

$$y_o(t) = h_o(x_o(t)) + v(t) \qquad (1)$$

where x_o(t) is a real-valued signal with mean x̄_o and variance σ_{x_o}²; v(t) is a zero-mean additive noise process with variance σ_v²; and h_o(·) is a memoryless nonlinear mapping with dynamic range constraint 0 ≤ h_o(·) ≤ A.

For notational simplicity, we omit the t-dependence in the memoryless system and replace x_o(t) and y_o(t) by x and y. Then we have an equivalent system modeled by

$$y = h(x) + v \qquad (2)$$

where h(·) is a memoryless nonlinear mapping with dynamic range constraint 0 ≤ h(·) ≤ A and x is a zero-mean signal with variance σ_x².

### 2.2 SNDR Definition

According to Bussgang’s Theorem [16], the nonlinear mapping h(·) in (2) can be decomposed as

$$h(x) = \alpha x + d \qquad (3)$$

where d is the distortion caused by h(·) and α is a constant selected so that d is uncorrelated with x, i.e., E[xd] = 0. Thus

$$\alpha = \frac{E[x h(x)] - E[x d]}{E[x^2]} = \frac{E[x h(x)]}{E[x^2]} = \frac{E[x h(x)]}{\sigma_x^2}. \qquad (4)$$

The distortion power is given by

$$\varepsilon_d = E[d^2] - (E[d])^2 = E[h^2(x)] - \alpha^2\sigma_x^2 - E^2[h(x)]. \qquad (5)$$

The signal-to-noise-and-distortion ratio (SNDR) is defined as

$$\mathrm{SNDR} = \frac{\alpha^2\sigma_x^2}{\varepsilon_d + \sigma_v^2} = \frac{(E[x h(x)])^2/\sigma_x^2}{E[h^2(x)] - (E[x h(x)])^2/\sigma_x^2 - E^2[h(x)] + \sigma_v^2}. \qquad (6)$$

The definition of the SNDR here is slightly different from that in [7], because all the signals are real and the distortion contains the DC bias. Thus, the distortion power is modeled as a variance rather than a second moment.

We see from (6) that the SNDR is related to the distribution of x, the noise power σ_v² and the nonlinear mapping h(·). Our aim in the next section is to determine the function h(·) that maximizes the SNDR given a signal distribution and the two-sided clipping constraint.
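As a quick numerical illustration of definitions (4)–(6), the sketch below estimates α, ε_d and the SNDR by Monte Carlo. The particular mapping (an affine response clipped to [0, A]), signal statistics and noise power are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper)
A = 1.0            # upper limit of the dynamic range
sigma_x = 1.0      # input standard deviation
sigma_v2 = 0.01    # noise power sigma_v^2
x = rng.normal(0.0, sigma_x, 1_000_000)   # zero-mean input signal

def h(x):
    # example dynamic-range-limited mapping: affine response clipped to [0, A]
    return np.clip(0.3 * x + 0.5 * A, 0.0, A)

hx = h(x)
alpha = np.mean(x * hx) / np.mean(x**2)                              # Eq. (4)
eps_d = np.mean(hx**2) - alpha**2 * sigma_x**2 - np.mean(hx)**2      # Eq. (5)
sndr = alpha**2 * sigma_x**2 / (eps_d + sigma_v2)                    # Eq. (6)
```

By construction, the residual d = h(x) − αx is (sample-)uncorrelated with x, which is exactly the Bussgang decomposition (3).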

## 3 SNDR Optimization and Examples

### 3.1 Optimization of SNDR

Similar to [7], let us use a function g(·) to normalize the nonlinear mapping h(·):

$$h(x) = A\, g\!\left(\frac{x}{\sigma_x}\right) \qquad (7)$$

where 0 ≤ g(·) ≤ 1. Let γ = x/σ_x and substitute (7) into (6); we obtain

$$\mathrm{SNDR} = \frac{E^2[\gamma g(\gamma)]}{E[g^2(\gamma)] - E^2[\gamma g(\gamma)] - E^2[g(\gamma)] + \sigma_v^2/A^2} = \frac{E^2[\gamma g(\gamma)]}{\mathrm{var}[g(\gamma)] - E^2[\gamma g(\gamma)] + \sigma_v^2/A^2} \qquad (8)$$

where var[g(γ)] = E[g²(γ)] − E²[g(γ)] and γ is a zero-mean, unit-variance random variable.

The SNDR optimization problem can be stated as follows:

$$\max_{g(\cdot)}\ \mathrm{SNDR} \qquad (9)$$

$$\text{s.t.}\ \ 0 \le g(\cdot) \le 1 \qquad (10)$$

for a given distribution of γ, dynamic range A and noise power σ_v².

Fig. 1 illustrates an example of the mapping g(γ). The region of γ is divided into three sets L, S and U:

$$g(\gamma) = 0 \quad \text{for } \gamma \in L; \qquad (11)$$

$$0 < g(\gamma) < 1 \quad \text{for } \gamma \in S; \qquad (12)$$

$$g(\gamma) = 1 \quad \text{for } \gamma \in U. \qquad (13)$$

Thus, to determine a nonlinear mapping g(·), we need to find the sets L, S, U and the shape of the function on S.

We will solve this problem with the following steps:

1. find the optimal g(·) on S given L, S and U;

2. show that S should be as large as possible;

3. determine L and U for the optimal solution.

###### Lemma 1

Assume that the sets L, S and U are known and that S has nonzero probability. The function that maximizes the SNDR expression in (8) is of the form

$$g(\gamma) = \frac{\gamma}{\eta} + \beta \qquad (14)$$

where

$$\eta = \frac{C_0^{U} C_1^{S} + C_1^{U} - C_0^{S} C_1^{U}}{C_0^{U} C_0^{L} + (1 - C_0^{S})\,\sigma_v^2/A^2}, \qquad (15)$$

$$\beta = \frac{C_0^{U} C_1^{S} + C_0^{U} C_1^{U} + C_1^{S}\,\sigma_v^2/A^2}{C_0^{U} C_1^{S} + C_1^{U} - C_0^{S} C_1^{U}} \qquad (16)$$

with

$$C_n^{\text{set}} = E[\gamma^n I_{\text{set}}(\gamma)], \quad \text{set} \in \{L, S, U\}, \qquad (17)$$

and I_set(γ) is the indicator function:

$$I_{\text{set}}(\gamma) = \begin{cases} 1, & \gamma \in \text{set},\\ 0, & \text{otherwise}. \end{cases} \qquad (18)$$

This lemma holds if and only if the resulting g satisfies 0 ≤ g(γ) ≤ 1 for all γ ∈ S.

• See Appendix 6 for the proof.

This result rules out functions whose shape over S is nonlinear. Fig. 2 demonstrates examples of functions that may satisfy Lemma 1. Here, the slope 1/η of the affine segment on S can be either positive or negative.

Lemma 1 answers the question pertaining to the best shape of the function on S with given L, S and U. The remaining question is how to determine the optimal sets L, S and U so that the SNDR is maximized. This turns out to be a very challenging problem, since we are seeking a joint optimization over multiple sets. Let us consider S first.

###### Lemma 2

Given sets L, S and U, if S can be enlarged to S′ ⊇ S by transferring points from L or from U, then a higher SNDR can be achieved.

• See Appendix 7 for the proof.

Fig. 3 shows how Lemma 2 works. S can be enlarged by occupying subsets of L and U. The larger the set S, the higher the SNDR that can be achieved. Just as with Lemma 1, Lemma 2 holds if and only if g satisfies 0 ≤ g(γ) ≤ 1 for all γ ∈ S′, that is, the affine segment γ/η + β must remain within [0, 1] on the enlarged set.

Even with the set S determined, we still need to determine L and U.

###### Lemma 3

If η > 0, the g(·) that maximizes the SNDR satisfies sup L ≤ inf S and sup S ≤ inf U; if η < 0, the g(·) that maximizes the SNDR satisfies inf L ≥ sup S and inf S ≥ sup U.

• Let us compare the SNDR between Fig. 4(a) and Fig. 4(b). For η > 0, if there is a subset of L above part of S, or a subset of U below part of S, as illustrated in Fig. 4(b), then E[γg(γ)] is decreased while the variance of g(γ) is increased. Thus, the SNDR of Fig. 4(b) is less than the SNDR of Fig. 4(a). Similarly, we can draw the same conclusion for the case with η < 0.

In the final analysis, Lemma 1, Lemma 2 and Lemma 3 imply that the optimal L, S and U, in the sense of maximizing the SNDR, are L = (−∞, −βη], S = [−βη, (1−β)η] and U = [(1−β)η, +∞) if η > 0; or L = [−βη, +∞), S = [(1−β)η, −βη] and U = (−∞, (1−β)η] if η < 0.

###### Theorem 1

Within the class of g(·) satisfying 0 ≤ g(·) ≤ 1, the following g(γ) maximizes the SNDR expression in (8):

$$g(\gamma) = \begin{cases} 0, & \gamma \le -\beta^\star\eta^\star,\\[2pt] \dfrac{\gamma}{\eta^\star} + \beta^\star, & -\beta^\star\eta^\star \le \gamma \le (1-\beta^\star)\eta^\star,\\[2pt] 1, & \gamma \ge (1-\beta^\star)\eta^\star \end{cases} \qquad (19)$$

for η⋆ > 0, or

$$g(\gamma) = \begin{cases} 1, & \gamma \le (1-\beta^\star)\eta^\star,\\[2pt] \dfrac{\gamma}{\eta^\star} + \beta^\star, & (1-\beta^\star)\eta^\star \le \gamma \le -\beta^\star\eta^\star,\\[2pt] 0, & \gamma \ge -\beta^\star\eta^\star \end{cases} \qquad (20)$$

for η⋆ < 0, where η⋆ and β⋆ are found by solving the following transcendental equations:

$$\eta^\star = \frac{C_0^{U\star} C_1^{S\star} + C_1^{U\star} - C_0^{S\star} C_1^{U\star}}{C_0^{U\star} C_0^{L\star} + (1 - C_0^{S\star})\,\sigma_v^2/A^2}, \qquad (21)$$

$$\beta^\star = \frac{C_0^{U\star} C_1^{S\star} + C_0^{U\star} C_1^{U\star} + C_1^{S\star}\,\sigma_v^2/A^2}{C_0^{U\star} C_1^{S\star} + C_1^{U\star} - C_0^{S\star} C_1^{U\star}} \qquad (22)$$

with

$$C_0^{U\star} = \begin{cases} \int_{(1-\beta^\star)\eta^\star}^{+\infty} p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{-\infty}^{(1-\beta^\star)\eta^\star} p(\gamma)\,d\gamma, & \eta^\star < 0; \end{cases} \qquad (23)$$

$$C_0^{S\star} = \begin{cases} \int_{-\beta^\star\eta^\star}^{(1-\beta^\star)\eta^\star} p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{(1-\beta^\star)\eta^\star}^{-\beta^\star\eta^\star} p(\gamma)\,d\gamma, & \eta^\star < 0; \end{cases} \qquad (24)$$

$$C_0^{L\star} = \begin{cases} \int_{-\infty}^{-\beta^\star\eta^\star} p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{-\beta^\star\eta^\star}^{+\infty} p(\gamma)\,d\gamma, & \eta^\star < 0; \end{cases} \qquad (25)$$

$$C_1^{U\star} = \begin{cases} \int_{(1-\beta^\star)\eta^\star}^{+\infty} \gamma\, p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{-\infty}^{(1-\beta^\star)\eta^\star} \gamma\, p(\gamma)\,d\gamma, & \eta^\star < 0; \end{cases} \qquad (26)$$

$$C_1^{S\star} = \begin{cases} \int_{-\beta^\star\eta^\star}^{(1-\beta^\star)\eta^\star} \gamma\, p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{(1-\beta^\star)\eta^\star}^{-\beta^\star\eta^\star} \gamma\, p(\gamma)\,d\gamma, & \eta^\star < 0 \end{cases} \qquad (27)$$

and p(γ) is the probability density function (PDF) of γ. The optimal SNDR is found as

$$\mathrm{SNDR}^\star = \frac{1}{\dfrac{1}{R(\eta^\star, \beta^\star)} - 1} \qquad (28)$$

where

$$R(\eta^\star, \beta^\star) = C_2^{S\star} + \eta^\star C_1^{U\star} + \eta^\star\beta^\star C_1^{S\star} \qquad (29)$$

and

$$C_2^{S\star} = \begin{cases} \int_{-\beta^\star\eta^\star}^{(1-\beta^\star)\eta^\star} \gamma^2\, p(\gamma)\,d\gamma, & \eta^\star > 0,\\[2pt] \int_{(1-\beta^\star)\eta^\star}^{-\beta^\star\eta^\star} \gamma^2\, p(\gamma)\,d\gamma, & \eta^\star < 0. \end{cases} \qquad (30)$$
• See the proofs of Lemma 1, Lemma 2 and Lemma 3.

Theorem 1 establishes that the nonlinearity in the shape of Fig. 5 is optimal.

Predistortion is a well-known linearization strategy in many applications such as RF amplifier linearization. For dynamic range constrained nonlinearities like the LED electrical-to-optical conversion, predistortion has been proposed to mitigate the nonlinear effects. Specifically, given a system nonlinearity f(·), it is possible to apply a predistortion mapping w(·) so that the overall response h = f∘w is linear over the dynamic range. According to Theorem 1, it is best to make the overall response equal to the function given in (19) or (20), normalized as in (7), under the dynamic range constraint 0 ≤ h(·) ≤ A. Using the analytical tools presented above, we can answer the questions regarding the selection of the gain factor 1/η⋆, the DC bias β⋆ and the clipping regions on both sides, or equivalently, the sets L and U. Theorem 1 shows that these optimal parameters (in terms of SNDR) depend on the PDF of the input and the ratio σ_v²/A². Thus, our work can serve as a guideline for system design. In the next subsection, examples are given to illustrate the calculations of the optimal factors η⋆ and β⋆.
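As a concrete sketch of the target response of Theorem 1 (the η⋆ > 0 case), the double-sided limiter of (19) and the corresponding overall mapping h(x) = A·g(x/σ_x) from (7) can be written as:

```python
import numpy as np

def g_opt(gamma, eta, beta):
    """Double-sided limiter of Eq. (19), eta > 0 case: zero below
    -beta*eta, affine gamma/eta + beta on the linear region, one above
    (1 - beta)*eta. np.clip realizes all three pieces at once."""
    return np.clip(gamma / eta + beta, 0.0, 1.0)

def h_opt(x, A, sigma_x, eta, beta):
    # overall SNDR-optimal response h(x) = A * g(x / sigma_x), Eq. (7)
    return A * g_opt(x / sigma_x, eta, beta)
```

A predistorter for a specific device nonlinearity f would then be chosen so that the cascade f∘w reproduces h_opt; that inversion step depends on the device model and is not shown here.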

### 3.2 Examples for selections of optimal parameters

In the last subsection, we learned that the optimal factors and can be calculated by solving two transcendental equations (21) and (22). However, there may not be closed-form expressions for the solutions. Additionally, solving (21) and (22) may result in multiple solutions, but we only keep the real-valued ones since all the signals here are real-valued.

Here, let us consider a specific class of input signals whose distributions exhibit axial symmetry, such as the uniform distribution and the Gaussian distribution. When the distribution of the input signal is axially symmetric, the optimal clipping regions L and U are also symmetric. Thus, C_1^{S⋆} = 0, C_0^{L⋆} = C_0^{U⋆} and 1 − C_0^{S⋆} = 2C_0^{U⋆}. Then the factors β⋆ and η⋆ can be calculated:

$$\beta^\star = \frac{C_0^{U\star} C_1^{U\star}}{C_0^{U\star} C_1^{U\star} + C_0^{L\star} C_1^{U\star}} = 0.5, \qquad (31)$$

$$\eta^\star = \frac{2 C_0^{U\star} C_1^{U\star}}{(C_0^{U\star})^2 + 2 C_0^{U\star}\,\sigma_v^2/A^2} = \frac{2 C_1^{U\star}}{C_0^{U\star} + 2\sigma_v^2/A^2}. \qquad (32)$$

We see that the DC bias will be the midpoint of the dynamic range. When the gain factor satisfies η⋆ > 0, (32) can be further expressed as:

$$\eta^\star = \frac{2\int_{0.5\eta^\star}^{+\infty} \gamma\, p(\gamma)\,d\gamma}{\int_{0.5\eta^\star}^{+\infty} p(\gamma)\,d\gamma + 2\sigma_v^2/A^2}. \qquad (33)$$

When the gain factor satisfies η⋆ < 0, it can be expressed as:

$$\eta^\star = \frac{2\int_{-\infty}^{0.5\eta^\star} \gamma\, p(\gamma)\,d\gamma}{\int_{-\infty}^{0.5\eta^\star} p(\gamma)\,d\gamma + 2\sigma_v^2/A^2}. \qquad (34)$$

There is still no closed-form expression for the gain factor η⋆ in general. Next, as examples, let us consider the calculations for the uniform distribution and the Gaussian distribution specifically.

###### Example 1

When the original signal is uniformly distributed, we infer that the normalized signal γ is uniformly distributed in the interval [−√3, √3] with the PDF

$$p(\gamma) = \begin{cases} \frac{1}{2\sqrt{3}}, & -\sqrt{3} \le \gamma \le \sqrt{3},\\ 0, & \text{otherwise}. \end{cases} \qquad (35)$$

For the case with η⋆ > 0, it is straightforward to calculate

$$C_1^{U\star} = \int_{0.5\eta^\star}^{\sqrt{3}} \frac{\gamma}{2\sqrt{3}}\,d\gamma = \frac{1}{4\sqrt{3}}\left(3 - \frac{1}{4}\eta^{\star 2}\right), \qquad (36)$$

$$C_0^{U\star} = \int_{0.5\eta^\star}^{\sqrt{3}} \frac{1}{2\sqrt{3}}\,d\gamma = \frac{\sqrt{3} - 0.5\eta^\star}{2\sqrt{3}}. \qquad (37)$$

Substituting (36) and (37) into (33), we obtain

$$\eta^\star = \frac{\frac{1}{2\sqrt{3}}\left(3 - \frac{1}{4}\eta^{\star 2}\right)}{\frac{\sqrt{3} - 0.5\eta^\star}{2\sqrt{3}} + 2\sigma_v^2/A^2}. \qquad (38)$$

Equation (38) can be rewritten as a quadratic equation

$$\eta^{\star 2} - \left(16\sqrt{3}\,\sigma_v^2/A^2 + 4\sqrt{3}\right)\eta^\star + 12 = 0. \qquad (39)$$

Thus, we can obtain a closed-form solution for the optimal η⋆:

$$\eta^\star = 8\sqrt{3}\,\sigma_v^2/A^2 + 2\sqrt{3} - 4\sqrt{12\,\sigma_v^4/A^4 + 6\,\sigma_v^2/A^2}. \qquad (40)$$

We know that there should be two solutions of equation (39). In fact, the other solution (with the plus sign before the square root) satisfies 0.5η⋆ ≥ √3, which means that both C_0^{U⋆} and C_1^{U⋆} are 0. Thus, the solution given by (40) is the unique optimal selection for the gain factor when η⋆ > 0. If η⋆ < 0 is desired, the optimal solution is

$$\eta^\star = -8\sqrt{3}\,\sigma_v^2/A^2 - 2\sqrt{3} + 4\sqrt{12\,\sigma_v^4/A^4 + 6\,\sigma_v^2/A^2}. \qquad (41)$$
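The closed-form solution (40) is easy to sanity-check against the quadratic (39). A minimal sketch, with s = σ_v²/A² as the only input (the value s = 0.1 is an arbitrary operating point):

```python
import math

def eta_star_uniform(s):
    """Closed-form optimal gain factor for a uniform input, Eq. (40);
    s = sigma_v^2 / A^2."""
    return (8 * math.sqrt(3) * s + 2 * math.sqrt(3)
            - 4 * math.sqrt(12 * s**2 + 6 * s))

# verify that eta_star satisfies the quadratic of Eq. (39)
s = 0.1
eta = eta_star_uniform(s)
residual = eta**2 - (16 * math.sqrt(3) * s + 4 * math.sqrt(3)) * eta + 12
```

The residual is zero up to floating-point rounding, confirming that (40) is the minus-sign root of (39).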
###### Example 2

When the original signal is Gaussian distributed, the normalized signal γ has a standard Gaussian distribution with the PDF

$$p(\gamma) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\gamma^2}. \qquad (42)$$

For the case with η⋆ > 0, we have

$$C_1^{U\star} = \int_{0.5\eta^\star}^{+\infty} \gamma\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\gamma^2}\,d\gamma = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{8}\eta^{\star 2}}, \qquad (43)$$

$$C_0^{U\star} = \frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\!\left(\frac{\eta^\star}{2\sqrt{2}}\right) \qquad (44)$$

where erf(·) is the error function with the definition

$$\mathrm{erf}(z) = \frac{1}{\sqrt{\pi}}\int_{-z}^{z} e^{-\gamma^2}\,d\gamma. \qquad (45)$$

Substituting (43) and (44) into (33) and simplifying, we obtain

$$\eta^\star\left(\frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\!\left(\frac{\eta^\star}{2\sqrt{2}}\right) + 2\sigma_v^2/A^2\right) = \frac{2}{\sqrt{2\pi}}\, e^{-\frac{1}{8}\eta^{\star 2}}. \qquad (46)$$

Here the optimal η⋆ does not have a closed-form expression but can easily be calculated numerically. We can draw a similar conclusion for the case with η⋆ < 0.
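For the Gaussian case, (46) can be solved numerically with a few lines of bisection (Python's standard library provides `math.erf`). The noise level σ_v²/A² = 0.05 used below is an arbitrary illustrative operating point:

```python
import math

def stationarity_gap(eta, s):
    # left side minus right side of Eq. (46); s = sigma_v^2 / A^2
    lhs = eta * (0.5 - 0.5 * math.erf(eta / (2 * math.sqrt(2))) + 2 * s)
    rhs = math.sqrt(2 / math.pi) * math.exp(-eta**2 / 8)
    return lhs - rhs

def eta_star_gaussian(s, lo=1e-9, hi=50.0, iters=100):
    # the gap is negative near eta = 0 and positive for large eta when s > 0,
    # so plain bisection converges to the root of Eq. (46)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if stationarity_gap(mid, s) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Note that for s = 0 (no noise) the left side stays below the right side for all finite η, consistent with the SNDR growing without bound as the noise vanishes.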

### 3.3 Numerical results

Fig. 6 shows the optimal η⋆ as a function of the DSNR for the above examples.

Next, we illustrate the SNDR of two different nonlinear mappings. g₁(γ) is the optimal solution chosen by Theorem 1. g₂(γ) is a fixed mapping given below:

$$g_2(\gamma) = \begin{cases} 0, & \gamma \le -0.4,\\ \gamma + 0.4, & -0.4 \le \gamma \le 0.6,\\ 1, & \gamma \ge 0.6. \end{cases} \qquad (47)$$

The corresponding SNDR curves are shown in Fig. 7. This example illustrates that the nonlinearity g₁(γ) yields a higher SNDR than the other nonlinearity, as expected according to Theorem 1.
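A comparison of this kind is easy to reproduce by evaluating the normalized SNDR (8) by Monte Carlo for both mappings. In the sketch below, σ_v²/A² = 0.05 and η⋆ ≈ 1.9 (an approximate numerical solution of (46) at that noise level) are illustrative assumptions rather than values from the figures:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = rng.normal(size=1_000_000)   # normalized (zero-mean, unit-variance) Gaussian input
s = 0.05                             # sigma_v^2 / A^2, an illustrative value

def sndr(g_vals, gamma, s):
    # normalized SNDR of Eq. (8), estimated by Monte Carlo
    num = np.mean(gamma * g_vals) ** 2
    return num / (np.var(g_vals) - num + s)

g1 = np.clip(gamma / 1.9 + 0.5, 0.0, 1.0)   # near-optimal limiter: eta* ~ 1.9, beta* = 0.5
g2 = np.clip(gamma + 0.4, 0.0, 1.0)         # fixed mapping of Eq. (47)
```

At this operating point the near-optimal limiter g1 scores a visibly higher SNDR than the fixed mapping g2, in line with Theorem 1.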

## 4 Relationship Between SNDR and Capacity

### 4.1 Lower Bound on Capacity

The capacity is given by

$$C = \max_{p_{x_o}} I(y_o; x_o) = \max_{p_x} I(y; x) \qquad (48)$$

where I(y; x) is the mutual information between y and x [18]. To obtain the capacity of the dynamic range constrained channel, we need to solve the following optimization problem:

$$\max_{p_x,\, h(\cdot)}\ I(y; x) \quad \text{s.t.}\ \ 0 \le h(\cdot) \le A \qquad (49)$$

for a specific zero-mean noise v with variance σ_v². Moreover, letting x_s = h(x), it can be simplified as:

$$\max_{p_{x_s}}\ I(x_s + v; x_s) \quad \text{s.t.}\ \ 0 \le x_s \le A \qquad (50)$$

which means that we need to find an input distribution on the interval [0, A] to maximize the mutual information. Specifically, when the noise is Gaussian, the problem is similar to Smith’s work in [17]. In this case, if the DSNR is low, the capacity is achieved by an equal pair of mass points at 0 and A; if the DSNR is high, the asymptotic capacity is the same as the information rate due to a uniformly distributed input on [0, A] [17].

However, in most cases, we are most interested in the achievable data rate given a nonlinear channel mapping with any input and any noise. Similar to the work in [7], we obtain a lower bound on the information rate:

$$I(y; x) \ge H(x) - \frac{1}{2}\log(2\pi e\,\sigma_x^2) + \frac{1}{2}\log\!\left(\frac{\sigma_y^2}{\sigma_y^2 - \sigma_{xy}^2/\sigma_x^2}\right) \qquad (51)$$

$$= H(x) - \frac{1}{2}\log(2\pi e\,\sigma_x^2) + \frac{1}{2}\log\!\left(\frac{\frac{A^2}{\sigma_v^2}\mathrm{var}[g(\gamma)] + 1}{\frac{A^2}{\sigma_v^2}\mathrm{var}[g(\gamma)] + 1 - \frac{A^2}{\sigma_v^2} E^2[\gamma g(\gamma)]}\right) = H(x) - \frac{1}{2}\log(2\pi e\,\sigma_x^2) + \frac{1}{2}\log(1 + \mathrm{SNDR}) \qquad (52)$$

by referring to (8). Since C ≥ I(y; x) for any input distribution p_x, by setting p_x to be the PDF of a zero-mean Gaussian r.v. (for which H(x) = ½ log(2πe σ_x²)), we obtain

$$C \ge \frac{1}{2}\log(1 + \mathrm{SNDR}) \qquad (53)$$

with the SNDR evaluated for a Gaussian x.

### 4.2 Upper Bound on Capacity

In this subsection, we find an upper bound on the capacity. Similar to [7], suppose p*_y is the PDF of y that maximizes the capacity, i.e.,

$$p_y^{*} = \arg\max_{p_y}\,[H(y) - H(y|x)]. \qquad (54)$$

We can write the capacity as

$$C = I(y; x)\big|_{p_y^{*}} = H(y)\big|_{p_y^{*}} - H(y|x) = H(y)\big|_{p_y^{*}} - H(v). \qquad (55)$$

Next, we bound the entropy H(y) by the entropy of a Gaussian random variable with the same variance, yielding

$$C \le \frac{1}{2}\log(2\pi e\,\sigma_y^2) - H(v) = \frac{1}{2}\log\!\left(1 + \frac{A^2\,\mathrm{var}[g(\gamma)]}{\sigma_v^2}\right) + \frac{1}{2}\log(2\pi e\,\sigma_v^2) - H(v) \le \frac{1}{2}\log\!\left(1 + \frac{A^2}{4\sigma_v^2}\right) + \frac{1}{2}\log(2\pi e\,\sigma_v^2) - H(v) \qquad (56)$$

where var[g(γ)] ≤ 1/4 since 0 ≤ g(·) ≤ 1. Specifically, if the noise is Gaussian, then H(v) = ½ log(2πe σ_v²) and we have the upper bound:

$$C \le \frac{1}{2}\log\!\left(1 + \frac{A^2}{4\sigma_v^2}\right). \qquad (57)$$

Since ½ log(1 + SNDR) ≤ C and C ≤ ½ log(1 + A²/(4σ_v²)), we must have

$$\mathrm{SNDR} = \frac{\alpha^2\sigma_x^2}{\varepsilon_d + \sigma_v^2} \le \frac{A^2}{4\sigma_v^2}. \qquad (58)$$

The quantity A²/(4σ_v²) is the DSNR defined herein, which is the same as that in [10].
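The two capacity bounds (53) and (57) are straightforward to evaluate; a small sketch using base-2 logarithms (bits per channel use):

```python
import math

def capacity_lower_bound(sndr):
    # Eq. (53): C >= 0.5 * log2(1 + SNDR), SNDR evaluated for a Gaussian input
    return 0.5 * math.log2(1.0 + sndr)

def capacity_upper_bound(dsnr):
    # Eq. (57): C <= 0.5 * log2(1 + DSNR), with DSNR = A^2 / (4 sigma_v^2)
    return 0.5 * math.log2(1.0 + dsnr)
```

Consistent with (58), SNDR ≤ DSNR guarantees that the lower bound never exceeds the upper bound.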

### 4.3 Example of Bounds

Since the SNDR is determined by the DSNR and the distribution of the signal, we plot the bounds as functions of the DSNR for a Gaussian distributed signal, as shown in Fig. 8. We also compare the lower bounds given by the two different nonlinear mappings g₁ and g₂, which were introduced in the last section. This example illustrates that the nonlinearity chosen according to Theorem 1 yields a tighter lower bound as compared to the other nonlinearity. In addition, we can see that the capacity of the Gaussian channel as determined by Smith [17] lies between our lower bounds and upper bound.

## 5 Conclusion

The main contribution of this paper is the SNDR optimization within the family of dynamic range constrained memoryless nonlinearities. We showed that, under the dynamic range constraint, the optimal nonlinear mapping that maximizes the SNDR is a double-sided limiter with a particular gain and a particular bias level, which are determined by the distribution of the input signal and the DSNR. In addition, we found that ½ log(1 + SNDR) provides a lower bound on the nonlinear channel capacity, and ½ log(1 + DSNR) serves as the upper bound. The results of this paper can be applied for optimal linearization of nonlinear components and efficient transmission of signals with double-sided clipping.


## 6 Proof of Lemma 1

Since we are solving the optimization problem with respect to a function, the functional derivative is introduced here [7][19]. Using the Dirac delta function as a test function, the functional derivative is defined as:

$$\frac{\delta F[g(\gamma)]}{\delta g(\gamma_0)} = \lim_{\epsilon \to 0} \frac{F[g(\gamma) + \epsilon\,\delta(\gamma - \gamma_0)] - F[g(\gamma)]}{\epsilon}. \qquad (59)$$

Just as for the ordinary derivative, linearity, the product rule and the chain rule hold for the functional derivative. In addition, from (59), we infer that

$$\frac{\delta g(\gamma)}{\delta g(\gamma_0)} = \delta(\gamma - \gamma_0), \qquad (60)$$

$$\frac{\delta g^2(\gamma)}{\delta g(\gamma_0)} = 2 g(\gamma)\,\delta(\gamma - \gamma_0). \qquad (61)$$

To maximize the SNDR with respect to g(·), we need

$$\frac{\delta\,\mathrm{SNDR}}{\delta g(\gamma_0)} = 0, \quad \forall\, \gamma_0 \in S. \qquad (62)$$

We infer that

$$E[g(\gamma)] = E[I_L(\gamma) g(\gamma)] + E[I_S(\gamma) g(\gamma)] + E[I_U(\gamma) g(\gamma)] = E[I_S(\gamma) g(\gamma)] + E[I_U(\gamma)] = E[I_S(\gamma) g(\gamma)] + C_0^U. \qquad (63)$$

Similarly,

$$E[\gamma g(\gamma)] = E[I_S(\gamma)\,\gamma g(\gamma)] + C_1^U, \qquad (64)$$

$$E[g^2(\gamma)] = E[I_S(\gamma)\, g^2(\gamma)] + C_0^U. \qquad (65)$$

C_0^U and C_1^U are defined as in (17). It follows easily that

$$C_0^L + C_0^S + C_0^U = 1, \qquad (66)$$

$$C_1^L + C_1^S + C_1^U = 0 \qquad (67)$$

and

$$C_0^L,\ C_0^S,\ C_0^U \ge 0. \qquad (68)$$

Substituting (63), (64) and (65) into (8) gives

$$\mathrm{SNDR} = \frac{N[g(\gamma)]}{D[g(\gamma)]} \qquad (69)$$

where

$$Q[g(\gamma)] = E[I_S(\gamma)\,\gamma g(\gamma)] + C_1^U, \qquad (70)$$

$$N[g(\gamma)] = Q^2[g(\gamma)], \qquad (71)$$

$$Y[g(\gamma)] = E[I_S(\gamma)\, g(\gamma)] + C_0^U, \qquad (72)$$

$$D[g(\gamma)] = E[I_S(\gamma)\, g^2(\gamma)] + C_0^U + \frac{\sigma_v^2}{A^2} - Q^2[g(\gamma)] - Y^2[g(\gamma)]. \qquad (73)$$

Denote by p(γ) the PDF of the random variable γ. Then

 E[IS(γ)g2(γ)]=∫IS(γ)g2(γ)p(γ)d