Upper Bound on the Capacity of Discrete-Time Wiener Phase Noise Channels

Luca Barletta and Gerhard Kramer
Institute for Communications Engineering
Technische Universität München
D-80333 Munich, Germany
{luca.barletta, gerhard.kramer}@tum.de
Abstract

A discrete-time Wiener phase noise channel with an integrate-and-dump multi-sample receiver is studied. An upper bound to the capacity with an average input power constraint is derived, and a high signal-to-noise ratio (SNR) analysis is performed. If the oversampling factor grows as SNR^α for 0 ≤ α ≤ 1, then the capacity pre-log is at most (1 + α)/2 at high SNR.

I Introduction

Instabilities of the oscillators used for up- and down-conversion of signals in communication systems give rise to the phenomenon known as phase noise [1]. The impairment to system performance can be severe even for high-quality oscillators if the continuous-time waveform is processed by long filters at the receiver side. This is the case, for example, when the symbol time is very long, as happens when using orthogonal frequency division multiplexing. A study of the signal-to-noise ratio (SNR) penalty induced by filtering of a white phase noise process was recently carried out in [2], where it is shown that the best projection receiver suffers an SNR loss that depends on the phase noise statistics.

Typically, the phase noise generated by oscillators is a random process with memory, and this makes the analysis of the capacity challenging. The phase noise is usually modeled as a Wiener process, as it turns out to be accurate in describing the phase noise statistics of certain lasers used in fiber-optic communications [3], and of free-running microwave oscillators [1]. Tight numerical bounds on the information rate of discrete-time phase noise channels with memory are given in [4, 5, 6, 7], while analytical results on single-user Wiener phase noise channels are given in [8, 9, 10, 11, 12] where it is shown that even weak phase noise becomes the limiting factor at high SNR.

In [11] an achievable rate region for the discrete-time Wiener phase noise channel with an integrate-and-dump oversampling receiver was derived. For the same channel and receive filter, in this paper we develop an upper bound to the capacity and characterize the pre-log at high SNR. (The pre-log is the factor in front of log(SNR) in the high-SNR expansion of the capacity.)
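For intuition, a pre-log can be read off numerically from capacity (or capacity-bound) values at two large SNRs via a finite difference. The sketch below uses made-up capacity values purely for illustration; it is not part of the paper's derivation.

    import numpy as np

    # Minimal sketch: estimating a capacity pre-log from two capacity values.
    # The capacity values below are hypothetical placeholders, not results of this paper.
    def prelog_estimate(C1, C2, snr1, snr2):
        """Finite-difference estimate of the pre-log: dC / d(log SNR)."""
        return (C2 - C1) / (np.log(snr2) - np.log(snr1))

    snr1, snr2 = 1e4, 1e6
    C1, C2 = 0.75 * np.log(snr1) + 1.0, 0.75 * np.log(snr2) + 1.0  # made-up values
    print(prelog_estimate(C1, C2, snr1, snr2))  # approximately 0.75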

The paper is organized as follows. The system model for the continuous-time channel is described in Sec. II, along with a simplification that leads to the discrete-time model under consideration. The upper bound to the capacity is derived in Sec. III, and the results are discussed in Sec. IV. Conclusions are drawn in Sec. V.

Notation: Capital letters denote random variables or random processes. The notation X_m^n = (X_m, X_{m+1}, …, X_n) with m ≤ n is used for random vectors. With N(0, σ²) we denote the probability distribution of a real Gaussian random variable with zero mean and variance σ². The symbol =_d means equality in distribution.

The symbol [x]_2π denotes the reduction of x modulo 2π, and the binary operator ⊕ denotes summation modulo 2π.

Given a complex random variable X, we use the notation |X| and ∠X to denote the amplitude and the phase of X, respectively.

The operators E[·], h(·), and I(·;·) denote expectation, differential entropy, and mutual information, respectively.

The function log(·) denotes the natural logarithm of its argument.

II System model

In this Section we describe how to obtain a discrete-time version of the continuous-time channel, and we point out the main assumption that leads to the simplified model analyzed in Sec. III.

The output of a continuous-time phase noise channel can be written as

(1)   Y(t) = X(t) e^{jΘ(t)} + W(t)

where X(t) is the data-bearing input waveform and W(t) is a circularly symmetric complex white Gaussian noise. The phase process Θ(t) is given by

(2)

where the driving process is a standard Wiener process, i.e., a process characterized by the following properties:

  • it starts at zero at time t = 0,

  • for any 0 ≤ s < t, the increment over (s, t] is Gaussian with zero mean and variance t - s, and is independent of the sigma algebra generated by the process up to time s,

  • it has continuous sample paths almost surely.

One can think of the Wiener phase process as an accumulation of white noise:

(3)

where the integrand is a standard white Gaussian noise process.
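For intuition, a sample path of such an accumulated-noise phase process can be generated on a fine time grid. The following sketch uses assumed values for the grid step and the phase noise strength; the symbols dt and sigma are illustrative, not the paper's notation.

    import numpy as np

    # Sketch: the Wiener phase accumulates white Gaussian noise.
    # dt and sigma are illustrative assumptions, not parameters from the paper.
    rng = np.random.default_rng(0)
    dt, sigma, n_steps, n_paths = 1e-3, 0.1, 1000, 2000

    increments = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    theta = np.cumsum(increments, axis=1)      # each path starts at zero

    # For a Wiener process the variance grows linearly in time: Var = sigma^2 * t.
    print(theta[:, -1].var(), sigma**2 * dt * n_steps)  # empirical vs. theoretical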

II-A Signals and Signal Space

Suppose the input signal is in the set of finite-energy signals on the transmission interval. Let {φ_n} be an orthonormal basis of this set. We may write

(4)

where

(5)

is the complex conjugate of the corresponding basis function, and the noise projections are independent and identically distributed (iid), complex-valued, circularly symmetric, Gaussian random variables with zero mean and unit variance.

The projection of the received signal onto the n-th basis function is

(6)
(7)
(8)

The set of equations given by (8) for all n can be interpreted as the output of an infinite-dimensional multiple-input multiple-output channel, whose fading channel matrix is induced by the phase noise process.
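Numerically, these projections are inner products of the received waveform with the basis functions. The sketch below approximates such projections on a fine grid for a generic orthonormal basis of rectangular pulses; the waveform, grid, and basis size are assumptions chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    dt, T, n_basis = 1e-4, 1.0, 8          # fine grid, interval length, basis size (assumed)
    t = np.arange(0.0, T, dt)

    # Orthonormal rectangular basis on [0, T): each phi_k has support of width T/n_basis
    # and height 1/sqrt(width), so that <phi_k, phi_m> = delta_{km}.
    width = T / n_basis
    pts = t.size // n_basis
    phi = np.zeros((n_basis, t.size))
    for k in range(n_basis):
        phi[k, k * pts:(k + 1) * pts] = 1.0 / np.sqrt(width)

    gram = phi @ phi.T * dt                # numerical Gram matrix, close to identity
    print(np.allclose(gram, np.eye(n_basis)))

    # Projection of a noisy received waveform onto the basis functions.
    r = np.exp(1j * 2 * np.pi * t) + 0.1 * (rng.standard_normal(t.size)
                                            + 1j * rng.standard_normal(t.size))
    r_proj = (phi * r).sum(axis=1) * dt    # integral of r(t) times (real) phi_k(t)
    print(r_proj.shape)                    # one coefficient per basis function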

II-B Receivers with Finite Time Resolution

Consider a receiver whose time resolution is limited to Δ seconds, in the sense that every projection must include at least a Δ-second interval. More precisely, we set Δ equal to the symbol duration divided by the oversampling factor L, i.e., the number of samples per symbol, and we let n denote the number of independent symbols transmitted in the observation interval. The integrate-and-dump receiver with resolution time Δ uses the basis functions

(9)   φ_k(t) = 1/√Δ  for (k-1)Δ ≤ t < kΔ,  and  φ_k(t) = 0  otherwise,

for k = 1, …, nL. With the choice (9), the fading channel matrix is diagonal and the channel's output for each k is

(10)
(11)

where we have used the notation and . In (10) we have used (2), the property , the substitution

(12)

and the property . Finally, in step  we have used the substitution .

Since the oversampling factor is L and the basis functions are rectangular in the time domain, the input coefficients are constant within each symbol interval, and we can write the model (11) as

(13)

for k = 1, …, nL.
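A fine-grid simulation can illustrate how the oversampled outputs arise: the received waveform is integrated over consecutive Δ-second intervals and scaled by 1/√Δ, i.e., projected onto basis functions of the form (9). All parameter values, the pulse shape, and the noise normalization below are assumptions for illustration, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative assumptions.
    n_sym, L = 4, 8              # symbols and oversampling factor
    Delta = 0.05                 # receiver resolution time
    grid = 200                   # fine-grid points per Delta-interval
    dt = Delta / grid
    sigma = 0.5                  # phase noise strength

    # Piecewise-constant input: one complex symbol held over L consecutive Delta-intervals.
    symbols = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)
    x_fine = np.repeat(symbols, L * grid)

    # Wiener phase noise and complex white Gaussian noise on the fine grid.
    theta_fine = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(x_fine.size))
    w_fine = np.sqrt(0.5 / dt) * (rng.standard_normal(x_fine.size)
                                  + 1j * rng.standard_normal(x_fine.size))

    r_fine = x_fine * np.exp(1j * theta_fine) + w_fine

    # Integrate-and-dump: project r(t) onto rectangles of width Delta and height 1/sqrt(Delta).
    Y = r_fine.reshape(n_sym * L, grid).sum(axis=1) * dt / np.sqrt(Delta)
    print(Y.shape)               # n_sym * L oversampled channel outputs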

The input, additive noise, and phase noise vectors are independent of each other. The input variables are chosen to be identically distributed with zero mean and variance P, and the average power constraint is

(14)   (1/n) Σ_{i=1}^{n} E[ |X_i|² ] ≤ P.

Since we set the power spectral density of the additive noise to 1, the power P is also the SNR, i.e., SNR = P.
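As a small sanity check, inputs satisfying such an average power constraint can be drawn as iid circularly symmetric Gaussians; with the additive noise normalized to unit power per sample, the empirical SNR is just the empirical average input power. The values below are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    P, n = 10.0, 100_000         # average power constraint and block length (assumed values)

    # iid circularly symmetric complex Gaussian inputs with E|X_i|^2 = P.
    X = np.sqrt(P / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    print((np.abs(X) ** 2).mean())   # empirical average power, close to P = SNR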

Using (3), the phase samples follow a discrete-time Wiener process:

(15)

where the ’s are iid Gaussian variables with zero mean and variance . The fading variables ’s are complex-valued and iid, and is independent of . In other words, is correlated only to , and is independent of the vector .
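A minimal sketch of the discrete-time Wiener phase recursion follows; the per-sample increment variance is an assumed placeholder, since its exact value depends on the resolution time Δ.

    import numpy as np

    rng = np.random.default_rng(4)
    n_samples = 1000
    var_inc = 0.01     # assumed per-sample increment variance (it scales with Delta)

    # Discrete-time Wiener phase: Theta_k = Theta_{k-1} + N_k with N_k iid N(0, var_inc).
    N = np.sqrt(var_inc) * rng.standard_normal(n_samples)
    Theta = np.cumsum(N)

    # The channel sees the phase only through exp(j*Theta), which is 2*pi-periodic,
    # so the phase may equivalently be tracked modulo 2*pi.
    Theta_wrapped = np.mod(Theta, 2 * np.pi)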

Note that for any finite time resolution, or equivalently for any finite oversampling factor L, the vector of projections does not represent a sufficient statistic for the detection of the input given the output of the model (1). In other words, the finite time resolution receiver is generally suboptimal.

In this paper we study a simplified model, where the fading variables are all one, i.e., we have

(16)   Y_k = X_{⌈k/L⌉} e^{jΘ_k} + W_k,   k = 1, …, nL.

This is a commonly-studied model, e.g., see [11, 13], and it is referred to as the discrete-time Wiener phase noise channel. The complete model (13) is harder to analyze than the model (16), because in the former the dependency between the fading variables and the phase noise must be addressed. On the other hand, if the oversampling factor grows unbounded, then each fading random variable converges to one; this suggests that the analysis of the model (16) can give insights into the analysis of model (13) for receivers with high time resolution.
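A minimal simulation sketch of the simplified model (16) follows, assuming iid Gaussian inputs, unit-variance additive noise per sample, and a placeholder value for the phase-increment variance; these normalizations and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative assumptions: block length, oversampling factor, SNR, and the
    # per-sample phase-increment variance are placeholder values.
    n, L, P, var_inc = 1000, 4, 100.0, 0.01

    # iid inputs with average power P, each symbol repeated over its L samples.
    X = np.sqrt(P / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    X_over = np.repeat(X, L)

    # Discrete-time Wiener phase and unit-variance circularly symmetric AWGN per sample.
    Theta = np.cumsum(np.sqrt(var_inc) * rng.standard_normal(n * L))
    W = (rng.standard_normal(n * L) + 1j * rng.standard_normal(n * L)) / np.sqrt(2)

    # Simplified discrete-time Wiener phase noise channel: all fading variables equal one.
    Y = X_over * np.exp(1j * Theta) + W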

III Upper bound on capacity

We compute an upper bound to the capacity of the discrete-time Wiener phase noise channel (16). For notational convenience, we use the following indexing for the inputs and the outputs:

(17)

and we group the L output samples associated with each input symbol into one vector.
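In a simulation, this grouping is just a reshape of the length-nL output vector into an n-by-L array; a minimal sketch with assumed dimensions:

    import numpy as np

    # Sketch: group the oversampled outputs into per-symbol vectors (n and L assumed).
    n, L = 1000, 4
    Y = np.zeros(n * L, dtype=complex)     # placeholder for the n*L channel outputs
    Y_grouped = Y.reshape(n, L)            # row k holds the L samples of symbol k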

The capacity is defined as

(18)

where the supremum is taken over the distributions of the input sequence such that the average power constraint (14) is satisfied.

The mutual information rate can be upper-bounded as follows:

(19)

where step  holds by a data processing inequality and because is independent of , because is conditionally independent of given , follows by stationarity of the processes, and  by polar decomposition of and the chain rule.

For the amplitude channel, i.e., the first term in the right-hand side (RHS) of (19), we have

(20)

where  holds by a data processing inequality and because is independent of , holds due to the circular symmetry of the ’s, because is independent of any other quantity, because the processed variable is a sufficient statistic for the detection of , and  is an upper bound to the capacity of a non-coherent channel under an average power constraint [8, Eq. (16)] where represents a function independent of that vanishes for .

For the phase channel, i.e., the second term on the RHS of (19), we have

(21)
(22)

where in step  we bound by the information extracted by a genie-aided receiver that knows the additive noise , is obtained by deleting the amplitude contribution of , holds because forms a Markov chain, is obtained by applying reversible transformations, holds because the random variables are independent of any other quantity, holds by choosing a uniform distribution in for , and the last inequality is derived in the Appendix with

(23)

Suppose the oversampling factor grows as a power of the SNR, i.e., L = SNR^α for 0 ≤ α ≤ 1. Inserting (20) and (22) into (19) and using (18) yields

(24)

which for large SNR gives

(25)

IV Discussion

As a byproduct of (25), an upper bound to the capacity pre-log is

(26)   pre-log ≤ (1 + α)/2,   0 ≤ α ≤ 1.

As shown in the previous Section, a pre-log of 1/2 comes from the amplitude channel, while a contribution of α/2 comes from the phase channel. For example, if no oversampling is used (L = 1), one can let α go to zero and obtain just the degrees of freedom provided by the amplitude channel, i.e., a pre-log of 1/2. This means that, without oversampling, the Wiener phase noise channel has the same degrees of freedom as the non-coherent channel. This is in accordance with the result given in [8].
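The two contributions can be tabulated as a function of α; the sketch below assumes the closed forms discussed above (upper bound (1 + α)/2, achievable pre-log saturating at 3/4) and simply evaluates them.

    import numpy as np

    # Sketch of the pre-log bounds discussed above, assuming the upper bound has the
    # form (1 + alpha)/2 and the achievable pre-log saturates at 3/4 for alpha >= 1/2
    # (amplitude channel: 1/2, phase channel: up to 1/4).
    alpha = np.linspace(0.0, 1.0, 5)
    upper = (1 + alpha) / 2
    lower = np.minimum((1 + alpha) / 2, 0.75)
    for a, u, l in zip(alpha, upper, lower):
        print(f"alpha = {a:.2f}:  upper bound = {u:.3f},  lower bound = {l:.3f}")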

If the oversampling factor grows as √SNR, i.e., for α = 1/2, then a pre-log higher than 3/4 can not be achieved. Indeed, a pre-log as high as 3/4 can be achieved with the processing described in [11]: the amplitude channel contributes with pre-log 1/2 by using the statistic

(27)

to detect the amplitude of the input symbol, and the phase channel contributes with pre-log 1/4 by using the processing

(28)

to detect the phase of the input symbol.
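A hedged sketch of multi-sample processing in this spirit is given below: the amplitude of a symbol is estimated from the energies of its L samples, and the phase from phase differences of consecutive samples. These statistics are plausible stand-ins for illustration and are not claimed to be the exact expressions in (27) and (28).

    import numpy as np

    def amplitude_statistic(y_block):
        """Energy collected over the L samples of one symbol (illustrative stand-in)."""
        return np.sum(np.abs(y_block) ** 2)

    def phase_statistic(y_block):
        """Phase accumulated from products of consecutive samples (illustrative stand-in)."""
        return np.angle(np.sum(y_block[1:] * np.conj(y_block[:-1])))

    # Usage on a hypothetical block of L oversampled outputs belonging to one symbol.
    rng = np.random.default_rng(6)
    y_block = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    print(amplitude_statistic(y_block), phase_statistic(y_block))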

Fig. 1: Capacity pre-log bounds as a function of α for high SNR. The oversampling factor is L = SNR^α.

Figure 1 plots the known upper and lower bounds to the capacity pre-log at high SNR. The upper bound is the result of this paper, expressed in (26). The lower bound is based on results derived in [11]. More specifically:

  • the lower bound for the amplitude channel, shown as the dashed black line, was derived independently of the growth rate of the oversampling factor;

  • for the phase channel it was shown how to achieve pre-log 1/4 for α = 1/2, hence the same pre-log can be achieved for any α ≥ 1/2. It is not difficult to use the results of [11] to extend the lower bound to the range 0 ≤ α ≤ 1/2. It turns out that the achievable pre-log linearly increases from 1/2 to 3/4.

From the figure, the capacity pre-log is exactly known in the range 0 ≤ α ≤ 1/2, where the upper and lower bounds agree. The upper bound derived in this paper does not rule out the possibility of achieving a pre-log higher than 3/4 if the oversampling factor grows faster than √SNR.

Consider the case of receivers without oversampling (L = 1, i.e., α = 0). The analysis of Sec. III shows that the simplified model (16) has capacity pre-log 1/2, while the general discrete-time model that also accounts for the amplitude fading (13) shows a log log(SNR) behavior at high SNR [14], i.e., pre-log 0. This means that, at least in the case L = 1, the simplified model is not a good approximation of the complete model.

V Conclusions

We have derived an upper bound to the capacity of discrete-time Wiener phase noise channels. As a byproduct, we have obtained an upper bound to the capacity pre-log at high SNR that depends on the growth rate of the oversampling factor used at the receiver. If the oversampling factor grows proportionally to SNR^α with 0 ≤ α ≤ 1, then a capacity pre-log higher than (1 + α)/2 can not be achieved.

Previous results on a lower bound to the capacity pre-log allow us to state that the capacity pre-log at high SNR is exactly (1 + α)/2 for 0 ≤ α ≤ 1/2.

A lower bound to

The probability density function of is

(29)

for and zero elsewhere, and can be upper-bounded as follows for :

(30)

where step  follows by using for the terms with and for the terms with . Inequality  holds because for . The differential entropy of can be lower-bounded as follows:

(31)

where inequality  is due to (30) and the monotonicity of the logarithm, and the last inequality is due to . A lower bound to the second moment of is:

(32)

where

(33)

is the error function. Since all the terms of the summation are positive, the inequality follows by considering only the term for . An upper bound to the last expectation on the RHS of (31) is

(34)

The bound (31) together with inequalities (32) and (34) gives

(35)

Also, note that , so we have that the bound (35) is tight for small , i.e.,

(36)

Acknowledgment

L. Barletta was supported by the Technische Universität München - Institute for Advanced Study, funded by the German Excellence Initiative. G. Kramer was supported by an Alexander von Humboldt Professorship endowed by the German Federal Ministry of Education and Research.

References

  • [1] A. Demir, A. Mehrotra, and J. Roychowdhury, “Phase noise in oscillators: a unifying theory and numerical methods for characterization,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, no. 5, pp. 655–674, May 2000.
  • [2] L. Barletta and G. Kramer, “On continuous-time white phase noise channels,” in IEEE Int. Symp. Inf. Theory (ISIT), June 2014, pp. 2426–2429.
  • [3] G. Foschini and G. Vannucci, “Characterizing filtered light waves corrupted by phase noise,” IEEE Trans. Inf. Theory, vol. 34, no. 6, pp. 1437–1448, Nov 1988.
  • [4] L. Barletta, M. Magarini, and A. Spalvieri, “Estimate of information rates of discrete-time first-order Markov phase noise channels,” IEEE Photon. Technol. Lett., vol. 23, no. 21, pp. 1582–1584, 2011.
  • [5] ——, “The information rate transferred through the discrete-time Wiener’s phase noise channel,” J. Lightwave Technol., vol. 30, no. 10, pp. 1480–1486, 2012.
  • [6] ——, “Tight upper and lower bounds to the information rate of the phase noise channel,” in IEEE Int. Symp. Inf. Theory (ISIT), 2013, pp. 2284–2288.
  • [7] L. Barletta, M. Magarini, S. Pecorino, and A. Spalvieri, “Upper and lower bounds to the information rate transferred through first-order Markov channels with free-running continuous state,” IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 3834–3844, July 2014.
  • [8] A. Lapidoth, “On phase noise channels at high SNR,” in IEEE Inf. Theory Workshop, 2002, pp. 1–4.
  • [9] A. Barbieri and G. Colavolpe, “On the information rate and repeat-accumulate code design for phase noise channels,” IEEE Trans. Commun., vol. 59, no. 12, pp. 3223–3228, 2011.
  • [10] H. Ghozlan and G. Kramer, “On Wiener phase noise channels at high signal-to-noise ratio,” in IEEE Int. Symp. Inf. Theory (ISIT), 2013, pp. 2279–2283.
  • [11] ——, “Phase modulation for discrete-time Wiener phase noise channels with oversampling at high SNR,” in IEEE Int. Symp. Inf. Theory (ISIT), 2014.
  • [12] L. Barletta and G. Kramer, “Signal-to-noise ratio penalties for continuous-time phase noise channels,” in Int. Conf. on Cognitive Radio Oriented Wirel. Networks (CROWNCOM’14), June 2014, pp. 232–235.
  • [13] M. Martalò, C. Tripodi, and R. Raheli, “On the information rate of phase noise-limited communications,” in Inf. Theory and Appl. Workshop (ITA), Feb 2013, pp. 1–7.
  • [14] A. Lapidoth and S. Moser, “Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels,” IEEE Trans. Inf. Theory, vol. 49, no. 10, pp. 2426–2467, 2003.