Generative Adversarial Estimation of Channel Covariance in Vehicular Millimeter Wave Systems
Enabling highly-mobile millimeter wave (mmWave) systems is challenging because of the huge training overhead associated with acquiring the channel knowledge or designing the narrow beams. Current mmWave beam training and channel estimation techniques do not normally make use of prior beam training or channel estimation observations. Intuitively, though, the channel matrices are functions of the various elements of the environment. Learning these functions can dramatically reduce the training overhead needed to obtain the channel knowledge. In this paper, a novel solution that exploits machine learning tools, namely conditional generative adversarial networks (GANs), is developed to learn these functions between the environment and the channel covariance matrices. More specifically, the proposed machine learning model treats the covariance matrices as 2D images and learns the mapping function relating the uplink received pilots, which act as RF signatures of the environment, to these images. Simulation results show that the developed strategy efficiently predicts the covariance matrices of the large-dimensional mmWave channels with negligible training overhead.
Millimeter wave (mmWave) communication is a promising technology for applications that demand high data rates with high mobility, such as vehicular communications and wireless virtual/augmented reality. Enabling these highly-mobile applications, though, requires developing efficient techniques that acquire the large-dimensional mmWave channels with low training overhead. Since estimating the full channel matrices every coherence time may not be feasible in these highly-mobile systems, a reasonable approach is to obtain long-term channel statistics such as the spatial channel covariance, which can then be leveraged for both the channel estimation and the precoder design [2, 3].
Prior work on mmWave channel covariance estimation leveraged the sparse nature of the channels and developed compressive sensing based techniques [4, 3]. While these techniques can generally reduce the training overhead compared to exhaustive search solutions, this overhead is still large for large array systems and scales with the number of antennas. Further, compressive estimation techniques normally make hard assumptions on the exact sparsity of the channels which render their practical feasibility questionable.
In this paper, we propose a novel solution for highly-mobile mmWave systems that leverages deep learning tools to efficiently learn and predict the mmWave channel covariance matrices. The key idea of this solution is to treat the covariance matrices as 2D images on which GANs can be trained to learn the important features of these images. More specifically, following [1], the developed solution requires the mobile user to transmit only one uplink training sequence that gets received jointly by multiple base stations (BSs) using omni beam patterns, i.e., with negligible training overhead. These received training signals represent an RF signature of both the environment and the transmitter/receiver locations. A conditional GAN is then leveraged to learn the implicit mapping function between the received training signals and a sparse representation of the channel covariance matrix. Simulation results, based on accurate ray tracing, show that the proposed solution can efficiently predict large-dimensional mmWave channel covariance matrices with small mean-squared errors.
Notation: We use the following notation throughout this paper: $\mathbf{A}$ is a matrix, $\mathbf{a}$ is a vector, $a$ is a scalar, and $\mathcal{A}$ is a set. $|\mathbf{A}|$ is the determinant of $\mathbf{A}$, whereas $\mathbf{A}^T$, $\mathbf{A}^H$ are its transpose and Hermitian (conjugate transpose). $\mathcal{CN}(\mathbf{m}, \mathbf{R})$ is a complex Gaussian random vector with mean $\mathbf{m}$ and covariance $\mathbf{R}$. $\mathbb{E}[\cdot]$ is used to denote expectation.
II System and Channel Models
In this section, we describe the adopted mmWave system and channel models.
II-A System Model
Consider the mmWave communication system shown in Fig. 1, where $N$ base stations (BSs) are simultaneously serving one mobile user. Each BS is equipped with $M$ antennas forming a uniform linear array (ULA), and the user has only one antenna. The BSs are assumed to be connected to each other so that they can share the uplink training signals received from the mobile user. For simplicity, we assume that each BS has only one RF chain and applies analog-only combining via a network of phase shifters during the uplink transmission.
Considering a wideband OFDM system with $K$ subcarriers, the uplink training symbol $s_k$ at subcarrier $k$ is transformed to the time domain by a $K$-point IFFT. A cyclic prefix is then added to the symbol block to generate the transmit signal. Let $\mathbf{h}_{n,k}$ denote the uplink channel vector between the user and the $n$th BS at the $k$th subcarrier; the post-combining received signal at subcarrier $k$ at BS $n$ can then be expressed as
$$y_{n,k} = \mathbf{w}_n^H \mathbf{h}_{n,k} s_k + \mathbf{w}_n^H \mathbf{n}_{n,k}, \qquad (1)$$
where $\mathbf{w}_n$ is the analog combiner at BS $n$, and $\mathbf{n}_{n,k} \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I})$ is the Gaussian noise corrupting the received signal.
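As an illustrative sketch of this post-combining model, the snippet below applies one analog combiner (a single phase shifter per antenna, constant-modulus entries) to all subcarriers of a random channel at one BS. The array size, subcarrier count, pilot values, and noise level are assumptions of this sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 32, 64                            # antennas per BS, subcarriers (assumed)
# random frequency-domain channel vectors h_k, one column per subcarrier
h = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s = np.ones(K, dtype=complex)            # known unit-power uplink pilots
noise = 0.01 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
# analog-only combining: one phase shifter per antenna (constant modulus)
w = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)) / np.sqrt(M)
# post-combining received signal y_k = w^H h_k s_k + w^H n_k, for all k at once
y = w.conj() @ (h * s + noise)
assert y.shape == (K,)                   # one complex scalar per subcarrier
```

Note that the single RF chain is what forces the scalar-per-subcarrier output: the $M$-dimensional antenna observation is collapsed by $\mathbf{w}_n^H$ before sampling.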
II-B Channel Model
A wideband geometric channel model with $L$ clusters is adopted for our mmWave system. In this model, each of the $L$ clusters contributes one ray with a time delay $\tau_\ell$ and an angle of arrival (AoA) $\theta_\ell$. If $p(\tau)$ denotes the pulse shaping function, the delay-$d$ channel vector between the user and the $n$th BS can be written as
$$\mathbf{h}_{n,d} = \sqrt{\frac{M}{\rho_n}} \sum_{\ell=1}^{L} \alpha_\ell \, p(d T_s - \tau_\ell) \, \mathbf{a}(\theta_\ell), \qquad (2)$$
where $\mathbf{a}(\theta_\ell)$ is the array response vector at the AoA $\theta_\ell$, $\rho_n$ denotes the path-loss between BS $n$ and the user, $\alpha_\ell$ is the complex gain of the $\ell$th path, and $T_s$ is the sampling time. Given the time domain channel in (2), the frequency domain channel vector at subcarrier $k$ can be expressed as follows:
$$\mathbf{h}_{n,k} = \sum_{d=0}^{D-1} \mathbf{h}_{n,d} \, e^{-j \frac{2\pi k}{K} d}, \qquad (3)$$
where $D$ is the cyclic prefix length.
Finally, the spatial channel covariance matrix is expressed as $\mathbf{R}_n = \mathbb{E}\left[\mathbf{h}_{n,k} \mathbf{h}_{n,k}^H\right]$.
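To make this covariance construction concrete, the following sketch draws a random multi-path channel in the spirit of the geometric model above, builds its frequency-domain vectors, and replaces the expectation with an average over subcarriers. The half-wavelength ULA response, integer-tap delays (idealized pulse shaping), and all sizes are simplifying assumptions for illustration:

```python
import numpy as np

def array_response(theta, M):
    # ULA response at AoA theta (radians); half-wavelength spacing assumed
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

def freq_channels(M, L, K, rng):
    # Draw a random L-path channel: each path has an AoA, an integer-tap
    # delay, and a complex gain; each delay tap maps to a complex
    # exponential across the K subcarriers (the DFT in the channel model).
    thetas = rng.uniform(-np.pi / 2, np.pi / 2, L)
    delays = rng.integers(0, K // 4, L)
    gains = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    H = np.zeros((M, K), dtype=complex)
    for l in range(L):
        phase = np.exp(-2j * np.pi * np.arange(K) * delays[l] / K)
        H += gains[l] * np.outer(array_response(thetas[l], M), phase)
    return H

rng = np.random.default_rng(0)
M, L, K = 32, 5, 64
H = freq_channels(M, L, K, rng)
# sample spatial covariance: the expectation is replaced by an
# average over the K subcarriers
R = (H @ H.conj().T) / K
```

By construction $R$ is Hermitian and positive semi-definite, which is what lets the later sections treat it (after the virtual-domain transform) as a structured 2D image.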
III Problem Definition
The main objective of this paper is to develop an efficient solution for mmWave channel covariance estimation that requires very low training overhead. More specifically, considering the system model in Section II-A, our problem is to efficiently estimate the channel covariance $\mathbf{R}_n$ at the $n$th BS given the concatenated training sequence $\mathbf{y}$ defined as
$$\mathbf{y} = \left[\mathbf{y}_1^T, \mathbf{y}_2^T, \dots, \mathbf{y}_N^T\right]^T, \qquad (4)$$
with $\mathbf{y}_n = \left[y_{n,1}, \dots, y_{n,K}\right]^T$, which is collected from all the $N$ coordinating BSs.
The major challenge of estimating the channel covariance in mmWave systems is the large channel dimensions due to the large number of antennas. This results in very high training overhead, which becomes a limiting factor for the operation of highly-mobile mmWave systems such as vehicular communications and wireless virtual/augmented reality. Prior work [4, 3] attempted to reduce this training overhead by relying on compressive sensing tools, leveraging the observation that mmWave channel estimation can be formulated as a sparse reconstruction problem. However, compressive sensing based solutions do not take advantage of the available history of previous channel covariance observations. Further, these techniques still require stacking many channel samples to efficiently estimate the channel covariance, which makes them incapable of supporting highly-mobile mmWave applications. In the next section, we present our proposed solution that leverages deep learning tools to address this mmWave channel covariance estimation problem.
IV Proposed GAN-Based Approach
In this section, we present our machine learning based mmWave channel covariance prediction algorithm. First, we explain the main idea in Section IV-A, before delving into a detailed description of the developed solution and the machine learning modeling in Sections IV-B and IV-C.
IV-A The Main Idea
The key challenge of estimating the channel covariance in highly-mobile mmWave applications is the large training overhead in time, due to the large number of antennas at the transmitters and/or the receivers. Prior research directions in mmWave channel (and channel covariance) estimation repeat the estimation process every time the channel (or channel covariance) changes, and do not make use of the previous observations of this estimation process. Intuitively, though, the channel and channel covariance matrices are functions of the various elements of the environment, including the transmitter/receiver locations and the scatterer positions. The challenge is that these functions are difficult to characterize analytically, as they normally involve many physical interactions and are unique to every environmental setup. Therefore, we propose to leverage the powerful capability of deep learning models to learn this mapping function and enable predicting the mmWave channel covariance matrix given a few features of the channel that are easy to estimate with low training overhead.
Omni-received signals: In [1], the authors showed that when the uplink training pilots are received simultaneously by multiple distributed base stations using omni or quasi-omni antenna patterns, these omni-received signals draw a rich multipath signature of the user location and its interaction with the surrounding environment. This is very interesting as no beam training is needed to acquire these omni-received signals, which dramatically reduces the training overhead. Inspired by this observation, we adopt the model in which the uplink training pilots are received via only omni patterns, and train the machine learning model to learn the mapping between these omni-received signals and the channel covariance matrix. Mathematically, the omni-received signals are captured by the vector $\mathbf{y}$ in (4) when the combining vectors are set to $\mathbf{w}_n = [1, 0, \dots, 0]^T$, i.e., by activating only one receive antenna element at every BS.
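A minimal sketch of assembling this omni-received signature follows, assuming 4 coordinating BSs and 64 subcarriers for illustration (the "received" values here are random placeholders for the per-subcarrier scalars each BS observes through its single active antenna):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 4, 64                             # coordinating BSs, subcarriers (assumed)
# each BS contributes its K per-subcarrier scalars y_{n,k}; with one active
# antenna no beam training is needed to acquire them
per_bs = [rng.standard_normal(K) + 1j * rng.standard_normal(K) for _ in range(N)]
y = np.concatenate(per_bs)               # stacked signature from all N BSs
# real/imaginary split fed to the learning model as one real-valued input
x = np.concatenate([y.real, y.imag])
assert x.shape == (2 * N * K,)           # 512 real inputs for N=4, K=64
```

The point of the stacking is that one short uplink pilot yields a fixed-length, environment-dependent feature vector at essentially zero beam-training cost.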
Factorized channel covariance: The mmWave channel covariance matrix normally has large dimensions when employing a large number of antennas at the transmitters/receivers. Further, each entry in this covariance matrix can generally take any complex value. This makes it hard for machine learning models to efficiently predict the channel covariance matrix. Leveraging the sparse nature of mmWave channels (the existence of only a few dominant paths in the channel), we propose to factorize the channel covariance matrix using the virtual channel model. Noting that $L$ in (2) is a finite (and typically small) number, the channel model in (2) can be written as
$$\mathbf{h}_{n,d} = \mathbf{F} \, \mathbf{g}_{n,d}, \qquad (5)$$
where $\mathbf{F}$ is an $M \times M$ unitary DFT matrix, with the $m$th column equal to the normalized array response vector $\frac{1}{\sqrt{M}} \mathbf{a}(\bar{\theta}_m)$ at the $m$th quantized angle $\bar{\theta}_m$. Note that $\mathbf{g}_{n,d}$ is a sparse vector with approximately $L$ nonzero elements corresponding to the $L$ paths of the channel. Similarly, the frequency domain channel vector at subcarrier $k$ can be written as
$$\mathbf{h}_{n,k} = \mathbf{F} \, \mathbf{g}_{n,k}, \qquad (6)$$
with $\mathbf{g}_{n,k} = \sum_{d=0}^{D-1} \mathbf{g}_{n,d} \, e^{-j \frac{2\pi k}{K} d}$. Finally, the spatial channel covariance, $\mathbf{R}_n$, can be factorized as
$$\mathbf{R}_n = \mathbf{F} \, \mathbf{R}_{g,n} \, \mathbf{F}^H, \qquad (7)$$
where $\mathbf{R}_{g,n} = \mathbb{E}\left[\mathbf{g}_{n,k} \mathbf{g}_{n,k}^H\right]$. In this paper, we will refer to $\mathbf{R}_{g,n}$ as the virtual channel covariance. Note that once this virtual channel covariance is estimated, the channel covariance $\mathbf{R}_n$ can be directly constructed following (7).
It is important to note here that the virtual channel covariance matrix, $\mathbf{R}_{g,n}$, is normally a sparse matrix with only a few non-zero entries. Therefore, it is much easier to learn the mapping from the omni-received uplink signature to the virtual channel covariance matrix than to the original channel covariance matrix, $\mathbf{R}_n$. Consequently, the objective of the machine learning model will be to predict the virtual channel covariance matrix given the omni-received signal.
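The virtual-domain factorization can be sanity-checked numerically: for a single path that falls exactly on a DFT beam, the virtual covariance has one nonzero entry, and the spatial covariance is recovered exactly from it. The array size and beam index below are illustrative:

```python
import numpy as np

M = 32
F = np.fft.fft(np.eye(M)) / np.sqrt(M)   # unitary M x M DFT dictionary
g = np.zeros(M, dtype=complex)
g[5] = 1.0                               # one on-grid path, on DFT beam 5
h = F @ g                                # spatial channel vector
R = np.outer(h, h.conj())                # spatial covariance (rank one, dense)
Rg = F.conj().T @ R @ F                  # virtual covariance: F^H R F
# the dense M x M spatial covariance collapses to a single nonzero entry
assert np.count_nonzero(np.abs(Rg) > 1e-9) == 1
# and the spatial covariance is reconstructed exactly as F Rg F^H
assert np.allclose(F @ Rg @ F.conj().T, R)
```

With $L$ off-grid paths the virtual covariance is only approximately sparse (energy leaks into neighboring beams), but it remains far more concentrated than the dense spatial covariance, which is what makes it the easier learning target.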
IV-B GAN-Based Channel Covariance Prediction
Our method is inspired by high-resolution image reconstruction, a task at which GANs perform well. In our task, we interpret the virtual channel covariance matrix, $\mathbf{R}_{g,n}$, as a 2D image. To make sure that the virtual channel covariance matrix generated by the GAN is directly related to the low-dimensional omni-received signature, we adopt a conditional GAN architecture [8, 9]. More specifically, in the learning phase, a generative model $G$ takes the omni-received pilot signals $\mathbf{y}$ and a random vector $\mathbf{z}$ as inputs, and generates an estimated virtual channel covariance matrix $\widehat{\mathbf{R}}_{g,n}$. The discriminative model $D$ then estimates the probability that its input virtual covariance matrix (generated by the generative model) is a real one, given the dataset. The overall loss function conditioned on $\mathbf{y}$ is defined as
$$\min_G \max_D V(D, G) = \mathbb{E}\left[\log D\left(\mathbf{R}_{g,n} \mid \mathbf{y}\right)\right] + \mathbb{E}_{\mathbf{z}}\left[\log\left(1 - D\left(G(\mathbf{z} \mid \mathbf{y}) \mid \mathbf{y}\right)\right)\right]. \qquad (8)$$
After the model is trained, we use the generator $G$ to directly predict the virtual channel covariance matrix given the uplink omni-received pilots $\mathbf{y}$.
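The adversarial objective can be sketched as a simple scoring function over discriminator outputs. This is a minimal NumPy illustration of the conditional GAN value function, not the authors' TensorFlow implementation; the non-saturating generator loss shown is a standard GAN training trick rather than something specified in the paper:

```python
import numpy as np

def gan_value(d_real, d_fake):
    # V(D, G) = E[log D(real | y)] + E[log(1 - D(fake | y))], where the
    # arrays hold D's probability outputs for a batch of real and
    # generated covariance "images", both conditioned on the pilots y
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def d_loss(d_real, d_fake):
    # the discriminator ascends V, i.e. minimizes -V
    return -gan_value(d_real, d_fake)

def g_loss(d_fake):
    # non-saturating generator loss: maximize log D(fake) instead of
    # minimizing log(1 - D(fake)), for stronger early-training gradients
    return -np.mean(np.log(d_fake))

# at the equilibrium where D outputs 0.5 everywhere, V = -2 log 2
d = np.full(8, 0.5)
assert np.isclose(gan_value(d, d), -2.0 * np.log(2.0))
```

The conditioning enters through the networks themselves: both $D$ and $G$ receive $\mathbf{y}$ as an extra input, so the equilibrium generator matches the covariance distribution per pilot signature rather than globally.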
Note that adopting this GAN structure is motivated by the dimensions of the output virtual channel covariance matrix, which are much higher than those of the input training sequences. This is a scenario where traditional multi-layer perceptron networks may not work well. In the following subsection, we describe the adopted conditional GAN network architecture in more detail.
IV-C Network Architecture
The considered network architecture, which is composed of a generator network and a discriminator network, is depicted in Fig. 2. The generator network is denoted as $G$ while the discriminator is represented by $D$. Here, $d_z$ denotes the dimension of the noise input to $G$, and $d_y$ and $d_R$ are the dimensions of the training sequences and the covariance matrix, respectively. We input both the real and imaginary parts of the omni-received sequence $\mathbf{y}$. In the example of Fig. 2, we consider a setup with $N = 4$ coordinating BSs and $K$ subcarriers. Therefore, the size of the input, accounting for both the real and imaginary parts, equals $2NK$, in addition to a noise vector of size $d_z = 100$.
In the generator $G$, we first generate a random noise vector $\mathbf{z}$, then concatenate the training sequences to $\mathbf{z}$. Following this, the estimation process is a deconvolutional network. By feed-forwarding the concatenated input, $G$ generates an estimated virtual channel covariance matrix $\widehat{\mathbf{R}}_{g,n}$. Therefore, the estimated virtual covariance matrix is generated through the estimation process of $G$ conditioned on the training sequences and the random input vector. Note that the example in Fig. 2 assumes that each BS is equipped with a 32-element ULA. Thus, the covariance matrices have dimensions $32 \times 32$.
In the discriminator $D$, several layers of stride-2 convolution with spatial batch normalization followed by leaky ReLU are applied. Then, a conditional CNN layer is added (see the example of Fig. 2) and the training sequences are concatenated in the depth dimension. Finally, a convolution followed by a rectification is performed to make the final output conditioned on the training sequence. Note that the conditional CNN layers combine the predicted virtual channel covariance matrix and its corresponding training sequences in a straightforward way by concatenating them and filtering the concatenated result. Such a structure can merge the information carried by both the covariance matrix and the training sequence [11, 9], which enhances the dependence of the discriminator's output on the training sequences.
V Simulation Results
In this section, we describe in detail the simulation setup including the channel models, datasets generation, and GAN model parameters, as well as the simulation results.
System and channel models: We adopt the mmWave system and channel models of Section II, with the channel parameters (angles of arrival, path loss, etc.) generated using the commercial ray-tracing simulator Wireless InSite [12]. The system considers 4 BSs serving one mobile user over the 60 GHz band. The 4 BSs are installed on 4 lamp posts on both sides of a street, as shown in Fig. 3. The 4 lamp posts are located at the corners of a 30 m (x-axis, along the street) by 20 m (y-axis, across the street) rectangle. Each BS has a ULA with $M$ antennas along the y-axis and is installed at a height of 6 m. For simplicity, the vehicular user is equipped with a single antenna at a height of 1 m.
To simulate the channels between the BSs and the user at different locations in the street, we consider an x-y rectangular grid of candidate antenna/user locations. For every candidate location, an uplink training signal is transmitted by the mobile user and is received simultaneously by the 4 BSs using both an omni pattern (one active antenna) and the ULA. The omni-received signals are concatenated to form the omni-received uplink training sequence, $\mathbf{y}$, which is used as an input to the machine learning model. The signals received by the ULA are used to construct the virtual channel covariance matrices, $\mathbf{R}_{g,n}$, which are the outputs of the machine learning model. Note that every candidate user location results in one point in the machine learning dataset, consisting of a training sequence and a virtual channel covariance matrix. The number of candidate antenna/user locations then determines the size of the dataset. The different channel parameters (AoAs, path gains, etc.) are generated by the ray-tracing simulator and are used to construct the virtual channel covariance matrices in MATLAB. We then train the GAN network at BS 1, which is placed at the top-left corner of the rectangular street area, to predict the channel covariance matrix at this BS given the training sequence, $\mathbf{y}$, collected from the 4 coordinating BSs.
Machine learning model: In the GAN network architecture, we set the noise dimension to $d_z = 100$. Therefore, the dimension of the input, which concatenates the random vector and the real/imaginary training sequence, equals $2NK + 100$. We treat the virtual channel covariance matrix as a gray image with dimensions $M \times M$. Further, a normalization is performed such that both the omni-received sequences (the inputs) and the virtual covariance matrices (the outputs) are normalized by the maximum absolute value of their elements. Alternating steps of updating the generator and the discriminator networks are used. The learning rate is set to 0.0002. We use the ADAM optimizer with momentum 0.5 and a batch size of 256. Finally, the network is trained for 200 epochs. Our machine learning model was implemented in TensorFlow.
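The max-absolute-value normalization described above can be sketched as follows; the function name and the batch-first layout are assumptions of this illustration, not the authors' code:

```python
import numpy as np

def max_abs_normalize(batch):
    # Scale each sample in the batch by the maximum absolute value of its
    # own elements, so every input/output entry lies in [-1, 1]
    flat = np.abs(batch).reshape(batch.shape[0], -1)
    scale = flat.max(axis=1).reshape(-1, *([1] * (batch.ndim - 1)))
    return batch / scale

# two 1-D samples; each row is divided by its own largest magnitude
x = np.array([[3.0, -6.0], [0.5, 0.25]])
xn = max_abs_normalize(x)
assert np.allclose(xn, [[0.5, -1.0], [1.0, 0.5]])
```

Bounding both the pilot signatures and the covariance "images" to the same range keeps the generator's output scale and the discriminator's input scale compatible, which is the usual motivation for this kind of per-sample normalization.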
Performance evaluation: To evaluate the performance of the proposed solution for predicting the mmWave virtual channel covariance matrices, we use the normalized mean square error (NMSE), defined as
$$\mathrm{NMSE} = \mathbb{E}\left[\frac{\left\|\widehat{\mathbf{R}}_{g,n} - \mathbf{R}_{g,n}\right\|_F^2}{\left\|\mathbf{R}_{g,n}\right\|_F^2}\right]. \qquad (9)$$
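A minimal implementation of this metric for a single predicted/true covariance pair (in evaluation it would be averaged over the test set) can be written as:

```python
import numpy as np

def nmse(R_hat, R):
    # squared Frobenius-norm error, normalized by the true matrix's
    # squared Frobenius norm
    num = np.linalg.norm(R_hat - R, 'fro') ** 2
    den = np.linalg.norm(R, 'fro') ** 2
    return num / den

R = np.eye(4, dtype=complex)
assert nmse(R, R) == 0.0                 # perfect prediction
assert np.isclose(nmse(1.1 * R, R), 0.01)  # 10% amplitude error -> NMSE 0.01
```

The normalization by $\|\mathbf{R}_{g,n}\|_F^2$ makes the metric invariant to the overall channel power, so results are comparable across user locations with very different path losses.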
In Fig. 4, the average NMSE of the predicted virtual channel covariance matrices is plotted versus the size of the training dataset for different sizes of the BS antenna array. This figure shows that the performance of the proposed solution improves with larger array sizes. This is intuitive because we treat the channel covariance matrices as images and use CNNs in the discriminator to exploit their multipath features; CNNs are normally able to extract more features from larger images, which is the case with larger antenna arrays in our problem.
Since we treat the virtual channel covariance matrices as images, it is interesting to visually compare the original and estimated virtual covariance matrices. In Fig. 5, we plot the real parts of the original and estimated covariance matrices at BS 1. The white points on the main diagonal of the images represent strong received paths. The brightness of the points in the off-diagonal positions illustrates the level of correlation among the different paths. For example, the brightness of entry $(i, j)$ reflects the strength of the correlation between the $i$th and $j$th paths.
Note that we only considered the 5 strongest paths for every BS-user channel when generating the dataset via the ray-tracing simulator. Therefore, the covariance matrices have a small number of paths (bright points). Fig. 5 shows that our GAN model can successfully estimate the virtual channel covariance matrices for different cases: when only one strong path exists (Fig. 5 (a)), two paths exist (Fig. 5 (b), (c)), and more than two paths exist (Fig. 5 (d), (e)).
VI Conclusion
In this paper, we developed a novel mmWave channel covariance estimation/prediction solution based on recent deep learning techniques. The proposed solution learns the mapping between the uplink signals, received simultaneously at multiple BSs using only omni patterns, and the covariance matrices. This solution, therefore, requires negligible time overhead in estimating the channel covariance matrices. In our machine learning model, we treat the covariance matrices as images and leverage conditional generative adversarial networks to learn the important features of these images. Simulation results, based on accurate ray-tracing and practical deployment scenarios, showed that the developed deep learning based solution efficiently predicts the mmWave channel covariance matrices with small mean-squared errors. In future work, it would be interesting to extend the current results to multi-user scenarios and to cases where both the BSs and the mobile users are equipped with antenna arrays.
References
- [1] A. Alkhateeb, S. Alex, P. Varkey, Y. Li, Q. Qu, and D. Tujkovic, "Deep learning coordinated beamforming for highly-mobile millimeter wave systems," IEEE Access, vol. 6, pp. 37 328–37 348, 2018.
- [2] A. Adhikary, E. A. Safadi, M. K. Samimi, R. Wang, G. Caire, T. S. Rappaport, and A. F. Molisch, "Joint spatial division and multiplexing for mm-wave channels," IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1239–1255, June 2014.
- [3] S. Park and R. W. Heath Jr., "Spatial channel covariance estimation for the hybrid MIMO architecture: A compressive sensing based approach," arXiv preprint arXiv:1711.04207, 2017.
- [4] J. Lee, G.-T. Gil, and Y. H. Lee, "Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications," IEEE Transactions on Communications, vol. 64, no. 6, pp. 2370–2386, 2016.
- [5] R. W. Heath, N. González-Prelcic, S. Rangan, W. Roh, and A. M. Sayeed, "An overview of signal processing techniques for millimeter wave MIMO systems," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 3, pp. 436–453, April 2016.
- [6] A. Alkhateeb and R. W. Heath, "Frequency selective hybrid precoding for limited feedback millimeter wave systems," IEEE Transactions on Communications, vol. 64, no. 5, pp. 1801–1818, 2016.
- [7] A. Alkhateeb, O. El Ayach, G. Leus, and R. Heath, "Channel estimation and hybrid precoding for millimeter wave cellular systems," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 5, pp. 831–846, Oct. 2014.
- [8] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv preprint arXiv:1411.1784, 2014.
- [9] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, "Generative adversarial text to image synthesis," arXiv preprint arXiv:1605.05396, 2016.
- [10] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, http://www.deeplearningbook.org.
- [11] S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee, "Learning what and where to draw," in Advances in Neural Information Processing Systems, 2016, pp. 217–225.
- [12] Remcom, "Wireless InSite," http://www.remcom.com/wireless-insite.