Adaptive Sampling of RF Fingerprints for Fine-grained Indoor Localization
Indoor localization is a supporting technology for a broadening range of pervasive wireless applications. One promising approach is to locate users with radio frequency (RF) fingerprints. However, its wide adoption in real-world systems is challenged by the time- and manpower-consuming site survey process, which builds a fingerprint database a priori for localization. To address this problem, we model the 3-D RF fingerprint data, indexed by location (x-y) and access point, as a tensor, and use tensor algebraic methods for adaptive tubal-sampling of this fingerprint space. In particular, using a recently proposed tensor algebraic framework in , we capture the complexity of the fingerprint space as a low-dimensional tensor-column space. In this formulation, the proposed scheme exploits adaptivity to identify reference points that are highly informative for learning this low-dimensional space. Further, under certain incoherence conditions, we prove that the proposed scheme achieves bounded recovery error and near-optimal sampling complexity. In contrast to several existing works that rely on random sampling, this paper shows that adaptivity in sampling can lead to significant improvements in localization accuracy. The approach is validated both on data generated by a ray-tracing indoor model, which accounts for the floor plan and the impact of walls, and on real-world data. Simulation results show that, while maintaining the same localization accuracy as existing approaches, the number of samples can be cut down by for the high SNR case and by for the low SNR case.
The availability of real-time, high-accuracy location awareness in indoor environments is a key enabler for a wide range of pervasive wireless applications, such as pervasive healthcare  and smart-space interactive environments [3, 4]. More recently, location-based services have been deployed in airports, shopping malls, supermarkets, stadiums, office buildings, and homes [5, 6]. The economic impact of the indoor localization market is forecast by ABI Research to reach  billion in 2018 .
Indoor localization systems generally follow three approaches, viz., the cell-based approach, the model-based approach, and the fingerprint-based approach. In the cell-based approach [8, 9], a user’s location is given by the access point to which it is connected, and the localization error depends on the communication range and the distances between access points. This approach, however, can only provide coarse-grained localization. On the other hand, model-based schemes [10, 11] exploit angle of arrival (AoA), time or time difference of arrival (ToA, TDoA), or received signal strength (RSS) from access points. However, such model-based approaches are essentially limited by the following factors: 1) low transmission power, 2) high attenuation caused by walls and furniture, 3) complicated surface reflections, and 4) unpredictable dynamics such as disturbances caused by human movement.
We will focus on the fingerprint-based approach, which can be classified into three categories, namely, RF (radio frequency) fingerprint-based, non-RF fingerprint-based, and cross-technology-based. For example, RADAR  is considered the first RF fingerprint-based system. Google Indoor Map  and WiFiSLAM  are two widely used industrial apps; the former uses RF fingerprints, while the latter also incorporates user trajectories and inertial (accelerometer) information. IndoorAtlas  utilizes the magnetic field map, and SurroundSense  exploits ambient attributes (sound, light, etc.), including RF fingerprints. The advantages of the RF fingerprint-based approach include: 1) it is a passive approach that exploits the WiFi access points already present in most buildings, and thus needs no extra infrastructure deployment; 2) RSS values are provided by off-the-shelf WiFi- or ZigBee-compatible devices; 3) the flourishing smartphone market indicates its upcoming wide use; and 4) it assumes no radio propagation model and is thus more practical than the model-based approach.
In this paper, we consider the RF fingerprint-based approach. It is a two-phase approach, with a training phase (site survey) and an operating phase (location query). The fingerprints obtained are averaged over time to counteract the effects of fading, and they usually change in a building only when walls and/or furniture change. We assume that the fingerprint data remains statistically stable, i.e., the mean RSS values from the access points do not change rapidly, and the variation around the mean is small and can be captured as a small additive noise. In the training phase, an engineer uses a smartphone to record RF fingerprints within a region of interest. A fingerprint database is then built at the server, in which each fingerprint is associated with the corresponding reference point. In the operating phase, a user submits a location query with her current fingerprint, and the server responds by matching the query fingerprint against candidate reference points in the database.
However, wide adoption of the RF fingerprint-based approach is challenged by the time- and manpower-consuming site survey process, as the engineer needs to sample a large number of reference points wherever the designer expects significant changes of fingerprints. For example, a region of m m covers  reference points with grid size m m. If a reference point takes about  seconds (including moving to this point and measuring a stable fingerprint), then the site survey process takes about  hours. Setting the grid size to a finer granularity exacerbates this problem. At the end of 2014, Google Indoor Map 6.0 provided indoor localization and navigation only at some of the largest retailers, airports, and transit stations in the U.S. and Japan , and its expansion is constrained by the limited amount of fingerprint data of building interiors.
Recently, many works have aimed to relieve the site survey burden; we broadly classify them into correlation-aware, crowdsourcing-based, and sparsity-aware approaches. The correlation-aware approach leverages the fact that fingerprints at nearby reference points are spatially correlated. For example,  utilized the kriging interpolation method, while  adopted kernel functions (continuous and differentiable discriminant functions) and proposed an approach based on discriminant minimization search. Such schemes use linear or non-linear functions to model the correlations; they are essentially model-based approaches and face the same limiting factors discussed above.
The crowdsourcing-based approach removes the offline site survey and instead relies on users’ online cooperation. For example, LiFS  leverages user reports to establish a one-to-one mapping between RF fingerprints and a digital floor plan map. However, a digital map is not always available. To overcome this, Jigsaw  exploited image processing techniques to reconstruct floor plans from reported pictures, and Zee  leveraged inertial sensors (e.g., accelerometer, compass, gyroscope) to track users as they traverse an indoor environment while simultaneously performing WiFi scans. Since the crowdsourcing-based approach is fundamentally the application of machine learning techniques to large-scale data sets, it is effective only when users’ reports sufficiently cover the whole space of human activity, and it thus requires a large number of users to cooperate with the server. Furthermore, it imposes high energy consumption on smartphones.
Recent methods for efficient RF fingerprinting [22, 23, 24, 25, 26, 27] assume that the matrix of RSS values across channels/access points and locations is low rank, and nuclear norm minimization  under random or deterministic non-adaptive element-wise sampling constraints is used for data completion. In , the authors assume that the signals are fast varying, model the signal variation as a first-order dynamical system, and give approaches for dynamic matrix completion. Below we summarize the main contributions of our work and contrast them with the approaches taken so far.
I-A Summary of Contributions
In this paper our main technical contributions are as follows.
We model the RF fingerprint data as a low rank tensor using the notion of tensor rank proposed in [1, 30]. This is in contrast to existing works [22, 23, 24, 25, 26, 27] that assume a low rank matrix model. Furthermore, our algebraic framework is different from the traditional multilinear algebraic framework for tensor decompositions  that has been considered so far in the literature for problems of completing multidimensional arrays [32, 33, 34] with different notions for tensor rank.
We propose an adaptive sampling strategy that leads to a dramatic improvement in localization accuracy for the same sample complexity. Our approach adaptively samples a small subset of all reference points, based on which the fingerprint data is then reconstructed using tensor completion. In this context, a major difference from existing low-rank tensor completion [31, 35, 30, 36, 37] is that our sampling strategy is vector-wise, while existing low-rank tensor completion deals with entry-wise sampling. This is because when a measurement is taken at a location, data is obtained for all access points simultaneously.
We also derive theoretical performance bounds and show that it is possible to recover the data under weaker conditions than required for the non-adaptive random sampling considered in . This is further evidenced by our numerical simulations on (i) a software model using concepts of ray tracing and supervised learning with real data [38, 39], which accounts for shadowing by walls, wave-guiding effects in corridors due to multiple reflections, and diffractions around vertical wedges, and (ii) real data. We observe orders-of-magnitude improvements in completion performance over existing methods; see Figs. 6 and 7.
Applying the completed data for localization also results in orders-of-magnitude improvement in localization accuracy over other competing methods (see Figs. 8, 9, and 10). Our approach of exploiting adaptivity is therefore efficient in collecting measurements while maximizing localization accuracy.
The remainder of the paper is organized as follows. In Section II, we present the system model and the problem statement. Section III describes a random sampling approach as a baseline. Section IV provides details of our approach while the performance guarantees are given in Section V. Simulation results are presented in Section VI. Detailed proofs are given in the Appendix, and concluding remarks are made in Section VII.
II System Model and Problem Statement
A third-order tensor is represented by a calligraphic letter, denoted as , and its -th entry is . A tube (or fiber) of a tensor is a 1-D section defined by fixing all indices but one; thus a tube is a vector. In this paper, we use a tube to denote a fingerprint at reference point . Similarly, a slice of a tensor is a 2-D section defined by fixing all but two indices. Frontal, lateral, and horizontal slices are denoted as , respectively. We use a lateral slice to denote the  fingerprint matrix for the -th column of the grid map.
is a tensor obtained by taking the Fourier transform along the third mode of , i.e., . In MATLAB notation, , and one can also compute from via .
The transpose of tensor is the tensor obtained by transposing each of the frontal slices and then reversing the order of transposed frontal slices through , i.e., for , (the transpose of matrix ). For two tubes , denotes the circular convolution between these two vectors.
The block diagonal matrix is defined by placing the frontal slices in the diagonal, i.e., .
The algebraic development in  rests on defining a tensor-tensor product between two 3-D tensors, referred to as the t-product as defined below.
t-product. The t-product of and is a tensor of size whose -th tube is given by , for and .
Owing to the relation between the circular convolution and Discrete Fourier Transform, we note the following remark that is used throughout the paper.
For and , we have .
A third-order tensor of size  can be viewed as an  matrix of tubes oriented along the third dimension. The t-product of two tensors can thus be regarded as matrix multiplication, except that multiplication of two numbers is replaced by circular convolution of two tubes. Further, this allows one to treat 3-D tensors as linear operators over 2-D matrices, as analyzed in . Using this perspective one can define an SVD-type decomposition, referred to as the tensor-SVD or t-SVD . To define the t-SVD we introduce a few definitions.
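As a concrete illustration (our own sketch, not the authors' code), the t-product can be computed slice-wise in the Fourier domain, since circular convolution of tubes becomes element-wise multiplication after an FFT along the third mode:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Equivalent to matrix multiplication with scalar products replaced
    by circular convolutions of tubes; computed here slice-by-slice in
    the Fourier domain.
    """
    n1, n2, n3 = A.shape
    assert B.shape[0] == n2 and B.shape[2] == n3
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((n1, B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))
```

With this in hand, the identity tensor defined below (first frontal slice equal to the identity matrix, all others zero) acts as a multiplicative identity under the t-product.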
Identity tensor. The identity tensor is a tensor whose first frontal slice is the identity matrix and all other frontal slices are zero.
Orthogonal tensor. A tensor is orthogonal if it satisfies .
The inverse of a tensor is written as and satisfies .
f-diagonal tensor. A tensor is called f-diagonal if each frontal slice of the tensor is a diagonal matrix, i.e., for .
t-SVD. A tensor , can be decomposed as , where and are orthogonal tensors of sizes and respectively, i.e. and and is a rectangular f-diagonal tensor of size .
Tensor tubal-rank. The tensor tubal-rank of a third-order tensor is the number of non-zero fibers of in the t-SVD.
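A minimal, self-contained sketch of the computation (our illustration): take a matrix SVD of each frontal slice in the Fourier domain, mirror the factors to preserve the conjugate symmetry of a real tensor's FFT, and transform back. The tensor tubal-rank is then the number of non-zero diagonal tubes of the f-diagonal factor.

```python
import numpy as np

def t_svd(X):
    """t-SVD of a real tensor X (n1 x n2 x n3): X = U * S * V^T under
    the t-product, via per-slice SVDs in the Fourier domain."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(Xf[:, :, k])
        Uf[:, :, k], Vf[:, :, k] = u, vh.conj().T
        np.fill_diagonal(Sf[:, :, k], s)
        if 0 < k < (n3 + 1) // 2:
            # mirrored slice keeps the inverse FFT real
            Uf[:, :, n3 - k] = u.conj()
            Vf[:, :, n3 - k] = vh.T
            np.fill_diagonal(Sf[:, :, n3 - k], s)
    to_real = lambda T: np.real(np.fft.ifft(T, axis=2))
    return to_real(Uf), to_real(Sf), to_real(Vf)

def tubal_rank(S, tol=1e-10):
    """Number of non-zero tubes on the diagonal of the f-diagonal S."""
    r = min(S.shape[0], S.shape[1])
    return int(sum(np.linalg.norm(S[j, j, :]) > tol for j in range(r)))
```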
In this framework, the principle of dimensionality reduction follows from the following result from . (Note that  in the t-SVD is organized in decreasing order, i.e., , which is implicit in  since the algorithm for computing the t-SVD is based on the matrix SVD. Therefore, the best rank- approximation of tensors is analogous to PCA (principal component analysis).)
Best rank- approximation. Let the t-SVD of be given by and for define , then , where .
We now define the notion of tensor-column space. Under the t-SVD in Definition 1, the tensor-column subspace of  is the space spanned by the lateral slices of  under the t-product, i.e., the set generated by -linear combinations of the form , where  denotes the tensor tubal-rank. We are now ready to present our main system model and assumptions.
II-A System Model
Suppose that the region of interest is a rectangle . We divide  into an  grid map, with each grid cell of the same size; the grid points are called reference points. Let  denote the grid map, which has  reference points in total. Within , there are  randomly deployed access points. We neither have access to or control over these access points, nor know their exact locations. The engineer uses a smartphone to measure the RSS values from these access points. Each reference point is associated with a received signal strength (RSS) vector , called a fingerprint, where  is the RSS value of the -th access point. If the signal of the -th access point cannot be detected, its RSS value is set to the noise level, assumed to be dBm, dBm [18, 19]. The fingerprint database stores the coordinates of all reference points and their corresponding RF fingerprints.
We use a third-order tensor  to represent the RSS map of , with t-SVD . As mentioned in the Introduction, the RSS values are highly correlated across space for each access point and also across access points. We model this correlation by assuming that  has low tensor tubal-rank , in the sense defined above via the t-SVD.
We now outline the problem statement in detail. Note that as pointed out before, our sampling strategy will be to adaptively sample RF fingerprints, which correspond to the tensor fibers along the third dimension. See Figure 2.
II-B Problem Statement
Let denote the sampling budget, i.e., we are allowed to sample reference points. We model the site survey process as the following partial observation model:
where the -th entry of  equals  if  and zero otherwise, with  a subset of the grid map of size , and  an  tensor with i.i.d.  elements representing additive Gaussian noise. Since the engineer usually averages the recorded RSS values to obtain a stable fingerprint during the site survey, while users want a quick response from the server, the noise in the query process is much higher than that in the site survey samples .
To cut down the site survey burden, we measure the RSS values of a small subset of reference points and then estimate  from the samples . Two facts can be exploited: the prior information that the tensor  is low-tubal-rank, and that the estimate  should equal  on the set . Therefore, we estimate  by solving the following optimization problem:
where is the decision variable, refers to the tensor tubal-rank, is the sampling budget, and is a regularization parameter. This approach aims to seek the simplest explanation fitting the samples.
Clearly the performance depends on the type of sampling strategy used and the algorithm for solving the optimization problem. In this paper we will consider two kinds of sampling, namely, uniform random tubal-sampling and adaptive tubal-sampling.
III Random Tubal-Sampling Approach
Before introducing the adaptive sampling approach, we first present a non-adaptive sampling approach, i.e., tensor completion via uniform tubal-sampling. We would like to point out again that tensor completion via uniform entry-wise sampling is well studied [31, 35, 30]; however, tubal-sampling is required in RF fingerprint-based indoor localization. By uniform tubal-sampling we mean that, in (2), the subset  is chosen uniformly at random (with replacement) from the grid map with . As in the matrix case, for a fixed , solving the optimization problem (2) is NP-hard. One can instead relax the tubal-rank measure to the tensor nuclear norm (TNN) proposed in  and solve the resulting convex optimization problem. The tensor nuclear norm derived from the t-SVD is defined as follows.
This optimization problem can be solved using ADMM, with modifications to the algorithm in . Note that in  the authors consider random element-wise sampling, whereas here we consider random tubal-sampling. It turns out that under tubal sampling, tensor completion by solving the convex optimization of Equation (3) splits into  matrix completion problems in the Fourier domain. This observation lets us derive performance bounds as an extension of matrix completion results, using the following incoherence conditions.
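For concreteness, the TNN derived from the t-SVD is (up to a normalization convention that varies in the literature) the sum of the singular values of the Fourier-domain frontal slices; a sketch:

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Tensor nuclear norm: sum of singular values of the Fourier-domain
    frontal slices, normalized by n3 (conventions differ by this factor)."""
    Xf = np.fft.fft(X, axis=2)
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3
```

Under tubal-sampling, every Fourier-domain frontal slice is observed on the same set of columns, which is why minimizing this norm subject to the observations decouples into independent matrix completion problems, one per frequency.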
(Tensor Incoherency Condition) Given the t-SVD of a tensor with tubal-rank , is said to satisfy the tensor-incoherency conditions, if there exists such that for .
(Tensor-column incoherence: )  (4)
(Tensor-row incoherence: )
where  denotes the usual Frobenius norm and  are standard coordinate bases of the respective lengths.
We have the following result stated without proof.
Under random tubal sampling, if  for some constant  and , then solving the convex optimization problem of Equation (3) recovers the tensor  with high probability (for sufficiently large ).
In contrast to this result, we will see that the adaptive sampling we propose below requires almost the same sampling budget but only requires the tensor-column incoherence to be small (see Theorem 3); this is one of the major gains of adaptive tubal-sampling over random tubal-sampling, as well as over the random element-wise sampling considered in . This gain is similar to the matrix case under element-wise adaptive sampling, as shown in . Adaptive sampling yields further gains when the energy is concentrated in a few locations (see, for example, Figure 4), which random sampling can miss. Further, adaptive sampling reduces the number of measurements by a constant factor for the same accuracy, as borne out by our experimental results. Although our approach is motivated by the adaptive matrix sampling strategy in , we point out that the performance bounds for the proposed adaptive strategy do not follow directly from the results in  and require a careful treatment.
IV Proposed Adaptive Sampling Approach
We begin by revisiting the problem and providing insights into the development of the proposed adaptive strategy.
The problem (2) contains two goals: (1) for a given low-tubal-rank tensor , select a set  with the smallest cardinality, together with the corresponding samples , that preserves most of the information in , i.e., such that one can recover  from  and ; (2) for a given set  and samples , estimate a tensor  with the least tubal-rank. However, these two goals are intertwined, and one cannot expect a computationally feasible algorithm to obtain the optimal solution. Therefore, we set  and seek to select a set  and the corresponding samples  that span the low-dimensional tensor-column subspace of . The focus of this section is to design an efficient sampling scheme and to bound the sampling budget .
To achieve this, we design a two-pass sampling scheme inspired by . The proposed approach exploits adaptivity to identify entries that are highly informative for learning the low-dimensional tensor subspace of the fingerprint data. The 1st-pass sampling gathers general information about the region of interest, and the 2nd-pass sampling then concentrates on the more informative reference points. In particular, the total sampling budget  is divided into  and  for the two passes, where  is called the allocation ratio. In the 1st-pass sampling, we randomly sample  out of  reference points in each column of . In the 2nd-pass sampling, the remaining samples are allocated to the highly informative columns identified by the 1st-pass sampling. Finally, tensor completion on the sampled RF fingerprints is performed to rebuild the fingerprint database.
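A deliberately simplified sketch of the two-pass idea follows (this is not Algorithm 1 itself: the real second pass iteratively maintains a tensor-column subspace estimate and samples in proportion to residual energy, while here raw sampled energy stands in as a crude proxy; `measure`, `budget`, and `alpha` are our hypothetical names):

```python
import numpy as np

def two_pass_sampling(measure, n1, n2, budget, alpha=0.5, seed=0):
    """Pass 1: uniform tubal samples in every column of the grid.
    Pass 2: spend the remaining budget fully sampling the columns that
    look most informative (here: largest sampled energy)."""
    rng = np.random.default_rng(seed)
    samples = {}
    per_col = max(1, int(alpha * budget) // n2)
    energy = np.zeros(n2)
    for j in range(n2):
        for i in rng.choice(n1, size=per_col, replace=False):
            samples[(i, j)] = measure(i, j)
            energy[j] += float(np.sum(samples[(i, j)] ** 2))
    n_full = max(0, (budget - len(samples)) // n1)
    for j in np.argsort(energy)[::-1][:n_full]:
        for i in range(n1):
            samples.setdefault((i, j), measure(i, j))
    return samples
```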
The provable optimality of this scheme rests on these three observations.
is embedded in an -dimensional tensor-column subspace , ;
Learning  requires knowing only  linearly independent lateral slices. (Note: a collection of lateral slices is said to be linearly independent, in the proposed setting, if .)
Knowing , randomly sampling a few tubes of the -th column is enough to reliably estimate the lateral slice ;
However, we know neither the value of  a priori nor the independence between any two lateral slices. Given a current estimate  of the tensor-column subspace, following  one can adaptively sample the columns according to a probability distribution in which the probability of sampling the -th lateral slice is proportional to , i.e., . Updating the estimate of  iteratively, once  ( a small constant) columns are sampled, we can expect that with high probability, , . Note that  denotes projection onto the orthogonal complement of ; in t-product form, , , where  is invertible and can be computed according to Definition 4 and Remark 1.
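To illustrate this distribution, the following sketch (with our own variable names) computes, for each column of a data tensor `X`, the energy of its lateral slice outside a current subspace estimate `U`, working slice-wise in the Fourier domain, and normalizes to probabilities:

```python
import numpy as np

def residual_probs(X, U):
    """p_j proportional to the squared norm of column j's component
    outside the tensor-column subspace spanned by U (n1 x r x n3,
    assumed orthonormal slice-wise in the Fourier domain)."""
    Xf = np.fft.fft(X, axis=2)
    Uf = np.fft.fft(U, axis=2)
    res = np.zeros(X.shape[1])
    for k in range(X.shape[2]):
        P = Uf[:, :, k] @ Uf[:, :, k].conj().T   # projector onto U
        R = Xf[:, :, k] - P @ Xf[:, :, k]        # residual component
        res += np.sum(np.abs(R) ** 2, axis=0)
    total = res.sum()
    return res / total if total > 0 else np.full(X.shape[1], 1.0 / X.shape[1])
```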
The challenge is that we cannot obtain the exact sampling probabilities without sampling all reference points of the grid map . Nevertheless, exploiting the spatial correlation, one can estimate the sampling probabilities from missing (sub-sampled) data, as in Algorithm 1. Essentially, under the incoherence conditions of Equation (4) and for sufficiently large , we show in Lemma 4 that  is a good estimate of . Using these estimates, one can then show that the adaptive sampling scheme of the second pass of Algorithm 1 succeeds with high probability in estimating the correct tensor-column subspace. This is the content of Lemma 2, which analyzes a single step of the second-pass sampling, from which the main Theorem 3 follows.
We now outline the details of the algorithm in the next section.
IV-A Tensor Completion with Adaptive Tubal-Sampling
The pseudo-code of our adaptive tubal-sampling approach is shown in Algorithm 1. The inputs include the grid map , the sampling budget , the size of the tensor , the allocation ratio , and the number of iterations . The algorithm consists of three steps. The 1st-pass sampling is a uniform tubal-sampling, while the 2nd-pass sampling outputs an estimate of the tensor-column subspace in  rounds, as explained below.
IV-A1 1st-Pass Sampling
First, we gather general information about the whole region of interest by applying uniform random sampling to avoid spatial bias. Denote the sampled reference points by , the sampled reference points in the -th column by  with , and the corresponding fingerprints by .
IV-A2 2nd-Pass Sampling
Initialize with , . In each round, we estimate the sampling probabilities  and choose  columns of  according to . Then we compute an intermediate subspace  spanned by those of the chosen columns that lie outside of , and update the subspace . Here  denotes the chosen lateral slices and  denotes the corresponding RF fingerprints in the -th round.
Let  denote all sampled reference points (including ),  the sampled reference points in the -th column of , and  the tensor formed by the horizontal slices of  indicated by . Define the projection operator . After  rounds, we obtain a fairly accurate estimate  of . To understand the estimator, consider the noiseless case. Since  lies in , we have , i.e., . According to Definition 1, we know that . Therefore, using the estimate , we approximate each lateral slice by  and concatenate these estimates to form .
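In the noiseless case this estimator reduces, in the Fourier domain, to a least-squares fit of each partially observed column against the subspace basis restricted to the observed rows, followed by re-expansion; a sketch with our own variable names:

```python
import numpy as np

def complete_column(x_obs, rows, U):
    """Estimate a full lateral slice (n1 x n3) from the tubes observed
    at row indices `rows`, given a basis tensor U (n1 x r x n3):
    per Fourier slice, least squares on the observed rows, then expand."""
    n1, r, n3 = U.shape
    Uf = np.fft.fft(U, axis=2)
    xf = np.fft.fft(x_obs, axis=1)          # x_obs: len(rows) x n3
    out = np.empty((n1, n3), dtype=complex)
    for k in range(n3):
        a, *_ = np.linalg.lstsq(Uf[rows, :, k], xf[:, k], rcond=None)
        out[:, k] = Uf[:, :, k] @ a
    return np.real(np.fft.ifft(out, axis=1))
```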
V Performance Bounds
For performance guarantees, we are interested in the recovery error and the required sampling budget. We prove that Algorithm 1 has bounded recovery error and achieves a near-optimal sampling budget. Since we use the estimated sampling probabilities  in the 2nd-pass sampling, we also prove that our estimates are close to .
STEP 1 - First, we analyze a single round of the 2nd-pass sampling. Lemma 2 states that if the probability estimates are within a constant tolerance of the true values (see Equation 5), then sampling  columns of  according to  (estimated from the samples obtained in the 1st-pass sampling) minimizes the residual error within  at rate (). The second term on the right-hand side of (6) is the residual error outside of , which remains unreduced. Note that without any prior information (i.e.,  and ), sampling additional columns of  reduces the residual error at rate , which is  as in Algorithm 1. Therefore, Lemma 2 is the key to efficiently reducing the recovery error. Note that  denotes the projection onto the orthogonal complement of . See Appendix A for the proof.
Let with tensor tubal-rank , and represent the estimated tensor-column subspace in a round. Let , and be randomly selected lateral slices of (as indicated by ), sampled according to the distribution . If there exist constants , such that:
then, with probability we have:
where is a constant such that
denotes the space spanned by the slices of , and denotes a projection on to the best -dimensional subspace of .
STEP 2 - Setting  and  sufficiently large, Lemma 3 states that our two-pass sampling scheme approximates the low-tubal-rank tensor  with error comparable to that of the best rank- approximation . This indicates that the 2nd-pass sampling estimates  with high accuracy.
Suppose that (5) holds with for some constant in each round. Let denote the sets of slices selected at each round and set . Then with probability we have:
The proof follows along the same lines as the proof for the adaptive matrix completion case in , by applying the matrix result to frontal slices of the tensor in the Fourier domain . ∎
Let and satisfies the tensor-column incoherence condition (4) with , with probability we have
as long as the expected sampling budget satisfies:
for some , and thus is of order with .
This result relies on three conditions: the incoherence of each lateral slice , the tensor-column incoherence of  (Equation (4)), and Lemma 12; these conditions fail with probability less than , , and , respectively. Therefore, Lemma 4 holds with probability . See Appendix C for the complete proof. ∎
For  with tensor tubal-rank , the t-SVD in Definition 1 indicates that the degrees of freedom (in terms of non-zero vectors) is , which is of order  with . Therefore, our sampling budget is near-optimal to within a factor of .
STEP 3 - Finally, we analyze the estimation process (the last step of the Algorithm) in Lemma 5. It states that our estimator outputs each lateral slice with bounded error, comparable to the energy outside of , i.e., .
With our choice of  (and correspondingly , related via ),  is smaller than , as shown in Appendix -C. Therefore, combining Lemma 3 and Lemma 4 (see also ), we have Lemma 6. Note that Lemma 3, Lemma 4, and Lemma 5 have failure probabilities less than , , and , respectively; therefore, Lemma 6 holds with success probability .
Assume that , has tensor tubal-rank , tensor-column incoherence , , and for all , . Then for , sample columns of each round, after rounds, compute as described. Then with probability ,
While Lemma 6 bounds the difference between  and , we now bound the difference between  and , which measures the recovery error of Algorithm 1. This error consists of two parts: the first term measures the performance of our estimation process, and the second term, essentially , measures the effect of noise in the samples . Combining these results, we have the following main theorem.
Under the partial observation model , where , . Assume that has tubal-rank , , and tensor-column incoherence . Then for , and
with probability , there exist two constants such that the estimation error of Algorithm 1 obeys,
As a consequence of this theorem, assuming for example that the -norm of each fingerprint is approximately the same, say , our algorithm guarantees that the recovery error of each fingerprint in -norm is bounded by  and the relative error is bounded by . Since  are relatively large, and Lemma 4 shows that  is of order  with , the relative error is small.
VI Performance Evaluation
We are interested in two kinds of performance: recovery error and localization error. Varying the sampling rate as , we quantify the recovery error in terms of the normalized squared error (NSE) on the entries that were not sampled, i.e., the recovery error on the set . The NSE is defined as:
where  is the estimated tensor and  is the complement of the set .
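For reference, the NSE over the unsampled reference points can be computed as follows (a sketch; the boolean `sampled_mask` marking surveyed grid points is our own convention):

```python
import numpy as np

def nse(T_true, T_hat, sampled_mask):
    """Normalized squared error over reference points NOT in the sampled
    set: squared error divided by the ground-truth energy on Omega^c."""
    unsampled = ~sampled_mask                  # n1 x n2 boolean mask
    diff = T_true[unsampled] - T_hat[unsampled]
    return float(np.sum(diff ** 2) / np.sum(T_true[unsampled] ** 2))
```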
In the simulations, we uniformly select  testing points within the selected region and then use classic localization schemes to estimate their locations. We measure the localization error as the Euclidean distance between the estimated location and the actual location of a testing point, i.e., .
For tensor recovery, we consider three algorithms: tensor completion (TC) under uniformly random tubal-sampling, using the algorithm proposed in [30, 36]; face-wise matrix completion (MC), using the algorithm in ; and tensor completion via matricization or flattening (MC-flat)  under uniform element-wise sampling of the 3-D tensor, using the AltMin algorithm for matrix completion . We subsequently use the completed RSS map for localization and compare the error in the location estimates.
VI-A Experiment Setup - Model-based Data
We select a region of m m in a real office building, as shown in Fig. 4, divided into a  grid map. There are  access points randomly deployed within this region. The indoor radio channel is characterized by multi-path propagation with dominant propagation phenomena: shadowing by walls, wave-guiding effects in corridors due to multiple reflections, and diffractions around vertical wedges. We adopt the ray-tracing-based model [38, 39], which considers all these effects and leads to highly accurate predictions. Further, the model parameters were found by supervised learning from real collected measurements using professional software . We generated an RSS tensor  as the ground truth for our simulations. Note that the RSS values are measured in dBm. For example, the RSS radio maps for the -th and -th access points are shown in Fig. 5.
Radio Map Recovery Performance: Fig. 6 shows the RSS tensor recovery performance for varying sampling rates. The compared schemes are matrix completion and tensor completion via uniform sampling, and adaptive sampling (AS) with allocation ratios  and . We find that all tensor approaches are better than matrix completion, because the tensor model exploits the cross-correlations among access points, while matrix completion only takes advantage of the correlation within each access point. Both AS schemes outperform tensor completion via uniform sampling, since adaptivity guides the sampling process to concentrate on the more informative entries. Allocating equal sampling budgets to the 1st pass and the 2nd pass gives better performance than an uneven allocation, showing that the two passes are equally important. The proposed scheme (AS with ) rebuilds the fingerprint data with  error using fewer than  samples.
VI-B Experiment Setup - Real Data
We collected a WiFi RSS data set in the same office building, containing a number of selected locations and access points. Since the locations do not lie exactly on a grid, we set the grid size to m m and apply the KNN method to extract a full third-order tensor as the ground truth. Specifically, for each grid point, we set its RSS vector to the average of the RSS vectors from the nearest three (K = 3) measured locations. The resulting ground-truth tensor serves as a complement to our model-generated data; in the next section, we test the localization performance at a finer granularity, covering the whole region of interest.
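The grid-extraction step above can be sketched in a few lines. The survey locations, region size, grid spacing, and number of access points below are all placeholder values, not the paper's actual data:

```python
import numpy as np

# Hypothetical survey data: measured (x, y) locations and their RSS
# vectors over n_ap access points (all sizes are illustrative).
rng = np.random.default_rng(0)
locs = rng.uniform(0.0, 10.0, size=(50, 2))   # 50 survey points
rss = rng.uniform(-90.0, -30.0, size=(50, 8)) # 8 access points

# Regular grid covering the region of interest.
gx, gy = np.meshgrid(np.arange(0.5, 10, 1.0), np.arange(0.5, 10, 1.0))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# For each grid point, average the RSS vectors of the K nearest
# surveyed locations (K = 3, as in the text).
K = 3
d = np.linalg.norm(grid[:, None, :] - locs[None, :, :], axis=2)
nearest = np.argsort(d, axis=1)[:, :K]        # indices of 3 closest points
grid_rss = rss[nearest].mean(axis=1)          # shape (n_grid, n_ap)

# Reshape into the third-order ground-truth tensor (x, y, access point).
tensor = grid_rss.reshape(gx.shape[0], gx.shape[1], rss.shape[1])
print(tensor.shape)
```

Each grid cell thus inherits a smoothed fingerprint from its three nearest survey points, yielding a complete tensor even though the raw survey is off-grid.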
Radio Map Recovery Performance: Fig. 7 shows the RSS tensor recovery performance on the real-world data set. First, compared with Fig. 6, we see that the recovery performance on real-world data is consistent with that on simulated data. Second, for the real-world data set, the tensor model is superior to the matrix model. In our case, a major ingredient in the recovery improvement may be the large number of access points relative to the dimensions of the grid. Third, as expected, the proposed adaptive scheme achieves better recovery performance.
VI-C Localization Performance
An important factor that influences the localization error is measurement noise. Note that during the site survey, the engineer stays at each reference point for a while to obtain a stable fingerprint; therefore, we only consider measurement noise in the query fingerprint. This noise may come from the measuring process or from dynamics in the environment. High-SNR and low-SNR cases are considered: Gaussian noise (in dBm) of lower variance is added for the high-SNR case, and of higher variance for the low-SNR case.
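The noise model for the query fingerprint can be sketched as follows; the noise standard deviation `sigma_dbm` and the sample query values are assumptions for illustration, since the section only distinguishes a high- and a low-SNR setting:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_query(fingerprint, sigma_dbm):
    """Corrupt a query RSS fingerprint (in dBm) with zero-mean Gaussian
    noise; sigma_dbm selects the high- or low-SNR case."""
    return fingerprint + rng.normal(0.0, sigma_dbm, size=fingerprint.shape)

query = np.array([-60.0, -72.0, -55.0])      # illustrative 3-AP fingerprint
noisy = noisy_query(query, sigma_dbm=2.0)    # assumed high-SNR sigma
print(noisy)
```

Only the query is perturbed; the database fingerprints are treated as noise-free, matching the site-survey assumption above.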
We choose three representative localization techniques for comparison, namely weighted KNN, the kernel approach, and the support vector machine (SVM). Weighted KNN is the most widely used technique, since it is simple and is reported to perform well in indoor localization systems. The kernel approach is an improvement over weighted KNN and can be regarded as the basic principle underlying machine learning-based localization approaches: the kernel function encapsulates the complicated relationship between RF fingerprints and physical positions. SVM is an efficient machine learning method widely used in fingerprint-based indoor localization; it uses kernel functions and learns the relationship between RF fingerprints and physical positions by regression.
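The kernel approach can be sketched in a few lines: each reference point is weighted by a Gaussian kernel of its fingerprint distance to the query, and the location estimate is the kernel-weighted average of the reference locations. The bandwidth `sigma` and the toy database below are assumptions for illustration:

```python
import numpy as np

def kernel_localize(query, db_fps, db_locs, sigma=5.0):
    """Kernel-based localization sketch: weight every reference point by a
    Gaussian kernel of its fingerprint distance to the query, then return
    the kernel-weighted average of the reference locations."""
    d2 = np.sum((db_fps - query) ** 2, axis=1)   # squared RSS distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    w /= w.sum()                                 # normalize to sum to 1
    return w @ db_locs

# Toy database: 4 reference points with 3-AP fingerprints (illustrative).
db_fps = np.array([[-60., -70., -55.],
                   [-62., -68., -57.],
                   [-80., -50., -65.],
                   [-78., -52., -63.]])
db_locs = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])

est = kernel_localize(np.array([-61., -69., -56.]), db_fps, db_locs)
print(est)
```

Because the query fingerprint closely matches the first two references, their weights dominate and the estimate lands between their locations; weighted KNN arises as the special case where only the nearest K references receive nonzero weight.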
In addition to localization based on the estimated tensor, we also test the above three localization techniques directly on the samples, without any reconstruction or estimation. Let DL denote direct localization on uniformly sampled fingerprints, while the two DL variants denote direct localization on adaptively sampled fingerprints with the two allocation ratios, respectively.
VI-C1 Weighted KNN
Let K be a fixed positive integer, usually set to a small value. Consider a sampled fingerprint at a reference point. Find within the fingerprint database the K reference point locations whose fingerprints are nearest to the sampled fingerprint. Then, estimate the location by weighted averaging as follows: