Inertial Sensor Arrays, Maximum Likelihood, and Cramér-Rao Bound


Isaac Skog, John-Olof Nilsson, Peter Händel, and Arye Nehorai

I. Skog, J.-O. Nilsson, and P. Händel are with the Department of Signal Processing, ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden (e-mail: skog@kth.se, jnil02@kth.se, ph@kth.se). A. Nehorai is with the Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130 USA (e-mail: nehorai@ese.wustl.edu). The work of I. Skog, J.-O. Nilsson, and P. Händel has partly been supported by the Swedish Governmental Agency for Innovation Systems (VINNOVA). The work of A. Nehorai was supported by AFOSR Grant No. FA9550-11-1-0210.
Abstract

A maximum likelihood estimator for fusing the measurements in an inertial sensor array is presented. The maximum likelihood estimator is concentrated and an iterative solution method is presented for the resulting low-dimensional optimization problem. The Cramér-Rao bound for the corresponding measurement fusion problem is derived and used to assess the performance of the proposed method, as well as to analyze how the geometry of the array and sensor errors affect the accuracy of the measurement fusion. The angular velocity information gained from the accelerometers in the array is shown to be proportional to the square of the array dimension and to the square of the angular speed. In our simulations the proposed fusion method attains the Cramér-Rao bound and outperforms the current state-of-the-art method for measurement fusion in accelerometer arrays. Further, in contrast to the state-of-the-art method that requires a 3D array to work, the proposed method also works for 2D arrays. The theoretical findings are compared to results from real-world experiments with an in-house developed array that consists of 192 sensing elements.

I Introduction


Motion sensing is an essential capability in many systems to achieve a high level of autonomy. Nowadays, inertial motion sensors are ubiquitous in everything from industrial manufacturing equipment to consumer electronic devices. This widespread usage of inertial sensors has become possible thanks to the last decade's development of micro-electro-mechanical system (MEMS) technology, which has revolutionized the inertial sensor industry [1]. Today, inertial sensors can be manufactured at unprecedented volumes and at low prices [2]; six degrees-of-freedom inertial sensor assemblies were sold to large-volume customers at less than a dollar in 2013 [3]. Even though development has pushed the performance boundaries of the inertial sensor technology, contemporary ultralow-cost sensors still cannot fully meet the needs of many applications; especially applications where the sensor signals have to be integrated over time. These applications still suffer from the bias instability, nonlinearities, and thermal instability of the contemporary ultralow-cost sensors [1]. However, by capitalizing on the decreasing price, size, and power consumption of the sensors, it is now feasible to construct arrays with hundreds of sensing elements, and digitally process these measurements to create virtual high-performance sensors. The benefit of using an array of sensors is not only an increased measurement accuracy, but also an increased operational reliability thanks to the possibility of carrying out sensor fault detection and isolation [4, 5]. Further, from the redundant measurements the covariance of the measurement errors can be estimated and used to determine the reliability of the measurements [6]. Moreover, as will be shown, the angular acceleration can be directly estimated, and for some array configurations the dynamic measurement range can be extended beyond that of the individual sensors by utilizing the spatial separation of the sensors. An example of an in-house developed embedded system holding an inertial sensor array constructed from contemporary ultralow-cost sensors is shown in Fig. 1. The system has 192 inertial sensing elements, a micro-controller for information fusion, and a Bluetooth interface for communication. Refer to www.openshoe.org and [7] for details.

[Fig. 1 photo: a 53 mm x 33 mm printed circuit board; the sensor array has 16 chipsets on each side of the PCB.]

Fig. 1: An in-house designed embedded system with an inertial sensor array, which is available under an open-source licence. The array consists of 32 MPU9150 Invensense inertial sensor chipsets, each containing an accelerometer triad, a gyroscope triad, and a magnetometer triad.
Reproducible research: The layout files and software for the inertial sensor array used in the experiments are available under an open-source licence at www.openshoe.org.

Inertial sensors are primarily used to measure the motion of objects. Since a rigid body in a three-dimensional space has six degrees-of-freedom, three rotational and three translational, the motion of an object is commonly measured using a sensor assembly of three gyroscopes and three accelerometers, a.k.a. an inertial measurement unit (IMU). However, as a rigid object's motion can also be described in terms of the translational motion of three non-collinear points on the object, it is also possible to measure the motion using only an assembly of spatially separated accelerometers, a.k.a. a gyroscope-free IMU; the object's rotational motion is then described by the relative displacement of the points. Thus, with an inertial sensor assembly that consists of both gyroscopes and spatially distributed accelerometers, a.k.a. an inertial sensor array, rotational information can be obtained both from the gyroscopes and the accelerometers. The question is, how should the measurements in such a sensor array be fused?

Array signal processing and measurement fusion in the context of sensor arrays measuring wave fields is a long-studied subject in the signal processing literature [8], and the methods developed within this area have been applied to various types of sensor arrays such as antenna arrays [9, 10], magnetic sensor arrays [11], acoustic sensor arrays [12, 13, 14], and chemical sensor arrays [15]. However, since an inertial sensor array does not measure a wave field, but something that can be referred to as a force field, the signal models and methods in the array processing literature are not directly applicable to the inertial sensor array measurement fusion problem. Nevertheless, based on results from classical mechanics about forces in rotating coordinate frames, a signal model for the inertial sensor array measurements can be formulated, and results from the field of estimation theory can be applied to derive a measurement fusion method. Accordingly, we will propose a maximum likelihood approach for fusing the measurements in an inertial sensor array consisting of multiple accelerometer and gyroscope triads. With suitable projection vectors, the approach can also be generalized to a system of single sensitivity axis sensor units.

I-A State-of-the-art techniques

Next, a brief summary of existing work on inertial sensors arrays is presented. For an in-depth review of the topic the reader is referred to the literature survey presented in [16].

In the 1960s, it was shown that the specific force and angular acceleration of an object can be estimated with an assembly of nine accelerometers [19]. (With six accelerometers the angular velocity cannot be estimated from the measurements at a single time instant; however, a differential equation describing the rotation dynamics can be posed by which the angular velocity can be tracked over time, see e.g. [17, 18].) The angular velocity can only be estimated up to a sign ambiguity since the centrifugal force depends quadratically on the angular speed [20, 21, 22]. Though the motion of an object can be tracked using only the estimated specific force and angular acceleration, the increased sensitivity to measurement errors caused by the extra integration needed to calculate the orientation of the tracked object renders this an intractable solution. Therefore, a variety of methods to resolve the sign ambiguity of the angular velocity estimates by tracking the time development of the estimated quantities have been proposed [23, 24, 25, 26, 27, 28, 29, 20].

Instead of directly estimating the angular velocity and angular acceleration from the accelerometer measurements, it is common to first estimate the angular acceleration tensor (or variations thereof), and then calculate the angular velocity and angular acceleration [30, 28]. To simplify the estimation of the angular acceleration tensor, the fact that it only has six degrees of freedom is generally neglected, so that it can be estimated via a linear transformation of the accelerometer measurements, see e.g. [31, 30, 24, 25, 26, 27, 17, 32, 33] or the Appendix. The advantages of this approach are its simplicity and that it is straightforward, using the least squares method (see e.g. [28]), to include the measurements from additional redundant accelerometers. The disadvantages are mainly caused by the neglected constraints on the tensor and are as follows: (i) a minimum of twelve, instead of nine, accelerometers are needed for the estimation problem to be well defined [30]; (ii) the estimation accuracy is reduced; and (iii) the locations of the sensors must span a 3D space instead of a 2D space [25]. The requirement that the locations of the sensors must span a 3D space significantly increases the size of such a system and causes a problem if a sensor array is to be constructed on a printed circuit board.

Since the angular velocity of all points on a rigid object is the same, no additional information, except that from redundant measurements, is obtained from having several spatially distributed gyroscopes in an inertial sensor array. Still, a non-negligible reduction in the measurement errors can be obtained from the redundant measurements, see e.g. the Allan variance analysis in [7]. A few methods to fuse the measurements from multiple gyroscopes using different filter frameworks and signal models are described in [34, 35, 36].

With respect to inertial arrays that consist of multiple IMUs, the measurement fusion problem has mainly been studied in the framework of global navigation satellite system aided inertial navigation systems [6, 37, 38, 39, 40]. In the literature, the proposed information fusion approaches can be broadly grouped into two categories. In the first category, see e.g. [6, 37, 38, 39], the IMU measurements are fused before they are used in the inertial navigation systems; commonly a weighted average is used and the spatial separation of the sensors is neglected. In the second category, see e.g. [40], they are fused after being processed by several parallel inertial navigation systems. Refer to [37] and the references therein for a discussion on the pros and cons of the two approaches, as well as an evaluation of different measurement fusion methods, including a few where the spatial separation of the sensors is taken into account.

I-B Presented work and findings

In this paper, the problem of fusing the measurements from an array of accelerometer and gyroscope triads is posed as a parameter estimation problem for the first time. Results from the field of estimation theory are used to derive a maximum likelihood based measurement fusion method. The Cramér-Rao bound (CRB) for the corresponding estimation problem is also derived and used to evaluate the performance of the proposed measurement fusion method. Simulations show that the proposed method attains the CRB and outperforms the current state-of-the-art method for measurement fusion in accelerometer arrays. Further, by studying the properties of the signal model, it is shown that the necessary and sufficient conditions for the measurement fusion problem to be well conditioned are that the array consists of at least one gyroscope triad and three accelerometer triads whose locations are non-collinear; current state-of-the-art information fusion methods require 3D arrays. Moreover, an analysis of the CRB reveals that: (i) the angular velocity information gained from the accelerometers is proportional to the square of the array dimensions and to the square of the angular speed; (ii) the accuracy with which the angular acceleration can be estimated decreases with the angular velocity; and (iii) there exists no measurement fusion method that can calculate an unbiased, finite-variance estimate of the angular velocity for small angular velocities from an array of only accelerometers. To support the theoretical findings, the accuracy of the proposed measurement fusion method is also experimentally evaluated. The experimental results show that the proposed method works, but they also show that there is a discrepancy between the theoretical and experimental results. The discrepancy is likely primarily due to uncertainties in the location of the sensing elements of the real-world array. To summarize, the proposed information fusion method outperforms the current state-of-the-art method and also works for 2D arrays, allowing for smaller and cheaper sensor arrays to be constructed. However, more research on sensor array calibration is needed before inertial sensor arrays can reach their full potential in practice.

II Measurement Fusion

In this section a maximum likelihood estimator to fuse the measurements from an inertial sensor array consisting of accelerometer triads and gyroscope triads will be presented. The derivation of the estimator takes as its starting point the relationship, obtained from classical mechanics, between forces in rotating coordinate frames.

II-A Accelerometer signal model

Basic kinematics dictates that the specific force in one point of a rotating coordinate frame may be decomposed into that of another point, a centrifugal term, and an Euler term. Specifically, as illustrated in Fig. 2, the specific force $s_k$, sensed by the $k$:th accelerometer triad located at position $r_k$ in the array coordinate frame and whose sensitivity axes are aligned with the coordinate axes of the array frame (a calibration method to align the coordinate axes of the sensors in an inertial sensor array can be found in [41]), is given by [42, p. 90]

$s_k = s + \omega \times (\omega \times r_k) + \dot{\omega} \times r_k.$   (1)

Here $s$ denotes the specific force sensed at the origin of the array frame. Further, $\omega$ and $\dot{\omega}$ denote the array frame's angular velocity and angular acceleration w.r.t. inertial space, respectively. Moreover, $\times$ denotes the vector cross product. This relates the measurements of different accelerometers of known locations within the array to a common specific force via the angular velocity and acceleration of the array.
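
For illustration, the decomposition in (1) can be evaluated numerically as in the following Python sketch; the sensor positions and motion values are arbitrary example numbers, not taken from the paper.

```python
import numpy as np

def specific_force_at(r_k, s0, omega, omega_dot):
    """Specific force sensed at position r_k of a rotating rigid array, cf. (1):
    specific force at the origin + centrifugal term + Euler term."""
    centrifugal = np.cross(omega, np.cross(omega, r_k))
    euler = np.cross(omega_dot, r_k)
    return s0 + centrifugal + euler

# Example: four triads on a 2 cm square, rotating about the z-axis.
r = 0.01 * np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0]])  # positions [m]
s0 = np.array([0.0, 0.0, 9.81])        # specific force at the origin [m/s^2]
omega = np.array([0.0, 0.0, 10.0])     # angular velocity [rad/s]
omega_dot = np.array([0.0, 0.0, 5.0])  # angular acceleration [rad/s^2]

for r_k in r:
    print(r_k, specific_force_at(r_k, s0, omega, omega_dot))
```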

Fig. 2: An illustration of the decomposition of the force sensed by an accelerometer triad in an inertial sensor array, overlaid on a picture of the printed circuit board of the inertial sensor array shown in Fig. 1. The specific force sensed by the $k$:th accelerometer triad is the sum of the specific force at the origin of the array coordinate frame, the centrifugal force, and the Euler force.

Next, let $[a]_\times$ denote the skew-symmetric matrix representation of the three-element vector $a$, for which $[a]_\times b = a \times b$. Then (1) may be rewritten in terms of matrix multiplications as

$s_k = s + [\omega]_\times [\omega]_\times r_k + [\dot{\omega}]_\times r_k = s + [\omega]_\times [\omega]_\times r_k - [r_k]_\times \dot{\omega},$   (2)

where in the second equality it was used that $[a]_\times b = -[b]_\times a$. The measurement vector $y_a$, consisting of the measurements from the $K$ accelerometer triads in the array, can thus be modeled as

$y_a = h_a(\omega) + H_a \theta + e_a,$   (3)

where

$h_a(\omega) = \begin{bmatrix} [\omega]_\times [\omega]_\times r_1 \\ \vdots \\ [\omega]_\times [\omega]_\times r_K \end{bmatrix}, \qquad H_a = \begin{bmatrix} I_3 & -[r_1]_\times \\ \vdots & \vdots \\ I_3 & -[r_K]_\times \end{bmatrix} = \begin{bmatrix} 1_K \otimes I_3 & \begin{matrix} -[r_1]_\times \\ \vdots \\ -[r_K]_\times \end{matrix} \end{bmatrix},$

and $\theta = [\, s^\top \;\; \dot{\omega}^\top ]^\top$. Here $1_K$ and $I_3$ denote a column vector of length $K$ with all of the entries equal to one and an identity matrix of size $3$, respectively. Further, $e_a$ denotes the measurement error of the accelerometers.

II-B Gyroscope signal model

The angular velocity sensed by the gyroscopes is independent of their location in the array. Consequently, the measurement vector $y_g$, consisting of the measurements from all of the $L$ gyroscope triads in the array, can be modeled as

$y_g = (1_L \otimes I_3)\, \omega + e_g,$   (4)

where $1_L$ denotes a column vector of length $L$ with all of the entries equal to one, and $e_g$ denotes the measurement error of the gyroscopes. That is, all gyroscopes measure the same angular velocity and are disturbed by additive measurement errors.

II-C Array signal model

Concatenating all of the measurements into a single vector, the signal model for the full array becomes

$y = h(\omega) + H \theta + e,$   (5)

where

$y = \begin{bmatrix} y_a \\ y_g \end{bmatrix}, \qquad h(\omega) = \begin{bmatrix} h_a(\omega) \\ (1_L \otimes I_3)\,\omega \end{bmatrix}, \qquad H = \begin{bmatrix} H_a \\ 0_{3L \times 6} \end{bmatrix}, \qquad e = \begin{bmatrix} e_a \\ e_g \end{bmatrix}.$

Here $0_{3L \times 6}$ denotes a zero matrix of size $3L$ by $6$. Thus, the signal model (5) for the inertial sensor array consists of a non-linear part $h(\omega)$, depending only on the angular velocity $\omega$, and a linear part $H\theta$, depending only on the specific force $s$ and angular acceleration $\dot{\omega}$. This separation of dependencies will turn out to be useful in the estimation of the specific force and the angular velocity.
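
The stacked quantities $h(\omega)$ and $H$ can be assembled directly from the triad positions, as in the following illustrative Python sketch (the function and variable names are illustrative, not part of the paper):

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a]_x such that [a]_x b = a x b."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def build_model(r, n_gyro, omega):
    """Return h(omega) and H of the stacked array model y = h(omega) + H theta + e,
    with theta = [specific force at the origin; angular acceleration]."""
    K = r.shape[0]                      # number of accelerometer triads
    h_acc = np.concatenate([skew(omega) @ skew(omega) @ r_k for r_k in r])
    h_gyr = np.tile(omega, n_gyro)      # every gyroscope triad senses the same omega
    h = np.concatenate([h_acc, h_gyr])

    H_acc = np.hstack([np.tile(np.eye(3), (K, 1)),
                       np.vstack([-skew(r_k) for r_k in r])])
    H = np.vstack([H_acc, np.zeros((3 * n_gyro, 6))])
    return h, H
```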

II-D Identifiability conditions

When $L = 0$, i.e., the array consists of only accelerometers, then $h_a(\omega) = h_a(-\omega)$ and the parameters in the signal model are not identifiable. Further, $H_a$ will only have full rank if the array has at least three accelerometer triads whose locations span a 2D space. To see this, assume without loss of generality that the origin of the array coordinate system is defined so that $r_1 = 0$; then

$H_a = \begin{bmatrix} I_3 & 0_{3 \times 3} \\ I_3 & -[r_2]_\times \\ \vdots & \vdots \\ I_3 & -[r_K]_\times \end{bmatrix}.$

Next, note that $[r_k]_\times r_k = 0$, i.e., the null space vector of $[r_k]_\times$ is $r_k$. Thus, only if there are two accelerometer triads whose locations $r_i$ and $r_j$ are such that $r_i \times r_j \neq 0$ does $H_a$ have full rank. That is, $H_a$ has full rank if $K \geq 3$ and the locations of the accelerometer triads are non-collinear. Since the angular velocity is directly measured by the gyroscopes, this implies that the necessary and sufficient conditions for the parameters in the signal model to be identifiable are that: (i) the array has at least one gyroscope triad, and (ii) the array has at least three accelerometer triads whose locations span a 2D space.
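
The rank condition can be checked numerically, for example as in the following sketch (the placements are arbitrary example geometries):

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

def acc_model_rank(r):
    """Rank of the accelerometer regressor H_a; full rank (6) requires at least
    three triads at non-collinear positions."""
    K = r.shape[0]
    H_acc = np.hstack([np.tile(np.eye(3), (K, 1)),
                       np.vstack([-skew(r_k) for r_k in r])])
    return np.linalg.matrix_rank(H_acc)

collinear     = 0.01 * np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
non_collinear = 0.01 * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
print(acc_model_rank(collinear))      # 5: rank deficient, parameters not identifiable
print(acc_model_rank(non_collinear))  # 6: full rank, a planar (2D) geometry suffices
```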

II-E Maximum likelihood measurement fusion

Assuming the measurement error to be zero-mean and Gaussian distributed with the known covariance matrix $Q$, the log-likelihood function for the signal model (5) is given by

$\ln p(y; \omega, \theta) = c - \tfrac{1}{2}\big(y - h(\omega) - H\theta\big)^\top Q^{-1} \big(y - h(\omega) - H\theta\big),$   (6)

where $c$ is a constant. The maximum likelihood estimator for the parameters $\{\omega, \theta\}$ is thus given by

$\{\hat{\omega}, \hat{\theta}\} = \arg\max_{\omega,\,\theta}\; \ln p(y; \omega, \theta).$   (7)

Since the signal model (5) is partially linear, we may concentrate the log-likelihood function by first fixing the parameter $\omega$ and maximizing the likelihood function w.r.t. the linear parameters $\theta$, and then substituting the result back into the likelihood function [43]. For a fixed value of $\omega$, the solution to (7) is given by the weighted least squares solution [44]. That is,

$\hat{\theta}(\omega) = \big(H^\top Q^{-1} H\big)^{-1} H^\top Q^{-1} \big(y - h(\omega)\big).$   (8)

Substituting (8) back into (7) yields the concentrated maximum likelihood estimator

$\hat{\omega} = \arg\min_{\omega}\; \epsilon^\top(\omega)\, \epsilon(\omega),$   (9)

where

$\epsilon(\omega) = \Pi\, Q^{-1/2} \big(y - h(\omega)\big)$   (10)

and

$\Pi = I - Q^{-1/2} H \big(H^\top Q^{-1} H\big)^{-1} H^\top Q^{-1/2}.$   (11)

Thus, maximizing the concentrated likelihood is equivalent to solving a nonlinear least squares problem. Once the angular velocity estimate $\hat{\omega}$ has been found, the maximum likelihood estimate $\hat{\theta} = \hat{\theta}(\hat{\omega})$ can be calculated via (8).
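
A sketch of how the closed-form step (8) and the concentrated cost can be implemented is given below; it builds on the build_model() helper from the earlier sketch, and the names are illustrative.

```python
import numpy as np

def theta_hat(omega, y, r, n_gyro, Q_inv):
    """Weighted least squares solution for the linear parameters
    theta = [specific force; angular acceleration], given a fixed omega, cf. (8)."""
    h, H = build_model(r, n_gyro, omega)   # helper from the earlier sketch
    A = H.T @ Q_inv @ H
    b = H.T @ Q_inv @ (y - h)
    return np.linalg.solve(A, b)

def concentrated_cost(omega, y, r, n_gyro, Q_inv):
    """Concentrated negative log-likelihood (up to constants): the weighted
    squared residual after the linear parameters have been solved for."""
    h, H = build_model(r, n_gyro, omega)
    resid = y - h - H @ theta_hat(omega, y, r, n_gyro, Q_inv)
    return resid @ Q_inv @ resid
```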

As the error characteristics of ultralow-cost inertial sensors generally vary with motion dynamics and environmental factors such as temperature [45], the assumption that the error covariance matrix $Q$ is known may not be realistic in all situations. The covariance matrix may then be parameterized and included in the likelihood function, see e.g., [46]. A few other approaches for estimating the measurement error statistics in inertial sensor arrays are discussed in [6]. However, throughout the paper we will assume $Q$ to be known with sufficient accuracy.

Note that the proposed estimator, i.e., (8)-(11), can also be derived without the assumption of Gaussian distributed measurement errors by using the least squares framework. The Gaussian assumption was introduced to keep the derivation consistent with the assumptions used to derive the CRB presented in Section IV.

II-F Maximizing the concentrated likelihood function

The concentrated maximum likelihood estimation problem in (9) may be solved numerically using the Gauss-Newton algorithm [44]. That is, $\hat{\omega}$ can be iteratively calculated as

$\hat{\omega}^{(i+1)} = \hat{\omega}^{(i)} - \big(J^\top(\hat{\omega}^{(i)})\, J(\hat{\omega}^{(i)})\big)^{-1} J^\top(\hat{\omega}^{(i)})\, \epsilon(\hat{\omega}^{(i)}),$   (12)

where $i$ denotes the iteration index and $\hat{\omega}^{(0)}$ is an initial estimate of the angular velocity. A good initial estimate for the Gauss-Newton algorithm is given by the weighted least squares estimate of the angular velocity calculated from the gyroscope readings, i.e., $\hat{\omega}^{(0)} = \big((1_L \otimes I_3)^\top Q_g^{-1} (1_L \otimes I_3)\big)^{-1} (1_L \otimes I_3)^\top Q_g^{-1} y_g$, where $Q_g$ denotes the covariance matrix of the gyroscope measurement error $e_g$. The Jacobian $J(\omega)$ of $\epsilon(\omega)$ is given by

$J(\omega) = \frac{\partial\, \epsilon(\omega)}{\partial\, \omega} = -\Pi\, Q^{-1/2}\, \frac{\partial\, h(\omega)}{\partial\, \omega},$   (13)

where $\dfrac{\partial\, h(\omega)}{\partial\, \omega} = \begin{bmatrix} -\big([\omega \times r_1]_\times + [\omega]_\times [r_1]_\times\big) \\ \vdots \\ -\big([\omega \times r_K]_\times + [\omega]_\times [r_K]_\times\big) \\ 1_L \otimes I_3 \end{bmatrix}$.
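
A minimal implementation sketch of the Gauss-Newton iteration is shown below. It uses a forward-difference approximation of the Jacobian instead of a closed-form expression and reuses the helpers from the earlier sketches; the names are illustrative.

```python
import numpy as np

def residual(omega, y, r, n_gyro, Q_inv_sqrt):
    """Whitened concentrated residual eps(omega); build_model() and theta_hat()
    are the helpers defined in the earlier sketches."""
    h, H = build_model(r, n_gyro, omega)
    Q_inv = Q_inv_sqrt @ Q_inv_sqrt
    return Q_inv_sqrt @ (y - h - H @ theta_hat(omega, y, r, n_gyro, Q_inv))

def gauss_newton(y, r, n_gyro, Q_inv_sqrt, omega0, n_iter=10, delta=1e-6):
    """Gauss-Newton iterations on the concentrated least squares problem,
    with a forward-difference Jacobian."""
    omega = omega0.astype(float)
    for _ in range(n_iter):
        eps0 = residual(omega, y, r, n_gyro, Q_inv_sqrt)
        J = np.column_stack([
            (residual(omega + delta * e, y, r, n_gyro, Q_inv_sqrt) - eps0) / delta
            for e in np.eye(3)])
        omega = omega - np.linalg.lstsq(J, eps0, rcond=None)[0]
    return omega

# Initialization: (weighted) average of the gyroscope readings, e.g. for
# i.i.d. gyroscope errors:  omega0 = y_gyro.reshape(-1, 3).mean(axis=0)
```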

The performance of the proposed measurement fusion method is evaluated in Section V, where it is compared to the CRB derived in Section IV. Next, we will describe how the dynamic range of the angular velocity measurements of the array can be extended beyond the dynamic range of the gyroscopes used in the array.

III Angular Velocity Measurement Range Extension

The measurement range of contemporary ultralow-cost gyroscopes is generally limited to some thousands of degrees per second, whereas accelerometers with measurement ranges of thousands of meters per second squared are widely available. Thus, when designing sensor arrays to be used in high-dynamic applications such as biomedical tracking systems for sport activities, where the angular velocity may exceed thousands of degrees per second, see e.g., [47, 48], it may be difficult to find gyroscopes with a sufficient dynamic range. Consequently, the gyroscopes in the array sometimes get saturated. However, the proposed measurement fusion method can still be used to estimate the angular velocity by simply removing the saturated gyroscope readings from the measurement model and adapting the dimensions of the measurement error covariance matrix accordingly. (Theoretically, simply removing the measurements that are believed to be saturated will create a bias in the estimated quantities; however, our practical experience shows that this effect is negligible.)

When all of the gyroscopes are saturated, the measurement model (5) becomes equivalent to the model in (3), for which the angular velocity is not identifiable since $h_a(\omega) = h_a(-\omega)$. However, since the sign of the elements in the angular velocity vector can be deduced from the saturated gyroscopes, the angular velocity is still identifiable. In Fig. 3 an example of the angular velocity measurement range extension is shown, when applied to data collected by the array in Fig. 1.
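
A sketch of the row-removal step described above is given below; the names and the threshold handling are illustrative assumptions. The returned boolean mask must also be applied to the rows of $h(\omega)$ and $H$ before running the estimator, and when every gyroscope channel has been removed, the sign deduced from the saturated readings should be imposed on the initial estimate, as discussed above.

```python
import numpy as np

def drop_saturated(y, Q, n_acc, n_gyro, sat_level):
    """Remove saturated gyroscope channels from the measurement vector and shrink
    the covariance matrix accordingly (illustrative sketch)."""
    keep = np.ones(3 * (n_acc + n_gyro), dtype=bool)
    gyro = y[3 * n_acc:]
    keep[3 * n_acc:] = np.abs(gyro) < sat_level   # drop channels at the range limit
    return y[keep], Q[np.ix_(keep, keep)], keep
```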

(a) Overlaid measurements from all of the accelerometers in the array. The differences between the measurements are caused by the spatial separation of the sensors and can be used to estimate the angular velocity and angular acceleration of the array.

(b) Overlaid measurements from all of the gyroscopes in the array. The gyroscopes saturate at 2000 °/s.

(c) Angular velocity estimated using the proposed method. The angular velocity can be estimated even though the gyroscopes are saturated.
Fig. 3: An illustration of the angular velocity measurement range extension using the proposed method. The measurements were recorded while twisting the array in Fig. 1 by hand. The differences between the accelerometer measurements, see Fig. 3(a), are used to estimate the angular velocity, see Fig. 3(c), even when the gyroscopes are saturated, see Fig. 3(b).

When the gyroscopes are saturated, finding a good initialization of the Gauss-Newton algorithm becomes trickier and the convergence of the algorithm can be slow. In applications where the inertial sensor array is used on a regular basis to measure the specific force and angular velocity, an initial estimate may be found by predicting the angular velocity from the angular velocity and angular acceleration estimated at the previous time instant [31]. Another method to find an initial estimate when the gyroscopes are saturated and a 3D array is used is through the angular acceleration tensor method that is summarized in the Appendix.

The performance of the proposed measurement fusion method, for the case when the gyroscopes are saturated, is evaluated in Section V. The performance is compared to the CRB derived in the next section.

IV Cramér-Rao Bound

Since the measurement error in the signal model is assumed to be Gaussian distributed, a general expression for the CRB for the measurement fusion problem at hand is straightforward to derive starting from the results in [44, p. 49]. Assuming the parameter order $\eta = [\,\omega^\top \;\; \theta^\top]^\top$, the CRB becomes

$\mathrm{cov}(\hat{\eta}) \succeq \mathcal{I}^{-1}(\eta),$   (14)

and the Fisher information matrix is given by

$\mathcal{I}(\eta) = D^\top(\eta)\, Q^{-1}\, D(\eta),$   (15)

where $D(\eta) = \partial\big(h(\omega) + H\theta\big)/\partial\eta = \big[\; \partial h(\omega)/\partial\omega \;\; H \;\big]$. Here the notation $A \succeq B$ implies that $A - B$ is a positive semi-definite matrix. Since the measurement model is linear in $\theta$, the Fisher information matrix will not depend on the specific force and the angular acceleration. To gain a deeper understanding of the estimation problem at hand, we will study the CRB for a special set of array configurations next.
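
For arbitrary array geometries and covariance matrices, the Fisher information matrix can be evaluated numerically, for example as follows (building on build_model() from the earlier sketch; the finite-difference step size is an implementation choice):

```python
import numpy as np

def fisher_information(omega, r, n_gyro, Q_inv, delta=1e-6):
    """Fisher information for eta = [omega; theta] under the Gaussian model:
    I(eta) = D.T @ Q^{-1} @ D, with D the Jacobian of the mean h(omega) + H theta.
    A forward difference approximates the derivative of h w.r.t. omega."""
    h0, H = build_model(r, n_gyro, omega)
    dh = np.column_stack([
        (build_model(r, n_gyro, omega + delta * e)[0] - h0) / delta
        for e in np.eye(3)])
    D = np.hstack([dh, H])   # mean is nonlinear in omega, linear in theta
    return D.T @ Q_inv @ D

# The CRB is then the inverse: np.linalg.inv(fisher_information(...)).
```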

IV-A Square arrays with uncorrelated measurement errors

Consider the case where the following three conditions apply: (i) the accelerometers are mounted in a planar square grid with a spacing $d$ between each sensor, and the origin of the array coordinate system is defined at the center of the grid; (ii) the measurement errors are uncorrelated; and (iii) all of the accelerometers and gyroscopes have the same error variance. Then $Q = \sigma_a^2 I_{3K} \oplus \sigma_g^2 I_{3L}$, where $\oplus$ denotes the matrix direct sum. Further, $\sigma_a^2$ and $\sigma_g^2$ denote the measurement error variance of the accelerometers and gyroscopes, respectively. Moreover, thanks to the symmetry of the array, $\sum_k r_k = 0$. The Fisher information matrix then takes the following form

(16)

where

(17)
(18)
(19)

Here $\partial h_a(\omega)/\partial\omega$ denotes the Jacobian of $h_a(\omega)$. From the Fisher information matrix in (16) we can see that for the considered type of array, the CRB for the specific force is given by $(\sigma_a^2 / K)\, I_3$, where $K$ is the number of accelerometer triads. That is, the accuracy with which the specific force can be estimated is inversely proportional to the number of accelerometer triads in the array, and independent of the angular velocity and geometry of the array. This is quite intuitive since, for a symmetric array with a set of accelerometer triads having the same measurement error characteristics (measurement error covariance), the maximum likelihood estimator for the specific force is given by the arithmetic mean of the accelerometer measurements.
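
As a short worked example of this statement (using the notation of Section II and assuming i.i.d. accelerometer errors with variance $\sigma_a^2$): for a symmetric array $\sum_k r_k = 0$, so the centrifugal and Euler terms cancel in the average of the triad measurements $y_{a,k}$, and

$\hat{s} = \frac{1}{K}\sum_{k=1}^{K} y_{a,k} = s + \frac{1}{K}\sum_{k=1}^{K} e_{a,k}, \qquad \mathrm{cov}(\hat{s}) = \frac{\sigma_a^2}{K}\, I_3,$

which exhibits the stated $1/K$ scaling of the bound.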

Using the Schur complement, the CRB for the angular velocity can be found to be

(20)

where

(21)

and

(22)
(23)
(24)

Next, using the symmetric properties of the considered array geometry and performing tedious but straightforward calculations gives that

(25)

Note that the assumption that the accelerometers are mounted in a planar square grid implies that the sensor positions have no out-of-plane component. From the expression for the Fisher information matrix in (25), we can see that when the array is stationary no rotational information is gained from the accelerometers, and the CRB becomes equivalent to the covariance of the arithmetic mean of the gyroscope measurement errors. When the array starts rotating, the accelerometers will start to provide rotational information, and the accuracy with which the angular velocity can be estimated increases proportionally to the squared angular rates. Further, the rotational information gained from the accelerometers scales quadratically with the distance between the sensors.

When all gyroscopes are saturated, i.e., when the angular velocity exceeds the dynamic range of the gyroscopes, no information other than the sign of the angular velocity is provided by the gyroscopes and the angular velocity must be estimated solely from the accelerometers. As the sign information can be seen as a special case of inequality constraints, and inequality constraints generally do not have any effect on the CRB [49], the Fisher information for this case can be found by letting $\sigma_g^2 \to \infty$. The inverse of the Fisher information matrix (25) then takes the following simple form

(26)

where the values of the non-zero diagonal elements have been left out for brevity. From this, we can see that in an array where the gyroscopes have an infinitely small dynamic range, i.e., the gyroscopes only provide information regarding the sign of the angular velocity, the covariance of the angular velocity estimates tends toward infinity as the angular velocity goes toward zero. Hence, for so-called gyroscope-free IMUs, i.e., arrays of only accelerometers, there is no estimator that can provide an unbiased, finite-variance estimate of small angular velocities. Therefore, the practical use of gyroscope-free IMUs in inertial navigation systems for low-dynamic applications is questionable.

The CRB for the angular acceleration is given by

(27)

where

(28)

From (28) we can note that the accuracy with which the angular acceleration can be estimated, somewhat surprisingly, decreases with an increasing angular velocity. The best accuracy is achieved when the array is in linear motion ($\omega = 0$) and is given by

(29)

V Simulation and Experiments

In this section the performance of the proposed maximum likelihood measurement fusion method will be evaluated through simulations and real-world experiments. The results will be compared to the derived CRB.

V-A Simulations

(a) Planar geometry
(b) Non-planar geometry
Fig. 4: Geometries of the accelerometer triads used in the Monte Carlo simulations.

To evaluate the accuracy of the estimator we have conducted two Monte Carlo simulations, where each Monte Carlo simulation runs the proposed measurement fusion method on independent data realizations. The array considered in the simulations consists of four accelerometer triads and four gyroscope triads, where the accelerometer triads are mounted as illustrated in Fig. 4(a). Note that all of the arrays that fulfill the assumptions made in Section IV-A can be transformed into an equivalent array with a geometry as in Fig. 4(a), plus possible additional sensors located at the origin. The measurement errors of the accelerometers and the gyroscopes were assumed to be uncorrelated and to have the standard deviations $\sigma_a$ and $\sigma_g$, respectively. Further, the gyroscopes were assumed to saturate at 2000 °/s. These parameter values were selected to reflect the performance that can be expected from ultralow-cost inertial sensors during high-dynamic operations, when scale-factor, g-sensitivity, and cross-axis sensitivity errors become the main error sources. The small value selected for the accelerometer measurement error variance is motivated by the fact that the accelerometers in the array can easily be calibrated using, e.g., the method in [41]. A calibration of the gyroscopes is complicated and requires a rotational rig. If the accelerometers were uncalibrated, then the standard deviation of the accelerometer measurement errors would be approximately an order of magnitude higher; refer to [41] for typical error figures before and after sensor calibration.
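
The structure of such a Monte Carlo evaluation is sketched below; the noise levels, array dimensions, and rotation rate are placeholder values, and build_model() and gauss_newton() are the helpers from the earlier sketches.

```python
import numpy as np
rng = np.random.default_rng(0)

sigma_a, sigma_g = 0.1, np.deg2rad(1.0)      # placeholder noise std [m/s^2], [rad/s]
r = 0.01 * np.array([[1, 1, 0], [1, -1, 0], [-1, -1, 0], [-1, 1, 0]])  # 2 cm square
n_gyro, n_runs = 4, 1000
Q = np.diag(np.r_[np.full(3 * len(r), sigma_a**2), np.full(3 * n_gyro, sigma_g**2)])
Q_inv = np.linalg.inv(Q)
Q_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Q)))

omega_true = np.array([10.0, 0.0, 0.0])      # rotation about the x-axis [rad/s]
theta_true = np.r_[0.0, 0.0, 9.81, 0.0, 0.0, 0.0]
h, H = build_model(r, n_gyro, omega_true)
mean_y = h + H @ theta_true

err = np.zeros((n_runs, 3))
for i in range(n_runs):
    y = mean_y + np.sqrt(np.diag(Q)) * rng.standard_normal(len(mean_y))
    omega0 = y[3 * len(r):].reshape(-1, 3).mean(axis=0)   # gyroscope average
    omega_hat = gauss_newton(y, r, n_gyro, Q_inv_sqrt, omega0)
    err[i] = omega_hat - omega_true
print("angular velocity RMSE [rad/s]:", np.sqrt((err**2).mean(axis=0)))
```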

The root mean square errors (RMSE) calculated from the Monte Carlo simulations, together with the corresponding square roots of the CRBs, are shown in Fig. 5. In Fig. 5(a) the result when the array is rotated around the x-axis, i.e., an in-plane rotation, is shown. Clearly, no information about the angular rate around the z-axis (pointing out of the plane) is gained from the accelerometers, whereas the accelerometer measurements contribute to the estimation of the angular rates around the x-axis and y-axis. In Fig. 5(b) the result when the array is rotated around the z-axis, i.e., an out-of-plane rotation, is shown. In this case no information about the rotations around the x-axis and y-axis is gained from the accelerometers; only information about the rotation around the z-axis is gained. The estimation accuracy achieved by simply averaging the gyroscope measurements is illustrated by the grey horizontal line in the figures.

(a) Array rotating around the x-axis, i.e., an in-plane rotation.

(b) Array rotating around the z-axis, i.e., an out-of-plane rotation.
Fig. 5: The angular velocity estimation RMSE of the proposed measurement fusion method for a planar array with the geometry illustrated in Fig. 4(a). The estimation accuracy obtained by simply averaging the gyroscope measurements is given by the gray horizontal line.

To compare the proposed measurement fusion method with the angular acceleration tensor method commonly used in gyroscope-free IMUs, a Monte Carlo simulation using the array geometry illustrated in Fig. 4(b) was also conducted; the angular acceleration tensor method is summarized in the Appendix. The non-planar array geometry is needed for the tensor method to work. The result of the simulation is shown in Fig. 6, from which it can be seen that the tensor method does not achieve the CRB and is outperformed by the proposed method. Note that the tensor method only uses the measurements from the accelerometers and the sign information from the gyroscopes, and therefore the behavior of the two methods should only be compared for the part of the simulation where all of the gyroscopes are saturated, i.e., above the gyroscope saturation level.


Fig. 6: The angular velocity estimation RMSE of the proposed maximum likelihood measurement fusion method and the angular acceleration tensor method. The geometry of the considered array is illustrated in Fig. 4(b) and the array is rotating around all of the axes. The estimation accuracy obtained by simply averaging the gyroscope measurements is given by the gray horizontal line.

V-B Experiments

To evaluate the proposed method with real-world data, the in-house designed inertial sensor array shown in Fig. 1 was mounted in a mechanical rotation rig and data was recorded at different angular speeds. At each angular speed, data corresponding to 100 time instants were recorded. The data was processed using the proposed maximum likelihood method with the same noise variance settings as used in the simulations, and the RMSE of the estimated angular speed was calculated; the angular speed, and not the angular velocity, was chosen as the evaluation criterion because of the practical problem of accurately aligning the coordinate axes of the array with the rotation axis of the rotation rig. The result is displayed in Fig. 7, and it shows that the proposed method works but does not achieve the accuracy predicted by the theoretical model. Several factors, such as sensor scale factor errors, misalignment between the sample instances of the sensors, and uncertainties in the location of the sensing elements, have been identified as likely sources for the discrepancy. For illustrational purposes, the result of a Monte-Carlo simulation resembling the experimental setup, where random errors with a standard deviation of 0.1 mm were added to the sensor locations, is also shown in Fig. 7. (Placement errors in the order of 0.1 mm are likely to arise in the PCB manufacturing and population process.) More experimental results can be found in [50], where the performance of an earlier revision of the array when used in a foot-mounted inertial navigation system was evaluated.


Fig. 7: RMSE of the angular speed calculated from real-world data. Also shown is the RMSE of the angular speed estimated from simulated data that includes random sensor location errors.

VI Summary, Conclusions, and Future Research

Approximately 5 billion smartphones were sold in the five-year span 2011-2015, of which most were equipped with some type of inertial motion sensor. Thus, the smartphone industry has become a driving force in the development of ultralow-cost inertial sensors, and six-degree-of-freedom inertial sensor assemblies are sold to large-volume customers for less than a dollar. Unfortunately, these ultralow-cost sensors do not yet meet the needs of more demanding applications like inertial navigation and biomedical motion tracking systems. However, by adopting a wisdom-of-the-crowd way of thinking and designing arrays consisting of hundreds of sensing elements, one can capitalize on the decreasing cost, size, and power consumption of the sensors to construct virtual high-performance low-cost inertial sensors. Accordingly, we have proposed a maximum likelihood method to fuse the measurements in arrays consisting of multiple accelerometer and gyroscope triads. The proposed method has been evaluated through simulations as well as real-world tests with an in-house designed array with 192 sensing elements. The simulation results show that the proposed method attains the CRB and outperforms the current state-of-the-art method. Further, compared to the state-of-the-art method, the proposed method also works with 2D arrays, which is fundamental for the production of arrays on a single printed circuit board. Moreover, by utilizing the spatial separation between the accelerometers, the angular velocity measurement range can be extended beyond that of the gyroscopes. The experimental results show that the proposed method works, but does not achieve the accuracy predicted by the theoretical model. Uncertainties in the position of the sensing elements have been identified as a likely source for the discrepancy. Further research is needed to verify the source of the problem, and new inertial sensor array calibration methods must be developed. Moreover, information fusion methods that also consider the time development of the sensor signals and allow for simultaneous calibration and motion tracking should be investigated. Another open research question is how to design a measurement error covariance estimation method that enables automatic weighting of the sensor measurements.

Appendix

The angular acceleration tensor method used in the simulations is summarized here. For a more detailed description, see [28]. Introducing the angular acceleration tensor and using the properties of the cross product, the model for the specific force (2) can be rewritten as

(30)

The output of the accelerometer triads can be described by the following linear matrix equation

(31)

where

(32)
(33)

Here $y_k$ and $e_k$ denote the measurement and measurement error of the $k$:th accelerometer triad. Neglecting the fact that the tensor only has six degrees of freedom, the least squares estimate of the matrix is given by

(34)

Noteworthy is that, since the estimated matrix is of size 3-by-4, the measurements from at least four accelerometer triads are needed. Further, for the regressor matrix to have full row rank and the estimation problem to be well defined, the locations of the accelerometer triads must span a three-dimensional space. From the estimated tensor, the angular acceleration can be calculated as

(35)

The angular velocity can, up to a sign ambiguity, be calculated from the left-hand side of the equality

(36)

Here $\mathrm{tr}(\cdot)$ denotes the trace operator.
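
A compact Python sketch of the tensor method, as we read the description around (30)-(36), is given below; the names are illustrative. Note that it returns only the element-wise magnitudes of the angular velocity, so the sign information must be taken from the gyroscopes.

```python
import numpy as np

def tensor_method(y_acc, r):
    """Angular acceleration tensor (least squares) method: estimate the 3-by-4
    matrix from the accelerometer measurements, then read off the specific force,
    the angular acceleration, and (up to sign) the angular velocity.
    Requires at least four triads whose positions span a 3D space."""
    K = r.shape[0]
    Y = y_acc.reshape(K, 3).T                  # 3 x K measurement matrix
    R = np.vstack([r.T, np.ones(K)])           # 4 x K regressor matrix
    W = Y @ R.T @ np.linalg.inv(R @ R.T)       # least squares estimate

    A, s_hat = W[:, :3], W[:, 3]               # A ~ [omega]_x^2 + [omega_dot]_x
    skew_part = 0.5 * (A - A.T)                # antisymmetric part -> angular acc.
    omega_dot_hat = np.array([skew_part[2, 1], skew_part[0, 2], skew_part[1, 0]])
    S = 0.5 * (A + A.T)                        # symmetric part ~ w w^T - |w|^2 I
    outer = S - 0.5 * np.trace(S) * np.eye(3)  # ~ omega omega^T
    omega_abs = np.sqrt(np.clip(np.diag(outer), 0.0, None))
    return s_hat, omega_dot_hat, omega_abs     # |omega| elementwise, sign unresolved
```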

References

  • [1] D. Shaeffer, “MEMS inertial sensors: A tutorial overview,” IEEE Commun. Mag., vol. 51, no. 4, pp. 100–109, Apr. 2013.
  • [2] M. Perlmutter and L. Robin, “High-performance, low cost inertial MEMS: A market in motion!” in Proc. IEEE/ION Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, Apr. 2012, pp. 225–229.
  • [3] G. Girardin and E. Mounier, “6- & 9-axis sensors consumer inertial combos,” Yole Développement, Tech. Rep., 2014. [Online]. Available: http://www.yole.fr/iso_upload/Samples/Yole_6_and_9-Axis_Sensors_Consumer_Inertial_Combos.pdf
  • [4] D. Bittner, J. Christian, R. Bishop, and D. May, “Fault detection, isolation, and recovery techniques for large clusters of inertial measurement units,” in Proc. IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, May 2014, pp. 219–229.
  • [5] J. W. Song and C. G. Park, “Optimal configuration of redundant inertial sensors considering lever arm effect,” IEEE Sensors J., vol. 16, no. 9, pp. 3171–3180, May 2016.
  • [6] A. Waegli, J. Skaloud, S. Guerrier, M. Parés, and I. Colomina, “Noise reduction and estimation in multiple micro-electro-mechanical inertial systems,” Measurement Science and Technology, vol. 21, 2010.
  • [7] I. Skog, J.-O. Nilsson, and P. Händel, “An open-source multi inertial measurement unit (MIMU) platform,” in Proc. International Symposium on Inertial Sensors and Systems (ISISS), Laguna Beach, CA, USA, Feb. 2014.
  • [8] H. Krim and M. Viberg, “Two decades of array signal processing research: the parametric approach,” IEEE Signal Process. Mag., vol. 13, no. 4, pp. 67–94, Jul. 1996.
  • [9] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramer-Rao bound,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 5, pp. 720–741, May 1989.
  • [10] M. Viberg and B. Ottersten, “Sensor array processing based on subspace fitting,” IEEE Trans. Signal Process., vol. 39, no. 5, pp. 1110–1121, May 1991.
  • [11] A. Nehorai and E. Paldi, “Vector-sensor array processing for electromagnetic source localization,” IEEE Trans. Signal Process., vol. 42, no. 2, pp. 376–398, Feb. 1994.
  • [12] ——, “Acoustic vector-sensor array processing,” IEEE Trans. Signal Process., vol. 42, no. 9, pp. 2481–2491, Sep. 1994.
  • [13] A. Abdi and H. Guo, “Signal correlation modeling in acoustic vector sensor arrays,” IEEE Trans. Signal Process., vol. 57, no. 3, pp. 892–903, Mar. 2009.
  • [14] N. Zou and A. Nehorai, “Circular acoustic vector-sensor array for mode beamforming,” IEEE Trans. Signal Process., vol. 57, no. 8, pp. 3041–3052, Aug. 2009.
  • [15] A. Jeremic and A. Nehorai, “Landmine detection and localization using chemical sensor array processing,” IEEE Trans. Signal Process., vol. 48, no. 5, pp. 1295–1305, May 2000.
  • [16] J.-O. Nilsson and I. Skog, “Inertial sensor arrays – A literature review,” in Proc. European Navigation Conference, Helsinki, Finland, May 2016.
  • [17] T. R. Williams, D. W. Raboud, and K. R. Fyfe, “Minimal spatial accelerometer configurations,” J. Dyn. Sys., Meas., Control, vol. 135, p. 021016, 2013.
  • [18] C.-W. Tan and S. Park, “Design of accelerometer-based inertial navigation systems,” IEEE Trans. Instrum. Meas., vol. 54, no. 6, pp. 2520–2530, Dec. 2005.
  • [19] A. R. Schuler, A. Grammatikos, and K. A. Fegley, “Measuring rotational motion with linear accelerometers,” IEEE Trans. Aerosp. Electron. Syst., vol. AES-3, no. 3, pp. 465–472, May 1967.
  • [20] P. Schopp, H. Graf, M. Maurer, M. Romanovas, L. Klingbeil, and Y. Manoli, “Observing relative motion with three accelerometer triads,” IEEE Trans. Instrum. Meas., vol. 63, no. 12, pp. 3137–3151, Dec. 2014.
  • [21] S. Sukkarieh, P. Gibbens, B. Grocholsky, K. Willis, and H. F. Durrant-Whyte, “A low-cost, redundant inertial measurement unit for unmanned air vehicles,” Int. J. Robot. Res., vol. 19, pp. 1089–1103, 2000.
  • [22] G. Chatterjee, L. Latorre, F. Mailly, P. Nouet, N. Hachelef, and C. Oudea, “Smart-MEMS based inertial measurement units: Gyro-free approach to improve the grade,” Microsystem Technologies, pp. 1–10, Dec. 2015.
  • [23] H. Naseri and M. Homaeinezhad, “Improving measurement quality of a MEMS-based gyro-free inertial navigation system,” Sensors and Actuators A: Physical, vol. 207, pp. 10–19, Mar. 2014.
  • [24] P. He and P. Cardou, “Estimating the angular velocity from body-fixed accelerometers,” J. Dyn. Sys., Meas., Control, vol. 134, no. 6, p. 061015, 2012.
  • [25] S. O. Madgwick, A. J. Harrison, P. M. Sharkey, R. Vaidyanathan, and W. S. Harwin, “Measuring motion with kinematically redundant accelerometer arrays: Theory, simulation and implementation,” Mechatronics, vol. 23, no. 5, pp. 518–529, 2013.
  • [26] S. Park and S. K. Hong, “Angular rate estimation using a distributed set of accelerometers,” Sensors, vol. 11, no. 11, pp. 10 444–10 457, 2011.
  • [27] P. Schopp, L. Klingbeil, C. Peters, and Y. Manoli, “Design, geometry evaluation, and calibration of a gyroscope-free inertial measurement unit,” Sensors and Actuators A: Physical, vol. 162, no. 2, pp. 379 – 387, Aug. 2010.
  • [28] K. Parsa, J. Angeles, and A. K. Misra, “Rigid-body pose and twist estimation using an accelerometer array,” Archive of Applied Mechanics, vol. 74, no. 3-4, pp. 223–236, 2004.
  • [29] P. Cardou and J. Angeles, “Angular velocity estimation from the angular acceleration matrix,” Journal of Applied Mechanics, vol. 75, 2008.
  • [30] Z. Qin, L. Baron, and L. Birglen, “Robust design of inertial measurement units based on accelerometers,” J. Dyn. Sys., Meas., Control, vol. 131, no. 3, p. 031010, 2009.
  • [31] K. Parsa, T. A. Lasky, and B. Ravani, “Design and implementation of a mechatronic, all-accelerometer inertial measurement unit,” IEEE/ASME Trans. Mechatronics, vol. 12, no. 6, pp. 640–650, Dec. 2007.
  • [32] C. Liu, S. Zhang, S. Yu, X. Yuan, and S. Liu, “Design and analysis of gyro-free inertial measurement units with different configurations,” Sensors and Actuators A: Physical, vol. 214, pp. 175–186, Aug. 2014.
  • [33] E. Edwan, S. Knedlik, and O. Loffeld, “Constrained angular motion estimation in a gyro-free IMU,” IEEE Trans. Aerosp. Electron. Syst., vol. 47, no. 1, pp. 596–610, January 2011.
  • [34] C. Jiang, L. Xue, H. Chang, G. Yuan, and W. Yuan, “Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling,” Sensors, vol. 12, no. 2, pp. 1720–1737, 2012.
  • [35] Y. Yuksel and N. El-Sheimy, “An optimal sensor fusion method for skew redundant inertial measurement units,” Journal of Applied Geodesy, vol. 5, pp. 99–115, 2011.
  • [36] L. Xue, C.-Y. Jiang, H.-L. Chang, Y. Yang, W. Qin, and W.-Z. Yuan, “A novel Kalman filter for combining outputs of MEMS gyroscope array,” Measurement, vol. 45, no. 4, pp. 745 – 754, 2012.
  • [37] J. Bancroft and G. Lachapelle, “Data fusion algorithms for multiple inertial measurement units,” Sensors, vol. 12, pp. 3720–3738, 2011.
  • [38] M. Jafari, T. Najafabadi, B. Moshiri, S. Tabatabaei, and M. Sahebjameyan, “PEM stochastic modeling for MEMS inertial sensors in conventional and redundant IMUs,” IEEE Sensors J., vol. 14, no. 6, pp. 2019–2027, June 2014.
  • [39] S. Guerrier, “Improving accuracy with multiple sensors: Study of redundant MEMS-IMU/GPS configurations,” in Proc. of the 22nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS), Savannah, GA, Sep. 2009.
  • [40] J. Bancroft, “Multiple IMU integration for vehicular navigation,” in Proc. of the 22nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS), Savannah, GA, Sep. 2009.
  • [41] J. O. Nilsson, I. Skog, and P. Händel, “Aligning the forces – eliminating the misalignments in IMU arrays,” IEEE Trans. Instrum. Meas., vol. 63, no. 10, pp. 2498–2500, Oct. 2014.
  • [42] C. Jekeli, Inertial Navigation Systems with Geodetic Applications.   Walter de Gruyter, 2001.
  • [43] P. Stoica and A. Nehorai, “On the concentrated stochastic likelihood function in array signal processing,” Circuits, Systems and Signal Processing, vol. 14, no. 5, pp. 669–674, 1995.
  • [44] S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory.   Prentice Hall, 1993.
  • [45] M. El-Diasty, A. El-Rabbany, and S. Pagiatakis, “Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error,” Meas. Sci. Technol., vol. 18, no. 11, pp. 3321–3328, Sep. 2007.
  • [46] T. Li and A. Nehorai, “Maximum likelihood direction finding in spatially colored noise fields using sparse sensor arrays,” IEEE Trans. Signal Process., vol. 59, no. 3, pp. 1048–1062, Mar. 2011.
  • [47] M. Lapinski, E. Berkson, T. Gill, M. Reinold, and J. Paradiso, “A distributed wearable, wireless sensor system for evaluating professional baseball pitchers and batters,” in Proc. International Symposium on Wearable Computers (ISWC), Linz, Austria, Sep. 2009, pp. 131–138.
  • [48] D. Camarillo, P. Shull, J. Mattson, R. Shultz, and D. Garza, “An instrumented mouthguard for measuring linear and angular head impact kinematics in American football,” Annals of Biomedical Engineering, vol. 41, no. 9, pp. 1939–1949, 2013.
  • [49] J. Gorman and A. Hero, “Lower bounds for parametric estimation with constraints,” IEEE Trans. Inf. Theory, vol. 36, no. 6, pp. 1285–1301, Nov. 1990.
  • [50] I. Skog, J.-O. Nilsson, and P. Händel, “Pedestrian tracking using an IMU array,” in Proc. IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, Jan. 2014.

Isaac Skog(S’09-M’10) received the BSc and MSc degrees in Electrical Engineering from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2003 and 2005, respectively. In 2010, he received the Ph.D. degree in Signal Processing with a thesis on low-cost navigation systems. In 2009, he spent 5 months at the Mobile Multi-Sensor System research team, University of Calgary, Canada, as a visiting scholar and in 2011 he spent 4 months at the Indian Institute of Science (IISc), Bangalore, India, as a visiting scholar. He is currently a Researcher at KTH coordinating the KTH Insurance Telematics Lab. He was a recipient of a Best Survey Paper Award by the IEEE Intelligent Transportation Systems Society in 2013.

John-Olof Nilsson (M’14) received the M.Sc. degree in Engineering Physics from Royal Institute of Technology, Stockholm, Sweden, in 2008. In 2013, he received the Ph.D. degree in Signal Processing with a thesis on infrastructure-free pedestrian localization. In 2011, he spent 4 months at the Indian Institute of Science (IISc), Bangalore, India as a visiting scholar and in 2014 he spent 3 months at the Indian Institute of Technology (IIT) Kanpur, India, as a visiting Scholar. He was a recipient of the Best Demonstration Award at the IEEE 2014 International Conference on Indoor Positioning and Indoor Navigation, Busan, Korea.

Peter Händel(S’88-M’94-SM’98) received a Ph.D. degree from Uppsala University, Uppsala, Sweden, in 1993. From 1987 to 1993, he was with Uppsala University. From 1993 to 1997, he was with Ericsson AB, Kista, Sweden. From 1996 to 1997, he was a Visiting Scholar with the Tampere University of Technology, Tampere, Finland. Since 1997, he has been with the KTH Royal Institute of Technology, Stockholm, Sweden, where he is currently a Professor of Signal Processing and the Head of the Department of Signal Processing. From 2000 to 2006, he held an adjunct position at the Swedish Defence Research Agency. He has been a Guest Professor at the Indian Institute of Science (IISc), Bangalore, India, and at the University of Gävle, Sweden. He is a co-founder of Movelo AB. Dr. Händel has served as an associate editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING. He was a recipient of a Best Survey Paper Award by the IEEE Intelligent Transportation Systems Society in 2013.

Arye Nehorai (S’80–M’83–SM’90–F’94) is the Eugene and Martha Lohman Professor and Chair of the Preston M. Green Department of Electrical and Systems Engineering (ESE), Professor in the Department of Biomedical Engineering (by courtesy) and in the Division of Biology and Biomedical Studies (DBBS) at Washington University in St. Louis (WUSTL). He serves as the Director of the Center for Sensor Signal and Information Processing at WUSTL. Under his leadership as department chair, the undergraduate enrollment has more than tripled in the last four years. Earlier, he was a faculty member at Yale University and the University of Illinois in Chicago. He received both B.Sc. and M.Sc. degrees from the Technion, Israel and a Ph.D. from Stanford University, California.

Dr. Nehorai served as Editor-in-Chief of the IEEE Transactions on Signal Processing from 2000 to 2002. From 2003 to 2005 he was the Vice President (Publications) of the IEEE Signal Processing Society (SPS), the Chair of the Publications Board, and a member of the Executive Committee of this Society. He was the founding editor of the special columns on Leadership Reflections in the IEEE Signal Processing Magazine from 2003 to 2006. Dr. Nehorai received the 2006 IEEE SPS Technical Achievement Award and the 2010 IEEE SPS Meritorious Service Award. He was elected Distinguished Lecturer of the IEEE SPS for a term lasting from 2004 to 2005. He received several best paper awards in IEEE journals and conferences. In 2001 he was named University Scholar of the University of Illinois. Dr. Nehorai was the Principal Investigator of the Multidisciplinary University Research Initiative (MURI) project entitled Adaptive Waveform Diversity for Full Spectral Dominance from 2005 to 2010. He has been a Fellow of the IEEE since 1994, a Fellow of the Royal Statistical Society since 1996, and a Fellow of AAAS since 2012.