
# Model-Based Learning of Turbulent Flows using a Mobile Robot

## Abstract

We consider the problem of model-based learning of turbulent flows using mobile robots. Specifically, we use empirical data to improve on numerical solutions obtained from Reynolds-Averaged Navier Stokes (RANS) models. RANS models are computationally efficient but rely on assumptions that require experimental validation. Here we construct statistical models of the flow properties using Gaussian processes (GPs) and rely on the numerical solutions to inform their mean. Utilizing Bayesian inference, we incorporate measurements of the time-averaged velocity and turbulent intensity into these GPs. We account for model ambiguity and parameter uncertainty, via Bayesian model selection, and for measurement noise by systematically incorporating it in the GPs. To collect the measurements, we control a custom-built mobile robot through a sequence of waypoints that maximize the information content of the measurements. The end result is a posterior distribution of the flow field that better approximates the real flow and quantifies the uncertainty in its properties. We experimentally demonstrate a considerable improvement in the prediction of these properties compared to numerical solutions.

**Keywords:** Model-based learning, active sensing, mobile sensors, Gaussian processes, turbulent flow.

## I Introduction

Knowledge of turbulent flow properties, e.g., velocity and turbulent intensity, is of paramount importance for many engineering applications. At large scales, these properties are used for the study of ocean currents and their effects on aquatic life [1, 2, 3], meteorology [4], bathymetry [5], and localization of atmospheric pollutants [6], to name a few. At smaller scales, knowledge of flow fields is important in applications ranging from optimal HVAC of residential buildings for human comfort [7] to design of drag-efficient bodies in aerospace and automotive industries [8]. At even smaller scales, the characteristics of velocity fluctuations in vessels are important for vascular pathology and diagnosis [9] or for the control of bacteria-inspired uniflagellar robots [10]. Another important application that requires global knowledge of the velocity field is chemical source identification in advection-diffusion systems [11, 12, 13].

The most cost-effective way to estimate a flow field is using numerical simulation and the Navier-Stokes (NS) equations. These equations are a set of partial differential equations (PDEs) derived from first principles, i.e., conservation of mass, momentum, and energy, that describe the relationship between flow properties [14]. They can be simplified depending on the nature of the flow, e.g., laminar versus turbulent or incompressible versus compressible, to yield simpler, more manageable equations. Nevertheless, when the flow is turbulent, solving the NS equations becomes very challenging due to the various time and length scales that are present in the problem. Turbulent flow is characterized by random fluctuations in the flow properties and is identified by high Reynolds numbers [15]. In many of the applications discussed before, especially when air is the fluid of interest, the flow is turbulent even for very small velocities. In this case, direct numerical simulation (DNS) of the NS equations is still limited to simple problems [16]. A more effective approach is to solve the time-averaged models called Reynolds-averaged Navier Stokes equations, or RANS for short. RANS models are useful since in many engineering applications we are interested in averaged properties of the flow, e.g., the mean velocity components, and not the instantaneous fluctuations. Note that despite its chaotic nature, turbulent flow has structure; it consists of eddies that are responsible for the generation and dissipation of kinetic energy. Understanding these structures is essential for proper modeling of turbulence; see [17] for more details.

Averaging the flow properties in RANS models introduces additional unknowns, called Reynolds stresses, in the momentum equation that require proper modeling to complete the set of equations that are solved. This is known as the 'closure problem'. Depending on how this problem is addressed, various RANS models exist that are categorized into two large classes, eddy viscosity models (EVMs) and Reynolds stress models (RSMs); see [17] for more details. The former assume that the Reynolds stress is proportional to the strain, defining in this way an effective turbulent viscosity, while in the latter a transport equation is derived for the Reynolds stress components and is solved together with the averaged flow equations. EVMs have an assumption of isotropy built into them, meaning that the Reynolds stresses are direction-independent. Although this assumption is not valid for flow fields that have strong asymmetries, there are many benchmark problems for which the results from EVMs are in better agreement with empirical data than RSMs; see e.g. [18]. In general, solutions provided by these RANS models are not compatible with each other or with empirical observations and require experimental validation [19]. Furthermore, precise knowledge of the boundary conditions (BCs) and domain geometry is often unavailable, which contributes to even more inaccuracy in the predicted flow properties. Finally, appropriate meshing is necessary to properly capture the boundary layer and obtain convergent solutions from RANS models, which is not a straightforward process.

In order to address the shortcomings of RANS models and complement the associated numerical methods, various experimental techniques have been developed that directly measure the flow properties. These methods can also be used as stand-alone approaches, alleviating the need for accurate knowledge of the BCs and geometry. A first approach of this kind utilizes hot-wire constant temperature anemometers. A hot-wire anemometer is a very thin wire that is placed in the flow and can measure flow fluctuations up to very high frequencies. The fluctuations are directly related to the heat transfer rate from the wire [20]. Hot-wire sensors are very cost-effective but require placing equipment in the flow, which might interfere with the flow pattern and is not always possible. A later generation of approaches comprises optical methods that seed particles into the flow and track them optically. Two of the most important variants of optical methods are particle imaging velocimetry (PIV) and laser-Doppler anemometry (LDA). In PIV, two consecutive snapshots of the particles in the flow are obtained and their spatial correlation is used to determine the shift and the velocity field [21]. On the other hand, in LDA a pair of intersecting beams create a fringe in the desired measurement location. The light from a particle passing through this fringe pulsates, and the frequency of this pulse together with the fringe spacing determines the flow speed [22]. Optical methods, although non-intrusive, require seeding particles with the proper density and are expensive. Another sensing approach utilizes microelectromechanical systems (MEMS) for flow measurements. Because of their extremely small size and low cost, these devices are a very attractive alternative to the aforementioned methods [23].

Empirical data from experiments are prone to noise and error and are not always available for various reasons, e.g., cost and accessibility, scale of the domain, and environmental conditions. On the other hand, as discussed before, numerical simulations are inaccurate and require validation but are cost-effective and can provide global information over the whole domain. In this paper we propose a model-based learning method that combines numerical solutions with empirical data to improve on the estimation accuracy of flow properties. Specifically, we construct a statistical model of the time-averaged flow properties using Gaussian processes (GPs) and employ the physics of the problem, captured by RANS models, to inform their prior mean. We construct GPs for different RANS models and realizations of the random BCs and other uncertain parameters. We also derive expressions for measurement noise and systematically incorporate it into our GP models. To collect the empirical data, we use a custom-built mobile robot sensor that collects and analyzes instantaneous velocity readings and extracts time-averaged velocity and turbulent intensity measurements. Specifically, we formulate an online path planning method that guides the robot through measurement locations that maximize the amount of information collected about the flow field. Then, using Bayesian inference and hierarchical model selection, we obtain the posterior probability of the GPs and the flow properties for each model after conditioning on the empirical data. The proposed robotic approach to learning the properties of turbulent flows allows for fewer, more informative measurements to be collected. Therefore, it is more efficient, autonomous, and can be used for learning in domains that are not already equipped with stationary sensor grids.

## II Related Work

### II-A Machine Learning in Fluid Dynamics

Machine learning (ML) algorithms have been used to help characterize flow fields. One way to do this is by model reduction, which speeds up the numerical solutions [24, 25, 26, 27]. Particularly, the authors in [24] utilize neural networks to reduce the order of the physical model and introduce a correction term in the Taylor expansion of the near-wall equations that improves the approximations considerably. The work in [25] uses deep learning to construct an efficient, stable, approximate projection operator that replaces the iterative solver needed in the numerical solution of the NS equations. [26] proposes an alternative approach to traditional numerical methods, utilizing a regression forest to predict the behavior of fluid particles over time. The features for the regression problem are selected by simulating the NS equations. A different class of methods utilizes ML to improve accuracy instead of speed. For instance, [28] presents a modification term in the closure model of the RANS equations which is learned through data collected experimentally or using DNSs. The modified closure model replaces the original one in the RANS solvers. In a different approach, the authors in [19] utilize classification algorithms to detect regions in which the assumptions of the RANS models are violated. They consider three of the main underlying assumptions: (i) the isotropy of the eddy viscosity, (ii) the linearity of the Boussinesq hypothesis, and (iii) the non-negativity of the eddy viscosity.

The approaches discussed above mainly focus on the application of ML to improve on the speed or accuracy of the RANS models at the solver level, without considering various sources of uncertainty that are not captured in these models, e.g., imperfect knowledge of the BCs and geometry. In contrast, here we focus on validation, selection, and modification of the solutions obtained from RANS models using a data-driven approach. Particularly, we employ Gaussian processes (GPs) to model the flow field and propose a supervised learning strategy to incorporate empirical data into numerical RANS solutions in real time. GPs provide a natural way to incorporate measurements into numerical solutions and to construct posterior distributions in closed form. Note that GPs have been previously used to model turbulent flows in other contexts. For instance, [28] uses GPs to modify closure models and improve accuracy while [29] models turbulent fields via GPs and then uses these GPs to obtain probabilistic models for the dispersion of particles in the flow field. Since turbulence is a non-Gaussian phenomenon [30], assuming Gaussianity is a major approximation in [29]. Nevertheless, here we focus on mean flow properties, which are normally distributed regardless of the underlying distribution of the instantaneous properties [31].

### II-B Active Learning for Gaussian Processes

Active learning using GPs has been investigated extensively in the robotics literature. This is because of the closed form of the posterior distribution, given a conjugate prior, that makes GPs ideal for inference and planning. Collecting informative data in active learning applications depends on defining appropriate optimality indices for planning, with entropy and mutual information (MI) being among the most popular ones. Particularly, MI as a set function is submodular and, under certain assumptions on the discretization of the domain, it is 'approximately' monotone [32]. This means that it can be optimized using a sequential greedy approach while ensuring sub-optimality lower bounds; see [33]. When used for planning, such optimality indices can guide mobile robots through sequences of measurements with maximum information content. For example, [34] proposes a suboptimal, non-greedy trajectory planning method for a GP using MI, while [2] relies on Markov decision processes and MI to plan paths for autonomous underwater vehicles. On the other hand, the work in [35] collects measurements at the next most uncertain location, which is equivalent to optimizing the entropy. In this paper, we employ a similar approach where the mobile sensor plans its path to minimize the entropy of unobserved areas of the domain.

Active learning using GPs has applications ranging from estimation of nonlinear dynamics to spatiotemporal fields. For instance, [35] presents an active learning approach to determine the region of attraction of a nonlinear dynamical system. The authors decompose the unknown dynamics into a known mean and an unknown perturbation which is modeled using a GP. Also, [36, 37] use similar approaches to estimate and control nonlinear dynamics using robust and model predictive control, respectively. The authors in [38] present an active sensing approach using cameras to associate targets with their corresponding nonlinear dynamics, where the target behaviors are modeled by Dirichlet process-GPs. On the other hand, the work in [1] uses GPs to estimate the uncertainty in predictive ocean current models and relies on the resulting confidence measures to plan the path of an autonomous underwater vehicle. GPs have also been used for estimation of spatiotemporal fields, e.g., velocity and temperature. For instance, [39] proposes a learning strategy based on subspace identification to learn the eigenvalues of a linear time-invariant dynamic system which defines the temporal evolution of a field. The work in [40] addresses the problem of trajectory planning for a mobile sensor to estimate the state of a GP using a Kalman filter. Closely related are also methods for robotic state estimation and planning with Gaussian noise; see [41, 42, 43].

The literature discussed above typically employs simple, explicit models of small dimensions and does not consider model ambiguity or parameter uncertainty. Instead, here we consider much more complex models of continuous flow fields that are implicit solutions of the NS equations, a system of PDEs. Solutions to the NS equations depend on uncertain parameters like BCs and geometry. We propose a model-based approach to learn the posterior distribution of these solutions along with the flow properties.

### II-C Contributions

The contributions of this work can be summarized as follows: (i) To the best of our knowledge, this is the first model-based framework that can be used to learn flow fields using empirical data. To date, ML methods have only been used to speed up numerical simulations using RANS models or to increase their accuracy. Instead, the proposed framework learns solutions of the NS equations by systematically combining theoretical models and empirical data. Compared to empirical methods to estimate flow fields, this is the first time that robots are used for data collection, while compared to relevant active learning methods in robotics, the flow models we consider here are much more complex, uncertain, and contain ambiguities that can only be resolved using empirical validation. (ii) We design a cost-effective flow sensor mounted on a custom-built mobile robot that collects and analyzes instantaneous velocity readings. We also formulate a planning algorithm for the mobile robot based on the entropy optimality index to collect measurements with maximum information content and show that it outperforms a baseline lattice placement approach and similar planning methods that possess sub-optimality bounds. (iii) The proposed framework is able to select the most accurate models, obtain the posterior distribution of the flow properties and, most importantly, predict these properties at new locations with reasonable uncertainty bounds. We experimentally demonstrate the predictive ability of the posterior field and show that it outperforms the numerical solutions.

The remainder of the paper is organized as follows. In Section III, we discuss properties of turbulent flows and develop a statistical model to capture the velocity field and corresponding turbulent intensity field. Section IV describes the design of the mobile sensor, the proposed algorithms to collect and process instantaneous velocity readings, the formulation of the measurement errors, and finally the planning method for the mobile sensor to collect flow measurements. In Section V, we present experiments that demonstrate the ability of the posterior field to better predict the flow properties at unobserved locations. Finally, Section VI concludes the paper.

## III Statistical Model of Turbulent Flows

In this section, we discuss the theoretical aspects of turbulence that are needed to model the flow field. We also present a statistical model of turbulent flows and discuss its components.

### III-A Reynolds-Averaged Navier Stokes Models

Since turbulence is a 3D phenomenon, consider a 3D domain of interest $\Omega \subset \mathbb{R}^3$ and the velocity vector field $\mathbf{q}(\mathbf{x},t)$ defined over this domain, where $\mathbf{x} \in \Omega$ and $t$ denotes time. Following the analysis used in RANS models, we decompose the velocity field into its time-averaged value $\bar{\mathbf{q}}(\mathbf{x})$ and a perturbation $\boldsymbol{\epsilon}(\mathbf{x},t)$ as

$$\mathbf{q}(\mathbf{x},t)=\bar{\mathbf{q}}(\mathbf{x})+\boldsymbol{\epsilon}(\mathbf{x},t), \tag{1}$$

where

$$\bar{\mathbf{q}}(\mathbf{x})=\lim_{T\to\infty}\frac{1}{T}\int_{t_1}^{t_1+T}\mathbf{q}(\mathbf{x},t)\,dt.$$

Assuming that the flow is stationary, or statistically steady, the integral will be independent of $t_1$; we set $t_1=0$ without loss of generality.

Since turbulent flows are inherently random, we can view $\mathbf{q}(\mathbf{x},t)$, or equivalently $\boldsymbol{\epsilon}(\mathbf{x},t)$, as a random vector (RV). Let $\mathbf{q}_j(\mathbf{x},t)$ denote the $j$-th realization of this RV at location $\mathbf{x}$ and time $t$, and consider the ensemble average defined as

$$\hat{\mathbf{q}}(\mathbf{x},t)=\frac{1}{\hat{n}}\sum_{j=1}^{\hat{n}}\mathbf{q}_j(\mathbf{x},t),$$

where $\hat{n}$ is the total number of realizations.

**Definition (Ergodicity).** The stationary RV $\mathbf{q}(\mathbf{x},t)$ is ergodic if the time average converges to the ensemble average as the time interval extends to infinity, i.e.,

$$\bar{\mathbf{q}}(\mathbf{x})=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathbf{q}(\mathbf{x},t)\,dt=\hat{\mathbf{q}}(\mathbf{x},t).$$

In the following development, we assume that turbulence is ergodic. This is a common assumption that allows us to obtain time-series measurements of the instantaneous velocity field at a given location $\mathbf{x}$ and use the time-averaged velocity $\bar{\mathbf{q}}(\mathbf{x})$ as a surrogate for the ensemble average $\hat{\mathbf{q}}(\mathbf{x},t)$. Note that ergodicity implies that $\hat{\mathbf{q}}(\mathbf{x},t)$ is independent of time.

Next, consider the root mean square (RMS) value of the fluctuations of the $k$-th velocity component, defined as

$$\mathbf{q}_{k,\mathrm{rms}}(\mathbf{x})=\left(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left|\boldsymbol{\epsilon}_k(\mathbf{x},t)\right|^{2}dt\right)^{1/2}.$$

Noting that $\boldsymbol{\epsilon}_k(\mathbf{x},t)=\mathbf{q}_k(\mathbf{x},t)-\bar{\mathbf{q}}_k(\mathbf{x})$ and using the ergodicity assumption, we have

$$\mathrm{var}[\mathbf{q}_k(\mathbf{x},t)]\triangleq\mathbb{E}\left[\mathbf{q}_k(\mathbf{x},t)-\hat{\mathbf{q}}_k(\mathbf{x},t)\right]^{2}=\mathbf{q}_{k,\mathrm{rms}}^{2}(\mathbf{x}),$$

where $\mathrm{var}[\cdot]$ denotes the variance of a random variable (RV). The average normalized RMS value is called the turbulent intensity and is defined as

$$i(\mathbf{x})=\frac{\left\|\mathbf{q}_{\mathrm{rms}}(\mathbf{x})\right\|}{\sqrt{3}\,q_{\mathrm{ref}}}, \tag{2}$$

where $\|\cdot\|$ denotes the Euclidean norm and $q_{\mathrm{ref}}$ is a normalizing velocity. This value is an isotropic measure of the average fluctuations caused by turbulence at point $\mathbf{x}$.

Finally, we consider the integral time scale of the turbulent field along the $k$-th velocity direction, defined as

$$\mathbf{t}_{k}^{*}(\mathbf{x})=\frac{1}{\boldsymbol{\rho}_k(0)}\int_{0}^{\infty}\boldsymbol{\rho}_k(\tau)\,d\tau, \tag{3}$$

where $\boldsymbol{\rho}_k(\tau)$ is the autocorrelation of the velocity given by

$$\boldsymbol{\rho}_k(\tau)=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\boldsymbol{\epsilon}_k(\mathbf{x},t)\,\boldsymbol{\epsilon}_k(\mathbf{x},t+\tau)\,dt,$$

and $\tau$ is the autocorrelation lag. We use the integral time scale to determine the sampling frequency in Section III-C3. For more details on turbulence theory, see [17].
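To make these definitions concrete, the following sketch estimates the time-averaged velocity, the RMS fluctuations, and the turbulent intensity (2) from a finite record of instantaneous velocity samples; the array `q` and the reference velocity `q_ref` are hypothetical inputs, and the limits in the definitions are replaced by finite-sample averages.

```python
import numpy as np

def turbulence_stats(q, q_ref):
    """Finite-sample estimates of the mean velocity, RMS fluctuations,
    and turbulent intensity (2) from instantaneous velocity samples.

    q     : (n, 3) array of velocity samples [u, v, w] at one location
    q_ref : normalizing (reference) velocity
    """
    q_mean = q.mean(axis=0)                   # time average, surrogate for the ensemble mean
    eps = q - q_mean                          # fluctuations epsilon(x, t)
    q_rms = np.sqrt((eps ** 2).mean(axis=0))  # RMS of each component
    intensity = np.linalg.norm(q_rms) / (np.sqrt(3.0) * q_ref)
    return q_mean, q_rms, intensity
```

In practice `q` would hold the readings collected by the mobile sensor at a single waypoint.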

### III-B Statistical Flow Model

The RANS models use the decomposition given in (1) to obtain turbulence models for the mean flow properties. As discussed in Section I, depending on how they address the closure problem, various RANS models exist. The numerical results returned by these models depend on the geometry of the domain, the values of the boundary conditions (BCs), and the model that is used, among other things. Since there is high uncertainty in modeling the geometry and BCs and since the RANS models may return inconsistent solutions, numerical methods often require experimental validation. On the other hand, empirical data is prone to noise and this uncertainty needs to be considered before deciding on a model that best matches the real flow field. Bayesian inference provides a natural approach to combine theoretical and empirical knowledge in a systematic manner. In this paper, we utilize Gaussian processes (GPs) to model uncertain flow fields; see [44]. Using Bayesian inference with GPs and for a Gaussian likelihood function, we can derive the posterior distributions of the flow properties conditioned on available data in closed form, which is computationally very advantageous. Before mathematically defining these GP models, in the following proposition we argue that the time-averaged velocity components are normally distributed regardless of the underlying distribution of the instantaneous velocity field.

**Proposition 4 (Gaussianity of Mean Velocity Components).** The distribution of each sample time-averaged velocity component converges to a normal distribution as the number of samples increases to infinity.

###### Proof.

This directly follows from the central limit theorem (CLT) [45]. To see this, let $u(\mathbf{x},t)$ denote the first instantaneous velocity component with ensemble mean $\hat{\mathbf{q}}_1(\mathbf{x},t)$ and variance $\mathrm{var}[\mathbf{q}_1(\mathbf{x},t)]$; note that these quantities are fixed but unknown. According to the ergodicity assumption, the time-averaged velocity component is equivalent to this ensemble average. Assume that $n$ independent samples $u(\mathbf{x},t_k)$ of this velocity component are given over time, where $k=1,\dots,n$, and define the sample average as

$$\bar{u}(\mathbf{x})=\frac{1}{n}\sum_{k=1}^{n}u(\mathbf{x},t_k).$$

Since the flow is statistically steady, at a given spatial location $\mathbf{x}$, the samples are identically distributed. Moreover, since $u(\mathbf{x},t)$ is a physical quantity, its variance is bounded. Then, from the CLT it follows that the distribution of the sample time-averaged velocity component approaches a normal distribution as $n\to\infty$. More specifically,

$$\sqrt{n}\left(\bar{u}(\mathbf{x})-\hat{\mathbf{q}}_1(\mathbf{x},t)\right)\to\mathcal{N}\left(0,\mathrm{var}[\mathbf{q}_1(\mathbf{x},t)]\right).$$

Thus, for large enough values of $n$, we approximately have

$$\bar{u}(\mathbf{x})\sim\mathcal{N}\left(\hat{\mathbf{q}}_1(\mathbf{x},t),\frac{\mathrm{var}[\mathbf{q}_1(\mathbf{x},t)]}{n}\right).$$

A similar result can be obtained for other velocity components. ∎

Note the subtle difference between the continuous time-averages of Section III-A and the sample time-averages in Proposition 4. The former is by definition equivalent to the ensemble average, but the latter requires the samples to be independent for its distribution to be centered at the ensemble average. In Section III-C3 we discuss how to obtain such independent samples. According to Proposition 4, we can use GPs to capture the distributions of the mean velocity components assuming that we collect a large number of samples. For simplicity, we also model the turbulent intensity as another GP. Note that for engineering applications, the mean flow properties are of practical significance and not the instantaneous properties.

We construct the proposed GP models based on numerical solutions as opposed to purely statistical models, i.e., empirical data are used only to modify model-based solutions and not to replace them. Particularly, let $u(\mathbf{x})$ denote the first mean velocity component at point $\mathbf{x}$. Given the mean function $\mu_u(\mathbf{x})$ obtained from the numerical solution of a RANS model and a kernel function $\bar{\kappa}_u(\mathbf{x},\mathbf{x}')$, we model the prior distribution of $u(\mathbf{x})$, before collecting any measurements, using the GP

$$u(\mathbf{x})\sim\mathcal{GP}\left(\mu_u(\mathbf{x}),\bar{\kappa}_u(\mathbf{x},\mathbf{x}')\right). \tag{4}$$

Specifically, we define this kernel function as

$$\bar{\kappa}_u(\mathbf{x},\mathbf{x}')=\bar{\sigma}_u^{2}(\mathbf{x},\mathbf{x}')\,\rho(\mathbf{x},\mathbf{x}'), \tag{5}$$

where the standard deviation $\bar{\sigma}_u(\mathbf{x},\mathbf{x}')$ encapsulates the prior uncertainty in the first component of the mean velocity field and $\rho(\mathbf{x},\mathbf{x}')$ is the correlation function. Explicitly, we define the standard deviation as

$$\bar{\sigma}_u^{2}(\mathbf{x},\mathbf{x}')=\bar{\sigma}_{u,0}^{2}+\frac{1}{n_0}\,q_{\mathrm{ref}}^{2}\,i(\mathbf{x})\,i(\mathbf{x}'), \tag{6}$$

where the constant $\bar{\sigma}_{u,0}$ is a measure of confidence in the numerical solution that is selected depending on the RANS model that is utilized and the convergence metrics provided by the solver. Moreover, the turbulent intensity $i(\mathbf{x})$, defined by (2), captures the variability caused by turbulence at a point, while $n_0$ is a nominal number of samples that scales this variability for the averaged velocity component; see also Proposition 4. We define the correlation function $\rho(\mathbf{x},\mathbf{x}')$ in (5) as a compactly supported polynomial

$$\rho(\mathbf{x},\mathbf{x}')=\left[\left(1-\frac{\|\mathbf{x}-\mathbf{x}'\|}{\ell}\right)_{+}\right]^{2}, \tag{7}$$

where $\ell$ is the correlation characteristic length and we define the operator $(a)_{+}=\max(a,0)$. The correlation function (7) implies that two points with distance larger than $\ell$ are uncorrelated, which results in a sparse covariance matrix.
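As an illustration, the prior kernel (5)-(7) can be sketched as follows; the intensity function `i_fun` and the constants stand in for the RANS-based quantities and are placeholders here.

```python
import numpy as np

def correlation(x, xp, ell):
    """Compactly supported correlation (7): [(1 - ||x - x'|| / ell)_+]^2."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(xp, dtype=float))
    return max(1.0 - r / ell, 0.0) ** 2

def prior_kernel(x, xp, i_fun, sigma0_sq, n0, q_ref, ell):
    """Prior kernel (5) with variance (6), built from the turbulent
    intensity i(x) of the RANS solution (here a placeholder callable)."""
    var = sigma0_sq + (q_ref ** 2 / n0) * i_fun(x) * i_fun(xp)
    return var * correlation(x, xp, ell)
```

Because (7) vanishes for points farther apart than `ell`, the covariance matrix assembled over many locations is sparse, which keeps inference over a fine discretization tractable.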

**Proposition (Validity of Kernel Function).** The kernel function (5) with standard deviation (6) and correlation function (7) constitutes a valid kernel function.

###### Proof.

This follows from the fact that the kernel function (5) is the composition of multiple valid kernel functions. First, note that the correlation function (7) is a valid kernel itself. Regarding the standard deviation (6), note that the multiplication of any function with itself, i.e., $i(\mathbf{x})\,i(\mathbf{x}')$, constitutes a valid kernel. Moreover, scaling by the positive constant $q_{\mathrm{ref}}^{2}/n_0$ and addition of the positive constant $\bar{\sigma}_{u,0}^{2}$ preserve the validity of the kernel. Thus the standard deviation (6) is a valid kernel as well. Finally, the multiplication of two valid kernels results in a valid kernel, meaning that the kernel function (5) is valid, as desired. See [44, Sec. 4.2] for details. ∎

It is impossible in practice to obtain noiseless readings as assumed in Proposition 4. Thus, we consider a measurement model for $u(\mathbf{x})$ with additive Gaussian noise $\epsilon_u(\mathbf{x})$. Specifically, let $y_u(\mathbf{x})$ denote a measurement of the first mean velocity component at a location $\mathbf{x}$, given by

$$y_u(\mathbf{x})=u(\mathbf{x})+\epsilon_u(\mathbf{x}). \tag{8}$$

Then $y_u(\mathbf{x})$ is also a GP

$$y_u(\mathbf{x})\sim\mathcal{GP}\left(\mu_u(\mathbf{x}),\kappa_u(\mathbf{x},\mathbf{x}')\right) \tag{9}$$

with the following kernel function

$$\kappa_u(\mathbf{x},\mathbf{x}')=\bar{\kappa}_u(\mathbf{x},\mathbf{x}')+\sigma_u^{2}(\mathbf{x})\,\delta(\mathbf{x}-\mathbf{x}'), \tag{10}$$

where $\delta(\cdot)$ is the Kronecker delta function. Given a vector of measurements $\mathbf{y}$ at a set of locations $\mathcal{X}$, we can obtain the predictive distribution of $u(\mathbf{x})$ at a point $\mathbf{x}$, conditioned on the measurements $\mathbf{y}$, which is a Gaussian distribution whose mean and variance are given in closed form by

$$\mu(\mathbf{x}\mid\mathcal{X})=\mu(\mathbf{x})+\bar{\boldsymbol{\Sigma}}_{\mathbf{x}\mathcal{X}}\,\boldsymbol{\Sigma}_{\mathcal{X}\mathcal{X}}^{-1}\left(\mathbf{y}-\boldsymbol{\mu}_{\mathcal{X}}\right), \tag{11a}$$

$$\gamma^{2}(\mathbf{x}\mid\mathcal{X})=\bar{\kappa}(\mathbf{x},\mathbf{x})-\bar{\boldsymbol{\Sigma}}_{\mathbf{x}\mathcal{X}}\,\boldsymbol{\Sigma}_{\mathcal{X}\mathcal{X}}^{-1}\,\bar{\boldsymbol{\Sigma}}_{\mathcal{X}\mathbf{x}}, \tag{11b}$$

where $\boldsymbol{\mu}_{\mathcal{X}}$ denotes the mean function evaluated at the measurement locations and the entries of the covariance matrices $\bar{\boldsymbol{\Sigma}}_{\mathbf{x}\mathcal{X}}$ and $\boldsymbol{\Sigma}_{\mathcal{X}\mathcal{X}}$ are computed using (5) and (10), respectively. Note that for simplicity, we have dropped the subscript $u$ from the relevant quantities in (11). Note also that by Proposition III-B, the matrix $\boldsymbol{\Sigma}_{\mathcal{X}\mathcal{X}}$ is positive-definite and invertible.
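A minimal sketch of the closed-form update (11): the function below computes the posterior mean and variance at a query point given a generic kernel callable and a homogeneous noise variance (names and the simplified noise model are illustrative, not the paper's full implementation).

```python
import numpy as np

def gp_posterior(x_star, X, y, mu, kappa, noise_var):
    """Posterior mean (11a) and variance (11b) at a query point x_star.

    X         : list of measurement locations
    y         : array of measurements at those locations
    mu        : prior mean function (from the RANS solution)
    kappa     : prior kernel function, eq. (5)
    noise_var : measurement-noise variance, the diagonal term in (10)
    """
    m = len(X)
    Sigma = np.array([[kappa(X[a], X[b]) for b in range(m)] for a in range(m)])
    Sigma += noise_var * np.eye(m)                 # Sigma_XX, with the noise term of (10)
    k_star = np.array([kappa(x_star, X[b]) for b in range(m)])
    resid = np.asarray(y, dtype=float) - np.array([mu(X[b]) for b in range(m)])
    mean = mu(x_star) + k_star @ np.linalg.solve(Sigma, resid)              # (11a)
    var = kappa(x_star, x_star) - k_star @ np.linalg.solve(Sigma, k_star)   # (11b)
    return mean, var
```

With the compactly supported correlation (7), `Sigma` is sparse and the solve can be replaced by a sparse Cholesky factorization for large measurement sets.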

We define similar GPs for each of the other mean velocity components $v(\mathbf{x})$ and $w(\mathbf{x})$, and the turbulent intensity field, with appropriate subscripts. Specifically, for the turbulent intensity $i(\mathbf{x})$, we define a similar GP

$$i(\mathbf{x})\sim\mathcal{GP}\left(\mu_i(\mathbf{x}),\bar{\kappa}_i(\mathbf{x},\mathbf{x}')\right), \tag{12}$$

where we select the prior mean $\mu_i(\mathbf{x})$ from the numerical solution of a RANS model as before and define the standard deviation in the kernel function (5) by the constant $\bar{\sigma}_{i,0}$, i.e., we set $\bar{\sigma}_i^{2}(\mathbf{x},\mathbf{x}')=\bar{\sigma}_{i,0}^{2}$. Note that unlike the mean velocity components, which by Proposition 4 are normally distributed, modeling $i(\mathbf{x})$ as a GP is an assumption that allows us to take advantage of the closed-form posteriors (11).

**Remark (Correlation among Velocity Components).** From the Navier-Stokes PDEs [14], it follows that the velocity components are correlated. However, since the prior means that we use here are obtained from RANS models that already capture this correlation, for simplicity in the pursuant developments we assume that the velocity components are independent and thus, uncorrelated.

### III-C Measurement Model

Next, we discuss equation (8), used to obtain measurements of the first mean velocity component, and similar measurement models for the other velocity components and turbulent intensity. Specifically, at a given point $\mathbf{x}$, we collect a set of instantaneous velocity readings over a period of time and calculate the sample mean and variance. Note that by the ergodicity assumption, these time samples are equivalent to multiple realizations of the RV $\mathbf{q}(\mathbf{x},t)$; see Section III-A.

#### III-C1 Mean Velocity Components

Consider the instantaneous first component of the velocity $u(\mathbf{x},t)$ and a sensor model that has additive noise with standard deviation $\gamma_{u,s}(\mathbf{x},t)$. An observation of this velocity at a given point $\mathbf{x}$ and time $t$ is given by

$$y_u(\mathbf{x},t)=u(\mathbf{x},t)+\epsilon_{u,s}(\mathbf{x},t), \tag{13}$$

where $\epsilon_{u,s}(\mathbf{x},t)$ is the instantaneous sensor noise. Assume that we collect $n$ uncorrelated samples of $y_u(\mathbf{x},t)$ at time instances $t_k$ for $k=1,\dots,n$. Then, the sample mean of these observations, given by

$$y_u(\mathbf{x})=\frac{1}{n}\sum_{k=1}^{n}y_u(\mathbf{x},t_k), \tag{14}$$

is a measurement of the first mean velocity component $u(\mathbf{x})$.

Since the samples are uncorrelated, the variance of $y_u(\mathbf{x})$ is given by

$$\mathrm{var}[y_u(\mathbf{x})]=\frac{1}{n^{2}}\sum_{k=1}^{n}\mathrm{var}[y_u(\mathbf{x},t)]=\frac{1}{n}\,\mathrm{var}[y_u(\mathbf{x},t)]. \tag{15}$$

An unbiased estimator of $\mathrm{var}[y_u(\mathbf{x},t)]$ is the sample variance

$$\widehat{\mathrm{var}}[y_u(\mathbf{x},t)]=\frac{1}{n-1}\sum_{k=1}^{n}\left[y_u(\mathbf{x},t_k)-y_u(\mathbf{x})\right]^{2}. \tag{16}$$

Substituting this estimator in (15), we get an estimator for the variance of the mean velocity measurement as

$$\widehat{\mathrm{var}}[y_u(\mathbf{x})]=\frac{1}{n(n-1)}\sum_{k=1}^{n}\left[y_u(\mathbf{x},t_k)-y_u(\mathbf{x})\right]^{2}. \tag{17}$$

As we increase the number of samples $n$, the measurement $y_u(\mathbf{x})$ becomes a more accurate estimator of the ensemble average $\hat{\mathbf{q}}_1(\mathbf{x},t)$.

Given the observation $y_u(\mathbf{x})$, we construct the measurement model (8) for the first mean velocity component. Note from Proposition 4 that the distribution of $y_u(\mathbf{x})$ is Gaussian as long as the sensor noise $\epsilon_{u,s}(\mathbf{x},t)$ in (13) is Gaussian. In Section IV-B, we argue that for typical off-the-shelf sensors, a Gaussian model is a good approximation for the sensor noise. Note that in addition to the uncertainty caused by the sensor noise, captured in the sample variance (16), other sources of uncertainty may contribute to the measurement noise variance $\sigma_u^{2}(\mathbf{x})$. In Section IV-B, we derive an estimator for this variance that takes into account these additional sources of uncertainty, due to the use of a mobile robot to collect the measurements.

Next, we take a closer look at $\mathrm{var}[y_u(\mathbf{x},t)]$ and the terms that contribute to it. Since the variance of the sum of uncorrelated RVs is the sum of their variances, from (13) we get $\mathrm{var}[y_u(\mathbf{x},t_k)]=\mathrm{var}[u(\mathbf{x},t_k)]+\gamma_{u,s}^{2}(\mathbf{x},t_k)$. Furthermore, since the samples are uncorrelated, we have

$$\sum_{k=1}^{n}\mathrm{var}[y_u(\mathbf{x},t_k)]=\sum_{k=1}^{n}\left\{\mathrm{var}[u(\mathbf{x},t_k)]+\gamma_{u,s}^{2}(\mathbf{x},t_k)\right\},$$

and thus,

$$\mathrm{var}[y_u(\mathbf{x},t)]=\mathrm{var}[u(\mathbf{x},t)]+\bar{\gamma}_{u,s}^{2}(\mathbf{x}), \tag{18}$$

where

$$\bar{\gamma}_{u,s}^{2}(\mathbf{x})=\frac{1}{n}\sum_{k=1}^{n}\gamma_{u,s}^{2}(\mathbf{x},t_k) \tag{19}$$

is the mean variance of the sensor noise. We derive an expression for the instantaneous variance $\gamma_{u,s}^{2}(\mathbf{x},t)$ in Section IV-B. Notice from equation (18) that the variance of $y_u(\mathbf{x},t)$ is due to turbulent fluctuations $\mathrm{var}[u(\mathbf{x},t)]$ and sensor noise $\bar{\gamma}_{u,s}^{2}(\mathbf{x})$. The former is a variation caused by turbulence and inherent to the RV, whereas the latter is due to inaccuracy in the sensors and further contributes to the uncertainty of the mean velocity component measurement $y_u(\mathbf{x})$.
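The estimators (14)-(17) amount to a few lines; a sketch, assuming `samples` holds $n$ uncorrelated instantaneous readings collected at one location:

```python
import numpy as np

def mean_velocity_measurement(samples):
    """Sample mean (14) of the readings and the estimator (17) of its
    variance, obtained by substituting the sample variance (16) into (15)."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    y_bar = samples.mean()              # eq. (14)
    var_inst = samples.var(ddof=1)      # unbiased sample variance, eq. (16)
    return y_bar, var_inst / n          # eq. (17)
```

The returned variance enters the diagonal noise term of the kernel (10) at the corresponding measurement location.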

#### III-C2 Turbulent Intensity

Referring to (2), the turbulent intensity at a point $\mathbf{x}$ is given as

$$i(\mathbf{x})\propto\left\|\mathbf{q}_{\mathrm{rms}}(\mathbf{x})\right\|=\left(\sum_{k=1}^{3}\mathbf{q}_{k,\mathrm{rms}}^{2}(\mathbf{x})\right)^{1/2}.$$

Note that by the ergodicity assumption $\mathbf{q}_{1,\mathrm{rms}}^{2}(\mathbf{x})=\mathrm{var}[u(\mathbf{x},t)]$, and similarly for the other velocity components; see Section III-A. Then, using equations (18) and (16), we have

$$\hat{i}(\mathbf{x})=\frac{1}{\sqrt{3}\,q_{\mathrm{ref}}}\Big(\widehat{\mathrm{var}}[y_u]-\bar{\gamma}_{u,s}^{2}(\mathbf{x})+\widehat{\mathrm{var}}[y_v]-\bar{\gamma}_{v,s}^{2}(\mathbf{x})+\widehat{\mathrm{var}}[y_w]-\bar{\gamma}_{w,s}^{2}(\mathbf{x})\Big)^{1/2}, \tag{20}$$

where the variances of the instantaneous velocity measurements $y_v(\mathbf{x},t)$ and $y_w(\mathbf{x},t)$ and the corresponding mean sensor noise variances $\bar{\gamma}_{v,s}^{2}(\mathbf{x})$ and $\bar{\gamma}_{w,s}^{2}(\mathbf{x})$ are computed using equations similar to (16) and (19), respectively.

Let $\epsilon_i(x)$ denote an additive error in the turbulent intensity estimate. Then, we define the corresponding measurement as

 $y_i(x) = i(x) + \epsilon_i(x).$ (21)

Note that, unlike the mean velocity measurements, it is not straightforward to theoretically obtain the variance of the turbulent intensity measurement. This is due to the nonlinearity in definition (2). In general, estimating this variance requires knowledge of higher-order moments of the random velocity components; cf. [31]. Specifically, under a Gaussianity assumption, these moments depend on the mean values of the velocity components, which are not necessarily negligible. Consequently, we need to incorporate this uncertainty into our statistical model.

Here, we utilize the bootstrap resampling method to directly estimate this variance. Assume that the samples are independent and consider the measurement set $\{y_u(x,t_k)\}_{k=1}^{n}$; define the corresponding sets for the other two velocity components similarly. Furthermore, consider $n_b$ batches of size $n$ obtained by randomly drawing the same samples from these three sets with replacement. Using (III-C2), we obtain the estimates $\hat{i}_j$ corresponding to the batches $1 \le j \le n_b$. Then, the desired variance can be estimated as

 $\hat{\sigma}^2_i(x) = \frac{1}{n_b - 1}\sum_{j=1}^{n_b}\left(\hat{i}_j - \bar{i}\right)^2,$ (22)

where $\bar{i}$ is the mean of the batch estimates $\hat{i}_j$; see [46] for more details.
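The bootstrap behind (22) can be sketched as follows. For brevity the batch statistic omits the sensor-noise correction of (20); drawing the same indices for all three components preserves their instantaneous pairing, and the function names are illustrative.

```python
import numpy as np

def intensity(yu, yv, yw, q_ref):
    # noise-free form of eq. (20), for brevity
    s = np.var(yu, ddof=1) + np.var(yv, ddof=1) + np.var(yw, ddof=1)
    return np.sqrt(s) / (np.sqrt(3.0) * q_ref)

def bootstrap_variance(yu, yv, yw, q_ref, n_batches=200, seed=0):
    """Eq. (22): draw n samples with replacement (the same indices for
    all components), recompute the intensity estimate per batch, and
    return the sample variance of the batch estimates."""
    rng = np.random.default_rng(seed)
    yu, yv, yw = map(np.asarray, (yu, yv, yw))
    n = len(yu)
    est = np.empty(n_batches)
    for j in range(n_batches):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        est[j] = intensity(yu[idx], yv[idx], yw[idx], q_ref)
    return np.var(est, ddof=1)             # (n_b - 1) in the denominator
```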

#### Sampling Frequency

In the preceding analysis, a key assumption was that the samples are independent. Since fluid particles cannot have unbounded acceleration, their velocities vary continuously and thus consecutive samples are dependent. As a general rule of thumb, in order for the samples to be independent, the interval between consecutive sample times $t_k$ and $t_{k+1}$ must be larger than $t^*(x)$, where

 $t^*(x) = \max_k\, t^*_k(x)$

is the maximum of the integral time scales (3) along the different velocity directions at point $x$.

In practice, we only have a finite set of samples of the instantaneous velocity vector over time and can only approximate (3). Let $l$ denote the discrete lag. Then, the sample autocorrelation of the first velocity component is given by

 $\hat{\rho}_u(l) = \frac{1}{n}\sum_{k=1}^{n-l}\left[y_u(x,t_k) - y_u(x)\right]\left[y_u(x,t_{k+l}) - y_u(x)\right].$

This approximation becomes less accurate as $l$ increases, since the number of samples used to calculate the summation decreases. Furthermore, the integral of the sample autocorrelation over the whole range of lags is constant regardless of the data; see [47]. This means that we cannot directly use (3), which requires integration over the whole time range. The most common approach is to integrate only up to the first zero-crossing; see [48].
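Putting the last two paragraphs together, the integral time scale can be approximated from evenly spaced samples by normalizing the sample autocorrelation and integrating up to its first zero-crossing. The implementation below is a sketch with hypothetical names, assuming a uniform sampling interval `dt`.

```python
import numpy as np

def integral_time_scale(y, dt):
    """Approximate the integral time scale (3): compute the sample
    autocorrelation, normalize by its lag-zero value, and integrate
    only up to the first zero-crossing."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    yc = y - y.mean()
    rho = np.array([np.dot(yc[:n - l], yc[l:]) / n for l in range(n)])
    rho /= rho[0]                       # normalize so rho[0] = 1
    # index of the first non-positive lag (or n if there is none)
    zero = int(np.argmax(rho <= 0)) if np.any(rho <= 0) else n
    return dt * np.sum(rho[:zero])      # rectangle-rule integration
```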

### III-D Hierarchical Bayesian Model Selection

Thus far, we have assumed that a single numerical solution is used to obtain the proposed statistical model. In this section, we show how to incorporate multiple numerical solutions in this statistical model. This is necessary since different RANS models yield different and even contradictory approximations of the flow field. This effect is exacerbated by uncertainty in the parameters contained in RANS models; see Section I. On the other hand, incorporating solutions from multiple RANS models in the proposed statistical model of the flow allows for a better informed prior mean and ultimately for model selection and validation using empirical data.

Specifically, consider a RV that collects all secondary variables that affect the numerical solution, including uncertain BCs. Given an arbitrary distribution for each variable, we can construct a discrete distribution that approximates the continuous distributions using, e.g., a stochastic reduced order model (SROM); cf. [49] for details. Let $\mathcal{M}_j$ denote the $j$-th numerical solution corresponding to one combination of all these secondary RVs, and let the collection $\{(\mathcal{M}_j, \tilde{\pi}(\mathcal{M}_j))\}_{j=1}^{\bar{n}}$ denote the discrete SROM distribution, where $\tilde{\pi}(\mathcal{M}_j)$ denotes the probability of model $\mathcal{M}_j$ and $\bar{n}$ is the number of discrete models. Given a vector of measurements $\mathbf{y}_k$ at a set of locations $\mathcal{X}_k$, the posterior distribution over these discrete models can be easily obtained using Bayes’ rule as

 $\tilde{\pi}(\mathcal{M}_j \mid \mathcal{X}_k) = \alpha\,\bar{\pi}(\mathbf{y}_k \mid \mathcal{M}_j)\,\tilde{\pi}(\mathcal{M}_j),$ (23)

where $\alpha$ is the normalizing constant and $\bar{\pi}(\mathbf{y}_k \mid \mathcal{M}_j)$ is the likelihood of the data given model $\mathcal{M}_j$. Note that we have these marginal distributions in closed form from the definition of the GP as

 $\bar{\pi}(\mathbf{y}_k \mid \mathcal{M}_j) = \det\left(2\pi\mathbf{\Sigma}_j\right)^{-0.5}\exp\left(-\tfrac{1}{2}(\mathbf{y}_k - \boldsymbol{\mu}_j)^T\mathbf{\Sigma}_j^{-1}(\mathbf{y}_k - \boldsymbol{\mu}_j)\right),$

where $\boldsymbol{\mu}_j$ and $\mathbf{\Sigma}_j$ are short-hand notation for the mean and covariance of the GP constructed using model $\mathcal{M}_j$; see the discussion after equation (11). Since the sum of the discrete posterior model probabilities equals one, i.e., $\sum_{j=1}^{\bar{n}}\tilde{\pi}(\mathcal{M}_j \mid \mathcal{X}_k) = 1$, we compute the normalizing constant from (23) as

 $\alpha = \left(\sum_{j=1}^{\bar{n}}\bar{\pi}(\mathbf{y}_k \mid \mathcal{M}_j)\,\tilde{\pi}(\mathcal{M}_j)\right)^{-1}.$ (24)

Given the measurements, we can compute the posterior distribution over the discrete models from (23). Then, the desired mean velocity component fields and the turbulent intensity field are the marginal distributions of $u$, $v$, and $i$ after integrating over the discrete models. These marginal distributions are GP mixtures with mean and variance given by

 $\mu_u(x \mid \mathcal{X}_k) = \sum_{j=1}^{\bar{n}} p_{j,k}\,\mu_u(x \mid \mathcal{X}_k, \mathcal{M}_j),$ (25a)
 $\gamma^2_u(x \mid \mathcal{X}_k) = \sum_{j=1}^{\bar{n}} p_{j,k}\,\gamma^2_u(x \mid \mathcal{X}_k, \mathcal{M}_j) + \sum_{j=1}^{\bar{n}} p_{j,k}\left[\mu_u(x \mid \mathcal{X}_k, \mathcal{M}_j) - \mu_u(x \mid \mathcal{X}_k)\right]^2,$ (25b)

where $p_{j,k} = \tilde{\pi}(\mathcal{M}_j \mid \mathcal{X}_k)$ denotes the posterior model probabilities obtained from (23). Equation (25b) follows from the principle that the variance of a RV is the mean of the conditional variances plus the variance of the conditional means. The expressions for $v$ and $i$ are identical.
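The model-selection update (23)-(24) and the mixture moments (25) reduce to a few lines of linear algebra. The sketch below evaluates the Gaussian marginal likelihoods at one set of measurement locations; the names and the pointwise treatment of (25) are our own choices.

```python
import numpy as np

def model_posterior(y, means, covs, prior):
    """Posterior model probabilities, eqs. (23)-(24): Gaussian marginal
    likelihoods weighted by the prior and normalized."""
    y = np.asarray(y, dtype=float)
    like = np.empty(len(means))
    for j, (mu, Sig) in enumerate(zip(means, covs)):
        r = y - mu
        like[j] = (np.linalg.det(2.0 * np.pi * Sig) ** -0.5
                   * np.exp(-0.5 * r @ np.linalg.solve(Sig, r)))
    w = like * np.asarray(prior)
    return w / w.sum()                  # normalization alpha, eq. (24)

def mixture_moments(p, mus, vars_):
    """Pointwise GP-mixture mean and variance, eq. (25): mean of the
    conditional variances plus variance of the conditional means."""
    p, mus, vars_ = map(np.asarray, (p, mus, vars_))
    mean = np.sum(p * mus)
    var = np.sum(p * vars_) + np.sum(p * (mus - mean) ** 2)
    return mean, var
```

In practice the likelihoods are best computed in log space to avoid underflow when the number of measurements is large.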

## IV Learning Flow Fields using a Mobile Robot Sensor

In the previous section, we proposed a statistical model of turbulent flows using GPs. Next, we develop a mobile robot sensor to measure the instantaneous velocity vector and extract the necessary measurements discussed in Section III-C. We also characterize the uncertainty associated with collecting these measurements by a mobile robot and formulate a path planning problem for it to maximize the information content of these measurements.

### IV-A Mobile Robot Design

For simplicity, in what follows we focus on the in-plane components of the velocity field and take measurements on a plane parallel to the ground. Extension to the 3D case is straightforward. Particularly, we use the D6F-W01A1 flow sensor from OMRON; see Section V for more details. Figure 1 shows the directivity pattern for this sensor measured empirically.

In this figure, the normal velocity component is given as a function of the angle of the flow from the axis of the sensor. It can be seen that there exists an angle above which the measurements deviate from the mathematical formula and, for an even larger angle, no flow is detected at all; the values of these angles for the sensor used here can be read off the measured directivity pattern. This pattern can be used to determine the maximum angle spacing between sensors mounted on a plane perpendicular to the robot’s vertical axis, so that any velocity vector in the plane can be measured with adequate accuracy. This spacing is $45°$, which means that eight sensors are sufficient to sweep the plane.

Figure 2 shows the mobile sensor comprised of these eight sensors separated by $45°$, giving rise to a sensor rig that can completely cover the flow in the plane of measurements.

In this configuration, one sensor in the rig will have the highest reading at a given time, with one of its neighboring sensors having the next highest reading. Let $q$ and $\theta$ denote the magnitude and angle of the instantaneous flow vector, and let $s_i$ and $\beta_i$ denote the $i$-th sensor’s reading and heading angle in the global coordinate system, where $1 \le i \le 8$. Assume that sensors $j$ and $l$ have the two highest readings. Then, we have

 $s_j = q\cos(\theta - \beta_j) \quad \text{and} \quad s_l = q\cos(\theta - \beta_l) \;\Rightarrow\; \tan\theta = \frac{s_j\cos\beta_l - s_l\cos\beta_j}{s_l\sin\beta_j - s_j\sin\beta_l}.$ (26)

These equations are used to get the four-quadrant angle $\theta$ in global coordinates. Given $\theta$, the magnitude is given by $q = s_j/\cos(\theta - \beta_j)$. Assume that sensor $j$ has the highest reading at a given time; then either $l = j+1$ and $\theta$ lies between $\beta_j$ and $\beta_{j+1}$, or $l = j-1$ and $\theta$ lies between $\beta_{j-1}$ and $\beta_j$. Any deviation from these conditions indicates inaccurate readings due to sensor noise or delays caused by the serial communication between the sensors and the microcontroller onboard; see Section V.

Algorithm 1 is used by the mobile robot sensor to collect the desired measurements.

Given the measurement location and the heading of the mobile sensor, the algorithm collects $n$ instantaneous samples of the velocity vector field and computes the mean velocity components as well as the turbulent intensity. Particularly, in line 4 it finds the sensor indices $j$ and $l$ with the two highest readings, respectively. As discussed above, these two sensors should be next to each other. If not, the algorithm issues a warning in line 5, indicating an inaccurate measurement. In line 8, it computes the flow angle $\theta$ using the four-quadrant inverse tangent function and equation (26). This flow angle is validated in line 9. In lines 12-14, the algorithm computes the magnitude of the velocity and its components. The corresponding vectors store the samples at different times, i.e., the $k$-th entry is the $k$-th sample of the instantaneous velocity component; see also equation (13). In lines 16 and 17, mean denotes the mean function. Finally, in lines 19 and 20, the algorithm computes the turbulent intensity measurement and its corresponding variance.
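The angle and magnitude recovery in Algorithm 1 can be sketched as follows, using the two highest readings and equation (26); the adjacency check and warning are omitted, angles are in radians, and the function name is illustrative. The factor $a = 1/\sin(\beta_j - \beta_l)$ follows from expanding (26) into the component forms (29) and (31).

```python
import math

def flow_from_rig(readings, betas):
    """Recover (theta, q) from the sensor rig via eq. (26): take the two
    highest readings s_j, s_l at headings beta_j, beta_l, form the
    in-plane components as in eqs. (29) and (31), then apply the
    four-quadrant arctangent."""
    order = sorted(range(len(readings)), key=lambda i: readings[i],
                   reverse=True)
    j, l = order[0], order[1]           # indices of the two highest readings
    sj, sl, bj, bl = readings[j], readings[l], betas[j], betas[l]
    a = 1.0 / math.sin(bj - bl)
    u = a * (-sj * math.sin(bl) + sl * math.sin(bj))   # q cos(theta)
    v = a * (sj * math.cos(bl) - sl * math.cos(bj))    # q sin(theta)
    return math.atan2(v, u), math.hypot(u, v)
```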

### Iv-B Measurement Noise

In Section III-C, we defined the sensor noise for the instantaneous first velocity component, with variance $\gamma^2_{u,s}(x,t)$, and the corresponding measurement noise; see equations (13) and (8), respectively. In the following, we derive an explicit expression for $\gamma^2_{u,s}(x,t)$. We also discuss the contribution of the robot heading and location errors to the measurement noise. Since the relation between these noise terms and the measurements is nonlinear, we employ linearization so that the resulting distributions can be approximated by Gaussians. Generally speaking, let $y = f(\mathbf{p})$ denote an observation that depends on a vector $\mathbf{p}$ of $n_p$ uncertain parameters. Then, after linearization, $y \approx y_0 + \nabla y_0^T (\mathbf{p} - \mathbf{p}_0)$, where $y_0 = f(\mathbf{p}_0)$, $\nabla y_0 = \nabla f(\mathbf{p}_0)$, and $\mathbf{p}_0$ denotes the nominal value of the uncertain parameters. For independent parameters, we have $\mathbf{p} \sim \mathcal{N}(\mathbf{p}_0, \mathbf{\Gamma})$, where $\mathbf{\Gamma}$ is the constant diagonal covariance matrix. Then

 $y \sim \mathcal{N}\left(y_0,\; \nabla y_0^T\,\mathbf{\Gamma}\,\nabla y_0\right),$

by the properties of the Gaussian distribution under linear mappings. Specifically, the linearized variance of the measurement, given the uncertainty in its parameters, can be calculated as

 $\nabla y_0^T\,\mathbf{\Gamma}\,\nabla y_0 = \sum_{j=1}^{n_p}\left(\frac{\partial y}{\partial p_j}\Bigg|_{p_{j,0}}\right)^2 \gamma^2_j,$ (27)

where $\gamma^2_j$ denotes the variance of the $j$-th uncertain parameter. Consequently, since the sensor noise and heading error are independent, we can add their contributions to the measurement variance.

First, we consider the sensor noise for the flow sensor discussed in Section IV-A. As before, we assume this noise is additive and Gaussian. For sensors with a full-scale accuracy rating, this assumption is reasonable, as the standard deviation is fixed and independent of the signal value. Let FS denote the full-scale error and $\gamma_s$ denote the standard deviation of the sensor noise. The cumulative probability of a Gaussian distribution within three standard deviations of its mean is almost equal to one. Thus, we set

 $\gamma_s = \frac{\mathrm{FS}}{3}.$ (28)

In order to quantify the effect of sensor noise on the readings of the velocity components, referring to equation (26), we have

 $y_u(x,t_k) = q\cos\theta = a\,(-s_j\sin\beta_l + s_l\sin\beta_j),$ (29)

where $a$ depends on the angle spacing between the sensors, and $\beta_j$ and $\beta_l$ denote the sensor headings. Then, $\partial y_u/\partial s_j = -a\sin\beta_l$ and $\partial y_u/\partial s_l = a\sin\beta_j$, and

 $\gamma^2_{u,s}(x,t) = a^2\left(\sin^2\beta_{j,0} + \sin^2\beta_{l,0}\right)\gamma^2_s$ (30)

from (27), where $\beta_{j,0}$ and $\beta_{l,0}$ denote the nominal headings of the sensors in the global coordinate system. Similarly,

 $y_v(x,t_k) = q\sin\theta = a\,(s_j\cos\beta_l - s_l\cos\beta_j),$ (31)

with $\partial y_v/\partial s_j = a\cos\beta_l$ and $\partial y_v/\partial s_l = -a\cos\beta_j$. Then,

 $\gamma^2_{v,s}(x,t) = a^2\left(\cos^2\beta_{j,0} + \cos^2\beta_{l,0}\right)\gamma^2_s.$ (32)

Note that equations (29) and (31) are linear in the instantaneous sensor readings. Thus, the contribution of sensor noise to the uncertainty in the instantaneous velocity readings is independent of those readings and only depends on the robot heading.
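Equations (28), (30), and (32) can be collected into a small helper; this is a sketch with hypothetical names, assuming angles in radians and the nominal headings of the two active sensors.

```python
import math

def sensor_noise_std(full_scale):
    # eq. (28): match three standard deviations to the full-scale error
    return full_scale / 3.0

def velocity_noise_variances(beta_j0, beta_l0, gamma_s):
    """Propagated sensor-noise variances of the instantaneous velocity
    components, eqs. (30) and (32), with a = 1/sin(beta_j0 - beta_l0)."""
    a = 1.0 / math.sin(beta_j0 - beta_l0)
    g2u = a * a * (math.sin(beta_j0) ** 2 + math.sin(beta_l0) ** 2) * gamma_s ** 2
    g2v = a * a * (math.cos(beta_j0) ** 2 + math.cos(beta_l0) ** 2) * gamma_s ** 2
    return g2u, g2v
```

Note that $g^2_u + g^2_v = 2a^2\gamma^2_s$, so the total propagated noise power depends only on the sensor spacing through $a$.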

Next, we consider the uncertainty caused by localization error, i.e., error in the heading and position of the mobile robot. This error is due to structural and actuation imprecisions and can be partially compensated for by relying on motion capture systems to correct the nominal values; see Section V. Specifically, let $\beta \sim \mathcal{N}(\beta_0, \gamma^2_\beta)$ denote the distribution of the robot heading around the nominal value obtained from a motion capture system. Note that the sensor heading $\beta_j$ is the sum of $\beta$ and the fixed relative angle of sensor $j$ in the local coordinate system of the robot; the same is true for $\beta_l$. Then, from (29) we have

 $\frac{\partial y_u}{\partial \beta} = a\,(-s_j\cos\beta_l + s_l\cos\beta_j) = -y_v(x,t_k),$ (33)

where the last equality holds from (31). Evaluating (31) at the nominal values and using (27), the contribution of the heading error to the uncertainty in the measurement of the instantaneous first velocity component becomes

 $\gamma^2_{u,\beta}(x,t_k) = y^2_v(x,t_k)\,\gamma^2_\beta.$ (34)

Following an argument similar to the derivation of (18), we can obtain an expression for the mean heading-error variance $\bar{\gamma}^2_{u,\beta}(x)$. Then, noting from (27) that the contributions of independent sources of uncertainty, i.e., sensor and heading errors, to the variance of a measurement of the first mean velocity component are additive, we obtain the estimator

 $\hat{\sigma}^2_u(x) = \widehat{\mathrm{var}}[y_u(x)] + \bar{\gamma}^2_{u,\beta}(x),$ (35)

where $\widehat{\mathrm{var}}[y_u(x)]$ is given by (17) and $\bar{\gamma}^2_{u,\beta}(x)$ can be obtained using an equation similar to (19). Note that unlike the sensor noise, whose contribution to the uncertainty in the measurement can be reduced by increasing the sample size $n$, the contribution of the heading error is independent of $n$ and can only be reduced by conducting multiple independent measurements. Similarly, for the second component of the velocity,

 $\frac{\partial y_v}{\partial \beta} = a\,(-s_j\sin\beta_l + s_l\sin\beta_j) = y_u(x,t_k).$ (36)

Then using (27) and (36), we have

 $\gamma^2_{v,\beta}(x,t_k) = y^2_u(x,t_k)\,\gamma^2_\beta$ (37)

and

 $\hat{\sigma}^2_v(x) = \widehat{\mathrm{var}}[y_v(x)] + \bar{\gamma}^2_{v,\beta}(x).$ (38)
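Estimators (35) and (38) combine a sample-variance term with the averaged heading-error contributions (34) and (37). Since (17) is not reproduced in this excerpt, the sketch below approximates the sample-mean variance by the sample variance divided by $n$; that choice, and the names, are ours.

```python
import numpy as np

def measurement_variances(yu, yv, gamma_beta):
    """Estimators (35) and (38): sample-mean variance of each velocity
    measurement plus the mean heading-error contribution, eqs. (34)
    and (37); note each component couples to the *other* component."""
    yu, yv = np.asarray(yu, float), np.asarray(yv, float)
    n = len(yu)
    g2u_beta = np.mean(yv ** 2) * gamma_beta ** 2   # average of eq. (34)
    g2v_beta = np.mean(yu ** 2) * gamma_beta ** 2   # average of eq. (37)
    s2u = np.var(yu, ddof=1) / n + g2u_beta
    s2v = np.var(yv, ddof=1) / n + g2v_beta
    return s2u, s2v
```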

Finally, let $\tilde{\pi}(x)$ denote a 2D distribution for the measurement location around the nominal location, and let $\{(\tilde{x}_k, \tilde{p}_k)\}_{k=1}^{\tilde{n}}$ denote its SROM discretization; see [49] for details. Without loss of generality, we assume that the nominal location belongs to the set of SROM samples. From (9), the measurement at a point $\tilde{x}_k$ is normally distributed as $\bar{\pi}(y \mid \tilde{x}_k) = \mathcal{N}(\mu(\tilde{x}_k), \kappa(\tilde{x}_k, \tilde{x}_k))$. Then, we can marginalize the location to obtain the expected measurement distribution as

 $\bar{\pi}(y) = \sum_{k=1}^{\tilde{n}}\tilde{p}_k\,\bar{\pi}(y \mid \tilde{x}_k).$

This distribution is a Gaussian mixture (GM) and cannot properly be modeled as a normal distribution. Nevertheless, the probability of the measurement location being far from the mean (nominal) value drops exponentially. This means that the weight $\tilde{p}_k$ corresponding to the nominal location is larger than the rest of the weights and the distribution is close to a unimodal distribution. Noting that we can obtain the mean and variance of the GM in closed form, a Gaussian distribution can match up to two moments of the underlying GM. Particularly,

 $\tilde{\mu}(x) = \sum_{k=1}^{\tilde{n}}\tilde{p}_k\,\mu(\tilde{x}_k),$ (39a)
 $\tilde{\sigma}^2(x) = \sum_{k=1}^{\tilde{n}}\tilde{p}_k\left[\kappa(\tilde{x}_k, \tilde{x}_k) + \left(\mu(\tilde{x}_k) - \tilde{\mu}(x)\right)^2\right].$ (39b)

Considering equations (39), the following points are relevant: (i) For simplicity, we do not consider covariance between different measurements in this computation. This is reasonable as long as the distance between measurement locations is larger than the characteristic length of the correlation function (7). (ii) Equations (39) are only relevant at measurement locations. Thus, in equations (11), we replace the entry of the mean vector at a measurement location with the corresponding mean value $\tilde{\mu}(x)$ from (39a) and, similarly, the corresponding diagonal entry of the covariance matrix with $\tilde{\sigma}^2(x)$ from (39b). (iii) The kernel function, defined in (10), depends on the heading-error variances of the velocity components through (35) and (38). These values depend on empirical data that are only available at the true measurement location and not at the SROM samples $\tilde{x}_k$. Thus, we use the same values at all SROM samples.

### Iv-C Path Planning for Mobile Robot

Next, we formulate a path planning problem for the mobile sensor to collect measurements with maximum information content. Specifically, consider the domain of interest and its 2D projection onto the measurement plane of the mobile sensor discussed in Section IV-A. Let $\Omega$ denote a discretization of this projection and let $\mathcal{S}$ denote a discrete subset of points from $\Omega$ at which the mobile sensor can collect measurements. Our goal is to select $m$ measurement locations from the set $\mathcal{S}$, which we collect in the set $\mathcal{X}$, so that the entropy of the velocity components at the unobserved locations $\Omega \setminus \mathcal{X}$, given the measurements at $\mathcal{X}$, is minimized. With a slight abuse of notation, let $H(\Omega \setminus \mathcal{X} \mid \mathcal{X})$ denote this entropy. Then, we are interested in the following optimization problem

 $\mathcal{X}^* = \operatorname*{arg\,min}_{\mathcal{X} \subset \mathcal{S},\, |\mathcal{X}| = m} H(\Omega \setminus \mathcal{X} \mid \mathcal{X}),$

where $|\mathcal{X}|$ denotes the cardinality of the measurement set $\mathcal{X}$. Noting that $H(\Omega \setminus \mathcal{X} \mid \mathcal{X}) = H(\Omega) - H(\mathcal{X})$, we can equivalently write

 $\mathcal{X}^* = \operatorname*{arg\,max}_{\mathcal{X} \subset \mathcal{S},\, |\mathcal{X}| = m} H(\mathcal{X}).$ (40)

The optimization problem (40) is combinatorial and NP-complete; see [32]. This makes finding the optimal set computationally expensive as the size of the set $\mathcal{S}$ grows. Thus, in the following, we propose a greedy approach that selects the next best measurement location given the current ones. Particularly, let $\mathcal{X}_k$ denote the set of currently selected measurement locations, where $k < m$. By the chain rule of entropy,

 $H(\mathcal{X}_{k+1}) = H(x_{k+1} \mid \mathcal{X}_k) + \dots + H(x_2 \mid \mathcal{X}_1) + H(x_1).$

Then, given $\mathcal{X}_k$, we can find the next best measurement location by solving the following optimization problem

 $x^*_{k+1} = \operatorname*{arg\,max}_{x \in \mathcal{S} \setminus \mathcal{X}_k} H(x \mid \mathcal{X}_k).$ (41)

To obtain an expression for the objective in (41), we note that for a continuous RV, the differential entropy is defined as

 $H(u(x)) = -\int \bar{\pi}(u)\log\left(\bar{\pi}(u)\right)\,du.$

Particularly, since according to (11) the first mean velocity component $u(x \mid \mathcal{X}_k)$ is normally distributed, the value of the entropy is independent of the mean and is given in closed form as

 $H(u(x \mid \mathcal{X}_k)) = \log\left(c\,\gamma_u(x \mid \mathcal{X}_k)\right),$

where $c = \sqrt{2\pi e}$ and $\gamma_u(x \mid \mathcal{X}_k)$ is defined in (11b). A similar expression holds for the second mean velocity component $v$. Then, since from Remark III-B the velocity components $u$ and $v$ are assumed independent, we have

 $H(u,v) = H(u \mid v) + H(v) = H(u) + H(v) = \log\left[c^2\,\gamma_u(x \mid \mathcal{X}_k)\,\gamma_v(x \mid \mathcal{X}_k)\right].$
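The greedy selection (41) can be sketched with a GP covariance over the candidate points: since the Gaussian entropy is monotone in the posterior standard deviation, each step picks the candidate with the largest posterior variance and then conditions the covariance on it. The rank-one update and the small jitter term are standard GP algebra, not taken from the paper, and the names are illustrative.

```python
import numpy as np

def greedy_waypoints(K, candidates, m, noise=1e-6):
    """Greedy solution of (40)-(41): repeatedly pick the candidate with
    the largest posterior variance (equivalently, the largest entropy
    log(c * gamma)) and condition the covariance on it."""
    Kc = np.array(K, dtype=float)
    chosen, remaining = [], list(candidates)
    for _ in range(m):
        var = np.diag(Kc)
        best = max(remaining, key=lambda i: var[i])
        chosen.append(best)
        remaining.remove(best)
        # rank-one GP conditioning on the selected location
        k = Kc[:, best].copy()
        Kc -= np.outer(k, k) / (Kc[best, best] + noise)
    return chosen
```

In the example below, two highly correlated locations and one independent location are offered; after picking the first correlated point, the greedy rule skips its near-duplicate in favor of the independent one.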