Analyzing Ozone Concentration by Bayesian Spatiotemporal Quantile Regression
Abstract
Ground-level ozone is one of the six common air pollutants on which the EPA has set national air quality standards. In order to capture the spatiotemporal trend of the 1-hour and 8-hour average ozone concentration in the US, we develop a method for spatiotemporal simultaneous quantile regression. Unlike existing procedures, in the proposed method smoothing across sites is incorporated within the modeling assumptions, allowing borrowing of information across locations, an essential step when the number of samples at each location is low. The quantile function is assumed to be linear in time and smooth over space, and at any given site it is given by a convex combination of two monotone increasing functions not depending on time. A B-spline basis expansion with increasing coefficients varying smoothly over space is used to construct a prior, and a Bayesian analysis is performed. We analyze the average daily 1-hour maximum and 8-hour maximum ozone concentration level data of the US and California during 2006–2015 using the proposed method. It is noted that over the last ten years there is an overall decreasing trend in both the 1-hour maximum and 8-hour maximum ozone concentration levels over most parts of the US. In California, an overall decreasing trend of the 1-hour maximum ozone level is observed, while no particular overall trend is observed in the case of the 8-hour maximum ozone level.
P. Das¹ and S. Ghosal (¹Correspondence to: P. Das, Department of Statistics, North Carolina State University, 2311 Stinson Drive, Raleigh, NC 27695, USA. Email: pdas@ncsu.edu)
Keywords: B-spline prior; Spatiotemporal quantile regression; Block Metropolis-Hastings; US ozone data.
North Carolina State University, NC 27695, USA
1 Introduction
In the analysis of weather data, greenhouse gas emission data, biological experimental data, and MRI scan data, it is often of interest to estimate the spatiotemporal dependence structure of the response variable at different levels, such as the upper and lower extremes. Traditional spatiotemporal mean regression may be inappropriate in this context. For example, if the response variable is highly skewed over time, spatiotemporal mean regression does not represent a typical relation.
Quantile regression was introduced in Koenker and Bassett (1978). Generalizations and extensions of quantile regression have been proposed from both frequentist and Bayesian perspectives (Yu and Moyeed (2001), Kottas and Gelfand (2001), Gelfand and Kottas (2003), Kottas and Krnjajic (2009)). A comprehensive account of various aspects of quantile regression can be found in Koenker (2005). The above methods were proposed for a single quantile level. The main drawback of separate quantile regression at different levels is that an estimated lower quantile curve may cross an estimated upper quantile curve, violating the monotonicity of quantiles. In addition, a single quantile regression does not allow for joint inference.
Methods addressing the monotonicity issue in estimating multiple quantile regressions have also been proposed (He (1997), Neocleous and Portnoy (2008), Takeuchi (2004), Takeuchi et al. (2006), Vapnik (1995), Shim and Lee (2010), Shim et al. (2009), Bondell et al. (2010), Dunson and Taylor (2005)). The non-crossing quantile regression method proposed in Wu and Liu (2009) sequentially updates the quantile curves under the constraint that the higher estimated quantile curves stay above the lower ones. The problem with this method, and with many of the above-mentioned non-crossing quantile regression methods, is that the estimated quantile curves depend on the grid of quantile levels at which the curves are estimated. For example, using most of the above-mentioned methods, if we estimate the quantile lines on two different quantile grids sharing a common level, the two estimates at that shared level would generally differ, which is not desirable.
Instead of fitting the quantile regression lines for a fixed set of quantiles, a more informative picture emerges by estimating the entire quantile curve (Tokdar and Kadane (2012), Das and Ghosal (2016)). Let Q_Y(τ|x) denote the τth quantile of a response Y at x, x being the predictor. From Theorem 1 of Tokdar and Kadane (2012) it follows that a linear specification Q_Y(τ|x) = μ(τ) + γ(τ)x is monotonically increasing in τ for every x ∈ [a, b] if and only if

Q_Y(τ|x) = ((b − x)/(b − a)) ξ₁(τ) + ((x − a)/(b − a)) ξ₂(τ),

where a and b are some constants and ξ₁(τ), ξ₂(τ) are monotonically increasing functions in τ. A monotonic transformation of the explanatory and the response variables to the unit interval reduces this to

Q_Y(τ|x) = ξ₁(τ)(1 − x) + ξ₂(τ)x,  x ∈ [0, 1],   (1)

where ξ₁ and ξ₂ are monotonically increasing bijections from the unit interval to itself. Tokdar and Kadane (2012) put a transformed Gaussian process prior on ξ₁ and ξ₂, while Das and Ghosal (2016) used a B-spline basis for that purpose and argued the advantages of using a B-spline basis over a transformed Gaussian process prior.
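The non-crossing property encoded by Equation (1) can be illustrated numerically: for any two monotone increasing bijections ξ₁, ξ₂ of the unit interval (the specific choices below are illustrative, not from the paper), the resulting quantile curves at different levels never cross for any x in [0, 1].

```python
import numpy as np

# A minimal numerical check of the construction in Equation (1):
# with xi1, xi2 monotone increasing bijections of [0, 1], the linear
# specification Q(tau | x) = (1 - x) * xi1(tau) + x * xi2(tau) is
# monotone in tau for every x in [0, 1], so quantile curves at
# different levels never cross.  The bijections are illustrative.

xi1 = lambda tau: tau ** 2          # increasing, maps [0, 1] onto [0, 1]
xi2 = lambda tau: np.sqrt(tau)      # increasing, maps [0, 1] onto [0, 1]

def Q(tau, x):
    return (1.0 - x) * xi1(tau) + x * xi2(tau)

taus = np.linspace(0.0, 1.0, 201)
for x in np.linspace(0.0, 1.0, 51):
    q = Q(taus, x)
    assert np.all(np.diff(q) >= 0), "quantile crossing detected"
print("Q(tau | x) is monotone in tau for every x checked")
```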
Consider the scenario where time is the explanatory variable and the values of the response variable corresponding to the time points are available at different spatial locations. Methods of estimating a single quantile regression curve have been extended to analyze spatial data (Lee and Neocleous (2010), Oh et al. (2011), Sobotka and Kneib (2012)). Various methods for modeling a spatially varying distribution function have been proposed in the last decade (Dunson and Park (2008), Gelfand et al. (2005), Reich and Fuentes (2007), Griffin and Steel (2006)). As mentioned in Reich (2012), these methods treat the conditional distribution of the response at each spatial location as an unknown quantity to be estimated from the data.
One of the first works addressing the non-crossing issue in spatial quantile regression was proposed in Reich et al. (2011). They proposed a two-stage method and ensured monotonicity of the estimated quantile functions using Bernstein polynomial basis functions. Reich (2012) assumed separate non-crossing quantile functions for each spatial location, evolving over time; spatial smoothing was performed using Gaussian process priors. A separate analysis for each spatial location may be sensitive to small sample sizes at individual locations. While this method allows estimation at all quantile levels, a post-estimation processing step makes the quantification of uncertainty difficult. Instead of estimating the quantile function on a set of grid points, estimation of the entire quantile function is more informative. To address the small area estimation problem, instead of assuming unrelated quantile functions at each spatial location, we assume that the quantile function is smooth over space and linear in time. For any given spatial location, the estimated quantile function has the same form as in Equation (1). Similarly to Das and Ghosal (2016), the monotonicity constraint on the curves ξ₁ and ξ₂ is imposed through a B-spline basis expansion with coefficients (which are functions of the spatial location) increasing and lying in the unit interval. The coefficients of the B-spline basis expansion over the d-dimensional space are modeled by tensor products of univariate B-spline basis functions. A closed-form expression for the likelihood function is obtained in terms of parameters lying in independent simplexes whose dimensions depend on the degree of the B-spline basis functions used. Unlike the methods proposed in Reich et al. (2011) and Reich (2012), the main advantage of this method is that once we estimate the parameters of the model, the estimated response variable can be obtained for any spatial location, time and quantile without further kriging or interpolation, which allows proper uncertainty quantification within the Bayesian paradigm. Most importantly, the approach incorporates small area estimation issues automatically in its modeling.
2 Ozone in troposphere
Ozone can be found naturally in the troposphere and in other parts of the atmosphere. While a higher ozone concentration in the stratosphere is desirable, as it helps absorb ultraviolet radiation, a higher ozone concentration in the troposphere is harmful to living beings. Ozone occurs naturally in the troposphere in low concentrations. There are mainly two sources of tropospheric ozone: a significant amount of ozone is released by plants and soil, and sometimes ozone migrates down to the troposphere from the stratosphere. However, the naturally occurring ground-level ozone concentration is not considered a threat to living beings or the environment.
On the other hand, ozone is a byproduct of many human activities. Growing numbers of automobiles and industries are the main sources of the “bad” ozone at ground level. As mentioned in an online article (Source: http://www.windows2universe.org/earth/Atmosphere/ozone_tropo.html&edu=high), the ground-level ozone concentration has doubled since 1900. Although no single source emits ozone directly, it is generated when hydrocarbons and nitrogen oxides, emitted by automobiles, fuel power plants and other industrial machinery, interact with each other in the presence of sunlight, specifically ultraviolet (UV) rays. Ozone levels generally peak during the summer season and, within the span of a day, during the mid to late afternoon.
Tropospheric ozone has recently been linked to many health problems and environmental issues. Exposure to higher levels of ozone can cause respiratory problems. In plants, it slows growth and photosynthesis and damages internal cells. High ground-level ozone also damages textile dyes, rubbers, fibers and certain types of paint.
Because of its unstable nature, there is no existing way to move tropospheric ozone to the stratosphere, which could be thought of as the ideal way to deal with this environmental hazard. Under the Clean Air Act, the US Environmental Protection Agency (EPA) considers tropospheric ozone one of the six pollutants harmful to human health and the environment. Proposed ozone standards can be found at the EPA site (Source: https://www.epa.gov/sites/production/files/201604/documents/20151001basicsfs.pdf, accessed 11-26-2016), which says that an area meets the ozone standards if the 4th highest maximum daily 8-hour ozone concentration each year, averaged over three years, is 70 ppb (parts per billion) or below. In this paper, we develop a spatiotemporal quantile regression model for measuring the ozone concentration level over the US and California during the period 2006–2015.
Let Q(τ|t, s) denote the τth quantile of the response variable, i.e., the ozone concentration level, at location s and time point t. We assume that the response depends linearly on time and that the quantile function is smooth over space. In our model, the quantile function is given by

Q(τ|t, s) = μ(τ, s) + γ(τ, s)t.   (2)
3 Proposed Bayesian Method
Our explanatory variable is time. Let t_{ij} and Y_{ij} denote the values of time and the response at location s_i for j = 1, …, n_i, and let n_i denote the sample size at location s_i. By monotonic transformations, t, Y and each coordinate of the spatial locations are transformed to take values in the unit interval. From Theorem 1 of Tokdar and Kadane (2012), it follows that a linear specification Q(τ|t) = μ(τ) + γ(τ)t is monotonically increasing in τ for every t ∈ [0, 1] if and only if

Q(τ|t) = ξ₁(τ)(1 − t) + ξ₂(τ)t,   (3)

where ξ₁ and ξ₂ are monotonically increasing in τ. Let s_i, for i = 1, …, n, denote the spatial locations (sites) where the response variable is measured. We assume that the dependence structure of the quantile function at any site s is given by

Q(τ|t, s) = ξ₁(τ, s)(1 − t) + ξ₂(τ, s)t,   (4)

where ξ₁(·, s) and ξ₂(·, s) are monotonic functions from [0, 1] to [0, 1]. For any fixed s, equation (4) can be reframed as

Q(τ|t, s) = μ(τ, s) + γ(τ, s)t,   (5)

where γ(τ, s) = ξ₂(τ, s) − ξ₁(τ, s) and μ(τ, s) = ξ₁(τ, s) denote the slope and the intercept of the τth quantile regression.
A B-spline basis expansion is one of the most convenient approaches for estimating a function on a bounded interval. To estimate ξ₁ and ξ₂ in Equation (4), we use a B-spline basis expansion. Taking the coefficients of the B-spline basis functions in increasing order ensures the monotonicity of these two functions, and any monotone increasing function can be approximated through a monotone linear combination of B-splines (Boor (2001)). Das and Ghosal (2016) argued the advantages of using B-splines over Gaussian processes for estimating quantile functions. More specifically, using B-splines allows their equation (10) to be solved analytically, significantly reducing the cost of likelihood evaluation compared with Tokdar and Kadane (2012). We use quadratic B-splines because then equation (10), which is crucial for the likelihood evaluation, reduces to a quadratic equation.
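The monotonicity device can be checked numerically: a quadratic B-spline whose coefficients are nondecreasing and lie in [0, 1] is itself nondecreasing and maps [0, 1] into [0, 1]. The following sketch (illustrative knot numbers and coefficients, not those used in the paper) evaluates the basis by the Cox-de Boor recursion:

```python
import numpy as np

def bspline_design(x, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at the
    points x, on a clamped knot sequence (Cox-de Boor recursion)."""
    t = np.concatenate(([knots[0]] * degree, knots, [knots[-1]] * degree))
    x = np.asarray(x, dtype=float)
    # degree 0: indicator functions of the half-open knot spans
    B = np.zeros((len(x), len(t) - 1))
    for i in range(len(t) - 1):
        B[:, i] = (t[i] <= x) & (x < t[i + 1])
    i_last = int(np.nonzero(t < t[-1])[0][-1])   # last nonempty span
    B[x == t[-1], i_last] = 1.0                  # close the right endpoint
    # Cox-de Boor recursion up to the requested degree
    for p in range(1, degree + 1):
        Bp = np.zeros((len(x), B.shape[1] - 1))
        for i in range(B.shape[1] - 1):
            if t[i + p] > t[i]:
                Bp[:, i] += (x - t[i]) / (t[i + p] - t[i]) * B[:, i]
            if t[i + p + 1] > t[i + 1]:
                Bp[:, i] += (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * B[:, i + 1]
        B = Bp
    return B

knots = np.linspace(0.0, 1.0, 6)   # k = 5 equidistant knot spans on [0, 1]
deg = 2                            # quadratic, as in the paper
taus = np.linspace(0.0, 1.0, 401)
D = bspline_design(taus, knots, deg)
assert D.shape == (401, 7)                      # k + m = 5 + 2 basis functions
assert np.allclose(D.sum(axis=1), 1.0)          # partition of unity

theta = np.array([0.0, 0.05, 0.2, 0.5, 0.7, 0.9, 1.0])  # increasing, in [0, 1]
xi = D @ theta
assert np.all(np.diff(xi) >= -1e-12)            # the spline is nondecreasing
assert abs(xi[0]) < 1e-12 and abs(xi[-1] - 1.0) < 1e-12
```

With clamped knots the spline interpolates its first and last coefficients, so choosing θ₁ = 0 and θ_N = 1 gives a monotone bijection of [0, 1], exactly the object required of ξ₁ and ξ₂.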
3.1 Prior
Let 0 = u₀ < u₁ < ⋯ < u_k = 1 be equidistant knots on the interval [0, 1] such that u_{j+1} − u_j = 1/k for j = 0, …, k − 1. For B-splines of degree m, the number of basis functions is N = k + m. Let B₁, …, B_N be the B-spline basis functions of degree m on [0, 1] based on the above-mentioned equidistant knots. The basis expansion of the quantile functions at site s is given by the relations

ξ₁(τ, s) = Σ_{j=1}^{N} θ_j(s)B_j(τ),  ξ₂(τ, s) = Σ_{j=1}^{N} φ_j(s)B_j(τ),
0 ≤ θ₁(s) ≤ ⋯ ≤ θ_N(s) ≤ 1,  0 ≤ φ₁(s) ≤ ⋯ ≤ φ_N(s) ≤ 1,   (6)

where the coefficients θ_j(s) and φ_j(s) depend on the spatial location of the data points. To put priors on θ_j(·) and φ_j(·), we use a d-fold tensor product of B-spline basis functions. Thus for the spatial location s = (s₁, …, s_d), the basis expansion is given by

θ_j(s) = Σ_{l₁=1}^{N*} ⋯ Σ_{l_d=1}^{N*} α_{j,l₁,…,l_d} B*_{l₁}(s₁) ⋯ B*_{l_d}(s_d),
φ_j(s) = Σ_{l₁=1}^{N*} ⋯ Σ_{l_d=1}^{N*} β_{j,l₁,…,l_d} B*_{l₁}(s₁) ⋯ B*_{l_d}(s_d),   (7)

where B*₁, …, B*_{N*} are B-spline basis functions of degree m* on [0, 1] with equidistant knots and N* = k* + m*. To ensure the constraints on θ_j(s) and φ_j(s) mentioned in Equation (6), it is sufficient to ensure the following constraints:

0 ≤ α_{1,l₁,…,l_d} ≤ α_{2,l₁,…,l_d} ≤ ⋯ ≤ α_{N,l₁,…,l_d} ≤ 1,
0 ≤ β_{1,l₁,…,l_d} ≤ β_{2,l₁,…,l_d} ≤ ⋯ ≤ β_{N,l₁,…,l_d} ≤ 1,   (8)

for every (l₁, …, l_d) ∈ {1, …, N*}^d.
Suppose that we have data for spatial sites s₁, …, s_n and that at the ith site, t_{i1}, …, t_{in_i} and Y_{i1}, …, Y_{in_i} denote the values of the explanatory variable (time) and the response variable, n_i being the number of data points, for i = 1, …, n. The log-likelihood is given by (see Appendix for the derivation)

log L = − Σ_{i=1}^{n} Σ_{j=1}^{n_i} log( (1 − t_{ij}) ξ₁′(τ_{ij}, s_i) + t_{ij} ξ₂′(τ_{ij}, s_i) ),   (9)

where ξ₁′ and ξ₂′ denote derivatives with respect to the quantile level τ and, for i = 1, …, n and j = 1, …, n_i, τ_{ij} is the solution of

Q(τ|t_{ij}, s_i) = Y_{ij}.   (10)
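Equation (10) is the computational core of the likelihood evaluation. The following sketch recovers τ by generic bisection on the monotone map τ ↦ Q(τ|t, s); the paper instead exploits quadratic B-splines so that (10) reduces to a quadratic equation with a closed-form root, which is faster. The ξ functions below are toy choices, not the fitted ones.

```python
import numpy as np

# Illustrative stand-in for solving Equation (10): given a response y
# and the monotone map tau -> Q(tau | t, s), recover tau by bisection.
# (The paper solves a quadratic equation in closed form instead.)

def make_Q(xi1, xi2, t):
    return lambda tau: (1.0 - t) * xi1(tau) + t * xi2(tau)

def invert_Q(Q, y, tol=1e-12):
    """Find tau in [0, 1] with Q(tau) = y, assuming Q is increasing."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Q(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

xi1 = lambda u: u ** 2            # toy monotone bijection of [0, 1]
xi2 = lambda u: np.sqrt(u)        # toy monotone bijection of [0, 1]
Q = make_Q(xi1, xi2, t=0.3)

tau0 = 0.42
tau_hat = invert_Q(Q, Q(tau0))
assert abs(tau_hat - tau0) < 1e-9
```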
The parameters of the log-likelihood are the coefficients constrained in Equation (8). We note that the vector of spacings of the coefficients α_{1,l₁,…,l_d}, …, α_{N,l₁,…,l_d}, augmented by 1 − α_{N,l₁,…,l_d}, lies on a unit simplex. The same is true for the β's. Define

δ_{1,l} = α_{1,l},  δ_{j,l} = α_{j,l} − α_{j−1,l} for j = 2, …, N,  δ_{N+1,l} = 1 − α_{N,l},   (11)

for each l = (l₁, …, l_d), and similarly for the β's. Hence for each combination l, we have

δ_{j,l} ≥ 0 for j = 1, …, N + 1,  Σ_{j=1}^{N+1} δ_{j,l} = 1.   (12)

Thus the parameters of the log-likelihood can be divided into 2(N*)^d unit simplex blocks. We consider uniform Dirichlet priors on these unit simplex blocks, for the spacings derived from both the α's and the β's. As mentioned in the earlier section, the degree m is taken to be 2. The degree of the basis functions corresponding to each spatial coordinate is taken to be cubic (i.e., m* = 3).
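This reparametrization makes prior simulation straightforward: a uniform Dirichlet draw on a simplex block, cumulatively summed, yields a nondecreasing coefficient vector in [0, 1]. A minimal sketch with an illustrative block size:

```python
import numpy as np

# Sketch of the prior construction: each block of spacings gets a
# uniform Dirichlet prior on the unit simplex; cumulative sums of the
# spacings recover a nondecreasing coefficient vector in [0, 1].
# The block size is illustrative.

rng = np.random.default_rng(0)
n_coef = 7                       # number of B-spline coefficients in a block

# spacings (delta_1, ..., delta_{n_coef + 1}) ~ Dirichlet(1, ..., 1)
spacings = rng.dirichlet(np.ones(n_coef + 1))

# coefficients: partial sums of the first n_coef spacings
coef = np.cumsum(spacings[:-1])

assert np.all(np.diff(coef) >= 0.0)          # nondecreasing
assert 0.0 <= coef[0] and coef[-1] <= 1.0    # contained in [0, 1]
```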
4 Simulation study
In this section we compare the estimation performance of the proposed method, i.e., Spline Spatiotemporal Quantile Regression (SSTQR), with piecewise Gaussian basis function spatiotemporal quantile regression (GBSTQR) (Reich (2012)), based on simulation. For the proposed SSTQR method, we consider both likelihood-based and Bayesian approaches. Finding the maximum likelihood estimate (MLE) can be seen as the problem of maximizing an objective function over a constrained parameter space given by a collection of unit simplex blocks. Since the evaluation of the likelihood involves integration and a linear search, it is hard to check whether the likelihood function is convex. Instead of convex optimization algorithms, global or nonconvex optimization algorithms are more reasonable in this scenario. Das (2016b) proposed an efficient global optimization technique on a hyper-rectangular parameter space which has been shown to work faster than existing global optimization techniques, namely the genetic algorithm (Fraser (1957), Bethke (1980), Goldberg (1989)) and simulated annealing (Kirkpatrick et al. (1983), Granville et al. (1994)), while yielding better solutions. Das (2016c) modified that algorithm for the case where the parameter space is a unit simplex. Following that paper, Das (2016a) proposed the ‘Greedy Coordinate Descent of Varying Step sizes on Multiple Simplexes’ (GCDVSMS) algorithm, which efficiently minimizes any nonconvex (or maximizes any nonconcave) objective function of parameters given by a collection of unit simplex blocks. The main idea of the GCDVSMS algorithm is to make jumps of varying step sizes within each unit simplex block in parallel and to search for the most favorable direction of movement. Every time a local solution is found, coordinate-wise jumps of various step sizes are performed until a better solution is found. To find the MLE in this case, the GCDVSMS algorithm is used.
The values of the tuning parameters of the GCDVSMS algorithm are chosen as follows: the initial global step size, the step decay rate for the first run, the step decay rate for subsequent runs, the step size threshold, the sparsity threshold, the convergence-criterion parameters tol_fun_1 = tol_fun_2, the maximum number of iterations inside each run, and the maximum number of allowed runs.
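The coordinate-wise simplex moves underlying such optimizers can be sketched generically. The toy greedy search below is not the GCDVSMS algorithm itself, only an illustration of how jumps of shrinking step sizes can explore a unit simplex while preserving its constraints; the objective and step sizes are made up.

```python
import numpy as np

# Generic sketch (NOT the GCDVSMS algorithm): a coordinate-wise move on
# the unit simplex shifts a step of mass from one coordinate to another,
# which keeps the point nonnegative and summing to one.

def simplex_move(x, i, j, step):
    """Move `step` mass from coordinate j to coordinate i of simplex point x."""
    step = min(step, x[j])           # cannot take more mass than x[j] holds
    y = x.copy()
    y[j] -= step
    y[i] += step
    return y

def greedy_pass(f, x, step):
    """Try all ordered coordinate pairs; return the best improving move."""
    best_x, best_f = x, f(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                y = simplex_move(x, i, j, step)
                if f(y) < best_f:
                    best_x, best_f = y, f(y)
    return best_x, best_f

# toy objective: squared distance to a target point on the simplex
target = np.array([0.5, 0.3, 0.2])
f = lambda x: np.sum((x - target) ** 2)

x = np.array([0.25, 0.25, 0.5])
step = 0.25
while step > 1e-6:                   # shrink the step when no move improves
    x_new, f_new = greedy_pass(f, x, step)
    if f_new < f(x):
        x = x_new
    else:
        step *= 0.5
assert abs(x.sum() - 1.0) < 1e-12 and np.all(x >= 0)
assert f(x) < 1e-6                   # close to the simplex minimizer
```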
We consider spatial locations uniformly distributed over the unit square [0, 1]². Three different numbers of equidistant temporal data points per site have been considered, with the same number of time points at every site. Let s_i = (s_{i1}, s_{i2}), for i = 1, …, n, denote the spatial locations of the sites with available data, where s_{i1}, s_{i2} ∈ [0, 1]. Consider the spatial quantile function Q(τ|t, s) = ξ₁(τ, s)(1 − t) + ξ₂(τ, s)t, where

(13)

Note that for any given s, ξ₁(·, s) and ξ₂(·, s) are strictly increasing functions from [0, 1] to [0, 1] satisfying ξ₁(0, s) = ξ₂(0, s) = 0 and ξ₁(1, s) = ξ₂(1, s) = 1. Since the quantile function is the inverse of the cumulative distribution function, taking U ∼ Uniform(0, 1), Q(U|t, s) has conditional quantile function Q(τ|t, s). For each location, we considered the same set of equidistant time points t₁ < ⋯ < t_T in [0, 1]. We simulate the response variable using the equation

Y_{ij} = Q(U_{ij}|t_j, s_i),

where U_{ij} ∼ Uniform(0, 1) independently. Using both methods, for each of the locations and each number of data points per site, the quantile curves are estimated on a grid of quantile levels. The above-mentioned simulation study is repeated R times under different random number generating seeds. Let, at the rth simulation study with a fixed number of data points at each site, the estimated intercept and slope at the τth quantile and location s_i be μ̂^{(r)}(τ, s_i) and γ̂^{(r)}(τ, s_i), respectively, and let Q̂^{(r)}(τ|t, s_i) denote the corresponding estimate of the true quantile Q(τ|t, s_i) at time point t. Then, averaging over locations, quantile levels on the grid, and simulation replicates, the average mean squared error (MSE) of the slope is

(1/(R n K)) Σ_{r=1}^{R} Σ_{i=1}^{n} Σ_{k=1}^{K} (γ̂^{(r)}(τ_k, s_i) − γ(τ_k, s_i))²,

and analogously for the intercept and for the quantile value Q̂^{(r)}(τ_k|t, s_i) at a given time point t.
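The simulation mechanism rests on the inverse-CDF principle stated above: if U ∼ Uniform(0, 1), then Q(U|t, s) has conditional quantile function Q(·|t, s). A hedged sketch, with an illustrative quantile function and grid sizes rather than those of the study:

```python
import numpy as np

# Sketch of the simulation design: draw U ~ Uniform(0, 1) and set
# Y = Q(U | t), so that Y has conditional quantile function Q(. | t).
# The quantile function and sizes here are illustrative.

rng = np.random.default_rng(1)

xi1 = lambda u: u ** 2
xi2 = lambda u: np.sqrt(u)
Q = lambda tau, t: (1.0 - t) * xi1(tau) + t * xi2(tau)

# check at one (tau, t): the empirical quantile of simulated Y
# should be close to the true Q(tau | t)
tau, t = 0.75, 0.4
y = Q(rng.uniform(size=20000), t)
assert abs(np.quantile(y, tau) - Q(tau, t)) < 0.02

# average MSE over replicates of a crude stand-in estimator
# (the empirical quantile from each replicate's simulated sample)
n_rep, n_obs = 200, 1000
errors = []
for _ in range(n_rep):
    y = Q(rng.uniform(size=n_obs), t)
    errors.append((np.quantile(y, tau) - Q(tau, t)) ** 2)
mse = float(np.mean(errors))
print(f"average MSE of the empirical {tau}-quantile: {mse:.2e}")
```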

Table 1: Average MSE of the estimated slope, the intercept, and the quantile value at three time points, for the three per-site sample sizes considered (smallest sample size in the first block).

Methods         MSE (slope)   MSE (intercept)   MSE (quantile value at three time points)
SSTQR (ML)      0.0265        0.0437            0.0194   0.0154   0.0192
SSTQR (Bayes)   0.0410        0.0162            0.0103   0.0076   0.0122
GBSTQR          0.2014        0.0349            0.0471   0.1159   0.2209

SSTQR (ML)      0.0248        0.0340            0.0187   0.0146   0.0166
SSTQR (Bayes)   0.0290        0.0127            0.0081   0.0055   0.0082
GBSTQR          0.6000        0.0241            0.0658   0.2337   0.5097

SSTQR (ML)      0.0186        0.0239            0.0145   0.0119   0.0136
SSTQR (Bayes)   0.0154        0.0100            0.0072   0.0052   0.0061
GBSTQR          0.0685        0.0171            0.0165   0.0398   0.0754
For the SSTQR (Bayes) method, 10000 iterations are performed and the first 1000 iterations are discarded as burn-in. In Table 1, the average MSE of the estimated slope, the intercept, and the estimated quantile curve value at three time points, over all locations, is compared for the methods described above. Since we use quadratic B-splines for the quantile basis functions and cubic B-splines for the coordinate-wise spatial basis functions, m = 2 and m* = 3. To select the optimal number of knots, we use the Akaike information criterion (AIC). In this simulation study, we consider several choices of the number of knots and select the model with the best AIC value.
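The AIC-based knot selection can be sketched as follows; the candidate knot numbers, parameter counts, and fitted log-likelihoods are hypothetical placeholders, not values from this study.

```python
# Knot selection by AIC = 2 * n_params - 2 * max_log_likelihood;
# the model with the smallest AIC is preferred.  All numbers below
# are hypothetical placeholders.

candidates = {
    # knots : (n_params, max_log_likelihood)
    4: (128, -512.3),
    6: (200, -498.7),
    8: (288, -497.9),
}

def aic(n_params, loglik):
    return 2 * n_params - 2 * loglik

scores = {k: aic(*v) for k, v in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

Note the trade-off AIC encodes here: adding knots improves the fitted log-likelihood only slightly while the parameter count grows quickly, so the smallest candidate wins in this toy example.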
It is noted that both SSTQR (Bayes) and SSTQR (ML) generally perform better than GBSTQR, with SSTQR (Bayes) performing slightly better than SSTQR (ML). It should also be noted that the optimization in the ML estimation can be considerably more challenging than obtaining posterior samples, since the negative log-likelihood is possibly nonconvex with multiple local minima. To find the ML estimate, we solve a 2501024-dimensional constrained black-box optimization problem.
5 Analysis of ozone concentration data of the US
Ozone concentration data of the US over the last several years can be found at the US EPA website (Source: https://www3.epa.gov/airquality/airdata/ad_data.html). The yearly averages of the daily maximum of the observed hourly ozone concentration values between 9:00 AM and 8:00 PM (daily 1-hour maximum average ozone concentration) and of the daily maximum of the 8-hour running average of the observed hourly ozone concentration values (daily 8-hour maximum average ozone concentration) have been collected at around 1629 sites all over the US over the last several years.
As mentioned on the EPA website (Source: https://www3.epa.gov/region1/airquality/avg8hr.html), in 1997 the EPA replaced the previous 1-hour ozone standard with an 8-hour standard (which is supposed to be more protective) at a level of 84 parts per billion (ppb). The 8-hour primary standard is considered to be met at a given site if the 3-year mean of the annual fourth-highest daily maximum 8-hour average ozone concentration is less than or equal to the proposed level, i.e., 84 ppb as per the EPA standard announced in 1997. Later, in 2008, the EPA changed the 8-hour ozone concentration standard to 75 ppb, and in 2015 it was decreased to 70 ppb owing to the emergence of more scientific evidence regarding the effects of ozone on public health and welfare. As mentioned in a news article (Barboza (2015)), environmentalists and health advocacy groups, including the American Lung Association, endorsed an 8-hour ozone standard of 60 ppb. But Barboza (2015) mentioned that, as per EPA data from 2014, more than 40 million people, or about 1 in every 8 people in the US, live in counties with air pollution levels exceeding the previous ozone standard of 75 ppb.
Here an analysis of the spatiotemporal trend of the daily 1-hour maximum average and daily 8-hour maximum average ozone concentration for the last 10 years, i.e., during the period 2006–2015, is provided using the proposed method. It should be noted that in the given dataset not every site has data for all the years considered, while some sites have multiple observations recorded in the same year, measured using different instruments. The proposed method handles these cases automatically, without any further modification. For the analysis, the Markov chain Monte Carlo (MCMC) sampler has been initialized at the SSTQR (ML) estimate obtained using the GCDVSMS algorithm proposed in Das (2016a), with the tuning parameter values mentioned in the earlier section. For this analysis, 10000 iterations have been performed, with the first 1000 iterations discarded as burn-in. The number of knots of the B-spline basis functions has been chosen based on the AIC criterion as described in Section 4.
For the years 2006, 2010 and 2015, the daily 1-hour maximum and 8-hour maximum average ozone concentrations are plotted over the US at the median level. It is noted that in most parts of the US the daily 1-hour maximum average ozone concentration has decreased over time. Specifically, the decreasing trend of the 1-hour concentration is noticeable in the West North Central, Upper Midwest, Northeast and southern part of the South climate regions (see Figures 1 and 2). The daily 8-hour maximum average ozone concentration level also has an overall decreasing trend over the time period considered. The 8-hour concentration has decreased considerably in the upper Northwest and West North Central regions, and it has also decreased in the Northeast, South and Southeast regions (see Figures 4 and 5). In Figure 6 we also show the quantile levels across the US at which the 4th highest daily 8-hour maximum ozone concentration equals 70 ppb. It is noted that over the last ten years the 4th highest daily 8-hour maximum ozone concentration in the US generally has a decreasing trend, since the 70 ppb ozone concentration is attained at relatively higher quantiles in the later years.
6 Analysis of ozone concentration data of California
As mentioned in Barboza (2015), California has the worst smog in the US and does not meet the existing smog limits. The article also says that the air quality is worst in the inland valleys, where pollution from vehicles and factories yields ozone in the presence of sunlight, and that ozone is blown and trapped against the mountains. Some areas in California are so polluted that, as per the EPA, they are expected to meet the standards only by 2037, needing 12 more years beyond the rest of the nation. According to the state Air Resources Board, about one-third of California's population lives in communities where the pollution level exceeds federal standards. As mentioned on the California Environmental Protection Agency site (Source: https://www.arb.ca.gov/research/aaqs/caaqs/ozone/ozone.htm), in April 2005 the Air Resources Board retained the previous 1-hour ozone standard of 90 ppb and set a new 8-hour standard of 70 ppb. This ozone standard review was also mandated by the Children’s Environmental Health Protection Act.
In this section, the spatiotemporal trends of the daily 1-hour maximum and 8-hour maximum average ozone concentration data of California are analyzed over the time period 2006–2015, and spatial plots are shown for the years 2006, 2010 and 2015 at the quantile levels considered. For those three years, we also show the quantile levels across California at which the 4th highest daily 8-hour maximum ozone concentration equals 70 ppb and the 4th highest daily 1-hour maximum ozone concentration equals 90 ppb. As in the previous section, the SSTQR (Bayes) method is used for the analysis, with the SSTQR (ML) estimate as the starting point of the MCMC chain. In this case also, 10000 iterations are performed and the first 1000 iterations are discarded as burn-in. It is noted that overall there is a decreasing trend in the daily 1-hour maximum average concentration level at both quantile levels considered. Specifically, in Northern California, the Sacramento Region, the Central Valley, the High Sierra Desert, Los Angeles and San Diego (see Figures 7 and 8), the 1-hour maximum average ozone concentration has noticeably decreased over time. It is also noted that in the Inland Empire region it has a slightly increasing trend over time. As mentioned earlier, unlike other parts of the US, the Air Resources Board retained the 1-hour ozone standard in California, and its 8-hour standard has been 70 ppb for the last ten years, while until September 2015 the 8-hour standard for the rest of the country was greater than or equal to 75 ppb. Thus, to comply with the stricter rules, tighter controls were imposed on factories, vehicles, power plants and other emitters of smog-forming pollutants, and the overall daily 1-hour maximum average ozone concentration of California shows a decreasing trend.
The change in the daily 8-hour maximum average ozone concentration varies considerably over space, and no particular temporal trend is observed for California as a whole (see Figure 9). In the High Sierra Desert and the northeast part of Northern California, the 8-hour maximum average ozone concentration has a decreasing trend over time at both quantiles. On the other hand, in the Sacramento Region, the Inland Empire and San Diego, it has an increasing trend over time. Not much temporal variation has been observed in the Bay Area and the Central Coast for either the 1-hour maximum or the 8-hour maximum average ozone concentration level. In the High Sierra Desert and the Inland Empire, the ozone concentration level is changing rapidly.
In Figures 9(a), 9(b) and 9(c), it is noted that in Northern California, the Bay Area, the Sacramento Region and the upper part of the Central Valley, the 4th highest daily 8-hour maximum ozone concentration threshold, i.e., 70 ppb, is attained at very low quantile levels. In the Los Angeles and San Diego areas this threshold value is attained at a higher quantile level. In Figures 9(d), 9(e) and 9(f), we note that at the junction of the Bay Area, the Sacramento Region, southern Northern California and the upper Central Valley, the 4th highest daily 1-hour maximum ozone concentration threshold, i.e., 90 ppb, is attained at a lower quantile level compared to other parts of California. It is also noted that, with time, the ozone concentration is decreasing in this region. In the southern part of California, the threshold for the daily 1-hour maximum ozone concentration is met at a higher quantile level, indicating that southern California is performing better in maintaining the ozone concentration below the recommended threshold level. But in the San Diego area, a strictly increasing pattern of ozone concentration is noted.
7 Discussion
As far as spatiotemporal regression is concerned, one of the major issues is the presence of missing temporal data at some locations. One way to address this is to replace the missing data with estimated values based on the available data, but it makes more sense to fit the model on the available data without extrapolation or imputation of missing values. In our regression model, the number of time points with available data can differ across sites (see Equation (9)), so this problem does not arise in the proposed method.
When the sample size at a location is small, the separate temporal quantile regression curve estimated at that location by the method of Reich (2012) can be heavily affected by sampling fluctuations. If many temporal data are missing at some locations, or if data are available at only a few temporal points, it is desirable, instead of fitting separate quantile curves for all sites, to fit a quantile curve considering the available data of all the sites together. Thus, estimation in the neighborhood of locations with little temporal data can be improved by borrowing strength from neighboring sites. Hence the proposed method yields better estimates and is less affected by missing data and data scarcity.
In some approaches to spatiotemporal regression and time-series analysis, the temporal data are required to be observed periodically. In the proposed method, the temporal data are not required to be periodic. Thus we conclude that the proposed approach gives a natural Bayesian method for estimation and uncertainty quantification in simultaneous spatial quantile regression. The proposed method is flexible with respect to missingness, monotonicity and low sample sizes at individual locations. It improves the quality of the inference by borrowing strength across all locations and all quantile levels through a natural nonparametric smoothing in its prior construction. By using B-spline functions in its smoothing, the method allows a relatively more efficient approach to likelihood evaluation compared with an analogous procedure based on Gaussian processes.
As mentioned in Barboza (2015), according to the EPA, the ozone level over the US has decreased by about one-third since 1980 owing to regulations targeting emissions from cars, factories, consumer products and other sources of pollutants. In this analysis, it is observed that overall there is a decreasing trend in the daily 1-hour and 8-hour maximum average ozone concentration levels in the US during the period 2006–2015. Specifically, in the last ten years, the ozone concentration has decreased considerably in the northern part of the US. In California, it is noted that there is an overall decreasing trend in the 1-hour concentration, while there is no specific trend for the 8-hour concentration, which varies considerably over space. According to Barboza (2015), except for a few places in California, the rest of the US is expected to comply with the EPA ozone standards by 2025. Some of the most polluted areas in Southern California and the San Joaquin Valley are expected to comply by 2037.
Acknowledgements
This project was partially supported by NSF grant number DMS 1510238.
References
 Barboza (2015) Barboza T. 2015. New attack on California’s dirty air, Los Angeles Times, Available at http://www.latimes.com/local/lanow/lamelnwhatnewsmogrulesmeanforcaliforniaairpollution20150930story.html.
 Bethke (1980) Bethke A. 1980. Genetic algorithms as function optimizers, Available at https://deepblue.lib.umich.edu/handle/2027.42/3572.
 Bondell et al. (2010) Bondell HD, Reich BJ, Wang H. 2010. Noncrossing quantile regression curve estimation, Biometrika, 97: 825–838.
 Chib and Greenberg (1995) Chib S, Greenberg E. 1995. Understanding the Metropolis-Hastings algorithm, The American Statistician, 49(4): 327–335.
 Das (2016a) Das P. 2016. Blackbox optimization on multiple simplex constrained blocks, arXiv:1609.02249v1.
 Das (2016b) Das P. 2016. Derivativefree efficient global optimization of function of parameters from bounded intervals, arXiv:1604.08616v1.
 Das (2016c) Das P. 2016. Derivativefree efficient global optimization on highdimensional simplex, arXiv:1604.08636v1.
 Das and Ghosal (2016) Das P, Ghosal S. 2016. Bayesian quantile regression using random B-spline series prior, Computational Statistics and Data Analysis (To appear, DOI: 10.1016/j.csda.2016.11.014) arXiv:1609.02950v1.
 Boor (2001) Boor C. 2001. A Practical Guide to Splines, Revised Edition, New York: Springer.
 Dunson and Taylor (2005) Dunson DB, Taylor J. 2005. Approximate Bayesian inference for quantiles, Journal of Nonparametric Statistics, 17(3): 385–400.
 Dunson and Park (2008) Dunson DB, Park J. 2008. Kernel stickbreaking processes, Biometrika, 95: 307–323.
 Fraser (1957) Fraser A. 1957. Simulation of genetic systems by automatic digital computers, Australian Journal of Biological Sciences, 10: 484–491.
 Gelfand and Kottas (2003) Gelfand AE, Kottas A. 2003. Bayesian semiparametric regression model for median residual life, Scandinavian Journal of Statistics, 30(4): 651–665.
 Gelfand et al. (2005) Gelfand AE, Kottas A, MacEachern SN. 2005. Bayesian nonparametric spatial modeling with Dirichlet process mixing, Journal of the American Statistical Association, 100: 1021–1035.
 Goldberg (1989) Goldberg D. 1989. Genetic Algorithms in Search, Optimization and Machine Learning, AddisonWesley Publishing Company.
 Granville et al. (1994) Granville V, Krivanek M, Rasson J. 1994. Simulated annealing: A proof of convergence, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16: 652–656.
 Griffin and Steel (2006) Griffin JE, Steel MFJ. 2006. Order-based dependent Dirichlet processes, Journal of the American Statistical Association, 101: 179–194.
 He (1997) He X. 1997. Quantile curves without crossing, American Statistician, 51: 186–192.
 Kirkpatrick et al. (1983) Kirkpatrick S, Gelatt Jr C, Vecchi M. 1983. Optimization by simulated annealing, Science, 220: 671–680.
 Koenker (2005) Koenker R. 2005. Quantile Regression, Cambridge: Cambridge University Press.
 Koenker and Bassett (1978) Koenker R, Bassett G. 1978. Regression quantiles, Econometrica, 46: 33–50.
 Kottas and Gelfand (2001) Kottas A, Gelfand A. 2001. Bayesian semiparametric median regression modeling, Journal of the American Statistical Association, 96: 1458–1468.
 Kottas and Krnjajic (2009) Kottas A, Krnjajic M. 2009. Bayesian semiparametric modelling in quantile regression, Scandinavian Journal of Statistics, 36: 297–319.
 Lee and Neocleous (2010) Lee D, Neocleous T. 2010. Bayesian quantile regression for count data with application to environmental epidemiology, Journal of the Royal Statistical Society: Series C, 59: 905–920.
 Neocleous and Portnoy (2008) Neocleous T, Portnoy S. 2008. Quantile curves without crossing, Statistics and Probability Letters, 78: 1226–1229.
 Oh et al. (2011) Oh H, Lee T, Nychka D. 2011. Fast nonparametric quantile regression with arbitrary smoothing methods, Journal of Computational and Graphical Statistics, 20: 510–526.
 Reich (2012) Reich BJ. 2012. Spatiotemporal quantile regression for detecting distributional changes in environmental processes, Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(4): 535–553.
 Reich and Fuentes (2007) Reich BJ, Fuentes M. 2007. A multivariate semiparametric Bayesian spatial modeling framework for hurricane surface wind fields, The Annals of Applied Statistics, 1: 249–264.
 Reich et al. (2011) Reich B, Fuentes M, Dunson D. 2011. Bayesian spatial quantile regression, Journal of the American Statistical Association, 106(493): 6–20.
 Shim et al. (2009) Shim J, Hwang C, Seok K. 2009. Noncrossing quantile regression via doubly penalized kernel machine, Computational Statistics, 24: 83–94.
 Shim and Lee (2010) Shim J, Lee J. 2010. Restricted support vector quantile regression without crossing, Journal of the Korean Data and Information Science Society, 21(6): 1319–1325.
 Sobotka and Kneib (2012) Sobotka F, Kneib T. 2012. Geoadditive expectile regression, Computational Statistics and Data Analysis, 56(4): 755–767.
 Takeuchi (2004) Takeuchi I. 2004. Non-crossing quantile regression curves by support vector and its efficient implementation, Proceedings of 2004 IEEE IJCNN, 1: 401–406.
 Takeuchi et al. (2006) Takeuchi I, Le Q, Sears T, Smola AJ. 2006. Nonparametric quantile estimation, Journal of Machine Learning Research, 7: 1231–1264.
 Tokdar and Kadane (2012) Tokdar S, Kadane JB. 2012. Simultaneous linear quantile regression: a semiparametric Bayesian approach, Bayesian Analysis, 7(1): 51–72.
 Vapnik (1995) Vapnik V. 1995. The Nature of Statistical Learning Theory, New York: Springer-Verlag.
 Wu and Liu (2009) Wu Y, Liu Y. 2009. Stepwise multiple quantile regression estimation using non-crossing constraints, Statistics and Its Interface, 2: 299–310.
 Yu and Moyeed (2001) Yu K, Moyeed RA. 2001. Bayesian quantile regression, Statistics and Probability Letters, 54: 437–447.
Appendix
Appendix A Likelihood computation
Recall that $0 = t_1 < t_2 < \cdots < t_{k+1} = 1$ are the equidistant knots on the interval $[0,1]$ such that $t_{j+1} - t_j = \delta$ for $j = 1, \ldots, k$, and $B_1, \ldots, B_{k+m}$ are the basis functions of B-splines of degree $m$ on the above-mentioned equidistant knots. Then Equation (4) can be written as
$$\xi_l(\tau, s) = \sum_{j=1}^{k+m} \theta_{l,j}(s)\, B_j(\tau), \qquad l = 1, 2. \tag{A.1}$$
We evaluate the likelihood by an approach similar to that of Tokdar and Kadane (2012) and Das and Ghosal (2016). Suppose that we have data for $n$ spatial sites and that at the $i$th site $x_{i1}, \ldots, x_{in_i}$ denote the values of the explanatory variable (time) and $y_{i1}, \ldots, y_{in_i}$ the values of the response variable, $n_i$ being the number of data points, for $i = 1, \ldots, n$. The likelihood is given by $L = \prod_{i=1}^{n} L_i$, where $L_i = \prod_{u=1}^{n_i} f(y_{iu} \mid x_{iu}, s_i)$.
Now,
$$f(y \mid x, s) = \left[\frac{\partial}{\partial \tau} Q(\tau \mid x, s)\Big|_{\tau = \tau_{x,s}(y)}\right]^{-1}, \tag{A.2}$$
where $\tau_{x,s}(y)$ is the solution of
$$Q(\tau \mid x, s) = y. \tag{A.3}$$
For any given location $s$, since $\xi_1(\cdot, s)$ and $\xi_2(\cdot, s)$ are strictly monotonic, their convex combination $Q(\tau \mid x, s) = (1 - x)\,\xi_1(\tau, s) + x\,\xi_2(\tau, s)$ is also strictly monotonic in $\tau$. Hence there exists a unique solution of equation (A.3). Using the properties of the derivative of a B-spline (Boor (2001)), we have
$$\frac{\partial}{\partial \tau} Q(\tau \mid x, s) = (1 - x)\,\xi'_1(\tau, s) + x\,\xi'_2(\tau, s), \qquad \xi'_l(\tau, s) = \sum_{j=1}^{k+m-1} \theta^{*}_{l,j}(s)\,\tilde{B}_j(\tau), \tag{A.4}$$
where $\tilde{B}_1, \ldots, \tilde{B}_{k+m-1}$ are the B-spline basis functions of degree $m - 1$ on the same knots and, with the knot sequence extended by repeating the boundary knots,
$$\theta^{*}_{l,j}(s) = m\,\frac{\theta_{l,j+1}(s) - \theta_{l,j}(s)}{t_{j+m+1} - t_{j+1}}$$
for $l = 1, 2$ and $j = 1, \ldots, k+m-1$. Now, using equations (A.2) and (A.4), we have
$$f(y \mid x, s) = \Big[(1 - x)\,\xi'_1\big(\tau_{x,s}(y), s\big) + x\,\xi'_2\big(\tau_{x,s}(y), s\big)\Big]^{-1}. \tag{A.5}$$
Hence the total log-likelihood is given by
$$\log L = -\sum_{i=1}^{n}\sum_{u=1}^{n_i} \log\Big[(1 - x_{iu})\,\xi'_1(\tau_{iu}, s_i) + x_{iu}\,\xi'_2(\tau_{iu}, s_i)\Big], \tag{A.6}$$
where $\tau_{iu} = \tau_{x_{iu}, s_i}(y_{iu})$.
We note that the parameters of the likelihood are the coefficients of the B-spline basis expansions of $\xi_1$ and $\xi_2$, which are
$$\big\{\theta_{l,j}(s_i)\big\} \tag{A.7}$$
for $l = 1, 2$, $j = 1, \ldots, k+m$ and $i = 1, \ldots, n$.
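The likelihood evaluation of (A.2)–(A.3) can be sketched numerically as follows. This is a minimal illustration, not the paper's implementation: it assumes a single site, degree-2 B-splines, and hypothetical coefficient vectors `theta1` and `theta2` standing in for the expansion coefficients of $\xi_1$ and $\xi_2$. The density of an observation is the reciprocal of the derivative of the quantile function at the root of $Q(\tau \mid x, s) = y$, which is unique because the quantile function is strictly increasing in $\tau$.

```python
# Sketch of the Appendix A likelihood under illustrative assumptions:
# one site, quadratic B-splines, made-up increasing coefficients.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import brentq

deg = 2                                  # B-spline degree m (assumed)
interior = np.linspace(0.0, 1.0, 7)      # equidistant knots on [0, 1]
# Clamped (repeated boundary) knot vector so the spline covers all of [0, 1].
knots = np.concatenate(([0.0] * deg, interior, [1.0] * deg))
k = len(knots) - deg - 1                 # number of basis functions

# Strictly increasing coefficients => strictly increasing quantile curves.
theta1 = np.cumsum(np.full(k, 0.5))      # coefficients of xi_1 (time x = 0)
theta2 = np.cumsum(np.full(k, 0.8))      # coefficients of xi_2 (time x = 1)
xi1 = BSpline(knots, theta1, deg)
xi2 = BSpline(knots, theta2, deg)

def Q(tau, x):
    """Quantile function: convex combination of two monotone curves."""
    return (1.0 - x) * xi1(tau) + x * xi2(tau)

def log_density(y, x):
    """log f(y|x) = -log Q'(tau*), tau* solving Q(tau, x) = y; cf. (A.2)-(A.3)."""
    tau_star = brentq(lambda t: Q(t, x) - y, 0.0, 1.0)  # unique by monotonicity
    dQ = (1.0 - x) * xi1.derivative()(tau_star) + x * xi2.derivative()(tau_star)
    return -np.log(dQ)

# Total log-likelihood, as in (A.6), over a toy data set.
xs = np.array([0.1, 0.5, 0.9])                          # rescaled times
ys = np.array([Q(0.3, 0.1), Q(0.5, 0.5), Q(0.7, 0.9)])  # responses in range
loglik = sum(log_density(y, x) for x, y in zip(xs, ys))
print(loglik)
```

In a full implementation, the root-finding step would be repeated for every observation at every site, with one coefficient pair $(\theta_{1,\cdot}(s_i), \theta_{2,\cdot}(s_i))$ per site; `scipy`'s `derivative()` plays the role of the B-spline derivative formula used in (A.4).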
Appendix B Block Metropolis-Hastings MCMC algorithm
Recall that
$$\theta_{l,j}(s) = \theta_{l,1}(s) + \big(\theta_{l,k+m}(s) - \theta_{l,1}(s)\big)\sum_{j'=2}^{j} \gamma_{l,j'}(s) \tag{A.8}$$
for $l = 1, 2$ and $j = 2, \ldots, k+m$. It is noted that $\gamma_1(s)$ and $\gamma_2(s)$ lie on the unit simplex for any given $s$. We use the block Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm (see Chib and Greenberg (1995)) for sampling from the posterior distribution. Note that the number of unit-simplex blocks is $2n$. In the MCMC, a move is initiated on each unit-simplex block in a loop. During the updating stage of a single unit-simplex block, similarly to the updating strategy used in Das and Ghosal (2016), independent sequences of Gaussian increments are generated with a fixed tuning variance. Based on these increments, auxiliary quantities are defined for each coordinate of the block, and the proposal moves are given by