SDIMM+ height characterization of daytime seeing using solar granulation
Key words: solar telescopes – adaptive optics – site characterization

Abstract
Context: To evaluate site quality and to develop multi-conjugate adaptive optics systems for future large solar telescopes, characterization of contributions to seeing from heights up to at least 12 km above the telescope is needed.
Aims: We describe a method for evaluating contributions to seeing from different layers along the line of sight to the Sun. The method is based on Shack-Hartmann wavefront sensor data recorded over a large field of view with solar granulation and uses only measurements of differential image displacements from individual exposures, such that the measurements are not degraded by residual tip-tilt errors.
Methods: The covariance of differential image displacements at variable field angles provides a natural extension of the work of Sarazin and Roddier to include measurements that are also sensitive to the height distribution of seeing. By extending the numerical calculations of Fried to include differential image displacements at separations much smaller and much larger than the subaperture diameter, the wavefront sensor data can be fitted to a well-defined model of seeing. The resulting least-squares fit problem can be solved with conventional methods. The method is tested with simple simulations and applied to wavefront data from the Swedish 1-m Solar Telescope on La Palma, Spain.
Results: We show that good inversions are possible with 9–10 layers, three of which are within the first 1.5 km, and a maximum distance of 16–30 km, but with poor height resolution in the range 10–30 km.
Conclusions: We conclude that the proposed method allows good measurements when Fried's parameter r0 is larger than about 7.5 cm for the ground layer and that these measurements should provide valuable information for site selection and multi-conjugate adaptive optics development for the future European Solar Telescope. A major limitation is the large field of view presently used for wavefront sensing, leading to uncomfortably large uncertainties in r0 at 30 km distance.
1 Introduction
Presently, the most commonly used method for quantifying astronomical seeing is the differential image motion monitor (DIMM), proposed for ESO site testing by Sarazin & Roddier (1990) and relying on theoretical calculations and suggestions of Fried (1975). The DIMM built for ESO consists of a single 35 cm telescope with a pupil mask that allows images of bright stars to be monitored simultaneously through two small subapertures, separated spatially on a CCD by means of a beam splitter. The advantage of this instrument is that differential image displacements can be measured without impact from telescope tracking errors due to, e.g., wind load. Comparison with the theoretical calculations of Fried then allows accurate seeing characterization in terms of the so-called Fried parameter r0 or an equivalent 'seeing' disk diameter ε, related to r0 via ε ≈ λ/r0, where λ is the wavelength. In the following, we refer all values of r0 to a wavelength of 500 nm.
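As a quick numerical illustration of this relation, a minimal sketch (the approximation ε ≈ λ/r0 and the conversion factor 206265 arcsec/rad are standard; function name is ours):

```python
import math

def seeing_arcsec(r0_m, wavelength_m=500e-9):
    """Equivalent seeing disk diameter, epsilon ~ lambda / r0, converted
    from radians to arcseconds (1 rad ~ 206265 arcsec)."""
    return (wavelength_m / r0_m) * 206265.0

# r0 = 10 cm at 500 nm corresponds to roughly 1 arcsec seeing
print(round(seeing_arcsec(0.10), 2))  # → 1.03
```

Note that smaller r0 (stronger turbulence) gives a larger seeing disk, since ε scales as 1/r0.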
An instrument similar to the DIMM, the SDIMM (Lui & Beckers, 2001; Beckers, 2002), was built for site testing for the 4-m Advanced Technology Solar Telescope (ATST), but using one-dimensional differential motion of the solar limb, measured through two 5 cm subapertures.
A unique advantage of solar observations is the availability of fine structure for wavefront sensing nearly everywhere on the solar surface. This allows large solar telescopes to operate with adaptive optics systems that lock on the same target as observed with the science cameras. A particular problem of wavefront sensing with solar telescopes, however, is that low-contrast granulation structures must be usable as targets, and this necessitates the use of a fairly large field of view (FOV) for wavefront sensing. An important consequence is that contributions from higher layers are averaged over an area that increases with height. This degrades the sensitivity to high-altitude seeing.
Within the framework of a design study for the European Solar Telescope (EST), a program for studying two potential sites, one on La Palma and one on Tenerife, has been initiated. The goal is to characterize the height distribution of contributions to seeing at the two sites and to define requirements for a multi-conjugate adaptive optics (MCAO) system for the EST in good seeing conditions. At present, very little work has been published on characterization of high-altitude seeing based on wavefront sensors operating on solar telescopes. Measurements of scintillation of sunlight with a linear array of detectors have been shown to be sensitive to the height distribution of seeing contributions (Seykora, 1993; Beckers, 1993, 2002). Because of the integration of contributions to scintillation over the large solid angle subtended by the solar disk, an array of detectors with a fairly large baseline is needed to achieve sensitivity up to a height of only 500 meters (Beckers, 1993). Instrumentation that utilizes this technique, referred to as 'SHAdow BAnd Ranging' (SHABAR; Beckers, 2001), was built to characterize near-ground seeing in connection with site testing for the ATST (Hill et al., 2006). A similar instrument, but with a much longer baseline (3.2 m) than that used for the ATST, is under construction for EST site testing (Collados, private communication).
The first published Shack-Hartmann (SH) based measurements of high-altitude seeing with solar telescopes are those of Waldmann et al. (2008; 2010, in preparation), based on the covariance of local image displacements from different subapertures of a wide-field wavefront sensor (WFWFS). However, methods that are based on covariances (or correlations) of absolute image displacements are sensitive to tracking errors and vibrations in the telescope (Fried, 1975). Waldmann et al. (2008; 2010, in preparation) therefore also implemented modeling of the compensation of such image displacements with a tip-tilt mirror and adaptive optics (AO), assumed to correct Zernike aberrations. A technique related to that proposed below is the SLODAR (Wilson, 2002) for nighttime measurements of seeing. The principle of this instrument is to use the cross-correlation of image motion measured for binary stars with a SH wavefront sensor through different subapertures. To eliminate the impact of telescope guiding errors, the average image motion of both stars measured with all subapertures is subtracted from the image motions of individual subapertures for each exposure separately. However, this procedure also removes the common atmospheric wavefront tilt, averaged over the entire telescope aperture for each of the two stars observed. This introduces an anisoplanatic component of the wavefront (Butterley et al., 2006). As shown by the same authors, this can be compensated for in the analysis of the data, but at the price of making the analysis considerably more complex than in the method originally proposed by Wilson (2002).
In the present paper, we propose a method that is also insensitive to image motion from the telescope or residual errors from a tip-tilt mirror, but quite simple to model. The present method relies on measurements of the covariance of differential image displacements (between two subapertures) at different field angles. As we shall see, this approach can be considered as a natural extension of the DIMM and SDIMM approach to include measurements that are sensitive to the height variation of seeing, and we therefore refer to the proposed instrument as SDIMM+. We show that SDIMM+ data can be inverted using calculations made by Fried (1975) to model contributions from different layers along the line of sight (LOS). The paper is organized as follows: In Sect. 2, we describe the assumptions made and the method, and extend the calculations made by Fried (1975). In Sect. 3, we describe the optical setup and the algorithms used to measure differential image displacements and to compensate for noise bias. In Sect. 4, we describe the results of the first data obtained with the SDIMM+ installed at the Swedish 1-m Solar Telescope (SST) on La Palma, and in Sect. 5 we give concluding remarks about the proposed method and its limitations.
2 Method
We make the following assumptions:

1. A measurement of image position is equivalent to a measurement of the slope of the wavefront over the subaperture.

2. A measurement of image position averaged over the FOV used is equivalent to averaging the wavefront Zernike tip or tilt over the solid angle corresponding to the subaperture and the FOV.

3. The atmosphere is modeled as consisting of discrete (thin) layers.

4. The contributions from different layers are statistically independent of each other.

5. The turbulent fluctuations in refractive index giving rise to the inferred wavefront slopes are statistically homogeneous and isotropic within a given layer. These fluctuations can be characterized in terms of a structure function based on Kolmogorov turbulence.

6. Propagation and saturation effects are negligible. For a discussion of these effects on DIMM measurements, see Tokovinin & Kornilov (2007).
The present method relies on using individual exposures to measure relative image displacements of the same target observed through different subapertures. These restrictions are essential: by measuring differential image displacements from an individual exposure, errors from telescope movements are eliminated; by using the same target observed through different subapertures, we can use cross-correlation techniques with arbitrary solar fine structure to measure relative image displacements. We shall use the covariance of two such
differential measurements, taken at different field angles, to achieve our goal of characterizing seeing. Furthermore, we restrict the present analysis to measuring only relative image displacements that are either along (longitudinal, or x-component) or perpendicular (transverse, or y-component) to the line connecting the centers of the two subapertures. Because of the assumed isotropy and homogeneity of the turbulent seeing fluctuations, the problem is then one-dimensional, such that the measured relative image displacements can only depend on the distance between the two measurement points. Without loss of generality, we can assume that one of the subapertures is located at the origin, whereas the second subaperture is located at a distance s from that subaperture. Furthermore, we assume that the field angle of the first measurement is zero, that of the second measurement is α, and that this field-angle offset is along the x-component of the image displacements. For the first measurement, the measured quantity δx(s, 0) then corresponds to added contributions from the layers, located at heights h_k,

δx(s, 0) = Σ_k [x_k(0, 0) − x_k(s, 0)],   (1)

where x_k(u, β) denotes the image displacement contributed by layer k for a subaperture at pupil position u and field angle β. That of the second measurement, δx(s, α), is then

δx(s, α) = Σ_k [x_k(0, α) − x_k(s, α)].   (2)
Because of the assumed independence of contributions from different layers, the covariance between δx(s, 0) and δx(s, α) is given by

C(s, α) = ⟨δx(s, 0) δx(s, α)⟩ = Σ_k ⟨[x_k(0, 0) − x_k(s, 0)][x_k(0, α) − x_k(s, α)]⟩,   (3)

where ⟨·⟩ denotes averages over many exposed frames. Evaluating the four terms one by one and taking advantage of the assumed homogeneity of the statistical averages at each height h_k, we can rewrite Eq. (3) as a combination of three variances of differential image displacements,

C(s, α) = Σ_k [ ½σ_k²(s + h_k α) + ½σ_k²(s − h_k α) − σ_k²(h_k α) ],   (4)

where σ_k²(u) denotes the variance of the differential image displacement from layer k at a footprint separation u.
This form of the equation clearly shows the connection to DIMM (Sarazin & Roddier, 1990) and SDIMM measurements. When α = 0, the last term disappears, the first two terms are equal, and we have a conventional DIMM, except that the contributions from the high layers are reduced by the averaging of image displacements over the relatively large FOV (see below).
Sarazin & Roddier (1990) gave approximate equations for estimating the variance of differential image displacements recorded with two subapertures of diameter d, separated by a distance s:

σ_x²(s) ≈ 2λ² r0^(−5/3) (0.179 d^(−1/3) − 0.0968 s^(−1/3))   (5)

for longitudinal image displacements and

σ_y²(s) ≈ 2λ² r0^(−5/3) (0.179 d^(−1/3) − 0.1450 s^(−1/3))   (6)
for transverse image displacements. Here, d is the subaperture diameter and r0 is Fried's parameter. Adopting their notation and that of Fried (1975), we rewrite these equations as

σ_x²(s) = 0.358 λ² r0^(−5/3) d^(−1/3) F_x(s/d),   (7)

σ_y²(s) = 0.358 λ² r0^(−5/3) d^(−1/3) F_y(s/d),   (8)
where it is assumed that the two subapertures are separated along the x-axis. The functions F_x and F_y are normalized such that they approach unity when s/d approaches infinity, and they are symmetric, F(s/d) = F(−s/d). With an assumed zero inner scale and an infinite outer scale for the turbulence, these functions can only depend on the separation s divided by the diameter d of the averaging (round) aperture. The approximations given by Sarazin and Roddier are reasonably accurate only when the separation is larger than about twice the aperture diameter. However, to implement the present method we need more accurate estimates of F_x and F_y also when s/d approaches zero, and we need to take into account the averaging effect of using a large FOV for wavefront sensing (see below). We have extended the calculations of Fried (1975, his Eq. (32)) of F_x and F_y to cover both s/d < 1 and s/d as large as 50, the latter corresponding to a distance of 5 m with 10 cm subapertures. The results of these calculations agree with those of Fried at small s/d but deviate systematically with increasing s/d, the difference reaching 10% at large separations. As suspected by Sarazin & Roddier (1990), this suggests numerical inaccuracies in the calculations of Fried. We confirm this by noting that the calculations of Fried show a slight decrease of the longitudinal differential image displacement variance at the largest separations, which is clearly not physical. We obtain similar behavior in our calculations when the step size of one of the integration variables in Fried's Eq. (32) is too large, and have decreased that step size by approximately a factor of 80 compared to calculations that essentially reproduce the results of Fried. With this improvement, the calculations show a monotonic increase of F_x and F_y with s/d over the entire range computed, which is a much larger range than needed for the present analysis. We therefore believe that our calculations are more accurate than those of Fried (1975).
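The approximate variances of Eqs. (5) and (6) are easy to evaluate numerically. A minimal sketch in Python, using the coefficients published by Sarazin & Roddier (1990); all lengths in meters, output variances in rad², and valid only for separations of at least about twice the aperture diameter:

```python
import numpy as np

def dimm_variances(r0, d, s, lam=500e-9):
    """Approximate variances of longitudinal and transverse differential
    image motion for two apertures of diameter d separated by s,
    after Sarazin & Roddier (1990)."""
    common = 2.0 * lam**2 * r0**(-5.0 / 3.0)
    var_l = common * (0.179 * d**(-1.0 / 3.0) - 0.0968 * s**(-1.0 / 3.0))
    var_t = common * (0.179 * d**(-1.0 / 3.0) - 0.1450 * s**(-1.0 / 3.0))
    return var_l, var_t

# Longitudinal motion always exceeds transverse motion at the same separation,
# and both scale as r0^(-5/3):
vl, vt = dimm_variances(r0=0.10, d=0.098, s=0.5)
```

Halving r0 increases both variances by a factor 2^(5/3) ≈ 3.17, which is why a strong ground layer easily dominates the measured signal.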
For wavefront sensing with an extended target, the averaging area corresponds to the subpupil area only close to the telescope. The effective averaging area expands from the pupil and up by an amount that increases with the FOV used for wavefront sensing. A reasonable estimate of the effective diameter can be obtained by calculating the convolution of the binary aperture and the FOV used for wavefront sensing, projected at the height h. At small heights, the convolved function will be unity over an area corresponding to the subpupil and gradually fall off outside the subpupil. By calculating the area of the convolved binary pupil and FOV and equating that to the area of a circle, an effective diameter D can be defined. A good approximation is to set D to the maximum of d and hβ, where β is the (average) angular diameter of the FOV. At large heights, D is much larger than the subpupil diameter d. At a distance of 30 km and with a FOV diameter of 5.5 arcsec, we have D ≈ 0.8 m, which is about 8 times larger than the subaperture diameter d = 0.098 m used for recording data. The averaging effect of the large FOV used thus has a strong effect on the determination of r0 from differential image displacement measurements for the higher layers. In particular, we should not expect the method to work well, or even at all, when D is comparable to or larger than the largest available subaperture separation. This 'cone effect' is similar to that of measurements of the integrated scintillation from the entire solar disk (Beckers, 1993), but of much reduced magnitude, since the SH measurements use a typical FOV of 5–6 arcsec diameter, whereas the solar disk subtends a diameter more than 300 times larger.
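The effective-diameter approximation (the larger of the subaperture diameter and the projected FOV footprint) is trivial to evaluate. A minimal sketch, using the 5.5 arcsec FOV and 9.8 cm subapertures of this paper as defaults:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def effective_diameter(h, d=0.098, fov_arcsec=5.5):
    """Effective averaging diameter (m) at distance h (m): the larger of
    the subaperture diameter and the FOV footprint projected at h."""
    return max(d, h * fov_arcsec * ARCSEC)

# Near the pupil the subaperture dominates; at 30 km the FOV footprint does.
print(effective_diameter(0.0))             # 0.098
print(round(effective_diameter(30e3), 2))  # ~0.8
```

The crossover happens where hβ = d, i.e. near h ≈ 3.7 km for these parameters; above that height the FOV, not the subaperture, sets the averaging scale.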
We conclude that we can use the theory developed by Fried (1975) for modeling the differential image displacements measured with the proposed method by simply replacing s/d with s/D(h) and the subpupil diameter d with the effective diameter D(h). To make the averaging area roundish, we simply apply an approximately round binary mask when measuring image displacements from the granulation images with cross-correlation techniques. Combining Eqs. (4), (7) and (8), we now obtain
C_x(s, α) = Σ_k a_k G_x,k(s, α),   (9)

C_y(s, α) = Σ_k a_k G_y,k(s, α),   (10)

where

G_x,k(s, α) = D_k^(−1/3) [ ½F_x((s + h_k α)/D_k) + ½F_x((s − h_k α)/D_k) − F_x(h_k α/D_k) ],   (11)

G_y,k(s, α) = D_k^(−1/3) [ ½F_y((s + h_k α)/D_k) + ½F_y((s − h_k α)/D_k) − F_y(h_k α/D_k) ],   (12)

and D_k = D(h_k) is the effective averaging diameter at the height of layer k. The coefficients are given by

a_k = 0.358 λ² r_{0,k}^(−5/3),   (13)

expressed in terms of Fried's parameter r_{0,k} of each layer. We can also express the results in terms of the turbulent strength of each layer, c_k,

c_k = C_{n,k}² δh_k,   (14)

where C_n² is the atmospheric refractive-index structure constant and δh_k is the thickness of each (thin) layer. Using r_{0,k}^(−5/3) = 0.423 (2π/λ)² C_{n,k}² δh_k, we obtain the relation

a_k = 0.358 · 0.423 · (2π)² c_k ≈ 5.98 c_k,   (15)

which, as expected for image motion, is independent of wavelength.
Figure 2 shows the expected theoretical covariance functions given by Eqs. (11) and (12) for single seeing layers at different heights. We made these calculations assuming a 5.5 arcsec diameter for the FOV used for wavefront sensing. We assumed ten subapertures of 9.8 cm diameter each across the 98 cm SST pupil diameter, giving a total of 85 well-illuminated subapertures within the pupil. The maximum field angle of 46.4 arcsec corresponds to the WFWFS built for the Swedish 1-m Solar Telescope. The calculations were made for heights of 0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 6.0, 9.5, 16 and 30 km. From these covariance functions we conclude that the SDIMM+ should be able to distinguish seeing at the pupil (h = 0) from that at a height of about 500 m above the pupil, and that the angular resolution is adequate to allow measurements up to a height of about 20–30 km. However, the similarity between the covariance functions at 16 and 30 km height clearly demonstrates poor height resolution at these heights. This is a direct consequence of the large FOV used for wavefront sensing. As can be seen in the figure, the minimum height that can be resolved is set by the maximum field angle, while the maximum height for which meaningful inversions can be made is set by the diameter of the FOV (Wilson, 2002). A particular feature in the covariance functions shown is the tilted dark line, marking minimum covariance. This corresponds to h_k α = s, i.e., where the beam at field angle α from the subaperture at the origin crosses the beam at field angle zero from the subaperture at separation s — see Fig. 1. This tilted dark line in principle allows direct measurement of the height of a single strong seeing layer, if that layer has a small vertical extent.
3 Implementation
3.1 Wavefront sensor description
The wide-field wavefront sensor (WFWFS) is mounted immediately below the vacuum system of the SST. Light to the WFWFS is fed from a FOV adjacent to the science FOV and deflected horizontally, such that the WFWFS beam does not pass the SST tip-tilt and AO system. The WFWFS optics consists of a field stop, a collimator lens and an array of 85 hexagonal microlenses within the 98 cm pupil diameter, the layout of which is shown in Fig. 3. The microlenses have an equivalent diameter close to 9.8 cm. We record data through a 10 nm FWHM interference filter with a central wavelength of 500 nm, using a Roper Scientific 4020 CCD operating at a frame rate of approximately 9 Hz with an exposure time of 3.0 ms. The image scale is 0.344 arcsec per pixel.
3.2 Image shift measurements
The lower panel in Fig. 3 shows a subimage with granulation, recorded with the WFWFS through one of its 85 subapertures. The subimage shown consists of 170×150 pixels, corresponding to 58.4×51.6 arcsec. In the left- and right-hand parts of that panel, a roundish mask is indicated, outlining two subfields at different field angles α, each with a diameter of approximately 16 pixels, or 5.5 arcsec. The granulation pattern within such a mask in this and in another subimage from another subaperture is used to measure differential image shifts between two subapertures, as follows:
With the mask at the first subimage fixed, the mask at the second subimage is moved by (m, n) pixels in the (x, y) image plane, while accumulating the squared intensity difference between the two images,

c(m, n) = Σ_{i,j} W(i, j) [I_1(i, j) − I_2(i + m, j + n)]²,   (16)

where I_1 and I_2 correspond to the two granulation images and the mask W is unity within its 16-pixel diameter and zero outside. This defines c(m, n) at a number of discrete shifts surrounding the minimum that corresponds to the best-fit differential image shift. To improve the estimated image shift, the quadratic interpolation algorithm proposed by Yi et al. (1992, Eq. (10)) is used. Tests with this image shift measurement algorithm indicate that an RMS error of about 0.02–0.05 arcsec is achievable when r0 is in the range 7–20 cm (Löfdahl, in preparation). Following accepted nomenclature, we refer to the procedure outlined above as a 'cross-correlation' technique and to c(m, n) as a 'correlation function', although strictly speaking we are not measuring image shifts with a cross-correlation technique.
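The squared-difference minimization with sub-pixel refinement can be sketched as follows. This is a simplified stand-in, not the actual implementation: np.roll wrap-around replaces the windowed sums of Eq. (16), and a plain 1D parabola fit replaces the Yi et al. (1992) interpolation:

```python
import numpy as np

def ssd_shift(ref, img, mask, max_shift=3):
    """Return the (x, y) amount by which img must be rolled to best match
    ref, minimizing the masked sum of squared differences and refining the
    integer minimum with a parabolic sub-pixel fit along each axis."""
    n = 2 * max_shift + 1
    c = np.empty((n, n))
    for k in range(-max_shift, max_shift + 1):      # y (row) shifts
        for m in range(-max_shift, max_shift + 1):  # x (column) shifts
            shifted = np.roll(np.roll(img, k, axis=0), m, axis=1)
            c[k + max_shift, m + max_shift] = np.sum(mask * (ref - shifted) ** 2)
    j, i = np.unravel_index(np.argmin(c), c.shape)

    def parabola(cm, c0, cp):
        # Vertex of the parabola through three equidistant samples.
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0.0 else 0.5 * (cm - cp) / denom

    dx = parabola(c[j, i - 1], c[j, i], c[j, i + 1]) if 0 < i < n - 1 else 0.0
    dy = parabola(c[j - 1, i], c[j, i], c[j + 1, i]) if 0 < j < n - 1 else 0.0
    return i - max_shift + dx, j - max_shift + dy
```

In the real measurement, mask would be the roundish 16-pixel binary window W and max_shift kept small, since the differential shifts between subapertures are at most a few pixels.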
3.3 Reference image frame selection and averaging
The low contrast (typically 3% RMS in good seeing) of granulation images observed through 9.8 cm subapertures and the relatively large (about 2 arcsec) granules, combined with the need to use a small FOV for wavefront sensing, represent a significant challenge. Smearing of the granulation images by telescopic or atmospheric aberrations may cause large errors in the measured differential image shifts, or even complete failures. To reduce the risk of image shift measurements that fail completely, we do not perform image shift measurements directly between two arbitrary subimages. Instead, we use reference images, selected as the subimages with the highest RMS contrast. The top panel of Fig. 3 indicates two subapertures corresponding to such selected reference subimages and their relation to the two subapertures, at pupil positions 0 and s, for which differential image shifts are measured. In the middle panel of Fig. 3, the corresponding subimages and the subfields at two field angles, outlined with the roundish mask, are indicated schematically. The arrows in the top two panels of Fig. 3 indicate how image shifts are measured. We first measure the differential image shift between reference subfield 1 and the subfield at s = 0, and also the image shift between reference subfield 1 and the subfield at s. We then calculate the difference between these two measured image shifts. We can express this operation as
δx(s, α) = [x_r(α) − x(0, α)] − [x_r(α) − x(s, α)] = x(s, α) − x(0, α),   (17)

where x_r(α) is the position of the reference image, measured with a subaperture at an (arbitrary) location r on the pupil, and each bracketed difference indicates a measurement obtained with the cross-correlation technique outlined above. This cancels the shift of the cross-correlation reference image, removes the effects of any telescope movements (tip-tilt errors) and produces the relative image positions needed for the analysis. To further reduce noise, we repeat this process 4 times, using the 4 best subimages as references, and average the results of these 4 measurements. We emphasize that this corresponds to averaging 4 measurements of the same quantity and that this is done separately for each pair of subapertures and for the subfields at each field angle α.
3.4 Zero-point references and time averaging
The differential image shift measurements obtained with cross-correlation techniques lack an absolute zero-point reference. Such an absolute reference is in practice impossible to define precisely, and furthermore is not needed. Instead, we rely on the differential image shifts averaging to zero over a sufficiently long time interval. The WFWFS data collection system is set to record bursts of 1000 frames, corresponding to approximately 110 s of wavefront data. In the data reduction, we assume that the seeing-induced differential image shift averaged over 110 s is zero and subtract the average shift measured from the 1000 frames of the burst. This averaging and bias subtraction is done separately for each pair of subfields (at field angles 0 and α) within each pair of subimages at 0 and s, and for each of the 4 cross-correlation reference images. This ensures that the measured covariances do not contain products of averages. Subtracting the bias individually for data from the 4 cross-correlation references is not needed for the covariance functions (we could just subtract the average shift for the image shifts averaged over the four cross-correlation reference images), but it is needed to correctly estimate the noise bias, as described in the following section.
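The zero-point subtraction amounts to removing, per measurement channel, the burst-averaged shift before any covariances are formed. A minimal sketch, with axis 0 indexing the frames of a burst:

```python
import numpy as np

def remove_zero_point(shifts):
    """Subtract the burst-averaged differential shift from every frame,
    separately for each channel (subaperture pair, subfield pair,
    reference image), so covariances contain no products of averages."""
    return shifts - shifts.mean(axis=0, keepdims=True)

# shifts: array of shape (n_frames, n_channels); after the call each
# column averages to zero over the burst.
```

Because the subtraction is done per channel, any static offset (e.g. from an imperfect reference) is removed along with the unknown zero point.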
3.5 Noise bias estimation and compensation
A fundamental limitation of this technique is the strong ground-layer seeing during daytime combined with the relatively weak contributions from higher layers. Because the variance scales as r0^(−5/3) (Eq. (13)), a weak seeing layer contributes a factor (r0,weak/r0,strong)^(5/3) less variance than a strong one. The effective wavefront sensing FOV diameter is 80 cm at 30 km distance (Sect. 2), so the variance contributed from seeing at 30 km distance is additionally reduced, by approximately a factor of two, through the averaging effect of the 5.5 arcsec FOV used to measure image displacements. Measurements of weak high-altitude seeing are therefore quite sensitive to random errors in the measured image positions, in the following referred to simply as 'noise'. In particular, this noise produces bias in the measured covariances, which in turn leads to systematic errors unless the noise bias is compensated for.
To reduce noise, we repeat each differential image shift measurement 4 times with 4 different cross-correlation reference images (Sect. 3.3). The corresponding image shifts are written for i = 1–4 as

z_i(s, α) = x(s, α) + n_i(s, α).   (18)

Here, x is the true shift and n_i is the noise of measurement i. The four measurements are averaged to reduce noise before calculating the covariance matrix. In addition, these four measurements are used to estimate and compensate for covariance noise bias. The measured covariance in the presence of noise is

C̃(s, α) = ⟨[x(s, 0) + n̄(s, 0)][x(s, α) + n̄(s, α)]⟩ = C(s, α) + B(s, α),   (19)

where n̄ is the noise averaged from the four measurements,

n̄(s, α) = (1/4) Σ_{i=1}^{4} n_i(s, α),   (20)

C(s, α) is the covariance matrix in the absence of noise and the covariance noise bias is

B(s, α) = ⟨n̄(s, 0) n̄(s, α)⟩,   (21)

where terms involving products of the true shifts and the noise are assumed to average to zero. To evaluate the expression for this noise bias, we assume that the noise of each measurement is uncorrelated with that of the other measurements, ⟨n_i(s, 0) n_j(s, α)⟩ = 0, unless i = j. We then obtain

B(s, α) = (1/16) Σ_{i=1}^{4} ⟨n_i(s, 0) n_i(s, α)⟩.   (22)

To estimate the noise bias directly from the data, we subtract the first measurement from the corresponding average z̄ of all four measurements and calculate the covariance. We obtain the quantity

E_1(s, α) = ⟨[z_1(s, 0) − z̄(s, 0)][z_1(s, α) − z̄(s, α)]⟩.   (23)

Evaluating this expression, noting that the true shift x cancels in z_1 − z̄ = n_1 − n̄, we obtain

E_1(s, α) = (9/16) ⟨n_1(s, 0) n_1(s, α)⟩ + (1/16) Σ_{i=2}^{4} ⟨n_i(s, 0) n_i(s, α)⟩.   (24)

Forming similar expressions E_i, with i = 2, 3, 4, for the remaining three measurements and adding these, we obtain

Σ_{i=1}^{4} E_i(s, α) = (3/4) Σ_{i=1}^{4} ⟨n_i(s, 0) n_i(s, α)⟩.   (25)

Comparing to Eq. (22), this gives the desired estimate of the noise bias as

B(s, α) = (1/12) Σ_{i=1}^{4} E_i(s, α).   (26)

This noise bias is estimated at each (s, α) individually, for each data set separately, and subtracted from the measured covariance matrix C̃(s, α).
We note that Eq. (23) is insensitive to any random errors that are common to (the same for) all 4 measurements. Thus, the proposed noise estimation method is useful to indicate the magnitude of random errors in the measured image shifts, in particular in bad seeing, but is not sufficiently accurate to provide robust estimates of the noise bias. Simulations are needed for proper understanding of noise propagation effects.
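The bias estimate of Eq. (26) generalizes to any number of mutually uncorrelated reference measurements. A minimal sketch of the variance case (the covariance case is analogous), with z holding the same shift measured with n_ref different cross-correlation references:

```python
import numpy as np

def noise_bias_variance(z):
    """Estimate the noise bias on the variance of the reference-averaged
    shift. z has shape (n_ref, n_frames); the true shift cancels in
    z_i - mean_i(z), leaving noise-only residuals n_i - nbar."""
    n_ref = z.shape[0]
    d = z - z.mean(axis=0, keepdims=True)
    # sum_i <d_i^2> = (1 - 1/n_ref) * sum_i var(n_i), while the bias on the
    # reference-averaged measurement is sum_i var(n_i) / n_ref^2, hence:
    e = np.mean(np.sum(d * d, axis=0))
    return e / (n_ref * (n_ref - 1))
```

For n_ref = 4, the denominator is 12, reproducing the factor in Eq. (26).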
3.6 Least-squares solution method
To obtain the unknown coefficients a_k in Eqs. (9) and (10), we can solve a conventional linear least-squares fit problem by minimizing the badness parameter L, given by

L = Σ_{s,α} w(s, α) { [C̃_x(s, α) − C_x(s, α)]² + [C̃_y(s, α) − C_y(s, α)]² },   (27)

with respect to the coefficients a_k, where C̃_x and C̃_y denote the measured (noise-bias-compensated) covariance functions. Here, the weight w(s, α) equals the number of independent measurements for each (s, α) (Fig. 2, lowermost panel). As is evident from the layout of the microlens array (Fig. 3), a large number of independent samples is possible for small separations s, but only a few samples are possible for large s. Similarly, a large number of samples is possible for small, but not large, field angles α. The weights in Eq. (27) properly balance the least-squares fit to take into account the strongly varying number of measurements for different (s, α) and the corresponding noise. This is of particular importance for (weak) high-altitude seeing, the signatures of which are at small angles α, where w is large.
Minimizing L leads to a linear matrix equation for the coefficients a_k. However, this permits solutions with negative values of a_k, which is clearly not physical. In order to restrict the solutions to positive values of a_k, we make the variable substitution

a_k = b_k²   (28)

(Collados, private communication) and solve the corresponding nonlinear least-squares fit problem with respect to the parameters b_k. Fits to the data obtained so far indicate excellent convergence properties of the implemented nonlinear method.
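The positivity constraint can also be handled with standard non-negative least squares, as an alternative to the variable substitution. A minimal sketch with a synthetic, hypothetical kernel (the real kernel columns would be the weighted covariance signatures of Eqs. (9)–(10), flattened over all (s, α) sample points):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
K = np.abs(rng.normal(size=(60, 10)))   # hypothetical stand-in kernel: one
                                        # column per layer, one row per (s, alpha)
a_true = np.zeros(10)
a_true[[0, 2, 7]] = [3.0, 1.0, 0.2]     # three non-zero seeing layers
c_obs = K @ a_true                      # noise-free synthetic covariances

a_fit, residual = nnls(K, c_obs)        # least squares subject to a_k >= 0
```

With noise-free data and a well-conditioned kernel, nnls recovers the input layer strengths exactly; with real, noisy data the positivity constraint suppresses the unphysical negative overshoot that an unconstrained linear solution would produce.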
3.7 Height grid optimization
Good height grids can be found by calculating the inverse of the matrix corresponding to the linear solution for the coefficients a_k, and choosing a height grid that minimizes its noise sensitivity (the sum of squared elements of that inverse). Such optimizations show that we should be able to determine contributions from the pupil plane plus about 8–9 layers above the pupil, with the lowermost layer above the pupil located at a height of 500 m. The maximum height can be 30 km with a FOV of 5.5 arcsec; however, the layer below that must be located around 16 km to avoid high noise amplification. The height resolution with which we can determine seeing contributions from layers above 10 km with this large FOV is thus strongly limited. Only with a smaller FOV can the height resolution at large distances be improved. For the inversions discussed in this paper, we used the height grid defined by h = 0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 6.0, 9.5, 16 and 30 km. We tested this configuration with input seeing layers at heights in steps of 250 m from 0 to 30 km, then solved for contributions from the 10-layer height grid defined above. For input heights that match one of the 10 grid heights, the inversion recovers the input height and contribution perfectly. When the input height lies between two of the heights in the inversion model, the inversion responds by distributing the correct a_k's between the two surrounding layers, in rough proportion to the distance of the true height from each of the two surrounding grid heights. By constraining the a_k values to be positive, negative overshoot in adjacent layers is prevented. This is illustrated in Fig. 4, which shows the response of the inversion to a thin seeing layer located at variable height.
We conclude that the method should work well with good input data.
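The grid-selection metric described above can be sketched as follows. K is a hypothetical kernel whose columns are the layer signatures, and the pseudo-inverse stands in for the inverse of the linear-solution matrix:

```python
import numpy as np

def noise_sensitivity(K):
    """Noise amplification of the linear inversion: sum of squared elements
    of the pseudo-inverse of the kernel K (columns = layer signatures)."""
    return float(np.sum(np.linalg.pinv(K) ** 2))

# Well-separated layer signatures amplify noise far less than nearly
# degenerate ones (e.g. layers at 16 and 30 km seen through a large FOV):
good = np.eye(3)                         # orthogonal signatures
bad = np.array([[1.0, 1.0],
                [0.0, 1e-3]])            # two almost identical columns
```

Scanning candidate height grids and keeping the one with the smallest value of this metric is one direct way to implement the optimization described in the text.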
4 First Results
We recorded 20 bursts of wavefront data, each consisting of 1000 exposed frames, between 8:40 and 15:00 UTC on June 26, 2009. Each burst of 1000 frames took approximately 110 s to record. After estimating and removing the noise bias from each 1000-frame burst, the noise-compensated data were analyzed in blocks of 250 frames, corresponding to about 27 s of seeing data. While recording these wavefront data (without AO), the SST was used to record science data with its AO system in closed loop. The initial seeing quality was considered good. Around 11 UTC, the seeing started to deteriorate rapidly, and science observations were stopped at 11:03 UTC.
Due to problems with data acquisition, several data sets (16–19) contained corrupt images and were not processed.
A fundamental aspect of the system is under what seeing conditions reliable inversions are possible. Several inversions return very low values of r0 for the uppermost layers. Figure 5 shows r0 for the ground layer (h = 0) versus the integrated r0 for the 9.5–30 km layers. The dashed vertical line corresponds to r0 = 7.5 cm for the ground layer; the dashed horizontal line marks the corresponding limit for the 9.5–30 km layers. This figure shows that the smallest values of r0 at the highest layers are all associated with poor ground-layer seeing. We conclude that inversions that return ground-layer r0 values smaller than 7.5 cm, or about 75% of the subaperture diameter, should be rejected, and we excluded these data sets from further analysis.
We now take a closer look at the data and the inversions. The covariance functions, defined in Eq. (3), were evaluated at steps of 5 pixels (1.72 arcsec) in field angle, with the maximum field angle limited to 46.4 arcsec. Noise bias was subtracted from the data, as described in Sect. 3.5. The so-obtained covariance functions for 5 data sets recorded between 08:40 and 11:01 UTC (zenith distance in the range 30–61 deg) are shown in Fig. 6, together with the modeled covariance functions. The data and fits shown in this figure and in Fig. 7 are based on covariance functions calculated along rows of microlenses and corresponding subfields only (see Fig. 3); Fig. 8 shows plots of observed and fitted data based on covariance functions calculated along both rows and columns of microlenses and subfields. The height grid used for these inversions has nodes at 0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 6.0, 9.5, 16 and 30 km. Comparing Fig. 6 to Fig. 2, the top two measured covariance functions indicate clear signatures of a ground layer plus one dominating high-altitude layer (note the slanted dark line in these panels). Figure 7 shows these measured and modeled covariance functions as functions of s for field angles of 0 and 36.1 arcsec, where the second field angle is large enough to give very small influence from high-altitude seeing. It is evident that the fits are in general good, but that the measured transverse covariance is often too strong relative to the longitudinal covariance.
The jagged appearance of the plots in Fig. 8 is due to differences between the covariances measured along rows and along columns. This appears to indicate turbulence that is not isotropic above the ground layer. We have made inversions with and without the column covariances included and established that the overall conclusions drawn in this paper are robust, even though systematic differences between the two types of inversions exist.
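The covariance of differential image displacements that underlies these fits can be sketched as follows. This is a minimal illustration, not the full definition of Eq. (3): the array layout, the per-frame differencing of two subapertures to remove common tip-tilt, and the separately estimated noise bias are all assumptions.

```python
import numpy as np

def diff_shift_covariance(shifts_a, shifts_b, lag, noise_bias=0.0):
    """Sketch of a covariance of differential image displacements.

    shifts_a, shifts_b : (n_frames, n_subfields) image displacements measured
        behind two subapertures along one row of microlenses (hypothetical
        layout).  Differencing the two removes common tip-tilt, frame by frame.
    lag : subfield separation in grid steps (i.e. the field-angle separation).
    noise_bias : separately estimated noise covariance to subtract.
    """
    d = shifts_a - shifts_b          # differential shifts per single exposure
    d = d - d.mean(axis=0)           # remove the mean shift of each subfield
    n = d.shape[1] - lag
    # average products over frames and over all subfield pairs at this lag
    return (d[:, :n] * d[:, lag:]).mean() - noise_bias
```

For perfectly correlated synthetic displacements the result reduces to the variance of the differential signal, which gives a quick sanity check of the indexing.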
Table 2: Signal-to-noise ratios (Long = longitudinal, Transv = transverse) for selected data sets; see Sect. 4.1.

Set     Long  Transv   Long  Transv   Long  Transv
00:1      25      18    156     119   8213    4389
03:0      26      18    137     102   4375    3058
04:2       4       3     25      20   3192    5242
07:3      28      19    163     116   3541    1315
09:3      24      19    131      97   1642     650
15:3      77      58    310     317    927    2426
20:0       2       2      5       5     11       9
22:1      58      42    286     199   1347     795
4.1 Noise measurements
Figure 9 (left panel) shows the variation of the covariance (upper curve) and the noise covariance (lower curve, multiplied by a factor of 10) with field angle, at a fixed subaperture separation, for data set 03:0. Note in particular that the noise covariance at zero field angle is about 6% of the measured variance, which is similar to the expected contribution from a weak high-altitude seeing layer (see Sect. 3.5). Note also that the noise covariance drops off with angle in a way that is reminiscent of high-altitude seeing. However, the noise covariance drops by a factor of nearly 8 at the smallest field-angle separation sampled, such that the noise covariance is negligible already at this field angle separation. This rapid decorrelation of the noise occurs on a scale that is similar to the diameter of individual granules, rather than on a scale that corresponds to the diameter of the FOV. Table 2 summarizes the S/N, defined as the ratio of the measured covariance (corrected for noise bias) to the noise covariance, for a few selected data sets. In this Table, the covariances used to calculate the S/N have been averaged over all subaperture separations in the range 0.098–0.784 m. The small S/N for data set 04:2 primarily comes from the excellent ground-layer seeing for this data set. Data set 20:0 corresponds to one of the data sets rejected due to bad ground-layer seeing. Data set 22:1 corresponds to relatively poor seeing, both for the ground layer and integrated over the atmosphere, but evidently the wavefront data give excellent S/N at all field angles.
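The S/N definition used in Table 2 amounts to the following minimal sketch; the assumption that the averaging over the separation range is a plain mean is ours.

```python
import numpy as np

def snr(cov_measured, cov_noise):
    """S/N as defined in the text: the measured covariance, corrected for
    noise bias, divided by the noise covariance, with both covariances
    first averaged over the sampled range of subaperture separations."""
    cov_measured = np.asarray(cov_measured, dtype=float)
    cov_noise = np.asarray(cov_noise, dtype=float)
    return (cov_measured.mean() - cov_noise.mean()) / cov_noise.mean()
```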
The mid panel of Fig. 9 shows the image position noise, calculated as the square root of the noise covariance at the smallest separation and zero field angle, as a function of r0 for the ground layer. Quite clearly, the image position noise increases dramatically when r0 for the ground layer is less than about 7 cm. This renders wavefront sensor measurements essentially useless when r0 is below about 7 cm, as concluded already from Fig. 5. Finally, the last panel in Fig. 9 shows the relation between the image position noise and r0 at the highest layers, clearly demonstrating that anisoplanatism from high-altitude seeing causes problems with the large FOV used for wavefront sensing. When r0 is larger than about 45 cm, the RMS noise is about 0.05 arcsec, but when r0 is 35 cm, the RMS noise is doubled. Since our FOV corresponds to a diameter of 40 cm at 16 km, where most of the high-layer seeing originates, our results confirm the expectation that the FOV should be smaller than r0 at the height for which r0 is determined; otherwise wavefront sensor noise increases rapidly with decreasing r0.
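The quoted averaging diameters follow from simple small-angle geometry; a sketch, assuming the 5.5 arcsec cross-correlation FOV discussed in the conclusions:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def projected_fov_diameter(distance_m, fov_arcsec=5.5):
    """Diameter (m) over which a wavefront-sensing FOV averages wavefront
    information at a given distance along the line of sight."""
    return distance_m * fov_arcsec * ARCSEC

# about 0.43 m at 16 km and 0.80 m at 30 km, as quoted in the text
print(round(projected_fov_diameter(16e3), 2), round(projected_fov_diameter(30e3), 2))
```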
We finally caution that noise bias subtraction has a significant effect on measurements of seeing for the highest layers and that our confidence in these estimates relies on the method for noise compensation outlined in Sect. 3.5. It is therefore of considerable importance to study noise estimation by means of simulations.
4.2 Discussion of results
Figures 10 and 11 summarize some of the results. These two figures are based on fits to measured covariances along both horizontal rows and vertical columns, corrected for noise bias (Sect. 3.5).
The top row of Fig. 10 shows the variation of the turbulence strength with time for the three near-ground layers (h = 0, 0.5 and 1.5 km). The dominant layer is obviously at h = 0, and its variation with time clearly indicates gradually degrading seeing. The large scatter is real and illustrates the intermittent nature of the seeing, when averaged over relatively short time intervals (here, 27 sec), for these particular observations. The seeing at 500 m altitude shows a similar trend of degrading with time, but with a turbulence strength that is typically 8 times weaker than that at h = 0. The seeing layer at 1.5 km is even weaker than that at 0.5 km.
The time evolution of the seeing at the two highest layers is shown in the lower row of plots in Fig. 10. Except for a few data sets, the seeing contributions from the 30 km layer are consistently small. During this period, the high-altitude seeing primarily comes from the 16 km layer. The combined r0 for the 9.5, 16 and 30 km layers averages 37 cm. The accuracy of these estimates of high-altitude seeing is difficult to assess without independent verification. Images recorded with the SST (diameter 98 cm) generally show small-scale geometrical distortion but only minor small-scale differential blurring over the FOV, except when the Sun is at large zenith distance or when observations are made at short wavelengths around 400 nm. This is consistent with r0 values of about one third of the SST diameter or larger, in agreement with the values found from these data. As regards the actual height of this seeing layer, upper-air sounding data above Tenerife at 0:00 and 12:00 UTC for this day (http://weather.uwyo.edu/upperair/sounding.html, Güímar, Tenerife) show a temperature rise above 17–18 km altitude, indicating the location of the tropopause, and enhanced wind speeds between roughly 10.5 and 15.5 km, peaking at 14 km altitude. If the latter layer is where the high-altitude seeing originates, and if it is at the same height above La Palma as above Tenerife, then the height of that seeing layer should be about 11.5 km above the telescope. This corresponds to a distance of 23 km for the first data sets, recorded at a zenith distance of 60 deg, midway between the two uppermost nodes in our inversion model.
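The conversion from height above the telescope to line-of-sight distance used here (and in the footnote on layer heights) is simply division by the cosine of the zenith distance:

```python
import math

def line_of_sight_distance(height_km, zenith_deg):
    """Distance along the line of sight to a seeing layer at the given
    height above the telescope, for a given zenith distance in degrees."""
    return height_km / math.cos(math.radians(zenith_deg))

# an 11.5 km layer observed at 60 deg zenith distance lies 23 km away
print(round(line_of_sight_distance(11.5, 60.0), 1))  # → 23.0
```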
However, the seeing estimates at 30 km are very uncertain, and our confidence in these estimates relies on the applicability of the weights defined in Sect. 3 and on the noise bias estimation and subtraction method outlined in Sect. 3.1. Setting all weights equal to unity reduces the average estimates of the turbulence strength at 30 km by about a factor of two, but has only a minor effect at all other heights. Similarly, reducing a single weight by a factor of two also strongly increases the estimated turbulence strength at 30 km, but has a small effect at other heights. These and other tests indicate uncomfortably large uncertainties in our estimates of seeing at 30 km distance, most likely primarily related to the large FOV used.
An indication, based on SST observations, of the seeing contributions from intermediate heights is the absence of noticeable large-scale variations of image quality over a science FOV of typically 1 arcmin. This suggests that the dominant seeing is close to the ground layer and at high altitude, and that r0 is significantly larger than 30 cm for the intermediate layers. Our data are consistent with this. The solid curve in Fig. 11 shows the turbulence strength as a function of height, averaged over all data from 26 June 2009. Contributions to the seeing from heights in the range 1.5–6 km are obviously small for this data set. A conspicuous feature in Fig. 11 is the increased turbulence strength at 3.5 km, suggesting a weak seeing layer. This feature may not be real, but an artifact of overly dense grid nodes in this height range. We repeated the inversions after replacing the two nodes at 2.5 and 3.5 km with a single node at 3 km, leaving the remaining nodes unchanged. The result, indicated with a dashed line in Fig. 11, shows a smoother variation of the turbulence strength with height.
4.3 Extension of the method
In the present paper, we have calculated covariance functions from positions measured along rows and columns of subimages and subfields. This was done in order to model the positions in terms of purely longitudinal and transverse image displacements. The advantage of this approach is simplicity in the modeling, but the disadvantage is that covariances can only be calculated from pairs of subimages and subfields that are on the same row or column of subapertures. A more appealing approach is to process all data without these restrictions. This can be done by calculating Fried's function for arbitrary angles between the line connecting the two subapertures and the line connecting the two subfields. The measured positions should first be rotated onto the line connecting the two apertures at the pupil plane. This angle rotates with height h: a pair of subfields defined by a separation s at the pupil plane and two field angles is projected, at height h, onto a baseline given by the vector sum of s and h times the field-angle difference, which at large heights approaches the direction of the field-angle difference. Thus virtually every subfield and every subaperture needs a unique basis function for every height layer in the model. By precalculating Fried's function and storing the result as a table, these basis functions can be calculated by interpolation.
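The projection geometry described above can be sketched as follows; the vector representation of separations and field angles is our assumption, and the returned angle is the rotation that would be applied to the measured positions.

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def projected_baseline(s_xy, delta1_xy, delta2_xy, h):
    """Length (m) and position angle (rad) of the baseline connecting two
    lines of sight at height h (m).  s_xy is the subaperture separation at
    the pupil plane (m); delta1_xy and delta2_xy are field angles (arcsec)."""
    bx = s_xy[0] + h * (delta2_xy[0] - delta1_xy[0]) * ARCSEC
    by = s_xy[1] + h * (delta2_xy[1] - delta1_xy[1]) * ARCSEC
    return math.hypot(bx, by), math.atan2(by, bx)

# hypothetical geometry: subapertures 9.8 cm apart along x, subfields
# separated by 10 arcsec along y; at h = 0 the baseline is just s, while
# with increasing h it rotates towards the field-angle difference.
r0_len, ang0 = projected_baseline((0.098, 0.0), (0.0, 0.0), (0.0, 10.0), 0.0)
r1_len, ang1 = projected_baseline((0.098, 0.0), (0.0, 0.0), (0.0, 10.0), 1e4)
```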
5 Conclusions
The proposed method is based on differential measurements of seeing-induced image displacements, making it insensitive to telescope tracking errors, vibrations, or residual errors from a tip-tilt mirror. The numerical computations of Fried (1975) can be used to provide the theoretical covariance functions needed, requiring very small amounts of software development and avoiding the need to calculate covariance functions via numerical turbulence simulations. The finite FOV used for wavefront sensing with solar granulation is accounted for in an approximate way by defining an effective subaperture diameter that increases with height.
In terms of data collection, the proposed method is identical to the SLODAR method (Wilson, 2002), which also employs Shack-Hartmann wavefront sensor data. However, the SLODAR method uses averages of measured image shifts from all subapertures for each of the (two) stars observed to eliminate the effects of telescope guiding errors. Eliminating the anisoplanatism introduced by this averaging requires a fairly elaborate analysis of the data (Butterley et al., 2006). In terms of analysis, the present method appears simpler.
Based on simulations and data processed from a single day of observations, we conclude that the proposed method, combined with wavefront data over about 5.5 arcsec subfields, allows contributions to seeing to be estimated for about 9–10 layers, stretching from the pupil up to 16–30 km distance from the telescope. At distances up to about 6 km, measurements with good S/N and a height resolution of nearly 1 km appear possible. The 5.5 arcsec FOV used for wavefront sensing leads to poor sensitivity to high-altitude seeing and strongly reduced height resolution beyond 10 km. At a distance of 30 km, the FOV used corresponds to averaging wavefront information over a diameter of 80 cm. Our estimates of r0 are very uncertain at this height, and clearly the FOV needs to be reduced for seeing measurements at such large distances to be convincing. We have also established empirically that detecting seeing from high layers with the present system requires r0 to be larger than approximately 7.5 cm for the ground layer, and that the FOV should be such that the corresponding averaging area is smaller than r0 at the high (10–30 km) layers.
An important limitation is wavefront sensor noise, leading to bias in the measured covariances. By using image position measurements from four cross-correlation reference images, wavefront sensor noise is reduced, and the residual noise bias is estimated directly from the data and compensated for. However, the method used for estimating the noise bias relies on the random errors of the four measurements being independent, and the method is furthermore “blind” to random errors that are the same for all four measurements. Noise propagation clearly needs further investigation.
The present method relies on wavefront tilts inferred from displacements of solar granulation images measured using cross-correlation techniques. The accuracy of these techniques for wavefront sensing is under investigation by means of simulations (Löfdahl, in preparation). A weakness of such conventional techniques is that the image displacement is assumed to be constant within the FOV used for cross-correlations. This corresponds to modeling the wavefront as having pure tip and tilt without higher-order curvature. At the intersection of adjacent subfields, this corresponds to discontinuous gradients of the wavefront, and cross-correlations with overlapping subfields lead to multiple values of the wavefront gradient where subfields overlap. In addition to leading to inconsistencies and poor estimates of wavefront gradients, the use of conventional cross-correlation techniques should also lead to noisier measurements when the differential seeing within the FOV is strong. A more satisfactory approach may be to use a 2D Fourier expansion of the image distortions over the entire FOV and to use that representation to calculate the local gradients of the wavefront. The highest-order Fourier components would be limited by the sampling and the FOV. This would be quite similar to fitting SH wavefront data to low-order Zernike or Karhunen-Loève expansions, but over a rectangular FOV instead of over a round pupil. A simpler approach may be to use the 16 pixel FOV cross-correlations as initial estimates and then refine the estimate with an 8–12 pixel FOV, restricting the corrections to the image position shifts to be small.
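A minimal sketch of the suggested Fourier representation, assuming NumPy and a hypothetical grid of subfield centres: a low-order 2D Fourier series is fitted by least squares to one component of the measured displacements, giving a smooth, single-valued displacement field that can then be differentiated for local wavefront gradients.

```python
import numpy as np

def fourier_displacement_model(x, y, shifts, Lx, Ly, nmax=2):
    """Least-squares fit of a low-order 2D Fourier series to one component
    of the image displacements measured at subfield centres (x, y), over a
    rectangular FOV of size Lx by Ly (arcsec).  Returns a function that
    evaluates the smooth displacement field anywhere in the FOV."""
    def basis(xv, yv):
        cols = []
        for kx in range(nmax + 1):
            for ky in range(nmax + 1):
                phase = 2.0 * np.pi * (kx * xv / Lx + ky * yv / Ly)
                cols.append(np.cos(phase))
                if kx or ky:              # sine term vanishes for kx = ky = 0
                    cols.append(np.sin(phase))
        return np.stack(cols, axis=-1)
    coef, *_ = np.linalg.lstsq(basis(np.asarray(x), np.asarray(y)),
                               np.asarray(shifts), rcond=None)
    return lambda xv, yv: basis(np.asarray(xv), np.asarray(yv)) @ coef

# fit synthetic smooth distortions sampled on a 10 x 10 grid of subfields
xs, ys = np.meshgrid(np.linspace(0.0, 46.0, 10), np.linspace(0.0, 46.0, 10))
truth = (0.3 * np.cos(2 * np.pi * xs / 46.0)
         + 0.1 * np.sin(2 * np.pi * (xs + ys) / 46.0))
model = fourier_displacement_model(xs.ravel(), ys.ravel(), truth.ravel(),
                                   46.0, 46.0)
```

Because the synthetic field lies within the fitted basis, the model reproduces it essentially exactly at the sample points; with real, noisy shift measurements the fit would instead act as a smoothing, single-valued interpolant.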
We believe that seeing measurements for the ground layer may be possible with the proposed method and appropriate noise bias compensation even when r0 is somewhat less than 7.5 cm, but that little or no information can be obtained for the higher layers in such conditions. The major problem is the relatively poor daytime seeing, limiting the quality and number of seeing estimates for the highest layers. Possibly, the Moon can be used as a wide-field wavefront sensor target for verification of daytime estimates of high-altitude seeing at night, but with obvious limitations in image scale, exposure time and S/N requiring careful consideration.
Acknowledgements.
The wavefront sensor optics was designed by Bo Lindberg at Lenstech AB, and the wavefront sensor mechanics was designed and built by Felix Bettonvil and other members of the Dutch Open Telescope (DOT) team. The CCD camera software was written by Michiel van Noort. We are grateful for their help and assistance. We are also grateful for several valuable comments and suggestions by A. Tokovinin and M. Collados. This research project has been supported by a Marie Curie Early Stage Research Training Fellowship of the European Community's Sixth Framework Programme under contract number MEST-CT-2005-020395: The USO-SP International School for Solar Physics. This work has been partially supported by the European Commission through the collaborative project 212482 'EST: the large aperture European Solar Telescope' Design Study (FP7 Research Infrastructures). This work has also been supported by a planning grant from the Swedish Research Council.

Footnotes
 offprints: scharmer@astro.su.se
 We will consistently refer to the heights of seeing layers above the telescope as if observations were made with the Sun at zenith. For observations made at a zenith distance z, the heights appearing in the equations below need to be divided by cos z to correspond to the actual distance between the telescope and the seeing layer.
 This plot is similar to that showing the height response of the multi-aperture scintillation sensor (MASS) (Tokovinin et al., 2003).
References
 Beckers, J. 2002, in Astronomical Society of the Pacific Conference Series, Vol. 266, Astronomical Site Evaluation in the Visible and Radio Range, ed. J. Vernin, Z. Benkhaldoun, & C. Muñoz-Tuñón, 350
 Beckers, J. M. 1993, Sol. Phys., 145, 399
 Beckers, J. M. 2001, Experimental Astronomy, 12, 1
 Butterley, T., Wilson, R. W., & Sarazin, M. 2006, MNRAS, 369, 835
 Fried, D. L. 1975, Radio Science, 10, 71
 Hill, F., Beckers, J., Brandt, P., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6267
 Liu, Z. & Beckers, J. M. 2001, Sol. Phys., 198, 197
 Sarazin, M. & Roddier, F. 1990, A&A, 227, 294
 Seykora, E. J. 1993, Sol. Phys., 145, 389
 Tokovinin, A. & Kornilov, V. 2007, MNRAS, 381, 1179
 Tokovinin, A., Kornilov, V., Shatsky, N., & Voziakova, O. 2003, MNRAS, 343, 891
 Waldmann, T. A., Berkefeld, T., & von der Lühe II, O. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7015
 Wilson, R. W. 2002, MNRAS, 337, 103
 Yi, Z., Darvann, T., & Molowny-Horas, R. 1992, Software for Solar Image Processing: Proceedings from the LEST Mini-Workshop, LEST Foundation Technical Report No. 56