Studying Flow Close to an Interface by Total Internal Reflection Fluorescence Cross Correlation Spectroscopy: Quantitative Data Analysis

R. Schmitz, S. Yordanov, H. J. Butt, and K. Koynov (Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany); B. Dünweg (Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany, and Department of Chemical Engineering, Monash University, Clayton, Victoria 3800, Australia)
July 5, 2019
Abstract

Total Internal Reflection Fluorescence Cross Correlation Spectroscopy (TIR-FCCS) has recently (S. Yordanov et al., Optics Express 17, 21149 (2009)) been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence only occurs for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions only provide rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian Dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. Firstly, Brownian Dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Secondly, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than , and, within the limitations of the experiments and the model, indistinguishable from zero.

pacs:
47.61.-k, 05.40.-a, 05.10.Gg, 05.10.Ln, 02.50.-r, 02.70.Uu, 02.60.Ed, 87.64.kv, 83.50.Lh, 07.05.Tp, 47.57.J-, 47.80.-v

I Introduction

A good understanding of liquid flow in confined geometries is not only of fundamental interest, but also important for a number of industrial and technological processes, such as flow in porous media, electro-osmotic flow, particle aggregation or sedimentation, extrusion, and lubrication. It is also essential for the design of micro- and nano-fluidic devices, e. g. in lab-on-a-chip applications. However, in all these cases, an accurate quantitative description is only possible if the flow at the interface between the liquid and the solid is thoroughly understood Tabeling06 (); Vinogradova1999 (); Ellis04 (); Net05 (); Lauga07 (); Bonaccurso02 (); Neto03 (); Rod10 (); Guriyanova08 (); Feuillebois09 (). While for many years the so-called no-slip boundary condition (relative velocity at the interface equal to zero) had been successfully applied to describe macroscopic flows, more recent investigations revealed that this condition is insufficient to describe the physics of flows through channels of micro- and nanometer size Net05 (); Lauga07 (). On such small scales, the relative contribution from a residual slip between liquid and solid becomes important. This is commonly described by the so-called slip boundary condition, which is characterized by a non-vanishing slip length $b$, defined as the ratio of the liquid dynamic viscosity and the friction coefficient per unit area at the surface. An equivalent definition is obtained by taking the ratio of the finite surface flow velocity, the so-called slip velocity $v_s$, and the shear rate at the surface: $b = v_s / (\partial v_x / \partial z)|_{z=0}$, where $z$ is the spatial direction perpendicular to the surface, located at $z = 0$. This boundary condition is the most general one that is possible within the framework of standard hydrodynamics einzel1990boundary (); the no-slip condition is simply the special case $b = 0$.
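For reference, the two equivalent definitions just given can be summarized in the standard (textbook) form of the Navier slip boundary condition; the symbols follow the notation introduced above, and the specific rendering below is ours:

```latex
% Navier slip boundary condition at a wall located at z = 0,
% with flow along x; b is the slip length, eta the dynamic
% viscosity, kappa the wall friction coefficient per unit area.
\[
  v_s \;=\; v_x \big|_{z=0}
  \;=\; b \,\frac{\partial v_x}{\partial z}\bigg|_{z=0},
  \qquad
  b \;=\; \frac{\eta}{\kappa}.
\]
```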

The experimental determination of the slip length, however, is challenging, since high resolution techniques are needed to gain sufficiently accurate information close to the interface. Hence, the existence and the magnitude of slip in real physical systems, as well as its possible dependence on the surface properties, are highly debated in the community, and no consensus has been reached so far. Clearly, a resolution of these controversies requires further improvement of the experimental techniques.

To date, two major types of experimental methods, often called direct and indirect, have been applied to study boundary slip phenomena. In the indirect approach, an atomic force microscope or a surface force apparatus is used to record the hydrodynamic drainage force necessary to push a micron-sized colloidal particle toward a flat surface as a function of their separation Ducker1991 (); Butt1991 (). The separation can be measured with sub-nanometer resolution, and the force with a resolution in the range. A high force is necessary to squeeze the liquid out of the gap if the mobility of the liquid is small. Conversely, if the liquid close to the surface can easily slip along it, then only a small force is necessary. From this empirical observation a quantitative value of the slip length can be deduced using an appropriate theoretical model Vinogradova1999 (); Vinogradova1995 (); Bonaccurso02 (). While this approach is extremely accurate at the nanoscale, it does not measure the flow profile directly.

Direct experimental approaches to flow profiling in microchannels are commonly based on various optical methods to monitor fluorescent tracers moving with the liquid. Basically they can be divided into two sub-categories.

The imaging-based methods use high-resolution optical microscopes and sensitive cameras to track the movement of individual tracer particles via a series of images Pit2000 (); Tretheway02 (); Jos05 (); Huang06 (); Lasne08 (); Bouzigues08 (); Li10 (). While providing a real “picture” of the flow, the imaging methods also have some disadvantages, related mainly to the limited speed and sensitivity of the cameras: relatively large tracers are needed, the statistics are rather poor, and large tracer velocities cannot be easily measured.

In the Fluorescence Correlation Spectroscopy (FCS) based methods, the fluctuations of the fluorescent light emitted by tracers passing through a small observation volume (typically the focus of a confocal microscope) are measured Rigler01 (). Using correlation analysis and an appropriate mathematical model, the tracers’ diffusion coefficient and flow velocity can be evaluated Magde1978 (); Orden1998 (); Kohler2000 (); Gosch2000 (). In particular, the so-called Double-Focus Fluorescence Cross-Correlation Spectroscopy (DF-FCCS), which employs two observation volumes (laterally shifted in the flow direction), is a powerful tool for flow profiling in microchannels Bri99 (); Dit02 (); Lum03 (); Vin09 (). Due to the high sensitivity and speed of the photodetectors used in the FCS based methods (typically avalanche photodiodes), even single molecules can be used as tracers. Furthermore, the evaluation of the velocity is based on large statistics, and high tracer velocities can be measured.

During the last two decades both the imaging and the FCS methods have been developed to a state that allows fast and accurate measurements of flow velocity profiles in microchannels. The situation, however, is different when the issue of boundary slip is considered. Due to the limited optical resolution imposed by the diffraction limit, it is commonly believed that these methods are less accurate than the force methods discussed above and cannot detect a slip length in the tens-of-nanometers range. On the other hand, the possibility to directly visualize the flow keeps the optical methods attractive, and thus continuing efforts have been undertaken to improve their resolution. One of the most successful approaches in this endeavor is Total Internal Reflection Microscopy (TIRM) Axelrod1984 (), which uses total internal reflection at the interface between two media with different refractive indices, such as glass and water. This creates an evanescent wave that extends into the liquid only within a short, tunable distance from the interface. Optical excitation of the fluorescent tracers is then possible only within this narrow region. During the last few years TIRM was successfully applied to improve the resolution of particle imaging velocimetry close to liquid-solid interfaces Huang06 (); Lasne08 (); Bouzigues08 (); Li10 (), and slip lengths on the order of tens of nanometers were evaluated. With respect to FCS, however, TIR illumination had, until recently, been limited to diffusion studies only HasslerSecond05 (); Rie08 (), while there were no reports on TIR-FCS based velocimetry and slip length measurements.

With this in mind, we have recently developed a new experimental setup that combines for the first time TIR illumination with DF-FCCS for monitoring a liquid flow in the close proximity of a solid surface Koy09 (). Such a combination offers high normal resolution, extreme sensitivity (down to single molecules), good statistics within relatively short measurement times, and the possibility to study fast flows. Our preliminary studies have shown, however, that the accurate quantitative evaluation of the experimental data obtained with this TIR-FCCS setup is not straightforward, because the model functions needed to fit the auto- and cross-correlation curves (and extract the flow velocity profile) are not readily available. The standard analytical procedure to derive these functions is Bri99 (); Dit02 (); Lum03 (): (i) solve the convection-diffusion equation with respect to the concentration correlation function, (ii) insert the derived solution into the corresponding correlation integral, and (iii) solve it to finally get the explicit form of the correlation functions. This procedure was successfully used by Brinkmeier et al. Bri99 () to derive analytical expressions for the auto- and cross-correlation functions obtained with double-focus confocal FCCS (i. e. with focused laser beam illumination, as opposed to the evanescent wave illumination in our case), where it was assumed that the flow velocity and tracer concentration are spatially constant, which simplifies the calculation substantially. Such an assumption is reasonable if the observation volumes (laser foci) are far away from the channel walls. In the case of TIR-FCCS, however, the situation is different: The experiments are performed in the proximity of the channel wall, and the distribution of the flow velocity inside the observation volume has to be considered. Furthermore, the concentration of tracers may also depend on the distance $z$ from the wall, due to electrostatic repulsion or hydrodynamic effects. Finally, the presence of a boundary, which must also be taken into account in the theoretical treatment, further complicates the problem. Therefore, a faithful description of the physics of TIR-FCCS makes the problem of calculating the correlation functions most likely unsolvable in terms of closed analytical expressions.

For this reason, we rather resort to numerical methods, and in the present paper describe and test the procedure that we have developed: We employ Brownian Dynamics techniques to simulate the tracers’ motion through the observation volumes and generate “numerical” auto- and cross-correlation curves that are subsequently used to fit the corresponding experimental data. This fitting is done via Monte Carlo importance sampling in parameter space. The method is therefore fully quantitative, while not being hampered by any difficulties in doing analytical calculations. It should be noted that this approach also provides a substantial amount of flexibility: The details of the physical model are all encoded in the Brownian Dynamics simulation, which specifies how the tracer particles move within the flow. In the present work we have assumed a simple Couette flow with a finite slip length, while the particles are described as simple hard spheres with no rotational degrees of freedom, and no interaction with the wall except impenetrability. It is fairly straightforward to improve on these limitations by, e. g., including hydrodynamic and electrostatic interactions with the wall, rotational motion of the spheres, or polydispersity in the particle size distribution. Moreover, the geometry of the observation volumes can easily be changed as well, and we have made use of this possibility in our present work, but only to some extent. Further refinements are left for future work, in which the basic methodology would however remain unchanged.

To test the accuracy of the newly developed TIR-FCCS experimental setup and the numerical data evaluation procedure, we have studied aqueous flow near a smooth hydrophilic surface and evaluated the slip length to be between and (however with a systematic error that is hard to quantify, and whose elimination would need a more sophisticated theoretical model). It is commonly accepted Lasne08 (); Bouzigues08 (); Li10 (); Idol1986 (); Jos05 (); Charlaix05 (); Joly06 (); Honig07 () that the boundary slip should be zero (or very small) in this situation. Thus, our results indicate that TIR-FCCS offers unprecedented accuracy in the range for the measurement of slip lengths by an optical method. We believe that our result for the slip length will be fairly robust, even if the physical model is refined further.

Section II outlines the experimental setup, while Sec. III presents the experimental results and the numerical fits. We find that the measured cross-correlation functions deviate considerably from the model functions at short times, probably as a result of some optical effects which at present we do not fully understand. However, we show a practical way to eliminate such effects to a large extent, by means of a simple subtraction scheme. The following parts then outline in detail how the theoretical curves have been obtained: Firstly, Sec. IV elucidates the relation between the measured correlation functions and the underlying dynamics of the tracer particles. We then proceed to describe the Brownian Dynamics algorithm to sample the model correlation functions (Sec. V). Section VI then provides a detailed theoretical analysis of our subtraction scheme. In Sec. VII we describe the Monte Carlo method to find optimized parameter values of our model. Section VIII then discusses our results, in particular concerning the slip length; this is followed by a brief summary of our conclusions (Sec. IX).

II Experimental Setup

Figure 1: (Color) Scheme of the experimental TIR-FCCS setup. BFP - back focal plane of the objective; DM - dichroic mirror; M50/50 - neutral beam splitter; EF1, EF2 - emission filters; PH1, PH2 - pinholes; APD1, APD2 - avalanche photodiodes; L1 - tube lens; L2 - collimator lens; M - collimator’s prism-based mirror. Note that the two spatially separated observation volumes are created by shifting the pinholes PH1/PH2 in the $x$–$y$ plane. The cyan color indicates the excitation wavelength and the yellow-green color the fluorescence light, respectively.

Since the TIR-FCCS experimental setup has already been described in great detail elsewhere Koy09 (), only a brief qualitative overview of the basic ideas and quantities is given below. A scheme of the experimental setup is shown in Fig. 1. It is based on a commercial device (Carl Zeiss, Jena, Germany) that consists of the FCS module ConfoCor2 and an inverted microscope Axiovert 200. The TIR illumination is achieved by focusing the excitation laser beam (Ar laser) on the periphery of the back focal plane (BFP) of an oil immersion microscope objective with high numerical aperture. This leads to a parallel laser beam which emerges out of the objective and then enters the rectangular flow channel through its bottom wall (Fig. 1). By adjusting the angle of incidence $\theta$ above the critical angle of the glass-water interface, total internal reflection is achieved. In this situation only an evanescent wave extends into the liquid and can excite the fluorescent tracers suspended in it. The intensity distribution of this wave in the $x$–$y$ plane (parallel to the interface) is Gaussian. In the $z$ direction the intensity decays exponentially, $I(z) = I_0 \exp(-z/z_0)$. The characteristic decay length $z_0$, also called penetration depth, depends on the laser wavelength $\lambda$, the refractive indices of both media ($n_1$, glass; $n_2$, water), and the angle of incidence, and can be varied by changing the latter. Thus the evanescent wave can excite only the tracers flowing in the proximity of the channel wall. The produced fluorescence light is collected by the same microscope objective and is divided equally by a neutral beam splitter into two independent detection channels. In each channel the fluorescence light passes through an emission filter and a confocal pinhole to finally reach the detectors, two single-photon-counting avalanche photodiodes (APD1, APD2). The pinholes PH1 and PH2 define two observation volumes that are laterally shifted with respect to each other along the flow direction, as schematically shown in Fig. 2. The center-to-center distance $d$ between the two observation volumes can be tuned continuously. The signals from both channels are recorded and correlated to finally yield the auto- and cross-correlation curves that contain the entire information about the flow properties, slip length and shear rate, close to the interface.
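For completeness, the penetration depth follows from the standard evanescent-wave result (a textbook relation, quoted here in the notation just introduced):

```latex
% Penetration depth of the evanescent intensity for total internal
% reflection at a glass (n1) / water (n2) interface; theta is the
% angle of incidence measured from the surface normal, lambda the
% vacuum wavelength of the excitation light.
\[
  z_0 \;=\; \frac{\lambda}{4\pi}
  \left( n_1^2 \sin^2\theta - n_2^2 \right)^{-1/2},
  \qquad
  \theta > \theta_c = \arcsin\!\left(n_2 / n_1\right).
\]
```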

Figure 2: (Color) The coordinate system and the linear flow field employed in the TIR-FCCS experiment. $W_1$ and $W_2$ denote the shape and location of the observation volumes as seen by pinhole PH1 and pinhole PH2, respectively; $z_0$ is the penetration depth, which defines the axial extent of the observation volumes; $w$ is the typical extension of the observation volumes in the $x$–$y$ plane; $d$ indicates the center-to-center separation of the observation volumes; $v_x(z)$ is the velocity field in the positive $x$ direction, which depends linearly on $z$.

The experiments were performed with a rectangular microchannel fabricated using a three-layer sandwich construction, as described in earlier work Koy09 (); Lum03 (). The bottom channel wall, at which the TIR-FCCS experiments were performed, was a microscope cover slide made of borosilicate glass, cleaned with an aqueous solution of Hellmanex and with argon plasma. The root-mean-square roughness of the glass surface and the advancing water contact angle were both small (hydrophilic surface). The flow was induced by a hydrostatic pressure gradient, created by two beakers at different heights, where the water level difference was kept constant by a pump. This allowed us to vary the shear rate near the wall in a controlled way.

Carboxylate-modified quantum dots (Qdot585 ITK Carboxyl, Molecular Probes, Inc.), with a hydrodynamic radius , were used as fluorescent tracers. The particles were suspended in an aqueous solution of potassium phosphate () buffer (, concentration ). The concentration of the quantum dots was found from our data analysis (see below) as , corresponding to roughly particles per .

III Correlation Curves

The motion of the fluorescent tracers results in two time-resolved fluorescence intensities $I_1(t)$ and $I_2(t)$, which were measured with the two photodetectors. For the present system, we may safely assume that it is ergodic and strictly stationary on the time scale of the experiment, such that only time differences matter Lakowicz06 (). Therefore, we may define the intensity fluctuations via

$\delta I_i(t) = I_i(t) - \langle I_i \rangle$   (1)

where $\langle \ldots \rangle$ denotes a time average or, equivalently, an ensemble average, and evaluate the time-dependent auto- and cross-correlation functions via the definition

$G_{ij}(\tau) = \dfrac{\langle \delta I_i(t)\, \delta I_j(t + \tau) \rangle}{\langle I_i \rangle \langle I_j \rangle}$   (2)

It should be noted that possible small differences in the sensitivity of the photodetectors or in the illumination of the pinholes cancel out, since in Eq. 2 only ratios of intensities occur. $G_{11}$ and $G_{22}$ are the two autocorrelation functions of pinholes 1 and 2, respectively, while $G_{12}$ and $G_{21}$ are the forward and backward cross-correlation functions, respectively. It should be noted that in the presence of flow $G_{12}$ and $G_{21}$ differ substantially. In the limit of the two pinholes being located at the same position, the intensities $I_1$ and $I_2$ coincide, such that in this case all four entries of the matrix $G_{ij}$ are identical.
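As an illustration, Eq. 2 can be estimated directly from two equidistantly sampled intensity traces as in the following sketch (plain NumPy; the function and variable names are ours, and real FCS instruments instead use hardware or multi-tau software correlators with quasi-logarithmic lag spacing):

```python
import numpy as np

def correlation_matrix(I1, I2, max_lag):
    """Estimate G_ij(tau) = <dI_i(t) dI_j(t+tau)> / (<I_i><I_j>)
    for i, j in {1, 2}, for lags 0 .. max_lag-1 in units of the
    sampling interval (Eqs. 1 and 2)."""
    traces = {1: np.asarray(I1, float), 2: np.asarray(I2, float)}
    means = {i: t.mean() for i, t in traces.items()}
    fluct = {i: t - means[i] for i, t in traces.items()}  # Eq. 1
    G = {}
    for i in (1, 2):
        for j in (1, 2):
            g = np.empty(max_lag)
            for lag in range(max_lag):
                a = fluct[i][: len(fluct[i]) - lag]
                b = fluct[j][lag:]
                g[lag] = np.mean(a * b) / (means[i] * means[j])
            G[(i, j)] = g
    return G  # G[(1, 2)] is the forward, G[(2, 1)] the backward function
```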


Figure 3: (Color online) Correlation functions as defined in the text, and linear combinations thereof, comparing the experimental data (with error bars) with the numerical fit functions (without) for an optimized parameter set. The statistical error of the numerical data is smaller than the line width. Parts (a) – (f) have been obtained by modeling the observation volumes by Eq. 9, while for parts (g) and (h) we have assumed a Gaussian form (Eq. 6).

Figure 3 summarizes our experimental results for the $G_{ij}$ and/or linear combinations thereof. Concerning the autocorrelation curves $G_{11}$ and $G_{22}$, we find that they are practically identical, which means that for the modeling it is safe to assume that both pinholes have the same properties. This is clearly shown in part (a), where one sees that $G_{11} - G_{22}$ differs only marginally from zero (while in our model we have anyway strictly $G_{11} = G_{22}$). Therefore, we just used the arithmetic mean $(G_{11} + G_{22})/2$ (part (b)) as autocorrelation input for our fits, while we discarded the $G_{11} - G_{22}$ data. Concerning the cross-correlations, one sees that the forward function $G_{12}$ (see part (c)) exhibits a pronounced peak, which is indicative of the typical time that a particle needs to travel from observation volume 1 to observation volume 2. Another striking feature of $G_{12}$ is the large plateau for small times. At such short times, the particles have essentially not moved at all. Hence the plateau indicates that a particle is able to send photons to both detectors from essentially the same position, or, in other words, that the effective observation volumes must overlap quite substantially. This overlap effect then of course also shows up in the backward correlation function $G_{21}$ (see part (d)) at short times, with precisely the same plateau value. Therefore, such overlap effects essentially cancel out when considering the difference $G_{12} - G_{21}$ instead (see part (f)), while of course they are strongly present in the mean $(G_{12} + G_{21})/2$ (see part (e)).

Obviously, the source of the overlap must be an effect of the optical imaging system, which is of course somewhat complicated, due to the many components that are involved. However, beyond this general statement we have unfortunately so far been unable to track down its precise physical origin, and therefore also been unable to construct a fully consistent model for the observation volumes. The simple models that we have considered in our present work are not fully adequate, meaning that they systematically underestimate the amount of overlap, unless one assumes highly unphysical parameters, which would cause other aspects of the modeling to fail completely. It should be noted that similar overlap effects are also present in standard double-beam FCCS Rigler01 (); however, the underlying physics for that setup is slightly different, and the modeling used there cannot simply be transferred to our system.

Fortunately, however, our best model for the observation volumes is at least physical enough that it can describe not only the autocorrelation functions (see part (b)) but also the overlap-corrected difference $G_{12} - G_{21}$ (part (f)) reasonably well, while still failing to describe the mean $(G_{12} + G_{21})/2$ (part (e)). For this reason, our fitting procedure altogether takes into account the linear combinations $(G_{11} + G_{22})/2$ and $G_{12} - G_{21}$, while deliberately discarding the data on $G_{11} - G_{22}$ and $(G_{12} + G_{21})/2$. This is nicely borne out in Fig. 3, which shows not only the experimental data, but also the result of our theoretical modeling for optimized parameters.

The fact that the success of the modeling depends crucially on an accurate description of the observation volumes is strongly underpinned by parts (g) and (h) of Fig. 3. The experimental data for $(G_{12} + G_{21})/2$ and $G_{12} - G_{21}$ are again the same, but the theoretical model uses a different functional form for the observation volumes, whose performance is obviously significantly poorer: Not only is the overlap plateau (part (g)) underestimated even more strongly than for the better model (part (e)), but the deviations from the experimental data in the overlap-corrected function (part (h)) are also much more pronounced than for the better model (part (f)). It should also be noted that the autocorrelation functions are much less sensitive to these details; the autocorrelation curve for the poorer model (data not shown) fits the experiments as well as that of the better one (part (b)).

IV Correlation Functions and Particle Dynamics

IV.1 Molecular Detection Efficiency

The fluorescent particles pass consecutively through the two observation volumes (Fig. 2). The observation volume of each pinhole is given by the space-dependent molecular detection efficiency (MDE) function. It depends on the excitation intensity profile and on the collection efficiency of the objective plus detector system. In essence, the function $W_1(\mathbf{r})$ denotes the probability density for the event that a fluorescence photon emitted from a particle at position $\mathbf{r}$ will pass through pinhole 1 and reach detector 1. Similarly, $W_2(\mathbf{r})$ is the analogous function for pinhole 2. Since the intensity of the evanescent wave decays exponentially with a penetration depth $z_0$, and the observation volumes are displaced with respect to one another by a distance $d$ along the flow direction, we assume the functional form

$W_1(\mathbf{r}) = C\, u(x + d/2,\, y)\, e^{-z/z_0}$   (3)
$W_2(\mathbf{r}) = C\, u(x - d/2,\, y)\, e^{-z/z_0}$   (4)

where normalization of the probability densities implies

$\int d^3r\, W_i(\mathbf{r}) = 1$   (5)

which fixes the constant $C$.

In general the function $u(x, y)$ is given by the convolution of the pinhole image in the sample space with the point spread function (PSF) of the objective. However, one simple and widely used approximation, valid for pinholes equal to or smaller than the Airy unit of the system, assumes that $u$ is a Gaussian function Rigler01 (); HasslerSecond05 (); zhangzerubia ():

$u(x, y) \propto \exp\left[-\frac{2 \left(x^2 + y^2\right)}{w^2}\right]$   (6)

a typical value for the width $w$ that we obtain from fitting is .
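Because Eqs. 3–6 factorize into a Gaussian lateral part and an exponential axial part, random tracer positions distributed according to $W_1$ (as needed later for the Brownian Dynamics initial condition, Sec. V) can be drawn directly. A minimal sketch, with our own function and variable names, and with our convention of centering $W_1$ at $x = -d/2$:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_W1(n, w, z0, d):
    """Draw n initial positions from the Gaussian-model MDE (Eqs. 3, 6):
    lateral Gaussian exp(-2(x^2+y^2)/w^2) centered at x = -d/2,
    axial exponential exp(-z/z0) with z >= 0 (wall at z = 0)."""
    sigma = w / 2.0                     # exp(-2 r^2/w^2) = normal with std w/2
    x = rng.normal(-d / 2.0, sigma, n)  # observation volume 1, upstream
    y = rng.normal(0.0, sigma, n)
    z = rng.exponential(z0, n)          # evanescent-wave decay in z
    return np.column_stack((x, y, z))
```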

Figure 4: (Color online) Comparison of the two normalized MDEs used in our study, for the optimized parameters of Fig. 3, using the natural unit system of the PCBPSF.

A substantially better description of $u(x, y)$ can be obtained by considering the explicit form of the PSF webb (); richardswolf (). However, this form is described by complex mathematical expressions and is often approximated by a squared Bessel function webb (); bornwolf (); zhangzerubia ()

$\mathrm{PSF}(x, y) \propto \left[\frac{2\, J_1(\nu)}{\nu}\right]^2$   (7)

where $J_1$ denotes the Bessel function of the first kind and order one, and

$\nu = \frac{2 \pi\, \mathrm{NA}}{\lambda_{em}} \sqrt{x^2 + y^2}$   (8)

Here $\lambda_{em}$ is the wavelength of the fluorescent light and NA is the numerical aperture of the objective. The Bessel PSF implicitly assumes a paraxial approximation (i. e. small NA). While this assumption is probably not the best for confocal microscopy, it is certainly more accurate than a simple Gaussian PSF webb (); zhangzerubia ().

As mentioned above, in order to describe what a pinhole sees, one must calculate the convolution of the PSF of the objective with the pinhole image in the sample space. The geometrical image of the pinhole is simply obtained by dividing the physical size of the pinhole by the total magnification of the system. This results in a pinhole image of radius $a$ in the sample space. Therefore the total model MDE is given by zhangzerubia ()

$u(x, y) = \int dx'\, dy'\; \mathrm{PSF}(x - x',\, y - y')\; P(x', y')$   (9)

where

$P(x, y) = \Theta\left(a - \sqrt{x^2 + y^2}\right)$   (10)

is the pinhole image, $\Theta$ denoting the Heaviside step function. The convolution integral is difficult to evaluate analytically, but easy to calculate numerically. To this end, we use dimensionless length units in which the factor $2 \pi\, \mathrm{NA} / \lambda_{em}$ is unity. In these dimensionless units, $a$ takes the value for the parameters given above, which is the value we have used throughout our study. We call the function (9) the “pinhole-convoluted Bessel point spread function” (PCBPSF), which we calculated in dimensionless units once and for all, and stored as a table. During the actual data analysis, the transformation factor from dimensionless units to real units was used as a fit parameter, in analogy to $w$ for the Gaussian model. It should be noted that the PCBPSF decays for large distances like $\nu^{-3}$, therefore providing much more overlap than the Gaussian model.

In the present work, we have studied both models: the “Gaussian” model according to Eq. 6, as well as the PCBPSF model according to Eq. 9. The corresponding correlation curves have already been presented in Fig. 3, and the corresponding MDEs are shown in Fig. 4. One sees that the PCBPSF model puts much more statistical weight into the tail of the distribution than the Gaussian model. As already discussed above, we found the Gaussian model to perform less well than the PCBPSF model, since it underestimates the overlap even more severely than the latter. In what follows, we will always present data for the PCBPSF model, unless stated otherwise.

IV.2 Theory of Correlation Functions

The dynamics of the tracer particles is described by the space- and time-dependent concentration (number of particles per unit volume) $c(\mathbf{r}, t)$, its fluctuation

$\delta c(\mathbf{r}, t) = c(\mathbf{r}, t) - \langle c \rangle$   (11)

and the concentration correlation function

$C(\mathbf{r}, \mathbf{r}', \tau) = \langle \delta c(\mathbf{r}, t)\, \delta c(\mathbf{r}', t + \tau) \rangle$   (12)

note that translational invariance applies only to time, but not to space, due to the presence of the flow and the surface. At time $\tau = 0$, this reduces to the static correlation function, for which we simply assume the function pertaining to an ideal gas:

$C(\mathbf{r}, \mathbf{r}', 0) = \langle c \rangle\, \delta(\mathbf{r} - \mathbf{r}')$   (13)

Note that this assumption implies that we consider the particles as point particles, with no interaction with the surface except impenetrability, and no interaction between each other, due to dilution.

As described in Ref. Rigler01 (), the correlation functions are related to $C$ via

$G_{ij}(\tau) = \frac{\displaystyle \int d^3r \int d^3r'\, W_i(\mathbf{r})\, W_j(\mathbf{r}')\, C(\mathbf{r}, \mathbf{r}', \tau)}{\displaystyle \left[\int d^3r\, W_i(\mathbf{r})\, \langle c \rangle\right] \left[\int d^3r'\, W_j(\mathbf{r}')\, \langle c \rangle\right]} = \frac{1}{\langle c \rangle^2} \int d^3r \int d^3r'\, W_i(\mathbf{r})\, W_j(\mathbf{r}')\, C(\mathbf{r}, \mathbf{r}', \tau)$   (14)

where in the second step we have taken into account the normalization of the $W_i$. Therefore, the obvious strategy for analyzing the experimental data is to (i) evaluate $C$ within a model for the particle dynamics, (ii) evaluate the integrals in Eq. 14 to obtain a theoretical prediction for $G_{ij}$ for a given set of parameters, (iii) compare the prediction with the data, and (iv) optimize the parameters. The normalizing prefactor $\langle c \rangle$ is not known very accurately and will hence be treated as a fit parameter.

The tracer particles undergo a diffusion process and move in an externally driven flow field $\mathbf{v}(\mathbf{r})$. Hence, we describe the concentration correlation function by a convection-diffusion equation of the form

$\frac{\partial}{\partial \tau} C(\mathbf{r}, \mathbf{r}', \tau) = \left[ D \nabla'^2 - \mathbf{v}(\mathbf{r}') \cdot \nabla' \right] C(\mathbf{r}, \mathbf{r}', \tau)$   (15)

which needs to be solved for $\tau > 0$ with the initial condition Eq. 13 and the no-flux boundary condition at the surface,

$\left. \frac{\partial}{\partial z'} C(\mathbf{r}, \mathbf{r}', \tau) \right|_{z' = 0} = 0$   (16)

which imposes that there is no diffusive current entering the solid. For reasons of simplicity, the hydrodynamic interactions with the surface are neglected, and hence the diffusive term is described only by an isotropic diffusion constant $D$.

Since in the experiment the exponential decay length of the spatial detection volume normal to the surface is on the scale of the penetration depth, while the channel size is three orders of magnitude larger, it is justified to assume the flow field to be approximately linear. For our geometry, this implies

$\mathbf{v}(\mathbf{r}) = \dot{\gamma}\, (z + b)\, \hat{\mathbf{e}}_x = \dot{\gamma}\, \mathbf{E} \cdot \mathbf{r} + \dot{\gamma}\, b\, \hat{\mathbf{e}}_x$   (17)

where $b$ is the slip length, $\dot{\gamma}$ is the constant shear rate, $\hat{\mathbf{e}}_x$ denotes the unit vector in $x$-direction, and $\mathbf{E} = \hat{\mathbf{e}}_x \hat{\mathbf{e}}_z$ is the dimensionless rate-of-strain tensor.

At this point, it is useful to re-define the coordinate system in such a way that the finite hard-sphere radius $R$ of the tracer particles is taken into account. We therefore identify $z = 0$ no longer with the interface, but rather with the $z$ coordinate of the particle center at contact with the interface. In this new coordinate system, the flow field is given by

$\mathbf{v}(\mathbf{r}) = \dot{\gamma}\, (z + b + R)\, \hat{\mathbf{e}}_x$   (18)

i. e. we simply have to add the particle radius to the slip length. The functional form of the observation volumes $W_1$ and $W_2$ remains unchanged, since the $z$ dependence is just an exponential decay, such that a shift in $z$ direction just results in a constant prefactor that can be absorbed in the overall normalization. Our method therefore does not yield a value for $b$ alone, but rather only for the combination $b + R$.

As mentioned previously, for some special cases the convection-diffusion equation can be solved analytically, for example in the case of uniform or linear flow in bulk, i. e. far away from surfaces Lum03 (); Bri99 (); Elr62 (); Foi80 (), or for pure diffusion close to the wall, but without any flow field Rie08 (); Hassler05 (). For our case, however, such a solution is difficult or even impossible to find. Therefore the aim of the next sections will be to construct a stochastic numerical method. Concerning the tasks that were listed after Eq. 14, (i) and (ii) can be solved by Brownian Dynamics, while (iii) and (iv) are tackled by a Monte Carlo algorithm in parameter space.

V Sampling Algorithm

Brownian motion of particles under the influence of external driving is described by a Fokker-Planck equation Ris96 (); Hon94 (); Oet96 (); Maz02 (); Lad03 (), which has exactly the same form as the convection-diffusion equation, Eq. 15, the only difference being that $C$ is replaced by the so-called “propagator” $P(\mathbf{r} \to \mathbf{r}', \tau)$, which is the conditional probability density for the particle motion $\mathbf{r} \to \mathbf{r}'$ within the time $\tau$. $C$ and $P$ describe the same physics and are actually identical except for a trivial normalization factor, $C(\mathbf{r}, \mathbf{r}', \tau) = \langle c \rangle\, P(\mathbf{r} \to \mathbf{r}', \tau)$. We can therefore rewrite Eq. 14 as

$G_{ij}(\tau) = \frac{1}{\langle c \rangle} \int d^3r \int d^3r'\, W_i(\mathbf{r})\, W_j(\mathbf{r}')\, P(\mathbf{r} \to \mathbf{r}', \tau)$   (19)

As is well known, the Fokker-Planck equation is equivalent to describing the particle dynamics in terms of a Langevin equation

$\frac{d \mathbf{r}}{d t} = \mathbf{v}(\mathbf{r}) + \boldsymbol{\eta}(t)$   (20)

Here $d\mathbf{r}/dt$ is the tracer velocity, $\mathbf{v}(\mathbf{r})$ is the deterministic (external) velocity imposed by the flow, while $\boldsymbol{\eta}(t)$ is a stochastic Gaussian white noise term which describes the diffusion:

$\langle \eta_\alpha(t) \rangle = 0$   (21a)
$\langle \eta_\alpha(t)\, \eta_\beta(t') \rangle = 2 D\, \delta_{\alpha\beta}\, \delta(t - t')$   (21b)

Here, $\alpha$ and $\beta$ are Cartesian indices and $\delta_{\alpha\beta}$ is the Kronecker delta. We solve this Langevin equation numerically by means of a simple Euler algorithm Oet96 () with a finite time step $h$:

$\mathbf{r}(t + h) = \mathbf{r}(t) + \mathbf{v}(\mathbf{r}(t))\, h + \sqrt{2 D h}\; \boldsymbol{\sigma}$   (22)

where $\boldsymbol{\sigma}$ is a vector of mutually independent random numbers with mean zero and variance one. The boundary condition at the wall is taken into account by a simple reflection at $z = 0$, i. e. a particle that, after a certain time step, has entered the negative half-space $z < 0$ is subjected to $z \to -z$ before the next propagation step is executed.
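A minimal sketch of this propagation step (Eq. 22 plus wall reflection) in plain NumPy; the function and variable names are ours, and the flow field is the linear profile of Eq. 18:

```python
import numpy as np

rng = np.random.default_rng(seed=2409)

def bd_step(r, h, D, gamma_dot, b_plus_R):
    """One Euler step of the Langevin equation (Eq. 22) for an array
    r of particle positions with shape (n, 3), followed by reflection
    at the wall z = 0. Flow: v_x = gamma_dot * (z + b + R), Eq. 18."""
    v = np.zeros_like(r)
    v[:, 0] = gamma_dot * (r[:, 2] + b_plus_R)   # linear shear flow in x
    r = r + v * h + np.sqrt(2.0 * D * h) * rng.standard_normal(r.shape)
    r[:, 2] = np.abs(r[:, 2])                    # reflecting wall: z -> -z
    return r
```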

Now, let us consider a computer experiment where, at time $t = 0$, we place a particle randomly in space, with probability density $W_1(\mathbf{r})$, and then propagate it stochastically according to Eq. 22. The probability density for it reaching the position $\mathbf{r}'$ after the time $t$ is then given by

$p(\mathbf{r}', t) = \int d^3r\, W_1(\mathbf{r})\, P(\mathbf{r} \to \mathbf{r}', t)$   (23)

If we now consider an observable $A$, which is some function of the particle’s coordinate, $A = A(\mathbf{r}')$, and study the time evolution of its average, then this is obviously given by

$\langle A \rangle(t) = \int d^3r'\, A(\mathbf{r}')\, p(\mathbf{r}', t) = \int d^3r \int d^3r'\, W_1(\mathbf{r})\, A(\mathbf{r}')\, P(\mathbf{r} \to \mathbf{r}', t)$   (24)

Figure 5: (Color online) Analytical solution and simulated data for an average over a large number of trajectories.

Therefore, if we use the initial density $W_1(\mathbf{r})$ and set $A(\mathbf{r}') = W_2(\mathbf{r}')$, then $\langle A \rangle(t)$ is identical to the rescaled correlation function $\langle c \rangle\, G_{12}(t)$ (cf. Eq. 19). In other words, we place the particle initially with probability density $W_1$, then generate a stochastic trajectory via Eq. 22, and evaluate $W_2(\mathbf{r}(t))$ for all times $t$ along that trajectory. This yields a function $W_2(\mathbf{r}(t))$ for that particular trajectory. This computer experiment is repeated often, and averaging over all trajectories yields directly a stochastic estimate for the (unnormalized) correlation function $G_{12}$. Of course, these estimates will have statistical error bars, just as the experimental ones; however, we sample several hundred thousand trajectories, such that the numerical errors are substantially smaller than the experimental ones. In principle, the numerical data are also subject to a systematic discretization error as a result of the finite time step; however, by choosing a small value for $h$ we have made sure that this is still small compared to the statistical uncertainty. Note also that our approach implements an optimal importance sampling Lan00 () with respect to the factor $W_1$, but not with respect to $W_2$. In practical terms, our straightforward sampling scheme turned out to be absolutely adequate.
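Putting the pieces together, a stochastic estimate of the unnormalized correlation function can be obtained as sketched below. The naming is again our own; `sample_W1` and `bd_step` are the sketches given above, the MDE is the Gaussian model (Eqs. 3, 4, 6), and `h_max` is a hypothetical parameter bounding the time step:

```python
import numpy as np

def estimate_G12(n_traj, times, params):
    """Average W_2(r(t)) over n_traj trajectories started from W_1,
    yielding an estimate proportional to G_12(t), cf. Eq. 19.
    'times' must be an increasing array of observation times."""
    w, z0, d = params["w"], params["z0"], params["d"]
    D, gdot, bR = params["D"], params["gamma_dot"], params["b_plus_R"]
    r = sample_W1(n_traj, w, z0, d)          # initial positions ~ W_1
    G = np.zeros(len(times))
    t_now = 0.0
    for k, t in enumerate(times):
        # advance all trajectories from t_now to t in small sub-steps
        n_sub = max(1, int(np.ceil((t - t_now) / params["h_max"])))
        h = (t - t_now) / n_sub
        for _ in range(n_sub):
            r = bd_step(r, h, D, gdot, bR)
        t_now = t
        # W_2: Gaussian centered at x = +d/2, exponential in z (Eq. 4)
        G[k] = np.mean(np.exp(-2.0 * ((r[:, 0] - d / 2) ** 2 +
                                      r[:, 1] ** 2) / w ** 2)
                       * np.exp(-r[:, 2] / z0))
    return G
```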

The simulations were run using a “natural” unit system where the length unit is defined by setting the penetration depth $z_0$ to unity, while the time unit is given by setting the diffusion constant $D$ to unity. The time step was fixed in physical units (it was dynamically adjusted in order to match the non-equidistant experimental observation times) such that, for all parameters, it is much smaller than unity in dimensionless units. Obviously, this is small enough to represent the stochastic part of the Langevin update scheme with sufficient accuracy. For typical parameters, the dimensionless unit time $z_0^2 / D$ is such that the resulting value of the dimensionless shear rate $\dot{\gamma} z_0^2 / D$ is of order unity. Since $z_0$ (or unity, in dimensionless units) defines the range in which the statistically relevant part of the simulation takes place, we find that typical flow velocities in dimensionless units are also of order unity. This shows that the time step is also small enough for the deterministic part of the Langevin equation. We also see that the experiment is neither dominated by diffusion nor by convection, and therefore the analysis needs to take both into account.

As a simple test case, we used our algorithm to calculate the autocorrelation function for vanishing flow and the Gaussian model for the observation volume, where an analytical solution is known Hassler05 (); Rie08 (). In our dimensionless units ($z_0 = D = 1$), it is, up to a constant prefactor, given by

$G_{11}(t) = \frac{1}{1 + 4 t / \tilde{w}^2} \left[ \left(1 - 2 t\right) e^{t}\, \mathrm{erfc}\!\left(\sqrt{t}\right) + 2 \sqrt{\frac{t}{\pi}} \right]$   (25)

where $\tilde{w}$ is the lateral width $w$ expressed in dimensionless units. Figure 5 shows this analytic autocorrelation function and its simulated counterpart, averaged over a large number of independent trajectories, where a small time step (in dimensionless units) was used. In Fig. 6 the deviation of the simulated data from the analytic expression is shown,

$\Delta(t) = G_{11}^{\mathrm{sim}}(t) - G_{11}(t)$   (26)

Clearly, the numerical solution converges to the analytical result when the number of trajectories is increased, as it should be.
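Equation 25 makes this self-test easy to reproduce: simulate pure diffusion ($\dot{\gamma} = 0$) with coinciding observation volumes ($d = 0$, so that the sampled “cross”-correlation reduces to the autocorrelation) and compare. A sketch using the helper functions from the previous code blocks (all names and parameter values are our own choices):

```python
import numpy as np
from scipy.special import erfc

def G11_analytic(t, w_tilde):
    """Eq. 25: autocorrelation for pure diffusion, Gaussian lateral
    MDE, dimensionless units z0 = D = 1 (up to a prefactor)."""
    t = np.asarray(t, dtype=float)
    axial = (1.0 - 2.0 * t) * np.exp(t) * erfc(np.sqrt(t)) \
            + 2.0 * np.sqrt(t / np.pi)
    return axial / (1.0 + 4.0 * t / w_tilde ** 2)

params = dict(w=2.0, z0=1.0, d=0.0, D=1.0, gamma_dot=0.0,
              b_plus_R=0.0, h_max=1e-3)
times = np.logspace(-3, 1, 30)
G_sim = estimate_G12(100_000, times, params)
G_ref = G11_analytic(times, params["w"])
# compare after matching the overall normalization at the first point
print(np.max(np.abs(G_sim / G_sim[0] - G_ref / G_ref[0])))
```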

Figure 6: (Color online) Deviation from the analytic curve for three different numbers of trajectories.

VI Subtraction Scheme

At this point, it is worthwhile to reconsider the subtraction procedure introduced in Sec. III. To this end, we assume that the true MDE functions $W_i$ differ somewhat from the model functions, which we will denote by $W_i^{(0)}$. This is most easily parameterized by the ansatz

$W_i = (1 - \epsilon)\, W_i^{(0)} + \epsilon\, \Phi_i$   (27)

where $W_i$, $W_i^{(0)}$ and $\Phi_i$ are all normalized to unity, while $\epsilon$ is a (hopefully) small parameter. For the purposes of the present analysis, we also assume that the Brownian Dynamics model is a faithful and correct description of the true dynamics, i. e. that the difference between $W_i$ and $W_i^{(0)}$ is the only reason for a systematic deviation between simulation and experiment.

Inserting Eq. 27 into Eq. 19, we thus find

$G_{ij}(\tau) = \frac{(1 - \epsilon)^2}{\langle c \rangle} \int d^3r \int d^3r'\, W_i^{(0)}(\mathbf{r})\, W_j^{(0)}(\mathbf{r}')\, P(\mathbf{r} \to \mathbf{r}', \tau) + \frac{\epsilon (1 - \epsilon)}{\langle c \rangle} \int d^3r \int d^3r' \left[ \Phi_i(\mathbf{r})\, W_j^{(0)}(\mathbf{r}') + W_i^{(0)}(\mathbf{r})\, \Phi_j(\mathbf{r}') \right] P(\mathbf{r} \to \mathbf{r}', \tau) + O(\epsilon^2)$   (28)

Since we treat the prefactor $\langle c \rangle$ as an adjustable parameter, it makes sense to view the first term (including the prefactor $(1 - \epsilon)^2$) as the theoretical model for the correlation function, $G_{ij}^{\mathrm{th}}(\tau)$. For the deviation between experiment and theory we then obtain, neglecting all terms of order $\epsilon^2$,

$\delta G_{ij}(\tau) \equiv G_{ij}(\tau) - G_{ij}^{\mathrm{th}}(\tau) = \frac{\epsilon}{\langle c \rangle} \int d^3r \int d^3r' \left[ \Phi_i(\mathbf{r})\, W_j^{(0)}(\mathbf{r}') + W_i^{(0)}(\mathbf{r})\, \Phi_j(\mathbf{r}') \right] P(\mathbf{r} \to \mathbf{r}', \tau)$   (29)

and for its antisymmetric part

$\delta G_{12}(\tau) - \delta G_{21}(\tau) = \frac{\epsilon}{\langle c \rangle} \int d^3r \int d^3r' \left\{ \left[ \Phi_1(\mathbf{r})\, W_2^{(0)}(\mathbf{r}') - W_2^{(0)}(\mathbf{r})\, \Phi_1(\mathbf{r}') \right] + \left[ W_1^{(0)}(\mathbf{r})\, \Phi_2(\mathbf{r}') - \Phi_2(\mathbf{r})\, W_1^{(0)}(\mathbf{r}') \right] \right\} P(\mathbf{r} \to \mathbf{r}', \tau)$   (30)

The terms in square brackets are antisymmetric under the exchange $\mathbf{r} \leftrightarrow \mathbf{r}'$, and hence $P$ can be replaced by its antisymmetric part

$P_A(\mathbf{r} \to \mathbf{r}', \tau) = \frac{1}{2} \left[ P(\mathbf{r} \to \mathbf{r}', \tau) - P(\mathbf{r}' \to \mathbf{r}, \tau) \right]$   (31)

Exchanging the arguments in the second terms within the square brackets then yields

$\delta G_{12}(\tau) - \delta G_{21}(\tau) = \frac{2 \epsilon}{\langle c \rangle} \int d^3r \int d^3r' \left[ \Phi_1(\mathbf{r})\, W_2^{(0)}(\mathbf{r}') + W_1^{(0)}(\mathbf{r})\, \Phi_2(\mathbf{r}') \right] P_A(\mathbf{r} \to \mathbf{r}', \tau)$   (32)

This is clearly a nonzero contribution. In other words, the subtraction scheme (i. e. studying $G_{12} - G_{21}$ instead of $G_{12}$ and $G_{21}$ individually) does not provide a consistent cancellation procedure such that the first-order deviation would vanish. However, in practical terms the deviation is much smaller than for the original data ($G_{12}$ and $G_{21}$), for which Eq. 29 applies. To some extent, this is so because the error is the difference of two terms, but mostly it is due to the fact that not the full propagator contributes, but rather only its antisymmetric part $P_A$. For short times the dynamics is dominated by diffusion, i. e. $P$ is essentially symmetric, or $P_A \approx 0$. At late times, we again expect $P_A$ to become quite small (exponentially damped, see Eq. 34), although we have no rigorous proof for this. Therefore one should expect that the strongest deviation occurs at intermediate times where $P_A$ is maximal. This time scale is not given by the optical geometry but rather by the dynamics; dimensional analysis then tells us that this time must be of order $\dot{\gamma}^{-1}$. For typical parameters of our experiment we obtain a value of a few tenths of a millisecond, which fits quite well to the observations one can make in Fig. 3, part (h). At such times, we expect that the main contribution to $\delta G_{12} - \delta G_{21}$ comes from the first term of Eq. 32 (downstream vs. upstream correlation) and that it is positive for most of the relevant arguments. Therefore, one should expect that the experimental data should lie systematically above the theoretical predictions, which is indeed the case. Our expectations concerning the behavior of $P_A$ come from studying the simple case of one-dimensional diffusion with constant drift velocity $v$ without boundary conditions; here one has

$P(x \to x', t) = \left(4 \pi D t\right)^{-1/2} \exp\left[ -\frac{(x' - x - v t)^2}{4 D t} \right]$   (33)

and

$P_A(x \to x', t) = \left(4 \pi D t\right)^{-1/2} \exp\left[ -\frac{(x' - x)^2}{4 D t} \right] \exp\left( -\frac{v^2 t}{4 D} \right) \sinh\left( \frac{v\, (x' - x)}{2 D} \right)$   (34)

VII Statistical Data Analysis

VII.1 Monte Carlo Algorithm

For the model that we consider in the present paper, the space of fit parameters is (in principle) seven-dimensional. We have three lengths that define the geometry of the optical setup: $z_0$, $d$, and $w$ (Gaussian model) or the corresponding length-scale factor (diffraction model). Three further parameters define the properties of the flow and the diffusive dynamics of the tracers; these are the diffusion constant $D$, the shear rate $\dot{\gamma}$, and the slip length plus particle radius, $b + R$. Finally, there is the concentration of tracer particles $\langle c \rangle$, which serves as a global normalization constant. The functions to be fitted are $(G_{11} + G_{22})/2$ and $G_{12} - G_{21}$. However, we have seen in Sec. VI that the non-idealities in modeling the observation volumes do have an effect on the normalizations, and therefore we allowed one separate normalization constant for each of the two curves (one for the autocorrelation and one for the cross-correlation), in order to partly compensate for these non-idealities. Therefore, our parameter space is finally eight-dimensional. The strategy that we develop in the present section aims at adjusting all parameters simultaneously in order to obtain optimum fits. For the further development, it will be useful to combine all the parameters into one vector $\mathbf{q}$. Furthermore, for each parameter we can, from various physical considerations, define an interval within which it is allowed to vary (because values outside that interval would be highly unreasonable or outright unphysical). This means that we restrict the consideration to a finite eight-dimensional box $B$ in parameter space.

A central ingredient of our approach is the fact that both the experimental data and the simulation results have been obtained with good statistical accuracy (several hundred thousand trajectories for the simulations, repeated independent measurements for the experiments). This does not only allow us to obtain rather small statistical error bars, but also (even more importantly) to rely on the asymptotics of the Central Limit Theorem, i. e. to assume Gaussian statistics throughout. For both correlation curves and each of the considered times, we have both an experimental data point $x_i$ and a simulated data point $y_i$, where the index $i$ simply enumerates the data points. Both $x_i$ and $y_i$ can be considered as Gaussian random variables with variances $\sigma_i^2$ and $s_i^2$, respectively. Then

$t_i = \frac{x_i - y_i}{\sqrt{\sigma_i^2 + s_i^2}}$   (35)

is again a Gaussian random variable, whose variance is simply unity,

$\langle t_i^2 \rangle - \langle t_i \rangle^2 = 1$   (36)

Therefore, $t_i$ is, in principle, a perfect variable to measure the deviation between simulation and experiment. Unfortunately, however, the parameters $\sigma_i$ and $s_i$ are not known. What is rather known are their estimators $\hat{\sigma}_i$ and $\hat{s}_i$, as they are obtained from standard analysis to calculate error bars. Therefore, we rather consider

$\hat{t}_i = \frac{x_i - y_i}{\sqrt{\hat{\sigma}_i^2 + \hat{s}_i^2}}$   (37)

The statistical properties of this variable are, in the general case, unknown Wel38 (). It is only in the case of rather good statistics (as we have realized it) that we can ignore the difference between $t_i$ and $\hat{t}_i$, and simply assume that $\hat{t}_i$ is indeed a Gaussian variable with unit variance. It is at this point where the statistical quality of the data clearly becomes important.

If $N$ is the total number of data points, then

$H(\mathbf{q}) = \sum_{i=1}^{N} \hat{t}_i^2$   (38)

is obviously a quantity that measures rather well the deviation between experiment and simulation. In principle, the task is to pick the parameter vector $\mathbf{q}$ in such a way that $H$ is minimized. We have deliberately chosen the symbol $H$ in order to point out the analogy to the problem of finding the ground state of a statistical-mechanical Hamiltonian. In case of a perfect fit, we have $\langle \hat{t}_i^2 \rangle = 1$ or $\langle H \rangle = N$, implying $H / N \approx 1$. In the standard nomenclature of fitting problems, $H$ is called “chi squared”. We also introduce $h = H / N$, which we will call the “goodness of simulation” (standard nomenclature: “chi squared per degree of freedom”).
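In code, Eqs. 35–38 amount to a few lines; a sketch with our own names, where `x`, `y` are the experimental and simulated curves and `sx`, `sy` their estimated error bars:

```python
import numpy as np

def goodness_of_simulation(x, sx, y, sy):
    """Eqs. 37-38: chi squared and chi squared per data point,
    assuming Gaussian statistics with estimated variances."""
    x, sx, y, sy = map(np.asarray, (x, sx, y, sy))
    t_hat = (x - y) / np.sqrt(sx ** 2 + sy ** 2)   # Eq. 37
    H = np.sum(t_hat ** 2)                         # Eq. 38
    return H, H / x.size                           # H and h = H / N
```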

For the optimization, we obviously need to consider $H$ as a function of $\mathbf{q}$. In this context, it turns out that it is important to be able to consider it as a function of only $\mathbf{q}$, and to make sure that this dependence is smooth. For this reason, we use the same number of trajectories when going from one parameter set to another, and use exactly the same set of random numbers to generate the trajectories. In other words, the trajectories differ only due to the fact that the parameters were changed. Therefore, both $y_i$ and $\hat{s}_i$ are smooth functions of the parameters, and $H$ is as well.

In order to find the optimum parameter set, one could, in principle, construct a regular grid in $B$ and then evaluate $H$ for every grid point. However, for high-dimensional spaces (and eight should in this context be viewed as already a fairly large number, in particular when taking into account that it is bound to increase further as soon as more refined models are studied), it is usually more efficient to scan the space by an importance-sampling Monte Carlo procedure based upon a Markov chain Lan00 (). Applying the standard Metropolis scheme Lan00 (), we thus arrive at the following algorithm (a compact code sketch is given below, after the specification of $\pi$):

  1. Choose some start vector $\mathbf{q}_0$. This should be a reasonable set of parameters, perhaps pre-optimized by simple visual fitting.

  2. From the previous set of parameters $\mathbf{q}$, generate a trial set via $\mathbf{q}' = \mathbf{q} + \delta\mathbf{q}$, where $\delta\mathbf{q}$ is a random vector chosen from a uniform distribution over a small sub-box centered at the origin and aligned with the coordinate axes.

  3. If the new vector $\mathbf{q}'$ is not within $B$, reject the trial set and go to step 2.

  4. Otherwise, calculate both $H(\mathbf{q})$ and $H(\mathbf{q}')$, as well as the Metropolis function

    $W(\mathbf{q} \to \mathbf{q}') = \min\left(1, \frac{\pi(\mathbf{q}')}{\pi(\mathbf{q})}\right)$   (39)

    where $\pi(\mathbf{q})$ is the “equilibrium” probability density of $\mathbf{q}$, i. e. the desired probability density towards which the Markov chain converges (more about this below).

  5. Accept the trial move with probability $W$ (reject it with probability $1 - W$), count either the accepted or the old set as a new entry in the Markov chain, and go to step 2.

  6. After relaxation into equilibrium, sample desired properties of the distribution of $\mathbf{q}$, like mean values, variances, covariances, etc., by simple arithmetic means over the parameter sets that have been generated by the Markov chain. This allows the estimation of not only the physical parameters, but at the same time also of their statistical error bars.

The scheme is defined as soon as $\pi$ is specified. Now, from the considerations above, we know that in case of a perfect fit the variables $\hat{t}_i$ are independent Gaussians with zero mean and unit variance. This implies (ignoring constant prefactors which anyway cancel out in the Metropolis function)

$\pi(\mathbf{q}) \propto \exp\left[-\frac{1}{2} H(\mathbf{q})\right]$   (40)

which makes the interpretation in terms of statistical mechanics obvious: the Markov chain samples a Boltzmann distribution with Hamiltonian $H$ at temperature two. Clearly, this form for $\pi$ is the only reasonable choice for implementing the Monte Carlo algorithm. After relaxation into equilibrium, one should observe a value of $h$ of roughly unity, while larger values indicate a non-perfect fit (even after exhaustive Monte Carlo search), and thus deficiencies in the theoretical model. One should also be aware that the equilibrium fluctuations of $h$ are expected to be quite small, since $h$ is the arithmetic mean of a fairly large number ($N$, the number of experimental data points) of independent random variables.
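A compact sketch of the whole loop (our own structure and names; `H_of_q` stands for the Brownian Dynamics evaluation of $H(\mathbf{q})$ via Eq. 38, using the same random numbers for every call, cf. Sec. V):

```python
import numpy as np

rng = np.random.default_rng(seed=4711)

def metropolis_fit(q0, step, bounds, H_of_q, n_steps):
    """Markov chain over the parameter box 'bounds' = (lo, hi) with
    stationary density pi(q) ~ exp(-H(q)/2), Eqs. 39-40."""
    q = np.array(q0, dtype=float)
    H = H_of_q(q)
    chain = []
    for _ in range(n_steps):
        trial = q + step * rng.uniform(-1.0, 1.0, size=q.size)
        lo, hi = bounds
        if np.any(trial < lo) or np.any(trial > hi):
            chain.append(q.copy())       # outside the box B: reject
            continue
        H_trial = H_of_q(trial)
        # Metropolis acceptance for pi ~ exp(-H/2)
        if H_trial <= H or rng.random() < np.exp(-(H_trial - H) / 2.0):
            q, H = trial, H_trial
        chain.append(q.copy())
    return np.array(chain)   # discard the initial part before averaging
```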

In practice, we adjusted the size of the trial sub-box in order to obtain a fairly large acceptance rate. The Monte Carlo algorithm was then run for a large number of steps, each step involving the generation of a new set of trajectories. The simulation was run on several nodes of the IBM Blue Gene/P at Rechenzentrum Garching, where each process generated a share of the trajectories. On this machine, one Monte Carlo run took roughly one day to complete. It turned out that discarding the initial part of the chain was sufficient to obtain data in equilibrium conditions, where the mean values of the parameters and their standard deviations were calculated. It should be noted that the equilibrium fluctuations of the parameters tell us the typical range in which they can still be viewed as compatible with the experiments. Therefore these fluctuations are the appropriate measure to quantify the experimental error bars, while calculating a standard error of the mean (or a similar quantity) would not be appropriate and would severely underestimate the errors. Finally, it should be noted that the approach allows, in principle, to analyze the mutual dependence of the parameters as well, by sampling the corresponding covariances; this was however not done in the present study.

VII.2 Scale Invariance

As noted before, the correlation functions depend on the average concentration $\langle c \rangle$, the diffusion constant $D$, the shear rate $\dot{\gamma}$, and various lengths, which we denote by $\ell_k$. Simple dimensional analysis shows that for any scale factor $\lambda$ the scaling relation

$G_{ij}\left(\tau;\, \langle c \rangle,\, D,\, \dot{\gamma},\, \{\ell_k\}\right) = G_{ij}\left(\tau;\, \lambda^{-3} \langle c \rangle,\, \lambda^2 D,\, \dot{\gamma},\, \{\lambda \ell_k\}\right)$   (41)

holds. The “Hamiltonian” $H$ of the previous subsection is of course subject to the same scale invariance. This means that for each point in parameter space there is a whole “iso-line” in parameter space that fits the data just as well as the original point. Therefore, in order to improve the MC sampling, we generated such an iso-line for each point in parameter space that was produced by the Markov chain of the previous subsection. Of course, the iso-lines were confined to the region of the overall parameter box $B$. It turned out that our Markov chains were still so short that this improvement was not completely superfluous (as it would be in the limit of very long chains). In other words, taking the invariance into account helped us to avoid underestimating the errors.

In practice, this was done as follows: Assuming that the most accurately known input parameters are the penetration depth $z_0$, the diffusion constant $D$ and the separation distance $d$, we calculate for every data point a minimum and a maximum scaling factor, $\lambda_{\min}$ and $\lambda_{\max}$, such that $\lambda z_0$, $\lambda d$ and $\lambda^2 D$ remain within their allowed intervals for all $\lambda$ in $[\lambda_{\min}, \lambda_{\max}]$. This provides us with additional data points in parameter space that are added to the statistics.
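A sketch of how such iso-line points can be generated (our own function; the index arrays refer to whatever layout is chosen for the parameter vector, and all lengths, including $b + R$, scale with $\lambda$):

```python
import numpy as np

def isoline_points(q, lam_values, idx_lengths, idx_D, idx_conc):
    """Generate parameter sets equivalent to q under the scale
    invariance of Eq. 41: lengths -> lam * lengths, D -> lam^2 * D,
    concentration -> concentration / lam^3, shear rate unchanged."""
    points = []
    for lam in lam_values:
        p = np.array(q, dtype=float)
        p[idx_lengths] *= lam          # z0, d, w (or PCBPSF scale), b + R
        p[idx_D] *= lam ** 2           # diffusion constant
        p[idx_conc] /= lam ** 3        # tracer concentration
        points.append(p)
    return np.array(points)
```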

VII.3 Sample-to-Sample Fluctuations

Table 1: Averaged values (av) and standard deviations ($\sigma$) of the fit parameters, together with acceptance rates and numbers of MC steps, calculated from MC simulations with fixed $\dot{\gamma}$, but different start values (“seeds” 42, 4711, and 2409) for the random number generator.

It should be noted that the parameters found by the procedure outlined above are optimized for a specific set of random numbers used to generate the trajectories. Therefore, one must expect that one obtains different results when changing the set of random numbers. In our statistical-mechanical picture, we may view the set of random numbers as random “coupling constants” of a disordered system like a spin glass mezard (), where the disorder is weak since the number of trajectories is large. For disordered systems, the phenomenon of “sample-to-sample” fluctuations is well-known, and it should be taken into account. We have therefore run one test where we applied the same analysis to three different random number sequences. Indeed we found (see Tab. 1) that sample-to-sample fluctuations are observable, and somewhat larger than the errors obtained from simple MC, while still being of the same order of magnitude. A conservative error estimate should therefore take these fluctuations into account, by multiplying the error estimates from plain MC by, say, a factor of three. In what follows, we will only report the simple MC estimates for the errors.

VIII Results

Figure 7: Goodness of simulation $h$ as a function of the number of Monte Carlo steps, for one fixed value of the shear rate.
Figure 8: Slip length plus particle radius, $b + R$, as a function of the number of Monte Carlo steps, for one fixed value of the shear rate.

The experiments were performed at a fixed penetration depth of the evanescent wave; the lateral size of the observation volumes (within the Gaussian model) and their center-to-center separation were determined as described in Sec. II. Furthermore, the diffusion constant of the tracers is known roughly from dynamic light scattering measurements. The shear rate was determined from an independent measurement using single-focus confocal FCS Dit02 (); Kohler2000 (), where the entire flow profile across the microchannel was mapped out. Alternatively, one might also use double-focus confocal FCCS Bri99 (); Lum03 (). From this measurement, we obtained an estimate of the shear rate at the bottom channel wall. More details on this issue and some theoretical background are presented in the appendix. Nevertheless, we took a conservative approach and allowed the shear rate to vary between $3500\,\mathrm{s}^{-1}$ and $4000\,\mathrm{s}^{-1}$ (cf. Table 2). Finally, we expected the slip length to be not more than a few nanometers, but we nevertheless allowed it to vary over a much wider interval. These estimates allowed us to start the Monte Carlo procedure with good input values.

We then observed the Monte Carlo simulation to systematically drift to smaller and smaller values of $\dot{\gamma}$, until finally “getting stuck” at the imposed lower boundary of the allowed interval. What we mean by this term is a behavior where fluctuations near the boundary still occur, but in such a way that the boundary value is the most probable one, while smaller values only do not occur because we do not allow them. Since we know experimentally that such a small shear rate is clearly unacceptable, this behavior again indicates that the theoretical model is not completely sufficient to describe the experimental data (see also the discussion in Secs. III and VI).

We therefore decided to keep $\dot{\gamma}$ fixed during a Monte Carlo run, and rather vary it systematically in the given range. For none of the parameter values were we able to obtain a goodness of simulation very close to unity; the best values remain somewhat too large, i. e. indicate a non-perfect fit (although the data on $(G_{12} + G_{21})/2$ have been discarded already). The convergence behavior of the method is shown in Fig. 7, where we plot $h$ as a function of the number of Monte Carlo iterations. For the Gaussian model, the best value that we could obtain was substantially worse.

Figure 9: Averaged slip length plus particle radius, $b + R$, as a function of the shear rate, calculated from the Monte Carlo results.

With these caveats in mind, we may proceed to study the parameter values that the Monte Carlo procedure yields. Obviously, the most interesting one is the slip length $b$, or rather the sum $b + R$ (recall that the method does not provide an independent estimate for these parameters, but only for their sum). Figure 8 presents data on the evolution of $b + R$ during the Monte Carlo process for one fixed shear rate; $b + R$ is thus seen to fluctuate within a well-defined band, which is, within the limitations of the model, the statistical experimental uncertainty of this quantity. The mean and standard deviation of $b + R$ are shown in Fig. 9 as a function of shear rate; the two quantities are thus clearly seen to not be independent. Since we know $\dot{\gamma}$ much more accurately than the range plotted in Fig. 9, we see that in principle a fairly accurate determination of $b + R$ is possible, if the underlying theoretical model is detailed enough to fully describe the physics. One should note that the particle size (more precisely, its hydrodynamic radius) is known; taking this into account as well, we find a correspondingly smaller value for the slip length $b$ itself. One should also note that for the Gaussian model we obtained a very similar curve; however, there the values are systematically smaller by a roughly constant offset. This again highlights the importance of having an accurate model for the MDE.

The other results obtained from our MC fits are reported in Tab. 2.

Table 2: Averaged values (av) and standard deviations ($\sigma$) of the fit parameters, together with acceptance rates and numbers of MC steps, calculated from MC simulations at fixed shear rates of 3500, 3600, 3700, 3800, 3900, and $4000\,\mathrm{s}^{-1}$.

Clearly, the values of Fig. 9 could only be viewed as definitive experimental results on $b + R$ if the agreement between experiment and model were perfect, with $h \approx 1$ and a good fit of all correlation functions. The reasons for the observed deviations are not completely clear; however, all our findings hint very strongly at deficiencies in the description of the observation volumes, i. e. too inaccurate modeling of the detailed optical phenomena that finally give rise to the shape of these functions. Nevertheless, one should also bear in mind that the dynamic model is also rather simple, neglecting both hydrodynamic and residual electrostatic interactions with the wall. While one must expect that further refinements of the model will change both the values and their error bars, we believe that it is improbable that such a change would be huge. Given all the various systematic uncertainties of the modeling, we would, in view of our data, not exclude a vanishing slip length, while we consider a value substantially larger than the observed fluctuation range as fairly unlikely.

Let us conclude this section with a few more remarks concerning our choice of parameters and the systematic errors of the method. From the setup it is clear that there are three parameters that can be varied experimentally fairly easily: the shear rate , the penetration depth , and the effective pinhole–pinhole distance in sample space, . The choice of parameters was governed by various experimental considerations, which we attempt to explain in what follows.

It is clear that one wants a fairly large shear rate , in order to ensure that the signal has a sizeable contribution from flow effects. In practice, however, increasing substantially further is limited by experimental constraints, such as channel construction, beaker elevation, etc. Furthermore, the choice of is subject to similarly severe experimental constraints: Increasing substantially would mean approaching the limiting angle of total reflection too closely, which would result in a very inaccurate a priori estimate of . On the other hand, an even smaller penetration depth would be too close to the limits of the capabilities of the objective, with possible optical distortion effects that we would like to avoid. Finally, the choice of was governed by our early attempts to suppress overlap effects by simply picking a fairly large value, such that the overlap integral is small. There are, however, two problems with such an approach. Firstly, a large value of decreases not only the overlap, but also the cross-correlation function as a whole Koy09 (), such that it ultimately becomes impossible to sample the data with sufficient statistical accuracy on the time scale over which we can confidently keep the experimental conditions stable. Our value of should therefore be viewed as limited by such considerations. Secondly, and more importantly, we realized in the course of our analysis that the overlap issue is not the result of a poor choice of parameters at all, but rather of our insufficient theoretical modeling of the MDE functions. As we have seen above, our results for the slip length depend rather sensitively and fairly substantially on the choice of the MDE function (up to nearly a factor of two). In our opinion, there is no reason to assume that this dependence would go away if we had picked parameters in a regime of small or vanishing overlap. From this perspective, we view the overlap essentially as a blessing, since it shows us where the main source of systematic error is probably located.

In the light of these remarks, it would of course be interesting to systematically investigate the influence of the parameters and on our results. In terms of the correlation functions as such, this has been done in Ref. Koy09 (), and we refer the interested reader to that paper. However, performing the full analysis for a whole host of parameter sets would imply a very substantial amount of work, since all the experimental curves would have to be re-sampled in order to meet the rather stringent requirement of statistical accuracy that is built into our approach. We have hence not attempted this, but rather believe that it will be more fruitful to concentrate future efforts on improving the theoretical MDE modeling, even if that will be challenging. As far as the slip length is concerned, one must of course expect that the fitted value will depend on parameters such as and , but only to the extent that this reflects the systematic error: if the physics were modeled perfectly, we would always obtain the same value.

IX Conclusions

The results of the previous sections demonstrate that TIR-FCCS, in combination with the presented Brownian Dynamics and Monte Carlo based data analysis, is in principle a very powerful tool for the analysis of hydrodynamic effects near solid–liquid interfaces. Already within the simple model investigated in the present paper, we can conclude that the slip length at our hydrophilic surface is not more than . Only the data processing via the Brownian Dynamics / Monte Carlo analysis was able to demonstrate how sensitive and accurate TIR-FCCS is.

The computational method has the advantage of being easily extensible to include more complex effects. For example, the hydrodynamic interactions of the particles with the wall would cause an anisotropy in the diffusion tensor Dhont07 () and a dependence, electrostatic interactions would give rise to an additional force term in the Langevin equation, while polydispersity could be investigated by randomizing the particle size and the diffusion properties according to a given distribution; a minimal sketch of such an extended update step is given below. While these contributions are expected to yield a further improvement of the method, this was not attempted here, and is left to future investigations. However, we have also identified the inaccuracies in modeling the observation volumes as (most probably) the main bottleneck in achieving good agreement between theory and experiment, i. e., at present, as the main source of systematic errors, which makes it difficult to give a fully reliable error bound on the value of the slip length.
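As an illustration of how such extensions might enter the dynamic model, the following sketch shows a single Euler–Maruyama update of the wall-normal coordinate with a position-dependent diffusion coefficient and an external force. The callables D_perp, dD_perp_dz, and F_z are hypothetical placeholders (e.g. for a wall-corrected mobility and a screened electrostatic repulsion), not part of the model actually used in this work:

\begin{verbatim}
import numpy as np

def bd_step_z(z, dt, kT, D_perp, dD_perp_dz, F_z, rng):
    """One Euler-Maruyama step of the wall-normal coordinate, with a
    z-dependent diffusion coefficient D_perp(z) and an external force
    F_z(z).  The dD/dz term is the spurious-drift correction required
    for position-dependent mobility (Ito interpretation)."""
    D = D_perp(z)
    drift = (D / kT) * F_z(z) + dD_perp_dz(z)
    return z + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
\end{verbatim}

Note the dD/dz term: without this correction, a position-dependent diffusion coefficient would drive the particles toward a wrong equilibrium distribution.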

Conversely, the problem of dealing with statistical errors can be considered solved. For an extensive data analysis such as the one presented here, a supercomputer may be needed to obtain highly accurate results within a fairly short time. Nevertheless, the method will yield meaningful results even if confined to a single modern desktop computer. Given the moderate amount of computer time needed on a high-performance machine, one should expect that quite accurate data will also be obtainable within reasonable times by making use of the newly emerging GPGPU cards.

In our opinion, the presented method is a conceptually simple and widely applicable approach to processing TIR-FCCS data, which at present is limited only by the accuracy of the modeling. We believe that it has the potential to become the standard and general tool for processing such data, in particular as soon as the optics is understood in better detail. The principle as such is applicable to all kinds of correlation techniques, such as FCS, TIR-FCS, etc., and we think it is the method of choice whenever one investigates a system whose complexity is beyond analytical treatment.

Acknowledgements.
This work was funded by the SFB TR 6 of the Deutsche Forschungsgemeinschaft. Computer time was provided by Rechenzentrum Garching. We thank J. Ravi Prakash and A. J. C. Ladd for helpful discussions.

Appendix A Solution of the Stokes Equation in a Rectangular Channel

The flow profile throughout the height of the microchannel was measured by single-focus FCS under the same conditions as the TIR-FCCS experiments; the result is shown in Fig. 10. From a fit via a Poiseuille profile (solid line), we obtained an independent estimate of the shear rate near the wall, .
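A minimal sketch of such a fit, assuming a strictly parabolic profile v(z) = a z (h - z) with the wall at z = 0, so that the wall shear rate is a h; the arrays below stand in for the measured positions and velocities, and all numbers are purely illustrative, not the experimental values:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def poiseuille(z, a, h):
    """Parabolic profile v(z) = a * z * (h - z); the shear rate at the
    wall z = 0 follows as dv/dz|_{z=0} = a * h."""
    return a * z * (h - z)

# Placeholder data standing in for the measured FCS flow profile
# (illustrative synthetic numbers only):
h_true, a_true = 50e-6, 4.0e7
z_data = np.linspace(2e-6, 48e-6, 20)
v_data = poiseuille(z_data, a_true, h_true)
v_data += 1e-5 * np.random.default_rng(0).standard_normal(z_data.size)

popt, pcov = curve_fit(poiseuille, z_data, v_data, p0=(1e7, 40e-6))
a_fit, h_fit = popt
shear_wall = a_fit * h_fit                     # wall shear rate estimate
shear_err = np.sqrt(h_fit**2 * pcov[0, 0] + a_fit**2 * pcov[1, 1]
                    + 2 * a_fit * h_fit * pcov[0, 1])   # error propagation
\end{verbatim}

The error on the wall shear rate follows from standard propagation of the fit covariance, as in the last two lines.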

Figure 10: (Color online) Flow profile and Poiseuille fit along the -direction (the surface of measurement is located at ).

The purpose of this appendix is to analyze the theoretical background of this fit in some more detail. For a pure Poiseuille flow, i. e. a simple parabolic profile, it is clear that the shear rate at the surface does not depend on the slip length , because in this case a finite value simply shifts the profile by a constant amount. Therefore, in this case is indeed irrelevant for the fit. A short discussion of such issues is also found in Ref. Lum03 (), and experimentally Jos05 (); Vin09 () it is known that the shift is typically so small that a finite slip length is hard to detect by direct measurements of the profile. However, from a theoretical and quantitative point of view it is not obvious how well it is justified to assume a strictly parabolic profile, i. e. to assume that the flow extends infinitely in the direction; in our experiments, , which is large but not infinite. For finite values of , the profile is somewhat distorted, and if this distortion is sufficiently large, then a possible effect of should also be taken into account. These questions can easily be answered by solving the flow problem in a rectangular channel in the presence of slip exactly, and this shall be done in what follows; a numerical check based on the classical no-slip solution is sketched below. The result of this analysis is that for our conditions the distortion of the profile is completely negligible, and that therefore need not be taken into account either.
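Before turning to the exact treatment, it may be helpful to quantify the distortion numerically for the no-slip case, using the classical series solution for pressure-driven flow in a rectangular duct (the form found in textbooks such as Spurk and Aksel); the aspect ratio of 10 and the scaled units below are illustrative choices, not the exact experimental values:

\begin{verbatim}
import numpy as np

def duct_velocity(y, z, Ly, Lz, f_over_eta, n_terms=20):
    """Classical no-slip solution for pressure-driven flow in a
    rectangular duct of cross section Ly x Lz: the parabolic profile
    in z, minus a rapidly converging cosh-correction that enforces
    v = 0 on the side walls y = 0 and y = Ly."""
    v = 0.5 * f_over_eta * z * (Lz - z)
    for n in range(1, 2 * n_terms, 2):             # odd n only
        k = n * np.pi / Lz
        v -= (4.0 * f_over_eta * Lz**2 / (np.pi**3 * n**3)
              * np.cosh(k * (y - 0.5 * Ly)) / np.cosh(0.5 * k * Ly)
              * np.sin(k * z))
    return v

# Wall shear rate at the channel center (y = Ly/2, z -> 0), obtained by
# a small finite difference and compared with the parabolic value:
Ly, Lz, f_over_eta = 10.0, 1.0, 1.0                # aspect ratio 10, scaled units
dz = 1e-8
shear_duct = duct_velocity(0.5 * Ly, dz, Ly, Lz, f_over_eta) / dz
shear_parabolic = 0.5 * f_over_eta * Lz
print(shear_duct / shear_parabolic)                # deviates from 1 only negligibly
\end{verbatim}

For an aspect ratio of 10, the correction to the wall shear rate at the channel center is exponentially small (of sech type), confirming that the parabolic fit is essentially exact there.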

We start by considering the Stokes equation

\[
\eta \left( \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) v(y,z) = -f
\tag{42}
\]

in a rectangular channel with dimensions $L_y \times L_z$ in the $yz$-plane, as in the experiment. Here, $\eta$ is the viscosity of the liquid and $f$ is the driving force density, or pressure gradient, acting on the liquid in the $x$-direction. We assume that all surfaces have the same slip length.

For the case of a no-slip boundary condition, the solution has been given in the textbook of Spurk and Aksel Spu06 (), however in a form that does not explicitly spell out the symmetry under exchange of $y$ and $z$. Here we give the solution in a form that exhibits this symmetry, and generalize it to the case of a nonvanishing slip length .

Using the methods and notation of quantum mechanics, and allowing for a minor amount of numerics to evaluate a series, the solution is simple and straightforward. We identify a function $\psi(y,z)$ with a vector $| \psi \rangle$ in a Hilbert space, and define the scalar product as

\[
\langle \phi | \psi \rangle = \int_0^{L_y} \! dy \int_0^{L_z} \! dz \, \phi^\ast(y,z) \, \psi(y,z) .
\tag{43}
\]

Defining a “Hamilton operator” via

\[
\hat{H} = -\frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2} ,
\tag{44}
\]

the Stokes equation is written as

\[
\hat{H} \, | v \rangle = \frac{f}{\eta} \, | 1 \rangle ,
\tag{45}
\]

where $| 1 \rangle$ denotes the constant function with value unity.
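To make the solution strategy explicit (a standard spectral identity, supplied here for orientation rather than taken verbatim from the original derivation): if $\hat{H} \, | \psi_n \rangle = \lambda_n \, | \psi_n \rangle$ for an orthonormal set of eigenfunctions compatible with the boundary conditions, the formal solution of Eq. (45) reads

\[
| v \rangle = \frac{f}{\eta} \, \hat{H}^{-1} | 1 \rangle
= \frac{f}{\eta} \sum_n \frac{\langle \psi_n | 1 \rangle}{\lambda_n} \, | \psi_n \rangle ,
\]

so that the remaining task is to find the eigenfunctions and eigenvalues, and to evaluate the resulting series numerically.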

Obviously, the functions

\[
\phi_k(y) = \cos\!\left[ k \left( y - \frac{L_y}{2} \right) \right]
\tag{46}
\]

with $k > 0$, and

\[
\chi_q(z) = \cos\!\left[ q \left( z - \frac{L_z}{2} \right) \right]
\tag{47}
\]