Automated tracking of colloidal clusters with sub-pixel accuracy and precision


Abstract

Quantitative tracking of features from video images is a basic technique employed in many areas of science. Here, we present a method for the tracking of features that partially overlap, in order to be able to track so-called colloidal molecules. Our approach implements two improvements into existing particle tracking algorithms. Firstly, we use the history of previously identified feature locations to successfully find their positions in consecutive frames. Secondly, we present a framework for non-linear least-squares fitting to summed radial model functions and analyze the accuracy (bias) and precision (random error) of the method on artificial data. We find that our tracking algorithm correctly identifies overlapping features with an accuracy below 0.2% of the feature radius and a precision of 0.1 to 0.01 pixels for a typical image of a colloidal cluster. Finally, we use our method to extract the three-dimensional diffusion tensor from the Brownian motion of colloidal dimers.


This document is a preprint version: for the (revised) accepted version, please refer to the publisher’s website via the following DOI: 10.1088/1361-648X/29/4/044001

1 Introduction

Extracting quantitative information about the position and motion of features in video images is often key to understanding fundamental problems in science. For example, the tracking of colloidal hard spheres in three-dimensional confocal images has provided important insights into phenomena such as melting, crystallization, and the glass transition [1, 2, 3, 4, 5]. Biophysical experiments such as the investigation of cell mechanics by microrheology [6, 7] or the measurement of single biomolecule mechanics using optical or magnetic tweezers [8] rely on the precise positional measurement of single colloidal particles. Moreover, the tracking of single proteins in live cells provided a powerful tool for understanding biological processes [9, 10], and eventually led to the development of super-resolution microscopy techniques such as PALM and STORM [11, 12]. Crucial for these studies is a method to extract trajectories of features from video images, which has been described extensively in colloidal science [13, 14] as well as in single molecule tracking [15, 16, 17, 18].

Most single particle tracking algorithms have been designed for spherical features, as it is the most common type of signal. Recent developments in colloidal synthesis [19, 20, 21] provide means to assemble spheres in so-called colloidal molecules. Single particle tracking of these clusters of spheres will provide insights into the role of anisotropy in for instance crystallization and diffusion [22, 23, 24]. As the basic building blocks of these studies contain closely spaced particles, a robust automated method is required to perform accurate particle tracking on partially overlapping features.

Automated methods for single-particle tracking follow roughly the following pattern: an image with features of interest is first preprocessed, then single features are identified in a process called “segmentation”, these feature coordinates are refined to sub-pixel accuracy, and finally the features are linked to the features in the previous image. Iteration of this algorithm over a sequence of images results in particle trajectories that can be used for further analysis. Although this method has proven itself as a robust and accurate method [25, 26], issues arise when features become so closely spaced that their signals overlap. This essentially limits studies to dilute systems, repelling particles, or model systems with very specific characteristics such as index-matched and core-shell fluorescent particles [27, 28].

In particular, overlapping feature signals give rise to two complications: firstly, the segmentation step regularly recognizes two closely spaced features as one feature due to the overlap of signals. In order to identify the trajectories of closely spaced features completely, tedious frame-by-frame manual corrections are necessary, prohibiting the analysis of large data sets. In super-resolution microscopy methods, reported approaches to solve this issue are repeated subtraction of point-spread functions of detected features [29], or advanced statistical models classifying merge and split events [30]. Notably, these tracking methods do not use all the available information: as the feature locations are known in the previous frame, the segmentation of the image may be enhanced using the projected feature locations. Here we will present a fast and simple method for image segmentation that makes use of this history of the feature locations. We will test this method on artificial images and experimental data of colloidal dimers.

A second issue that arises when two feature signals overlap is that their refined coordinates will underestimate the separation distance. Especially the commonly employed center-of-mass centroiding suffers from this systematic "overlap bias", leading to an apparent attraction between colloidal particles [26, 31]. For fluorescence images, this issue can be addressed by least-squares fitting to a sum of Gaussians, which has been reported as a way to measure the distance between overlapping diffraction-limited features [32, 33]. Here, we will apply this method to images with features that are not diffraction limited. We conduct systematic tests on the accuracy (bias) and precision (random error) of the obtained feature positions.

To demonstrate the new automated segmentation and refinement methods, we will apply them to three-dimensional confocal images of a diffusing colloidal cluster consisting of two spheres and use the obtained trajectories to extract its diffusion tensor.

2 Method

2.1 Segmentation

As our algorithm for single particle tracking is based on the widely employed algorithm by Crocker and Grier [13], we will first introduce their algorithm and call it “CG-algorithm”. Throughout this work a Python implementation of this algorithm, Trackpy [34], was used for comparison. The CG-algorithm consists of four subsequent steps: preprocessing, feature segmentation, refinement, and linking. See Figure 1(a) for a schematic overview.

Figure 1: Schematic of the particle tracking of a single frame in (a) the Crocker-Grier algorithm and (b) the new algorithm. In the Crocker-Grier algorithm (a) the image is preprocessed and segmented. From the segments and the preprocessed image, a refinement step is performed. Finally, the coordinates are linked to the coordinates in the previous frame. In our new algorithm (b) the image is preprocessed and segmented, making use of the knowledge of the previous coordinates. The linked segments are refined afterwards.

The preprocessing consists of noise reduction by convolution with a Gaussian kernel of 1 px and background reduction by subtracting a rolling average of the image, computed with a kernel size chosen just larger than the feature radius. The subsequent segmentation step finds pixels that are above a given relative intensity threshold and are local maxima within a certain radius, which is the minimum allowed separation between particles. After the refinement step (see next section), the linking connects the features in each frame with those in the previous frame by minimizing the total displacement between the frames. Between two frames, particles are allowed to move up to a maximum search distance.
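The preprocessing and candidate-finding steps described above can be sketched as follows using SciPy. The function names and the default kernel sizes and threshold are illustrative choices of our own, not the exact values used in Trackpy:

```python
import numpy as np
from scipy import ndimage

def preprocess(image, noise_size=1.0, background_size=11):
    """Noise reduction by a small Gaussian blur, followed by subtraction of
    a rolling (boxcar) average to remove the slowly varying background."""
    smoothed = ndimage.gaussian_filter(image.astype(float), noise_size)
    background = ndimage.uniform_filter(smoothed, background_size)
    return np.clip(smoothed - background, 0, None)

def local_maxima(image, separation=5, threshold=0.5):
    """Candidate features: pixels above a relative intensity threshold that
    are maximal within a window set by the minimum particle separation."""
    dilated = ndimage.grey_dilation(image, size=2 * separation + 1)
    mask = (image == dilated) & (image > threshold * image.max())
    return np.argwhere(mask)
```

The gray dilation replaces each pixel by the maximum in its neighborhood, so a pixel that equals its own dilation is a local maximum within the separation window.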

In this process, each frame is treated individually: only during the final step (linking), features are connected into trajectories. We rearranged this process such that the information about the particle locations in the previous frame is used already in the segmentation. This allows us to project the expected feature locations in consecutive frames and therefore increase the success rate of segmentation. See Figure 1(b) for a schematic overview. We describe the new segmentation algorithm here using a minimal example of two closely spaced features in two subsequent frames, which can be generalized to an arbitrary number of features in any number of frames. See Figures 2(a)-(c).

We will assume that feature finding and refinement were performed successfully on the previous frame (Figure 2(d)). The current frame is first subjected to a gray dilation and thresholding step, just as in the CG-algorithm. Because the features are closely spaced in frame 2, this leads to segmentation into only a single feature (Figure 2(e)).

Figure 2: Artificial example to illustrate the integrated segmentation and linking step. In (a) and (b) two subsequent computer-generated frames are shown and in (c) the corresponding true feature locations, with the frame 1 features as red circles and the frame 2 features as blue crosses. Links are indicated by red arrows. In (d) frame 1 is shown again, overlaid with its feature coordinates as red circles, and in (e) the result of the initial feature finding is indicated by a blue cross on top of frame 2. (f) The subnet is formed by the linking candidates. Additionally, the green dashed line denotes a distance between features that is less than the maximum search distance; therefore, these features could belong to a single subnet via a missing feature. (g) Subsequently, a region is defined up to the maximum search distance from the features in frame 1 (dashed yellow line), which is used to (h) mask frame 2. In this step, all features that were already found are also masked up to the minimum separation distance, which enables the detection of the second feature, which is closer than the maximum search distance to the features in frame 1 and farther than the minimum separation from other features in frame 2. The newly found feature is then added to the subnet so that the linking can be completed (i).

Then a part of the linking step is executed: features are divided into so-called subnetworks. This is a necessary step in the CG algorithm to break the combinatorially large problem of linking two sets of features into smaller parts. First, linking candidates are identified using a kd-tree [34, 35]. Linking candidates for features in frame 1 are features in frame 2 that are displaced by at most the maximum search distance, and vice versa. Then subnetworks are created such that all features that share linking candidates are in the same subnetwork. For a sufficiently large search distance, all features in Figure 2(f) belong to the same subnet: the feature in frame 2 is a linking candidate for both features in frame 1.

From the subnetworks, the number and estimated location of missing features are obtained "for free": if a subnetwork contains fewer particles in frame 2 than in frame 1, there must be missing features in its vicinity. To account for the possibility that a missing feature could connect two subnetworks, we combine subnetworks that are less than the maximum search distance apart in frame 1 whenever missing features are being located.

In order to estimate the location of the missing features, a region up to the maximum search distance around the features in the previous frame is masked in the current frame (dashed yellow line in Figures 2(g)-(h)). Subsequently, all features that were already found are masked up to a radius equal to the minimum separation (Figure 2(f)). This enables us to find local maxima that are farther than the minimum separation from all other features in the current frame and closer than the search distance to the features in the previous frame. From the masked subimage, local maxima are again obtained through gray dilation and thresholding. After this, feature selection filters can be inserted in order to select appropriate features, for example with a minimum amount of integrated intensity. Then the new feature is added to the subnetworks and linking is completed by minimizing the total feature displacement (Figure 2(i)).
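The mask-based recovery of a missing feature can be sketched as follows. This is a simplified two-dimensional version with names of our own; the actual implementation operates per subnetwork rather than on the whole frame:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def recover_missing(frame, prev_coords, curr_coords,
                    search_range, separation, threshold=0.5):
    """Look for extra local maxima that lie close to the previous-frame
    positions (within search_range) but away from already-found features
    (farther than separation), mimicking the double masking step."""
    yy, xx = np.indices(frame.shape)
    pts = np.column_stack([yy.ravel(), xx.ravel()])
    # keep pixels near the previous positions and far from found features
    near_prev = cKDTree(prev_coords).query(pts, k=1)[0] <= search_range
    far_curr = cKDTree(curr_coords).query(pts, k=1)[0] > separation
    keep = (near_prev & far_curr).reshape(frame.shape)
    masked = np.where(keep, frame, 0.0)
    # local maxima of the masked subimage via gray dilation and thresholding
    dilated = ndimage.grey_dilation(masked, size=2 * int(separation) + 1)
    return np.argwhere((masked == dilated) & (masked > threshold * frame.max()))
```

The recovered maxima would then be appended to the subnetwork before the displacement minimization completes the links.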

By performing the linking during the segmentation process, additional information is taken into account: not only the present image is used to identify the features, but also the coordinates from the previous frame. Therefore, we expect a higher number of correctly identified feature positions for the combined linking and segmentation method. Because all the computationally intensive tasks were already present in the original algorithm, the execution time of our new algorithm was observed to be similar.

2.2 Refinement

Subpixel accuracy and precision are key features of single particle tracking. Although the size of a single pixel is diffraction limited to approximately 200 nm, localization precisions down to 1 nm have been reported [17, 25]. These subpixel feature locations are obtained by starting from an initial guess supplied by the segmentation step, which is then improved in the so-called "refinement" step. Here, we will describe a general-purpose framework for refinement of overlapping features using non-linear least-squares fitting to summed radial model functions.

We will compare this method to the center-of-mass centroiding that is present in the CG algorithm [13]. For radially symmetric features, the feature position is given by its center of mass. Due to its simplicity and computational efficiency, this method is a preferred choice for many tracking applications. In the center-of-mass refinement, the center coordinate of the feature is obtained iteratively from the image intensity $I(\mathbf{r})$ within a mask of radius $r_{\text{mask}}$ around the current estimate $\mathbf{x}_n$, such that:

$\mathbf{x}_{n+1} = \dfrac{\sum_{|\mathbf{r}-\mathbf{x}_n| \le r_{\text{mask}}} \mathbf{r}\, I(\mathbf{r})}{\sum_{|\mathbf{r}-\mathbf{x}_n| \le r_{\text{mask}}} I(\mathbf{r})}$   (1)
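In code, the iterative center-of-mass refinement can be sketched as below; the mask radius, iteration cap, and convergence tolerance are illustrative choices of our own:

```python
import numpy as np

def refine_com(image, guess, mask_radius, max_iter=10):
    """Iterative center-of-mass refinement: compute the intensity-weighted
    centroid inside a circular mask around the current estimate, move the
    mask to that centroid, and repeat until it no longer shifts."""
    pos = np.asarray(guess, dtype=float)
    yy, xx = np.indices(image.shape)
    for _ in range(max_iter):
        mask = (yy - pos[0]) ** 2 + (xx - pos[1]) ** 2 <= mask_radius ** 2
        weights = image * mask
        total = weights.sum()
        new = np.array([(yy * weights).sum(), (xx * weights).sum()]) / total
        if np.allclose(new, pos, atol=1e-4):
            break
        pos = new
    return pos
```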

Non-linear least squares fitting to a model function is conceptually different, since it goes beyond assuming only feature symmetry and requires knowledge on the feature shape. If image noise is uncorrelated and normal distributed, this method gives the maximum likelihood estimate of the true centroid. Although this assumption is not strictly valid [14, 17], the precision of this method is generally higher than the center-of-mass method when the image is subject to noise [25]. By simultaneously fitting a sum of multiple model functions, this method can be extended to tracking multiple overlapping features [32, 33]. We employ this approach here and formulate the feature model function in the following way:

$P(\mathbf{x}) = A\, f\big(r(\mathbf{x});\, \boldsymbol{p}\big)$   (2)
$r(\mathbf{x}) = \left|\, (\mathbf{x} - \mathbf{c}) / \boldsymbol{\sigma} \,\right|$   (3)

Here, $\mathbf{x}$ is the image coordinate, $\mathbf{c}$ the feature center, $A$ its intensity, $\boldsymbol{\sigma}$ its radius, and $f$ a model function of a single feature, which is a function of the reduced radial coordinate $r$ and a list of parameters $\boldsymbol{p}$. The reduced radial coordinate is defined for any number of dimensions and allows for anisotropic pixel sizes through the vector nature of $\boldsymbol{\sigma}$. The feature model function is defined only up to a cut-off distance from the feature center. It is in principle possible to use any function for $f$ and apply it to images with different signal intensities and physical pixel sizes through the separate parameters $A$ and $\boldsymbol{\sigma}$. In this article, we limit ourselves to the Gaussian function, so that there are no extra parameters $\boldsymbol{p}$. We keep $\boldsymbol{\sigma}$ constant and allow $\mathbf{c}$ and $A$ to be optimized.

The model image is constructed by the summation of the individual features, which are each only defined within a region bounded by the cut-off radius. This additivity is a good assumption for fluorescence microscopy techniques [26]. We add a fixed background signal $B$, which we keep constant within each cluster of overlapping features, but we allow it to vary between clusters to account for spatially different background values. For an image or video consisting of $N$ features, the following "objective function" is minimized:

$\chi^2 = \sum_{\mathbf{x}} \Big( I(\mathbf{x}) - B - \sum_{i=1}^{N} P_i(\mathbf{x}) \Big)^2$   (4)

The feature model function $P_i$ is defined by Eq. 2. If all features are separated by more than the cut-off distance, this minimization can be separated into single-feature problems. However, when features have overlapping regions, their objective functions cannot be separated and have to be minimized simultaneously. We separate the full-image objective function (Eq. 4) into groups ("clusters") using the kd-tree algorithm [35]. Each of the resulting cluster objective functions is minimized using the Sequential Least Squares Programming (SLSQP) algorithm [36] interfaced through the open-source Python package SciPy [37]. The SLSQP algorithm allows for additional constraints and bounds on the parameters. We use bounds to suppress diverging solutions and constraints to, for example, fix the distance between two features to a known value. The optimizer is supplied with an analytic Jacobian of Eq. 4 to increase performance.
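A minimal sketch of this refinement for a cluster of two overlapping Gaussian features, using SciPy's SLSQP interface, might look as follows. The parameter layout, bound widths, and the numerical (rather than analytic) Jacobian are simplifications of our own:

```python
import numpy as np
from scipy.optimize import minimize

def fit_gaussian_pair(image, guesses, sigma, background=0.0):
    """Refine two overlapping features by least-squares fitting a sum of two
    Gaussians with SLSQP. Free parameters per feature: center (y, x) and
    intensity A; the radius sigma and the background are kept fixed here."""
    yy, xx = np.indices(image.shape)

    def model(p):
        out = np.full(image.shape, float(background))
        for i in range(2):
            y, x, a = p[3 * i:3 * i + 3]
            out = out + a * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / sigma ** 2)
        return out

    def objective(p):
        return ((image - model(p)) ** 2).sum()

    p0, bounds = [], []
    for (gy, gx) in guesses:
        p0 += [gy, gx, image.max()]
        # bound the center displacement to suppress diverging solutions
        bounds += [(gy - 3, gy + 3), (gx - 3, gx + 3), (0, None)]
    res = minimize(objective, np.array(p0, float), method='SLSQP', bounds=bounds)
    return res.x.reshape(2, 3)
```

The bounds on the center coordinates play the role described in the text: they keep the optimizer from wandering away from the segmentation guess when the signal is weak.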

The framework of feature refinement described here in principle allows refinement of any feature that can be described by a radial function. Although less computationally efficient than the conventional refinement by center of mass, it can take feature overlap into account and additionally allows for constraints on parameters.

2.3 Testing methods

The above described methods for single particle tracking were tested quantitatively on both artificial and experimental data. Artificial images were generated by evaluating the following analytical functions for disc- and ring-shaped features on an integer grid:

$f_{\text{disc}}(r) = \begin{cases} 1 & \text{for } r \le d \\ \exp\!\left[ -\left( \frac{r - d}{1 - d} \right)^{2} \right] & \text{for } r > d \end{cases}$   (5)
$f_{\text{ring}}(r) = \exp\!\left[ -\left( \frac{r - 1}{t} \right)^{2} \right]$   (6)

Here, the reduced radial coordinate $r$ is given by Eq. 3, $d$ is the solid disc radius in units of the feature radius, and $t$ is the ring thickness in units of the feature radius. The true feature location was generated at a random subpixel location. Unless stated otherwise, default values of the feature radius, signal-to-noise ratio, disc radius, and ring thickness were used. See Figure 3 for two example model features generated with these parameters. Images were discretized to integer values, and a Poisson-distributed, signal-independent background noise with a fixed mean intensity was added to each image. The signal-to-noise ratio is defined as the ratio between the feature signal and the background noise. Each refinement test was performed on 100 images having two overlapping features with a given center-to-center distance and random orientations. In order to ensure that the choice of initial coordinates did not affect the refined coordinate, we generated the initial coordinates randomly within 2 px of the actual coordinate.
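The generation of such test images can be sketched as below. All default values here (feature radius, amplitude, noise level) are illustrative choices, not the parameters used for the tests in the text:

```python
import numpy as np

def make_test_image(shape, centers, sigma=4.0, amplitude=160.0,
                    noise_mean=10.0, seed=0):
    """Artificial image: Gaussian features evaluated on an integer grid at
    subpixel centers, discretized to integers, with Poisson-distributed,
    signal-independent background noise added."""
    rng = np.random.default_rng(seed)
    grid = np.indices(shape)
    image = np.zeros(shape, dtype=float)
    for center in centers:
        r2 = sum((g - c) ** 2 for g, c in zip(grid, center))
        image += amplitude * np.exp(-r2 / sigma ** 2)
    image = np.floor(image)                  # discretize to integer values
    image += rng.poisson(noise_mean, shape)  # background noise
    return image
```

Because the loop runs over the stacked index grid, the same function works for two- and three-dimensional test images.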

Figure 3: Radial functions of (a) a disc-shaped and (b) a ring-shaped model feature, generated with Equations 5 and 6 using the default parameters. The insets show the corresponding single-feature images. Poisson-distributed noise was added to each feature.

Experimental measurements on colloidal particles were performed with an inverted Nikon TiE microscope equipped with a Nikon A1R resonant confocal scanhead scanning lines at 15 kHz. For the two-dimensional diffusion measurements, we used a 20x objective (NA = 0.75), resulting in a physical pixel size of 0.628 µm. For the three-dimensional measurements, a 100x (NA = 1.45) oil-immersion objective was used, resulting in an XY pixel size of 0.166 µm. A calibrated MCL NanoDrive stage enabled fast Z-stack acquisition with a Z step size of 0.300 µm. As the refractive index of the objective immersion liquid is closely matched to that of the sample solvent, this step size equals the physical pixel size in the Z direction within an error of 5% [38]. We acquired 5.13 three-dimensional frames per second with a size of 512 x 64 x 35 pixels (x-y-z).

For two-dimensional diffusion measurements we employed samples consisting of partially clustered TPM (3-(trimethoxysilyl)propyl methacrylate) colloids with a diameter of 2.05 µm containing a FITC (fluorescein isothiocyanate) fluorescent marker, as described in [39]. Particles were confined to the microscope coverslip through sedimentation.

The samples for three-dimensional measurements consisted of core-shell RITC (rhodamine B isothiocyanate) labeled PMMA (poly(methyl methacrylate)) colloidal clusters that were synthesized via an emulsification-evaporation method according to [19]. The average distance between the two constituent spheres of radius 1.87 ± 0.06 µm in a cluster is 1.58 ± 0.12 µm, as determined by Scanning Electron Microscopy using an FEI NanoSEM at 15 kV. The clusters were both index and density matched using a mixture of cyclohexyl bromide and cis-decalin in a weight ratio of 72:28 and imaged in a rectangular capillary, similar to the experiments described in [40].

The Python code on which this work is based is available online and will be integrated into a future version of Trackpy [34], which is available through Conda as well as through the Python Package Index. All tests described in this work are implemented as "unittests" that ensure the correct functioning of the code on each update.

3 Results and Discussion

3.1 Segmentation and Linking

As described in the method section, the integrated segmentation and linking step extends the frame-by-frame segmentation used in the CG algorithm in such a way that it makes use of the history of feature locations. In order to test the effect of our extension, we compared the segmentation in the CG algorithm with our integrated segmentation and linking on experimental video images. The video images contain a single diffusing colloidal dimer, which consists of two permanently connected spheres. The identified trajectories for 800 frames are displayed in Figure 4. Clearly, by taking the history of the feature positions into account, the dimer positions can be tracked significantly better: with the new algorithm, two features were detected in all of the 800 frames, while with the CG algorithm, only one third of the frames had two features, resulting in short disconnected trajectories that appear to hop between the two feature locations.

Figure 4: Segmentation of an experimental video image of a colloidal dimer. In (a) the x and y coordinates obtained using the CG algorithm are shown. The corresponding histogram of features per frame is displayed in (c). As roughly two thirds of the frames had only one feature, trajectories cannot be identified. In (b) the trajectories were obtained using the integrated segmentation and linking algorithm. As all frames had two features (see the histogram in (d)), trajectories were identified completely. In these plots, coordinates were refined using least-squares fitting to a sum of Gaussians.

The extension of the segmentation described here significantly increases the number of correctly segmented features. It has to be noted, though, that the segmentation of the first frame is not enhanced by our method because of the lack of information on previous feature positions. Generally, there is a start-up period of a few frames in which the number of correctly segmented features increases. These potentially incorrectly tracked frames can be ignored for most tracking applications. For cases where the first frames are relevant, the algorithm could be run backwards from the first correctly segmented frame.

3.2 Refinement

After the segmentation step, the subpixel position is obtained in the refinement step. In this section we analyze the effect of signal overlap on the accuracy and precision of the refined feature coordinates, using both center-of-mass centroiding and the least-squares fitting to sums of model functions described here. We define the accuracy or bias as the mean difference between the measured and the true value. The precision is the random deviation around the measured average, which we calculate as the root-mean-square deviation from the measured average.
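These two definitions translate directly into code; a minimal sketch:

```python
import numpy as np

def bias_and_precision(measured, true):
    """Accuracy (bias) = mean deviation of the measured values from the
    true value; precision = root-mean-square deviation from the measured
    average, i.e. the random error around the mean."""
    dev = np.asarray(measured, dtype=float) - np.asarray(true, dtype=float)
    bias = dev.mean(axis=0)
    precision = np.sqrt(((dev - bias) ** 2).mean(axis=0))
    return bias, precision
```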

First, we took two Gaussian model features (Eq. 5 with vanishing disc radius) and varied their spacing. See Figure 5. The deviations of the obtained positions were measured parallel and perpendicular to the line connecting the two actual feature positions. For both refinement methods, we found no bias in the perpendicular coordinate. For the parallel coordinate, however, we found a clear difference between the two refinement methods: in center-of-mass centroiding, the parallel coordinate was negatively biased because of feature overlap, meaning that the distance between the two overlapping features was systematically underestimated. For the least-squares fitting to sums of model functions, however, the bias stayed within 0.1 px.

Figure 5: The effect of feature overlap on the bias in the parallel coordinate. The bias is negative when features appear too close together. In both graphs, the bias in the parallel coordinate is shown as a function of the center-to-center distance, for two Gaussian features at the default feature radius and signal-to-noise ratio. The bias for the center-of-mass (CoM) refinement is shown for mask radii from 6 to 10 px, both with rolling average background subtraction (denoted with dots) and without (denoted with crosses). The bias for the least-squares fitting to a sum of Gaussians is denoted with tilted crosses. The dashed black line denotes the bias at which features are detected precisely in between the two actual feature positions.

This negative bias for center-of-mass centroiding has been described before [31, 41] and is a logical consequence of the method: if two features overlap, each of the features obtains extra intensity on the inside of the dimer. This bias increases in magnitude with decreasing particle separation, until both features are detected precisely in between the two actual positions. The bias also increases with increasing mask radius, as shown in Figure 5.

Apart from this negative bias, we observed a longer-ranged positive bias. This effect has its origin in the preprocessing. For center-of-mass centroiding, it is vital that the constant image background is subtracted. This is conventionally achieved by subtracting a rolling average of the image, computed with a box size somewhat larger than the typical feature diameter [13]. Although this method has proven to be robust for background subtraction, it also introduces a skew in the feature signals when features are spaced more closely than the box size (see Figure S1). From this we conclude that it is important not to use rolling average background subtraction in order to accurately track features that are spaced more closely than the box size. If the background subtraction was omitted, the positive bias was indeed not observed, as can be seen in Figure 5. In order to account for the background signal in the least-squares fitting algorithm, we instead introduced a background variable in the objective function (Eq. 4).

Figure 6: Tracking errors of artificial overlapping features at a separation distance of 8 px. The top row presents the computer-generated model features; the middle and bottom rows show the mean deviation (bias) and the root of the central variance of the deviations (precision), respectively. The data are separated into the error parallel (blue straight crosses) and perpendicular (red tilted crosses) to the line connecting the true feature positions. The effect of (a) the signal-to-noise (S/N) ratio and (b) the feature radius is plotted for Gaussian-shaped model features. In (c) the tracking errors of overlapping solid discs with varying relative disc size are shown (see Eq. 5). For disc-shaped features at fixed disc size, the effect of S/N ratio and feature size is plotted in (d) and (e), respectively. Finally, in (f), the effect of the relative ring thickness of a ring-shaped feature is shown (see Eq. 6). Unless stated otherwise, the default feature size and signal-to-noise ratio were employed.

Secondly, we analyzed the bias and precision of overlapping Gaussian features, disc-shaped features, and ring-shaped features while keeping the particle separation constant at 8 px. See Figure 6. In all cases, we observed no bias in the perpendicular coordinate, as expected from the symmetry of the dimer. Also, the precision in the perpendicular direction was in close agreement with the precision in the parallel direction.

In Figure 6(a), it can be seen that the signal-to-noise (S/N) ratio did not influence the bias for Gaussian-shaped model features, while the precision improved with increasing S/N ratio. At high S/N ratios, the least-squares optimizer was always able to find a minimum; at low S/N ratios, the optimizer sometimes diverged and yielded random results. This failure of least-squares fitting at low S/N ratios was already reported by Cheezum et al. [25]. As the SLSQP minimization allows for bounds on the feature parameters, we were able to suppress the diverging solutions by limiting the displacements of the center coordinates to the mask size. This enhancement enables us to use the least-squares method also at low S/N ratios.

In Figure 6(b), it is visible that the bias in the parallel coordinate decreased with increasing feature size. Although the bias was so small that we can still speak of "subpixel accuracy", the bias of approximately -0.1 px for typical feature sizes might be problematic for super-resolution techniques in which sums of Gaussians are used as model functions for overlapping point spread functions [32, 33]. As the magnitude of the bias increased with decreasing feature size and did not depend on the S/N ratio, we conclude that the bias is caused by the discretization of the feature shape, which depends on the discretization model used.

As colloidal molecules are often larger than the diffraction limit, their feature shape is typically not Gaussian. Here we assess the effect of non-Gaussian shapes on the tracking bias and precision using a disc-shaped model feature as described by Eq. 5. See Figures 6(c)-(e). The observed precision in the refined position of the overlapping discs was surprisingly high, and the precision even improved slightly with disc size up to an intermediate value (Figure 6(c)). This was probably caused by the larger integrated signal intensity of the disc-shaped feature, which increased the S/N ratio integrated over the feature. For larger disc sizes, the precision degraded due to the absence of smooth feature edges. The bias was lowest for small feature sizes (Figure 6(e)), since the disc-shaped feature is then almost equal to the Gaussian-shaped feature. Still, the magnitude of the bias did not exceed 0.2 px for any of the tested disc-shaped features.

Finally, in Figure 6(f), we tested the least-squares fitting of Gaussians on ring-shaped model features (Eq. 6), such as may be obtained for particles with fluorescent markers on their surface only. Although a Gaussian is clearly a poor model function for these ring-shaped model features, it still performed remarkably well, with absolute bias and precision both below 0.1 px above a minimum ring thickness, probably because the tails of the features are still Gaussian-shaped. For thin rings below this thickness, the least-squares optimization diverged. For these feature shapes, a more appropriate model function should be used.

To summarize, we observed that least-squares fitting to sums of Gaussians is able to accurately refine the locations of overlapping Gaussian-shaped features. The negative bias of multiple pixels present in center-of-mass centroiding is reduced to less than 0.1 px if the feature radius is above 3 px. This makes fitting to sums of Gaussians an appropriate method for refining overlapping features with typical radii around 4 px and S/N ratios above 2. Although the Gaussian is not a perfect model for disc-shaped or ring-shaped features, the bias and precision were very similar due to the limited pixel size for typical images of overlapping colloidal particles, given that the feature edges are smooth.

For overlapping features that are not well modeled by a Gaussian and that have a radius larger than 5 px, different model functions should be used. As described by Jenkins et al. [26], it is possible to experimentally obtain an average feature shape and successfully use this for feature refinement of single features. For an extension to multiple overlapping features, we found that this technique is computationally too demanding, as there are no efficient optimizers for functions with discretized parameters. In order to use this technique for overlapping features, the average feature shape should be described with a continuous function, which can be used directly in our framework for least-squares minimization.

Although an accuracy of 0.1 px is sufficient for many applications, a further improvement in accuracy could be reached by maximizing the log-likelihood corresponding to Eq. 4 instead of using the direct least-squares minimization. For single features, a maximum likelihood estimator has been proven to give a more precise estimate of the true feature positions [18, 42].

3.3 Constrained least-squares

If additional information about the tracked features is available, constraints can be applied to increase tracking accuracy. In our framework for least-squares optimization of summed radial model functions, any combination of parameters in the image model function (Eq. 2) can be constrained by equations of the following form:

f(P) = 0 (7)

Here, f is a function and P is an array consisting of all parameters of the features in a cluster of size n. We demonstrate the use of constraints using colloidal dimers with a known distance between the two constituent spheres. Using our algorithm, we automatically tracked 1006 out of 1170 recorded frames. The constraint was chosen such that the distance between the constituent spheres equals the average distance measured on SEM images (\SI1.58\um). The resulting tracked three-dimensional images can be seen in Supporting Video S1.
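A fixed-separation constraint of this kind can be sketched with SciPy's SLSQP optimizer, which the references also mention as the basis of sequential quadratic programming. This is a two-dimensional, two-feature toy version, not the three-dimensional implementation used for the dimer data; amplitudes and widths are held fixed for brevity and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def dimer_model(params, xx, yy):
    """Sum of two isotropic 2D Gaussians; params = (x0, y0, x1, y1).
    Amplitude and width are fixed here to keep the sketch short."""
    a, sigma = 1.0, 2.0
    x0, y0, x1, y1 = params
    return (a * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
            + a * np.exp(-((xx - x1) ** 2 + (yy - y1) ** 2) / (2 * sigma ** 2)))

def constrained_refine(image, guess, distance):
    """Refine positions subject to the constraint f(P) = |r1 - r0| - d = 0."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    cost = lambda p: np.sum((dimer_model(p, xx, yy) - image) ** 2)
    constraint = {'type': 'eq',
                  'fun': lambda p: np.hypot(p[2] - p[0], p[3] - p[1]) - distance}
    return minimize(cost, guess, method='SLSQP', constraints=[constraint]).x

# Synthetic dimer with a known 5 px center-to-center separation
yy, xx = np.mgrid[0:32, 0:32]
truth = np.array([13.5, 16.0, 18.5, 16.0])
image = dimer_model(truth, xx, yy)
refined = constrained_refine(image, [13.0, 15.5, 19.0, 16.5], distance=5.0)
```

The equality constraint removes one degree of freedom from the fit, so the refined separation matches the imposed distance to within the optimizer's tolerance.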

Figure 7: Images of colloidal dimers. The coordinate system corresponding to the diffusion tensor is shown in (a). The origin of the coordinate system lies in the point of highest symmetry. A typical three-dimensional confocal image that is used for the particle tracking is displayed in (b), next to a representative scanning electron micrograph of the employed colloidal dimers (c).

As the shape of a colloidal cluster is anisotropic, the short-term diffusion of such a particle is also anisotropic: for example, a dimer has a lower hydrodynamic friction when moving along its long (z) axis than when moving along its short (x or y) axis. In general, the dynamics of any Brownian object are described by a symmetric second-rank tensor of diffusion coefficients, consisting of 21 independent elements [40]. We chose the point of highest symmetry as the origin of the cluster-based coordinate system and aligned the z-axis with the long axis of the dimer, so that all off-diagonal terms in the diffusion tensor are zero (see Figure 7(a)). The computed diffusion tensors were averaged over lag times up to \SI0.6\second. The resulting diffusion tensor reflects the symmetry of the dimer and can be seen in Supporting Table S1.

Type         Axis   Diffusion coefficient
Translation  x, y   (61.2 ± 3.9) × 10⁻³ µm²/s
Translation  z      (65.2 ± 4.2) × 10⁻³ µm²/s
Rotation     x, y   (13.0 ± 1.1) × 10⁻³ s⁻¹
Table 1: Anisotropic diffusion coefficients of the colloidal dimer. The coordinate system is defined in Figure 7. The error denotes the \SI95\percent confidence interval estimated using a bootstrap algorithm.

In Table 1 we summarize the measured translational and rotational diffusion coefficients of the colloidal dimer. In line with previous results from holographic microscopy measurements [43], we observed that the translational diffusion coefficient along z is higher than the translational coefficients along x and y. These results illustrate that our new tracking algorithm is able to extract quantitative information from microscopy images of colloidal clusters, without the need for manual corrections.
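The route from trajectories to a diffusion coefficient with a bootstrap confidence interval can be sketched in one dimension. This is a simplified illustration of the general procedure (mean-squared displacement over lag time, resampled for the error bar), not the authors' full tensor analysis; the simulated diffusion coefficient and sampling interval are arbitrary choices of ours.

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate the 1D translational diffusion coefficient from a trajectory
    via the single-lag mean-squared displacement: D = <dx^2> / (2 dt)."""
    dx = np.diff(positions)
    return np.mean(dx ** 2) / (2 * dt)

def bootstrap_ci(positions, dt, n_boot=1000, alpha=0.05, seed=0):
    """95% confidence interval on D by resampling displacements
    with replacement."""
    rng = np.random.default_rng(seed)
    dx = np.diff(positions)
    estimates = [np.mean(rng.choice(dx, size=dx.size) ** 2) / (2 * dt)
                 for _ in range(n_boot)]
    return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Simulated 1D Brownian track with D = 0.06 um^2/s sampled every 0.05 s
rng = np.random.default_rng(42)
D_true, dt = 0.06, 0.05
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=20000)
track = np.cumsum(steps)
D_hat = diffusion_coefficient(track, dt)
lo, hi = bootstrap_ci(track, dt)
```

For the full anisotropic case, the same displacement statistics are accumulated in the body frame of the cluster for every pair of translational and rotational coordinates, yielding the tensor of Supporting Table S1.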

4 Conclusion

We have presented a new algorithm for single-particle tracking that enables automated tracking of overlapping features with high accuracy and precision. The algorithm is based on the well-known algorithm developed by Crocker and Grier [13] and implements two improvements. First, by exploiting the information obtained from the linking already in the segmentation stage, we were able to use the history of the feature positions to obtain a segmentation with significantly fewer mistakes. In a test on two-dimensional experimental data of dimers, all frames were segmented correctly, while the conventional algorithm correctly segmented only one third of the frames.

The second improvement enables sub-pixel accuracy. The conventional center-of-mass refinement is unable to find unbiased feature locations: signal overlap results in a negative bias if the feature separation distance is below the mask diameter, and the commonly used rolling average background subtraction imposes a positive bias already at separation distances below approximately 1.5 times the mask diameter. We reach sub-pixel accuracy and precision by least-squares fitting to sums of Gaussians. First, we tested Gaussian-shaped model features with varying signal-to-noise ratio and feature size and found that the obtained coordinates are biased less than \SI0.1px for a feature radius above \SI3px. The bias decreases with increasing feature size, implying that the pixel discretization is the cause.

Non-Gaussian features can also be tracked with surprising accuracy and precision using a Gaussian model function: for features with a radius of \SI4px, sub-pixel accuracy and precision were obtained even if \SI80\percent of the feature is a solid disc. For ring-shaped features, sub-pixel accuracy and precision were achieved for a ring thickness of more than \SI20\percent of the feature size. For feature radii larger than \SI5px, the bias increases above \SI0.2px, implying that more appropriate model functions should be used in order to obtain sub-pixel accuracy for overlapping features.

We demonstrated the use of constraints in least-squares fitting with experimental three-dimensional image sequences of colloidal dimers. Trajectories through \SI86\percent of all frames were obtained without any manual refinement. From these trajectories, the diffusion tensor was computed and found to accurately reflect the particle symmetry.

With the described method, two problems that are encountered when applying conventional tracking methods to overlapping features are solved. Firstly, the need for case-by-case meticulous optimization or manual repair of tracks is significantly reduced. Secondly, by employing least-squares fitting to summed Gaussians, we found that the bias of the center-to-center separation distance is \SI0.2px in the worst case, which clearly outperforms center-of-mass centroiding. Our method provides accurate automated tracking of videos containing overlapping features with minimal need for manual adjustments.

Acknowledgement

The authors would like to thank Giovanni Biondaro and Vera Meester for supplying the PMMA clusters for the three-dimensional diffusion measurements. This work was supported by the Netherlands Organisation for Scientific Research (NWO/OCW), as part of the Frontiers of Nanoscience program, and through VENI grant 680-47-431.

Supporting Material for “Automated tracking of colloidal clusters with sub-pixel accuracy and precision”

Authors: Casper van der Wel and Daniela J. Kraft

Figure S1: Illustration of the positive bias due to background subtraction. If the image background is nonzero (a), it can be subtracted using a rolling average, resulting in a perfectly black background (b). However, in the image cross sections (c) and (d), it can be seen that the rolling average also skews the feature shapes, which gives an outward-directed bias.
        x            y            z            θx           θy
x    61.6 ± 4.0   -0.9 ± 2.8   -0.4 ± 3.1    0.0 ± 1.3   -0.4 ± 1.3
y    -0.9 ± 2.8   60.8 ± 3.8   -0.7 ± 3.0   -0.4 ± 1.3   -0.2 ± 1.4
z    -0.4 ± 3.1   -0.7 ± 3.0   65.2 ± 4.2   -0.0 ± 1.3   -0.4 ± 1.4
θx    0.0 ± 1.3   -0.4 ± 1.3   -0.0 ± 1.3   12.5 ± 1.1   -0.2 ± 0.7
θy   -0.4 ± 1.3   -0.2 ± 1.4   -0.4 ± 1.4   -0.2 ± 0.7   13.4 ± 1.1
Table S1: Tensor of dimer diffusion coefficients; θx and θy denote rotation about the x- and y-axes. The translational coefficients are given in units of 10⁻³ µm²/s, the rotational coefficients in units of 10⁻³ s⁻¹, and the rotation-translation cross terms in units of 10⁻³ µm/s. Because rotation around the z-axis cannot be measured for a dimer, the corresponding elements are omitted. The error denotes the \SI95\percent confidence interval estimated using a bootstrap algorithm.
Still of Video S1: Three-dimensional video of a diffusing colloidal dimer as measured by confocal microscopy. The particle tracking is shown as an overlay on maximum intensity projections in three directions (upper left: xy, upper right: xz, lower left: yz). The rectangular pixels reflect the larger pixel size in the z direction. On the lower right, a three-dimensional plot shows the orientation of the dimer. The axes are in microns.

Footnotes

  1. https://github.com/caspervdw/clustertracking

References

  1. C. A. Murray and David G. Grier. Video microscopy of monodisperse colloidal systems. Annu. Rev. Phys. Chem., 47(1):421–462, 1996.
  2. E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, and D. A. Weitz. Three-dimensional direct imaging of structural relaxation near the colloidal glass transition. Science, 287(5453):627–631, 2000.
  3. W. K. Kegel and A. van Blaaderen. Direct observation of dynamical heterogeneities in colloidal hard-sphere suspensions. Science, 287:290–293, 2000.
  4. A. D. Dinsmore, E. R. Weeks, V. Prasad, A. C. Levitt, and D. A. Weitz. Three-dimensional confocal microscopy of colloids. App. Optics, 40(24):4152–9, 2001.
  5. G. Meng, J. Paulose, D. R. Nelson, and V. N Manoharan. Elastic instability of a crystal growing on a curved surface. Science, 343(6171):634–7, feb 2014.
  6. F. C. MacKintosh and C. F. Schmidt. Microrheology. Curr. Opin. Colloid In., 4(4):300–307, 1999.
  7. Y. Tseng, T. P. Kole, and D. Wirtz. Micromechanical mapping of live cells by multiple-particle-tracking microrheology. Biophys. J., 83(6):3162–3176, 2002.
  8. K. C. Neuman and A. Nagy. Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy. Nat. methods, 5(6):491–505, 2008.
  9. J. Gelles, B. J. Schnapp, and M. P. Sheetz. Tracking kinesin-driven movements with nanometre-scale precision. Nature, 331(6155):450–453, 1988.
  10. A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, and P. R. Selvin. Myosin V walks hand-over-hand: single fluorophore imaging with 1.5-nm localization. Science, 300(5628):2061–2065, 2003.
  11. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess. Imaging intracellular fluorescent proteins at nanometer resolution. Science, 313(2006):1642–1645, 2006.
  12. B. Huang, W. Wang, M. Bates, and X. Zhuang. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science, 319:810–813, 2007.
  13. J. C. Crocker and D. G. Grier. Methods of Digital Video Microscopy for Colloidal Studies. J. Colloid Interf. Sci., 179(1):298–310, 1996.
  14. T. Savin and P. S. Doyle. Static and dynamic errors in particle tracking microrheology. Biophys. J., 88(1):623–38, jan 2005.
  15. R. N. Ghosh and W. W. Webb. Automated detection and tracking of individual and clustered cell surface low density lipoprotein receptor molecules. Biophys. J., 66(5):1301–1318, 1994.
  16. M. J. Saxton and K. Jacobson. Single-particle tracking: applications to membrane dynamics. Annu. Rev. Bioph. Biom., 26:373–399, 1997.
  17. R. J. Ober, S. Ram, and E. S. Ward. Localization accuracy in single-molecule microscopy. Biophys. J., 86(2):1185–1200, 2004.
  18. C. S. Smith, N. Joseph, B. Rieger, and K. A. Lidke. Fast, single-molecule localization that achieves theoretically minimum uncertainty. Nat. Methods, 7(5):373–5, 2010.
  19. V. N. Manoharan, M. T. Elsesser, and D. J. Pine. Dense packing and symmetry in small clusters of microspheres. Science, 301:483–487, 2003.
  20. D. J. Kraft, J. Groenewold, and W. K. Kegel. Colloidal molecules with well-controlled bond angles. Soft Matter, 5(20):3823, 2009.
  21. V. Meester, R. W. Verweij, C. van der Wel, and D. J. Kraft. Colloidal recycling: reconfiguration of random aggregates into patchy particles. ACS Nano, 10:4322–4329, 2016.
  22. S. C. Glotzer and M. J. Solomon. Anisotropy of building blocks and their assembly into complex structures. Nat. Mater., 6(7):557–562, 2007.
  23. G. L. Hunter, K. V. Edmond, M. T. Elsesser, and E. R. Weeks. Tracking rotational diffusion of colloidal clusters. Opt. Express, 19(18):17189–17202, 2011.
  24. K. V. Edmond, M. T. Elsesser, G. L. Hunter, D. J. Pine, and E. R. Weeks. Decoupling of rotational and translational diffusion in supercooled colloidal fluids. P. Natl. Acad. Sci. USA, 109(44):17891–17896, 2012.
  25. M. K. Cheezum, W. F. Walker, and W. H. Guilford. Quantitative comparison of algorithms for tracking single fluorescent particles. Biophys. J., 81(4):2378–2388, 2001.
  26. M. C. Jenkins and S. U. Egelhaaf. Confocal microscopy of colloidal particles: towards reliable, optimum coordinates. Adv. Colloid Interfac., 136:65–92, 2008.
  27. A. van Blaaderen and P. Wiltzius. Real-space structure of colloidal hard-sphere glasses. Science, 270(19):1177–1179, 1995.
  28. R. Besseling, L. Isa, E. R. Weeks, and W. C. K. Poon. Quantitative imaging of colloidal flows. Adv. Colloid Interfac., 146:1–17, 2009.
  29. A. Sergé, N. Bertaux, H. Rigneault, and D. Marguet. Dynamic multiple-target tracing to probe spatiotemporal cartography of cell membranes. Nat. Methods, 5(8):687–694, 2008.
  30. K. Jaqaman, D. Loerke, M. Mettlen, H. Kuwata, S. Grinstein, S. L. Schmid, and G. Danuser. Robust single-particle tracking in live-cell time-lapse sequences. Nat. Methods, 5(8):695–702, 2008.
  31. J. Bäumgartl, J. L. Arauz-Lara, and C. Bechinger. Like charge attraction in confinement: Myth or truth? Soft Matter, 2(8):631, 2006.
  32. M. P. Gordon, T. Ha, and P. R. Selvin. Single-molecule high-resolution imaging with photobleaching. P. Natl. Acad. Sci. USA, 101(17):6462–6465, 2004.
  33. X. Qu, D. Wu, L. Mets, and N. F. Scherer. Nanometer-localized multiple single-molecule fluorescence microscopy. P. Natl. Acad. Sci. USA, 101(31):11298–303, 2004.
  34. D. Allan, T. Caswell, N. Keim, and C. van der Wel. Trackpy v0.3.1. Zenodo, 55143, 2016.
  35. S. Maneewongvatana and D. M. Mount. It’s okay to be skinny, if your friends are fat. Proceedings of the 4th Annual Workshop on Computational Geometry, pages 1–8, 1999.
  36. D. Kraft. A software package for sequential quadratic programming. Technical report, DLR German Aerospace Center, DFVLR-FB 88-28, 1988.
  37. E. Jones, T. Oliphant, P. Peterson, and Others. SciPy: Open source scientific tools for Python.
  38. T. H. Besseling, J. Jose, and A. van Blaaderen. Methods to calibrate and scale axial distances in confocal microscopy as a function of refractive index. J. Microsc., 257(2):142–150, 2015.
  39. C. van der Wel, R. K. Bhan, R. W. Verweij, H. C. Frijters, A. D. Hollingsworth, S. Sacanna, and D. J. Kraft. Preparation of colloidal organosilica spheres through spontaneous emulsification, in preparation.
  40. D. J. Kraft, R. Wittkowski, B. Ten Hagen, K. V. Edmond, D. J. Pine, and H. Löwen. Brownian motion and the hydrodynamic friction tensor for colloidal particles of complex shape. Phys. Rev. E, 88(5):2–6, 2013.
  41. A. Ramírez-Saito, C. Bechinger, and J. L. Arauz-Lara. Optical microscopy measurement of pair correlation functions. Phys. Rev. E, 74(3):030401, 2006.
  42. A. V. Abraham, S. Ram, J. Chao, E. S. Ward, and R. J. Ober. Quantitative study of single molecule location estimation techniques. Opt. Express, 17(26):23352–73, 2009.
  43. J. Fung and V. N. Manoharan. Holographic measurements of anisotropic three-dimensional diffusion of colloidal clusters. Phys. Rev. E, 88:020302, 2013.