Improved Workflow for Unsupervised Multiphase Image Segmentation
Abstract
Quantitative image analysis often depends on accurate classification of pixels through a segmentation process. However, imaging artifacts such as the partial volume effect and sensor noise complicate the classification process. These effects increase the pixel intensity variance of each constituent class, causing intensities from one class to overlap with another. This increased variance makes threshold-based segmentation methods insufficient due to ambiguous overlap regions in the pixel intensity distributions. The class ambiguity becomes even more complex for systems with more than two constituents, such as unsaturated moist granular media. In this paper, we propose an image processing workflow that improves segmentation accuracy for multiphase systems. First, the ambiguous transition regions between classes are identified and removed, which allows for global thresholding of single-class regions. Then the transition regions are classified using a distance function, and finally both segmentations are combined into one classified image. This workflow includes three methodologies for identifying transition pixels, and we demonstrate on a variety of synthetic images that these approaches are able to accurately separate the ambiguous transition pixels from the single-class regions. For situations with typical amounts of image noise, misclassification errors and area differences calculated between each class of the synthetic images and the resultant segmented images range from 0.69% to 1.48% and 0.01% to 0.74%, respectively, showing the segmentation accuracy of this approach. We demonstrate that we are able to accurately segment x-ray microtomography images of moist granular media using these computationally efficient methodologies.
Keywords:
Image segmentation, Multiphase segmentation, Local deconvolution, Non-Gaussian mixture modeling, Granular media
1 Introduction
Given a two-dimensional image, image segmentation is the process of assigning each pixel in the image a particular class, or label. This is an important task in the analysis of experimental observations and is often the first step in much longer scientific workflows Ketcham and Carlson (2001); Sheppard et al (2004); Kaestner et al (2008); Porter and Wildenschild (2010). It is common in many scientific fields to use x-ray microtomography (CT) to capture micron-scale detail of physical samples. For example, in the study of granular media, segmenting CT scans of a granular media sample is necessary to study the size and spatial distributions of the sample's components, or to simulate the mechanical behavior of the medium with a numerical model. However, segmentation is difficult for images with more than two constituents, especially when the intensities (i.e., pixel values) of the different components are similar Sezgin et al (2004); Iassonov et al (2009); Baveye et al (2010). An example of this occurs in images of moist granular media, where each pixel represents either a gas, solid, or fluid phase, as shown on the left of Figure 1. The solid and fluid phases attenuate x-rays in a similar manner, and are thus difficult to distinguish in the resultant CT image. In addition, the fluid phase is less prevalent than the grain phase, which makes the intensity distribution for the fluid class confounded with the solid class distribution; in the histogram of Figure 1 it is difficult to even identify the fluid phase. To further exacerbate this problem, raw CT images are often noisy and typically contain gradual transition zones between classes instead of sharp interfaces. This gradual transition can cause many pixels to be mislabeled in applications with three or more classes.
In images with three components the gradual transitions are particularly problematic for pixels that are in transition between the highest- and lowest-intensity classes. The intensity of these pixels will be midway between the high and low intensities, which is similar to the intensity of the medium-intensity class. Figure 1 illustrates the distribution overlap issue in a CT image of a moist sand sample. The problem areas are those where the gas and fluid phases are adjacent to each other. In these transition regions, pixel intensity does not correspond to a single component, but contains intensity contributions from both the gas and fluid. Therefore, the pixel intensities in the transition between gas and fluid are not distinct from typical intensities of the solid class. This will result in ambiguous class assignment even with segmentation methods that include neighborhood information, e.g., Vogel and Kretzschmar (1996); Berthod et al (1996); Oh and Lindquist (1999); Kulkarni et al (2012).
To support more accurate image segmentation in the presence of noise and blurred transitions, we propose a multiphase segmentation workflow that considers both local and global aspects of the image to segment its constituents. Our method uses an initial smoothing step to remove image noise, followed by a series of processes to identify those pixels that are in a transition region and those that are not. The transition and non-transition pixels are then segmented separately using the methods described below. The segmented regions are then combined to form a complete segmentation of the image. This workflow is intentionally modular and allows users to interchange different smoothing methods, transition-identification methods, and segmentation algorithms for their particular application.
We present three methods for identifying transition regions and provide guidance on which method may work best for certain situations. To establish the efficacy of each transition-identification method, we use quantitative measures to compare each method's segmented results for synthetic images with known constituent compositions and levels of Gaussian smoothing and noise. We then apply these methods to a real CT image of moist sand where the true component at each pixel is unknown.
2 Background
Segmenting CT images of porous media is the subject of several studies Schlüter et al (2010); Andrae et al (2013); Kato et al (2015); Lindquist and Venkatarangan (1999), and several different methods have been proposed in the literature. Typical image classification methods first use preprocessing to denoise the image before assigning each pixel in the image one label from a finite set of possibilities. While a comprehensive overview of preprocessing and classification algorithms is not possible here, in part because of the problem-specific nature of many algorithms, below we provide background relevant to our proposed workflow.
2.1 Edge-Preserving Smoothing Methods
Noise is common in CT images, but can be problematic for segmentation algorithms. We therefore preprocess images with a smoothing step, which helps mitigate the impact of noise. However, naive smoothing filters, e.g., a Gaussian blur filter, can enlarge transition zones, which is also problematic. In the image segmentation context, smoothing methods should reduce noise while also preserving the pixel values along the edges of each component.
Kaestner and colleagues Kaestner et al (2008) summarize several smoothing methods used to address noise and transition zones in CT images of porous media samples. The simplest of these methods are convolution filters, such as the Gaussian blur filter, and spatial filters, such as the mean and median filters. These methods quickly remove noise in image data Jain (1989); however, they affect all pixels within an image and thus fail to preserve the edges (i.e., these filters attenuate the intensity gradient) between components. The authors of Kaestner et al (2008) also describe a class of edge-preserving filters that use the solutions of diffusive partial differential equations (PDE) as the smoothed image. These approaches have proven useful in many cases Perona and Malik (1990); Osher and Rudin (1990); Rudin et al (1992), but can sometimes require large amounts of memory and can be computationally expensive. While these filters can use up to seven times the memory required to store the initial image, a first-in first-out (FIFO) queue was used by Kaestner et al (2008) to significantly reduce the amount of required memory.
One of the most common PDE smoothing methods is the anisotropic diffusion filter (ADF), which uses a diffusion equation with nonlinear diffusivity to reduce noise in an image Catté et al (1992). The PDE takes the form

(1)  ∂u/∂t = ∇ · ( c(‖∇u_σ‖²) ∇u ),

where u is the image intensity, c is the nonlinear diffusivity term, and ‖∇u_σ‖² is the squared norm of the image's gradient after being smoothed with a Gaussian filter with standard deviation σ Catté et al (1992); Kaestner et al (2008). The diffusivity term, c, changes based on the magnitude of the image gradient, such that it is small where the gradient is high (such as at edges) and near unity where the gradient is low (such as within homogeneous regions). The result is a filter that smooths within each constituent, but provides minimal smoothing near the interfaces between them.
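As a concrete illustration, a minimal Perona-Malik style diffusion iteration can be sketched as below. This is a simplified sketch, not the filter used in this work: the function name, the exponential diffusivity, and the parameter defaults (kappa, dt, n_iter) are illustrative choices.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Sketch of Perona-Malik style diffusion for a 2-D float image.

    kappa controls the diffusivity falloff; values here are illustrative.
    """
    u = img.astype(float)
    for _ in range(n_iter):
        # forward differences to the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # nonlinear diffusivity: small where the gradient is large (edges),
        # near unity where the gradient is small (homogeneous regions)
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u = u + dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Running this on a noisy two-phase image reduces the intensity variance within each region while leaving the sharp interface largely intact, which is the behavior the ADF is designed to provide.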
Another common edge-preserving smoothing method is the non-local means algorithm (NLM) proposed by Buades and coworkers Buades et al (2005). This method smooths an image by replacing each pixel value with a weighted average of all other pixels in the image. Pixels that have similar Gaussian-smoothed neighborhoods as the target pixel have larger weights than those pixels that do not Buades et al (2005). This smoothing calculation takes the form

(2)  NL[v](i) = (1 / C(i)) Σ_j exp( −‖v(N_i) − v(N_j)‖²_{2,a} / h² ) v(j),

where v is the image, i is the pixel of interest, C(i) is a normalizing factor, the norm is weighted by a Gaussian smoothing kernel with standard deviation a, h is the filtering parameter, and j corresponds to the other pixels in the image. The N_i and N_j terms correspond to the local neighborhoods around pixels i and j, respectively.
2.2 Segmentation Methods
While edge-preserving smoothing methods retain intensity information at the edges, they do not sharpen edges or remove transition zones. Our segmentation approach therefore needs to overcome the transition pixel problem explained earlier.
Similar to the various smoothing methods in the literature, there are multiple approaches for segmenting the different components of an image. In most cases, this process is challenging due to the gradual transitions between components. Clustering methods, such as K-Means MacQueen (1967) or Gaussian mixture modeling Gupta and Sortrakul (1998), can help identify intensity thresholds that separate components, but are not effective when there is significant overlap between the intensity distributions of each component. Unfortunately, this overlap is common in CT images. As mentioned previously, the small number of fluid pixels creates a class distribution that is essentially indistinguishable from the solid phase distribution. The authors of Kaestner et al (2008) argue that histogram-based segmenting methods are "unsuitable as a final segmentation method"; however, we find that these methods, Gaussian mixture modeling in particular, can be useful building blocks for more complex algorithms.
Several studies have utilized energy-based techniques for segmenting images Hagenmuller et al (2013); Juneja and Kashyap (2016). These methods are based on finding a full segmentation of the image that minimizes an energy function. These methods must consider all pixels in the image to find the minimum-energy segmentation, which is computationally expensive for large three-dimensional image sets. While effective in many settings, these types of global segmentation methods are too expensive for our particular purposes. Later we discuss how our segmentation approach uses local deconvolution to incorporate global information in a way that does not couple all of the pixels together, thereby avoiding the cost of common energy-based methods.
Several porous media segmentation studies use morphological operations, such as erosion and dilation, to remove spurious features or fill holes within their segmented results. We avoid these methods because they alter the image data without considering the image’s underlying structure. We have also found that morphological operations can erroneously remove small, but important, regions like the water phases in Figure 1.
2.3 Transition Detection
A significant component of our workflow is identifying transition pixels. Two existing concepts from the literature will prove particularly useful in this regard: gradient-based edge detection (e.g., the Sobel filter) and steerable filters.
Thinking of the image as a function in two variables, the gradient at each pixel is an indication of how rapidly the image intensity is changing at that location. This is commonly used in the image processing community to identify edges, which are simply regions of the image with rapid intensity changes. Many different filters have been developed for approximating the image gradient and detecting edges Canny (1986); Roberts (1963); Serra (1982); Prewitt (1970); Sobel (1990). Of these many options, we employ the well-studied morphological gradient Serra (1982); Maragos (1987), which is fast to compute and available in many image processing packages. While the true gradient is a vector-valued quantity, the morphological gradient approximates the magnitude of the gradient at a pixel as the difference between the maximum intensity and minimum intensity over a small neighborhood around the pixel.
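The morphological gradient described above is readily available in SciPy; the sketch below flags candidate edge pixels by thresholding it. The function name, the threshold argument, and the neighborhood size are illustrative, not values used in this work.

```python
import numpy as np
from scipy import ndimage

def edge_mask(img, threshold, size=3):
    """Flag pixels whose morphological gradient (local max minus local min
    over a small neighborhood) exceeds a threshold.

    The threshold is application-specific; in the workflow it would be
    chosen from a histogram of gradient values.
    """
    grad = ndimage.morphological_gradient(img.astype(float), size=(size, size))
    return grad > threshold
```

On a clean two-phase image this marks only the pixels adjacent to the interface, which is exactly the behavior used later for gradient peak detection.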
It is well-known that an image gradient is sensitive to noise in the image. However, we have found that an initial edge-preserving smoothing of the image (e.g., ADF or NLM) helps reduce the noise-induced gradient peaks.
The Sobel filter Sobel (1990), which is a common method for computing the gradient at each point in an image, involves the convolution of an image with two filter images, denoted here by G_x and G_y. These filter images approximate the horizontal (i.e., x) and vertical (i.e., y) components of the gradient. They are given by
(3)  G_x = [ -1  0  1 ]
           [ -2  0  2 ]
           [ -1  0  1 ]

(4)  G_y = [ -1 -2 -1 ]
           [  0  0  0 ]
           [  1  2  1 ]
and satisfy

(5)  ∇I ≈ ( G_x ∗ I, G_y ∗ I ),

where ∗ denotes convolution with the image I.
Notice that from G_x and G_y, we can compute any directional derivative. Let G_θ denote the filter that would give us the directional derivative in the direction θ. Thus,

(6)  G_θ = cos(θ) G_x + sin(θ) G_y.
In this sense, the filters G_x and G_y form a basis for any directional derivative filter. This is a specific example of a steerable filter. In general, a family of filters is said to be steerable if the filter coefficients at any angle θ can be represented through a finite number of basis filters

(7)  f^θ = Σ_{m=1}^{M} k_m(θ) f^{θ_m},
where M is the number of basis filters, k_m(θ) are weights for each basis filter, and θ_m are the angles of the basis filters. Freeman and Adelson Freeman and Adelson (1991) provide more details on the conditions necessary for steerable filters. They also show how the dominant orientation can be estimated with quadrature pairs of steerable filters. As an example of quadrature pairs consider the second derivative of a Gaussian, denoted by G_2, and its Hilbert transform, denoted by H_2. The energy of the image at any angle θ is given by

(8)  E(θ) = [G_2^θ ∗ I]² + [H_2^θ ∗ I]².
To find the orientation of the image at a point, we need to find the angle that maximizes this energy. Following Freeman and Adelson (1991), this expression can be approximated by a truncated Fourier expansion to obtain

(9)  E(θ) ≈ C_1 + C_2 cos(2θ) + C_3 sin(2θ),
where the coefficients C_1, C_2, and C_3 depend on the specific filters G_2 and H_2. The maximum of (9) occurs at an angle given by

(10)  θ_d = (1/2) arctan(C_3 / C_2).
The phase, denoted by φ, at this angle can also be computed from the quadrature pair coefficients

(11)  φ = arg[ (G_2^{θ_d} ∗ I) + i (H_2^{θ_d} ∗ I) ].
Notice that the phase at angle θ_d is an estimate of the distance from the point to an edge in the image. This will be used in Section 3.2 as part of a local deconvolution scheme.
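As a concrete illustration of steerability, the Sobel pair can be combined to estimate the derivative along any angle, as in (6). This NumPy/SciPy sketch uses illustrative names and is not part of the paper's implementation.

```python
import numpy as np
from scipy import ndimage

# Sobel basis filters for the horizontal and vertical derivative components
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = GX.T

def directional_derivative(img, theta):
    """Steer the Sobel basis pair to estimate d(img)/d(theta-direction)."""
    dx = ndimage.convolve(img.astype(float), GX)
    dy = ndimage.convolve(img.astype(float), GY)
    # steering equation: G_theta = cos(theta) G_x + sin(theta) G_y
    return np.cos(theta) * dx + np.sin(theta) * dy
```

Applied to a horizontal intensity ramp, the response is constant when steered along the ramp direction and essentially zero when steered perpendicular to it, as the steering equation predicts.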
3 Algorithmic Components
Here we utilize the background and previous work outlined above to introduce two new concepts needed for our segmentation workflow: non-Gaussian mixture modeling of image intensities and local deconvolution.
3.1 Non-Gaussian Mixture Modeling
As mentioned above, Gaussian mixture models cannot always effectively model the intensities of pixels in transition zones. To help overcome this, we extend standard Gaussian mixture modeling approaches with non-Gaussian components that explicitly model the intensities of transition pixels.
A standard Gaussian mixture model has a probability density of the form

(12)  p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, σ_k²),

where K is the number of components in the mixture, N(x; μ_k, σ_k²) is a normal probability density with mean μ_k and variance σ_k², and π_k is the probability of component k. Gaussian mixture models are in the exponential family of models and provide an efficient and flexible framework for unsupervised classification. However, to accurately characterize complex distributions, like those coming from CT intensity images, a large number of components is required. This decreases the interpretability of the resulting model. For example, if there are three phases in the CT image and three components in the mixture model, it is clear that each component in the mixture model corresponds to a phase in the image. However, in our case K > 3 because of the transition regions, and it becomes much more difficult to match each mixture model component with an image phase.
To overcome this interpretability challenge, we propose the use of a non-Gaussian mixture model with specially designed components to model the intensities of transition pixels. The intensity of a pixel in transition lies between the intensities of the two phases it is transitioning between. This is because the physical volume within one pixel contains more than one phase; each phase only fills part of the volume of the pixel. Let V be a uniform random variable describing the fraction of a transition pixel that contains phase i in a transition between phases i and j, and let μ_i and μ_j denote the mean pixel intensities of phases i and j. We assume the intensity of a transition pixel, denoted by I_t, is then given by

(13)  I_t = V μ_i + (1 − V) μ_j + ε,

where ε ∼ N(0, σ²) for some variance σ². The density of I_t is then given by

(14)  p_ij(t) = [ Φ((t − μ_j)/σ) − Φ((t − μ_i)/σ) ] / (μ_i − μ_j),  assuming μ_i > μ_j,

where Φ is the standard normal cumulative density function. Combining these transition densities with the Gaussian mixture in (12) yields the non-Gaussian mixture density

(15)  p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, σ_k²) + Σ_{(i,j)} π_ij p_ij(x).
The unknown model parameters (the component probabilities, means, and variances) can be computed with the expectation-maximization (EM) algorithm Dempster et al (1977). We approximate the expectation step in the EM algorithm by sampling over pixel classes with a Gibbs sampler. The initial values for the model parameters are taken from a simple K-Means classification of the image.
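To make the transition component concrete, the density of a uniform mixture of two phase intensities corrupted by Gaussian noise (a uniform distribution convolved with a Gaussian) can be evaluated as below. This is a sketch of one plausible form of the transition density; the function and argument names are illustrative.

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

def transition_density(t, mu_lo, mu_hi, sigma):
    """Density of I_t = V*mu_hi + (1 - V)*mu_lo + eps, with V ~ U(0, 1)
    and eps ~ N(0, sigma^2): a uniform on [mu_lo, mu_hi] convolved with
    a Gaussian. Parameter names are illustrative.
    """
    t = np.asarray(t, dtype=float)
    return (ndtr((t - mu_lo) / sigma) - ndtr((t - mu_hi) / sigma)) / (mu_hi - mu_lo)
```

The resulting density is nearly flat between the two phase means and decays like a Gaussian tail outside them, which is how the transition components fill the valleys between the Gaussian phase peaks in the mixture.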
3.2 Local Deconvolution
Let y(x) denote the image intensity at location x. We model the image as a blurred and noisy function of an idealized image u such that

(16)  y(x) = (k ∗ u)(x) + ε(x),
where k is an isotropic Gaussian kernel and ε is a white noise Gaussian process. From the steerable filter, we can get the orientation at any point as well as the filter coefficients at that orientation. Mathematically, this gives

(17)  g(x) = (G_2^{θ_d} ∗ y)(x),

(18)  h(x) = (H_2^{θ_d} ∗ y)(x),

where θ_d is the orientation at x that maximizes the energy in (8). Notice that (16), (17), and (18) provide three complementary observations of u. We will use these information sources to estimate the values of the idealized image u, but first we will parameterize u in the local area around x.
Near a point x in the image, assume that the idealized image u is given by a step function along a particular direction n. More specifically, assume that in the neighborhood around x the idealized image is given by

(19)  u(x′) = a if (x′ − x) · n ≤ d, and u(x′) = b otherwise.

This local approximation of the idealized image is illustrated in Figure 2, where n is perpendicular to the angle θ_d that maximizes the energy and x is any point in a transition region. Note that d represents the signed distance between x and the nearest transition; it can therefore be estimated from the phase of the steerable filter.
Using the form in (19), it is possible to compute (16)–(18) analytically and obtain the linear system

(20)  z = A(d) [a, b]^T + η,   η ∼ N(0, Σ),

where z = [y(x), g(x), h(x)]^T, d is the steerable filter phase from (11) computed at point x, and

(21)  A(d) is the 3 × 2 matrix obtained by applying the Gaussian kernel and the two steerable filters to the step function in (19),

where Σ can be computed by applying the steerable filters to a known synthetic image containing a step function and additive Gaussian noise. The covariance of the filter coefficients on this synthetic image will be Σ. The covariance matrix captures correlation between y(x), g(x), and h(x) that stems from the appearance of ε in all three equations (16), (17), and (18).
The expression in (20) defines a weighted least squares problem that can be easily solved to obtain estimates of a and b, and thus the values of the idealized image u. Figure 3 shows how the transitions in a real image correspond to the estimates of a and b. The idealized transition consists of a sharp change from one value to the next, whereas the real image transition is gradual and noisy. With the local deconvolution process we attempt to extract the idealized values. The values of a and b provide the information needed to decide whether a pixel is in a transition zone or not. If a ≈ b, then the pixel is not in a transition zone.
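The weighted least squares solve itself is standard; the sketch below shows the generic computation with a placeholder 3 × 2 design matrix and covariance. The names and the example values are illustrative, not the paper's actual filter-derived entries.

```python
import numpy as np

def solve_weighted_ls(A, z, Sigma):
    """Solve min_x (z - A x)^T Sigma^{-1} (z - A x) via the normal equations.

    A stacks the linear observations of the unknowns (a, b); Sigma is the
    observation covariance. Shapes and contents here are illustrative.
    """
    W = np.linalg.inv(Sigma)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ z)
```

With noise-free observations the solve recovers the underlying (a, b) exactly; with noisy observations the covariance weighting down-weights the less reliable filter responses.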
4 Method Overview
Our method incorporates the information presented in Sections 2 and 3, and consists of four main steps:

1. Smooth the image to remove noise.

2. Identify transition pixels and separate the image into single-class and transition-zone subgroups.

3. Use a Gaussian mixture method to segment the single-class pixel group.

4. Segment transition pixels based on the minimum Euclidean distance to a single-class region.

The result is a combined segmented image with both the single-class and transition pixels assigned to their respective classes.
4.1 Smooth Image
The first step uses either the ADF or NLM smoothing methods to remove noise within the image while maintaining the intensity values near the component edges. We used an NLM filter from the Scikit-Image (Skimage) Python library van der Walt et al (2014). Although noise is removed from the regions within each component, the ambiguous transition pixels are left essentially unaltered. This is important because it maintains the original structural information that we use to identify those transition regions.
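A minimal sketch of this smoothing step with Skimage's NLM implementation follows; the patch sizes and the scaling of the filtering parameter h are illustrative choices, not the settings used in this work.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def smooth_nlm(img):
    """Edge-preserving NLM smoothing of a 2-D grayscale image.

    Parameter choices (patch_size, patch_distance, h = 0.8 * sigma)
    are illustrative.
    """
    sigma = float(np.mean(estimate_sigma(img)))
    return denoise_nl_means(img, h=0.8 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6)
```

The noise standard deviation is first estimated from the image itself, so the filter strength adapts to the input rather than requiring a hand-tuned value.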
4.2 Identifying Transition Pixels
The next step in the workflow is to identify potential transition pixels. We present three different methods for identifying these pixels based on the techniques outlined in Sections 2 and 3.
4.2.1 Local Deconvolution Classification
The first method incorporates the steerable filter ideas from Section 2.3 with the local deconvolution method introduced in Section 3.2 and the non-Gaussian mixture modeling described in Section 3.1.
After smoothing, we construct a non-Gaussian mixture model of the image intensity distribution. The Gaussian (i.e., in-phase) components of the mixture model allow us to identify which phase is most likely for a particular intensity value. Once the non-Gaussian mixture model is constructed, we apply the local deconvolution scheme to estimate the a and b values in (19) for each pixel in the image. Combined with the mixture model, the values a and b allow us to classify each pixel into different transition types (i.e., gas-to-grain, grain-to-fluid, or gas-to-fluid). For example, consider the case where a certain pixel has an a value that is most likely in the gas class and a b value that is most likely from the grain class. This pixel is most likely an interface between the gas and grain components. Similarly, if a and b fall within the same class, the pixel is not likely in a transition zone. Notice that this approach does not require setting any thresholds or other critical parameters.
Comparing a and b thus allows us to identify transition regions. In theory, the values of a and b could also be used to identify which type of transition region a pixel is in (e.g., gas-grain), but we found these estimates to be inaccurate and only use this approach to identify transition pixels. Algorithm 1 provides a more detailed description of our approach and Figure 4 illustrates the final transition pixels selected by this method.
4.2.2 Local Deconvolution Difference
Like the above classification approach, this method uses the steerable filter results and local deconvolution scheme to calculate a and b for each pixel. However, instead of using the non-Gaussian mixture model to classify the pixels, the absolute difference between a and b is used. If a and b are significantly different, they likely represent two different components and the pixel is likely in a transition zone. We therefore use the absolute difference, |a − b|, as an indication of transition pixels. If |a − b| is larger than a prescribed threshold, the pixel is in a transition zone. To determine an appropriate threshold, we plotted a histogram of all |a − b| values within the CT image and selected a value representative of transition pixels in our image.
4.2.3 Gradient Peak Detection
Our last approach for transition detection uses the magnitude of the image gradient as an indicator of transition regions. If the magnitude of the morphological gradient is larger than a predefined threshold, the pixel is marked as a transition pixel. To determine an appropriate threshold value for separating edges from single-class regions, we plotted a histogram of all gradient values within the CT image and selected a gradient magnitude above which pixels were assigned to a transition zone.
4.3 Segment SingleClass Regions
With the transition zones identified and removed from the smoothed image, segmenting the single-class regions is quite simple because the intensity distributions of the phases are more distinct. This step uses a three-component Gaussian mixture model approach to calculate thresholds between each constituent. Figure 5 shows the distributions of both the single-class regions and transition zones after being identified and separated. After removing transition pixels, the underlying Gaussian-like distributions corresponding to each phase become more apparent. This includes the fluid class, which is confounded with the tail of the grain distribution in the raw intensity histogram. The transition pixel intensity distribution is bimodal, with each mode corresponding to the gas-to-grain and grain-to-fluid transitions. This curve does not show a third mode corresponding to the gas-to-fluid transition, since that transition spans the two other transition types.
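The three-component mixture fit on the remaining single-class intensities can be sketched with scikit-learn; this is an illustrative sketch, not the paper's implementation, and the fixed random seed is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_three_phase_gmm(intensities):
    """Fit a 3-component Gaussian mixture to single-class pixel intensities
    and return the component means sorted low to high.

    Which mean corresponds to which phase (gas, fluid, solid) depends on
    the imaging modality; usage here is illustrative.
    """
    gmm = GaussianMixture(n_components=3, random_state=0)
    gmm.fit(np.asarray(intensities, dtype=float).reshape(-1, 1))
    return np.sort(gmm.means_.ravel())
```

Thresholds between adjacent constituents can then be placed between consecutive fitted means (or at the intersections of the fitted component densities).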
4.4 Segment Transition Regions
The approaches described in Section 4.2 are used to identify transition pixels, but not to identify the class of each transition pixel. To perform this classification, we first compute the distance from each transition pixel to the nearest pixel in each single-class region defined in Section 4.3 (e.g., the nearest gas pixel). The transition pixel is then labeled with the class of whichever single-class region is closest. As observed by Hagenmuller Hagenmuller et al (2013), the intensity standard deviations of each constituent are the same because noise is characteristic of the sensor, not the sensed medium. This allows us to assume the constituents spread equally into the transition regions and that it is appropriate to segment the transition pixels based on proximity. Figure 6 illustrates the results of combining the segmented single-class and segmented transition regions. In this example, the transition regions were identified with the local deconvolution approach.
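The nearest-region labeling step maps directly onto SciPy's Euclidean distance transform, which can return, for every pixel, the indices of the nearest non-masked pixel. The sketch below is illustrative; the function name and argument conventions are assumptions.

```python
import numpy as np
from scipy import ndimage

def label_transition_pixels(labels, transition_mask):
    """Assign each transition pixel the class of the nearest single-class pixel.

    labels: integer array of single-class labels (values at transition
    pixels are ignored). transition_mask: True where a pixel was flagged
    as transitional.
    """
    # distance_transform_edt measures distance to the nearest zero (False)
    # element; return_indices gives the coordinates of that nearest pixel
    _, inds = ndimage.distance_transform_edt(transition_mask, return_indices=True)
    ii, jj = inds
    out = labels.copy()
    out[transition_mask] = labels[ii[transition_mask], jj[transition_mask]]
    return out
```

Because the distance transform is computed once over the whole mask, this step is linear in the number of pixels rather than requiring a per-pixel search.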
5 Results
5.1 Synthetic Comparison of Pixel Identification Methods
To evaluate the accuracy of our pixel-identification methodologies, we computed several quantitative metrics using a synthetic image representative of granular media CT images. Artificial test images are useful for calculating comparative metrics because, unlike real CT imagery, the exact constituent compositions are known. Our base synthetic test image (Figure 7) was composed of two circular grains with a hyperbolic fluid bridge between them, where each phase was assigned the mean intensity value of the corresponding phase from a CT scan dataset.
The synthetic noise and blur artifacts were created by adding Gaussian noise and blurring the image with a Gaussian filter. The segmentation workflow was tested on multiple images with a range of noise levels and transition zone widths to understand how each pixel-identification method responds to each type of image artifact. The transition widths were controlled by changing the standard deviation of the Gaussian blur filter. From CT images of moist granular samples, we determined characteristic Gaussian noise standard deviations for the single-class regions and characteristic Gaussian blur standard deviations for the transition zones. We therefore tested synthetic images with Gaussian noise and Gaussian blur standard deviations spanning these characteristic ranges.
We found that the NLM approach removed noise better than the ADF approach for CT images of granular media, so we used that smoothing method in all results. After each segmentation, the noisy images were evaluated against the unaltered image in terms of class area fractions and misclassification error (ME). The area fraction provides information regarding the image's total class composition; however, it does not verify whether the segmented components are accurate. It is possible to have images with the same area fractions that are vastly different from a pixel-to-pixel perspective. The ME provides complementary information by comparing each classified pixel in the "true" image with the corresponding pixel in the final segmented image using the expression
(22)  ME = 1 − (1/N) Σ_{p=1}^{N} δ( C_t(p), C_s(p) ),

where N is the number of pixels in the image, δ is an indicator function that is one when both arguments are the same and zero otherwise, C_t(p) is the initial (true) classification of pixel p, and C_s(p) is the class of pixel p after segmentation. If the two pixels have the same classification then there is no error and δ = 1; otherwise δ = 0. Classified images that are exactly the same have an ME of 0, whereas images that are exactly dissimilar have an ME of 1.
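The ME metric in (22) reduces to the fraction of disagreeing pixels, which is a one-line computation; the function name below is illustrative.

```python
import numpy as np

def misclassification_error(true_labels, seg_labels):
    """ME from (22): the fraction of pixels whose segmented class differs
    from the true class (0 = identical, 1 = completely dissimilar)."""
    return float(np.mean(np.asarray(true_labels) != np.asarray(seg_labels)))
```

Computed this way, identical label images score 0, a single wrong pixel in a four-pixel image scores 0.25, and a fully wrong image scores 1.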
Results from the area fraction calculations are provided in Figure 8. Overall the three methods provided final segmentations that were close to the true constituent composition of the synthetic images, with the gradient method providing the lowest area differences for all three classes. The deconvolution classification method results in the largest fraction differences in the gas and grain classes of approximately 0.75% and 0.30%, respectively. However, the deconvolution difference method produces the highest difference in the fluid class by a small margin.
The ME values appear fairly independent of transition zone width and do not change drastically as the transition zones become larger (top image, Figure 9). In addition, ME increases slightly as noise increases; however, the gradient method ME increases significantly at the largest noise standard deviation tested (bottom image, Figure 9).
When comparing computational efficiency, the gradient identification method has the shortest average runtime on the synthetic test images, significantly faster than the deconvolution difference and classification methods.
5.2 Performance on Real CT Images
In addition to the synthetic image tests, we segmented a CT scan of a moist sand sample. Figure 10 illustrates sections of the original CT scan and their subsequent segmentations.
This set of segmented images is classified into gas, grain, and fluid classes. With this data we can analyze the composition of the sample and calculate its moisture content. The fluid fraction of the segmented volume closely matches the known volumetric fluid content of the physical sample; the difference is well within measurement error and any human error introduced while preparing the physical sample. This indicates we have achieved good agreement between the physical sample and its segmented counterpart.
This segmented dataset will provide the basis for numerical models of the granular samples. As an example, we took a section of the segmented CT images and created meshes of the different grains using a level set approach (Figure 11).
Numerical models and computer simulations of real-world phenomena provide researchers with a powerful complement to theoretical and physical experimentation. In order to provide meaningful results, these numerical simulations require high-fidelity representations of the real-world geometric and material properties that they are modeling. In many cases the simulated properties are merely approximations of the real systems, which is especially true for systems with complex physical shapes and extreme scales. We have shown that we can use our image segmentation methodology to match numerical model geometries with those of actual physical specimens in a precise manner, improving the validation process between numerical and physical experiments.
6 Discussion and Conclusions
Multiphase image segmentation is challenging due to ambiguity in the transition regions between components, which can lead to pixel misclassification. We have presented a workflow that addresses these transition regions while providing flexibility to substitute different methods for each step of the process. In addition, we have presented the results for three methodologies that identify the ambiguous transition regions within images.
Of the three methods, the gradient detection method returns segmented class areas closest to those of the original image, as well as the lowest ME values for cases with moderate noise. However, it is much more susceptible to heavy noise than the other methods; the gradient approach is therefore not recommended when the input images are significantly noisy. The deconvolution classification method results in the highest ME values of the three identification methods and produces the largest area fraction differences for both the grain and gas classes. However, this approach had the second closest area fraction for the fluid constituent, which for our images was the most difficult component to segment. The deconvolution difference method provides lower ME values than the deconvolution classification method, though not quite as low as the gradient detection method for the gas and grain classes. Of the three methods, the deconvolution difference method seems least affected by both blur and noise in the input images, indicating that it is a good approach for images with significant noise and blurring artifacts. In summary, all methods seem relatively unaffected by increased blur; however, they all exhibit decreased accuracy as noise increases. Despite the differences in performance metrics, the area differences are low and the ME values are less than of the entire image for noise levels we would expect to see. This indicates that all of these methods provide accurate segmentations of the original image for moderate noise.
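The two metrics used throughout this comparison, misclassification error (ME) and per-class area fraction difference, are straightforward to compute from a ground-truth label image and its segmentation. A minimal sketch follows; the function names are ours.

```python
import numpy as np

def misclassification_error(truth, segmented):
    """Fraction of pixels whose assigned class differs from ground truth."""
    return np.count_nonzero(truth != segmented) / truth.size

def area_fraction_difference(truth, segmented, cls):
    """Absolute difference in the area fraction occupied by class `cls`."""
    f_true = np.count_nonzero(truth == cls) / truth.size
    f_seg = np.count_nonzero(segmented == cls) / segmented.size
    return abs(f_true - f_seg)

# Toy example: one of eight pixels is mislabeled (class 2 -> class 1).
truth = np.array([[0, 0, 1, 1],
                  [0, 2, 1, 1]])
seg = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1]])
me = misclassification_error(truth, seg)
```

ME penalizes every disagreement equally, while the area fraction difference can hide compensating errors, which is why the paper reports both.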
While the gradient and deconvolution difference methods performed better than the deconvolution classification approach with respect to the area fraction and ME metrics, these methods rely on threshold values that were specifically tuned to produce good results, whereas the classification approach requires no such specification. Threshold values that work well for this particular case will likely not work well for others, and the time required to determine adequate thresholds can far outweigh the additional runtime of the deconvolution classification method. The main advantage of the local deconvolution classification approach is that it requires no guidance from the user to produce quality segmentations that can be used for further analyses, such as quantitative image measurements or the initialization of numerical simulations.
In closing, we have presented a modular and flexible workflow for segmenting images composed of multiple constituent phases, along with an unsupervised segmentation method whose performance rivals that of manually tailored methods.
References
 Andrae et al (2013) Andrae H, Combaret N, Dvorkin J, Glatt E, Han J, Kabel M, Keehm Y, Krzikalla F, Lee M, Madonna C, Marsh M, Mukerji T, Saenger EH, Sain R, Saxena N, Ricker S, Wiegmann A, Zhan X (2013) Digital rock physics benchmarks - part I: imaging and segmentation. Computers & Geosciences 50:25–32
 Baveye et al (2010) Baveye PC, Laba M, Otten W, Bouckaert L, Sterpaio PD, Goswami RR, Grinev D, Houston A, Hu Y, Liu J, et al (2010) Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and x-ray microtomography data. Geoderma 157(1):51–63
 Berthod et al (1996) Berthod M, Kato Z, Yu S, Zerubia J (1996) Bayesian image classification using markov random fields. Image and vision computing 14(4):285–295
 Buades et al (2005) Buades A, Coll B, Morel JM (2005) A non-local algorithm for image denoising. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2, IEEE Computer Society, Washington, DC, USA, CVPR '05, pp 60–65
 Canny (1986) Canny J (1986) A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8(6):679–698
 Catté et al (1992) Catté F, Lions PL, Morel JM, Coll T (1992) Image selective smoothing and edge detection by nonlinear diffusion. SIAM Journal on Numerical Analysis 29(1):182–193
 Dempster et al (1977) Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society Series B (methodological) pp 1–38
 Freeman and Adelson (1991) Freeman W, Adelson E (1991) The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence 13:891–906
 Gupta and Sortrakul (1998) Gupta L, Sortrakul T (1998) A gaussian-mixture-based image segmentation algorithm. Pattern Recognition 31(3):315–325
 Hagenmuller et al (2013) Hagenmuller P, Lesaffre B, Flin F, Calonne N, Naaim M (2013) Energybased binary segmentation of snow microtomographic images. Journal of Glaciology 59(217)
 Iassonov et al (2009) Iassonov P, Gebrenegus T, Tuller M (2009) Segmentation of x-ray computed tomography images of porous materials: A crucial step for characterization and quantitative analysis of pore structures. Water Resources Research 45(9)
 Jain (1989) Jain AK (1989) Fundamentals of digital image processing. Prentice-Hall, Inc.
 Juneja and Kashyap (2016) Juneja P, Kashyap R (2016) Energy based methods for medical image segmentation. International Journal of Computer Applications 146(6)
 Kaestner et al (2008) Kaestner A, Lehmann E, Stampanoni M (2008) Imaging and image processing in porous media research. Advances in Water Resources 31(9):1174–1187, quantitative links between porous media structures and flow behavior across scales
 Kato et al (2015) Kato M, Takahashi M, Kawasaki S, Kaneko K (2015) Segmentation of multiphase x-ray computed tomography images. Environmental Geotechnics 2(2):104–117
 Ketcham and Carlson (2001) Ketcham RA, Carlson WD (2001) Acquisition, optimization and interpretation of x-ray computed tomographic imagery: applications to the geosciences. Computers & Geosciences 27(4):381–400
 Kulkarni et al (2012) Kulkarni R, Tuller M, Fink W, Wildenschild D (2012) Three-dimensional multiphase segmentation of x-ray CT data of porous materials using a bayesian markov random field framework. Vadose Zone Journal 11(1)
 Lindquist and Venkatarangan (1999) Lindquist W, Venkatarangan A (1999) Investigating 3d geometry of porous media from high resolution images. Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy 24(7):593–599
 MacQueen (1967) MacQueen J (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, University of California Press, Berkeley, Calif., pp 281–297
 Maragos (1987) Maragos P (1987) Tutorial on advances in morphological image processing and analysis. Optical engineering 26(7):623–632
 Oh and Lindquist (1999) Oh W, Lindquist B (1999) Image thresholding by indicator kriging. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(7):590–602
 Osher and Rudin (1990) Osher S, Rudin LI (1990) Feature-oriented image enhancement using shock filters. SIAM Journal on Numerical Analysis 27(4):919–940
 Perona and Malik (1990) Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(7):629–639
 Porter and Wildenschild (2010) Porter ML, Wildenschild D (2010) Image analysis algorithms for estimating porous media multiphase flow variables from computed microtomography data: a validation study. Computational Geosciences 14(1):15–30
 Prewitt (1970) Prewitt JM (1970) Object enhancement and extraction. Picture processing and Psychopictorics 10(1):15–19
 Roberts (1963) Roberts LG (1963) Machine perception of three-dimensional solids. PhD thesis, Massachusetts Institute of Technology
 Rudin et al (1992) Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4):259–268
 Schlüter et al (2010) Schlüter S, Weller U, Vogel HJ (2010) Segmentation of x-ray microtomography images of soil using gradient masks. Computers & Geosciences 36(10):1246–1251
 Schlüter et al (2014) Schlüter S, Sheppard A, Brown K, Wildenschild D (2014) Image processing of multiphase images obtained via x-ray microtomography: a review. Water Resources Research 50(4):3615–3639
 Serra (1982) Serra J (1982) Image analysis and mathematical morphology, v. 1. Academic Press
 Sezgin et al (2004) Sezgin M, et al (2004) Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic Imaging 13(1):146–168
 Sheppard et al (2004) Sheppard AP, Sok RM, Averdunk H (2004) Techniques for image enhancement and segmentation of tomographic images of porous materials. Physica A: Statistical mechanics and its applications 339(1):145–151
 Sobel (1990) Sobel I (1990) An isotropic 3×3 image gradient operator. Machine vision for three-dimensional scenes pp 376–379
 Vogel and Kretzschmar (1996) Vogel H, Kretzschmar A (1996) Topological characterization of pore space in soil - sample preparation and digital image processing. Geoderma 73(1–2):23–38
 van der Walt et al (2014) van der Walt S, Schönberger JL, Nunez-Iglesias J, Boulogne F, Warner JD, Yager N, Gouillart E, Yu T (2014) scikit-image: image processing in Python. PeerJ 2:e453