Automated Defect Localization via Low Rank Plus Outlier Modeling of Propagating Wavefield Data


Stefano Gonella, Department of Civil Engineering, University of Minnesota, Minneapolis, MN
Jarvis D. Haupt, Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN

This work proposes an agnostic inference strategy for material diagnostics, conceived within the context of laser-based non-destructive evaluation methods which extract information about structural anomalies from the analysis of acoustic wavefields measured on the structure’s surface by means of a scanning laser interferometer. The proposed approach couples spatiotemporal windowing with low rank plus outlier modeling, to identify a priori unknown deviations in the propagating wavefields caused by material inhomogeneities or defects, using virtually no knowledge of the structural and material properties of the medium. This characteristic makes the approach particularly suitable for diagnostics scenarios where the mechanical and material models are complex, unknown, or unreliable. We demonstrate our approach in a simulated environment using benchmark point and line defect localization problems based on propagating flexural waves in a thin plate.

Anomaly detection, Low rank plus outlier models, Saliency, Non-destructive evaluation
journal: the IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control

1 Introduction

Within the realm of dynamics-based non-destructive evaluation (NDE) and damage prognosis methodologies, techniques based on guided waves have become popular Staszewski et al. (2004); Rose (2002); Giurgiutiu (2008) due to their sensitivity to a variety of damage types and their ability to travel long distances (with minor attenuation) and interact with defects located far from available actuation and sensing points. Guided waves are generated and received by transmitter-receiver pairs distributed over the test specimen and the detection process follows the pitch-catch Ihn and Chang (2008) or the pulse-echo Raghavan and Cesnik (2008) paradigms: a signature of wave scattering is captured along the transmitter-receiver path, and the position of the defect is triangulated from the data collected from multiple transducer pairs. Numerous efforts have been devoted to the construction of techniques for damage estimation, classification and localization: some popular approaches involve statistical methods Flynn et al. (2011), acoustic imaging techniques Michaels et al. (2005), singular value decomposition Liu et al. (2012), neural networks Su and Ye (2004), conventional and Monte Carlo matching pursuit decomposition Das et al. (2005, 2009); Mallat and Zhang (1993) and support vector machines Das et al. (2007), among others. Damage detection and triangulation have also been successfully tackled using delay-and-sum techniques Michaels and Michaels (2007); Michaels (2008) and phased arrays Yu and Giurgiutiu (2007). Spatial optimization of the sensor networks has also been proposed Wang and Yuan (2009) to enhance the detection capabilities.

(a) Material damping
(b) Microstructural clutter
Figure 1: Ambiguity observed in pulse-echo detection methods in the presence of material damping and microstructural clutter, and opportunities for anomaly detection via spatial measurement diversity and identification of salient behavior

Pitch-catch methods are at the core of in-situ or online structural health monitoring (SHM) methodologies Kessler (2002); Kirikera et al. (2011). In-situ SHM mandates that components are continuously monitored during their operational lives, and requires that appropriate actuation and sensing devices are embedded in the design. Embedding sensors in structures is an intriguing concept that has been explored with interesting results (e.g. for composite materials Qing et al. (2007)), although questions have been raised about the weakening effect of built-in devices on the material Chung (2003). The applicability of online monitoring techniques depends on a trade-off between the agility of these methods and the thoroughness of inspection that is achievable with methods involving full acoustic or optical access to the entire structural domain.

Radar-based approaches work well under the assumption that the waves travel in the undamaged portion of the specimen without significant attenuation or dispersion; this assumption is valid for thin metal structures with limited damping, a small number of widely-spaced defects, and negligible contribution to dispersion from the material microstructure. However, radar-based pitch-catch methods break down when these assumptions are relaxed. This is illustrated with a simple example in Fig. 1, which depicts the results of a finite element simulation of S- and P-waves in a thin plate with soft inclusions. Figure 1(a) depicts the effect of damping in a lossy material, where the amplitude decay can be so pronounced that the signal processed by the receiver is too weak to be detected or interpreted, especially in the case of noisy signals. Similarly, Fig. 1(b) uses the simple problem of scattering induced by four inclusions to illustrate the issues arising in the presence of multiple, closely spaced damage zones. The proximity of the scatterers produces wave interference and significant clutter, with some inclusions becoming de facto acoustically invisible to the traveling wave; furthermore, the signal recorded by the receiver is hard to decipher, as it is tainted by multiple reflections. On the other hand, in either of these examples, full spatial reconstruction of the wavefield in the neighborhood of the defect, when available, allows a visual identification of the damage zone from the interpretation of the inherent spatial richness of the wavefield.

The virtue of spatial measurement diversity, coupled with the superior reconstruction capabilities available using laser interferometers, has inspired a powerful class of laser-enabled inspection methodologies Michaels et al. (2011); Sharma et al. (2005). A Scanning Laser Doppler Vibrometer (SLDV) allows non-contact measurement of points belonging to a scanning grid, which enables full reconstruction of the propagating wavefield and provides abundant measurement diversity. Subsequent data processing methods, for example those based on the space-time Discrete Fourier Transform (DFT) Basri and Chiu (2004); Alleyne and Cawley (1991) or incident wave removal through wavenumber filtering Ruzzene (2007), have been proposed to improve visualization of propagating wavefields from SLDV data.

This work explores a new automated approach for inference of structural anomalies, which uses local (in space and time) low-rank modeling to describe typical wavefield behavior, and identifies (nominally rare) structural defects based on deviations from this model. Our approach is based on fusing notions of saliency Itti et al. (1998); Itti and Koch (2001); Harel et al. (2006); Liu et al. (2007), with low rank plus (sparse) outlier modeling Chandrasekaran et al. (2011); Candes et al. (2011); Xu et al. (2012), along the lines of several recent efforts examining these techniques in computer vision applications Yan et al. (2010); Shen and Wu (2012). Saliency is often described in terms of human perception (e.g., in the study of visual saliency); here, our goal is to remove the human component in the diagnostic inference task in order to automate the inference procedure, in turn making it suitable for embedding in intelligent multi-step structural monitoring routines. The proposed approach is largely agnostic, employing only minimal knowledge of the geometric and acoustic properties of the medium to perform the detection.

The remainder of this paper is organized as follows. In Section 2 we state our material diagnostic problem in detail, specify our model for identifying saliency in the propagating wavefield data, and describe how this model is employed in our proposed automated inference approach. Section 3 contains a validation of the proposed approach conducted using synthetic data in the context of two benchmark anomaly detection problems. We examine the robustness of our procedure applied to spatially downsampled data in Section 4. Concluding remarks and future directions are discussed in Section 5.

2 Saliency based anomaly detection

The objective of anomaly detection is to identify atypical patterns in data Patcha and Park (2007); Chandola et al. (2009). Here, our aim is to identify the locations (or regions) of a structure at which some structural anomalies exist, from measurements in the form of kinematic time histories gathered at a set of discrete points on the structure’s surface when the structure is excited by a propagating acoustic wave. Propagating wavefields, of course, exhibit characteristics that depend upon the material properties of the medium (e.g., elastic moduli and density) and the externally applied excitation (e.g., carrier frequency and bandwidth). In addition, the interactions between the propagating wavefields and structural anomalies produce signatures in the measured data (e.g., reflections from, or interactions with, localized regions with different material properties). Our approach here identifies and exploits subtle differences between these signatures and otherwise “nominal” wave propagation characteristics to infer the locations of structural anomalies. For clarity of exposition we restrict our discussion in the sequel to two-dimensional plate-like structures, though our proposed approach may be generalized to three-dimensional structures.

2.1 Domain discretization and region partition

Wave motion in two dimensions corresponds to the existence of a non-zero spatiotemporally evolving field $u(x,y,t)$, where $(x,y)$ are coordinates of locations in the material domain, $t$ is time and $u$ is a certain physical descriptor relative to the type of wave being considered. In two-dimensional plate-like structures experiencing flexural wave propagation, for instance, $u$ can be a single displacement component, say $w$, directed along the direction normal to the plane of the domain. In our numerical simulations, as well as in experimental acquisitions involving a laser scan, $u(x,y,t)$ is replaced by an array of time histories of nodal degrees of freedom belonging to a grid. For the sake of exposition, let us assume for simplicity that the grid is rectangular and has dimensions $N_x \times N_y$. We suppose that the wavefield acquisition times are also discretized, so that time histories may be obtained at each point for times indexed by the discrete indices $t = 1, \dots, N_t$. As a result, the full spatiotemporal evolution of the system is expressed in the form of an $N_x \times N_y \times N_t$ data cube comprising one length-$N_t$ time history for each grid point.

(a) Criterion for time shift and windowing
(b) Selection of time shifting parameters
(c) Schematic sequence of data cubes
Figure 2: Procedure for the construction of region-wise data cubes of size $n_x \times n_y \times T$ from the space-time histories. Subfigure (c) highlights the role of time shifting in the identification of the regional data cubes. Every regional data cube is of size $n_x \times n_y \times T$, although the centering of each window in the time domain varies with the location of the corresponding region. The intensity of each region's color is proportional to its distance from the excitation point.

Conceptually, let us consider a partition of the spatial domain into $m_x \times m_y$ rectangular regions (identical in shape and size). Each region of the partition contains a subset of $n_x$ nodes along the $x$ direction and $n_y$ nodes along the $y$ direction. The precise relation between $(N_x, N_y)$ and $(n_x, n_y)$ depends on the partition (e.g., whether two neighboring regions are allowed to share their boundary nodes or not). We assume here that each region shares its sides with its neighbors, in which case we have

$$N_x = m_x (n_x - 1) + 1, \qquad N_y = m_y (n_y - 1) + 1 \tag{1}$$
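With shared boundary nodes, a line of $N_x$ grid nodes split into $m_x$ regions yields $n_x = (N_x - 1)/m_x + 1$ nodes per region along that direction. A small helper of our own (not part of the paper's implementation) illustrates this bookkeeping:

```python
def region_nodes(N, m):
    """Nodes per region along one direction when m boundary-sharing
    regions tile a line of N total grid nodes: N = m*(n - 1) + 1."""
    n_minus_1, rem = divmod(N - 1, m)
    if rem != 0:
        raise ValueError("N nodes cannot be split into m boundary-sharing regions")
    return n_minus_1 + 1
```

For example, a 101-node row split into 10 boundary-sharing regions gives 11 nodes per region (10 regions of 10 interior spans plus one shared closing node each).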
As customary in wave-based detection procedures, we suppose here that the structure is excited using an incident narrow-band burst applied at a single point in the domain. Following this assumption, it is possible to construct a spatiotemporal windowing of the displacement time histories informed by space and time characteristics of the excitation as follows.

Let $j$ index the regions of the partition. Inside the $j$-th region, the expected time of flight $t_j$ of the centroid of the incident wave packet (i.e. the centroid of the envelope of the burst) to the physical centroid of the region is given by

$$t_j = \frac{\| \mathbf{r}_j \|}{c_g} + \frac{N_b}{2 f_0} \tag{2}$$

where $c_g$ is the group velocity of the packet, $\mathbf{r}_j$ is the position vector of the region centroid from the point of application of the excitation, $N_b$ is the number of cycles in the burst and $f_0$ is the wave carrier frequency. The arrival time in (2) can be seen as the sum of the arrival time of the front of the packet ($\|\mathbf{r}_j\|/c_g$) plus the time interval corresponding to half the burst ($N_b/(2f_0)$). Now, for each sample location inside the selected region we retain only the first $T$ time samples immediately following the estimated incidence time. This results in the generation of a local (region-wise) data cube of size $n_x \times n_y \times T$. The procedure is schematically depicted in Fig. 2.
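In code, the windowing step amounts to computing the arrival-time estimate for each region and slicing out the corresponding time samples. The sketch below is our own illustration (the array layout, uniform sampling interval, and single-region scope are assumptions, not part of the paper):

```python
import numpy as np

def window_region(u, centroid_xy, source_xy, c_g, N_b, f0, dt, T):
    """Extract the T-sample window of one region's time histories,
    starting at the estimated arrival time of the packet centroid.

    u           : (n_x, n_y, N_t) array of time histories for one region
    centroid_xy : (x, y) coordinates of the region centroid
    source_xy   : (x, y) coordinates of the excitation point
    c_g         : estimated group velocity of the packet
    N_b, f0     : number of cycles and carrier frequency of the burst
    dt          : sampling interval
    T           : window length in samples
    """
    r = np.linalg.norm(np.subtract(centroid_xy, source_xy))
    t_arrival = r / c_g + N_b / (2.0 * f0)   # front-of-packet time + half burst
    k0 = int(round(t_arrival / dt))           # first retained time sample
    return u[:, :, k0:k0 + T]                 # (n_x, n_y, T) regional data cube
```

Applying this to every region of the partition produces the shifted, equal-length data cubes of Fig. 2.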

This spatio-temporal “preprocessing,” essentially aligning the wave motion event at all locations, is a key component of our approach. The windowing process is the only instance in the present formulation in which some knowledge of the wave propagation characteristics is invoked in the inference process. It is nevertheless worth noting that an estimate of group velocity does not necessarily require a priori knowledge of the medium or construction of a model, as it can be inferred from a simple time of flight calculation, performed for instance between two virtual sensors placed in the domain along a direction where the likelihood of anomalous behavior is considered low (e.g. along one boundary).
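The time-of-flight estimate of the group velocity mentioned above can be sketched as follows. Using the signal magnitude as a crude envelope is our simplification (a Hilbert-transform envelope would be more robust), and the function name is ours:

```python
import numpy as np

def estimate_group_velocity(s1, s2, distance, dt):
    """Estimate group velocity from two 'virtual sensor' time histories.

    s1, s2   : signals recorded at two points separated by `distance`,
               placed along a direction where anomalies are unlikely
               (e.g., along one boundary)
    dt       : sampling interval

    The lag of the peak cross-correlation of the signal envelopes
    (approximated here by the signal magnitudes) gives the packet
    time of flight between the two points.
    """
    e1, e2 = np.abs(s1), np.abs(s2)
    xc = np.correlate(e2 - e2.mean(), e1 - e1.mean(), mode="full")
    lag = int(np.argmax(xc)) - (len(e1) - 1)   # samples by which s2 trails s1
    return distance / (lag * dt)
```

Because only a lag and a distance are needed, no material model enters the estimate.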

2.2 Anomaly localization as a sparse estimation task

Our defect or anomaly localization problem can be understood fundamentally as a sparse estimation task. Let us introduce an $m_x \times m_y$ matrix $F$, which we interpret as a feature matrix whose entries correspond to distinct spatial regions of the partitioned domain. Each element of $F$ corresponds to a contiguous region of the material domain. Informally, we may view $F$ as a spatial “map” that indicates the locations of anomalies in the material. Formally, let us suppose that entries of $F$ take the value zero at locations that correspond to anomaly-free homogeneous regions, and are otherwise nonzero. We say $F$ is sparse whenever most of its entries are zero; here, this corresponds to the number of regions of the material containing anomalies being small relative to the total number of regions. We may consider the class of all sparse $F$ as elements of a feature space $\mathcal{F}$.

The feature matrix is, of course, an abstraction, and we are not able to observe it directly. Instead we must make inferences about $F$ by examining the response of the material to the incident traveling wavefield. Our task is fundamentally an inverse problem, which we can formalize as follows. Suppose that there exists an unknown (and potentially complicated) function $g$ that maps an element of the feature space $\mathcal{F}$ to a corresponding element of a structure space $\mathcal{S}$. Elements of $\mathcal{S}$ correspond to the real, physical materials which exhibit the structural defects and inhomogeneities defined by their corresponding abstract feature matrices. We make measurements of $g(F)$ in the form of surface displacement time histories measured at locations on the material surface denoted by ordered pairs $(x, y)$. We can model the observations as elements of an observation space $\mathcal{O}$. Note that each measurement is a function of some implicit parameters (e.g., wavelength and frequency) of the incident wave used for diagnostics, but for the sake of notational clarity we let the dependence on these wave parameters be implicit in the function $g$.

Given this framework, the detection task can be succinctly stated as follows: our aim is to infer the sparse feature matrix $F$, whose entries indicate the presence of structural anomalies, from a set of measurements obtained at a set of locations on the material surface. Our proposed inference approach is based on identifying salient features in the propagating wavefield; that is, identifying local deviations from shared typical behavior exhibited by the bulk of the material.

2.3 Low rank and sparse models for spatiotemporal saliency

The essential premise behind saliency-informed diagnostics is the notion that, in every region of the domain that is sufficiently far from any defects, the displacement time histories will exhibit some similar, but unknown, “typical” behavior, while the time histories recorded in spatial regions in the immediate vicinity of a defect will exhibit some (also unknown) signature of the defect that is different from the typical response observed in the bulk of the domain. The regions exhibiting atypical behavior are referred to here as salient. When only a few regions exhibit this atypical behavior, this notion of saliency can be viewed as a generalization of the concept of sparsity, which has been a central theme in signal processing, statistics, and machine learning research in recent years (see, for example, the voluminous body of work building upon the initial contributions in compressed sensing Candes et al. (2006); Donoho (2006); Candès and Tao (2006); Haupt and Nowak (2006); Candès and Tao (2007)).

Our aim here is to identify a function of the measured data that serves as a surrogate, or proxy, for the unknown (sparse) feature matrix $F$. Recall that, following the time-windowing described above, our measured data comprises one data cube per region, where the data corresponding to each spatial region is an $n_x \times n_y \times T$ cube. Let us recall that all the time-windows have the same length $T$, although, because of shifting, they represent distinct actual time intervals. For each time “slice” of a regional data cube (indexed by $t = 1, \dots, T$), we can interpret the data from each region as a two-dimensional $n_x \times n_y$ array or, in equivalent vectorized form, a vector of dimension $n = n_x n_y$. Let $x_{j,t}$ denote the $n$-dimensional vector associated with the $j$-th location of the feature space, where $j = 1, \dots, m_x m_y$. Now, we assume that at each time step $t$ there exists a common $r$-dimensional linear subspace $\mathcal{L}_t$, with $r < n$, that well approximates the spatiotemporal data inside the anomaly-free regions. Let $P_{\mathcal{L}_t}$ denote the orthogonal projection onto this subspace and consider the matrix $S_t$ whose $j$-th entry is

$$[S_t]_j = \left\| x_{j,t} - P_{\mathcal{L}_t}\, x_{j,t} \right\|_2^2 \tag{3}$$

for $j = 1, \dots, m_x m_y$, where $\|\cdot\|_2^2$ is the square of the standard Euclidean norm. When an entry of $S_t$ is significantly different from zero, it means that the local wavefield at the corresponding location in the domain has significant energy outside of the subspace $\mathcal{L}_t$ (or, equivalently, is not well-represented as a linear combination of vectors in $\mathcal{L}_t$). In this interpretation, the matrix $S_t$ serves as a proxy for the true (unknown) feature matrix $F$, which identifies the locations whose windowed time histories deviate significantly from the bulk or typical behavior according to the selected linear subspace model.

Note that the model we employ here can be interpreted in terms of a low-rank plus outlier data model. In particular, let us denote by $X_t$ the $n \times m_x m_y$ matrix formed by assembling the vectorized data from each region at time step $t$. Our approach is equivalent to decomposing $X_t$ as

$$X_t = L_t + C_t \tag{4}$$

where $L_t$ is a low-rank matrix and $C_t$ is “column-sparse”, having only a few non-zero columns, which can be interpreted as vectors lying outside of the subspace spanned by the columns of $L_t$. Identifying the “outlier” vectors, which correspond to the non-zero columns of $C_t$, is the aim of so-called Robust Principal Component Analysis (or Robust PCA), and many procedures have been examined to address this problem Gnanadesikan and Kettenring (1972); Fischler and Bolles (1981); Torre and Black (2003); Ke and Kanade (2005), including recently-proposed convex formulations Chandrasekaran et al. (2011); Candes et al. (2011); Xu et al. (2012).

Here, for the sake of simplicity, we perform the decomposition of $X_t$ using an approach based on standard matrix operations. Specifically, we identify a rank-$r$ approximation $L_t$ of $X_t$ as a solution to the optimization problem

$$L_t = \underset{M \,:\, \mathrm{rank}(M) \leq r}{\arg\min} \; \| X_t - M \|_F^2 \tag{5}$$

where the notation $\|\cdot\|_F$ corresponds to the matrix Frobenius norm. The Eckart-Young Theorem establishes that the solution of (5) is easily obtained via the singular value decomposition (SVD) Eckart and Young (1936); Stewart (1993). Write the singular value decomposition of $X_t$ as

$$X_t = U \Sigma V^H \tag{6}$$

where $U$ and $V$ are unitary, $\Sigma$ is a rectangular diagonal (and nonnegative) matrix whose diagonal entries are the singular values ordered from largest to smallest, and the superscript $H$ denotes the matrix Hermitian (complex conjugate transpose) operation. Then, the solution of (5) is obtained as

$$L_t = U \Sigma_r V^H \tag{7}$$

where $\Sigma_r$ is obtained from $\Sigma$ by keeping only the $r$ largest singular values and zeroing the rest. Note that the first $r$ columns of $U$ comprise an orthonormal basis for the common subspace. In the context of (3) above, we may let $U_r$ be the matrix comprised of only the first $r$ columns of $U$; then $P_{\mathcal{L}_t} = U_r U_r^H$. In practice, the dimension $r$ of the common subspace is inferred empirically from the knee of the plot of (ordered) singular values, which corresponds to the point beyond which the decay in the singular values becomes gradual (see Fig. 3).
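The rank-$r$ split and the per-region outlier energies can be computed with a few lines of standard linear algebra. The following is a minimal sketch (function and variable names are ours):

```python
import numpy as np

def low_rank_plus_outliers(X, r):
    """Split X (n x m, one vectorized region per column) into a rank-r
    part L and a residual C = X - L; the energy of each column of C
    measures how poorly that region fits the common r-dim subspace."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vh[:r, :]    # Eckart-Young rank-r approximation
    C = X - L                              # outlier (residual) estimate
    col_energy = np.sum(C ** 2, axis=0)    # squared Euclidean norm per region
    return L, C, col_energy
```

Columns with large `col_energy` are the candidates for salient regions at that time step.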

From an implementation standpoint, our inference approach proceeds as follows. At each time step, we construct the low-rank approximation and a corresponding estimate of the column-sparse outlier matrix, and we identify the columns of the latter that have non-negligible energy. The significant columns correspond to salient regions in the domain, i.e., regions with a high likelihood of containing an anomaly. These estimates are finally aggregated over the distinct time steps of the considered time window to account for the complete scattered wavefield induced by each anomaly. The identification of the significant columns inevitably involves the definition of appropriate thresholding levels, which can vary slightly from case to case. Additional implementation details are provided in the next section in the context of specific applications.
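The aggregation over time steps can be sketched as follows; expressing the threshold as a fraction of the per-step maximum energy is a tunable choice of ours, not a value prescribed here:

```python
import numpy as np

def saliency_map(energies, tau=0.9):
    """Aggregate per-time-step outlier energies into a saliency score.

    energies : (T, m) array; energies[t, j] is the outlier energy of
               region j at time step t (e.g., an SVD residual energy)
    tau      : fraction of the per-step maximum energy above which a
               region is flagged salient at that step (a tunable choice)

    Returns, for each region, the fraction of time steps in which it
    was flagged salient relative to the other regions.
    """
    energies = np.asarray(energies, dtype=float)
    peaks = energies.max(axis=1, keepdims=True)     # largest energy per step
    flags = (energies >= tau * peaks) & (peaks > 0)  # salient at this step?
    return flags.mean(axis=0)                        # per-region score in [0, 1]
```

Thresholding the resulting scores then yields the highlighted regions of the saliency map.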

3 Application to two-dimensional transversal wave propagation

The approach presented above is now tested in a simulated environment using data generated via the finite element method (FEM). The application selected to test the method is the benchmark problem of circular-crested transversal waves excited in a thin plate via application of an out-of-plane point force at one node. From the point of view of the wavefield topology, this scenario is representative of both flexural plate waves and guided Lamb waves; the following results and conclusions about the efficacy of the anomaly detection algorithm are therefore valid for both. In this work, flexural waves are modeled according to Mindlin's plate theory, which involves three nodal degrees of freedom, one out-of-plane displacement $w$ and two rotations $\theta_x$ and $\theta_y$, although the anomaly detection analysis will be restricted to the deflection $w$, which is, from the point of view of an equivalent laser-based acquisition, the degree of freedom most directly measurable from a surface scan.

The choice of flexural waves is here primarily dictated by the relative simplicity of the corresponding FEM model and the advantages (in terms of ambiguity removal) resulting from dealing with a single-mode wave solution. The excitation is applied at one corner of the plate in order to maximize the radius of the propagation domain for a given mesh size. The time history of the load is an $N_b$-cycle narrow-band burst, as is customary in ultrasonic testing. The domain is taken to be square, and its dimensions are selected jointly with the material properties and the carrier frequency of the excitation such that the plate side spans several carrier wavelengths $\lambda$ of the induced wave. Note that the analysis is conducted here in fully non-dimensional form to emphasize the scalability of the results, which is an important feature of this treatment; the simulations employ the material properties of aluminum and a square plate geometry of small uniform thickness, with the excitation carrier frequency setting the carrier wavelength of the wavefield. Exploiting the square shape, a partition in square regions of identical size is adopted in the detection process. A structured mesh comprising square elements is used to generate an accurate solution free of unwanted numerics-induced dispersive and directional effects. This data field can be a posteriori coarsened via under-sampling of the nodal solution to explore the robustness of the method against coarse data acquisition.

Figure 3: Knee in the singular value curve

3.1 Benchmark problem 1. Scattered defects

(a) Final snapshot of wave motion
(b) Saliency map
(c) Identification of salient regions
Figure 4: Anomaly detection via saliency map constructed using a coarse region partition. At this level of refinement, the method over-predicts the salient regions, thus providing a moderately accurate but conservative estimate.

Three anomalies are scattered in the domain at the locations marked in Fig. 4(c). The anomalies are introduced by changing the material properties of each finite element that contains an anomaly point. In this example, the Young's modulus and density are increased inside the defective elements by two orders of magnitude to simulate a distribution of stiff inclusions in the material. Let us recall that the material is homogeneous and isotropic and in its pristine state other than at the locations where the anomalies are explicitly introduced. A snapshot of the computed wavefield in the presence of defects is shown in Fig. 4(a). The saliency analysis is first implemented using a coarse partition of the domain, as shown in Fig. 4(c), such that each region extends over a coarse block of mesh elements. By comparison of the wavefields in Figs. 4(a) and 4(c), it can be noticed that, as a result of the region-wise time shifting performed according to (2), each regional local wavefield corresponds to the time instant at which the wave front impinges on the centroid of the region. This shifting operation de facto replicates the wave “state” from one region to the next. As a result, the intrinsic (not associated with the presence of a defect) topological differences between regions are limited to two factors: a) a phase shift (observable by comparing the wave crests) due to the difference between phase and group velocity ($c_g = 2 c_p$ according to thin-plate theory); b) an increase in radius of curvature due to the time evolution of the radius of the circular-crested wave front. In this way, the windowing procedure highlights the salient features associated with extrinsic mechanisms (in this case the scattering induced by the inclusions), thus minimizing the ambiguity between real and spurious sources of salient behavior in the detection process. The white regions in Fig. 4(c) represent peripheral portions of the domain where the wave has not propagated within the simulated time window; the wavefield there is identically zero, and the method is naturally incapable of identifying any anomalies in those sectors. It is also worth pointing out that, while the anomalies are interpreted in this example as defects, they can represent any structural feature that acts as a source of deviation from the nominal behavior of the plate, including fastener holes, thickness changes, etc. In fact, at this stage of development, the method would not be able to discriminate between these different sources of saliency.

The time histories collected at the nodes of each region are used to construct the matrices fed to the optimization problem of (5), with a window of $T$ time instants following the estimated time of flight considered for each region. The low-rank approximation is obtained by selecting the subspace dimension $r$ from inspection of the knee plot of Fig. 3. With the information from each region at each time instant, a saliency map is constructed as shown in Fig. 4(b) by counting how many times the energy of the corresponding column of the outlier estimate is sufficiently large (here we identify all columns whose energy exceeds a fixed fraction of the largest outlier energy found in any region at the same time snapshot). The saliency map tells us how consistently each region is classified as salient (with respect to the surrounding ones) over the whole time window of interest: the value assigned to a region is the fraction of time instants at which that region was found to be salient. By highlighting the salient regions on the partitioned domain, the results can be verified against the known positions of the scatterers, as shown in Fig. 4(c).

It can be noted that, at this level of refinement, the method over-predicts the salient regions, thus providing a moderately accurate but conservative estimate. Let us now explore the sensitivity of the method to refinements of the region partition by considering two progressively finer partitions, as shown in Fig. 5(a) and Fig. 5(b), respectively. In the first case, the saliency analysis pinpoints the defects accurately, the outliers are avoided, and the accuracy in defect localization has grown at the rate of the partition refinement. Reasons for the improved performance with respect to the coarser case are to be found in the fact that smaller regions allow focusing on limited wavefields that are less likely to be affected by spurious effects (such as boundary reflections, or reflections from the surrounding scatterers) originating in neighboring regions. Another explanation is that smaller regions are accompanied by tighter and more localized time referencing, which allows pinpointing the instances of scattering formation more precisely.

The performance of the finest partition offers some interesting insight into the details of the outlier modeling philosophy. The defects are again found and localized very precisely, and the small size of the regions almost allows a determination of their point locations. However, a few outliers appear in the neighborhood of the excitation source. A possible explanation is that the wavefield data structure inside small regions is very sensitive to differences in the curvature of the wave crests: the pronounced curvature observed in the regions surrounding the origin provides them with an element of saliency that competes with the one due to the scattering mechanisms. It is important to report that the neighborhood of the origin is found to be critical in many simulations, which suggests that modest improvements of the algorithm, geared toward eliminating this weakness, are warranted.

(a) regions
(b) regions
Figure 5: Saliency analysis results via refinement of region partition

3.2 Benchmark problem 2. Line defect

Let us test the method against the case of line defect, i.e. a region of anomaly extending over a considerable portion of the domain that overlaps with multiple region partitions at any stage of refinement. These conditions can be representative of a crack, or a region of de-bonding between material phases, or a path of void coalescence (which typically precedes the onset of crack). The defect is simulated by reducing by several orders of magnitude the Young’s modulus and density of the material inside a one-element thick layer extending from to . The results of the analysis are shown in Fig. 6 for increasing levels of refinement. With regions (Fig. 6(a)), the method does a good job at coarsely bounding the region of the defect. The saliency criteria embedded in the algorithm emphasize the regions surrounding the tips of the defect, as this is where the major local perturbations of the wavefield are observed in terms of perturbation of the curvature of the wave crests. The refinement (Fig. 6(b)) allows complete localization of the defect path, but slightly over-predicts in the tip areas, and features one outlier by the origin, both effects being traceable back to the considerations on the crest curvature made above. The refinement (Fig. 6(c)) interestingly eliminates the ambiguity in the neighborhood of the origin, captures and refines the localization of the defect tips and provides a spotty reconstruction of the defect path. Note that the behavior of the method in the neighborhood of the source shows marked differences between the results of Fig. 5 and those of Fig. 6, for the same excitation conditions. This is an intrinsic feature of saliency-based detection methods, due to the very essence of saliency, which is not a property of a region, but a property of the region with respect to its neighbors. The same regional data are deemed salient or not relative to the salient features displayed by other regions in the domain. 
In this case, the energy available outside of the common subspace in the regions surrounding the excitation is comparable to that of the regions containing the scattered anomalies, but considerably less than that of the regions containing the crack.
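The regional saliency computation can be sketched in code. The following is a minimal illustration, not the paper's exact formulation: the data shapes, the shared mean-centering, and the function name `saliency_scores` are our own assumptions. Each region is scored by the fraction of its energy lying outside a common low-rank subspace estimated by truncated SVD on the pooled data, so that regions deviating from the common model (as the defect-containing ones do) receive large scores.

```python
import numpy as np

def saliency_scores(region_data, rank):
    """Score each region by the fraction of its energy lying outside a
    common low-rank subspace estimated jointly from all regions.

    region_data: list of (n_samples, n_features) arrays, one per region.
    rank: assumed dimension of the common subspace (a tuning parameter).
    """
    # Pool all regional measurements and estimate the common subspace
    # from the top right singular vectors (truncated SVD).
    X = np.vstack(region_data)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    V = Vt[:rank].T                      # basis of the common subspace

    scores = []
    for R in region_data:
        Rc = R - mean
        resid = Rc - Rc @ V @ V.T        # component outside the subspace
        scores.append(np.sum(resid**2) / np.sum(Rc**2))
    return np.array(scores)
```

A region whose wavefield is well explained by the common subspace scores near zero; an anomalous region retains a large residual fraction.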

(a) regions
(b) regions
(c) regions
Figure 6: Saliency analysis results via refinement of region partition - Line defect.

4 Assessment of the method’s flexibility

4.1 Spatially deterministic and random sub-sampling

The results presented so far rely on dense sampling ( nodal points within each of the regions). In this section we explore the limitations of the saliency-based anomaly detection strategy when only a subset of the points from each region is used. The benefit of this kind of approach arises when dealing with experimentally acquired wavefields, where acquiring and processing less data would enable a faster acquisition phase, thus increasing the efficiency of the entire anomaly detection process. Coarser data are easily obtained here from the FEM-computed wavefields by sub-sampling the data in space.

We explore two main classes of sub-sampling techniques. The first three cases (Figs. 7(a), 7(b) and 7(c)) depict the results obtained using a random sub-sampling approach, with spatial sub-sampling ratios , and , respectively. As expected, the performance degrades relative to the reference exhaustive case as decreases. However, in all cases depicted in the figure, the algorithm does manage to identify the correct neighborhoods of the defects. Based on the figures, the practical implications of this appear to be minor, as the regions identified using the sub-sampled data can be interpreted as slightly more conservative estimates of the actual defect locations.

However, it is important to note that random sub-sampling inevitably introduces some variability in the results. We investigate this variability empirically here. We conduct independent trials (each corresponding to a different randomly generated subsampling pattern for each region) for each of several downsampling rates, and report the average number of regions corresponding to correct discoveries and false discoveries. We also compute two “regional” error metrics that take into account whether the detections fall within the immediate vicinity of the regions containing the true anomalies. These regional criteria are evaluated as follows. If, in a given trial, at least one of the discovered anomalous regions is an immediate neighbor of the region containing a true anomaly (or is that region itself), the anomaly is deemed discovered. Conversely, a discovery is deemed false only if it occurs at least one region removed from any region containing a true anomaly. We also report the average number of false discoveries that occur in the region at the origin of the domain, motivated by the empirical observation that false discoveries often occur near the excitation source. The results are collected in Table 1.
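The “regional” error metrics described above admit a compact implementation. The sketch below is hypothetical (the 8-connected neighborhood and the `(row, col)` region indexing are our own assumptions, not taken from the paper): a true anomaly counts as discovered if a detection lands in its region or an immediate neighbor, and a detection counts as a false discovery only if it is at least one region removed from every true anomaly.

```python
def regional_metrics(detected, true_anomalies):
    """Evaluate detections on a grid of regions with one-region tolerance.

    detected, true_anomalies: iterables of (row, col) region indices.
    Returns (number of anomalies deemed discovered,
             number of detections deemed false discoveries).
    """
    def adjacent(a, b):
        # Same region or an immediate (8-connected) neighbor.
        return abs(a[0] - b[0]) <= 1 and abs(a[1] - b[1]) <= 1

    discovered = sum(
        any(adjacent(d, t) for d in detected) for t in true_anomalies)
    false_disc = sum(
        not any(adjacent(d, t) for t in true_anomalies) for d in detected)
    return discovered, false_disc
```

Averaging these counts over many random sub-sampling trials yields the entries reported in Table 1.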

It can be noted that, while the performance decays as expected when the sub-sampling ratio decreases, the algorithm retains quite remarkable detection capability even for very coarse acquisitions. Further, the “regional” error metrics underscore the empirical observation above – detections overwhelmingly occur in the immediate vicinity of the anomalous regions, and false discoveries (when present) are predominantly near the excitation source. This suggests that simply disregarding these regions in the final estimate may be a useful, if simple, heuristic.

Another visual look at the variability introduced by random sampling is obtained by applying sub-sampling to the line defect problem with a region partition. Figure 8 shows the saliency maps obtained using four randomly generated () sub-sampling patterns. While the results feature the expected variability, they consistently pinpoint the anomalous region, often capturing the length of the crack path as well as the thickness of the area of influence of the defect.

Table 1: Effect of sub-sampling on several metrics of saliency identification (rows: Correct Discoveries, False Discoveries, Correctly Discovered Regions, Falsely Discovered Regions, and False Discoveries at the Origin). There are true anomalies. Results presented here are averaged over random trials, each corresponding to a different random selection of sub-sampling locations, for several effective sub-sampling rates .

We also examine the performance of our procedure using deterministic sub-sampling sequences. We select a family of sequences where the evaluation points sit on a double cross (horizontal-vertical and diagonal, corresponding to a sub-sampling ratio ), where the choice of topology is dictated by the attempt to have sufficiently dense sequences of sensing points along three important directions of propagation. The results in Fig. 7(d) indicate that this sensing topology performs optimally in terms of correct saliency detection, as all the defects are identified without outliers. We can further coarsen the sampling within the same family of patterns by downsampling by a factor of two () or four () along each direction of the cross (Fig. 7(e) and 7(f)). The results, as expected, decay; however, the capability of the method to identify the correct salient regions is preserved, as a testament to the validity of the selected sequence pattern. It is worth noting that, although the cases of Fig. 7(f) and 7(c) enforce roughly the same sub-sampling ratio, they differ profoundly in that one is random and the other deterministic. As a result, although the outcomes plotted in Fig. 7 happen to be comparable, the one of Fig. 7(c) is subject to the variability documented in Table 1, which indicates an overall inferior performance. This result suggests that, while random sub-sampling represents an easy-to-implement and sufficiently reliable option, it may be possible to design deterministic sampling sequences with superior performance in the context of the proposed saliency-identification procedure.
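A possible construction of the double-cross sampling pattern is sketched below. The precise arm placement and thinning used in the paper are not fully specified here, so the centered arms and the `step` parameter are assumptions for illustration: the mask keeps the central row, central column, and the two main diagonals, optionally retaining only every `step`-th point along each arm to mimic the further downsampling by factors of two or four.

```python
import numpy as np

def double_cross_mask(n, step=1):
    """Boolean n-by-n mask selecting points on a 'double cross':
    the central row and column plus the two main diagonals, thinned by
    keeping every `step`-th point along each arm."""
    mask = np.zeros((n, n), dtype=bool)
    idx = np.arange(0, n, step)
    mid = n // 2
    mask[mid, idx] = True           # horizontal arm
    mask[idx, mid] = True           # vertical arm
    mask[idx, idx] = True           # main diagonal
    mask[idx, n - 1 - idx] = True   # anti-diagonal
    return mask
```

Applying such a mask to the FEM-computed wavefield retains dense point sequences along three propagation directions while discarding the rest of the grid.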

4.2 Limits of sub-sampling for anomaly detection?

(a) Random sub-sampling ()
(b) Random sub-sampling ()
(c) Random sub-sampling ()
(d) Deterministic double cross sub-sampling ()
(e) Deterministic double cross sub-sampling ()
(f) Deterministic double cross sub-sampling ()
Figure 7: Effect of subsampling with random and deterministic patterns - Scattered defects.

Our consideration of spatial subsampling approaches was motivated here by a desire to alleviate the computational burden on the data processing component of the inference procedure. On the other hand, it is, of course, well known that continuous-time bandlimited signals (such as the propagating wavefields here) can often be exactly recovered from a collection of their discrete samples. Indeed, suppose that the maximum (spatial) frequency present in the wave data is . Then, spatial sampling at a rate of at least (the Shannon/Nyquist rate) is sufficient to enable exact recovery of the full wavefield data. For a multiband signal – one whose frequency domain representation consists of a collection of possibly disjoint spectral bands – the minimal sampling rate required for reconstruction corresponds to the total size (or Lebesgue measure) of its spectral support. This condition, called the Landau sampling rate Landau (1967), is often less restrictive than the Shannon/Nyquist rate. Indeed, several works have examined the recovery of multiband signals at sampling rates approaching the Landau rate – see, for example, Cheung and II (1990); Cheung (1993); Vaidyanathan and Liu (1990); Feng and Bresler (1996); Bresler and Feng (1996).

This suggests an alternative, brute force approach, where the subsampled data is first used to recover the wavefield in the whole structure. Then, the full reconstructed wavefield is processed (e.g., using the proposed approach) to identify anomalous regions. In light of this, performing an inference of structural anomalies from undersampled data is perhaps most interesting if the spatial sampling rate is sufficiently low, such that it would be impossible to reconstruct the wavefield from the sampled data. How far are we from this “sub-Landau” sampling goal?

The Landau rate can be estimated by inspection of the signal in the spectral plane. Fig. 9 shows the spectral representation of the wave of Fig. 4(a) with and without defects, obtained by taking the two-dimensional Discrete Fourier Transform (2D-DFT) of the wavefield and where are the components of the in-plane wavevector normalized by the size of the domain. Here, a back-of-the-envelope calculation of the area of the occupied spectrum, relative to the total area, shows that the Landau rate corresponds to downsampling the data to about 2.5%. In contrast, here we saw reasonable performance only for downsampling rates at about 6-7%, depending on the sampling strategy and the nature of the defect. This suggests that further improvements are necessary to achieve the goal of sub-Landau defect identification using our proposed approach.
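The back-of-the-envelope Landau-rate estimate described above can be sketched as follows. This is an illustrative calculation under our own conventions (the 99% energy threshold and the function name are assumptions, not the paper's exact procedure): it measures the fraction of 2D-DFT bins needed to capture nearly all of the spectral energy of a wavefield snapshot, which bounds the occupied fraction of the spectral plane.

```python
import numpy as np

def occupied_spectrum_fraction(wavefield, energy_keep=0.99):
    """Estimate the fraction of the 2D spectral plane occupied by the
    signal: the smallest fraction of DFT bins whose (sorted) power sums
    to `energy_keep` of the total. `wavefield` is a 2D spatial snapshot."""
    P = np.abs(np.fft.fft2(wavefield))**2
    p = np.sort(P.ravel())[::-1]          # bin powers, largest first
    cum = np.cumsum(p) / p.sum()
    k = int(np.searchsorted(cum, energy_keep)) + 1  # bins needed
    return k / P.size
```

A nearly bandlimited field occupies a tiny fraction of the spectral plane, while broadband noise occupies most of it, consistent with the estimate of roughly 2.5% for the wavefields considered here.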

We note that in real applications there may be advantages to adopting approaches like the one proposed here, which make inferences directly from the sub-sampled data. Namely, as alluded to above, by virtue of the reduced data set sizes, inferring anomalous regions from the sub-sampled data directly enjoys the benefit of lower computational complexity relative to the same approach applied to the full wavefield data. Overall, it remains to be seen whether accurate anomaly detection can be achieved by our method using sub-Landau sampling rates, or whether the Landau rate represents a fundamental limit on the sub-sampling rate using our approach. A full investigation of these topics is left for future work.

5 Conclusions and recommendations for future work

This work investigates a saliency-based approach for the detection and localization of anomalies in two-dimensional structures probed with propagating waves. The concept of saliency is here associated with the notion of sparsity, and is interpreted in terms of deviations from a “common” linear model for local (in space and time) subsets of the propagating wavefield data. This concept is adapted to the wave problem and mathematically formalized as a PCA problem performed on partitions of the domain in conjunction with a time shift correction of the waves inside each region to remove ambiguity. The detection capability of the method has been tested against two benchmark problems in damage detection (distributed defects and line defect) and is found to be a viable approach over a large range of region partitions and spatial sampling rates. Indeed, the performance of the proposed method degrades gracefully as the sample rate decreases.

Figure 8: Effect of subsampling with random () patterns - Line defect.

We note also that many recent efforts have proposed convex formulations of the low-rank plus outlier matrix decomposition problem Chandrasekaran et al. (2011); Candes et al. (2011); Xu et al. (2012), while the approach proposed here utilizes simple matrix computations (low rank approximation by truncated SVD). Other recent works have examined the low-rank plus outlier decomposition problem from limited or compressive measurements Waters et al. (2011); Wright et al. (2012) – a task that is very similar in spirit to the subsampling approach presented in this work. It will be interesting to see whether these convex formulations provide any improvements in our anomaly localization performance, and whether the theoretical foundations outlined in these works can provide additional insight into our problem.
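The simple matrix computation referred to above, a best low-rank approximation by truncated SVD (optimal in the Frobenius norm by the Eckart-Young theorem, Eckart and Young (1936)), can be sketched as follows; the function name and the framing of the residual as the outlier part are our own choices for illustration.

```python
import numpy as np

def low_rank_plus_outlier(X, rank):
    """Split X into a best rank-`rank` approximation L (truncated SVD,
    optimal by Eckart-Young) and the residual S = X - L, whose large
    entries flag outliers with respect to the low-rank model."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, X - L
```

Unlike the convex formulations cited above, this decomposition requires only a single SVD, at the cost of weaker theoretical guarantees when gross outliers corrupt the dominant singular directions.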

It is worth noting that the computational aspect of our approach is reminiscent of the well-known DORT (Décomposition de l'opérateur de retournement temporel) detection approach Prada and Fink (1994); Prada et al. (1995, 1996), in that the DORT approach also relies on eigenanalysis of certain matrices in order to locate defects. There are, however, a few key differences between DORT and our proposed approach.

First, our proposed approach is based on detecting subtle differences or anomalies in the temporal transient behavior of the propagating wavefield generated from a single actuator, using multiple measurements obtained in the immediate vicinity of potential defects. In contrast, the DORT approach utilizes multiple transmitters and receivers (transducers), and relies on a frequency domain analysis of the corresponding multiple input multiple output system. In DORT, each path from transmitter to receiver is modeled as a linear time-invariant (LTI) system, the essential idea being that the presence of pointlike scatterers in the medium will alter the responses of each of the channels in an identifiable way. Our approach relies on the notion that defects or scatterers will locally (in space and time) perturb the wavefield as it interacts with the defect. (We note that frequency domain analysis of signals transmitted and received by multiple transducers also forms the basis of detection techniques based on the Multiple Signal Classification, or MUSIC, procedure – see, for example, Foroozan and ShahbazPanahi (2012).)

Further, and somewhat more subtly, the DORT approach implicitly assumes that the number of transducers is greater than the number of (resolvable) pointlike scatterers in the medium. For instance, the DORT approach applied to a system with transducers prescribes describing the resulting multiple input multiple output system in the frequency domain (and at a particular frequency) in terms of a transfer matrix. This matrix is learned or approximated via experimentation, and the number of nonzero eigenvalues of this matrix corresponds to the number of resolvable point-like scatterers. Thus, DORT requires that the number of defects (say ) be smaller than , else the results may be ambiguous. Stated another way, if the rank of the transfer matrix in the DORT approach is less than the number of pointlike scatterers (as it would be in a single input multiple output system for any ), DORT will be unable to identify the distinct scatterers. In contrast, we find in our initial investigations here that our approach succeeds in identifying the locations of multiple pointlike defects as well as crack-like defects using measurements of a propagating wavefield generated by only a single actuator. It is however important to emphasize again that, while it relies on a single actuation source, our method utilizes a significant number of sensing points to achieve the performance levels discussed above.
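The rank argument above can be illustrated with a toy computation. The sketch below is not the full DORT procedure; it only demonstrates the counting step, under the standard point-scatterer model in which the transfer matrix is a sum of rank-one contributions (the random steering vectors and the function name are stand-ins of our own).

```python
import numpy as np

def count_resolvable_scatterers(K, tol=1e-8):
    """DORT-style count: the number of significant eigenvalues of the
    time-reversal operator K @ K.conj().T, where K is the transfer
    matrix of an N-transducer array at one frequency. The count can
    never exceed N, the size of K."""
    T = K @ K.conj().T                  # time-reversal operator (Hermitian)
    w = np.linalg.eigvalsh(T)
    return int(np.sum(w > tol * w.max()))
```

With fewer transducers than scatterers the transfer matrix cannot resolve all of them, which is the limitation contrasted above with the single-actuator saliency approach.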

Another interesting direction pertains to how the measured data is utilized in the inference procedure. There is no inherent restriction to use the raw measured data itself in our inference approach; alternatively, we may consider some (perhaps nonlinear) preprocessing of the measured wavefield data (e.g., conformal mapping, projective geometry, power flow statistics, etc.) in an effort to enhance anomalous features in the data. Finally, it is intriguing to consider a multi-step sampling and detection algorithm, where the proposed subsampling approach is used in several stages to iteratively identify smaller and smaller subsets of the material domain that may contain anomalies. This sort of adaptive “coarse-to-fine” sampling strategy has been successfully employed in certain image processing tasks (e.g., Castro et al. (2004)); it remains to be seen whether similar notions can be utilized here. This is the objective of current investigations, an account of which is left for future work.

(a) Defect-free plate
(b) Scattered defects
Figure 9: charts revealing the Landau sampling rate limit and the signature of scattering-induced reflections.


J. Haupt acknowledges partial support by NSF Grant CCF-1217751.


  • Staszewski et al. (2004) W. Staszewski, C. Boller, G. Tomlinson, Health monitoring of aerospace structures: Smart sensors and signal processing, Wiley & Sons, 2004.
  • Rose (2002) J. Rose, A baseline and vision of ultrasonic guided wave inspection potential, Journal of Pressure Vessel Technology 124 (3) (2002) 273–282.
  • Giurgiutiu (2008) V. Giurgiutiu, Structural health monitoring with piezoelectric wafer active sensors, Academic Press, 2008.
  • Ihn and Chang (2008) J.-B. Ihn, F.-K. Chang, Pitch-catch Active Sensing Methods in Structural Health Monitoring for Aircraft Structures, Structural Health Monitoring 7 (1) (2008) 5–19.
  • Raghavan and Cesnik (2008) A. Raghavan, C. Cesnik, Effects of Elevated Temperature on Guided-wave Structural Health Monitoring, Journal of Intelligent Material Systems and Structures 19 (12) (2008) 1383–1398.
  • Flynn et al. (2011) E. Flynn, M. D. Todd, P. D. Wilcox, B. W. Drinkwater, A. J. Croxford, Maximum-likelihood estimation of damage location in guided-wave structural health monitoring, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science 467 (2133) (2011) 2575–2596.
  • Michaels et al. (2005) T. Michaels, J. Michaels, B. Mi, M. Ruzzene, Damage detection in plate structures using sparse ultrasonic transducer arrays and acoustic wavefield imaging, in: AIP Conference Proceedings, 2005.
  • Liu et al. (2012) L. Liu, S. Liu, F.-G. Yuan, Damage localization using a power-efficient distributed on-board signal processing algorithm in a wireless sensor network, Smart Materials and Structures 21 (2).
  • Su and Ye (2004) Z. Su, L. Ye, Lamb wave-based quantitative identification of delamination in CF/EP composite structures using artificial neural algorithm, Composite Structures 66 (1-4) (2004) 627 – 637.
  • Das et al. (2005) S. Das, A. Papandreou-Suppappola, X. Zhou, A. Chattopadhyay, On the use of the matching pursuit decomposition signal processing technique for structural health monitoring, in: Proc. SPIE Smart Structures and Materials 2005: Smart Structures and Integrated Systems, 2005.
  • Das et al. (2009) S. Das, I. Kyriakides, A. Chattopadhyay, A. Papandreou-Suppappola, Monte Carlo Matching Pursuit Decomposition Method for Damage Quantification in Composite Structures, Journal of Intelligent Material Systems and Structures 20 (6) (2009) 647–658.
  • Mallat and Zhang (1993) S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Processing 41 (12) (1993) 3397–3415.
  • Das et al. (2007) S. Das, A. Srivastava, A. Chattopadhyay, Classification of Damage Signatures in Composite Plates using One-Class SVMs, in: Proc. 2007 IEEE Aerospace Conference, 2007.
  • Michaels and Michaels (2007) J. E. Michaels, T. E. Michaels, Guided wave signal processing and image fusion for in situ damage localization in plates, Wave Motion 44 (2007) 482–492.
  • Michaels (2008) J. E. Michaels, Detection, localization and characterization of damage in plates with an in situ array of spatially distributed ultrasonic sensors, Smart Materials and Structures 17.
  • Yu and Giurgiutiu (2007) L. Yu, V. Giurgiutiu, In-situ optimized PWAS phased arrays for Lamb wave structural health monitoring, Journal of the Mechanics of Materials and Structures 2 (3) (2007) 459–487.
  • Wang and Yuan (2009) Q. Wang, S. Yuan, Baseline-free Imaging Method based on New PZT Sensor Arrangements, Journal of Intelligent Material Systems and Structures 20 (14) (2009) 1663–1673.
  • Kessler (2002) S. Kessler, Piezoelectric-based in-situ damage detection of composite materials for structural health monitoring systems, Ph.D. Thesis, 2002.
  • Kirikera et al. (2011) G. Kirikera, O. Balogun, S. Krishnaswamy, Adaptive Fiber Bragg Grating Sensor Network for Structural Health Monitoring: Applications to Impact Monitoring, Structural Health Monitoring 10 (1) (2011) 5–16.
  • Qing et al. (2007) X. Qing, J. Beard, A. Kumar, T. Ooi, F-.K., Chang, Built-in Sensor Network for Structural Health Monitoring of Composite Structure, Journal of Intelligent Material Systems and Structures 18 (1) (2007) 39–49.
  • Chung (2003) D. Chung, Composite materials: Science and applications. Functional materials for modern technologies, Springer, 2003.
  • Michaels et al. (2011) T. Michaels, J. Michaels, M. Ruzzene, Frequency-wavenumber domain analysis of guided wavefields, Ultrasonics 51 (4) (2011) 452 – 466.
  • Sharma et al. (2005) V. Sharma, S. Hanagud, M. Ruzzene, Damage Index Estimation in Beams and Plates Using Laser Vibrometry, in: Proc. International Workshop on Structural Health Monitoring (IWSHM), 2005.
  • Basri and Chiu (2004) R. Basri, W. Chiu, Numerical analysis on the interaction of guided Lamb waves with a local elastic stiffness reduction in quasi-isotropic composite plate structures, Composite Structures 66 (1-4) (2004) 87–99.
  • Alleyne and Cawley (1991) D. Alleyne, P. Cawley, A Two-dimensional Fourier Transform Method for the Measurement of Propagating Multimode Signals, Journal of the Acoustical Society of America 89 (3) (1991) 1159–1168.
  • Ruzzene (2007) M. Ruzzene, Frequency-wavenumber domain filtering for improved damage visualization, Smart Materials and Structures 16 (6) (2007) 2116.
  • Itti et al. (1998) L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. on Pattern Analysis and Machine Intelligence 20 (11) (1998) 1254–1259.
  • Itti and Koch (2001) L. Itti, C. Koch, Computational Modelling of Visual Attention, Nature 2 (3) (2001) 194–203.
  • Harel et al. (2006) J. Harel, C. Koch, P. Perona, Graph-based visual saliency, in: Proc. Neural Information Processing Systems, 2006.
  • Liu et al. (2007) T. Liu, J. Sun, N. Zheng, X. Tang, H. Shum, Learning to detect a salient object, in: Proc. CVPR, 2007.
  • Chandrasekaran et al. (2011) V. Chandrasekaran, S. Sanghavi, P. Parrilo, A. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM J. Optimization 21 (2) (2011) 572–596.
  • Candes et al. (2011) E. Candes, X. Li, Y. Ma, J. Wright, Robust principal component analysis?, Journal of the ACM 58 (3).
  • Xu et al. (2012) H. Xu, C. Caramanis, S. Sanghavi, Robust PCA via Outlier Pursuit, IEEE Trans. Inform. Theory 58 (5) (2012) 3047–3064.
  • Yan et al. (2010) J. Yan, M. Zhu, H. Liu, Y. Liu, Visual saliency detection via sparsity pursuit, IEEE Signal Proc. Letters 17 (8) (2010) 739–742.
  • Shen and Wu (2012) X. Shen, Y. Wu, A unified approach to salient object detection via low rank matrix recovery, in: Proc. Computer Vision and Pattern Recognition, 2012.
  • Patcha and Park (2007) A. Patcha, J. Park, An overview of anomaly detection techniques: Existing solutions and latest technological trends, Computer Networks 51 (12) (2007) 3448–3470.
  • Chandola et al. (2009) V. Chandola, A. Banerjee, V. Kumar, Anomaly Detection: A Survey, ACM Computing Surveys 41 (3).
  • Rice University Compressed Sensing Repository.
  • Candes et al. (2006) E. Candes, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2) (2006) 489–509.
  • Donoho (2006) D. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (4) (2006) 1289–1306.
  • Candès and Tao (2006) E. J. Candès, T. Tao, Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?, IEEE Trans. Inform. Theory 52 (12) (2006) 5406–5425.
  • Haupt and Nowak (2006) J. Haupt, R. Nowak, Signal Reconstruction from Noisy Random Projections, IEEE Trans. Inform. Theory 52 (9) (2006) 4036–4048.
  • Candès and Tao (2007) E. J. Candès, T. Tao, The Dantzig Selector: Statistical Estimation When is Much Larger Than , Ann. Statist. 35 (6) (2007) 2313–2351.
  • Gnanadesikan and Kettenring (1972) R. Gnanadesikan, J. Kettenring, Robust estimates, residuals, and outlier detection with multiresponse data, Biometrics 28 (1) (1972) 81–124.
  • Fischler and Bolles (1981) M. Fischler, R. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM 24 (6) (1981) 381–395.
  • Torre and Black (2003) F. D. L. Torre, M. Black, A framework for robust subspace learning, International Journal on Computer Vision 54 (1-3) (2003) 117–142.
  • Ke and Kanade (2005) Q. Ke, T. Kanade, Robust -norm factorization in the presence of outliers and missing data by alternative convex programming, in: Proc. Computer Vision and Pattern Recognition, 2005.
  • Eckart and Young (1936) C. Eckart, G. Young, On the approximation of one matrix by another of lower rank, Psychometrika 1 (3) (1936) 211–218.
  • Stewart (1993) G. W. Stewart, On the early history of the Singular Value Decomposition, SIAM Review 35 (4) (1993) 551–566.
  • Landau (1967) H. Landau, Necessary density conditions for sampling and interpolation of certain entire functions, Acta Math. 117 (1) (1967) 37–52.
  • Cheung and II (1990) K. F. Cheung, R. J. Marks II, Image sampling below the Nyquist density without aliasing, J. Opt. Soc. Am. A 7 (1) (1990) 92–105.
  • Cheung (1993) K. F. Cheung, A multidimensional extension of Papoulis’ generalized sampling expansion with application in minimum density sampling, in: Advanced Topics in Shannon Sampling and Interpolation Theory, Springer-Verlag, 85–119, 1993.
  • Vaidyanathan and Liu (1990) P. P. Vaidyanathan, V. C. Liu, Efficient reconstruction of band-limited sequences from nonuniformly decimated versions by use of polyphase filter banks, IEEE Trans. Acoustics, Speech, and Signal Processing 38 (1) (1990) 1927–1936.
  • Feng and Bresler (1996) P. Feng, Y. Bresler, Spectrum-blind minimum-rate sampling and reconstruction of multiband signals, in: Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing, 1996.
  • Bresler and Feng (1996) Y. Bresler, P. Feng, Spectrum-blind minimum-rate sampling and reconstruction of 2-D multiband signals, in: Proc. Intl. Conf. on Image Processing, 1996.
  • Waters et al. (2011) A. Waters, A. Sankaranarayanan, R. Baraniuk, SpaRCS: Recovering low-rank and sparse matrices from compressive measurements, in: Proc. Neural Info. Processsing Systems, 2011.
  • Wright et al. (2012) J. Wright, A. Ganesh, K. Min, Y. Ma, Compressive principal component pursuit, in: Proc. IEEE Intl. Symposium on Inform. Theory, 2012.
  • Prada and Fink (1994) C. Prada, M. Fink, Eigenmodes of the time reversal operator: A solution to selective focusing in multiple-target media, Wave Motion 20 (2) (1994) 151–163.
  • Prada et al. (1995) C. Prada, J.-L. Thomas, M. Fink, The iterative time reversal process: Analysis of the convergence, J. Acoust. Soc. Am. 97 (1) (1995) 62–71.
  • Prada et al. (1996) C. Prada, S. Manneville, D. Spoliansky, M. Fink, Decomposition of the time reversal operator: Detection and selective focusing on two scatterers, J. Acoust. Soc. Am. 99 (4) (1996) 2067–2076.
  • Foroozan and ShahbazPanahi (2012) F. Foroozan, S. ShahbazPanahi, MUSIC-based array imaging in multi-modal ultrasonic non-destructive testing, in: Proc. IEEE Sensor Array and Multichannel Signal Processing Workshop, 2012.
  • Castro et al. (2004) R. Castro, R. Willett, R. Nowak, Coarse-to-fine manifold learning, in: Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing, 2004.