Earthquake Arrival Association with Backprojection and Graph Theory

Ian W. McBrearty, Joan Gomberg, Andrew A. Delorey, Paul A. Johnson
Abstract

The association of seismic wave arrivals with causative earthquakes becomes progressively more challenging as arrival detection methods become more sensitive, and particularly when earthquake rates are high. For instance, seismic waves arriving across a monitoring network from several sources may overlap in time, false arrivals may be detected, and some arrivals may be of unknown phase (e.g., P- or S-waves). We propose an automated method to associate arrivals with earthquake sources and obtain source locations applicable to such situations. To do so we use a pattern detection metric based on the principle of backprojection to reveal candidate sources, followed by graph-theory-based clustering and an integer linear optimization routine to associate arrivals with the minimum number of sources necessary to explain the data. This method solves for all sources and phase assignments simultaneously, rather than in a sequential greedy procedure as is common in other association routines. We demonstrate our method on both synthetic and real data from the Integrated Plate Boundary Observatory Chile (IPOC) seismic network of northern Chile. For the synthetic tests we report results for cases with varying complexity, including rates of 500 earthquakes/day and 500 false arrivals/station/day, for which we measure a true positive detection accuracy of 95%. For the real data we develop a new catalog between January 1, 2010 and December 31, 2017 containing 817,548 earthquakes, with an average detection rate of 279 earthquakes/day and a magnitude of completeness of M 1.8. A subset of detections are identified as sources related to quarry and industrial site activity, and we also detect thousands of foreshocks and aftershocks of the April 1, 2014 Mw 8.2 Iquique earthquake. During the highest rates of aftershock activity, 600 earthquakes/day are detected in the vicinity of the Iquique earthquake rupture zone.

Geophysics Group, Los Alamos National Laboratory – Los Alamos, NM, USA

United States Geological Survey – Seattle, WA, USA

Peer-review disclaimer: This draft manuscript is distributed solely for purposes of scientific peer review. Its content is merely being considered for publication. Until the manuscript has been approved for publication by the U.S. Geological Survey (USGS), it does not represent any official USGS finding or policy.

Introduction

Earthquake catalogs are invaluable for monitoring seismic activity and for developing geophysical images of Earth structure. Cataloging earthquakes generally proceeds by detecting impulsive arrivals, associating the arrivals across stations to a common source, and then inferring the location, origin time, and magnitude of each earthquake. As improved instrumentation and increasing quantities of data have become available, arrival picking algorithms have markedly progressed (Gibbons and Ringdal, 2006; Yoon et al., 2015; Poiata et al., 2016; Perol et al., 2018; Ross et al., 2018; Zhu and Beroza, 2018). Along with these improvements, the number of detected arrivals generally increases by approximately an order of magnitude per unit decrease in the magnitude of sensitivity, introducing new challenges into the earthquake detection pipeline. Converting high rates of arrival picks into an accurate earthquake catalog can be invaluable in seismology, since dense catalogs improve our understanding of seismogenic processes occurring at plate boundaries (Kato and Nakagawa, 2014; Delorey et al., 2015), allow for monitoring rate changes of seismicity (Montoya-Noguera and Wang, 2017; Fiedler et al., 2018), improve the resolution of tomographic images (Peng and Ben-Zion, 2005; Watkins et al., 2018), reveal dynamic triggering and anthropogenically induced seismicity (Shapiro et al., 2006; Hill and Prejean, 2007; Peng et al., 2009; Ellsworth, 2013), and may contain information regarding the timing of future earthquakes (Rouet-Leduc et al., 2017; Lubbers et al., 2018). Dense catalogs also provide new datasets that can be incorporated into increasingly popular machine learning approaches for a variety of applications in seismology (DeVries et al., 2018; Perol et al., 2018; Ross et al., 2018; Zhu and Beroza, 2018; Zhu et al., 2018).

Existing earthquake association algorithms often struggle when arrival rates are high and there is ambiguity regarding the number of earthquakes that produce the arrival time measurements. False arrivals and unknown phase types of arrivals also introduce uncertainties. These issues have historically been most problematic during aftershock and foreshock sequences; however, in the new era of sensitive arrival picking methods, these same issues are encountered even at normal background seismicity rates. As such, there is a need for improved association algorithms to enable the development of high-fidelity, low-magnitude-of-completeness earthquake catalogs. A general purpose association algorithm should be able to handle a few interrelated problems, all connected to the key problem of earthquake detection. These include (1) an unknown number of sources in a time window, (2) false arrivals, (3) missing arrivals, (4) unknown phase types (e.g., P- or S-waves), (5) uncertainties in the velocity model, and (6) the scenario in which arrivals from several earthquakes overlap in time across a network (Fig. 1).

Figure 1: Schematic example of the earthquake source-arrival association problem. (top) A subducting plate interface with several sources producing impulsive waves that propagate to nearby seismic stations. (bottom left) The observed set of unassociated arrivals (of a single phase type) across the seismic network. (bottom right) The correctly associated set of arrivals, colored into five distinct sources and linked by a moveout curve (red dashed line).

In this paper we solve the association problem when all of these uncertainties (1-6) are present using a backprojection and graph theory framework. The method processes a dataset of arrival time picks assumed to be of unknown phase, which can include an unknown number of false arrivals and errors in arrival time estimates. The technique is generalizable to arbitrary spatial scales (from laboratory to Earth) and recording network geometries. In order to perform backprojection we rely on a velocity model; however, the method can tolerate uncertainty in the velocity model and the travel time calculations used in the analysis. In the process of inferring associations, estimates for hypocenter locations and origin times are also determined, which can be used as starting points for higher-precision methods. Effort has been taken to maintain a computationally efficient and scalable algorithm that can be used for catalog development in a wide range of applications. In conjunction with this work we also introduce a newly developed phase picker, referred to as the 'Norm Picker': a single-station, energy-based picking algorithm that is similar to an STA/LTA (Allen, 1978), but that can pick P- and S-waves nearly equally well and can also pick low-SNR arrivals.

The structure of this paper is as follows: (1) Introduction and background, (2) Methods, (3) Synthetic Tests, (4) Application, (5) Discussion, and (6) Conclusion. We perform synthetic tests of our method on simulated arrival time data from 17 stations of the Integrated Plate Boundary Observatory Chile (IPOC) of northern Chile (Table S1), and use real data from the same network for our application. In the synthetic tests, we analyze the performance of the method as the rate of earthquakes, the rate of false arrivals, and the errors on arrival time data all change. We apply the method up to the relatively challenging synthetic case in which 700 earthquakes/day and 700 false arrivals/station/day occur across the network. For our application to real seismic data, we develop a catalog for northern Chile from 2010 through the end of 2017, which has a magnitude of completeness of M 1.8 and an average detection rate of 279 earthquakes/day.

Background

Previous studies have solved the association problem in several different ways with varying degrees of generalization and computational complexity. Several approaches use heuristic-based search procedures that involve choosing a set of arrivals occurring nearby in time, removing arrivals with high misfit to the best-fitting location inferred from these picks, and iteratively adding and removing new trial arrivals until convergence to a stable set of arrivals is obtained (Sippl et al., 2013; Gibbons et al., 2016). These techniques generally suffer from high computational cost and can scale with the square or factorial of the number of arrivals considered (Johnson et al., 1997). Using waveforms (rather than arrival times), network template matching can be used to resolve the association step implicitly (Gibbons and Ringdal, 2006). Drawbacks of template matching include that templates must be known a priori, the computational cost is high, and generally only earthquakes that are similar to a template are detected. A related class of techniques transforms raw seismic time-series into characteristic functions such as analytical envelopes or STA/LTA traces, before correlating the characteristic functions between stations to detect when a common source moveout crosses the network (for a review, see Li et al. 2017). Rule-based, fuzzy-logic, and 'expert system' approaches have also been introduced (Bache et al., 1993), as well as a Bayesian-based probabilistic association method (Arora et al., 2013). Other recent developments in earthquake association include a waveform-based pairwise association technique built around the Fingerprint and Similarity Thresholding (FAST) method (Bergen and Beroza, 2018), a method using deep recurrent neural networks applied to arrival time data (Ross et al., 2019), and a convolutional neural network approach for waveform-based pairwise associations (McBrearty et al., 2019).

Backprojection, beamforming, and source-scanning are other methods for detecting earthquakes and implicitly associating arrivals (Kao and Shan, 2004; de Hoop and Tittelfitz, 2015; Nakata and Beroza, 2016). While details vary, in general these methods back-propagate waveforms from several stations into the source domain, and locations with high coalescence are identified as probable sources. Some variants replace full waveforms with finite-width kernels such as box-cars or scaled Gaussians centered about arrival times prior to implementing backprojection (Ringdal and Kværna, 1989). While effective in many scenarios, backprojection-based techniques generally yield results that are difficult to interpret when event rates are high and earthquakes occur close together in time. In our work, we seek to resolve this ambiguity by interpreting backprojection results in a graph theory context. We use the remainder of this section to describe backprojection (and its subtle nuances) in greater detail.

Notation Conventions

We define some notation conventions for clarity. The physical source space is given by $\mathcal{X}$, the origin time space is given by $\mathcal{T}$, and the combined spatial-temporal domain by $\mathcal{D} = \mathcal{X} \times \mathcal{T}$. The travel time to station $i$ from coordinate $x$ of phase type $p$ is given by $\tau_i^p(x)$; the relative travel time with respect to the network is given by $\hat{\tau}_i^p(x) = \tau_i^p(x) - \min_k \tau_k^p(x)$. The set of all travel time vectors is given by $V$; the set of all relative travel time vectors by $\hat{V}$. We let $A = \{a_{ij}\}$ represent the observed set of all $j$th arrivals on each $i$th station, and let $A(y) = \{a_i^p(y)\}$ represent a set of theoretical arrivals recorded across the entire seismic network, for each $i$th station and $p$th phase type, from a source at $y \in \mathcal{D}$. A complete summary of the key notation used throughout the paper is given in Table 1.

Symbol  Description

$x$  spatial coordinate (latitude, longitude, depth)
$\tau_i^p(x)$  travel time to station $i$ from position $x$ of phase type $p$
$\hat{\tau}_i^p(x)$  relative travel time to station $i$ from position $x$ of phase type $p$
$\sigma(x, t)$  backprojection space
$\hat{\sigma}(x, t; y)$  theoretical backprojection space for the dataset of source $y$
$\rho(x)$  prior likelihood distribution of sources
$\mathcal{D}$  spatial-temporal domain of sources
$\mathcal{X}$  spatial domain of sources
$\mathcal{T}$  temporal domain of sources
$V$, $\hat{V}$  set of all moveout vectors and set of all relative moveout vectors
$r$  residual distribution between template moveout vectors and moveout vectors
$A = \{a_{ij}\}$  arrival time dataset, for the $j$th arrival time of station $i$
$A(y)$  theoretical arrival time dataset of source $y$
$S$  candidate source dataset
$\Theta = \{\theta_k\}$  set of all template vector locations
$t_{k\ell}$  $\ell$th triggering time of template $k$
$\lambda_A$  average rate of observed arrivals, per station, per day
$w_{ij,k\ell}^p$  edge weights between arrival and triggering times (phase type $p$)
$N$, $N_\theta$  number of stations, templates
$C$, $b$  constraint matrix and constraint vector of competitive assignment
$f$, $u$  optimization vector and solution vector of competitive assignment
$\delta$, $\gamma$, $\kappa$  RBF-kernel width, triggering threshold, and cost term

Table 1: List of variables and their definitions.

The Backprojection Space

Backprojection may be summarized as the reverse time-correction and stacking of recorded seismic waves by the relative theoretical travel time from a source to each station. When applied to a set of discrete arrival times from a seismic network, $A = \{a_{ij}\}$, let

$$\sigma(x, t) = \sum_{p} \sum_{i} \sum_{j} K\big(t - (a_{ij} - \tau_i^p(x))\big) \qquad (1)$$

denote the backprojection space (BP-space), where $(x, t) \in \mathcal{D}$ represents the possible spatial coordinates and origin times of sources. $K$ represents any type of locally supported kernel (centered about zero) used to map discrete arrivals onto continuous space and time; e.g., whenever $|t - (a_{ij} - \tau_i^p(x))|$ is small, $K$ is non-zero, which amounts to placing a kernel of some finite width along the manifold $t = a_{ij} - \tau_i^p(x)$, highlighting possible spatial-temporal sources of arrival $a_{ij}$. The key feature of this mapping is that whenever a subset of arrivals in $A$ is from true source-receiver paths (e.g., a source at $y^{*} = (x^{*}, t^{*})$), then $\sigma$ will have a significant degree of coalescence at $y^{*}$, since several manifolds will all intersect and constructively interfere at this coordinate (i.e., in equation (1), all of the related kernel terms attain values near $K(0)$ for the arrivals associated with $y^{*}$). Note also that (1) is written for the general case where the directivity and phase types of arrivals are initially unknown, and each arrival is backprojected in all directions for all of the phase types under consideration.

It is insightful to note that while the global maximum of $\sigma$ would identify the source location (and origin time) when $A = A(y^{*})$, elsewhere throughout $\mathcal{D}$ a complex interference pattern is produced where subsets of the backprojected manifolds intersect. The coordinates of partial coalescence can form isolated local maxima in $\sigma$, producing a topology in the BP-space that may be complex even for simple arrival time datasets of a single earthquake (Figs. 2b,c). The complex interference patterns of the BP-space shown in Fig. 2 are typical for earthquakes in the interior of a seismic network. In teleseismic applications, or when the azimuthal coverage is poor, backprojected wavefronts are significantly more co-planar in the vicinity of the source location, reducing the number of subsidiary wavefront intersections.

Figure 2: Backprojection result for a two earthquake synthetic example. Both earthquakes occur at 50 km depth, 0 seconds origin time, and at the positions marked $y_1$ and $y_2$ by white lines in panels (b,c,d). (a) The P-wave arrival times of the two earthquakes recorded across the network, where $y_1$ is recorded on ten stations and $y_2$ is recorded on seven stations. (b,c) Backprojection results using only the arrivals of $y_1$ or $y_2$, respectively. (d) Backprojection results using the arrivals of both earthquakes. In (b,c,d) the BP-space is shown for 50 km depth and an origin time of 0 seconds, and the RBF-kernel width used in equation (1) for backprojecting arrivals is 2 seconds.

As a revealing example, we consider a case where two earthquakes occur nearby in time, with one earthquake producing 10 P-wave arrivals and the second earthquake producing 7 P-wave arrivals across the IPOC network (Fig. 2a). Taken individually, equation (1) produces a BP-space with a well defined global maximum (Figs. 2b,c) at each of the respective source coordinates. When the arrival datasets are combined, because (1) is a linearly superposable field with respect to $A$, the interference patterns overlay one another and the identification of each individual source is more ambiguous (Fig. 2d). The global maximum remains essentially invariant (from Fig. 2b) at the location of the first source (with 10 arrivals); however, the location of the second source, while remaining a local maximum, now has a coherency value on the order of many of the other local maxima in this space. In this case it is no longer evident whether it, or one of the other local maxima, is a true source when using only the BP-space. This result is general: the coherency values of subsidiary modes (or local maxima) in the BP-space may not be diagnostic of which local maxima are true; e.g., a maximum in $\sigma$ may be a false source, yet have higher coherency than another maximum that is itself a true source. This latter case is common when one earthquake has more arrivals than another nearby earthquake, since the secondary modes of the larger source may have higher coherency than even the primary maximum of the smaller source. We also note that the BP image space maps shown in Fig. 2 are for the fixed, correct values of depth and origin time, and in practice the BP-space has four degrees of freedom (latitude, longitude, depth, and origin time), each of which introduces additional structure, local maxima, and variability.

The Backprojection Space for Theoretical Data

One helpful insight into the ambiguity of the BP-space when multiple earthquakes are present in a short time-interval is that (1) predicts a priori what the interference pattern will look like throughout all of $\mathcal{D}$ for any given choice of source $y$. This type of information has utility in determining which of many local maxima in $\sigma$ may be true, because it indicates which set of local maxima are expected for any given source. If, for example, a source at $y_1$ and its associated dataset $A(y_1)$ have a predictably large coherency value at a local maximum $y_2$ under the mapping (1), we might be skeptical that the locally coherent point $y_2$ is a true source when compared against the point of higher coherency, $y_1$. Conversely, if a local maximum existed at $y_2$ where the BP-space is predictably low given $A(y_1)$, we would have stronger evidence that $y_2$ may be a second, true source, in addition to $y_1$.

This type of duality between the joint likelihood of pairs of sources will be used in a later step of our method. Analogously to the theoretical dataset, $A(y)$, we define the theoretical backprojection space as

$$\hat{\sigma}(x, t; y) = \sum_{p} \sum_{i} K\big(t - (a_i^p(y) - \tau_i^p(x))\big) \qquad (2)$$

which is equal to (1) computed for the case where $A = A(y)$. We note that (2) assumes all of the arrivals of theoretical source $y$ are contained in $A(y)$, and also that (2) is symmetric with respect to arbitrary pairs of sources $y_1$, $y_2$, in the sense that $\hat{\sigma}(y_1; y_2) = \hat{\sigma}(y_2; y_1)$ whenever the kernel is symmetric.

Association Methods

The primary objective of our method is to process the BP-space produced by any of the complications considered (1-6 in the Introduction) under the mapping (1) and still infer the correct number of sources, the associations of arrivals to their correct sources, and the identification of phase types and of false arrivals. The basic outline of our method is to detect coalescent points in the BP-space with an efficient search procedure. With these detections, we assemble a mathematical graph which connects all coalescent points (candidate sources) to arrivals consistent with those sources. We then formulate and apply a constrained linear optimization on the graph itself, which imposes physical constraints on the allowed connections (called 'edges' in graph theory parlance) between candidate sources and arrivals. This step enables us to extract the true sources from the full graph and discard the false coalescent sources. In this process, arrivals are assigned to their optimal sources, phase types of arrivals are determined, and most false arrivals are discarded. For numerical efficiency, we also describe a graph pruning technique that can be applied prior to the optimization routine, which is useful to keep the optimization problem tractable and to allow the method to scale to continuous-time applications (streams of arrival times). The pruning routine is implemented by combining information contained in both the observed and theoretical BP-spaces between pairs of sources into a mathematical graph that connects all sources to one another, and which allows for the rapid removal of the least optimal sources by using the Markov Clustering Algorithm (Dongen, 2000). All of the steps of the method are outlined in the flowchart titled Algorithm 1.

Template Moveout Vectors

To detect all local maxima in $\sigma$ for an arbitrary dataset $A$, one could theoretically compute $\sigma$ over a dense grid-search of discretized $\mathcal{D}$ and robustly find all local maxima. This process, however, would be computationally inefficient. A more practical alternative is to select a small, finite set of representative spatial backprojection coordinates, $\Theta = \{\theta_k\}$, and compute $\sigma$ for all $\theta_k$ and over all time $t$. Since absolute seismic travel times to a fixed station change only gradually as source coordinates change, the space of all moveout vectors generally has low order complexity (i.e., varies smoothly), which ensures that a set of optimally chosen representative coordinates, $\theta_k$, with their corresponding relative template travel time vectors, $\hat{\tau}(\theta_k)$, can capture the broad characteristics of the space of all relative travel time vectors, $\hat{V}$. Fig. S1 illustrates the essence of this partitioning approach, in which we have subdivided the region surrounding the IPOC seismic network into eight subregions, where for each subregion a single moveout vector approximates the true moveout vector for any source within that subregion. In this case eight subregions are shown for clarity, though in practice the optimal number is generally in the hundreds, which produces many more higher resolution subdivisions than shown in Fig. S1.

To find a representative set of template vectors that optimally captures the main characteristics of $\hat{V}$, we implement vector quantization (Vasuki and Vanathi, 2006) with the K-Means clustering algorithm (Jain, 2010). K-Means is an efficient algorithm that can identify optimal cluster centroids (e.g., template vectors) by iteratively sampling vectors from $\hat{V}$ and updating a set of $N_\theta$ template vectors, $\{\hat{v}_k\}$. The updates ensure that, statistically, arbitrary vectors in $\hat{V}$ have at least one relatively similar template vector in the set. That is, the algorithm updates the templates to iteratively minimize the distribution of nearest-neighbor residuals,

$$r(x) = \min_{k} \big\lVert \hat{\tau}(x) - \hat{v}_k \big\rVert_2 \qquad (3)$$

between vectors in $\hat{V}$ and the template vectors. As can be confirmed with empirical tests and by evaluating (3) for arbitrary template sets, this type of clustering shows that the templates can be chosen more optimally than by simply uniformly subdividing space. This is because more cluster centroids can locate in areas with high travel time variability, while a sparse set of cluster centroids can occupy regions with highly similar sets of moveout vectors (for instance, in areas with poor seismic network azimuthal coverage). In practice, the template moveout vectors returned from K-Means may not have an exact corresponding travel time vector for any given source coordinate. However, $\theta_k$ can be taken as the nearest matching source coordinate to each template vector; i.e., $\theta_k = \arg\min_{x} \lVert \hat{\tau}(x) - \hat{v}_k \rVert_2$ for each $k$.

In this application the minimum travel time was subtracted from each travel time vector prior to running K-Means, since $\hat{V}$ has a simpler structure than $V$, and only relative arrival times are important when applying backprojection techniques. We note that in practice $\tau$ can either be known on a discrete grid (as is the case for most 3D velocity model ray-tracing algorithms), or can be sampled from continuous space during the clustering algorithm's iterations if analytical formulas for $\tau$ are available. The fidelity with which the obtained template moveout vectors characterize all of the space can be evaluated by estimating the residual distribution (3), which can help guide the choice of $N_\theta$ that should be used for any given application.
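For concreteness, the template selection step can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact implementation: the names `travel_times`, `coords`, and `select_templates` are hypothetical, and we assume P-wave travel times pre-computed on a dense grid of candidate source locations.

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans

    def select_templates(travel_times, coords, n_templates=500):
        """travel_times: (n_grid, n_stations) travel times tau_i(x);
        coords: (n_grid, 3) candidate source coordinates."""
        # Work with relative moveouts: subtract each row's minimum travel time.
        rel = travel_times - travel_times.min(axis=1, keepdims=True)
        km = KMeans(n_clusters=n_templates, n_init=4).fit(rel)
        centroids = km.cluster_centers_
        d = cdist(rel, centroids)         # pairwise L2 distances
        resid = d.min(axis=1)             # residual distribution of equation (3)
        theta = coords[d.argmin(axis=0)]  # snap each centroid to nearest grid point
        return centroids, theta, resid

The returned residual distribution can be inspected to decide whether `n_templates` is large enough for a given network and region.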

Discrete Backprojection with Template Moveout Vectors

Rather than backproject to all of continuous $\mathcal{D}$, for efficiency we backproject to the discrete templates located at $\theta_k$, with the fixed template moveout vectors obtained from K-Means clustering. This procedure, which we refer to as 'discrete backprojection', can be succinctly written as a slight modification of equation (1):

$$\sigma_k(t) = \sum_{i=1}^{N} \sum_{p \in \{P,\,S\}} \min\!\left(\frac{1}{2},\ \frac{1}{2}\sum_{j} K_{\delta_{ik}}\!\big(t - (a_{ij} - \hat{\tau}_i^{\,p}(\theta_k))\big)\right) \qquad \text{(4a)}$$

$$K_{\delta}(u) = \exp\!\left(-\frac{u^{2}}{2\delta^{2}}\right), \ \text{truncated to finite support} \qquad \text{(4b)}$$

$$\text{for } k = 1, \ldots, N_{\theta} \qquad \text{(4c)}$$

The discrete backprojection algorithm stacks the backprojected, finitely supported radial basis function (RBF) kernel, $K_\delta$, with standard deviation $\delta$, for all arrivals $a_{ij}$, templates $\theta_k$, and phases $p$. The number of stations is given by $N$, and the additional $\min$ operator is used to clip the maximum contribution to $\sigma_k$ of any given station for any given fixed template and phase type. This is useful to prevent an individual station from saturating $\sigma_k$ when, for example, the station has many closely spaced false arrivals in $A$. Here, we have written (4) explicitly for the case of two phases (e.g., P- and S-waves), and the scaling is designed such that whenever $A$ contains a set of P- and S-wave arrivals recorded across all of the network equal to a template moveout vector (i.e., matching $\hat{\tau}(\theta_k)$ exactly), then $\sigma_k(t)$ attains its maximum value. For completeness we have also included the optional term, $\delta_{ik}$, in (4), to indicate that RBF-kernel widths can adapt for different pairs of source coordinates and stations if uncertainties in travel times vary across the seismic network and monitoring region.

It is straightforward to compute (4) for any arbitrary dataset $A$. At this step in the method, we are interested in detecting all local maxima in $\sigma_k(t)$ (referred to as candidate sources), regardless of whether many of the maxima are not true sources. To achieve this we choose a hyperparameter, $\gamma$, referred to as the 'triggering threshold', and detect all local peaks in $\sigma_k(t)$ exceeding $\gamma$ using basic 1D peak-finding algorithms. While (4) attains its maximum value only when every station records both phases along a template moveout, in general it is more practical to choose a relatively low $\gamma$. This will allow for the detection of sources at coordinates intermediate to any of the $\theta_k$, detection of sources with noisy arrival time estimates (or uncertainties in travel time calculations), and also the detection of small magnitude sources that do not produce arrivals on all stations. Since each template has a characteristic source location, $\theta_k$, the set of all local maxima in (4) can be assigned equivalent sources in $\mathcal{D}$; the set of all candidate sources is given by $S = \{(\theta_k,\, t_{k\ell} - \min_{i,p} \tau_i^p(\theta_k))\}$, for each $\ell$th triggering time of the $k$th template, $t_{k\ell}$. The time-offset from the triggering time is required since the template moveout vectors encode relative arrival times (i.e., have the minimum travel time subtracted), and the correction is needed to return the template triggering time back into the absolute time domain of origin times in $\mathcal{T}$.
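A minimal numpy sketch of equation (4) for a single template is given below. The kernel truncation at $3\delta$, the helper names, and the data layout (`arrivals[i]` as pick times on station $i$, `template[p][i]` as the relative moveout of phase $p$ to station $i$) are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np
    from scipy.signal import find_peaks

    def sigma_k(t_grid, arrivals, template, delta=2.0):
        """Discrete backprojection (equation 4) for one template."""
        out = np.zeros_like(t_grid)
        for p in (0, 1):                                  # phase types (P, S)
            for i, picks in enumerate(arrivals):
                contrib = np.zeros_like(t_grid)
                for a in picks:
                    u = t_grid - (a - template[p][i])     # kernel argument
                    k = np.exp(-u**2 / (2 * delta**2)) * (np.abs(u) <= 3 * delta)
                    contrib += 0.5 * k
                # Clip each station/phase contribution so closely spaced
                # false picks on one station cannot saturate the stack.
                out += np.minimum(contrib, 0.5)
        return out

    # Candidate triggers: local maxima of sigma_k(t) that exceed gamma, e.g.
    # peaks, _ = find_peaks(sigma_k(t_grid, arrivals, template), height=gamma)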

Calculating (4) with a fixed kernel, $K_\delta$, and a finite set of template vectors (where usually a few hundred templates are sufficient) makes discrete backprojection and the identification of candidate sources highly computationally efficient. This replaces the need for a dense grid-search over the entire source domain. However, a valid criticism may be that computing (4) only yields a sparse and sub-optimal sampling of $\sigma$, since in reality none of the sources in $S$ are actually at maxima in the continuous space (e.g., a maximum in $\sigma$ restricted to $\Theta$ may be on the flank of a true peak in continuous space). Our experiments have shown that this effect on the remainder of our algorithm is relatively small as long as $N_\theta$ is large enough; however, to circumvent this issue we introduce an optional step that significantly reduces the resulting sensitivity to moveout templates. Namely, we take all candidate sources, $S$, and update their locations in continuous $\mathcal{D}$ until a local maximum is reached by using standard local gradient ascent. Such an approach is allowed since (4) is differentiable whenever $\nabla \tau$ is available. To account for the general case where $\tau$ is known only on a discrete grid and $\nabla \tau$ is unknown, we propose fitting $\tau$ with a simple fully-connected neural network, which ensures $\tau$ is continuous and differentiable inside $\mathcal{X}$ (for a demonstration, see the Supplemental Material). In this framework, locally optimizing sources in $\mathcal{D}$ is efficient, especially when using automatic differentiation software (e.g., Paszke et al. 2017), and the computations can also be sped up with GPU acceleration.
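This optional refinement step might be sketched as follows with PyTorch (Paszke et al. 2017). The network architecture, learning rate, iteration count, and the simplification to one P arrival per station are illustrative assumptions, not the configuration used in the paper.

    import torch

    n_stations = 17                           # e.g., the IPOC stations used here
    tau_net = torch.nn.Sequential(            # maps (x, y, z) -> travel times
        torch.nn.Linear(3, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, n_stations))
    # ... train tau_net by regression on the gridded travel-time table, then:

    def refine(x0, t0, picks, delta=2.0, steps=100):
        """Gradient-ascend sigma in continuous (x, t) from a candidate source.
        picks: tensor of one (P) arrival time per station, for simplicity."""
        x = x0.clone().requires_grad_(True)
        t = t0.clone().requires_grad_(True)
        opt = torch.optim.Adam([x, t], lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            u = t + tau_net(x) - picks        # kernel argument per station
            sigma = torch.exp(-u**2 / (2 * delta**2)).sum()
            (-sigma).backward()               # ascend sigma = descend -sigma
            opt.step()
        return x.detach(), t.detach()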

Graph Construction of Source-Arrival Pairs

Following discrete backprojection (and the optional local gradient ascent), a set of candidate sources, $S$, has been obtained. As discussed in the Introduction, the relative coherency values of maxima in the BP-space are not themselves sufficient to determine which sources are real or false (Fig. 2). Further, the BP-space has a strong interference pattern, and a fixed (low) triggering threshold, $\gamma$, is used to identify all candidate sources. This results in only a small subset of the total number of candidate sources in $S$ being true sources. The key component of our method is to make use of more information than simply the coherency values of maxima in the BP-space. The main insight is that, following the detection of all candidate sources, we know which arrivals (and by how much each) contributed to the coherency value of each source. Namely, by inspection of (4), we see that for any pair $(a_{ij}, t_{k\ell})$, the arrival contributed

$$w_{ij,k\ell}^{\,p} = \frac{1}{2} K_{\delta}\big(t_{k\ell} - (a_{ij} - \hat{\tau}_i^{\,p}(\theta_k))\big) \qquad (5)$$

to $\sigma_k(t_{k\ell})$ for phase type $p$. (Technically, if the clipping operator is used in equation (4), then this is the value that would have been contributed prior to the clipping operator being applied.) Since this quantity can be computed for all pairs of arrivals and sources, the set of all $w_{ij,k\ell}^{\,p}$ can also be used to represent the edge weights of a mathematical graph, $G$, that links all source-arrival pairs by a weight proportional to how much each arrival contributed to the detection of each source (which is also dependent on the phase type $p$). A large edge weight for a source-arrival (and phase dependent) pair indicates the arrival constructively interfered with several other backprojected arrivals to elucidate the source in question. Conversely, when the edge weight goes to zero, the arrival did not contribute to the detection of the source with that phase type. Over continuous-time applications, a graph of this kind will be highly sparse, since each arrival will only be connected to a small subset of the total number of candidate sources. We also note that (5) is written for the case where sources have not been locally optimized in $\mathcal{D}$; if they have been, the only change is that $t_{k\ell}$ becomes the new origin time estimate, and $\theta_k$ becomes the new source coordinate in continuous $\mathcal{X}$ (rather than being fixed at the original template values).

Given the properties of the source-arrival graph outlined, it is clear such a graph will contain useful information for interpreting the results of backprojection (Fig. 3). For instance, for an arrival dataset of a single earthquake and its associated collection of candidate sources obtained from discrete backprojection, the true source in $S$ will have large edge weights to all of the arrivals in $A$, while each false candidate source will have large edge weights to only a subset of arrivals (each candidate source being connected to a different subset of the full set of all arrivals). Moreover, since each edge weight is recorded as a phase-dependent measurement, the true source will have large edge weights of phase type P to the P-waves in the dataset, and large edge weights of phase type S to the S-waves in the dataset, but not vice-versa. In the following section we describe a constrained optimization routine that can be applied to arbitrary graphs, which can extract these inferences directly, and which also generalizes to multiple earthquakes being contained in the dataset, and to other complicating factors such as missing arrivals and false arrivals.
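As an illustration, the phase-dependent edge weights of equation (5) can be collected into one sparse matrix per phase type. The names `picks` and `candidates` below are hypothetical, and in continuous time the matrices stay highly sparse because the finite kernel support links each arrival to only a few candidate sources.

    import numpy as np
    from scipy.sparse import lil_matrix

    def build_edge_weights(picks, candidates, delta=2.0, phase=0):
        """picks: list of (station, time); candidates: list of
        (template, trigger_time) pairs from discrete backprojection."""
        w = lil_matrix((len(picks), len(candidates)))
        for m, (template, t_trig) in enumerate(candidates):
            for j, (i, a) in enumerate(picks):
                u = t_trig - (a - template[phase][i])
                if abs(u) <= 3 * delta:   # outside kernel support: weight stays 0
                    w[j, m] = 0.5 * np.exp(-u**2 / (2 * delta**2))
        return w.tocsr()                  # one sparse matrix per phase type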

Competitive Assignment

When viewed on short time scales, the source-arrival graph generally has a highly entangled set of edge connections, since any number of arrivals may coalesce to cause a source to trigger (i.e., to exceed the threshold for declaring a candidate source), and each arrival may itself contribute to several candidate sources. Our optimization scheme imposes constraints on the connections in $G$ that force arrivals to be linked to at most one candidate source (using only one phase type), while maximizing the number of source-arrival edges retained and applying a negative penalty for each retained source from $S$. This causes candidate sources to compete with one another to obtain as many connections as possible, and results in many candidate sources being discarded, so that only an optimal (small) suite of inferred true sources is retained. In the trivial case of a single earthquake's dataset and its collection of candidate sources, such an optimization would result in the one true source 'winning' the connections to each arrival, and all of the false candidate sources being discarded, since this maximizes the number of assigned arrivals while minimizing the number of active sources. The success obtained on the trivial problem, however, extends to more complicated cases by use of the full optimization algorithm, referred to as 'competitive assignment', which we now define.

The optimization problem is defined as a constrained integer linear programming (ILP) problem applied directly to the graph edge weights contained in $G$. ILPs are a type of linear optimization problem in which the solution vector, $u$, is composed of all integers (Nemhauser and Wolsey, 1988; Schrijver, 1998); in this case $u$ is binary and has an entry for each edge in the graph specifying whether to keep ($u_m = 1$) or delete ($u_m = 0$) that edge. The optimization seeks to maximize $f^{\top} u$, in which the vector $f$ is comprised of all of the edge weights in $G$, with a constraint matrix $C$ acting on $u$ that ensures that:

  • each arrival is assigned to (associated with) at most one source, and one phase type,

  • each station contributes at most one of each phase type to each source, and

  • a user-specified penalty, $\kappa$, is applied for each source that retains connections.

The competitive assignment problem can then be written succinctly as:

$$\max_{u}\ f^{\top} u \qquad \text{(6a)}$$
$$\text{s.t.}\quad C u \le b \qquad \text{(6b)}$$
$$u_m \in \{0, 1\} \qquad \text{(6c)}$$
$$m = 1, \ldots, n \qquad \text{(6d)}$$

where the precise development of the constraints, $C$ and $b$, the optimization vector, $f$, and how to read the result from the binary solution vector, $u$, involve only an indexing scheme. The scheme can be automated for any given dataset, $A$, and set of candidate sources, $S$. The notation in equations (6c)-(6d) indicates that each of the $n$ entries of the solution vector is binary. Algorithms S1-S3 summarize the steps of this scheme, and details may be found in the Supplemental Material. The binary solution vector obtained from maximizing (6) encodes the assignment of all arrivals to at most one source, the phase type of their assignment, and the 'active' or 'inactive' state of all candidate sources. Arrivals not assigned to a source are flagged as possible false arrivals. The solution represents a structured and physically realistic subgraph, extracted from the input graph, that maximizes the number and strength of all source-arrival connections retained, while minimizing the number of active sources needed to do so.

Figure 3: Graph construction and ILP optimization result for the two earthquake synthetic example of Fig. 2. (a) The BP-space of the combined dataset, as shown in Fig. 2d. True sources are marked A, B, and a false candidate source is marked C by white boxes. (b) The predicted arrival times of P- and S-waves for each of the three candidate sources (A, B, and C), overlaying the observed arrival time dataset. Below the arrival time data, the measured edge weight matrices are shown for each source-arrival pair and each phase type. In these matrices, arrivals are arranged left to right by the first earthquake's arrivals, followed by the second earthquake's arrivals. Above each matrix the sum of all edge weights contained in the matrix is given. (c) A schematic of the ILP optimization routine applied to the weight matrices shown in (b). By solving the constrained optimization problem (equation 6), the resulting solution identifies the two correct sources and the correct phase assignments of each arrival.

The competitive assignment algorithm can succeed for even complex arrival and candidate source datasets (and their associated graphs) for intuitive reasons. If multiple earthquakes are present in $A$, optimizing (6) will encourage the identification of all of the true sources, since doing so enables the assignment of the most arrivals. Since each arrival can only be assigned to one source, false sources are unlikely to be retained: all of the arrivals that originally caused constructive interference at the false sources in $\sigma$ will instead be assigned to their true sources, and the cost penalty, $\kappa$, will outweigh the benefit of keeping the false sources (even if, for instance, a few arrivals happen to be left over and could otherwise still be assigned to those false sources). Because the edge weights are measured as phase-dependent quantities, assigning each arrival to its true source with the correct phase assignment will maximize (6) more than if an incorrect phase assignment is used. In addition, underneath the backprojection mapping, most false arrivals will not align with the moveout of a true source (as long as $\delta$ is not too large); hence, (true) source-(false) arrival edge weights will be predominantly zero, and the majority of false arrivals will not be assigned to any source. (Most ILP solvers will allow a zero-weight assignment, as this neither hurts nor improves the optimization; hence, all zero edge weights should be adjusted to a small negative value before optimizing (6) so that false arrivals will be explicitly ignored.)

In Fig. 3, the graph construction and ILP optimization steps are shown for three representative candidate sources (local maxima) detected in the BP-space, for the same example of two earthquakes shown in Fig. 2. In this example, the ILP optimization resolves the ambiguity of the BP-space (Fig. 3a) to determine the correct number of sources and the phase assignments of each arrival. While the arrivals are all treated as having an unknown phase type during the backprojection step, the solution determines that all observed arrivals are P-waves. Notably, the false candidate source (marked C in Fig. 3a) has higher BP coherency than a true source (marked B in Fig. 3a), yet the optimal solution of the ILP correctly identifies source C as being false, and sources A and B as being true. This is possible because the simultaneous joint inversion for all source-arrival assignments reveals that the maximum number of source-arrival assignments is achieved by retaining sources A and B, rather than retaining source C (in combination with either source A or B) and letting source C split the arrival assignments between the two earthquakes.
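To make the optimization concrete, the following sketch solves a toy competitive assignment problem (two candidate sources, three arrivals of a single phase type) with scipy's mixed-integer solver. The variable layout is an illustrative choice rather than the paper's exact indexing scheme (Algorithms S1-S3), and the per-station phase constraint is omitted for brevity; note that zero edge weights are set to a small negative value, as discussed above.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Edge weights w[j, m] for arrivals j = 0..2, candidate sources m = 0, 1.
    w = np.array([[0.4, 0.0],
                  [0.3, 0.3],
                  [0.0, 0.5]])
    w[w == 0.0] = -1e-3                # never keep a meaningless zero edge
    kappa = 0.3                        # per-source activation penalty

    # Variables: x = [e_00, e_10, e_20, e_01, e_11, e_21, s_0, s_1], where
    # e_jm assigns arrival j to source m and s_m activates source m.
    c = -np.concatenate([w.T.ravel(), [-kappa, -kappa]])   # milp minimizes
    A = np.zeros((9, 8))
    for j in range(3):                 # each arrival: at most one source
        A[j, [j, 3 + j]] = 1.0
    for m in range(2):                 # edges allowed only if source is active
        for j in range(3):
            A[3 + 3 * m + j, 3 * m + j] = 1.0
            A[3 + 3 * m + j, 6 + m] = -1.0
    ub = np.concatenate([np.ones(3), np.zeros(6)])
    res = milp(c, constraints=LinearConstraint(A, -np.inf, ub),
               integrality=np.ones(8), bounds=Bounds(0, 1))
    print(res.x)                       # binary solution: both sources retained

With these weights the optimum activates both sources (objective 0.6), whereas keeping only the higher-coherency source would score lower, mirroring the A/B/C behavior described above.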

Continuous-Time Applications and Computational Efficiency

The number of candidate sources triggered in discrete backprojection has a strong effect on the computational efficiency, as well as the memory requirements, of the method. As the duration of the dataset increases and more earthquakes are included, inevitably the number of triggered sources increases without bound, and practical questions arise surrounding how to apply the method to continuous-time applications (without artificially subdividing $A$). A related consideration is whether any of the hyperparameters used in the method should change as seismicity conditions vary. To address both issues and maintain as automated a method as possible, we introduce a few modifications and intermediate steps (all of which are outlined in Algorithm 1 for reference). Specifically, we adaptively choose the triggering ($\gamma$) and cost ($\kappa$) hyperparameters as the rate of observed seismicity (over a time-window) changes, and we also introduce a few graph-based tools to reduce the complexity of the graphs passed to the competitive assignment optimization routine. The former ensures that increased rates of seismicity do not blow up the number of false candidate sources triggered, while the latter steps are beneficial for keeping the graphs that competitive assignment is applied to small and tractable. Most ILP solvers have computation times that grow rapidly (in the worst case, exponentially) with the number of nodes, and it is thus computationally advantageous to apply competitive assignment to as sparse and small a graph as possible.

Adaptive Triggering Threshold and Cost Term

During standard operational settings, choosing a fixed triggering threshold for all time-intervals is usually appropriate. However, when seismicity rates change during earthquake swarms or aftershock and foreshock sequences (among other possible situations), the number of triggered sources may grow rapidly and soon become computationally prohibitive. The cause of this is that the base level of the $\sigma$ space increases as more spurious arrivals are included in $A$. To maintain a computationally efficient algorithm, it is helpful to increase $\gamma$ proportionally to the background coherency level in $\sigma$, such that the triggered candidate sources still distinctly stand out above the background level of coherency.

To implement a simple adaptive choice for $\gamma$ we choose $\gamma$ to be linearly proportional to the average rate of arrivals occurring across the network in a time-interval. That is, we set

$$\gamma = c_1 \lambda_A + c_2 \qquad (7)$$

where $\lambda_A$ is the average rate of arrivals per station, per day, and the coefficients $c_1$ and $c_2$ are chosen to maintain a balance between accuracy and efficiency. In our application we have found that even a small linear scaling of $(c_1, c_2) = (0.00122, 4.9)$, allowing $\gamma$ to adapt between roughly 6 and 7 as $\lambda_A$ changes between 900 and 1800, is sufficient to allow the method to run for either modest or high rates of seismicity with little change to the resulting computation time or memory cost of the method. After $\gamma$ is decided, the cost term $\kappa$ is set accordingly in all cases. The window size over which to compute $\lambda_A$ is subjective; in our application we compute $\lambda_A$ over daily seismicity rates.
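Equation (7) with the quoted coefficients amounts to a one-line helper (a sketch; the function name is hypothetical):

    def triggering_threshold(rate_per_station_per_day, c1=0.00122, c2=4.9):
        """Adaptive gamma of equation (7); e.g., ~6.0 at 900 arrivals/station/day
        and ~7.1 at 1800, matching the range quoted above."""
        return c1 * rate_per_station_per_day + c2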

Disconnected Subgraphs

Mathematical graphs can be separated into distinct subgraphs by searching for populations of nodes that are entirely disconnected from other populations of nodes. When $A$ includes arrivals spanning a time-interval many times longer than the travel times allow, $G$ will typically be highly sparse and contain subsets of nodes that are disconnected from one another (so long as $\gamma$ is not too small). Since these divisions already separate groups of sources/arrivals from one another naturally, we extract all such disconnected subgraphs from $G$ and run the competitive assignment and graph pruning subroutines (described next) on these subgraphs individually.
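This step maps directly onto standard sparse-graph routines; e.g., a sketch using scipy, assuming `w` is the sparse arrival-by-source weight matrix from the previous section:

    from scipy.sparse import bmat
    from scipy.sparse.csgraph import connected_components

    adj = bmat([[None, w], [w.T, None]])        # bipartite adjacency matrix
    n_comp, labels = connected_components(adj, directed=False)
    # labels[:w.shape[0]] groups the arrivals; labels[w.shape[0]:] groups the
    # candidate sources; each component is pruned and optimized independently.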

Graph Pruning with the Markov Clustering Algorithm

To improve the efficiency of competitive assignment (which may become slow when an input graph has, e.g., 50 sources and 150 arrivals), we use a graph theory based tool, known as the Markov Clustering Algorithm (MCA), to discard a number of sub-optimal sources contained in $S$ prior to running competitive assignment. This reduction is achieved by identifying which sources in $S$ are most likely subsidiary local maxima of different, more optimal sources in $S$. This information can itself be revealed by a particular consideration of the observational backprojection space, $\sigma$, and the theoretical BP-space, $\hat{\sigma}$, for a given dataset $A$ and set of candidate sources $S$.

To explain the relationship between MCA, $\sigma$, $\hat{\sigma}$, and $S$, we briefly describe the essence of the MCA algorithm (it is also defined precisely in Algorithm S8). MCA is a clustering algorithm applied to stochastic matrices: matrices which define the discrete transition probabilities between any two nodes in a graph when following a random walk. On a graph, a random walk is the trajectory a particle takes when iterating a random sequence of transitions from its current node to any of the other nodes it has edge connections with, where the individual discrete transition probabilities to neighboring nodes are equal to the edge weights between those nodes. The purpose of MCA is to identify subsets of nodes (or node populations) in a graph that are more interrelated than others, as encoded by the structure of edge weights describing the transition probabilities between the nodes in the graph. Intuitively, if two populations of nodes each have many interconnecting edges and only a few weak connections between the two populations, then it is apparent that random walks spend a significantly higher proportion of time walking within either cluster than transitioning between the different clusters. MCA is simply an efficient analytical method that can be applied to stochastic matrices to extract these types of inferences directly (and it generalizes to any number of clusters). In addition, it also identifies the centroid of each cluster; i.e., the fixed-point node that the rest of the nodes in the cluster are most likely to visit when following a random walk.

This algorithm can be applied to identify which sources in $S$ are sub-optimal and enable the simplification of $G$. The stochastic matrix we use in MCA is a source-source indexed graph and is given by

$$M_{nm} = \frac{\hat{\sigma}(y_n; y_m)\,\sigma(y_n)}{\sum_{n'} \hat{\sigma}(y_{n'}; y_m)\,\sigma(y_{n'})} \qquad (8)$$

which defines the transition probabilities to account for two principles: (1) the transition probabilities are high between pairs of sources $y_m$, $y_n$ whenever the theoretical pairwise BP-space value of these sources is high (which is a symmetric quantity, independent of the observed data), and (2) the transition is asymmetrically skewed towards the source of higher observational BP coherency. This type of stochastic matrix encourages transitions between sources that are intrinsically likely to be related (as defined by $\hat{\sigma}$), while biasing transitions away from sources of low coherency towards sources of higher coherency (given by $\sigma$). These properties ensure that cluster centroids returned from MCA should be of higher coherency than the other sources in $S$ that they are intrinsically related to, given the theoretical BP-space. In practice, MCA also uses a hyperparameter (the inflation exponent, $r$) to help control how coarse or fine the resulting clusters will be. Since we use MCA only to decrease the size of the graphs (rather than to solve the detection problem completely), in general we choose a high value of $r$, which promotes returning many clusters, and we discard all sources that are not returned as a cluster centroid. Given the nature of the graphs we typically encounter, even a single earthquake tends to return several cluster centroids, though for some well isolated earthquakes, at times only a single (correct) cluster centroid (or source) is returned. In either case, we run competitive assignment on the returned reduced graphs, since competitive assignment can enforce physical constraints, while MCA cannot.

Waveform Arrival Picking: Norm Picker

We use a single-station, energy-based picking method to pick arrivals from raw seismograms in our application to data from the IPOC network. We call the algorithm the 'Norm Picker', and it is designed to reveal onsets of energy much like the STA/LTA (Allen, 1978). The method is a recursive technique that steps through waveform data over time by a small time step, and at each time step it: (1) slices a finite window of data from the mean of the envelope of all three components, (2) normalizes the slice by its RMS, (3) smooths the normalized data with a short moving-average window, and (4) stacks the slice back onto the time-interval from which it was sliced. Finally, (5) a moving-window fit of the derivative of this time-series is taken, in which local maxima above a threshold are marked as arrival picks. The algorithmic steps are defined in Algorithm S9, and examples of waveform picks are given in Fig. S2. We used the Norm Picker algorithm in our application to northern Chilean seismicity primarily because of its ease of use and its ability to pick relatively low-SNR P- and S-wave arrivals; however, other recent developments in arrival picking (e.g., Perol et al. 2018; Ross et al. 2018; Wu et al. 2018; Zhu and Beroza 2018) could also be used with our association technique.

The reason this picking algorithm is effective at detecting low-SNR arrivals, as well as being relatively insensitive to coda waves masking S-wave arrivals, is that it adapts a local view of the seismicity depending on how much energy is arriving at any given time. By normalizing the envelope of a waveform by its RMS over a window usually taken to be a few times longer than the duration of a typical earthquake, a high-SNR arrival is normalized to a dynamic range similar to that of a low-SNR arrival. When a low-SNR arrival is contained inside the window, it is amplified by the RMS normalization, helping to enable its detection. The last step, which takes a moving-window fit of the derivative of the stacked RMS-normalized envelope windows, ensures that only energy sustained over a time-interval on the order of the fitting window will produce a high derivative value, which eliminates detecting spurious spikes of energy that do not have a substantial duration in time. Furthermore, the derivative detects the onset of energy, which helps localize the arrival time.
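A hedged sketch of the Norm Picker's main loop is given below; Algorithm S9 defines the exact steps, and the window lengths, step size, and threshold used here are placeholders rather than the operational values. The input `env` is assumed to be the mean three-component envelope sampled at `fs` Hz.

    import numpy as np
    from scipy.signal import find_peaks

    def norm_picker(env, fs, T_rms=60.0, T_smooth=1.0, T_fit=3.0, thresh=4.0):
        n_rms, n_sm, n_fit = (int(t * fs) for t in (T_rms, T_smooth, T_fit))
        stack = np.zeros_like(env)
        for i0 in range(0, len(env) - n_rms, n_rms // 2):    # sliding slices
            s = env[i0:i0 + n_rms]
            s = s / np.sqrt(np.mean(s**2))                   # RMS-normalize slice
            s = np.convolve(s, np.ones(n_sm) / n_sm, "same") # short smoothing
            stack[i0:i0 + n_rms] += s                        # re-stack in place
        # Moving-window fit of the derivative: sustained onsets of energy give
        # large positive slopes, while isolated spikes do not.
        slope = np.convolve(np.gradient(stack), np.ones(n_fit) / n_fit, "same")
        peaks, _ = find_peaks(slope, height=thresh)
        return peaks / fs                                    # pick times (s)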

Earthquake Characterization

Once arrival associations and phase types have been determined, earthquakes can be relocated with standard techniques (Tarantola and Valette, 1982; Lomax et al., 2000). In the observational settings that pertain to the IPOC network, we detect earthquakes that are nearby, as well as distant from, the seismic network (e.g., at 250 km distance or more), including many earthquakes at depths of 200 km or greater. As a result, we have found that the IPOC network is poorly suited to accurately constrain earthquake locations, particularly along the plate interface to the east of the network (even when both P- and S-wave detections have been made), largely because the earthquakes are far away from the seismic stations and the velocity model is poorly known in this region. For the purposes of developing a catalog that remains comparable to existing works (Bloch et al., 2014; Kato and Nakagawa, 2014; León-Ríos et al., 2016; Sippl et al., 2018), we chose to relocate events in the NonLinLoc Bayesian framework (Lomax et al., 2000), in which we implement a prior on the source locations to bias locations closer to the plate interface, which is consistent with the predominant distribution of earthquake locations reported previously (Sippl et al., 2018). Specifically, we define the (unnormalized) prior likelihood of source locations as

$$\rho(x) \propto \exp\!\left(-\frac{\lVert x - x_{\mathrm{slab}}(x) \rVert^{2}}{2\sigma_{\rho}^{2}}\right) \qquad (9)$$

where $\sigma_{\rho}$ is set as 15 km, and $x_{\mathrm{slab}}(x)$ represents the nearest point on the Slab 1.0 Earth model of the Nazca Plate (Hayes et al., 2012) to the coordinate $x$. The norm $\lVert \cdot \rVert$ in (9) represents Euclidean distance between coordinates in a Cartesian (rather than elliptical) coordinate system.

Following the estimation of earthquake locations with maximum likelihood estimation (equation S4), earthquake origin times are estimated with an analytical formula (equation S5), and magnitudes are estimated with a local magnitude scale (equation S6) calibrated to produce magnitudes comparable with the Chilean Centro Sismológico Nacional (CSN) catalog. As shown in the Application section, the use of a prior succeeds in resolving the intrinsic uncertainties of locations (primarily of depths) to the east of the seismic network, while not having an overwhelming influence on the resulting locations in other regions, since we still detect many shallow earthquakes located near the surface, away from the plate interface, in locations previously reported to have seismicity.
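Evaluating the prior of equation (9) reduces to a nearest-neighbor query against points sampled from the slab model; a minimal sketch (the array `slab` of Cartesian slab coordinates is an assumed input):

    import numpy as np
    from scipy.spatial import cKDTree

    def log_prior(x, slab, sigma=15.0):          # sigma in km
        """Unnormalized log-likelihood of equation (9) for points x (n, 3)."""
        d, _ = cKDTree(slab).query(x)            # distance to nearest slab point
        return -d**2 / (2 * sigma**2)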

Synthetic Tests

We conduct a set of synthetic tests to evaluate the performance of our method under a range of complexity cases. Specifically, we test the performance of the algorithm as the rate of earthquakes ($\lambda_E$), the rate of false arrivals ($\lambda_F$), and the uncertainty on arrival time picks ($\epsilon$) all change. To generate synthetic arrival time datasets that resemble true observational settings, we use the following procedure:

  1. Choose $\lambda_E$, $\lambda_F$, and $\epsilon$.

  2. For a time window $T$, sample the number of earthquakes from a Poisson distribution with rate $\lambda_E T$.

  3. Sample earthquake hypocenters from the prior $\rho(x)$.

  4. Sample earthquake origin times from a Uniform distribution over the interval $[0, T]$.

  5. Calculate the theoretical times of all arrivals, and add additional error sampled from a Laplace distribution with mean 0 and scale $\epsilon$.

  6. Sample earthquake magnitudes from a Gutenberg-Richter distribution with a fixed b-value.

  7. For each earthquake, station, and phase-arrival pair, compute the predicted empirical amplitude of each arrival (using equation S6), and if it is below a threshold, delete the arrival from that station.

  8. For each station, sample the number of false arrivals from a Poisson distribution with rate $\lambda_F T$, and sample random false arrival times from a Uniform distribution over the interval $[0, T]$ for each.

These steps create realistic arrival time datasets that easily adapt between complex and simple cases by changing $\lambda_E$, $\lambda_F$, and $\epsilon$. In step (7) we use the predicted attenuation from sources to stations to control whether arrivals are detected on a station, to simulate the real situation in which small earthquakes only produce arrivals on a subset of the nearest stations. The threshold for discarding arrivals is chosen such that roughly 30% of all arrivals are missed. For our purposes, we use the same prior, $\rho(x)$, applied to locate earthquakes to generate the locations of the synthetic sources, which promotes sources that locate near the plate interface. It is worth noting that the earthquake origin times and locations are chosen independently, and there is no enforced minimum separation of sources in time or space, which enables many sources to produce overlapping moveouts across the network (Fig. 4).
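A condensed sketch of this recipe is given below; it implements steps (1)-(5) and (8), while the magnitude sampling and attenuation-based deletion of weak arrivals (steps (6)-(7)) are omitted for brevity. The callable `tau` (per-station travel times) and the pre-sampled `coords` are assumed inputs.

    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_arrivals(tau, coords, n_sta, T=86400.0,
                           rate_eq=300.0, rate_false=300.0, eps=1.0):
        """Rates are per day; T is the window length in seconds (one day)."""
        n_eq = rng.poisson(rate_eq * T / 86400.0)
        srcs = coords[rng.integers(len(coords), size=n_eq)]  # pre-sampled ~ rho
        t0 = rng.uniform(0.0, T, n_eq)                       # origin times
        picks = []                                           # (station, time)
        for s, t in zip(srcs, t0):
            for i, tt in enumerate(tau(s)):                  # theoretical times
                picks.append((i, t + tt + rng.laplace(0.0, eps)))
        for i in range(n_sta):                               # false arrivals
            n_false = rng.poisson(rate_false * T / 86400.0)
            picks.extend((i, t) for t in rng.uniform(0.0, T, n_false))
        return sorted(picks, key=lambda p: p[1])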

To run the synthetic tests, we set the hyperparameters to the same values used in our application to the real data. We set the RBF-kernel width to $\delta = 2$ s, and let the triggering threshold follow the linear scaling curve of equation (7), with $(c_1, c_2) = (0.00122, 4.9)$, to adaptively adjust for the rate of observed arrivals, $\lambda_A$, and keep the memory requirements of the method low. The cost term $\kappa$ in competitive assignment is held fixed throughout. For the experiments, we set $T = 86{,}400$ seconds (one day), and vary $\lambda_E$ and $\lambda_F$ between 100 and 700 events/day (and false arrivals/station/day), and $\epsilon$ between 1.0 and 2.5 seconds.

Figure 4: Example synthetic test results for earthquakes occurring at an average rate of 300 events/day, with 300 false arrivals/station/day and random error on arrival times sampled from a Laplace distribution with mean 0 and scale 1 second. (a) Arrivals from a set of eleven synthetic earthquakes and the predicted events that are matched to them. (b,c) Insets of several earthquakes and source predictions in (a). In (a,b,c) hollow black circles represent missed arrivals, and blue or red circles without black edges represent false arrivals incorrectly associated to an earthquake. Blue and red circles with black edges are correctly associated arrivals. Input data to the algorithm are all arrivals (grey, blue, or red) shown in (a), all of which initially have an unknown phase type.

Evaluation

The evaluation of performance is split into several components. For earthquake detections, we count the number of true detections (true positives), missed detections (false negatives), and falsely predicted sources (false positives). From these values we compute the precision ($P$), recall ($R$), and $F_1$ quality metrics (Powers, 2011), where precision is given by the ratio of true positives to true positives plus false positives, recall is given by the ratio of true positives to true positives plus false negatives, and $F_1$ is a summary statistic given by $F_1 = 2PR/(P + R)$. True detections are marked for all synthetic sources for which the nearest predicted moveout curve is within 6.5 s RMS residual of the true moveout curve. False negatives are marked when the nearest predicted moveout curve to a synthetic source has a residual greater than 6.5 s RMS. False positives are marked for all other predicted sources that are not matched to a true source. Moveout curve comparisons are used instead of thresholds on the nearest locations and origin times (between true sources and predicted sources) since moveout curves combine information from both locations and origin times into a single vector, and they are more closely related to the data themselves rather than to the specifics of network geometry and source distribution. An additional caveat to the definition of false negatives is that we do not count an earthquake as missing if the attenuation laws in step (7) leave the earthquake with too few arrivals to be detectable; in this case the number of arrivals is so few that we do not expect the method to detect the event anyway (recall that each arrival can contribute at most 0.5 to the discrete backprojection metric, so the triggering threshold implies a minimum number of arrivals). This usually occurs for at most 25 of the sources in any of the synthetic catalogs. It would be simple and perhaps clearer to delete these sources entirely; however, keeping them in the dataset makes the detection problem harder when their arrivals coincide with those of other, larger magnitude sources, which is a phenomenon that happens in the real data and is thus worth including in these tests. We also count the number of correctly associated P- and S-waves and the number of false arrivals correctly identified as false, versus those mistakenly assigned to an earthquake.
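The detection metrics then follow directly from the matched, missed, and false counts:

    def detection_scores(tp, fn, fp):
        """Precision, recall, and F1 from true/false positive and negative counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return precision, recall, 2 * precision * recall / (precision + recall)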

Results

The results of the synthetic tests are summarized in Table 2 and reported in full in Tables S2, S3. We find that as the earthquake rate and arrival time uncertainty are increased, the F1 detection accuracy statistic varies between 0.99 and 0.93, when taking the median result over all false-arrival rates tested. The false-arrival rate is found to have a relatively small effect on detection results (Table S2), causing at most a 0.03 variation in F1 over the range [100, 300, 500, 700] false arrivals/station/day. In contrast, F1 can vary by up to 0.05 and 0.04 units as the earthquake rate and arrival time uncertainty vary, respectively, over their testing ranges of [100, 300, 500, 700] events/day and [1.0, 1.5, 2.0, 2.5] seconds. In most cases precision is systematically higher than recall (on average by 0.03 units), indicating that the most common type of error is to miss an earthquake rather than to create a false source. Example synthetic earthquakes, along with the solutions obtained, are shown in Fig. 4. In this example, several earthquakes occur very closely in time, with many overlapping moveouts, yet the solution still resolves this ambiguity.

                   Precision              Recall                 F1
    Events/Day   1.0  1.5  2.0  2.5 | 1.0  1.5  2.0  2.5 | 1.0  1.5  2.0  2.5
    100          1.0  1.0  1.0  .99 | .97  .99  .98  .96 | .98  .99  .99  .98
    300          1.0  .99  .98  .99 | .98  .97  .95  .96 | .99  .98  .97  .97
    500          .99  .99  .98  .98 | .98  .96  .95  .94 | .98  .98  .96  .96
    700          .98  .99  .98  .96 | .96  .96  .95  .91 | .97  .97  .96  .94

                   P-waves                S-waves                False Arrivals
    Events/Day   1.0  1.5  2.0  2.5 | 1.0  1.5  2.0  2.5 | 1.0  1.5  2.0  2.5
    100          .99  .96  .90  .85 | .97  .94  .89  .84 | .99  .99  .99  .99
    300          .98  .95  .90  .85 | .96  .93  .87  .82 | .98  .98  .97  .97
    500          .97  .94  .89  .83 | .95  .91  .85  .79 | .97  .97  .96  .95
    700          .97  .92  .88  .82 | .94  .89  .84  .78 | .96  .95  .94  .94

    Column headers within each block give the arrival time uncertainty (s).
Table 2: A summary of the synthetic test results as the rate of earthquakes and the arrival time uncertainty change. Precision, recall, and F1 of earthquake detection accuracy are shown in the top block; the proportions of correctly associated P-waves, S-waves, and correctly identified false arrivals are shown in the bottom block. All values are the median over all false-arrival rates tested. The full set of results is given in Tables S2, S3.

The accuracy of classifying phase types and identifying false arrivals shows higher variability than the earthquake detection statistics over the range of testing parameters. We find that the proportions of correctly associated P-waves, S-waves, and false arrivals vary between 0.99-0.82, 0.97-0.78, and 0.99-0.94, respectively, when taking the median over all false-arrival rates tested. For classifying P- and S-waves, the earthquake rate and false-arrival rate cause variations of up to 0.06 and 0.05 units, respectively, while the arrival time uncertainty has the largest effect and can induce changes of up to 0.17 units. This finding, that phase association accuracy is most affected by the arrival time uncertainty, results because when observed arrival times are too far from their theoretical arrival times, the graph edge weights linking them to their correct sources are often zero. In all cases, P-waves are more accurately identified than S-waves (on average by 0.03 units). False arrival identification accuracy changes by up to 0.06, 0.01, and 0.03 units as the earthquake rate, false-arrival rate, and arrival time uncertainty vary, respectively, over their testing ranges. Notably, the rate of false arrivals has the least effect on the ability to accurately identify false arrivals, while the earthquake rate has the greatest effect. We interpret the large effect the earthquake rate has on false arrival identification accuracy as resulting from increased numbers of sources making spurious false-arrival associations more common.

Application: Northern Chile Earthquake Catalog

To demonstrate our method we apply it to 8 years of data from the IPOC network between January 1, 2010 and December 31, 2017. We use data from 17 broadband seismic stations of this network (Table S1), and run the Norm Picking algorithm (Algorithm S9) on the raw seismic data to develop a set of arrival picks before applying the association algorithm. Before making arrival picks, seismic data are bandpassed between 5 - 22 Hz to help remove teleseismic arrivals and enhance the detection of small micro-seismic earthquakes. As in the synthetic tests, the number of templates in discrete backprojection is set to 500, the RBF-kernel width is held fixed, and the triggering threshold scales linearly with the average number of arrivals per station, per day. The cost term in competitive assignment is likewise fixed to the value used in the synthetic tests.
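
For illustration, a bandpass of the kind described above can be implemented as follows; the Butterworth filter, its order, and the zero-phase choice are our assumptions, as the text does not specify the filter design.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass(trace, fs=100.0, fmin=5.0, fmax=22.0, order=4):
        """Zero-phase bandpass between 5 - 22 Hz (a sketch; filter design
        choices here are assumptions, not taken from the text)."""
        sos = butter(order, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, trace)

    # usage: filtered = bandpass(raw_counts, fs=100.0) for a 100 Hz channel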

The source region we consider spans [26.5°S, 16.5°S], [72.5°W, 66.5°W], and depths of [5 km, -250 km]. For a velocity model we use the trench-perpendicular 2D transect of Model I2 given in Comte et al. (1994), which we extrapolate north and south throughout the study region. We further assume that the P-wave velocity scales to the S-wave velocity by a factor of 1.76 (Comte et al., 1994). Given this velocity model we calculate travel times on a grid of elliptically distributed (latitude, longitude, depth) points within the source region using the Shortest-Path Ray Tracing method (Moser, 1991; Bai et al., 2007). A fully connected neural network with two hidden layers is fit to the travel time grid (Fig. S3) to ensure that the travel time function is differentiable and continuously defined between the original grid points.
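
A sketch of fitting such a network, written here in PyTorch (Paszke et al., 2017); the layer widths, activation, and training settings are illustrative assumptions, and the input/output layout (source and station coordinates in, P and S travel times out) is one plausible choice rather than the paper's exact configuration.

    import torch
    import torch.nn as nn

    # Two-hidden-layer fully connected network fit to the precomputed travel
    # time grid, so travel times are smooth and differentiable between grid points.
    model = nn.Sequential(
        nn.Linear(5, 100), nn.ELU(),   # inputs: source (lat, lon, depth) + station (lat, lon)
        nn.Linear(100, 100), nn.ELU(),
        nn.Linear(100, 2),             # outputs: P and S travel times (seconds)
    )

    def fit(model, X, T, epochs=2000, lr=1e-3):
        """X: (N, 5) float tensor of coordinates; T: (N, 2) tensor of travel times."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), T)  # mean-squared misfit to the grid
            loss.backward()
            opt.step()
        return model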

Figure 5: Representative set of 16,343 earthquakes from the catalog over the two months of 01/01/2015 - 03/01/2015. (a) Map view of events (circles) colored by depth; background color is the depth to the Slab 1.0 Nazca Plate. Pink lines are cross-section transects labelled A-A', B-B', C-C', and D-D'. (b) Cross-section display of earthquakes within +/- 0.2° of transects A-D from panel (a). Circle color represents the negative log-likelihood of the source location as calculated by the probabilistic earthquake relocation routine (equation S4). Pink lines denote the depth to the Slab 1.0 Nazca Plate along the transects.

Results and Validation

The newly developed catalog contains a high rate of seismicity, with 817,548 detections. We detect on average 279 earthquakes/day (Table 3), and estimate a magnitude-of-completion of M1.8. This is in contrast to previous works, in which 35 earthquakes/day were detected in the most complete catalog to date (Sippl et al., 2018), and 7 earthquakes/day are reported in the operational CSN catalog.

                        CSN           Sippl et al. (2018)   This Study
    Time Range          2007 - 2017   2007 - 2014           2010 - 2017
    Number of Events    30,796        101,602               817,548
    Events/Day          7             35                    279
    Matched Events      98.7%         98.3%                 -
Table 3: Catalog comparisons. Time range, number of events, and event rates are listed for the CSN catalog, the Sippl et al. (2018) catalog, and this study's catalog. The proportion of matched events between the other catalogs and our own, using an RMS residual between moveout vectors of < 6.5 seconds, is also listed.

The accuracy of the detections in our catalog is validated in several ways. We (1) confirm the broad distribution of earthquakes is largely consistent with previously reported catalogs, (2) verify that we re-detect a majority of the earthquakes in the CSN and Sippl et al. catalogs, (3) visually assess the accuracy of a set of representative earthquakes, (4) ensure the catalog contains the foreshock and aftershock sequences of the April 1, 2014 Mw 8.2 Iquique earthquake that have previously been reported (Kato and Nakagawa, 2014; Kato et al., 2016), and (5) compare the Gutenberg-Richter (GR) curves between this study's catalog and those of Sippl et al. and the CSN.

Figure 6: Subset of 531 detected earthquakes/blasts from the events shown in Fig. 5. Events selected have negative log-likelihood location values > 10. (a) Map view of events (circles) colored by depth; background color is the depth to the Slab 1.0 Nazca Plate. Pink squares labelled A-F denote locations of quarries or industrial sites. (b) Satellite imagery from Google Earth of the six locations marked A-F in panel (a). (c) Histogram of earthquakes/blasts shown in (a) by local time of day (GMT-3). The blasting interval above the background level is marked by the shaded region.

A representative distribution of 16,343 earthquakes from a two-month interval (01/01/2015 - 03/01/2015) is shown in Fig. 5, with a spatial distribution broadly in agreement with the CSN and Sippl et al. catalogs (Fig. S4). In particular, the primary bands of trench-parallel seismicity, with a transition zone between 70 - 100 km depth, are apparent, and the clustering of deep seismicity (> 125 km) is consistent with that found in both catalogs. Earthquakes at depth largely follow the trend of the subducting Nazca Plate (Fig. 5b). Some events, however, also locate near the surface, and a few locate deep in the mantle to the west. Because the probabilistic formulation used to locate earthquakes (equation S4) incorporates the prior, earthquakes in either of these regions are easily identified by their large negative log-likelihood values (Fig. 5). The negative log-likelihoods of sources in these regions are intrinsically high (implying a low likelihood) primarily because the prior likelihood is very small far from the plate interface.

Figure 7: Earthquake detections in the temporal and spatial vicinity of the Mw 8.2 Iquique earthquake. (a) Each subpanel plots all epicenters of earthquakes -75 km depth in the catalog, in sequential non-overlapping windows of three days. Days above each panel list the days before or after April 1, 2014. Numbers list the number of events shown in the subpanel between the day listed and the day of the next subpanel. Pink square marks epicenter of the Iquique earthquake. Other features include: black line (trench), brown line (coast), and blue and red colors of text indicating time intervals before or after the Iquique earthquake, respectively. (b) Earthquake counts between 01/01/2014 - 08/01/2014 in the spatial domain of panel (a) computed over moving 24-hour windows. (c) Gutenberg-Richter magnitude distribution curves for the CSN, Sippl et al., and this studies catalog. GR curves are calculated using overlapping data intervals (between catalogs), as described in the Application section.

By examination, we find that sources occurring near the surface largely cluster around open-pit quarries and industrial sites (Figs. 6a,b). The sources occurring near quarries have a non-uniform distribution of origin times with respect to the local time of day (Fig. 6c), and show clustering between 12 - 9 PM (GMT-3), which is consistent with the predominant blasting times of quarries reported in previous studies (Sippl et al., 2018). By visual inspection we have also identified that the anomalous sources occurring deep in the mantle to the west are generally true earthquakes (actually occurring deep in the mantle to the east) that are given initial locations (following discrete backprojection) to the west, because the north-south alignment of the IPOC network results in a strong symmetry, and thus ambiguity, between moveout vectors on either the east or west side of the network. These initial locations are also far enough from the non-zero regions of the prior that the gradient-based relocation routine cannot relocate the sources (as the gradients are always zero). In a future run of the catalog development we expect that we can account for this discrepancy and relocate these sources either by restricting all templates to non-zero regions of the prior, or by estimating maximum likelihood earthquake locations with a global (rather than local) search procedure. Nonetheless, the fraction of events locating in this region is small enough (< 0.5%) that we do not believe it impacts the majority of our findings.

We compare the three catalogs directly in terms of matched earthquakes and count an earthquake pair as a match (between our catalog and the other catalogs) when the RMS residual between the nearest matching moveout curves is < 6.5 seconds, analogous to the criterion used in the synthetic tests. Doing so shows that we recover 98% of the events reported in both the CSN and Sippl et al. catalogs (Table 3). In addition, for all matched earthquakes we calculate the residual differences between source coordinates, origin times, and magnitudes. The residuals are relatively small, with means near zero and standard deviations of 0.08° in latitude, 0.18° in longitude, 21 km in depth, 1.28 seconds in origin time, and 0.295 magnitude units (Table S4). Most notably, the largest discrepancy is that for deep earthquakes we locate events 23 km and 13 km shallower than the CSN and Sippl et al. catalogs, respectively, which we infer is caused by the use of different velocity models between the studies, and potentially by our use of an elliptical (rather than spherical) Earth model.

Figure 8: Representative set of earthquake detections during April 2, 2014. (a) Arrival times and detections of seventeen earthquakes, including many aftershocks of the April 1, 2014 Mw 8.2 Iquique earthquake. (b) Inset of waveforms from an example earthquake shown in (a). Waveforms shown are HHE-channel 50 Hz data bandpassed between 5 - 22 Hz, normalized by their maximum amplitudes. In both (a,b), predicted moveouts of P- and S-waves are given by blue and red lines, and predicted P- and S-arrival associations are given by blue and red circles, respectively. Grey circles represent arrivals not associated to a source.

We visually assess the detection quality of a representative set of 300 earthquakes from January 20th, 2014, and assign each a quality of 1, 2, or 3, where, to the best of our judgement, 1 = acceptably accurate, 2 = approximately accurate with some ambiguous arrival assignments or phase classifications, and 3 = likely incorrect set of associations/sources. Carrying out this procedure we found a quality distribution of (85%, 11%, 4%) for qualities (1, 2, 3), respectively, where quality 3 cases nearly always occurred when arrivals from two or more earthquakes were split incorrectly. By inspection, we have found that the seismicity distribution present during January 20th, 2014 is largely consistent with the seismicity distribution found throughout the catalog's duration, and we do not expect significant deviations from these quality distributions during other time intervals.

We further confirm that the foreshocks and aftershocks of the April 1, 2014 Mw 8.2 Iquique earthquake are detected in our catalog. We detect strong clustering of seismicity near the Mw 8.2 hypocenter for several weeks preceding and following the rupture (Figs. 7a,b). The number of detections in the restricted interval of [21.5°S, 18.5°S], [72°W, 69.5°W], and depths of [5 km, -75 km] exceeds 600 detections per day during the highest rates of seismicity. These findings give confidence that dynamic sequences of earthquakes (i.e., foreshocks and aftershocks) do not impede the performance of our method, and that we are capable of detecting events during both highly active and standard background seismicity intervals with no modifications to the method. A representative set of the aftershock detections, along with an inset of waveforms, is shown in Fig. 8.

Lastly, we note that our improved rates of detection compared with previous works result in a GR distribution with a magnitude-of-completion lower by 0.8-1 magnitude units (Fig. 7c). Above the magnitudes-of-completion of the different catalogs, the GR curves of Sippl et al., the CSN, and our own are largely parallel, indicating a largely consistent set of common detections between the three. To make these GR comparisons fair, we calculated the GR curves using only earthquakes that occurred in the spatial and temporal regions covered by all of the catalogs. Specifically, they were computed over the region [25°S, 18°S], [71.5°W, 66.5°W], depths [5 km, -250 km], and years [2010, 2014]. Further, the magnitudes of Sippl et al. and our own are offset by -0.108 and -0.130 units, respectively, to align the GR curves with the CSN catalog over the 3.5 - 5.5 magnitude interval.
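
The GR comparison can be sketched as follows; the least-squares alignment criterion over the 3.5 - 5.5 magnitude interval is an illustrative choice, as the text does not specify how the offsets were estimated.

    import numpy as np

    def gr_curve(mags, bins=np.arange(0.0, 8.0, 0.1)):
        """Cumulative Gutenberg-Richter counts N(>= M) on a magnitude grid."""
        mags = np.asarray(mags)
        return bins, np.array([(mags >= m).sum() for m in bins])

    def best_offset(mags_ours, mags_ref, lo=3.5, hi=5.5,
                    trial=np.arange(-0.5, 0.5, 0.001)):
        """Constant magnitude offset aligning our GR curve to a reference
        (e.g., CSN) catalog over [lo, hi]; inputs are numpy arrays."""
        bins = np.arange(lo, hi, 0.1)
        _, ref = gr_curve(mags_ref, bins)
        errs = [np.sum((np.log10(gr_curve(mags_ours + d, bins)[1] + 1)
                        - np.log10(ref + 1)) ** 2) for d in trial]
        return trial[int(np.argmin(errs))]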

Computational Cost

In this application, we found that by using 20 cores of an Intel Xeon 2.40 GHz CPU, which parallelizes over the discrete backprojection, local gradient optimization, and competitive assignment routines (with no GPU acceleration), we process a day's worth of seismicity in 135 seconds, using an average of 35 GB of RAM. On average, 11,000 arrivals across the seismic network are processed each day, resulting in 273 earthquake detections/day, with 2/3 of all arrivals originally detected by the Norm Picking algorithm discarded as false arrivals (or as arrivals from sources of magnitudes too small to catalog).
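
A per-day parallelization of this kind can be sketched with Python's standard library; process_day is a placeholder standing in for the per-day picking, backprojection, and assignment pipeline, not the actual implementation.

    from multiprocessing import Pool

    def process_day(day):
        """Placeholder for one day of processing: picking, discrete
        backprojection, graph construction, and competitive assignment."""
        ...

    if __name__ == "__main__":
        days = range(2922)                 # 8 years of daily windows
        with Pool(processes=20) as pool:   # 20 CPU cores, as in the text
            daily_catalogs = pool.map(process_day, days)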

Discussion

The problem of earthquake detection is of general interest, and a large body of research focuses on the problem of arrival picking (Allen, 1978; Gibbons and Ringdal, 2006; Yoon et al., 2015; Perol et al., 2018; Ross et al., 2018; Zhu and Beroza, 2018), with comparatively fewer works on arrival association (Bache et al., 1993; Johnson et al., 1997; Arora et al., 2013; Bergen and Beroza, 2018; Ross et al., 2019). With the rapidly improving suite of arrival picking methods, the development of effective arrival association algorithms is increasingly important, since association becomes more difficult as rates of arrival detections grow rapidly with increasing picking sensitivity. Issues arise in associating arrivals to their correct sources, as well as in the auxiliary problems of determining the number of sources, determining the phase types of arrivals, and identifying false arrivals. By solving these problems as accurately as possible, high-fidelity, low-magnitude-of-completion earthquake catalogs can be obtained. Such catalogs can enhance understanding of many types of seismogenic processes (e.g., spatial-temporal clustering, aftershock and foreshock sequences, dynamic triggering, or long-term rate changes).

In this paper we have presented an algorithm that uses tools from backprojection and graph theory to address the issues inherent to the association problem and to enable the development of dense catalogs after arrival time picks have been made with a sensitive arrival picking algorithm (e.g., the Norm Picker described herein, or other alternatives (Yoon et al., 2015; Perol et al., 2018; Ross et al., 2018; Zhu and Beroza, 2018)). We have demonstrated that this technique achieves high accuracy on relatively challenging synthetic and real detection problems (Figs. 4, 8), and have used the method to develop a catalog of northern Chile between 01/01/2010 - 12/31/2017 with a magnitude-of-completion of M1.8, which lowers the magnitude-of-completion by 0.8-1 units compared with existing catalogs (Table 3, Fig. 7c). Visual verification of a representative set of newly detected events was used to validate the detections, finding an approximate (85%, 11%, 4%) distribution of detection qualities (1, 2, 3), respectively. As further validation, we detected the aftershocks and foreshocks of the April 1, 2014 Mw 8.2 Iquique earthquake with no modifications to the method, and we also detected a number of sources near local quarries and industrial sites (Fig. 6). We re-detected nearly all previously cataloged earthquakes in the most complete catalog of this region to date (Table 3), while increasing the average rate of detection from 35 events/day to 273 events/day. In the application to northern Chile, we found we could process a day's worth of seismicity using 20 cores of an Intel Xeon 2.40 GHz CPU in 135 seconds, enabling the creation of the full 8-year catalog in 4.5 days of processing time.

One key component of our method that could be improved is our use of the backprojection metric. Currently, equation (1) assumes that all stations can theoretically detect arrivals from all earthquakes. However, for large aperture networks, the majority of small earthquakes will produce arrivals on only a subset of the network's stations due to attenuation. By backprojecting arrivals from the entire network at all times to all templates, the current metric allows spurious, unrelated arrivals from distant portions of the network (which line up with the moveout of a source but are not physically related) to contribute to the detection of a source and be falsely assigned to that earthquake. If equation (1) could be modified such that each source only 'looks' locally at the stations that can potentially observe that source (given its magnitude), then the association method could scale to large aperture networks while maintaining its ability to detect both small and large earthquakes; a sketch of this idea is given below. Additionally, if the directivity of arrivals were known, equation (1) could be modified to only propagate waves in the direction of the source, which would significantly reduce the number of false local maxima in the BP-space. Other areas in which the method can be improved are in adaptively adjusting the RBF-kernel widths to account for known variability in travel time uncertainties (between different stations and candidate source regions), and in incorporating additional information into the graphs prior to applying competitive assignment. For example, edge weights could be scaled by the predicted phase-likelihood of each arrival (Zhu and Beroza, 2018), and if pairs of arrivals are known with high certainty to come from a common source (Bergen and Beroza, 2018; McBrearty et al., 2019), a constraint could be added to the constraint matrix of (6) to force these arrivals to be assigned to a common source. A more general avenue of development may be in constructing arrival-arrival indexed graphs (with edge weights proportional to the likelihood that arrivals are associated), which may contain additional, or complementary, information to be used alongside the current approach. Such graphs have the advantage that the number of nodes is always bounded by the initial dataset and cannot blow up with increasing numbers of false candidate sources.
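
One possible form of this locality restriction is sketched below, assuming a fixed detectability radius (in practice the radius would depend on magnitude and attenuation); travel_time is an assumed helper, and the kernel width, radius, and 0.5 per-arrival cap are illustrative choices.

    import numpy as np

    def distance_km(a, b):
        """Approximate great-circle distance between (lat, lon) pairs, in km."""
        lat1, lon1, lat2, lon2 = map(np.radians, (a[0], a[1], b[0], b[1]))
        h = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(h))

    def masked_bp_score(t0, arrivals_by_station, stations, template, travel_time,
                        sigma=1.0, max_dist_km=250.0):
        """RBF-kernel backprojection score of one template at candidate origin
        time t0, stacking only stations within a detectability radius."""
        score = 0.0
        for j, sta in enumerate(stations):
            if distance_km(template, sta) > max_dist_km:
                continue  # station too distant to plausibly record this source
            times = np.asarray(arrivals_by_station[j])
            if times.size == 0:
                continue
            for phase in ("P", "S"):
                pred = t0 + travel_time(template, sta, phase)
                resid = np.min(np.abs(times - pred))
                # each arrival contributes at most 0.5, as in the discrete metric
                score += 0.5 * np.exp(-resid ** 2 / (2.0 * sigma ** 2))
        return score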

The earthquake catalog we have produced in this study may contain valuable data for future studies. A large body of work now employs machine-learning-based methods that benefit from large training datasets. Containing over 800,000 earthquakes, the catalog presented here may be a valuable source of training data for future machine-learning-based studies. Other possible directions of future research would be to investigate the dynamics of small-scale spatial-temporal interactions of earthquakes, monitor rate changes of seismicity, or investigate the prevalence of dynamic triggering contained in the catalog.

Conclusion

Associating earthquake arrivals remains a key component of developing dense earthquake catalogs, and it is crucial that association methods can resolve uncertainties such as an unknown number of sources, unknown phase types, false arrivals, uncertainties in arrival time picks and travel time calculations, and the situation where several earthquakes occur nearby in time and produce overlapping moveouts across the network. We have shown that by combining tools from backprojection and graph theory, a robust solution to all of these problems can be obtained. Our solution solves for all sources and their phase assignments simultaneously, rather than inferring each source sequentially in a greedy fashion, as is common in other association routines. We have demonstrated our method on synthetic tests as well as through the development of a new catalog of northern Chile between January 1, 2010 and December 31, 2017 with a magnitude-of-completion of M1.8. The new catalog contains 8 times more earthquakes than previously reported catalogs of this region, and may contain valuable insights into seismogenic processes occurring at the active Nazca-South American subduction zone yet to be discovered.

Data and Resources

This work included data from the CX seismic network, obtained from the GFZ Data Service. The catalog developed in this work is given in the Supplemental Materials. All data processing is done in MATLAB and Python. In the future, we plan to release an open-source Python implementation of the association and arrival picking methods described in this work.

Acknowledgments

This work was funded by Institutional Support (LDRD) at the Los Alamos National Laboratory. We thank Alex Hutko and Ben Baker for helpful feedback on our work. We also thank Bertrand Rouet-Leduc, Claudia Hulbert, Christopher Ren, James Theiler, Nicholas Lubbers and Daniel Trugman for many helpful discussions.

References

  • Allen (1978) Allen, R. V. (1978). Automatic earthquake recognition and timing from single traces. Bulletin of the Seismological Society of America, 68(5):1521–1532.
  • Arora et al. (2013) Arora, N. S., Russell, S., and Sudderth, E. (2013). Net-visa: Network processing vertically integrated seismic analysis. Bulletin of the Seismological Society of America, 103(2A):709–729.
  • Bache et al. (1993) Bache, T. C., Bratt, S. R., Swanger, H. J., Beall, G. W., and Dashiell, F. K. (1993). Knowledge-based interpretation of seismic data in the intelligent monitoring system. Bulletin of the Seismological Society of America, 83(5):1507–1526.
  • Bai et al. (2007) Bai, C.-y., Greenhalgh, S., and Zhou, B. (2007). 3d ray tracing using a modified shortest-path method. Geophysics, 72(4):T27–T36.
  • Bergen and Beroza (2018) Bergen, K. J. and Beroza, G. C. (2018). Detecting earthquakes over a seismic network using single-station similarity measures. Geophysical Journal International, 213(3):1984–1998.
  • Bloch et al. (2014) Bloch, W., Kummerow, J., Salazar, P., Wigger, P., and Shapiro, S. (2014). High-resolution image of the north chilean subduction zone: Seismicity, reflectivity and fluids. Geophysical Journal International, 197(3):1744–1749.
  • Comte et al. (1994) Comte, D., Roecker, S. W., and Suárez, G. (1994). Velocity structure in northern chile: evidence of subducted oceanic crust in the nazca plate. Geophysical Journal International, 117(3):625–639.
  • de Hoop and Tittelfitz (2015) de Hoop, M. V. and Tittelfitz, J. (2015). An inverse source problem for a variable speed wave equation with discrete-in-time sources. Inverse Problems, 31(7):075007.
  • Delorey et al. (2015) Delorey, A. A., Chao, K., Obara, K., and Johnson, P. A. (2015). Cascading elastic perturbation in japan due to the 2012 mw 8.6 indian ocean earthquake. Science advances, 1(9):e1500468.
  • DeVries et al. (2018) DeVries, P. M., Viégas, F., Wattenberg, M., and Meade, B. J. (2018). Deep learning of aftershock patterns following large earthquakes. Nature, 560(7720):632.
  • Dongen (2000) Dongen, S. (2000). A cluster algorithm for graphs.
  • Ellsworth (2013) Ellsworth, W. L. (2013). Injection-induced earthquakes. Science, 341(6142):1225942.
  • Fiedler et al. (2018) Fiedler, B., Zöller, G., Holschneider, M., and Hainzl, S. (2018). Multiple change-point detection in spatiotemporal seismicity data. Bulletin of the Seismological Society of America, 108(3A):1147–1159.
  • Gibbons et al. (2016) Gibbons, S. J., Kværna, T., Harris, D. B., and Dodge, D. A. (2016). Iterative strategies for aftershock classification in automatic seismic processing pipelines. Seismological Research Letters, 87(4):919–929.
  • Gibbons and Ringdal (2006) Gibbons, S. J. and Ringdal, F. (2006). The detection of low magnitude seismic events using array-based waveform correlation. Geophysical Journal International, 165(1):149–166.
  • Hayes et al. (2012) Hayes, G. P., Wald, D. J., and Johnson, R. L. (2012). Slab1.0: A three-dimensional model of global subduction zone geometries. Journal of Geophysical Research: Solid Earth, 117(B1).
  • Hill and Prejean (2007) Hill, D. P. and Prejean, S. G. (2007). Dynamic triggering.
  • Jain (2010) Jain, A. K. (2010). Data clustering: 50 years beyond k-means. Pattern recognition letters, 31(8):651–666.
  • Johnson et al. (1997) Johnson, C. E., Lindh, A. G., and Hirshorn, B. F. (1997). Robust regional phase association. Technical report, US Geological Survey.
  • Kao and Shan (2004) Kao, H. and Shan, S.-J. (2004). The source-scanning algorithm: Mapping the distribution of seismic sources in time and space. Geophysical Journal International, 157(2):589–594.
  • Kato et al. (2016) Kato, A., Fukuda, J., Kumazawa, T., and Nakagawa, S. (2016). Accelerated nucleation of the 2014 iquique, chile mw 8.2 earthquake. Scientific reports, 6:24792.
  • Kato and Nakagawa (2014) Kato, A. and Nakagawa, S. (2014). Multiple slow-slip events during a foreshock sequence of the 2014 iquique, chile mw 8.1 earthquake. Geophysical Research Letters, 41(15):5420–5427.
  • León-Ríos et al. (2016) León-Ríos, S., Ruiz, S., Maksymowicz, A., Leyton, F., Fuenzalida, A., and Madariaga, R. (2016). Diversity of the 2014 iquique’s foreshocks and aftershocks: clues about the complex rupture process of a mw 8.1 earthquake. Journal of Seismology, 20(4):1059–1073.
  • Li et al. (2017) Li, L., Becker, D., Chen, H., Wang, X., and Gajewski, D. (2017). A systematic analysis of correlation-based seismic location methods. Geophysical Journal International, 212(1):659–678.
  • Lomax et al. (2000) Lomax, A., Virieux, J., Volant, P., and Berge-Thierry, C. (2000). Probabilistic earthquake location in 3d and layered models. In Advances in seismic event location, pages 101–134. Springer.
  • Lubbers et al. (2018) Lubbers, N., Bolton, D. C., Mohd-Yusof, J., Marone, C., Barros, K., and Johnson, P. A. (2018). Earthquake catalog-based machine learning identification of laboratory fault states and the effects of magnitude of completeness. Geophysical Research Letters.
  • McBrearty et al. (2019) McBrearty, I. W., Delorey, A. A., and Johnson, P. A. (2019). Pairwise association of seismic arrivals with convolutional neural networks. Seismological Research Letters.
  • Montoya-Noguera and Wang (2017) Montoya-Noguera, S. and Wang, Y. (2017). Bayesian identification of multiple seismic change points and varying seismic rates caused by induced seismicity. Geophysical Research Letters, 44(8):3509–3516.
  • Moser (1991) Moser, T. (1991). Shortest path calculation of seismic rays. Geophysics, 56(1):59–67.
  • Nakata and Beroza (2016) Nakata, N. and Beroza, G. C. (2016). Reverse time migration for microseismic sources using the geometric mean as an imaging condition. Geophysics, 81(2):KS51–KS60.
  • Nemhauser and Wolsey (1988) Nemhauser, G. L. and Wolsey, L. A. (1988). Integer programming and combinatorial optimization. Wiley, Chichester.
  • Nemhauser et al. (1992) Nemhauser, G. L., Savelsbergh, M. W. P., and Sigismondi, G. S. (1992). Constraint classification for mixed integer programming formulations. COAL Bulletin, 20:8–12.
  • Paszke et al. (2017) Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in pytorch. In NIPS-W.
  • Peng and Ben-Zion (2005) Peng, Z. and Ben-Zion, Y. (2005). Spatiotemporal variations of crustal anisotropy from similar events in aftershocks of the 1999 m 7.4 izmit and m 7.1 düzce, turkey, earthquake sequences. Geophysical Journal International, 160(3):1027–1043.
  • Peng et al. (2009) Peng, Z., Vidale, J. E., Wech, A. G., Nadeau, R. M., and Creager, K. C. (2009). Remote triggering of tremor along the san andreas fault in central california. Journal of Geophysical Research: Solid Earth, 114(B7).
  • Perol et al. (2018) Perol, T., Gharbi, M., and Denolle, M. (2018). Convolutional neural network for earthquake detection and location. Science Advances, 4(2):e1700578.
  • Poiata et al. (2016) Poiata, N., Satriano, C., Vilotte, J.-P., Bernard, P., and Obara, K. (2016). Multiband array detection and location of seismic sources recorded by dense seismic networks. Geophysical Journal International, 205(3):1548–1573.
  • Powers (2011) Powers, D. M. (2011). Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation.
  • Ringdal and Kværna (1989) Ringdal, F. and Kværna, T. (1989). A multi-channel processing approach to real time network detection, phase association, and threshold monitoring. Bulletin of the Seismological Society of America, 79(6):1927–1940.
  • Ross et al. (2018) Ross, Z. E., Meier, M.-A., Hauksson, E., and Heaton, T. H. (2018). Generalized seismic phase detection with deep learning. Bulletin of the Seismological Society of America, 108(5A):2894–2901.
  • Ross et al. (2019) Ross, Z. E., Yue, Y., Meier, M.-A., Hauksson, E., and Heaton, T. H. (2019). Phaselink: A deep learning approach to seismic phase association. Journal of Geophysical Research: Solid Earth, 124(1):856–869.
  • Rouet-Leduc et al. (2017) Rouet-Leduc, B., Hulbert, C., Lubbers, N., Barros, K., Humphreys, C. J., and Johnson, P. A. (2017). Machine learning predicts laboratory earthquakes. Geophysical Research Letters, 44(18):9276–9282.
  • Schrijver (1998) Schrijver, A. (1998). Theory of linear and integer programming. John Wiley & Sons.
  • Shapiro et al. (2006) Shapiro, S., Dinske, C., and Rothert, E. (2006). Hydraulic-fracturing controlled dynamics of microseismic clouds. Geophysical Research Letters, 33(14).
  • Sippl et al. (2018) Sippl, C., Schurr, B., Asch, G., and Kummerow, J. (2018). Seismicity structure of the northern chile forearc from >100,000 double-difference relocated hypocenters. Journal of Geophysical Research: Solid Earth.
  • Sippl et al. (2013) Sippl, C., Schurr, B., Yuan, X., Mechie, J., Schneider, F., Gadoev, M., Orunbaev, S., Oimahmadov, I., Haberland, C., Abdybachaev, U., et al. (2013). Geometry of the pamir-hindu kush intermediate-depth earthquake zone from local seismic data. Journal of Geophysical Research: Solid Earth, 118(4):1438–1457.
  • Tarantola and Valette (1982) Tarantola, A. and Valette, B. (1982). Generalized nonlinear inverse problems solved using the least squares criterion. Reviews of Geophysics, 20(2):219–232.
  • Vasuki and Vanathi (2006) Vasuki, A. and Vanathi, P. (2006). A review of vector quantization techniques. IEEE Potentials, 25(4):39–47.
  • Watkins et al. (2018) Watkins, W. D., Thurber, C. H., Abbott, E. R., and Brudzinski, M. R. (2018). Local earthquake tomography of the jalisco, mexico region. Tectonophysics, 724:51–64.
  • Wu et al. (2018) Wu, Y., Lin, Y., Zhou, Z., Bolton, D. C., Liu, J., and Johnson, P. (2018). Deepdetect: A cascaded region-based densely connected network for seismic event detection. IEEE Transactions on Geoscience and Remote Sensing, (99):1–14.
  • Yoon et al. (2015) Yoon, C. E., O’Reilly, O., Bergen, K. J., and Beroza, G. C. (2015). Earthquake detection through computationally efficient similarity search. Science advances, 1(11):e1501057.
  • Zhu and Beroza (2018) Zhu, W. and Beroza, G. C. (2018). Phasenet: a deep-neural-network-based seismic arrival-time picking method. Geophysical Journal International, 216(1):261–273.
  • Zhu et al. (2018) Zhu, W., Mousavi, S. M., and Beroza, G. C. (2018). Seismic signal denoising and decomposition using deep neural networks. arXiv preprint arXiv:1811.02695.
procedure INITIALIZE
    Input: station coordinates and the travel time model
    Choose the number of templates
    Use K-Means to infer the optimal template coordinates
procedure RUN DISCRETE BACKPROJECTION
    Input: arrival times and template moveout vectors
    Choose the kernel width and triggering threshold (or adaptively choose the threshold given the arrival rate)
    Calculate the backprojection metric using equation (4)
    Record all triggering times where the metric exceeds the threshold
    Convert triggering times into source coordinates
    Optionally optimize sources in continuous coordinates with local gradient ascent
procedure COMPUTE GRAPH
    Input: candidate sources and arrivals
    For each source-arrival pair, calculate edge weights using equation (5)
procedure SIMPLIFY AND SOLVE EACH SUBGRAPH
    Input: the source-arrival graph
    Choose the MCA and CA parameters
    Extract all disconnected subgraphs
    for all disconnected subgraphs do
        Run MCA to simplify each subgraph
        Run CA on the reduced subgraph and record the solution
procedure CHARACTERIZE EACH EARTHQUAKE
    Input: all sources and associated arrivals returned from CA
    for all earthquakes do
        Relocate with the probabilistic formulation (equation S4)
        Estimate the origin time (equation S5)
        Estimate the magnitude (equation S6)
Algorithm 1: The Association and Detection Method. This flowchart summarizes the essential steps of the method: (1) extracting template moveout vectors, (2) running discrete backprojection and detecting all candidate sources, (3) computing the source-arrival graph edge weights, (4) simplifying the source-arrival graphs by extracting disconnected components, running the graph pruning routine (MCA), and running competitive assignment (CA) to compute the final set of detections, and (5) estimating earthquake locations, origin times, and magnitudes.