Fast error–controlling MOID computation for confocal elliptic orbits


Roman V. Baluev (1,2) and Denis V. Mikryukov (1)
(1) Saint Petersburg State University, Faculty of Mathematics and Mechanics, Universitetskij pr. 28, Petrodvorets, Saint Petersburg 198504, Russia
(2) Central Astronomical Observatory at Pulkovo of the Russian Academy of Sciences, Pulkovskoje sh. 65/1, Saint Petersburg 196140, Russia
E-mail: r.baluev@spbu.ru
Abstract

We present an algorithm to compute the minimum orbital intersection distance (MOID), or the global minimum of the distance between points lying on two Keplerian ellipses. This is achieved by finding all stationary points of the distance function, based on solving an algebraic polynomial equation of the sixteenth degree. The algorithm tracks the numerical errors appearing along the way and carefully treats nearly degenerate cases, including practical cases with almost circular and almost coplanar orbits. Benchmarks confirm its high numeric reliability and accuracy, and show that, despite its error-controlling overheads, this algorithm is one of the fastest MOID computation methods available to date, so it may be useful in processing large catalogs.

keywords:
close encounters, near-Earth asteroids, NEOs, catalogs, computational methods
journal: Astronomy and Computing

1 Introduction

The MOID parameter, or the minimum distance between points on two Keplerian orbits, plays an important role in various Solar System studies. It measures the closeness of two trajectories in space, and hence indicates whether the two bodies risk colliding at some future time.

Therefore, computing the MOID is a long-standing task, with applications to Potentially Hazardous Objects (PHOs) and Near-Earth Asteroids (NEAs).

This problem has been investigated for decades already, see e.g. (Sitarski, 1968; Dybczyński et al., 1986) and the more recent works by Armellin et al. (2010) and Hedo et al. (2018).

The MOID is a minimum of some distance or distance-like function that depends on two arguments determining the positions on the two orbits. The methods of finding the minima of this function can be split into several general categories, depending on the dimensionality of the optimization task to be solved. This, in turn, depends on how much of the work is pre-computed analytically.

  1. Global optimization in 2D. As an ultimately simple example this includes the 2D brute-force (exhaustive) search of the distance function on a 2D grid. Thanks to the existence of rigorous and finite upper limits on the gradient of this function, which is a trigonometric polynomial, its finite differences over a grid cell can always be bounded. Thanks to such error predictability, algorithms of the 2D class appear to be the most reliable ones, because we can always determine the MOID, along with the corresponding orbital positions, to any desired accuracy. An advanced method based on the 2D global optimization, which includes high-order Taylor models and interval arithmetic, was presented by Armellin et al. (2010). Nevertheless, even advanced methods of this type cannot be fast, due to the need to consider 2D domains.

  2. 1D optimization. Here we eliminate one of the two orbital positions from the numeric search by solving for it from an analytic equation. The remaining orbital position is determined by numeric optimization of a one-dimensional function. In general, this is faster than 2D minimization, but the derivative of the reduced function is no longer bounded and may turn infinite in some cases. Such cases may appear even for very simple circular orbits. Therefore, in general this method cannot provide a strict mathematical guarantee of the desired numerical accuracy. However, it appears more reliable than the methods of the next class.

  3. Methods of the 0D class, in which both orbital positions are solved for analytically rather than found by numeric optimization. This includes the methods by Kholshevnikov and Vassiliev (1999b) and by Gronchi (2002, 2005), because they do not explicitly deal with any numeric optimization at all. The task is analytically reduced to solving a nonlinear equation with respect to one position, with the other position then expressed analytically. Methods of this class are ultimately fast but relatively vulnerable to losing roots due to numerical errors (in nearly degenerate cases). This effect becomes important because the equation to be solved is quite complicated and subject to round-off errors. Also, this equation often has close (almost multiple) roots that are always difficult for numeric processing.

Here we present an efficient numeric implementation of the algebraic approach by Kholshevnikov and Vassiliev (1999b), similar to the one presented by Gronchi (2002, 2005). This method belongs to the fast 0D class. It is based on the analytic determination of all the critical points of the distance function, augmented with an algebraic elimination of one of the two positional variables. Mathematically, the problem is reduced to a single algebraic equation of the sixteenth degree with respect to one of the eccentric anomalies.

Recently, Hedo et al. (2018) suggested different approaches that do not rely on the determination of all the stationary points of the distance. Those methods proved even faster according to their benchmarks, but the performance difference is still relatively moderate.

However, a direct implementation of the methods by Kholshevnikov and Vassiliev (1999b) and Gronchi (2002) might be vulnerable, because finding the roots of a high-degree polynomial can sometimes be a numerical challenge. When dealing with large asteroid catalogs, various almost-degenerate cases appear from time to time, in which the equations to be solved contain almost-double or almost-multiple roots. Such roots are difficult to estimate accurately, because they are sensitive to numeric errors (even if there were no errors in the input orbital elements). Moreover, we run a risk of ambiguity: if the polynomial has two or more very close real roots then numeric errors may move them entirely into the complex plane, so that we may wrongly conclude that there are no such real roots at all. Such an effect of lost real roots may potentially result in overestimating the MOID, i.e. it may appear that we lost exactly the solution corresponding to the global minimum of the distance. This issue can be solved by paying attention not just to the roots formally identified as real, but also to all complex-valued roots that appear suspiciously close to the real axis. To define formally what 'suspiciously close' means, we need to estimate the numeric error attached to a given root, not just its formal value.

In other words, our task assigns an increased role to the numeric stability of the computation, because errors are known to increase dramatically when propagating through mathematical degeneracies. This motivated us to pay major attention to error control when implementing the method by Kholshevnikov and Vassiliev (1999b) in a numerical algorithm.

The structure of the paper is as follows. In Sect. 2, we give some mathematical framework that our algorithm relies upon. Sect. 3 describes the numeric algorithm itself. Sect. 4 presents its performance tests. In Sect. 5, we describe several auxiliary tools included in our MOID library.

The C++ source code of our MOID library named distlink is available for download at http://sourceforge.net/projects/distlink.

2 Mathematical setting

Consider two confocal elliptic orbits, each determined by its five geometric Keplerian elements; the elements of the second orbit are marked with a prime. Our final task is to find the minimum of the distance between two points lying on the corresponding orbits, and the orbital positions where this minimum is attained (positions along the orbits are parametrized by the eccentric anomalies). According to Kholshevnikov and Vassiliev (1999b), this problem is reduced to solving for the roots of a trigonometric polynomial of minimum possible algebraic degree (trigonometric degree eight). It is expressed in the following form:

(1)

where

(2)

The remaining scalar quantities represent pairwise scalar products of vectors associated with the two orbits:

(3)

with analogous definitions for the primed quantities.

When all the real roots of this trigonometric polynomial are found, for each of them we can determine the second orbital position from

(4)

where

(5)

The sign should be chosen to satisfy

(6)

so there is only a single value of the second anomaly that corresponds to a particular solution for the first one.

Finally, after both orbital positions have been determined, the squared distance between these points is computed via

(7)

Therefore, our general computation scheme looks as follows: (i) find all the real roots of the trigonometric polynomial; (ii) for each such root determine the corresponding position on the second orbit; (iii) for each such pair of positions compute the squared distance; and (iv) among these values select the minimum one. This gives us the required MOID estimate. As we can see, the most difficult step is finding all the real roots of the trigonometric polynomial, while the rest of the work is rather straightforward.
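For illustration, the following structural sketch expresses steps (i)-(iv) in C++. The three callables stand in for equations (1), (4) and (7); their names and signatures are hypothetical and do not reproduce the actual distlink interface.

```cpp
// Structural sketch of the MOID scheme, steps (i)-(iv); helper names are illustrative.
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

struct MoidSketchInput {
    std::function<std::vector<double>()> realRootsOfG;   // step (i): real roots of eq. (1)
    std::function<double(double)>        secondAnomaly;  // step (ii): solve eq. (4)
    std::function<double(double,double)> squaredDistance; // step (iii): evaluate eq. (7)
};

double moidSketch(const MoidSketchInput& in)
{
    double best = std::numeric_limits<double>::infinity();
    for (double u : in.realRootsOfG()) {            // step (i)
        double u2 = in.secondAnomaly(u);            // step (ii)
        double r2 = in.squaredDistance(u, u2);      // step (iii)
        if (r2 < best) best = r2;                   // step (iv): keep the minimum
    }
    return std::sqrt(best);                         // the MOID estimate
}
```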

This trigonometric polynomial can be rewritten in one of the two standard forms:

(8)

The coefficients of these forms can be expressed as functions of the quantities defined above. Most of such explicit formulae would be too huge and thus impractical, but nonetheless we computed an explicit form for one particular coefficient:

(9)

Here the asterisk means complex conjugation.

The number of real roots of the trigonometric polynomial cannot be smaller than four (Kholshevnikov and Vassiliev, 1999b). Also, this number is necessarily even, since the polynomial is continuous and periodic. But the upper limit on the number of real roots is uncertain. In any case, it cannot exceed the algebraic degree of the polynomial, but the numerical simulations performed by Kholshevnikov and Vassiliev (1999b) never revealed more than their empirical maximum. Here we reproduce that empirical upper limit: based on a test computation of orbit pairs from the Main Belt (see Sect. 4), we obtained only a tiny rate of occurrences attaining this maximum number of real roots. (This is actually an upper limit on that rate, because our algorithm may intentionally count some complex roots with a small imaginary part as real ones. This estimate is sensitive to the selected floating-point precision and to subtle details that affect the overall numeric accuracy of the algorithm. It may even appear possible that all these potential occurrences in actuality contain only a smaller number of real roots.) No cases with an even larger number of roots were met (we had one such occurrence using the standard double precision, but this case appeared to have fewer real roots with long double arithmetic).

Since the number of real roots of the polynomial is highly variable and a priori unknown, certain difficulties appear when dealing with it purely on the real axis. In practice the polynomial often becomes close to degenerate, e.g. in the case of almost circular or almost coplanar orbits, which is frequent for asteroids and typical for the major planets of the Solar System. In such cases, the real roots combine into close pairs or even close quadruples. The graph of the polynomial then passes close to the abscissa near such roots. This means that numeric computing errors affect such nearly-multiple roots considerably, implying increased uncertainties. Moreover, we might even be uncertain about the very existence of some roots: does the graph really intersect the abscissa, or does it pass slightly away, only nearly touching it? In practice this question may become non-trivial due to numerical errors, which might appear large because the polynomial is mathematically complicated. Therefore, treating it only on the real axis might result in losing some of its roots due to numerical errors.

But losing real roots would potentially mean an unsafe overestimate of the MOID, because there is a risk that the minimum distance occasionally corresponds to a lost root. It might therefore be safer to overestimate the number of real roots, i.e. we should also test 'almost-real' complex roots that correspond to a near-touching behaviour of the graph, even if it does not apparently intersect the abscissa. This implies some computational overheads and additional CPU time sacrificed for algorithmic reliability and numeric stability. It also means treating the polynomial in the complex plane and finding all its complex roots, rather than just the real ones. As such, we need to switch to complex notation.

By making a complex-exponential substitution for the eccentric anomaly, we can transform the trigonometric polynomial into an algebraic polynomial of twice its trigonometric degree:

(10)

So, the task of finding the roots of the trigonometric polynomial becomes equivalent to solving the corresponding algebraic equation (note that, since the eccentric anomaly takes only real values, the corresponding roots have unit modulus). Among all the complex roots we must select those that, within numeric errors, lie on the unit circle.

Since the trigonometric polynomial is real-valued, the complex coefficients of its algebraic counterpart satisfy a conjugate-reflection symmetry. Hence, its roots obey the following rule: if z is a root then so is 1/z* (the inverse of its complex conjugate). Therefore, the entire set of roots includes three families: (i) roots on the unit circle, which correspond to real eccentric anomalies, (ii) roots outside of this circle, and (iii) roots inside the unit circle. The roots off the unit circle split into pairs of mutually conjugate-inverse values, one lying inside and the other outside the circle.
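As a minimal numerical illustration of this correspondence (a toy example, not part of distlink), the snippet below checks that a real trigonometric polynomial of degree N coincides with exp(-iNu) P(exp(iu)), where P is an algebraic polynomial of degree 2N whose coefficients obey the conjugate-reflection symmetry p_k = p*_{2N-k}.

```cpp
// Toy check of the trigonometric-to-algebraic correspondence used in the text.
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    using cplx = std::complex<double>;
    const cplx I(0.0, 1.0);
    // real trigonometric polynomial of degree N = 2: g(u) = 1 + 2 cos u + 0.5 sin 2u
    auto g = [](double u) { return 1.0 + 2.0 * std::cos(u) + 0.5 * std::sin(2.0 * u); };
    // its exponential Fourier coefficients, shifted into algebraic coefficients p_k = c_{k-N};
    // note the symmetry p_k = conj(p_{4-k})
    const cplx p[5] = { cplx(0.0, 0.25), cplx(1.0, 0.0), cplx(1.0, 0.0),
                        cplx(1.0, 0.0), cplx(0.0, -0.25) };
    auto P = [&](cplx z) { cplx s = 0.0; for (int k = 4; k >= 0; --k) s = s * z + p[k]; return s; };
    double u = 0.7;
    cplx z = std::exp(I * u);                         // the unit-circle point corresponding to u
    std::printf("g(u)               = %.15g\n", g(u));
    std::printf("e^{-2iu} P(e^{iu}) = %.15g\n", (std::exp(-2.0 * I * u) * P(z)).real());
    // consequence: if P(z0) = 0 then P(1/conj(z0)) = 0 as well, giving the paired root families
    return 0;
}
```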

3 Numerical algorithm

3.1 Determining the polynomial coefficients and their uncertainty

First of all, we must represent the polynomial in its canonical form (8). For that, we need to compute its coefficients. The explicit formulae for them are too complicated and impractical, except for the single case given in (9). Instead of computing the coefficients directly, we determine them by means of the discrete Fourier transform (DFT hereafter):

(11)

Here, the sampled values of the trigonometric polynomial are computed using the relatively compact formula (1). Despite the use of the DFT, this approach appears computationally faster than computing all the coefficients directly. We do not even use FFT algorithms for that, because the number of coefficients is too small. For so few coefficients, the FFT technique did not give us any remarkable speed advantage in comparison with the direct application of the DFT (11).

However, the DFT can accumulate some rounding errors. The accuracy of the so-determined coefficients can be roughly estimated by comparing the DFT estimate of one coefficient with its explicit representation (9), which is still mathematically simple. We may assume that the numerical errors inferred by the formula (9) are negligible, and that all the difference between (11) and (9) is explained solely by the DFT errors.

Moreover, we can compute the DFT (11) for indices beyond the actual degree of the polynomial. In such a case, all these extra coefficients must turn exactly zero. However, due to numeric errors their DFT estimates may come out non-zero, and in such a case their magnitude can be used as a rough error assessment.

Based on this consideration, we adopted the following formulae to estimate the average error in the coefficients:

(12)

Here, the reference coefficient is determined from (9), while the others are the DFT estimations from (11). The formula (12) represents a statistical estimation that treats the numerical errors in the coefficients as random quantities. It is based on the assumption that the errors in different coefficients are statistically independent (uncorrelated) and have the same variance; in such a case, (12) provides an estimate of that variance.

In our algorithm, the DFT is computed from a number of sample points somewhat larger than the minimum required by the degree of the polynomial. In practical computations the resulting error estimate was nearly always not far from the machine precision, except for rare cases. We additionally notice that the error estimation (12) also includes, to a certain extent at least, the numeric errors that appear when computing the values of the polynomial by formula (1), not just the DFT errors inferred by (11).
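A self-contained sketch of this idea is shown below (a toy trigonometric polynomial, not the distlink code): the Fourier coefficients are recovered by a direct, oversampled DFT, and the coefficients beyond the true degree, which must vanish exactly, serve as a rough error gauge.

```cpp
// Direct (non-FFT) DFT of a sampled trigonometric polynomial, with an error estimate
// built from the coefficients that should be identically zero.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    using cplx = std::complex<double>;
    const double PI = 3.141592653589793238463;
    const int N = 2;                 // true trigonometric degree of the toy polynomial
    const int M = 2 * N + 5;         // oversampled: more points than the minimal 2N+1
    auto g = [](double u) { return 1.0 + 2.0 * std::cos(u) + 0.5 * std::sin(2.0 * u); };

    // direct DFT: c_k = (1/M) sum_j g(u_j) exp(-i k u_j), with u_j = 2 pi j / M
    std::vector<cplx> c(M);
    for (int k = 0; k < M; ++k) {
        cplx sum = 0.0;
        for (int j = 0; j < M; ++j) {
            double u = 2.0 * PI * j / M;
            sum += g(u) * std::exp(cplx(0.0, -k * u));
        }
        c[k] = sum / double(M);
    }
    // coefficients with index N < k < M - N must be exactly zero for a degree-N polynomial;
    // their residual magnitude serves as a rough assessment of the accumulated error
    double err2 = 0.0; int cnt = 0;
    for (int k = N + 1; k < M - N; ++k) { err2 += std::norm(c[k]); ++cnt; }
    std::printf("c0=%.3g  |c1|=%.3g  |c2|=%.3g  estimated rms error=%.3g\n",
                c[0].real(), std::abs(c[1]), std::abs(c[2]), std::sqrt(err2 / cnt));
    return 0;
}
```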

3.2 Root-finding in the complex plane

When all the coefficients are determined, along with their probable numerical error, we can determine all the complex roots of the polynomial. This is done via Newtonian iterations according to the following numeric scheme:

  1. Initial approximations for the first several roots are selected in a specific optimized manner, as detailed below.

  2. The initial approximation for each next root is chosen as the conjugate-inverse of the final estimate of the previous root. Thanks to such a choice, the algorithm always extracts the paired complex root immediately after its counterpart. The Newtonian iterations converge in this case very quickly (in two iterations or so). This does not work if the previous root lies on the unit circle (such roots do not combine into inverse pairs), or if it was itself the second root of a pair. Then the starting approximation coincides with an already extracted root, so the next extracted root will likely appear close to it.

  3. Each root is refined by Newtonian iterations (i) until some least required relative accuracy is achieved, and then (ii) until we reach the desired target relative accuracy or, at least, the maximum possible machine accuracy if the target is unreachable. In the first phase, we iterate until the last Newtonian step falls below the least-accuracy threshold. The iterations are restarted from a different starting point if they get trapped in an infinite loop at this phase (the known curse of the Newton method). In the second phase, the stopping criterion relies on the last and the pre-last Newtonian steps. The iterations are continued either until the last step drops below the target threshold, or until the relative step decrease drops below the machine epsilon. The latter criterion is motivated as follows. In the middle of the iterations, whenever numeric round-off errors are not yet significant, the relative step decrease should remain large and positive, since each step is much smaller than the previous one. But in the end it either approaches zero, if the iterations finally get stuck at almost the same numeric value near the root, or occasionally turns negative, if the iterations start to jump randomly about the root due to numeric errors. A good practical choice is a moderate least required accuracy together with a target accuracy near the machine precision.

  4. Whenever we have an accurate estimate of a root, it is eliminated from the polynomial by dividing out the corresponding linear factor via the Horner scheme, so the remaining polynomial has its degree reduced by one (a simplified sketch of this Newton/Horner loop is given after the list). For the sake of numerical stability, the linear factor is divided out either from the polynomial itself or from its reciprocal (reversed-coefficient) counterpart, depending on the magnitude of the root.

  5. The roots are extracted in this way until we are left with a quadratic polynomial. Its two remaining roots are then obtained analytically.
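The following simplified sketch illustrates the core of this loop on a toy polynomial. It omits the optimized starting points, the two-phase stopping rule and the inside/outside deflation switch of the actual algorithm, and shows only Newton iterations with Horner evaluation followed by deflation of each found root through synthetic division.

```cpp
// Simplified Newton + Horner-deflation loop (illustrative, not the distlink routine).
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

using cplx = std::complex<double>;

// evaluate P and P' at z by the Horner scheme; coefficients stored from low to high degree
static void hornerEval(const std::vector<cplx>& p, cplx z, cplx& val, cplx& der) {
    val = 0.0; der = 0.0;
    for (int k = int(p.size()) - 1; k >= 0; --k) {
        der = der * z + val;      // derivative accumulates the previous value
        val = val * z + p[k];
    }
}

// refine one root by Newton iterations starting from z0
static cplx newtonRoot(const std::vector<cplx>& p, cplx z0, double tol = 1e-14, int maxIt = 100) {
    cplx z = z0;
    for (int it = 0; it < maxIt; ++it) {
        cplx val, der;
        hornerEval(p, z, val, der);
        cplx step = val / der;
        z -= step;
        if (std::abs(step) <= tol * std::abs(z)) break;   // relative-step stopping rule
    }
    return z;
}

// deflation: divide P by (z - root) in place (forward synthetic division)
static void deflate(std::vector<cplx>& p, cplx root) {
    int n = int(p.size()) - 1;                 // current degree
    cplx carry = p[n];
    for (int k = n - 1; k >= 0; --k) { cplx t = p[k]; p[k] = carry; carry = t + root * carry; }
    p.pop_back();                              // degree reduced by one
}

int main() {
    // toy polynomial (z - 1)(z - 2)(z + 3) = z^3 - 7z + 6, coefficients low to high
    std::vector<cplx> p = { 6.0, -7.0, 0.0, 1.0 };
    while (p.size() > 1) {
        cplx r = newtonRoot(p, cplx(0.5, 0.5));
        std::printf("root: % .12f %+.12f i\n", r.real(), r.imag());
        deflate(p, r);
    }
    return 0;
}
```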

The order in which the roots are extracted is important. If we extract 'easy' roots first, we spend few Newtonian iterations while the polynomial still has a high degree. Also, such 'easy' roots are likely far from degeneracies and hence numerically accurate, so they should not introduce big numeric errors when the Horner scheme is applied. The 'difficult' roots that require a large number of Newtonian iterations are better extracted later, when the degree of the polynomial is reduced. If we act in the opposite manner, i.e. extract 'difficult' roots first, these roots inevitably increase the numeric errors. After applying the Horner scheme, these increased errors are transferred to the deflated coefficients, reducing the accuracy of all the remaining roots. Also, bad roots always require a larger number of Newtonian iterations, which are even more expensive at the beginning, when the degree of the polynomial is still large and its evaluation is slower.

After some tests we decided that the best way is to extract the extreme complex roots, those with the smallest and the largest modulus, together with their inverses, at the very beginning. Such roots are determined quickly and accurately, and the Horner scheme is very stable for them. Since in practical computations we always revealed at least four complex roots off the unit circle, we try to extract these four roots first. The starting approximation for the first root is always set to zero; this will likely give us the root with the smallest modulus. The next root is started from the conjugate-inverse prediction and is determined almost immediately; it will be the largest one. Initial approximations for the next two roots are set from our usual rule, so we obtain yet another smallest-modulus root and yet another largest-modulus root.

After these four extreme complex roots are removed from the polynomial, we try to extract the four guaranteed roots that lie on the unit circle. We select their initial approximations such that the corresponding eccentric anomaly is located at the orbital nodes or at fixed offsets from them. This is motivated by the practical observation that the MOID is usually attained near the orbital nodal line, see Sect. 5. Thanks to such a choice, these four roots are determined in a smaller number of Newtonian iterations.

The ninth root is iterated starting from zero again, and for the rest of the roots we follow the general conjugate-inverse rule. Thanks to such a choice, the algorithm tries to extract the remaining roots starting far from the unit circle and approaching it in the end. Therefore, the most numerically difficult cases, which are usually located near the unit circle, are processed last, when the degree of the polynomial has already been reduced in a numerically safe manner.

Using this optimized sequence we managed to reduce the average number of Newtonian iterations spent per root, according to our benchmark test case (Sect. 4). Also, this allowed us to further increase the overall numeric accuracy of the roots and the numeric stability of the results, because highly accurate roots are extracted first, and roots with poor accuracy do not affect them.

3.3 Estimating root uncertainties and root selection

When all the complex roots are obtained, we need to select those that lie on the unit circle and thus correspond to real values of the eccentric anomaly. However, in practice this condition will never be satisfied exactly, due to numerical errors. We need to apply some criterion to decide whether a particular root modulus is close to unity, within some admissible numeric errors, or not.

We approximate the relative error of the root by the following formula:

(13)

Its explanation is as follows.

Firstly, the first term involves the smaller (in absolute value) of the two roots of a quadratic polynomial that approximates the full polynomial near the estimated root:

(14)

Thus, the first term in (13) approximates the residual error of the root still remaining after the Newtonian iterations. It is zero if the polynomial vanishes at the estimated root precisely. Here we use the initial polynomial of full degree, not the one obtained after dividing out any of the previously extracted roots. For practical purposes, this correction should be calculated using a numerically stabilized formula that avoids the subtraction of close numbers whenever the quadratic term is small. For example, we can use

(15)

selecting the sign of the square root that maximizes the absolute value of the denominator.
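A generic illustration of such a stabilized quadratic formula (not taken from distlink) is given below: the smaller root is computed as 2c divided by whichever of -b + sqrt(b^2 - 4ac) and -b - sqrt(b^2 - 4ac) has the larger magnitude, which avoids the catastrophic cancellation of the textbook expression.

```cpp
// Numerically stable smaller root of a quadratic, demonstrated on an ill-conditioned case.
#include <cmath>
#include <complex>
#include <cstdio>

using cplx = std::complex<double>;

// smaller-magnitude root of a*x^2 + b*x + c = 0
static cplx smallerQuadraticRoot(cplx a, cplx b, cplx c) {
    cplx sq = std::sqrt(b * b - 4.0 * a * c);
    cplx d1 = -b + sq, d2 = -b - sq;
    cplx den = (std::abs(d1) > std::abs(d2)) ? d1 : d2;   // maximize |denominator|
    return 2.0 * c / den;
}

int main() {
    // roots are approximately -1e-8 and -1e+8: heavy cancellation in the naive formula
    cplx a = 1.0, b = 1e8, c = 1.0;
    std::printf("stable  : %.17g\n", smallerQuadraticRoot(a, b, c).real());
    std::printf("textbook: %.17g\n",
                ((-b + std::sqrt(b * b - 4.0 * a * c)) / (2.0 * a)).real());
    return 0;
}
```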

But this residual term alone is not enough to characterize the uncertainty of the root in full. In fact, most of this uncertainty comes from the numerical errors in the polynomial coefficients. Inaccurate computation of the polynomial leads to errors in the estimated root. Using the quadratic approximation (14), the sensitivity of the root with respect to a perturbation of the polynomial value is expressed by the corresponding derivative. Hence, the second error term in (13) appears, where the error estimate of the polynomial value is

(16)

where the coefficient error estimate is given in (12).

The quadratic approximation (14) is related to the iterative Muller method, which takes the second derivative of the polynomial into account. We needed to include the second derivative because in practice the real roots are often combined into close pairs, triggering a close-to-degenerate behaviour with a small first derivative. In such a case the linear (Newtonian) approximation yields a too pessimistic error estimate for the root. The use of the quadratic approximation (14) instead allows us to adequately treat such cases with nearly double roots.

However, even with (14) it is still difficult to treat the cases in which the roots combine into close quadruples. Then the second derivative becomes small too, along with the polynomial value and its first derivative. The error estimate (13) becomes too pessimistic again. Such cases are very rare, but they do exist. They may need to be processed with an alternative computation method (see Sect. 5).

In the error estimate (13), we neglect the numerical errors of the derivatives themselves, assuming that these quantities do not vanish in general and thus always keep a satisfactory relative accuracy (this is typically true even for almost-double paired roots).

We use the following numeric criterion to identify roots lying on the unit circle:

(17)

Here, an auxiliary scaling parameter controls the tolerance of the threshold. Normally, it should be set to unity; its meaning is to heuristically correct the estimated error in case there are hints that this error estimation is systematically wrong. The adopted threshold is motivated by the so-called three-sigma rule. It was selected well above unity in order to increase the safety of the root selection and hence the reliability of the entire algorithm.

After selecting all the roots that lie close enough to the unit circle, we may determine the corresponding eccentric anomaly for each of them, then the corresponding position on the second orbit from (4), and the squared distance from (7). The minimum among all the computed distances yields the required MOID estimate.

In general, the discriminant in (4) is non-negative if the position corresponds to a root of the trigonometric polynomial, but this can be violated in some special degenerate cases (Baluyev and Kholshevnikov, 2005). Formally, a negative discriminant means that the MOID cannot be attained at the given orbital position, even though it is a formal critical point. This is quite legal, meaning that some roots may be parasitic, i.e. corresponding to a critical point of the distance for a complex value of the second anomaly (even if the first one is real). However, the discriminant may also turn negative due to numeric errors perturbing an almost-zero value. We could distinguish such cases based on some uncertainty estimate of the discriminant, but in practice it appears easier to process all of them by simply forcing the negative discriminant to zero. In the first case (if the discriminant is negative indeed), this implies just a negligible computation overhead because of the unnecessary testing of an additional candidate that cannot be the MOID. But in the second case (if the discriminant appeared negative due to numeric errors) we avoid losing a potential MOID candidate.

3.4 Estimating uncertainties of the MOID and of the other orbital position

Assuming some uncertainty estimate for the eccentric anomaly on the first orbit, we can approximate the implied errors in the second anomaly and in the distance by means of their differentiation with respect to it:

(18)

where the derivatives can be computed numerically. We took into account that the first derivative of the distance should be zero (or at least numerically negligible) at a local minimum, so the second-order term appears as the first non-degenerate one in the expansion of the distance error.
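As a small numerical illustration of this propagation (a toy stand-in function, not the distlink code): near a local minimum the first derivative vanishes, so the distance error induced by an anomaly uncertainty du is dominated by the second-order term, with the second derivative estimated by a central finite difference.

```cpp
// Second-order error propagation near a minimum of a toy distance-like function.
#include <cmath>
#include <cstdio>

int main() {
    auto rho = [](double u) { return 2.0 + 0.7 * std::pow(u - 1.0, 2); };  // minimum at u = 1
    double u0 = 1.0;          // position of the local minimum
    double du = 1e-7;         // assumed uncertainty of u0
    double h  = 1e-4;         // finite-difference step
    double d2 = (rho(u0 + h) - 2.0 * rho(u0) + rho(u0 - h)) / (h * h);     // rho''(u0)
    double drho = 0.5 * std::fabs(d2) * du * du;                           // second-order term
    std::printf("rho'' = %.6g, implied error in rho = %.3g\n", d2, drho);
    std::printf("actual |rho(u0+du) - rho(u0)| = %.3g\n", std::fabs(rho(u0 + du) - rho(u0)));
    return 0;
}
```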

However, these formulae are useful only if the final errors are dominated by the error of the root itself, i.e. if that error is relatively large. If it appears negligible then the MOID error may mostly come from the round-off errors appearing in the formulae (4).

This can be important in a near-degenerate case: the solution of (4) then becomes sensitive to numeric errors in its coefficients, which may appear significant even if the root itself was absolutely precise. To show this, let us note that (4) actually represents the solution of the following trigonometric equation:

(19)

We use the so-called delta method to approximate the variance of the solution. First, by differentiating this implicit equation with respect to its coefficients, it is easy to express the error of the solution via a linear combination of their errors:

(20)

Secondly, assuming that all the error terms are statistically independent, the variance of the solution is

(21)

From the definition (2) it follows that two of the coefficients should be equal, or at least have similar magnitudes, while the remaining one contains only terms proportional to the eccentricities, so its error magnitude is likely small if the orbits are almost circular. Hence, by using the same variance-summing approach we can adopt a rough common approximation for these errors. The cumulative error from (21) is then approximated as

(22)

As we can see, this part of the error contains an amplification factor and may appear large even if the underlying relative errors are close to the machine epsilon.

But (22) becomes invalid in the degenerate case, when the relevant quantity is zero or too small. In that case the error in the solution is dominated by the error of a particular combination of the coefficients. The precise value of the solution in such a case is determined from

(23)

However, due to numeric errors in the coefficients, we may obtain a non-zero value of that combination and hence a different value of the solution. The linear approximation (21) is no longer valid, but directly from (4) we can obtain

(24)

Here, the individual numeric errors of the coefficients were neglected, because their remaining contribution is linear, while the error of their combination enters at a lower order and thus dominates.

Now, the variance can be expressed using the delta method, again assuming the statistical independence of the individual errors:

(25)

Therefore, the variance of the solution becomes

(26)

We may note that (26) can be rewritten in the same form as (22) by means of a suitable substitution. This also serves as a degeneracy criterion determining when we must apply (26) in place of (22). Moreover, we can simply merge (22) and (26) into a single universal approximation using the following formula:

(27)

The error estimates from (18) and (27) should be summed together to take both error sources into account (the error coming from the root itself and the additional round-off errors in the coefficients of (4)). The error estimate for the distance from (18) should be summed with

(28)

where the derivative can again be computed numerically. Here the corresponding error estimate should be substituted, and the relevant factors are assumed to have a typical magnitude of about unity (at most).

Some additional round-off errors come from the formula (7). If the MOID is small then this formula involves subtraction of close numbers, so we must consider separately the errors generated by its different terms. Since these terms contain potentially large multipliers, their numeric errors may dominate the total error budget and make the relative error of the MOID large, even if both orbital positions were determined with an accuracy close to the machine precision.

In practical computations we actually use a shorter expression for the squared distance than (7), namely the direct expansion into the two squared radius-vectors and their mutual scalar product. Leaving aside the errors in the orbital positions themselves, this sum contains three terms, and each term introduces a round-off error roughly proportional to its magnitude. The worst case, in which no cancellation between the terms is possible, corresponds to the scalar product attaining its extreme magnitude. Hence, the worst-case round-off error corresponds to summing the magnitudes of all three terms. This implies:

(29)

The largest possible values of the two radius-vectors are the corresponding apocentric distances. Assuming their relative errors to be of the order of the machine epsilon, we obtain the following estimate for the round-off errors in the squared distance:

(30)

This should be added to the error terms defined above, giving the cumulative estimate of the numeric uncertainty in the squared distance.

After that, an indicative uncertainty for the MOID itself is

(31)

This formula is valid only if the MOID itself is not close to zero (compared to its uncertainty). Otherwise, the MOID uncertainty is

(32)

By analogy with (27), the two latter formulae can be combined into a single one for convenience:

(33)

3.5 Self-testing numerical reliability

Finally, our algorithm includes a self-diagnostic test that verifies the following post-conditions:

  1. All roots that passed (17) must comply with the requested least accuracy.

  2. The minimum deviation from the unit circle among all the roots that failed (17) must exceed a larger threshold, meaning that there are no other suspicious root candidates. That is, the accepted and rejected families of roots must be separated by a clear gap.

  3. The number of roots that passed (17) must be even and no smaller than four (necessary algebraic conditions following from the theory).

If any of these conditions is broken, the algorithm sets a warning flag. Receiving such a signal, the results should be considered unreliable. In practice this is a very rare case (see the next section), but still possible. Then the following sequence can be applied to verify the results: (i) run the same algorithm on the same orbits, but swapped with each other; (ii) if this fails again, run the same algorithm using the long double precision instead of double; (iii) run the long double computation on the swapped orbits; (iv) if everything fails, invoke an alternative MOID algorithm, e.g. the one from Sect. 5.
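This verification cascade can be sketched structurally as follows. The callable arguments stand in for the primary double and long double computations and for the 1D fallback of Sect. 5; their names and signatures are illustrative only and do not reproduce the actual distlink API.

```cpp
// Retry cascade for unreliable MOID results (structural sketch, hypothetical interface).
#include <functional>

struct Orbit { double a, e, i, Omega, omega; };     // five geometric Keplerian elements
struct MoidResult { double moid; bool warning; };

using MoidFunc = std::function<MoidResult(const Orbit&, const Orbit&)>;

MoidResult reliableMOID(const Orbit& o1, const Orbit& o2,
                        const MoidFunc& primaryDouble,
                        const MoidFunc& primaryLongDouble,
                        const MoidFunc& fallback1D)
{
    MoidResult r = primaryDouble(o1, o2);
    if (!r.warning) return r;
    r = primaryDouble(o2, o1);          // (i) swap the orbits
    if (!r.warning) return r;
    r = primaryLongDouble(o1, o2);      // (ii) raise the floating-point precision
    if (!r.warning) return r;
    r = primaryLongDouble(o2, o1);      // (iii) both remedies combined
    if (!r.warning) return r;
    return fallback1D(o1, o2);          // (iv) the slower but robust 1D method
}
```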

We notice that since the task is solved asymmetrically, the algorithm may yield slightly different results depending on the order of the two orbits. If the orbital configuration does not imply degeneracies, both computations should offer the same MOID value, within the reported uncertainties. If they differ unacceptably, this can serve as an additional indicator that something went wrong.

However, this notice does not apply to the estimated MOID uncertainty. This uncertainty can appear different for the two orderings, because the two corresponding polynomials may have (and usually do have) different algebraic properties. In fact, if the goal is accuracy rather than speed, one may always compute the MOID in both directions, selecting the value offering better accuracy.

4 Practical validation and performance benchmarks

We tested our algorithm on the first numbered asteroids from the Main Belt, considering all their mutual orbit pairs. The orbital elements were taken from the catalog astorb.dat of the Lowell Observatory (see ftp://ftp.lowell.edu/pub/elgb/astorb.html).

Our algorithm succeeded nearly always. When using the standard double floating-point arithmetic, the self-test conditions listed above failed only for a tiny fraction of orbit pairs. In case of such a warning the first attempt was to rerun the same algorithm with the orbits interchanged. Since the method treats the orbits asymmetrically, this usually helps. Double warnings, with both attempts failing, occurred even more rarely in our test.

We note that if the algorithm returns a bad self-diagnostic flag, this does not yet mean that it failed to compute the MOID, or that the result is necessarily wrong or absent. One of the reasons for a warning is that some root (not even necessarily related to the global minimum) is worse than the required least accuracy. But worse does not necessarily mean useless. This just means that the result needs attention and probably a more detailed investigation using other methods to confirm or refine it. Occurrences when the resulting MOID appears entirely wrong and has unacceptable accuracy represent only a small fraction of all the cases when a warning was reported.

Figure 1: The difference between MOID values computed by the Gronchi code and by our new algorithm (labelled as distlink). Top: comparing results improved by interchanging the orbits and selecting the best MOID of the two. Bottom: using only a single MOID value for the comparison. To reduce the figure file size, we removed from both graphs all differences below a small threshold in absolute magnitude. See text for a detailed explanation.

We also tested the Gronchi FORTRAN code in the same setting. We found only two orbit pairs for which it failed with an error and no result, and swapping the orbits did not help. Single-fail cases, when the first attempt to compute the MOID failed but swapping the orbits did help, were also rare. For the majority of orbit pairs this algorithm gave at least some numeric MOID value, but in itself this does not guarantee that all these values are accurate.

We provide a comparison of our new algorithm with the Gronchi code in Fig. 1. We compute the differences of the Gronchi MOID minus the MOID obtained by our new algorithm in two settings. In the first case, we run both algorithms for each orbit pair twice, computing the MOID for both orderings of the orbits. With the Gronchi code, we select the minimum MOID of the two, and for our new code we select the best-accuracy MOID. If the Gronchi algorithm failed with no result in one of the two runs, the corresponding MOID value was ignored, and only the other one was used. If both Gronchi MOIDs failed, this orbit pair itself was ignored. Additionally, if our new algorithm reported a warning, we either ignored this MOID in favour of the other value, or invoked the fallback algorithm from Sect. 5 if that second MOID estimate appeared unreliable too. The MOID difference between the Gronchi code and the new algorithm was then plotted in the top panel of Fig. 1. In the second setting, we plainly performed just a single MOID computation for each orbit pair without orbit interchange, either using the Gronchi code or our new algorithm. Orbit pairs for which the Gronchi code failed or our algorithm reported a warning were ignored and removed. The corresponding MOID difference is plotted in the bottom panel of Fig. 1.

We may see that there are multiple occurrences when the Gronchi code obtained a clearly overestimated MOID value (i.e., it missed the true global minimum). No cases were found when the Gronchi algorithm gave a significantly smaller MOID than our distlink library (except for negligibly small differences due to numeric errors).

Figure 2: Left: distribution of the estimated errors versus an empiric error measure. Most orbit pairs are shown as tiny dots, plotting as larger points only those pairs for which the estimated error appeared inadequately small. To keep the figure readable we used only a subset of orbit pairs from our test case (see text for details). Right: histogram of the number of Newtonian iterations spent per root. The histogram was normalized by the bin width, so it is actually an estimate of the probability density function for the quantity labelled on the abscissa. All computations were done in the double floating-point arithmetic.

In Fig. 2 we compare the quadrature sum of the reported MOID uncertainties with an empiric measure of the actual MOID error. We may conclude that our algorithm provides a rather safe and realistic assessment of the numeric errors, intentionally a bit pessimistic.

One can also see that we spend, on average, only a few Newtonian iterations per root. One way to further increase the speed of the computation is to reduce this number. However, this number is already quite small, so there is not much room to reduce it significantly.

Hardware            | distlink (fast alg.), double | Gronchi code, double | distlink (fast alg.), long double | Gronchi code, long double
Intel Core i7       |  s                           |  s                   |  s                                | NA
Supermicro & Xeon   |  s                           |  s                   |  s                                | NA
AMD FX              |  s                           |  s                   |  s                                | NA
Table 1: Performance tests on the first numbered asteroids from the Main Belt: average CPU time per MOID.

In Table 1, we present our performance benchmarks for this test application. They were obtained on the following hardware: (i) an Intel Core i7-6700K, (ii) a Supermicro server with an Intel Xeon E5-2630 CPU, and (iii) an AMD 990FX chipset with an AMD FX-9590 CPU. The second configuration is rather similar to one of those used by Hedo et al. (2018).

We used either the 80-bit long double floating-point arithmetic or the 64-bit double one, requesting a correspondingly different target accuracy in each case. We did not push the target accuracy all the way down in the double case, because many CPUs silently perform much of the local computation in long double precision instead of the requested double; the Newtonian iterations would then be continued to this undesirably increased level of precision, introducing an unnecessary minor slowdown. The least required accuracy was set to the same value in all tests.

All the code was compiled with GCC (either g++ or gfortran) and optimized for the local CPU architecture (-O3 -march=native -mfpmath=sse). The Gronchi primary computing subroutine compute_critical_points_shift() was called from our C++ program natively, i.e. without any intermediary file IO wrapping that would be necessary if we used the main program CP_comp.x from the Gronchi package.

To accurately measure the time spent purely inside the MOID computation, and not on the file I/O or other algorithmic decorations around it, we always performed three independent runs on the test catalog: (i) an 'empty' run without any calls to any MOID algorithm, only iterating over the catalog; (ii) computation of all MOIDs using the algorithm of this paper, without writing results to a file; (iii) the same for the Gronchi algorithm. The time differences (ii)-(i) or (iii)-(i) gave us the CPU time spent only inside the MOID computation. We never included the CPU time spent in the kernel mode. We assume this system time likely refers to memory page manipulations or other similar CPU activity that appears when the program iteratively accesses data from a big catalog. In any case, this system time would be just a minor addition to the result (a small percentage at most).
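The differencing scheme can be sketched as follows; the catalog and the inner 'work' are toy stand-ins, and only the empty-run subtraction and the use of CPU time rather than wall time are the point here.

```cpp
// Sketch of the 'empty run' timing methodology (toy workload, illustrative only).
#include <cstdio>
#include <ctime>
#include <vector>

struct Orbit { double a, e, i, Omega, omega; };

static double cpuSeconds() { return double(std::clock()) / CLOCKS_PER_SEC; }  // CPU time, not wall time

int main() {
    std::vector<Orbit> catalog(1000, Orbit{2.5, 0.1, 0.1, 0.0, 0.0});  // toy stand-in catalog
    double sink = 0.0;                 // accumulated to keep the loops from being optimized away

    double t0 = cpuSeconds();          // (i) 'empty' run: only iterate over the pairs
    for (size_t k = 0; k < catalog.size(); ++k)
        for (size_t m = k + 1; m < catalog.size(); ++m)
            sink += catalog[k].a + catalog[m].a;

    double t1 = cpuSeconds();          // (ii) the same loop, now with the per-pair work
    for (size_t k = 0; k < catalog.size(); ++k)
        for (size_t m = k + 1; m < catalog.size(); ++m)
            sink += catalog[k].a * catalog[m].a;   // <-- the actual MOID call would go here

    double t2 = cpuSeconds();
    size_t nPairs = catalog.size() * (catalog.size() - 1) / 2;
    std::printf("CPU time per pair: %.3g s  (checksum %g)\n",
                ((t2 - t1) - (t1 - t0)) / nPairs, sink);
    return 0;
}
```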

The reader may notice that the hardware can generate huge performance differences, not necessarily owing just to the CPU frequency. Moreover, even the performance of the same AMD machine differs drastically between the double and long double tests. This puzzling difference appears mainly due to the slow 80-bit floating-point arithmetic on AMD, and not because of, e.g., a different number of Newtonian iterations per root (which appeared almost the same in all our tests).

We conclude that our algorithm looks quite competitive, probably even outperforming the benchmarks obtained by Hedo et al. (2018) for their set of tested algorithms on similar Supermicro/Xeon hardware. They used double precision rather than long double.

Therefore, our algorithm is possibly the fastest one available to date, or at least it belongs to the family of the fastest ones. In the majority of cases it yields considerably more accurate and reliable results, usually close to the machine precision, and its accuracy may seriously degrade only in extraordinarily rare nearly degenerate cases, which are objectively hard to process.

5 Additional tools

Our main algorithm, based on determining the roots of the polynomial, is fast but might become vulnerable in the rare cases of lost roots. Whenever it signals a warning, alternative algorithms should be used, trading computing speed for better numeric resistance to degeneracies.

In addition to the basic 0D method based on root-finding, our library implements a "fallback" algorithm of the 1D type, based on a brute-force-like minimization of the one-dimensional distance function. This method is numerically reliable thanks to its simplicity, and its slow speed is not a big disadvantage, because it needs to be run only if the basic fast method has failed. In our benchmarking test it appeared considerably slower than both our fast algorithm and the Gronchi code, but this is likely sensitive to its input parameters.

First of all, the algorithm scans only a restricted range of the anomaly, discarding the values where the MOID cannot be attained. The required range is determined as follows. Using e.g. the formulae from (Kholshevnikov and Vassiliev, 1999a), compute the minimum internodal distance. Since the MOID is usually attained near the orbital nodes, this quantity and its corresponding orbital positions already provide a rather good approximation to the MOID. Then consider two planes parallel to the plane of the first orbit and separated from it by that distance. We need to scan only those pieces of the second orbit that lie between these planes, i.e. within this band around the first orbital plane. The points outside of this band are necessarily more distant from the first orbit than the internodal distance, so the MOID cannot be attained there. This trick often reduces the scan range dramatically. This optimization was inspired by the discussion given in (Hedo et al., 2018). The detailed formulae for the reduced range are given in Appendix A.

Moreover, this algorithm automatically selects the orbit ordering that yields the smaller angular range to scan. If the cumulative ranges appear equal (e.g. if we occasionally have the full circle in both cases) then the user-supplied order is preserved.

The efficiency of this approach is demonstrated in Fig. 3, where we plot the distribution density of the total range length obtained, as computed for our test case of asteroid pairs. The fraction of cases in which this range could not be reduced at all (remaining the full circle) is small, and in the majority of occurrences it could be reduced to something well below a radian. The efficiency of the reduction increases if the MOID is small; then the total scan range may be reduced to just a few degrees.

Figure 3: The distribution density of the reduced angular range, as obtained for our test sample of asteroid pairs.

The minimization of the one-dimensional distance function is based on subsequent sectioning of the initial angular range. The user can set an arbitrary sequence of integer numbers that define how many segments are considered at the different stages. The initial angular range is partitioned into equal segments separated by equidistant nodes, and the node with the minimum function value is determined. We note that the first input number is always interpreted as if it corresponded to the entire circle, even if the actual scan range is reduced as described above, so the segment length on the first stage is normally independent of the scan range, unless this scan range is itself smaller than a single segment. On the second stage, the segment surrounding the minimum is considered; it is sectioned into equal subsegments, and the node corresponding to the minimum is determined again. On the third stage, the new segment around the minimum is sectioned into yet smaller subsegments, and so on. On each stage, the length of the segment between subsequent nodes is reduced by the corresponding factor from the sequence, so only factors of at least two are meaningful. After the specified sequence is exhausted, the segments are always partitioned into the same last number of subsegments, until the global minimum is located with the desired accuracy in both anomalies. It is recommended to set the first number large enough, in order to sample the objective function with its potentially intricate variations densely enough, whereas the last one can be set to two, meaning bisection.
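The following self-contained sketch illustrates the spirit of this multi-stage sectioning on a toy objective. It is a simplified variant, not the distlink routine: here the refined bracket is taken as the two segments around the best node, so at least three segments per stage are required for the bracket to shrink.

```cpp
// Multi-stage grid-sectioning minimization of a 1D objective (simplified sketch).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// minimize f on [lo, hi]; stages[] holds the number of segments per stage (each >= 3 here),
// the last entry being reused until the bracket shrinks below tol
template <typename F>
double sectionMinimize(F f, double lo, double hi, const std::vector<int>& stages, double tol) {
    for (size_t stage = 0; hi - lo > tol; ++stage) {
        int n = stages[std::min(stage, stages.size() - 1)];
        double h = (hi - lo) / n, bestX = lo, bestF = f(lo);
        for (int k = 1; k <= n; ++k) {                    // scan the equidistant nodes
            double x = lo + k * h, v = f(x);
            if (v < bestF) { bestF = v; bestX = x; }
        }
        lo = std::max(lo, bestX - h);                     // keep the two segments around the best node
        hi = std::min(hi, bestX + h);
    }
    return 0.5 * (lo + hi);
}

int main() {
    // toy objective with several local minima; global minimum at x = 3*pi/2 ~ 4.712, f = -1.3
    auto f = [](double x) { return std::sin(x) + 0.3 * std::sin(5.0 * x); };
    std::vector<int> stages = { 64, 8, 4 };               // dense first stage, then coarser refinements
    double x = sectionMinimize(f, 0.0, 2.0 * 3.141592653589793, stages, 1e-10);
    std::printf("argmin ~ %.9f, f = %.9f\n", x, f(x));
    return 0;
}
```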

We notice that this method was not designed for standalone use. It was not even supposed to be either more reliable or more accurate in general than our primary fast method. It was supposed to provide just yet another alternative in the rare special cases when the primary method did not appear convincingly reliable. Its practical reliability depends on the input parameters very much: a too coarse initial partitioning may lead to frequently losing narrow local minima, so we may miss the correct global minimum sometimes. But this effect can always be suppressed by selecting a denser initial grid. In our tests with a moderate initial partitioning, this algorithm occasionally generated a wrong MOID, so it is not recommended for general sole use. This could be improved by implementing an adaptive sampling of the anomaly, e.g. depending on the derivative of the objective, but we did not plan to go that far with this method. We note that narrow local minima are, informally speaking, in some sense antagonistic to almost-multiple critical points, so this fallback algorithm is vulnerable to rather different conditions than our primary fast method. Therefore it can serve as a good complement to the latter.

Also, we include in the library several fast tools that may appear useful whenever we need to actually compute the MOID only for those objects that are close to an orbital intersection. These tools may help to eliminate most of the orbit pairs from the processing. The first one represents an obvious pericenter-apocenter test, comparing the pericenter distance of one orbit with the apocenter distance of the other. If any of these differences appears positive and above some threshold, then the MOID surely exceeds that threshold, and one may immediately discard such an orbital pair from the detailed analysis.
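A minimal sketch of this filter under the stated assumptions (pericenter q = a(1 - e), apocenter Q = a(1 + e)) might look as follows; the function name is illustrative and not part of the distlink API.

```cpp
// Pericenter-apocenter pre-filter: a cheap lower bound on the MOID.
#include <algorithm>
#include <cstdio>

struct Ellipse { double a, e; };   // semimajor axis [AU] and eccentricity

// returns a lower bound on the MOID; if it exceeds the chosen threshold,
// the pair can be discarded without running any full MOID algorithm
double pericenterApocenterBound(const Ellipse& o1, const Ellipse& o2) {
    double q1 = o1.a * (1.0 - o1.e), Q1 = o1.a * (1.0 + o1.e);   // pericenter/apocenter of orbit 1
    double q2 = o2.a * (1.0 - o2.e), Q2 = o2.a * (1.0 + o2.e);   // pericenter/apocenter of orbit 2
    return std::max({ q1 - Q2, q2 - Q1, 0.0 });                  // non-negative lower bound
}

int main() {
    Ellipse earthLike{1.0, 0.017}, belt{2.7, 0.10};
    std::printf("MOID lower bound: %.4f AU\n", pericenterApocenterBound(earthLike, belt));
    return 0;
}
```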

Our library also includes functions for computing the so-called linking coefficients introduced by Kholshevnikov and Vassiliev (1999a). The linking coefficients are functions of two orbits that have the dimension of squared distance, and they are invariant with respect to rotations of the reference frame. Kholshevnikov and Vassiliev (1999a) introduced three linking coefficients that should be selected depending on whether the orbits are (nearly) coplanar or not. See all the necessary formulae and discussion in that work.

For our goals it might be important that at least one of these linking coefficients can be used as an upper limit on the MOID. It represents a signed product of the two internodal distances from (41), so the squared MOID can never be larger than its absolute value. This allows us to limit the MOID from the other side, contrary to the pericenter-apocenter test. Moreover, based on it we introduce yet another linking coefficient defined as

(34)

This modified coefficient provides an even tighter upper limit on the squared MOID, but still preserves the sign that indicates the orbital linkage in the same way as the original one did.

It is important for us that all linking coefficients are computed very quickly in comparison to any MOID algorithm, because they are expressed by simple elementary formulae.

The linking coefficients were named so because their original purpose was to indicate whether two orbits are topologically linked like two rings in a chain, or not. The intermediate case between these two is an intersection, when the linking coefficient vanishes together with the MOID. Therefore, these indicators can be potentially used as computationally cheap surrogates of the MOID. But in addition to measuring the closeness of two orbits to an intersection, linking coefficients carry information about their topological configuration. Also, these quantities can be used to track the time evolution of the mutual topology of perturbed non-Keplerian orbits, for example to locate the moment of their intersection without computing the MOID.

6 Further development and plans

Yet another possible way to extend our library is to implement the method by Baluyev and Kholshevnikov (2005) for computing the MOID between general confocal unperturbed orbits, including hyperbolic and parabolic ones. This task can also be reduced to finding the real roots of a similar polynomial.

In a future work we plan to provide statistical results of applying this algorithm to the Main Belt asteroids, also including the comparison of the MOID with linking coefficients and other indicators of orbital closeness.

Acknowledgements

This work was supported by the Russian Science Foundation grant no. 18-12-00050. We express gratitude to the anonymous reviewers for the fruitful comments and useful suggestions on the manuscript.

Appendix A Reducing the scan range for the eccentric anomaly

According to Baluyev and Kholshevnikov (2005), the radius-vector on a Keplerian elliptic orbit is

(35)

with a similar expression for the second orbit. Let us introduce a unit vector orthogonal to the orbital plane of the first orbit, so that together with the in-plane basis vectors it forms an orthonormal basis. Then let us compute the dot product

(36)

which represents the projection of the relative position vector onto that unit normal. Note that the corresponding projection of the first radius-vector is always zero. Now, we need this projection to be within the reference distance from zero, because otherwise the total distance can only be larger than that reference. This yields two inequality constraints

(37)

implying an elementary trigonometric equation that can be solved via arcsines.

The final set of computing formulae can be expressed as follows. Let us introduce the vector

(38)

which is directed to the ascending node of the second orbit on the reference plane of the first one. The corresponding angle is the mutual inclination between the orbits. Then determine the nodal angle from

(39)

It represents the true anomaly on the orbit where that ascending node is located, and the location of the node on the other orbit can be determined in a similar way. An explicit formula for the scalar product is given in (Kholshevnikov and Vassiliev, 1999a) via the orbital elements, though we prefer to multiply the vectors directly, using the following expression:

Then compute

(41)

where the quantities involved are the orbital focal parameters.

Now, the inequalities (37) may be simplified if we decompose the relevant vectors in the orthonormal basis introduced above:

(42)

Writing down the orthogonality condition between these vectors and the norm of the node-directed vector in these coordinates, we have

(43)

Therefore, we may substitute these components into (37), and the sign choice is not important here.

Finally, let us define an auxiliary amplitude and a phase angle from

(44)

and (37) becomes

(45)

with the quantities defined above.

In general, we have three types of solution for the admissible range of the anomaly.

  1. If