A Framework for Telescope Schedulers: With Applications to the Large Synoptic Survey Telescope

Elahesadat Naghib    Peter Yoachim    Robert J. Vanderbei    Andrew J. Connolly    R. Lynne Jones
Abstract

How ground-based telescopes schedule their observations in response to competing science priorities and constraints, variations in the weather, and the visibility of a particular part of the sky can significantly impact their efficiency. In this paper we introduce the Feature-Based telescope scheduler, an automated, proposal-free decision-making algorithm for large ground-based telescopes that offers controllability of behavior, adjustability of the mission, and quick recoverability from interruptions. By framing this scheduler in the context of a coherent mathematical model, the functionality and performance of the algorithm are simple to interpret and adapt to a broad range of astronomical applications. This paper presents a generic version of the Feature-Based scheduler, with minimal manual tailoring, to demonstrate its potential and flexibility as a foundation for large ground-based telescope schedulers that can later be adjusted for other instruments. In addition, a modified version of the Feature-Based scheduler for the Large Synoptic Survey Telescope (LSST) is introduced and compared to previous LSST scheduler simulations.

Artificial intelligence, autonomous telescope, LSST, reinforcement learning, scheduling, stochastic optimization
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08540, USA enaghib@princeton.edu

Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195, USA yoachim@uw.edu

Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08540, USA rvdb@princeton.edu

Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195, USA ajc26@uw.edu

Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195, USA ljones@astro.washington.edu

1 Introduction

The Large Synoptic Survey Telescope (LSST) is a large, ground-based optical survey that will image half of the sky every few nights from Cerro Pachón in northern Chile. The LSST comprises an 8.4-meter primary mirror and a 3.2-gigapixel camera. With a 9.6 square-degree field of view, it will visit each part of its roughly 18,000 square-degree primary survey area about 1,000 times over the course of 10 years. Each visit will likely comprise a pair of 15-second exposures, with a single-visit depth of approximately 24.5 magnitudes (AB) (in the six bands u, g, r, i, z, and y). The revolutionary role of this telescope calls for no less than optimal operation.

The algorithm that makes the sequential decisions of which filter to use and where to point the telescope is called the scheduler. A scheduler has to maximize the scientific outcome of the telescope during its limited period of operation.

There are four primary science drivers for the LSST project: the characterization of dark energy through multiple cosmological probes (e.g., weak gravitational lensing, luminosity distances from Type Ia supernovae, and baryon acoustic oscillations); mapping the 3D distribution of stars within our Galaxy; a census of objects within the Solar System; and a detailed study of the transient and variable universe. Each of these objectives has a different set of constraints and requirements on how the observations are made (e.g., the cadence of the observations, the number of filters as a function of time, the acceptable airmass range for an observation). The mission of modern, large, ground-based telescopes such as LSST is constrained by various stochastic factors, and it contains competing objectives which are vastly different in nature because of the different scientific expectations. In this paper, we propose a framework to formulate the problem of scheduling for the new generation of ground-based telescopes, and then introduce a scheduler based on the proposed model.

The first generation of schedulers for astronomical instruments was developed for space missions, mainly to automate their operation. The ROSAT mission’s scheduler in 1990 (nowakovski1999using), Spike (johnston1994spike), the Hubble Space Telescope’s scheduler, in 1994, and HSTS (muscettola1995automating) in 1995 pioneered many of the developments in algorithmic scheduling of observations for space missions.

Despite the similarity of the science objectives for space and ground-based telescopes, the determining factors for the purpose of scheduling are fundamentally different. While space telescopes are required to respect kinematic and dynamical constraints, weather is the main challenge in the scheduling of ground-based telescopes. The former is predictable and efficiently computable, while the latter involves both inherent uncertainties and uncertainties due to computational limitations.

Earlier algorithmic approaches to the scheduling of ground-based telescopes were heavily based on observation proposals, which are hand-crafted sequences of scripted astronomical observations. Proposals are generally tested only for feasibility (e.g., that a set of fields is visible, lies within a specified airmass range, or falls within a window in time), but not necessarily for optimality. For instance, the operation of the Keck Telescope, 1993 (nelson1985design), is entirely based on proposals, while the Hobby-Eberly Telescope, 1997 (shetrone2007ten), has a semi-manual scheduling scheme.

More recently, the development of more expensive ground-based instruments with complex missions made it impossible to rely solely on hand-crafted proposals. The need for more efficient use of the instrument’s time led to the development of decision-making algorithms to optimize science output. For instance, the scheduler of the Liverpool robotic telescope was designed in 1997 to automate and optimize time allocation into chunks of scripted schedules. The time-allocation strategy was preferred to scheduling at the single-visit level; the latter approach is referred to as optimum scheduling by the authors (steele1997control). The reason for choosing the time-allocation scheme instead of optimum scheduling is stated to be the lack of recoverability of the latter in case of an interruption, because reevaluating the sequence once it is interrupted potentially incurs an excessive computational cost. In this paper, we show that scheduling at the single-visit level can be quickly recovered in a memoryless framework, so optimality does not necessarily need to be sacrificed. Another example is the Las Cumbres Observatory Global Telescope Network (LCOGT), with one of the most advanced telescope scheduling algorithms (Boroson14; Saunders14). LCOGT uses an integer linear programming (ILP) model, solved with the Gurobi solver (gurobi), to optimize the scheduling of observations over a global network of telescopes (Lampoudi15). Due to the success of this approach, the Zwicky Transient Facility at Palomar Observatory (Bellm14) has also adopted an ILP scheduler (GitHub repository: https://github.com/ZwickyTransientFacility/ztf_sim). The ILP scheduling model performs well for observatories where slew overheads are small compared to exposure execution time, and LCOGT is able to schedule observations in long contiguous blocks.
In contrast, the LSST plans to take observations of about 30 seconds with slew times of up to 2 minutes (when there is a filter change); hence it requires a scheduling algorithm that can explicitly minimize the slew times between successive observations. One could use the ILP scheduling approach to schedule large pre-scripted blocks of observations for LSST. The blocks could be set to follow a path that only includes short slews (this is similar to the strategy taken in the scheduler developed by D. Rothchild et al.). The disadvantage of this approach, and of any other pre-scripted schedule, is principally the lack of recoverability from unpredictable conditions, for instance the inability to dodge clouds.

Even given the reliability of fully automatic scheduling technologies, there remain a number of modern telescopes, such as SALT, 2005 (brink2008salt), and ALMA, 2013 (wootten2003atacama), which are operated based only on traditional hand-crafted proposals. ALMA in particular requires a highly regulated structure for proposals that potentially leads to suboptimality, as discussed in (alexander2017enabling), where the authors suggest a number of corrections to the scheduling regulations to provide adaptivity to time-sensitive observations.

The LSST community, however, in addition to a proposal-based scheduler introduced in (delgado2016lsst), has been supporting the design and implementation of proposal-free decision machines, such as the Feature-Based scheduler, first introduced in (naghib2016feature), as well as a semi-scripted cadence by D. Rothchild et al. (2018, in preparation), that explore the possibility of a new generation of schedulers for fast, multi-mission, big-data-collecting instruments.

In Section 2, we first explain the choice of a Markovian framework for the Feature-Based scheduler, and in Sections 2.1 and 2.2 we provide the mathematical details of the scheduler model in that framework. Section 3 presents two approaches for the optimization of the model’s parameters. Section 4 demonstrates the application of the Feature-Based scheduler to the LSST, which is then followed by a comparison between a modified version of the Feature-Based scheduler and LSST’s current official scheduler in Section 5. Finally, Section 6 presents our concluding remarks.

2 Scheduling framework

To run a ground-based telescope with multiple science objectives, such as LSST, the scheduler has to offer controllability, adjustability, and recoverability.

  • Controllability: The high-level science objectives have to appropriately respond to the variation of the model parameters. Otherwise, either the choice of the input information is irrelevant to the mission objectives, or the structure of the scheduler is degenerate. Controllability is necessary for a scheduler to be optimizable by the choice of the model parameters.

  • Adjustability: For a complex, multi-objective mission it is common that the high-level objectives are required to be modified in the middle of the operational period. Adjustments take place according to updates of scientific goals, or changes in the mechanical performance of the system. Regardless of the reason for the adjustments, a scheduler must offer flexibility to be adjusted with a reasonable computational cost, and preferably no expert intervention. Hand-tuned scheduling strategies and verbal policies for instance, are not fully adjustable.

  • Recoverability: The presence of unpredictable factors in the operation of ground-based telescopes is due to natural stochastic processes (such as the weather) and the complexity of the mechanical facility. Unscheduled downtime and instrument failures are examples of the many unpredictable survey interruptions. On the other hand, there are inherently predictable interruptions, such as maintenance downtimes and cable winding, that, due to the complexity of the mechanical system, are not computationally affordable and/or valuable to keep track of; therefore, they are considered stochastic variables as well. Given all the stochastic factors, a scheduler is required to be able to quickly make an alternative decision once a previously unpredicted event occurs. A scripted sequence of decisions, for instance, lacks the recoverability attribute. Also, strategies that need to look back at historical sequences of events, or look forward through possible sequences of events, are not quickly recoverable.

The Feature-Based scheduler is designed based on a Markovian model in which the flow of the input information, the decision procedure, and its relationship to the mission’s objective are coherently expressed. Therefore, controllability of the scheduler is well-defined and verifiable. It is adjustable because the Markovian structure offers an explicit derivation of the design elements from high-level objectives, and it is swiftly recoverable due to the inherent memorylessness of the Markov Decision Process, which, for a decision at any time, only requires the current state of the system.

2.1 Markovian representation

Definition 1.

Let $\{X_t\}_{t = 0, 1, \dots}$ be a stochastic process for which $X_t$ represents the state of the system at time $t$, and let $\mathcal{X}$ be the set of all possible states that the system can take. Let $\mathbb{P}(X_t)$ be the probability distribution of $X_t$ on $\mathcal{X}$; then $\{X_t\}$ is a Markovian process if and only if it satisfies the following memorylessness property,

$$\mathbb{P}(X_{t+1} \mid X_t) = \mathbb{P}(X_{t+1} \mid X_t, X_{t-1}, \dots, X_0),$$

where $\mathbb{P}(X_{t+1} \mid X_t)$ is the conditional probability distribution of the system’s state at $t+1$ given its state at $t$, and $\mathbb{P}(X_{t+1} \mid X_t, X_{t-1}, \dots, X_0)$ is the conditional probability distribution of the system’s state at $t+1$ given all of the states that the system has been in up to and including $t$.

The memorylessness property asserts that the system’s next state only depends on its current state and is independent of its earlier history. This property is, in fact, the main reason for choosing a Markovian framework for the scheduler.

Definition 2.

Let $(\mathcal{X}, \mathcal{A}, P, R, \gamma)$ be a Markovian Decision Process (MDP), where $\mathcal{A}$ is the set of actions, and $P_a(x, x')$ is the transition probability from state $x$ to $x'$, which is equal to $\mathbb{P}(X_{t+1} = x' \mid X_t = x, a_t = a)$, the conditional probability of transition from state $x$ to state $x'$ given action $a$. Finally, the transition reward is denoted by $R_a(x, x')$, and $\gamma \in [0, 1]$ is the discount factor.

Definition 3.

Action $a$ is admissible at $x_t$ if it is feasible, meaning that it is possible to take it at $x_t$, and progressively measurable, meaning that it depends only on the current state of the system $x_t$.

To control the system is to take an action at every decision step $t \in \{0, 1, \dots, T\}$. For a Markovian control, the actions are required to depend only on the current state to preserve the memorylessness property of the closed-loop system, which is why we require the action to be progressively measurable. Notice that the decision steps are not necessarily uniformly spaced in time; their spacing is determined by the time that each transition takes. In this representation, $t = 0$ is the start of the process and $T$ is the finite time horizon of the process.

Definition 4.

A deterministic policy $\pi$ is a mapping that assigns to each state $x \in \mathcal{X}$ an action from the set of all admissible actions at $x$, denoted by $\mathcal{A}(x)$.

A policy provides a time-invariant control law that, for every possible state, suggests an admissible action for the transition to the next state, which automates the control of the system.

Definition 5.

A deterministic optimal policy $\pi^*$ is a solution to the following optimization problem,

$$\pi^* \in \operatorname*{argmax}_{\pi}\; \mathbb{E}\left[\sum_{t=0}^{T} \gamma^{t}\, R_{\pi(X_t)}(X_t, X_{t+1}) \;\middle|\; X_0 = x_0 \right], \qquad (1)$$

where $x_0$ is a given initial state.

In other words, a deterministic optimal policy maximizes the expected discounted sum of the rewards. By discounting later rewards relative to earlier rewards through $\gamma$, we tune the priority of the overall gain versus instant gains.

Proposition 1.

For the Markov decision process of Definition 2, there exists a deterministic optimal policy, and it can be written as follows,

$$\pi^*(x) = \operatorname*{argmax}_{a \in \mathcal{A}(x)} Q(x, a), \qquad (2)$$

where $Q$ is a function of the following form,

$$Q(x, a) = \sum_{x' \in \mathcal{X}} P_a(x, x') \left( R_a(x, x') + \gamma \max_{a' \in \mathcal{A}(x')} Q(x', a') \right). \qquad (3)$$

Proof. See Appendix.

For the telescope scheduler we require the policy to be deterministic because the simulations have to be repeatable for comparison and evaluation purposes; however, it can be shown that the deterministic optimal policy is not only optimal amongst deterministic policies, but also optimal amongst stochastic policies. Therefore the choice of a deterministic policy does not harm the optimality of the control.

As a result of Proposition 1, the solution space of the optimization problem (1) can be reduced from a search over the set of policies (all possible mappings) to a search over $Q$ functions.

2.2 Markovian approximation

For a decision that is inherently time dependent, such as scheduling an observation, only a maximal definition of the system’s state yields a perfect Markovian system, in which case the state space includes all of the possible decision sequences. In particular, LSST requires a sequence of about 1,000 decisions each night; therefore, storing all possible scenarios requires a state space of size $N^{1000}$, where $N$ is the number of possible pointings on the visible sky. No matter how one tessellates the sky, this number of scenarios is neither tractable nor storable in a realistic memory. In order to overcome the curse of dimensionality, we have designed a set of features to summarize the most important information required for the scheduling. Thus, the telescope-environment system is only an approximately Markovian system once its state space is replaced by a feature space.

On the other hand, Proposition 1 shows that the optimal scheduler lies within the set of $Q$ functions instead of the much larger set of all possible mappings. Despite this reduction, Problem (1) is still an infinite-dimensional optimization problem because its variable is a function. To be able to numerically compute the $Q$ function, we propose a parametrized function approximation for $Q$,

$$\hat{Q}(x, a; \vec{w}) = \sum_{i} w_i\, \phi_i(x, a),$$

where $\vec{w}$ is the vector of variables that characterize $\hat{Q}$, and the $\phi_i$’s are the basis functions that are designed to modify the features in order to incorporate astronomical observational knowledge into the decision maker’s structure. With this approximation, the search space is reduced from the space of $Q$ functions to a finite-dimensional vector space. This approximation substitutes the original optimal policy (2) with the following approximate policy,

$$\hat{\pi}(x) = \operatorname*{argmax}_{a \in \mathcal{A}(x)} \hat{Q}(x, a; \vec{w}^*), \qquad (4)$$

where $\vec{w}^*$ is a solution to the following optimization problem, in which the policy $\pi_{\vec{w}}$ is fully determined by $\vec{w}$,

$$\vec{w}^* \in \operatorname*{argmax}_{\vec{w}}\; \mathbb{E}\left[\sum_{t=0}^{T} \gamma^{t}\, R_{\pi_{\vec{w}}(X_t)}(X_t, X_{t+1}) \;\middle|\; X_0 = x_0 \right]. \qquad (5)$$

3 Scheduler optimization

Given the approximated optimal policy in Equation (4), the only remaining fundamental step to obtain a scheduler is to find $\vec{w}^*$ by solving Problem (5). The following two sections describe two different approaches to finding $\vec{w}^*$. The first approach requires a specific class of high-level mission objective function, and is faster. The second approach is applicable to all types of high-level mission objective functions, but requires more computational resources.

3.1 Reinforcement Learning

Assume that there exists a well-defined notion of an instant reward $r_{t+1}$ for each state transition; then, by the definition given in Equation (3), $Q$ satisfies

$$Q(x_t, a_t) = \mathbb{E}\left[\, r_{t+1} + \gamma \max_{a'} Q(X_{t+1}, a') \,\right]. \qquad (6)$$

Accordingly, for the parametrized function, we require the following,

$$\hat{Q}(x_t, a_t; \vec{w}) = \mathbb{E}\left[\, r_{t+1} + \gamma \max_{a'} \hat{Q}(X_{t+1}, a'; \vec{w}) \,\right]. \qquad (7)$$

In reinforcement learning, the main idea for finding $\vec{w}^*$ is to start the process of decision making with an arbitrary set of variables $\vec{w}_0$, make the decisions according to Policy (4) with the associated $\hat{Q}$, and then update the variables at each decision step so that the linear approximation $\hat{Q}$ gradually respects Equation (7) for all $t$.

Note that at $t+1$, after the transition from $x_t$ to $x_{t+1}$, we have the value of $\hat{Q}(x_t, a_t; \vec{w}_t)$ already evaluated in the decision-making procedure, and $Q(x_t, a_t)$ can be approximated by $r_{t+1} + \gamma \max_{a'} \hat{Q}(x_{t+1}, a'; \vec{w}_t)$, which is also evaluated in the decision-making process, where $\vec{w}_t$ is the last version of the optimization variables at $t$. Using the desired value given in Equation (7), the update is as follows,

$$\hat{Q}(x_t, a_t) \leftarrow (1 - \alpha_t)\, \hat{Q}(x_t, a_t; \vec{w}_t) + \alpha_t \left( r_{t+1} + \gamma \max_{a'} \hat{Q}(x_{t+1}, a'; \vec{w}_t) \right), \qquad (8)$$

in which $\alpha_t \in (0, 1]$ is the learning rate. The first term on the right-hand side of the equation is the latest approximated value of the $Q$ function associated with $(x_t, a_t)$, contributing with weight $1 - \alpha_t$, and the second term is the value of the $Q$ function according to Equation (7), contributing with weight $\alpha_t$. Clearly, for smaller $\alpha_t$’s this update imposes smaller adjustments. Accordingly, the updates of the variables can be expressed as follows,

$$\vec{w}_{t+1} = \vec{w}_t + \alpha_t \left( r_{t+1} + \gamma \max_{a'} \hat{Q}(x_{t+1}, a'; \vec{w}_t) - \hat{Q}(x_t, a_t; \vec{w}_t) \right) \vec{\phi}(x_t, a_t). \qquad (9)$$

This learning method is called Temporal-Difference (TD) learning with function approximation (tsitsiklis1997analysis). Variants of this reinforcement-learning method have been successfully applied to real-life problems such as the training of a backgammon player (tesauro1995temporal).
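The TD update loop can be sketched in a few lines. The following is a minimal illustration of TD learning with a linear Q-function approximation, not the authors' implementation: the environment interface (`reset`, `feasible_actions`, `features`, `step`) is a hypothetical stand-in for the telescope-environment simulator, and a small amount of random exploration is added so that rarely chosen actions are still visited during training.

```python
import numpy as np

def td_train(env, n_features, n_steps, gamma=0.9, alpha=0.1,
             epsilon=0.3, seed=0):
    """TD learning with a linear Q-function approximation (cf. Eqs. 7-9).

    `env` is a hypothetical environment interface assumed to provide:
      env.reset()                  -> initial state
      env.feasible_actions(state) -> non-empty list of admissible actions
      env.features(state, action) -> ndarray phi(x, a) of shape (n_features,)
      env.step(action)            -> (next_state, instant_reward)
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)  # arbitrary initial weights w_0
    state = env.reset()
    for _ in range(n_steps):
        actions = env.feasible_actions(state)
        phis = [env.features(state, a) for a in actions]
        qs = [w @ phi for phi in phis]
        # Mostly follow the greedy policy (cf. Eq. 4), with occasional
        # exploration so that all actions keep being sampled.
        if rng.random() < epsilon:
            k = int(rng.integers(len(actions)))
        else:
            k = int(np.argmax(qs))
        phi, q_sa = phis[k], qs[k]

        next_state, reward = env.step(actions[k])

        # Bootstrapped target from Eq. (7): r + gamma * max_a' Q(x', a'; w)
        q_next = max(w @ env.features(next_state, a)
                     for a in env.feasible_actions(next_state))

        # Eq. (9): w <- w + alpha * (target - Q(x, a; w)) * phi(x, a)
        w += alpha * (reward + gamma * q_next - q_sa) * phi
        state = next_state
    return w
```

With one-hot features this reduces to tabular Q-learning; with the scheduler's basis functions as features, the same loop learns the weight vector that Policy (4) uses.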

Note that to be able to use the TD reinforcement-learning method, it is necessary to have a well-behaved notion of the reward that reflects the instant gain of any decision at all of the decision steps. Moreover, the discounted sum of the instant rewards has to reflect the objective of the mission. For instance, in the LSST scheduling problem, after each visit, the negative of the slew time is a well-defined instant reward that reflects how time-efficiently the telescope is being used. This, however, does not reflect all aspects of the mission’s objective, such as the need to re-observe a field within a valid time window (explained in Section 4), for which there is no equivalent instant reward. For this reason, we also implemented a black-box function optimizer for the LSST scheduler that does not require a notion of instant reward and directly optimizes the mission’s objective function over a limited episode of the simulated scheduling.

3.2 Global Optimization

In the absence of a well-defined instant reward, instead of solving Problem (5), the following problem can be solved,

$$\vec{w}^* \in \operatorname*{argmax}_{\vec{w}}\; U(\pi_{\vec{w}}), \qquad (10)$$

where $U(\pi_{\vec{w}})$ is a utility function that measures the performance of the scheduler on a simulated episode of the operation from $t = 0$ to $T$ under policy $\pi_{\vec{w}}$. In this approach, a general $U$ cannot be explicitly expressed as a function of $\vec{w}$; therefore, a global optimizer that can maximize a black-box function is required. Evolutionary optimizers have successfully been applied to numerous real-life problems involving black-box function optimization, and specifically to astronomical mission planning such as the scheduling of the Exoplanet Characterisation Observatory (garcia2015artificial). We used an entropy-based adaptive variant (naghib2016entropic) of the Differential Evolution (DE) algorithm (storn1997differential). DE is generally one of the most efficient evolutionary algorithms, and this variant uses a notion of entropy to automatically preserve the diversity of the candidate solutions. As a result, in contrast with plain DE, it does not require the user to choose any tuning parameters for the algorithm, which is the most time-consuming task in using an evolutionary optimizer. In addition, this variant, like any other evolutionary algorithm, is highly parallelizable, and its computational time can be decreased almost linearly with the number of computational cores.
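As a concrete illustration, the sketch below implements the basic DE/rand/1/bin scheme that such adaptive variants build on; the entropy-based adaptation of (naghib2016entropic) is not reproduced here. The `utility` argument stands in for $U(\pi_{\vec{w}})$: a black-box function that simulates a scheduling episode under the policy determined by a weight vector and returns a scalar score.

```python
import numpy as np

def differential_evolution(utility, dim, pop_size=20, n_gen=100,
                           f_weight=0.8, crossover=0.9,
                           bounds=(-1.0, 1.0), seed=0):
    """Plain DE/rand/1/bin, maximizing a black-box utility over weight vectors."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([utility(w) for w in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + f_weight * (pop[r2] - pop[r3])
            # Binomial crossover, forcing at least one mutant coordinate.
            mask = rng.random(dim) < crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            s = utility(trial)
            if s >= scores[i]:  # greedy one-to-one selection (maximization)
                pop[i], scores[i] = trial, s
    best = int(np.argmax(scores))
    return pop[best], scores[best]
```

Since each `utility` call is an independent simulated episode, the inner loop parallelizes naturally across cores, which is the property the text relies on for training at scale.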

4 Problem of Scheduling for LSST

Table 1. Key terms and notations used in the definition of the features and the basis functions.

field: a fixed pointing configuration on the sky, such that the visible sky can be completely covered by pointing the telescope toward all of those directions,
$N$: total number of the fields,
$t$: Coordinated Universal Time (UTC),
$t_{\mathrm{start}}$ ($t_{\mathrm{end}}$): beginning (end) of the night that $t$ lies within,
$t_{\mathrm{rise}}(i,F)$ ($t_{\mathrm{set}}(i,F)$): rising (setting) time of field-filter $(i,F)$ above (below) the acceptable airmass horizon on the current night,
$t_{\mathrm{last}}(i,F)$: time of the last visit of field-filter $(i,F)$ before $t$,
$\mathrm{id}(t)$: ID number of the field that is visited at $t$,
$F(t)$: camera’s filter at $t$,
$n(i,F)$: total number of the visits of field-filter $(i,F)$ before $t$,
$t_{\mathrm{slew}}(i,j)$: slew time from field $i$ to field $j$, in seconds,
$t_{\mathrm{settle}}(i,j)$: mechanical settling time after slewing from field $i$ to field $j$,
$t_{\mathrm{filter}}$: time needed to change the filter, a constant value of about 2 minutes,
$t_{\mathrm{dome}}(j)$: time needed to move the dome to make field $j$ visible to the telescope,
$\mathrm{ha}(i,t)$: hour angle of the center of field $i$ at $t$, in hours,
$\mathrm{am}(i,t)$: airmass of the center of field $i$ at $t$,
$b(i,t)$: brightness of the sky at the center of field $i$ at $t$,
$s(i,t)$: seeing of the sky at the center of field $i$ at $t$,
$k$: atmospheric extinction coefficient,
$W$: given constant time window within which a revisit is valid.

The LSST’s mission is to uniformly scan the visible sky within five different regions, shown in Figure 1. Each region, also referred to as a survey, has certain science-driven goals and constraints, defined and precisely described in (ivezic2008large).

Figure 1. Regions of the sky with different requirements and constraints for scheduling: (1) Galactic Plane Region (GP), (2) Universal or Wide Fast Deep (WFD), (3) South Celestial Pole (SCP), (4) North Ecliptic Spur (NES), and (5) Deep Drilling Fields (DD).

The notion of features enables the scheduler to systematically fetch all of the various requirements and turn them into comparable quantities for the purpose of decision making in the Markovian framework. The proposed feature space of the LSST contains seven features, each of which can be evaluated given a field $i$, a filter $F$, and a time $t$. The fields discretize the visible sky through a fixed partitioning. Each field can be captured through a single visit, and there are six possible filters, $F \in \{u, g, r, i, z, y\}$, for each visit. Finally, the time domain is discretized by the natural timing of the process; in other words, the time intervals at which a decision has to be made are heterogeneous. Given that a consecutive visit of the same field-filter is not allowed in the main survey, there is a nonzero slew time between any two decisions. On the other hand, the operation is over a limited time horizon $T$; thus the number of decision time steps is finite. In conclusion, a finitely discretized sky, a finite number of filters, and a finite number of time steps pose a finite feature space. The implication of the policy stated in Equation (4) for the LSST scheduler would then be as follows,

$$(i, F)^*(t) = \operatorname*{argmax}_{(i, F) \in \mathcal{A}(t)} \sum_{k} w_k\, \phi_k(i, F, t), \qquad (11)$$

where the features evaluated at $t$ form the 7-dimensional state, and $(i, F)$ is a feasible pair of field-filter. Section 4.3 introduces the constraints under which a field-filter pair is feasible at $t$, and lists the feasibility conditions; accordingly, $\mathcal{A}(t)$ is the set of all field-filter pairs that are feasible at $t$.
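One decision step of Policy (11) can be sketched as follows. The `basis_fns` and `is_feasible` callables are hypothetical stand-ins for the scheduler's basis-function and feasibility modules; only the argmax structure of the policy is shown.

```python
import numpy as np

FILTERS = ["u", "g", "r", "i", "z", "y"]

def next_visit(weights, basis_fns, fields, is_feasible, t):
    """Pick the feasible (field, filter) pair maximizing the weighted sum
    of basis-function values, as in Policy (11)."""
    best_pair, best_score = None, -np.inf
    for field in fields:
        for filt in FILTERS:
            if not is_feasible(field, filt, t):
                continue  # infeasible pairs are excluded from the argmax
            score = sum(w * b(field, filt, t)
                        for w, b in zip(weights, basis_fns))
            if score > best_score:
                best_pair, best_score = (field, filt), score
    return best_pair
```

In practice the features and basis functions would be evaluated vectorially over all pairs, but the scalar loop makes the policy's structure explicit.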

For a modular approach to the implementation of the scheduler, the expected values of the basis functions are evaluated in separate modules. The basis functions that address the environmental parameters are developed by the LSST community: see (2014SPIE.9145E..1AG) for the parameters that capture the status of the LSST site, (sebag2008lsst) and (sebag2007lsst) for the cloud-cover measurements that were used to develop a cloud model, and (yoachim2016optical) for the sky-brightness model. Generally speaking, making a decision for a visit at $t$ in the LSST scheduling problem is mainly determined by the following factors,

  1. The amount of time it takes to redirect the telescope and the dome from one target to the next.

  2. The short-term science-driven requirements, such as the same-night revisit of a field.

  3. The long-term mission-driven requirements, such as maintaining a uniform coverage of all field-filter pairs within each region.

  4. The relative quality of visiting a field-filter compared to other field-filters at the time of observation, for overall efficiency of the operation.

  5. The general preference for observing the fields around the meridian.

Accordingly, the basis functions of the LSST scheduler are designed to formalize the above factors. For the full definition and description of the basis functions of the Feature-Based scheduler for LSST, refer to Section 4.2.

The last step is to implement the training procedures (described in Section 3) on the LSST model to derive $\vec{w}^*$. Training of the LSST scheduler is explained in Section 4.4 with two sample objective functions. However, the LSST community, and in principle any individual, can design their own mission objective function, whether or not it admits well-defined instant rewards, then train the scheduler through our open-source training code and find a new set of weights that in principle leads to a different behavior of the scheduler, reflecting the different objectives.

4.1 Features of the telescope scheduler

In designing the features, it is important to avoid redundancy in the information that the features contain. It is also critical to take a modular approach in the delivery of the information to the decision procedure. For instance, consider the amount of time, $t_{\mathrm{cost}}$, it takes for a telescope to move on from a visit. In the LSST problem, this mainly depends on the slew time, the mechanical settling time, the dome placement time, and the time it takes to change the filter. All of these timings are available through a precise simulation of the LSST model (delgado2014lsst). A modular design brings the combination of the operational timings to the stage of the decision, instead of bringing them in separately as different features. This approach makes the implementation significantly simpler and more readable, and, at a conceptual level, makes it possible to track the effect of the operational timing on the overall outcome of the decision maker. In particular, since the operational cost is independent of the amount to which each cause contributes to it, bringing the timings of the separate procedures into the decision-making level separately adds unnecessary complications to the design and, consequently, makes it hard to back-track the output-input behavior of the scheduler.
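As an illustration of this modular combination, the sketch below folds the separate operational timings into a single cost. The combination rule (telescope repointing and dome relocation proceeding in parallel, with the larger of the two dominating) is an assumption for illustration and may differ from the LSST simulator's exact model.

```python
def t_cost(t_slew, t_settle, t_filter, t_dome, filter_changed):
    """Single operational-cost quantity delivered to the decision stage.

    Assumed combination rule (illustrative): telescope motion (slew + settle,
    plus a filter change when needed) runs in parallel with dome relocation,
    so the visit overhead is the larger of the two. All times in seconds.
    """
    telescope = t_slew + t_settle + (t_filter if filter_changed else 0.0)
    return max(telescope, t_dome)
```

The decision maker then sees one number per candidate visit, rather than four separate timing features whose individual contributions it would have to weigh redundantly.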

This section proposes seven features for the description of the LSST-environment state in Table 2, with the key terms and notations defined in Table 1. They are designed to efficiently carry the determining information with a modular approach. Each feature is denoted by $f_k$ for $k = 1, \dots, 7$, and is indexed by the triplet of field, filter, and time, $(i, F, t)$. To make a decision at $t$, the scheduler in principle computes all seven features for all of the $(i, F)$ pairs; however, some features do not change at every time step (for instance, if field $i$ is not visited at $t$, its visit-related features remain unchanged). For such cases, the implementation has a categorized updating procedure to avoid redundant computations.

Table 2. Features of the approximated Markovian model for the telescope-environment system. Features provide a memoryless, approximate description of the system’s state.

Notation: Definition

$f_1(i,F,t)$ ($t_{\mathrm{cost}}$): either the time required to point the telescope to field $i$ and change the filter to $F$, or the time required to relocate the dome to make field $i$ visible, whichever is larger.

$f_2(i,F,t)$: the total number of the same-night visits of field-filter $(i,F)$ until $t$.

$f_3(i,F,t)$: $t - t_{\mathrm{last}}(i,F)$, the time since the last same-night visit of $(i,F)$.

$f_4(i,F,t)$: the remaining time for field-filter $(i,F)$ to become invisible, either by passing the airmass or the moon-separation limit, or by being covered by temporary objects such as clouds, as projected at $t$.

$f_5(i,F,t)$: co-added depth, a measure of the cumulative quality of the past visits of field-filter $(i,F)$ until $t$.

$f_6(i,F,t)$: 5$\sigma$-depth, a measure of the quality of visiting field-filter $(i,F)$ at $t$, depending on seeing, sky brightness, and airmass; its definition involves a scaling coefficient.

$f_7(i,F,t)$: hour angle of field $i$ at $t$.

4.2 Basis functions of the telescope scheduler

Basis functions are fully determined by the values of the features. Each basis function is indexed by a triplet of time, field, and filter, and is evaluated at every decision-making step for all field-filter pairs $(i, F)$. Similar to the update procedure of the features, for a decision at time $t$, all six basis functions should be evaluated for all pairs $(i, F)$, except for the field-filters that are infeasible at $t$. Feasibility of a field-filter can be evaluated after the evaluation of the features, by applying the constraints of the region that the field belongs to (see Section 4.3 for the list of constraints). Thus, while it is required to evaluate the features for all possible pairs at all decision steps, the number of basis-function evaluations is approximately a factor of three smaller than the number of feature evaluations.

Common basis functions are shared amongst all of the regions. They are designed to reflect the five general decision factors described in Section 4. The exact definitions of the common basis functions are given in Table 3, and the key terms and notations can be found in Table 1. Note that in the definition of the visiting-quality basis function, the scale factor is empirically chosen so that 80% of the values of this basis function observed for the visited states in the simulations lie between 0 and 1. Without loss of generality, scaling the values of the basis functions is a normalization that improves the rate of training convergence by keeping the solution path on a uniform numerical scale.

\H@refstepcounter

table \hyper@makecurrenttable

Table 0. \Hy@raisedlink\hyper@@anchor\@currentHrefBasis functions of the Feature-Based scheduler, the building blocks of the decision function.

Notation Definition\Description

, the cost of the required time for visiting field-filter .

,

reflects the short-term visit/revisit priority of field , conditioned on the total number of previous same-night visits.

, reflects the long-term visit priority of field-filter , based on the ratio of its co-added depth to the maximum co-added depth over all field-filter pairs until .

, empirical complementary CDF of -depth of all pairs at . assigns a cost to field-filter based on its relative visiting quality compared to the other field-filter pairs at .

, encourages visiting of the fields near the meridian.

As mentioned in Section 4, the LSST’s mission poses different requirements on different regions of the sky. First we modify for the Wide Fast Deep (WFD) and North Ecliptic Spur (NES) regions, because these regions require the telescope to observe a field twice on the same night, within a valid time window, . The following modification prioritizes the fields that have received a first visit, but not yet a second visit.

where , if , to rank the fields that have received their first visit of the night (hence the condition). In all other cases, is zero.
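The pair-visit modification can be sketched as follows: a field with exactly one visit tonight gets top priority (zero cost) while the revisit window remains open, and a neutral cost otherwise. The window bounds and cost values are illustrative assumptions, not the values used for the LSST.

```python
def pair_priority(n_visits_tonight, since_first_visit_min, window=(15.0, 60.0)):
    """Hypothetical sketch of the modified basis function: prioritize
    completing a same-night pair. Times are in minutes; the window
    bounds are placeholders."""
    lo, hi = window
    if n_visits_tonight == 1 and lo <= since_first_visit_min <= hi:
        return 0.0   # strongly encourage the second visit of the pair
    return 1.0       # no pair pending: ordinary cost
```

A field with no visits, a completed pair, or an expired window falls back to the ordinary cost.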

For the Deep Drilling Field (DDF) survey, which covers only a very small fraction of the visible sky, instead of adjusting the basis functions we treat the DDFs as interruptions to the scheduler’s operation (with each interruption comprising a sequence of DDF observations). Fortunately, the recoverability attribute of the Feature-Based scheduler allows the scheme of interruptions to be part of the decision-making procedure, as long as the interruptions are not too frequent.

4.2.1 Controllability of the scheduler

As discussed in Section 2, the mission’s objective has to be controllable via the design parameters, . In this section we present an empirical examination of the controllability of the mission’s objective. If the value of the objective function does not respond sufficiently to variations of the design parameters, it is a sign of a poor choice of basis functions and/or input information; either of these factors leads to a structure that does not admit a sufficiently optimal solution. If, on the other hand, the objective function is extremely variable with respect to changes in the design parameters, the solution of the training is not reliable, because the objective is not a well-behaved function of the optimization variables.

Figure 4.2.1 shows a few one-dimensional slices of the two simple objective functions (12) and (13), evaluated after about five hours of scheduling simulation, with different ’s in the range (the ’s are the scheduler parameters and determine its behavior). To observe the variability of the objective function with changes in the th basis function, we define a sequence of equidistant values for and keep the other ’s fixed, run the scheduler for hrs with all resulting ’s, which differ only in the th element, and then evaluate and , defined as follows: reflects the slew-time, , and airmass, , aversion, and reflects the time-efficiency of the operation by counting the total number of observations.

(12)
(13)
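The one-dimensional sweep described above can be sketched as follows. Here `run_schedule` is a placeholder for the five-hour scheduling simulation, and the grid bounds are illustrative; only the sweep mechanics are the point.

```python
import numpy as np

def run_schedule(w):
    # placeholder objective: a smooth, bounded stand-in for F1 or F2
    return -np.sum((w - 5.0) ** 2)

def slice_objective(w0, k, grid):
    """Evaluate the objective along the k-th coordinate, holding the
    other design parameters fixed at w0."""
    values = []
    for wk in grid:
        w = w0.copy()
        w[k] = wk            # vary only the k-th parameter
        values.append(run_schedule(w))
    return np.array(values)

w0 = np.full(5, 5.0)                  # mid-range starting point
grid = np.linspace(0.0, 10.0, 11)     # equidistant values for the k-th parameter
vals = slice_objective(w0, 2, grid)
```

Plotting `vals` against `grid` for each `k` yields exactly the kind of one-dimensional slices shown in the figure.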

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefOne-dimensional slices of the two simple objective functions (12) and (13). The variation of the objective functions, especially in the mid-range slices (solid line), suggests that the scheduler’s performance is controllable with the design parameters within the proposed range of the search. This holds for performance measured with either or .

Figure 4.2.1 contains slices of the 5-dimensional and . Both simple objective functions respond reasonably to changes in all five dimensions of the variable , which is evidence of the controllability of the objective function. Moreover, the smaller variations of the slices closer to the boundaries of the search space suggest that the design and scaling of the basis functions provide the desired behavior within the proposed search space.

4.3 Survey-specific constraints

The scheduler’s decision at each time step is an admissible (feasible and measurable) field-filter pair , thus before each decision one needs to specify the set of feasible actions. Feasibility of a candidate pair is determined by the following measurable factors:

  • Visibility: The candidate field-filter has to be visible.

  • Quality: The expected observational quality of a field-filter has to be better than the given lower threshold.

  • Survey’s timing: The science-driven revisit constraints have to be respected.

Exact expressions of the proposed constraints for the LSST scheduler are presented in Table 4.3.

\H@refstepcounter

table \hyper@makecurrenttable

Table 0. \Hy@raisedlink\hyper@@anchor\@currentHrefFeasibility of field-filter for a visit at , as evaluated at .

Constraints Description region
1

field-filter has to be above the acceptable airmass horizon at

All regions
2

field-filter is not temporarily masked (e.g. by the moon) at .

All regions
3

poses a region-dependent upper bound on the number of visits for each field. , and

All regions
4

the expected quality of visiting field-filter at has to be better than the given threshold, , which depends on the survey and the filter.

All regions
5

consecutive visits of the same field are not allowed.

All regions
6

if then

the first visit of field has to occur time before it becomes invisible, so that the second visit of can be scheduled in the valid time window.

WFD and NES
7

if then

if there has been a same-night visit of field until , then the next same-night visit has to occur in the valid time window.

WFD and NES
8

if

then

if there is a same-night visit of field until , then the next same-night visit cannot be with either of u or y filters.

WFD
9

visits with the u and y filters are not allowed.

NES
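The feasibility test of Table 4.3 amounts to requiring that every applicable constraint passes before a field-filter pair enters the action set. The sketch below illustrates this conjunction; the thresholds, field names, and the simplification of the pair-timing constraints to a single flag are all assumptions for illustration.

```python
def is_feasible(c, region):
    """Hypothetical feasibility check: a candidate record c is admissible
    only if all constraints for its region hold. Thresholds are placeholders."""
    checks = [
        c["airmass"] <= 1.5,              # 1: above the acceptable airmass horizon
        not c["masked"],                  # 2: not temporarily masked (e.g. by the moon)
        c["n_visits"] < c["visit_cap"],   # 3: region-dependent visit cap
        c["depth"] >= c["depth_min"],     # 4: expected quality threshold
        not c["just_visited"],            # 5: no consecutive same-field visit
    ]
    if region in ("WFD", "NES"):
        # 6-7: pair timing constraints, collapsed to one flag in this sketch
        checks.append(c["pair_window_ok"])
    return all(checks)

candidate = {"airmass": 1.2, "masked": False, "n_visits": 3, "visit_cap": 10,
             "depth": 24.5, "depth_min": 24.0, "just_visited": False,
             "pair_window_ok": True}
```

A single failed check removes the pair from the feasible set, so the constraints act as hard filters rather than soft costs.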

4.4 Scheduler optimization

In this section, we present two simple choices of high-level mission objectives to demonstrate the application of the proposed optimization approaches discussed in Section 3. (More sophisticated mission objective functions can be defined based on LSST performance studies such as (2016AJ.151..172G), (2018AJ.155..1G), (2017AJ.153..186J), and (2012AJ.144..9O).) The choice of optimization algorithm depends on the nature of the mission objective. The first experimental mission objective function in this section can be expressed as a discounted sum of instant rewards , thus reinforcement learning is applied to find the scheduler’s parameters . The second objective function cannot be decomposed as a discounted sum of instant rewards, thus we used the global-optimizer approach. From the computational point of view, the first approach is preferred: for the following experiment, reinforcement learning is about 10 times faster than global optimization, and requires 50 times less memory. From the practical point of view, however, for some missions it is impossible to define an objective function that can be expressed as a discounted sum of instant rewards, in which case the mission objective can only be optimized via the global optimizer.

In the following experiments, for both optimizations we used a simulated model of the telescope (2014SPIE.9150E..14C) and of the environment, including the brightness of the sky and the coverage of the clouds, developed based on measurements at the LSST site.

4.4.1 Reinforcement learning for the first choice of mission objective.

Let the instant reward, , be . It is defined as a linear combination of the slew time required to point the telescope from the th field to the th field, and the airmass of the destination. Since both factors have a negative effect on the quality of the observation, the reward is measured by the negative of each. The mission objective function can then simply be defined as .
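The instant reward above can be sketched as a negated weighted sum; the weights below are hypothetical, not the coefficients used in the paper.

```python
def instant_reward(slew_time_sec, airmass, w_slew=1.0, w_airmass=10.0):
    """Sketch of the instant reward: both slew time and airmass degrade
    observation quality, so the reward is the negative of their weighted
    sum. The weights are illustrative placeholders."""
    return -(w_slew * slew_time_sec + w_airmass * airmass)
```

Under this convention a shorter slew or a lower-airmass destination always yields a larger (less negative) reward.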

The simulation for the reinforcement learning starts at (2021 January 1), with initialized at the mid-range values, and continues until converges. Figure 4.4.1 shows the training curve for all of the variables over a course of 3000 decisions, . The discount rate and learning rate are chosen empirically.
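Schematically, the training loop makes a small reward-driven adjustment to the parameters after every decision, as in the toy loop below. This is a stand-in under stated assumptions: the basis values and reward are synthetic, discounting is omitted, and the update rule is a generic reward-weighted step, not the paper's exact learning rule.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.full(5, 5.0)    # initialize at the mid-range value, as in the text
alpha = 0.01           # learning rate (illustrative)

for step in range(3000):
    basis = rng.random(5)                       # basis values of the chosen action
    reward = -basis @ w + rng.normal(0.0, 0.1)  # toy reward signal
    w += alpha * reward * basis                 # small adjustment after each decision
    w = np.clip(w, 0.0, 10.0)                   # keep parameters in the search range
```

Each decision is taken with a slightly different `w`, and training stops once the adjustments become negligible, mirroring the convergence behavior in Figure 4.4.1.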

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefReinforcement learning of the scheduler’s parameters, . All of the parameters are initialized at the mid-range value, 5. During the simulation, at each step there is a reward associated with the decision which implies a small adjustment on each of the five parameters. Then the next decision will be taken with a slightly different set of variables. This procedure continues until the adjustments on the variables are negligible.

For the above choices of reward, learning rate, discount factor, and initialization, converges to . In this approach, the computational time to evaluate the update is negligible compared with the time required to evaluate the features; therefore, the computational time of the reinforcement learning is almost equal to that of the scheduling simulation (without the updates on ), which is linear in the number of decisions. On a personal computer (1.6 GHz processor, 1600 MHz DDR3 memory), each decision takes about second, thus the convergence time for the simulation presented in this section is .

4.4.2 Global optimization for the second choice of mission objective.

An important simple objective function that cannot be expressed as a discounted sum of rewards is the total number of observations from any to any , which can be expressed as a utility function, .

To find a set of parameters, , that optimizes the above objective function, we applied the global optimization approach explained in Section 3, with the following regulatory constraints.
(1) : positive coefficients are assumed in the design of the basis functions, because in the context of telescope scheduling it is more natural to create basis functions that reflect the cost of the operation. (2) : without loss of generality, we fix the value of the first element of to reduce the dimension of the optimization problem by one, because homogeneity of the policy implies that if yields an optimal scheduler, then for yields an optimal scheduler too.
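A minimal differential-evolution sketch under these two regulatory constraints follows: all coefficients are kept non-negative, and the first element is held fixed (here, arbitrarily, at 1.0) to remove the scaling degree of freedom. The objective is a smooth placeholder for the 10-night observation count, and the DE hyperparameters are illustrative.

```python
import numpy as np

def objective(w):
    return -np.sum((w - 3.0) ** 2)   # toy surrogate to be maximized

def de_optimize(n_dim=5, pop_size=20, n_iter=50, F=0.8, CR=0.9, seed=0):
    """Hand-rolled DE/rand/1 sketch with the paper's regulatory constraints:
    w >= 0 (via clipping to the search box) and w[0] fixed."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 10.0, (pop_size, n_dim))
    pop[:, 0] = 1.0                                # fix the first coefficient
    fit = np.array([objective(p) for p in pop])
    for _ in range(n_iter):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.where(rng.random(n_dim) < CR, a + F * (b - c), pop[i])
            trial = np.clip(trial, 0.0, 10.0)      # enforce non-negativity and bounds
            trial[0] = 1.0                         # keep the fixed element
            f = objective(trial)
            if f > fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmax()], fit.max()

best_w, best_f = de_optimize()
```

In the real setting each `objective` call is a full scheduling simulation, which is why the evaluations dominate the runtime and parallelize naturally over the population.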

We used the above objective function, , for (2021 January 1) and (2021 January 11). Figure 4.4.2 shows the value of this objective function over the iterations of the DE algorithm. The solution yields the best after 50 iterations for .

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefProgress of the black-box objective function over the iterations of the DE algorithm. , for this simulation, is the total number of observations for 10 nights starting from (2021 January 1).

DE is a population-based metaheuristic algorithm; for the result shown in Figure 4.4.2, the population size is set to 50. Each function evaluation is, in fact, a simulation of 10 days of scheduling with a candidate scheduler, which takes about 8 minutes; therefore each iteration takes about minutes in total on a personal computer. The optimization can be terminated manually if the result is satisfactory, or continued until full convergence is achieved. In DE (and all genetic algorithms in general), the function evaluation for each individual is independent of the other individuals, therefore a parallel implementation of the same algorithm can be faster by up to a factor of .

5 Performance of a Modified Feature-Based scheduler for LSST

In this section, the LSST Metric Analysis Framework (jones2014lsst) is used to compare the performance of a modified version of the Feature-Based scheduler with opsim V4 and opsim V3, the most recent baseline schedules of the LSST, over a 10-year period of scheduling simulations. (The sky background models and weather downtime used to benchmark the algorithms are not exactly identical, because of the practical difficulty of separating the environment from the baseline scheduler implementation at the time. However, for the purpose of the comparisons in this paper, the behavior of our sky and observatory model is sufficiently close to the official model; see (2016SPIE.9910E..13D) and (2016SPIE.9911E..25R) for the official operations simulator.)
The Modified Feature-Based scheduler is under active development (GitHub repository: https://github.com/lsst/sims_featureScheduler), and addresses the observational details of the LSST’s mission through adjustment of the constraints and the basis functions. It is designed to produce software that can be used in practice.

The default sky tessellation adopted in the baseline scheduler results in 23% of the sky being covered by more than one field. In the Modified Feature-Based scheduler, we adopt a finer discretization of the sky, and do not require the partitions to be sized to the telescope’s field of view. Because the policy is not computationally expensive to evaluate, it is possible to use a finer discretization of the feature space. This approach allows the scheduler to handle the field overlaps that cause inhomogeneity in the coverage of the sky.

In addition to adopting a finer discretization, we use a spatial dithering scheme that randomizes the final pointing of the telescope by a small amount around the centers of the partitions, to further assist the homogeneity of the coverage. Adopting the dithering scheme, the median number of observations at a typical point in the sky increases by %. Dithering is also essential for removing systematic effects in science cases such as measuring galaxy counts (see (Awan2016) for more details). Moreover, the Modified Feature-Based scheduler uses a separate process to track whether an observation needs to be observed in a pair, and a separate process to decide whether a Deep Drilling sequence should be executed by interrupting the normal operation of the telescope.
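A dithering scheme of this kind can be sketched as a small random offset drawn uniformly from a disk around the partition center. The offset radius below is a placeholder, not the LSST's actual dither scale, and the coordinate handling uses a small-angle approximation.

```python
import numpy as np

def dither(ra_deg, dec_deg, max_offset_deg=0.3, rng=None):
    """Offset a pointing by a small random amount around the partition
    center. Uniform over a disk; RA offset is scaled by cos(dec) so the
    on-sky displacement is isotropic (small-angle approximation)."""
    if rng is None:
        rng = np.random.default_rng()
    r = max_offset_deg * np.sqrt(rng.random())   # sqrt gives uniform area density
    theta = 2.0 * np.pi * rng.random()
    return (ra_deg + r * np.cos(theta) / np.cos(np.radians(dec_deg)),
            dec_deg + r * np.sin(theta))

ra, dec = dither(150.0, -30.0, rng=np.random.default_rng(2))
```

Applying such an offset after each decision smooths out the repeated field boundaries that otherwise imprint on the co-added depth map.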

5.1 Sky coverage uniformity

For a survey telescope such as the LSST, the density of the co-added depth over the visible sky should ideally be uniform in each filter and within each of the five survey regions. Figure 5.1 compares the values of the co-added depth on a discretized sky map, and demonstrates the smoothness of the coverage at a smaller scale for opsim V4, with and without dithering, compared with that of the Modified Feature-Based scheduler in the r band, all around the boundary of the WFD and GP regions. The smoother coverage that the Modified Feature-Based scheduler offers is due to the fine discretization of the sky in the decision-making stage, in addition to the dithering that is applied after the decision is made.

Figure 5.1 compares the distributions of the co-added depth on a (finely) discretized sky. The Modified Feature-Based scheduler has flattened the left-most peak that appears in the distribution of opsim V4; this peak is the result of field overlaps that receive more visits than specified in the configuration of the scheduler. Table 5.1 contains the median and variance of the co-added depth for both schedules in each of the main sky regions and in each filter. The Modified Feature-Based scheduler provides deeper (higher median) and more uniform (lower variance) coverage in most cases.
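The uniformity comparison reduces to two summary statistics per region and filter: the median (depth) and the variance (homogeneity) of the co-added depth over the sky pixels. The depth values below are synthetic stand-ins, not simulation output.

```python
import numpy as np

def coverage_stats(depths):
    """Median measures how deep the coverage is; variance measures how
    uniform it is over the discretized region."""
    return float(np.median(depths)), float(np.var(depths))

rng = np.random.default_rng(3)
uniform_sky = rng.normal(27.1, 0.1, 10_000)  # smoother, dithered-like coverage
patchy_sky = rng.normal(27.1, 0.4, 10_000)   # overlap-driven excess visits

m_uniform, v_uniform = coverage_stats(uniform_sky)
m_patchy, v_patchy = coverage_stats(patchy_sky)
```

A schedule can match another's median while still being far less uniform, which is exactly the distinction Table 5.1 draws between the two schedulers.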

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefEach plot compares the distributions of the co-added depth coverage in one of the six filters. The dithering scheme in the Modified Feature-Based scheduler, in addition to a finer tessellation of the sky, smooths the density of the coverage where the fields overlap.

\H@refstepcounter

table \hyper@makecurrenttable

Table 0. \Hy@raisedlink\hyper@@anchor\@currentHrefThe median and variance of the co-added depth distribution on a finely discretized sky. The Modified Feature-Based scheduler closely matches the footprint of the official survey, and in addition outperforms opsim V4 in terms of the uniformity of the coverage, with lower variances, especially in the WFD and SCP regions.

Median, Variance
opsim V4 Modified Feature-Based
filter WFD GP SCP NES WFD GP SCP NES
u 25.63, 0.04 25.12, 0.11 24.91, 0.12 - 25.68, 0.01 25.32, 0.05 25.10, 0.04 -
g 27.13, 0.04 26.41, 0.09 26.32, 0.12 26.30,0.13 27.18, 0.01 26.69, 0.04 26.56, 0.04 26.47, 0.09
r 27.19,0.04 26.01,0.16 25.84,0.24 26.38, 0.12 27.14, 0.01 26.21, 0.07 26.08, 0.05 26.43, 0.09
i 26.60, 0.04 25.44, 0.15 25.28, 0.22 25.82, 0.12 26.56, 0.01 25.68, 0.08 25.43, 0.07 25.88, 0.09
z 25.73, 0.04 24.62,0.17 24.57, 0.21 24.90, 0.14 25.87, 0.01 25.03, 0.09 24.81, 0.05 25.16, 0.10
y 24.92, 0.04 23.81, 0.16 23.72, 0.21 - 24.92, 0.02 24.01, 0.09 23.88, 0.06 -

5.2 Pairs

In addition to uniformity of coverage, the LSST mission calls for pairs of visits within a valid time window on the same night, mainly to detect transient objects such as asteroids. Since moving objects usually belong to the solar system, the pair constraint was initially imposed only on the WFD and NES regions. However, there are interesting solar system objects, such as interstellar asteroids, that can be observed in any direction of the sky; moreover, identification of other varying objects, such as supernovae, can benefit from a follow-up visit, especially if the second visit is in a different filter. Thus in the Modified Feature-Based scheduler we made the pair constraint universal for all regions. The downside of this extension is that it constrains the scheduler even further, and the performance can potentially be lower than it otherwise would be. Note that the structure of the Feature-Based scheduler allows for extension or restriction of the constraints down to the individual-field level, without contradicting any of the Markovian framework assumptions or breaking the structure of the implementation. Figure 5.2 demonstrates the distribution of the ratio of observations in pairs (in the , , and filters) to the total number of observations. For the regions where the pair constraint is applied, this ratio can be interpreted as the success rate of the scheduler in satisfying the pair constraint. Figure 5.2 compares the distribution of the pairs ratio of the Modified Feature-Based scheduler and opsim V4. Note that the peak of the density in Figure 5.2 is closer to , which means a larger area of the sky is covered by successful pairs; however, the Modified Feature-Based scheduler offers a sharper concentration of the values, which can be interpreted as a more homogeneous pairs ratio. In other words, the Modified Feature-Based scheduler sacrifices perfect pairs observation for a limited area of the sky to maintain a uniform ratio of pairs over a larger area of the sky.
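The pairs-ratio metric discussed above is simply, per sky pixel, the fraction of observations that received their same-night companion. The counts below are illustrative placeholders.

```python
def pairs_ratio(n_paired, n_total):
    """Ratio of paired observations to all observations at a sky pixel;
    1.0 means every visit received its same-night companion, and pixels
    with no observations are assigned 0.0 by convention here."""
    return n_paired / n_total if n_total else 0.0

ratios = [pairs_ratio(p, t) for p, t in [(18, 20), (9, 10), (0, 0), (5, 20)]]
```

A distribution of these per-pixel ratios with a sharp peak corresponds to the homogeneous pairing behavior described in the text.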

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefDistribution of the ratio of pairs to the total number of observations, in any of the g, r, and i filters. The Feature-Based scheduler, in comparison with opsim V4, sacrifices perfect pairs observation for a limited area of the visible sky (with a lower mean) to maintain a uniform ratio of pairs over a larger area of the sky (with a sharper distribution).

5.3 AltAz and airmass distributions

For a ground-based instrument, airmass is one of the major obstacles to high-quality observation. Although zenith observations have the minimum airmass, off-zenith observations cannot be avoided, in which case observations around the meridian provide high-quality data and consequently result in more efficient operation of the telescope. Figure 5.3 compares the number density of the visits on an altitude-azimuth sky map in each of the six filters. Clearly, in all of the filters, the Modified Feature-Based scheduler schedules more visits around the desirable meridian zone. In addition, it offers a consistent concentration peak on the east wing, which is essential for a higher success rate on the pairs constraint, because if the first visit of the night occurs when the field is on the east side of the sky, there is a longer opportunity for the second visit of the same night. Figure 5.4 demonstrates the density of visits collectively in all filters for opsim V3, opsim V4, and the Modified Feature-Based scheduler. Note that the adjustability of the Feature-Based scheduler allows for a significant change in behavior; in this case, by defining a new basis function, the Modified Feature-Based scheduler prefers to observe a contiguous set of fields that is then re-observed later in the same order.

\H@refstepcounter

figure \hyper@makecurrentfigure

Figure 0. \Hy@raisedlink\hyper@@anchor\@currentHrefEach plot shows the distribution of the visits on an altitude-azimuth sky map in one of the six filters. The two left columns belong to opsim V4, and the two right columns belong to a simulation of the Modified Feature-Based scheduler. The higher concentration on the meridian (vertical axis) for the Modified Feature-Based scheduler shows a more desirable behavior. Moreover, the consistent concentration of the visits on the east wing can potentially provide better pairs observation.

5.4 Signal-to-Noise ratio

For a multi-objective survey telescope such as the LSST, comparing the overall performance of different schedules is a difficult task, particularly because of the large number of competing factors involved in the performance evaluation. In some cases, such as the relative importance of each area of astronomy, the criteria are not even objective. Nevertheless, we conclude this section with a general comparison by the median throughput (signal-to-noise ratio) as a general measure of the quality of a schedule. Table 5.4 reports the median throughput of three different schedules in the r and g bands; the Modified Feature-Based scheduler significantly outperforms both baseline schedules. Note that the throughput is mainly determined by the combination of the survey’s open shutter fraction (OSF) and the airmass. The open shutter fraction, the total time that the telescope camera shutter was open divided by the total time it could have been open, reveals how time-efficiently the observations have been scheduled; the median airmass reflects the overall quality of the collected data, as observations at lower airmass allow for higher data quality. Comparing the values of OSF and airmass for the two baseline schedules, opsim V3 and opsim V4, shows that there is a trade-off between the two values: while opsim V4 offers a better median airmass, its OSF is decreased, and its median throughput is very close to that of opsim V3. This comparison reveals that the change of meta-parameters and objectives between the schedulers of opsim V3 and opsim V4 changes the balance of the trade-off between OSF and airmass, but not the actual performance of the scheduling, as measured by the median throughput.
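The open shutter fraction defined above is a simple ratio; the visit counts and night length below are illustrative, not survey values.

```python
def open_shutter_fraction(exposure_times_sec, total_available_sec):
    """OSF as defined in the text: total shutter-open time divided by the
    total time the shutter could have been open."""
    return sum(exposure_times_sec) / total_available_sec

# e.g. a hypothetical 700 visits of 30 s within an 8-hour night
osf = open_shutter_fraction([30.0] * 700, 8 * 3600.0)
```

Slew time, filter changes, and readout are what separate the OSF from 1, which is why it measures the time-efficiency of the schedule.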

\H@refstepcounter

table \hyper@makecurrenttable

Table 0. \Hy@raisedlink\hyper@@anchor\@currentHrefComparison of three different survey algorithms in a section of the LSST Wide-Fast-Deep survey area.

Survey median throughput OSF median Airmass dithered
(%) (%)
Modified F-B 63.7 47.0 0.705 1.1 yes
opsim V3 55.3 40.0 0.736 1.2 no
opsim V4 54.4 40.8 0.715 1.1 no

6 Concluding remarks

This study demonstrated that a Markovian scheduler based on expert-designed features and a parametrized linear decision-making policy can be successfully applied to multi-mission, ground-based telescopes such as the LSST. Unlike mainstream telescope schedulers, the Feature-Based scheduler does not rely on hand-crafted observation proposals. Instead, by bringing the decision-making process down to the individual-observation level, it improves the efficiency of the telescope’s operation: with this approach the telescope’s schedule can be designed for optimality in addition to feasibility, whereas schedulers that rely on human interaction are fundamentally prone to suboptimality, mainly because their manual tailoring is performed based on the inspection of instances. Moreover, adjusting the behavior of human-dependent schedulers is inconvenient and time-consuming in practice. Furthermore, being modeled as a Markovian Decision Process, the Feature-Based scheduler offers a systematic approach to optimizing the scheduler’s behavior under uncertainties and interruptions.

On the other hand, the decision elements in the Feature-Based scheduler are designed and separated in a way that is intuitive for the astronomy community. This property allows for expert intervention if needed, albeit in a regulated way and only on the parameters of the policy. Manual adjustments of the parameters of the policy do not break the measurability, linearity, or memorylessness of the process; thus all of the simplicity, optimality conditions, and modularity of the design remain valid. In addition, due to its coherent structure, from training to online decision-making, the Feature-Based scheduler is easy to understand, implement, and troubleshoot. The simplicity of the design and implementation also provides a desirable environment for a wide range of programming expertise in the astronomy community: one can install the Python packages on a local computer, define a custom mission objective, train a scheduler, and examine the scheduler’s behavior with various mission objectives. Similarly, in a particular project, when a change in the mission’s objective is necessary, deriving a new scheduler that optimizes the new objective is largely automated. Furthermore, at the mission-planning stage of a future instrument, a scheduler with an adjustable objective can be extremely helpful, because it can answer high-level trade-off questions, such as the amount of time-efficiency lost under different strategies for capturing transient objects.

Computationally, the resources required for training or optimizing the Feature-Based scheduler vary with the purpose. If many different objective functions are being tested for planning a mission, then a quick DE optimization over a short scheduling episode can find a sufficiently good scheduler for each mission. Even quick manual hand-tuning that reflects the intuitive importance of each basis function is possible, because the basis functions carry astronomical meaning. On the other hand, if the objective is known and fixed, and the scheduler is being trained for real-time decision making, then one might even categorize the observing nights based on their main differences, such as the moon phase, seasonal variations, and weather patterns, and train a scheduler specifically for each category to further increase the efficiency of the scheduler.

\onecolumngrid

APPENDIX

Proof.

(of Proposition 1) Consider a function , defined as follows,

Then by applying the law of total expectation on the second term,

and by assuming a finite state space,

Then by the Markov property,

where, is the transition probability from to under the action of . Now, let then,

(1)

Note that both the one-step reward and the transition probability depend only on , which is the action taken at . For the next time step, one can construct a function such that , then,

On the other hand,

Therefore,

By substituting the second term of the right hand side of Equation (1) with the right hand side of the above equation,