Linear approach to the orbiting spacecraft thermal problem
Abstract
We develop a linear method for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. Our method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition in thermal modes, like the decomposition of mechanical vibrations in normal modes. The stationary periodic solution of the linear equations can be alternately expressed as an explicit integral or as a Fourier series. We apply our method to a minimal thermal model of a satellite with ten isothermal parts (nodes) and we compare the method with direct numerical integration of the nonlinear equations. We briefly study the computational complexity of our method for general thermal models of orbiting spacecraft and conclude that it is certainly useful for reduced models and conceptual design, but it can also be more efficient than the direct integration of the equations for large models. The results of the Fourier series computations for the ten-node satellite model show that the periodic solution at the second perturbative order is sufficiently accurate.
Nomenclature
=  outward-facing area of ith node, m^2
=  thermal capacitance diagonal matrix, J/K
=  thermal capacitance of ith node, J/K
=  eigenvector of the steady-state Jacobian matrix, K
=  jth component of eigenvector, K
=  driving function vector, K/s
=  nth Fourier coefficient of the driving function vector, K/s
=  complex conjugate of the nth Fourier coefficient, K/s
=  driving function for the ℓth mode, K/s
=  time derivative of the driving function, K/s
=  driving function for the ith node, K/s
=  nth Fourier coefficient of the ith-node driving function, K/s
=  driving function for the second-order temperature vector, K/s
=  Jacobian matrix, s^-1
=  Jacobian matrix ij element, s^-1
=  number of steps in numerical integration of ODEs
=  conduction coupling matrix, W/K
=  matrix of conductances from linearized radiation coupling terms, W/K
=  ith-node conductance from linearized environment-radiation terms, W/K
=  conduction coupling matrix ij element, W/K
=  ij element of conductance matrix from linearized radiation terms, W/K
=  number of samples for discrete Fourier transform
=  number of nodes
=  eigenvector matrix for J, K
=  ith component of eigenvector for the ℓth mode, K
=  auxiliary heat input vector, K/s
=  heat input to ith node, W
=  mean value of the heat input over the orbital period, W
=  ith-node coefficient of radiation to the environment, W/K^4
=  radiation coupling matrix, W/K^4
=  time, s (min in figures)
=  temperature vector, K
=  vector of “errors” w.r.t. ESATAN solution, K
=  steady-state temperature vector, K
=  temperature of ith node, K
=  cosmic microwave background radiation temperature, K
=  nth-order term in jth-node temperature expansion, K
=  stationary-solution temperature vector, K
=  nth order of stationary-solution temperature vector, K
=  time derivative of ith-node temperature, K/s
=  steady-state temperature of ith node, K
=  orbital period, s
=  matrix of independent solutions to homogeneous equations, K
=  solar absorptivity
=  Kronecker’s delta
=  antisymmetric part of the symmetrized Jacobian, s^-1
=  ij element of the antisymmetric part, s^-1
=  infrared emissivity of ith-node outward-facing surface
=  perturbation parameter
=  eigenvalue of J for the ℓth mode, s^-1
=  perturbation of J eigenvalue for the ℓth mode, s^-1
=  Stefan-Boltzmann constant, W m^-2 K^-4
I Introduction
The thermal control of a spacecraft ensures that the temperatures of its various parts are kept within their appropriate ranges Kreith (); thermcontrol (); thermcontrol_2 (); thermcontrol_3 (). The simulation and prediction of temperatures in a spacecraft during a mission are usually carried out by commercial software packages. These software packages employ “lumped parameter” models that describe the spacecraft as a discrete network of nodes, with one energy-balance equation per node. The equations for the thermal state evolution are coupled nonlinear first-order differential equations, which can be integrated numerically. Given the thermal parameters of the model and its initial thermal state, the numerical integration of the differential equations yields the solution of the problem, namely, the evolution of the node temperatures. However, a detailed model with many nodes is difficult to handle, and its integration for a sufficiently long time of evolution can take considerable computer time and resources. Therefore, it is very useful to study simplified models and approximate methods of integrating the differential equations.
Many spacecraft missions, in particular satellite missions, consist of an initial transient part followed by a stationary part, in which the spacecraft just goes around a closed orbit and the heat inputs are periodic. These periodic heat inputs are expected to induce periodic temperature variations, with a maximum and a minimum temperature in each orbit. This suggests a conservative approach that consists in computing only the temperatures for the hot and cold cases of the given orbit, defining them as the two steady cases with the maximum and minimum heat loads, respectively. Naturally, the real temperature variations in the orbit are smaller, because there is not enough time for the hot and cold cases to establish themselves. In fact, the temperature variations can be considerably smaller, to such a degree that it is necessary to integrate the differential equations, at least approximately.
The differential equations for energy balance are nonlinear due to the presence of radiation couplings, which follow the Stefan-Boltzmann quartic law. A common approach to these equations involves a linearization of the radiation terms that approximates them by heat conduction terms analsat (); thermcontrol_2 (); analsat_2 (); IDR (). This approach transforms the nonlinear equations into standard linear heat conduction equations. But this approach has not been sufficiently justified, is of a heuristic nature, and does not constitute a systematic approximation.
In fact, nonlinear equations are very different from linear equations and, in particular, a periodic driving may not induce periodic solutions but much more complex solutions, namely, chaotic solutions. Therefore, we have carried out in preceding papers a full nonlinear analysis of spacecraft thermal models NoDy (); NoDy1 (). The conclusion of the analysis is that the complexities of nonlinear dynamics, such as multiple equilibria and chaos, do not appear in these models. While the existence of only one equilibrium state can be proved in general, the absence of chaos under driving by variable external heat loads can only be proved for a limited range of magnitudes of the driving loads. This range presumably includes the magnitudes involved in typical spacecraft orbits. The proofs in Refs. 9 and 10 are constructive and are based on a perturbation method that is expected to be sound when the linear equations corresponding to the first perturbative order constitute a good approximation of the nonlinear equations. This implies that the fully nonlinear solution describes a weakly nonlinear oscillator. Since the perturbative approximation is mathematically rigorous and systematic, it is worthwhile to study in detail the scope of the perturbative linear equations and, furthermore, to compare them with previous linear approaches of a heuristic nature.
The main purpose of this paper is to study the linear method of predicting the thermal behavior of spacecraft in stationary orbits (Sects. II and III) and to test it on a minimally realistic thermal model of a satellite in a circular orbit. Since the general one- and two-node models analyzed in Refs. 9 and 10, respectively, are too simple, we define in this paper a ten-node thermal model of a small Moon-orbiting satellite (Sect. IV). This model is simple enough to allow us to explicitly show all the quantities involved (thermal couplings and capacities, heat inputs, etc.) and it is sufficient for illustrating the main features of the linear approach. As realistic thermal models have many more nodes, we consider in Sect. V the important issue of scalability of the method and, hence, its practical applications. Computational aspects of the steady-state problem have been studied by Krishnaprakas Kris1 (); Kris2 () and by Milman and Petrick MiPe (), while computational aspects of the direct integration of the nonlinear equations for the unsteady problem have been studied by Krishnaprakas Kris3 (). Here we focus on the linear equations for the stationary but unsteady case and survey its computational aspects.
A note on notation: In the equations that contain matrix or vector quantities, sometimes we use component notation (with indices) while other times we use compact matrix notation (without indices), according to the nature of the equations.
II Linearization of the heat-balance equations
A lumped-parameter thermal model of a continuous system consists of a discrete network of isothermal regions (nodes) that represent a partition of the total thermal capacitance and that are linked by thermal conduction and radiation couplings Kreith (); thermcontrol (); thermcontrol_2 (); thermcontrol_3 (); analsat (). This discretization reduces the integro-differential heat-transfer equations to a set of energy-balance ODEs, one per node, which control the evolution of the nodes’ temperatures analsat ():
C_i \frac{dT_i}{dt} = \sum_{j=1}^{N} K_{ij}\,(T_j - T_i) + \sum_{j=1}^{N} R_{ij}\,(T_j^4 - T_i^4) - R_i\,(T_i^4 - T_{env}^4) + \dot{Q}_i, \qquad i = 1, \ldots, N, \qquad (1)

where N is the number of nodes and \dot{Q}_i contains the total heat input to the ith node from external radiation and from internal energy dissipation (if there is any). The conduction and radiation coupling matrices are denoted by K and R, respectively; they are symmetric (K_{ij} = K_{ji} and R_{ij} = R_{ji}) and have null diagonals; so there are N(N-1) independent coupling coefficients altogether, but many vanish, usually. The temperature T_{env} \approx 2.7 K is the temperature of the environment, namely, the cosmic microwave background radiation. The ith-node coefficient of radiation to the environment is given by R_i = \sigma \epsilon_i A_i, where A_i denotes the outward-facing area, \epsilon_i its (infrared) emissivity, and \sigma is the Stefan-Boltzmann constant. The constant term R_i T_{env}^4 can be included in \dot{Q}_i or ignored altogether, if each T_i \gg T_{env}. Equations (1) coincide with the ones implemented in commercial software packages, for example, ESATAN ESATAN ().
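The structure of Eqs. (1) can be made concrete with a small numerical sketch. The following fragment (all parameter values are invented for illustration and are unrelated to the ten-node model of Sect. IV) integrates a two-node version of the energy-balance equations with a fixed-step Runge-Kutta scheme:

```python
import numpy as np

# Hypothetical two-node illustration of Eqs. (1); parameter values invented.
C = np.array([300.0, 150.0])              # thermal capacitances, J/K
K = np.array([[0.0, 0.5], [0.5, 0.0]])    # conduction couplings, W/K
R = np.array([[0.0, 1e-9], [1e-9, 0.0]])  # radiation couplings, W/K^4
R_env = np.array([2e-9, 1e-9])            # radiation to environment, W/K^4
T_env = 2.7                               # background temperature, K
Q = np.array([10.0, 5.0])                 # heat inputs, W

def rhs(T):
    """C_i dT_i/dt = sum_j K_ij (T_j - T_i) + sum_j R_ij (T_j^4 - T_i^4)
                     - R_i (T_i^4 - T_env^4) + Q_i, divided by C_i."""
    cond = K @ T - K.sum(axis=1) * T
    rad = R @ T**4 - R.sum(axis=1) * T**4
    env = -R_env * (T**4 - T_env**4)
    return (cond + rad + env + Q) / C

# Crude fixed-step RK4 integration toward the (unique) steady state.
T, dt = np.array([290.0, 290.0]), 50.0
for _ in range(20000):
    k1 = rhs(T); k2 = rhs(T + dt/2*k1); k3 = rhs(T + dt/2*k2); k4 = rhs(T + dt*k3)
    T = T + dt/6*(k1 + 2*k2 + 2*k3 + k4)
```

After a long integration time the temperatures settle at the steady state, where the right-hand side vanishes.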
There is no systematic procedure for finding the analytical solution of a system of nonlinear differential equations, except in some particularly simple cases. Of course, nonlinear systems can always be integrated numerically with finite difference schemes. Methods of this kind are employed in commercial software packages. When a nonlinear system can be approximated by a linear system and, hence, an approximate analytic solution can be found, this solution constitutes a valuable tool. Actually, one can always resort to some kind of perturbation method to linearize a nonlinear system. Therefore, we now study the rigorous linearization of Eqs. (1) based on a suitable perturbation method, and we also describe, for the sake of a comparison, a heuristic linearization, which actually is best understood in light of the results of the perturbation method.
II.1 Perturbative linearization
If we assume that the heat inputs in the energy-balance Eqs. (1) are periodic, namely, that there is a time interval P such that \dot{Q}_i(t+P) = \dot{Q}_i(t), then it seems sensible to study first the effect of the mean heat inputs in a period. This averaging method, introduced in Refs. 9 and 10, relies on the fact that the autonomous nonlinear system of ODEs for constant heat inputs can be thoroughly analyzed with analytical and numerical methods. For example, it is possible to determine that there is a unique steady thermal state and that it is (locally) stable MiPe (); NoDy1 (). The actual values of the steady temperatures can be found efficiently with various numerical methods Kris1 (); Kris2 (); MiPe (). Furthermore, the eigenvalues and eigenvectors of the Jacobian matrix of the nonlinear system of ODEs provide us with useful information about the dynamics, in particular, about the approach to the steady state: the eigenvectors represent independent thermal modes and the eigenvalues represent their relaxation times NoDy1 ().
Once the averaged equations are solved, the variation of the heat inputs can be considered as a driving of the averaged solutions. Thus, we can define the driving function

f_i(t) = \frac{\dot{Q}_i(t) - \bar{Q}_i}{C_i},

where \bar{Q}_i denotes the mean value of \dot{Q}_i over the period of oscillation P. A weak driving function must not produce a notable deviation from the averaged dynamics. In particular, the long-term thermal state of an orbiting spacecraft must oscillate about the corresponding steady state. To embody this idea, we introduce a formal perturbation parameter \varepsilon, to be set to the value of unity at the end, and write Eqs. (1) as
\frac{dT_i}{dt} = \frac{1}{C_i}\Big[\sum_{j=1}^{N} K_{ij}(T_j - T_i) + \sum_{j=1}^{N} R_{ij}(T_j^4 - T_i^4) - R_i(T_i^4 - T_{env}^4) + \bar{Q}_i\Big] + \varepsilon f_i(t). \qquad (2)
Then, we assume an expansion of the form
T_i(t) = T_i^{(0)}(t) + \varepsilon\, T_i^{(1)}(t) + \varepsilon^2\, T_i^{(2)}(t) + \cdots \qquad (3)
When we substitute this expansion into Eqs. (2), we obtain for the zeroth order of \varepsilon

\frac{dT_i^{(0)}}{dt} = \frac{1}{C_i}\Big[\sum_{j=1}^{N} K_{ij}\big(T_j^{(0)} - T_i^{(0)}\big) + \sum_{j=1}^{N} R_{ij}\big((T_j^{(0)})^4 - (T_i^{(0)})^4\big) - R_i\big((T_i^{(0)})^4 - T_{env}^4\big) + \bar{Q}_i\Big], \qquad (4)
that is to say, the averaged equations. The initial conditions for these equations are the same as for the unaveraged equations.
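Since Eqs. (4) are autonomous, their steady state can be found by Newton's method, using the Jacobian given below in Eqs. (6) and (7). A minimal sketch follows (invented two-node parameters; the small environment term is dropped, as allowed when the temperatures greatly exceed the background temperature):

```python
import numpy as np

# Newton iteration for the steady state of the averaged equations
# (Eq. (4) with dT/dt = 0). All parameter values are invented.
C = np.array([300.0, 150.0])
K = np.array([[0.0, 0.5], [0.5, 0.0]])
R = np.array([[0.0, 1e-9], [1e-9, 0.0]])
R_env = np.array([2e-9, 1e-9])
Qbar = np.array([10.0, 5.0])   # orbit-averaged heat inputs, W

def phi(T):
    """Right-hand side of the averaged equations (environment term dropped)."""
    cond = K @ T - K.sum(axis=1) * T
    rad = R @ T**4 - R.sum(axis=1) * T**4
    return (cond + rad - R_env * T**4 + Qbar) / C

def jacobian(T):
    """Eqs. (6)-(7): off-diagonal (K_ij + 4 R_ij T_j^3)/C_i; negative diagonal."""
    J = (K + 4.0 * R * T**3) / C[:, None]
    diag = -(K.sum(axis=1) + 4*T**3*R.sum(axis=1) + 4*R_env*T**3) / C
    np.fill_diagonal(J, diag)
    return J

T = np.full(2, 300.0)
for _ in range(30):                 # Newton's method: quadratic convergence
    T = T - np.linalg.solve(jacobian(T), phi(T))
```

At convergence the residual of the balance vanishes and the Jacobian at the steady state has eigenvalues with negative real parts, confirming local stability.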
For the first order in \varepsilon, we obtain the following system of linear equations:

\frac{dT^{(1)}}{dt} = J\big(T^{(0)}(t)\big)\, T^{(1)} + f(t). \qquad (5)

Here, J is the Jacobian matrix of the right-hand side of the zeroth-order Eqs. (4) with respect to the temperatures, evaluated at the solution T^{(0)}(t) of the zeroth-order equation. Equations (5) are to be solved with the initial condition T^{(1)}(0) = 0.
The elements of the Jacobian matrix at a generic point in the temperature space are calculated to be:

J_{ij} = \frac{K_{ij} + 4 R_{ij} T_j^3}{C_i}, \qquad i \neq j, \qquad (6)

J_{ii} = -\frac{1}{C_i}\Big(\sum_{j \neq i} K_{ij} + 4 T_i^3 \sum_{j \neq i} R_{ij} + 4 R_i T_i^3\Big). \qquad (7)
This matrix has interesting properties. First of all, it has negative diagonal and nonnegative off-diagonal elements; in other words, -J is a Z-matrix BerPlem (). Furthermore, it fulfills a semipositivity condition that qualifies -J as a nonsingular M-matrix NoDy1 (). Since the eigenvalues of a nonsingular M-matrix have positive real parts, the opposite holds for J, namely, its eigenvalues have negative real parts. One more interesting property of J, related to semipositivity, is that it possesses a form of diagonal dominance: it is similar to a diagonally dominant matrix and the similarity is given by a positive diagonal matrix. Naturally, this property is shared by -J. These properties are useful to prove some desirable properties of the solutions of Eqs. (5).
The chief property of J is that -J is a nonsingular M-matrix. In particular, it implies that (-J)^{-1} is nonnegative and, therefore, that the Perron-Frobenius theory is applicable to it BerPlem (). The relevant results to be applied are: (i) Perron’s theorem, which states that a strictly positive matrix has a unique real and positive eigenvalue with a positive eigenvector and that this eigenvalue has maximal modulus among all the eigenvalues; (ii) a second theorem, stating that if a nonsingular M-matrix is also “irreducible”, then its inverse is strictly positive. The irreducibility of -J follows from the symmetry of the matrices K and R NoDy1 (). As the positive (Perron) eigenvector of (-J)^{-1} is the eigenvector of J that corresponds to its smallest-magnitude eigenvalue, it defines the slowest relaxation mode (for a given set of temperatures). Therefore, in the evolution of temperatures given by Eqs. (4), the steady state is eventually approached from the zone corresponding to simultaneous temperature increments (or decrements).
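These matrix-theoretic statements are easy to verify numerically. A sketch with an invented 3-node Jacobian having the sign structure of Eqs. (6) and (7):

```python
import numpy as np

# Invented 3-node Jacobian with the structure of Eqs. (6)-(7): negative
# diagonal, nonnegative off-diagonal entries, and row-wise diagonal
# dominance, so that -J is an irreducible nonsingular M-matrix.
J = np.array([[-3.0,  1.0,  0.5],
              [ 0.8, -2.5,  0.6],
              [ 0.4,  0.9, -2.0]]) * 1e-3   # s^-1

Minv = np.linalg.inv(-J)                     # strictly positive for such a matrix

# Perron eigenvector of (-J)^{-1}: the slowest thermal mode of J.
w, V = np.linalg.eig(Minv)
perron = V[:, np.argmax(w.real)].real
perron = perron * np.sign(perron[0])         # fix the overall sign
```

The inverse is entrywise positive, the Perron eigenvector has all components of one sign (simultaneous increments or decrements), and the eigenvalues of J have negative real parts.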
The matrix J(t) in Eqs. (5) is obtained by substituting T^{(0)}(t) for T in Eqs. (6) and (7). Then, the nonhomogeneous linear system with variable coefficients, Eqs. (5), can be solved by variation of parameters NoDy1 (), yielding the expression:

T^{(1)}(t) = U(t) \int_0^t U(t')^{-1} f(t')\, dt', \qquad (8)

where U(t) is a matrix formed by columns that are linearly independent solutions of the corresponding homogeneous equation, with the condition that U(0) = I (the identity matrix). The difficulty in applying this formula lies in computing U(t), that is, in computing the solutions of the homogeneous equation. Moreover, this computation demands the previous computation of the solution T^{(0)}(t) of the zeroth-order equations.
Since we are only interested in the stationary solutions of the heat-balance equations rather than in transient thermal states, it is possible to find an expression of these solutions that is more manageable than Eq. (8). The transient thermal state relaxes exponentially to the stationary solution, which is a limit cycle of the nonlinear equations, technically speaking NoDy (); NoDy1 (). Therefore, the stationary solution is given by the solution of Eqs. (5) with the constant Jacobian matrix calculated at the steady-state temperatures, which we name \bar{J}. This solution is simply NoDy1 () (it can also be derived as the limit of Eq. (8) in which t \to \infty):

T^{(1)}(t) = \int_0^t e^{(t-t')\bar{J}} f(t')\, dt', \qquad (9)

with \bar{J} calculated at the point \bar{T}. Furthermore, the periodic stationary solution is obtained by extending the upper integration limit from t to infinity:

T_p^{(1)}(t) = \int_0^\infty e^{s\bar{J}} f(t-s)\, ds. \qquad (10)
This function is indeed periodic, unlike the one defined by Eq. (9), so it is determined by its values for 0 \leq t < P. Note that T_p^{(1)}(t+P) = T_p^{(1)}(t). For numerical computations, it can be convenient to express the integral from 0 to \infty as an integral from 0 to P, taking advantage of the periodicity as follows:

\int_0^\infty e^{s\bar{J}} f(t-s)\, ds = \sum_{n=0}^{\infty} e^{nP\bar{J}} \int_0^P e^{s\bar{J}} f(t-s)\, ds = \big(I - e^{P\bar{J}}\big)^{-1} \int_0^P e^{s\bar{J}} f(t-s)\, ds

(the series converges because the eigenvalues of \bar{J} have negative real parts). In the last integral, the argument of f can be transferred to the interval [0, P):

\int_0^P e^{s\bar{J}} f(t-s)\, ds = \int_0^t e^{s\bar{J}} f(t-s)\, ds + \int_t^P e^{s\bar{J}} f(t-s+P)\, ds,

where 0 \leq t < P. Note that the one-period shift in the argument of the last f is necessary for the argument to be in [0, P).
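The reduction of Eq. (10) to a one-period integral can be checked numerically. The following sketch (invented 2x2 Jacobian and a single-harmonic driving; not the ten-node model) evaluates the one-period form with a simple quadrature and a matrix exponential built from the eigendecomposition:

```python
import numpy as np

# One-period form of Eq. (10): T_p(t) = (I - e^{P J})^{-1} \int_0^P e^{s J} f(t-s) ds.
# Jacobian and driving invented for illustration.
P = 6000.0                                   # orbital period, s
J = np.array([[-2.0, 0.5], [0.4, -1.5]]) * 1e-3   # s^-1
f1 = np.array([1.0e-3, 0.5e-3])              # cosine amplitude, K/s
f = lambda t: f1 * np.cos(2*np.pi*t/P)       # zero-mean periodic driving

lam, S = np.linalg.eig(J)                    # J = S diag(lam) S^{-1}
Sinv = np.linalg.inv(S)
expJ = lambda s: (S * np.exp(lam*s)) @ Sinv  # matrix exponential e^{sJ}

def T_periodic(t, n=4000):
    s, ds = np.linspace(0.0, P, n, endpoint=False, retstep=True)
    integral = sum(expJ(si) @ f(t - si) for si in s) * ds   # rectangle rule
    return np.linalg.solve(np.eye(2) - expJ(P), integral).real
```

For a pure cosine driving the result can be compared with the closed-form Fourier answer of Sect. III, and the function is periodic by construction.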
Some remarks are in order. First of all, we have assumed that there is one asymptotic periodic solution of the nonlinear Eqs. (2) and only one (a unique limit cycle). Equivalently, we have assumed that the perturbation series converges. This assumption holds in an interval of the amplitude of heat-input variations NoDy1 (). Besides, for the integrals in Eq. (10) and the following equations to make sense, it is required that e^{s\bar{J}} \to 0 as s \to \infty. This is guaranteed, because the eigenvalues of \bar{J} have negative real parts, as is necessary for the steady state to be stable. In fact, the eigenvalues are expected to be negative real numbers and \bar{J} is expected to be diagonalizable, but these properties are not rigorously proven NoDy1 () (however, see Sect. II.2).
If \bar{J} is diagonalizable, that is to say, there is a real matrix S such that S^{-1}\bar{J}S is diagonal, then the calculation of the integrals is best carried out on the eigenvector basis, given by the columns of the matrix S. Using this basis, Eq. (10) is expressed as

T_p^{(1)}(t) = \sum_{\ell=1}^{N} v_\ell \int_0^\infty e^{\lambda_\ell s} f_\ell(t-s)\, ds, \qquad (11)

where the sum runs over the eigenvectors v_\ell and their corresponding eigenvalues \lambda_\ell, and f_\ell denotes the component of f in the eigenvector basis. Expression (11) allows us to compare the contribution of the different thermal modes. In particular, for the fast modes, such that |\lambda_\ell| is large, we can use Watson’s lemma pertmethods () to derive the asymptotic expansion:

\int_0^\infty e^{\lambda_\ell s} f_\ell(t-s)\, ds \sim \frac{f_\ell(t)}{|\lambda_\ell|} - \frac{\dot{f}_\ell(t)}{\lambda_\ell^2} + \cdots,

where \dot{f}_\ell is the time derivative of f_\ell. When |\lambda_\ell| is large, the first term suffices (unless \dot{f}_\ell is also large, for some reason); and the first term is small, unless f_\ell is large. In essence, if the fast modes are not driven strongly, they can be neglected in the sum over \ell in Eq. (11).
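The quality of the Watson-lemma leading term can be checked for a single scalar mode driven by a cosine (invented rates and amplitudes): for the fast mode the term nearly reproduces the exact stationary response, while for the slow mode it fails, as expected.

```python
import numpy as np

# One slow and one fast mode, both driven by a cos(w t); invented values.
P = 6000.0
w = 2*np.pi/P
lam = np.array([-2e-4, -5e-2])        # mode rates, s^-1 (slow, fast)
a = np.array([1e-3, 1e-3])            # modal driving amplitudes, K/s

# Exact stationary response of dx/dt = lam x + a cos(w t), evaluated at t = 0:
exact = (a / (1j*w - lam)).real
# Watson-lemma leading term f(t)/|lam| at t = 0:
approx = a / np.abs(lam)
```

For the fast mode the relative error of the leading term is of order (w/|lam|)^2, hence tiny; for the slow mode the leading term is badly wrong, so slow modes must keep the full integral.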
II.1.1 Second-order perturbative equation
For the second order in \varepsilon, a straightforward calculation NoDy1 () yields the following linear equation:

\frac{dT^{(2)}}{dt} = J\big(T^{(0)}(t)\big)\, T^{(2)} + g(t), \qquad (12)

where J is the same Jacobian matrix that appears in the first-order Eq. (5) and

g_i(t) = \frac{6}{C_i}\Big[\sum_{j \neq i} R_{ij}\Big(\big(T_j^{(0)} T_j^{(1)}\big)^2 - \big(T_i^{(0)} T_i^{(1)}\big)^2\Big) - R_i \big(T_i^{(0)} T_i^{(1)}\big)^2\Big]. \qquad (13)

The initial condition for Eq. (12) is T^{(2)}(0) = 0, as for Eqs. (5). Therefore, the first-order and second-order equations have identical solutions in terms of their respective driving terms, although g(t), Eqs. (13), is a known function of t only when the lower-order equations have been solved. The integral expression, Eqs. (10), of the stationary solution is also valid for T^{(2)}, after replacing f with g and using in Eq. (13) the stationary values \bar{T} and T_p^{(1)} (which make g periodic).
It is possible to carry on the perturbation method to higher orders, and it always amounts to solving the same linear equation with increasingly complicated driving terms that involve the solutions of the lower-order equations. The example of Sect. IV shows that, in a typical case, the second-order term is a small correction to the first-order solution, and further corrections are not necessary. This confirms that the perturbation method is reliable for a realistic case.
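The smallness of the second-order correction can be illustrated with a one-node example, where the stationary corrections are available in closed form (all parameter values below are invented). The second-order solution tracks a direct integration of the nonlinear equation markedly better than the first-order one:

```python
import numpy as np

# One-node check of the perturbative hierarchy: C dT/dt = -R T^4 + Q(t),
# with Q(t) = Qbar + dQ cos(w t). All values invented for the sketch.
C, R, P = 200.0, 1.5e-9, 6000.0
w = 2*np.pi/P
Qbar, dQ = 8.0, 2.0

Tbar = (Qbar/R)**0.25                 # steady state of the averaged equation
a = 4*R*Tbar**3/C                     # relaxation rate (one-node Jacobian), s^-1
b = 6*R*Tbar**2/C                     # coefficient of the second-order driving

# First order: dT1/dt = -a T1 + (dQ/C) cos(w t); stationary solution.
z1 = (dQ/C)/(1j*w + a)                # complex amplitude of T1
T1 = lambda t: np.real(z1*np.exp(1j*w*t))
# Second order: dT2/dt = -a T2 - b T1^2, with
# T1^2 = |z1|^2/2 + Re[z1^2 e^{2 i w t}]/2.
z2 = -b*(z1**2/2)/(2j*w + a)
T2 = lambda t: -b*abs(z1)**2/(2*a) + np.real(z2*np.exp(2j*w*t))

def direct(t_end, dt=5.0):
    """Fixed-step RK4 integration of the full nonlinear equation."""
    g = lambda T, t: (-R*T**4 + Qbar + dQ*np.cos(w*t))/C
    T = Tbar
    for k in range(int(t_end/dt)):
        tk = k*dt
        k1 = g(T, tk); k2 = g(T + dt/2*k1, tk + dt/2)
        k3 = g(T + dt/2*k2, tk + dt/2); k4 = g(T + dt*k3, tk + dt)
        T += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return T

t_end = 20*P                          # long enough to reach the limit cycle
err1 = abs(direct(t_end) - (Tbar + T1(t_end)))
err2 = abs(direct(t_end) - (Tbar + T1(t_end) + T2(t_end)))
```

Here err2 is an order of magnitude below err1, in line with the conclusion of Sect. IV that the second perturbative order is sufficiently accurate.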
II.2 Heuristic linearization
A linearization procedure frequently used in problems of radiation heat transfer thermcontrol_2 (); analsat_2 (); IDR () consists of using the algebraic identity

T_j^4 - T_i^4 = (T_j^2 + T_i^2)(T_j + T_i)(T_j - T_i)

to define an effective conductance for the radiation coupling between nodes i and j. The equation

R_{ij}(T_j^4 - T_i^4) = K^r_{ij}(T_j - T_i)

defines the effective conductance

K^r_{ij} = R_{ij}(T_j^2 + T_i^2)(T_j + T_i)

for specified values of the node temperatures T_i and T_j. For an orbiting spacecraft, the natural base values of the node temperatures are the ones that correspond to the steady-state solution of the averaged equations, namely, \bar{T}_i. In the special case of radiation to the environment, the term -R_i(T_i^4 - T_{env}^4) can be replaced with the linear term -K^r_i(T_i - T_{env}), with K^r_i = R_i(\bar{T}_i^2 + T_{env}^2)(\bar{T}_i + T_{env}).

The resulting linear equations are:

C_i \frac{dT_i}{dt} = \sum_{j=1}^{N} \big(K_{ij} + K^r_{ij}\big)(T_j - T_i) - K^r_i\,(T_i - T_{env}) + \dot{Q}_i. \qquad (14)
These equations have only conduction couplings, so they are a discretization of the partial differential equations of heat conduction. As a linear system of ODEs, the standard form is

\frac{dT}{dt} = J\,T + q(t), \qquad (15)

where J (the Jacobian matrix) is now given by:

J_{ij} = \frac{K_{ij} + K^r_{ij}}{C_i}, \qquad i \neq j, \qquad (16)

J_{ii} = -\frac{1}{C_i}\Big(\sum_{j \neq i} \big(K_{ij} + K^r_{ij}\big) + K^r_i\Big). \qquad (17)
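The construction of the effective radiation conductances and of the Jacobian of Eqs. (16) and (17) can be sketched for a two-node toy model (all values invented; the base temperatures stand in for the averaged steady state):

```python
import numpy as np

# Heuristic radiation conductances K^r_ij = R_ij (Tb_i^2 + Tb_j^2)(Tb_i + Tb_j)
# and the Jacobian of Eqs. (16)-(17). Invented two-node values.
C = np.array([300.0, 150.0])
K = np.array([[0.0, 0.5], [0.5, 0.0]])        # conduction couplings, W/K
R = np.array([[0.0, 1e-9], [1e-9, 0.0]])      # radiation couplings, W/K^4
R_env = np.array([2e-9, 1e-9])                # environment radiation, W/K^4
T_env = 2.7
Tb = np.array([280.0, 260.0])                 # base (steady-state) temperatures, K

# Exact at the base temperatures, by the algebraic identity for T^4 differences.
Kr = R * np.add.outer(Tb**2, Tb**2) * np.add.outer(Tb, Tb)
Kr_env = R_env * (Tb**2 + T_env**2) * (Tb + T_env)

G = K + Kr                                    # total conductance matrix (symmetric)
J = G / C[:, None]                            # off-diagonal part, Eq. (16)
np.fill_diagonal(J, -(G.sum(axis=1) + Kr_env) / C)   # Eq. (17)
```

The conductances reproduce the radiation exchange exactly at the base temperatures, and the resulting Jacobian has real, negative eigenvalues, as shown in the text below.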
The linear system of Eqs. (15) can be solved in the standard way, yielding:

T(t) = e^{tJ}\, T(0) + \int_0^t e^{(t-t')J}\, q(t')\, dt', \qquad (18)

where we have introduced the vector q, with components q_i = (\dot{Q}_i + K^r_i T_{env})/C_i. We can also express the solution in terms of the driving function f(t) = q(t) - \bar{q}:

T(t) = e^{tJ}\big(T(0) - \bar{T}\big) + \bar{T} + \int_0^t e^{(t-t')J} f(t')\, dt', \qquad (19)

\bar{T} = -J^{-1}\bar{q}. \qquad (20)

For large t, this solution tends to the periodic stationary solution

T_p(t) = \bar{T} + \int_0^\infty e^{sJ} f(t-s)\, ds, \qquad (21)

assuming that e^{tJ} \to 0 as t \to \infty. This is a consequence of the structure of J, as in the preceding section. In the present case, the eigenvalues of J, beyond having negative real parts, are actually negative real numbers, as we show below.
The total conductance matrix K + K^r is symmetric, but this does not imply that J is symmetric. Nevertheless, if we define the diagonal matrix C^{1/2}, with elements C_i^{1/2}, the matrix C^{1/2} J C^{-1/2} is symmetric, because its off-diagonal matrix elements are:

\big(C^{1/2} J C^{-1/2}\big)_{ij} = \frac{K_{ij} + K^r_{ij}}{C_i^{1/2} C_j^{1/2}}.

Hence, the matrix J, similar to C^{1/2} J C^{-1/2}, has real eigenvalues. Furthermore, C^{1/2} J C^{-1/2} is diagonalized by an orthogonal transformation; that is to say, there is an orthogonal matrix O such that

O^{T}\big(C^{1/2} J C^{-1/2}\big)\, O

is diagonal. Therefore, the thermal modes are actually normal; that is to say, the modes, which are the eigenvectors of J and hence the columns of the matrix S = C^{-1/2} O, are related to the eigenvectors of C^{1/2} J C^{-1/2}, which are normal and are given by the columns of O. Alternatively, one can say that the eigenvectors of J are normal in the “metric” defined by C; namely,

\sum_{i=1}^{N} C_i\, S_{i\ell}\, S_{i\ell'} = \delta_{\ell\ell'},

which can be written in matrix form as S^{T} C S = I. Naturally, the orthogonality of modes greatly simplifies some computations.
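The orthogonality of the modes in the metric C can be verified numerically. A sketch with an invented 3-node symmetric conductance matrix (note that a numerical eigensolver normalizes the eigenvectors in the Euclidean norm, so S^T C S comes out diagonal rather than equal to the identity):

```python
import numpy as np

# Eigenvectors of J = C^{-1} G, with G symmetric, are orthogonal in the
# "metric" C: S^T C S is diagonal. Invented 3-node values.
C = np.array([300.0, 150.0, 500.0])
G = -np.array([[ 1.6, -1.0, -0.5],
               [-1.0,  2.3, -1.2],
               [-0.5, -1.2,  1.9]]) * 0.1    # W/K; -G symmetric positive definite
J = G / C[:, None]

lam, S = np.linalg.eig(J)                    # columns of S are the thermal modes
M = S.T @ np.diag(C) @ S                     # diagonal up to rounding errors
off = M - np.diag(np.diag(M))
```

The off-diagonal residue is at the level of rounding errors, and the eigenvalues are real and negative, as stated in the text.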
Furthermore, the symmetry of the conductance matrix implies that the sum in Eq. (14) can be written as the action of a graph Laplacian Chung () on the temperature vector. Naturally, the graph is formed by the nodes and the linking conductances. A graph Laplacian is a discretization of the ordinary Laplacian and is conventionally defined with the sign that makes it positive semidefinite. The zero eigenvalue corresponds to a constant function, that is, a constant temperature, in the present case. A vector with equal components, say, equal to one, is the positive (Perron) eigenvector of the matrix. With more generality, the Laplacian of a graph can be defined as a symmetric matrix with off-diagonal entries that are negative if the nodes are connected and null if they are not graph_eigenvec (). This definition does not constrain the diagonal entries and, therefore, does not imply that a graph Laplacian is positive semidefinite. It can be made positive definite (or just semidefinite) by adding to it a multiple of the identity matrix, which does not alter the eigenvectors. Of course, the eigenvector corresponding to the smallest eigenvalue does not have to be constant, but the Perron-Frobenius theorem BerPlem () tells us that it is positive. By this general definition of a graph Laplacian, the matrix -CJ is a different Laplacian for the same graph, and Eqs. (15) contain the action of this Laplacian on the vector T. Notice that this general definition of a graph Laplacian is connected with the definition of a Z-matrix BerPlem () and, actually, a symmetric Z-matrix is a graph Laplacian. If such a matrix is positive definite, then it is equivalent to a Stieltjes matrix, namely, a symmetric nonsingular M-matrix BerPlem (). The general Jacobian obtained in Sect. II.1 is also such that -J and -CJ are both nonsingular M-matrices, but they need not be symmetric.
To investigate the accuracy of the approximation of the radiation terms by conduction terms, let us compare the periodic solution given by Eq. (21) with the first-order perturbative solution found in Sect. II.1, namely, \bar{T} + T_p^{(1)}(t). Of course, the Jacobian matrices in the respective integrals differ, as do the temperature vectors added to the integrals. While the perturbative vector corresponds to the authentic steady state of the nonlinear averaged equations, the vector in Eq. (21) corresponds to the steady state of Eqs. (14) after averaging, which is a state without significance, since we have already used the set of temperatures of the authentic steady state to define the radiation conductances in Eq. (14). Therefore, the only sensible linear solution is the perturbative solution, even if we replace the Jacobian matrix given by Eqs. (6) and (7) with the one given by Eqs. (16) and (17).
In our context, the notion of radiation conductance actually follows from the symmetry of the matrices K and K^r. Therefore, the most natural definition of radiation conductance probably is K^r_{ij} = 2R_{ij}(\bar{T}_i^3 + \bar{T}_j^3), that is, the symmetrization of the term 4R_{ij}\bar{T}_j^3 in Eq. (6). This symmetrization has been tested by Krishnaprakas Kris2 (), considering the steady-state problem for models of various sizes and working with various resolution algorithms. He found that the effect of symmetrization is not appreciable. To estimate the effect of the antisymmetric part of the symmetrized Jacobian, namely, \Delta, on the eigenvalue problem for the Jacobian, we proceed as follows. We formulate this eigenvalue problem in terms of the matrix C^{1/2} J C^{-1/2}, so that it is an eigenvalue problem for a symmetric matrix perturbed by a small antisymmetric part. This problem is well conditioned, because the eigenvectors of the symmetric matrix (the columns of the matrix O) are orthogonal. In particular, the perturbed eigenvalues are still real. Furthermore, the first-order perturbation formula for the eigenvalue associated with an eigenvector u_\ell pertmethods () yields:

\delta\lambda_\ell = u_\ell^{T}\, \Delta\, u_\ell = 0,

vanishing because the perturbation matrix \Delta is antisymmetric. So the nonvanishing perturbative corrections begin at the second order in the perturbation matrix, and, in this sense, they are especially small.
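This vanishing of the first-order shifts is straightforward to verify numerically with invented 2x2 matrices:

```python
import numpy as np

# First-order eigenvalue shifts u^T D u vanish for an antisymmetric
# perturbation D of a symmetric matrix A; the actual shifts are second
# order in D. Invented matrices for illustration.
A = np.array([[-2.0, 0.6], [0.6, -1.0]])     # symmetric part
D = np.array([[0.0, 1e-3], [-1e-3, 0.0]])    # small antisymmetric part

lam, U = np.linalg.eigh(A)                   # orthonormal eigenvectors of A
first_order = np.array([u @ D @ u for u in U.T])   # vanishes identically

shifts = np.sort(np.linalg.eigvals(A + D).real) - lam
```

The computed shifts are of order |D|^2 divided by the eigenvalue gap, far smaller than |D| itself, which is the sense in which they are "especially small".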
III Fourier analysis of the periodic solution
Given that T_p^{(1)}(t) is a periodic function, it can be expanded in a Fourier series. To derive this series, let us first introduce the Fourier series of f,

f(t) = \sum_{n=-\infty}^{\infty} f_n\, e^{2\pi i n t/P}.

Inserting this series in the integral of Eq. (10) and integrating term by term, we obtain the Fourier series for T_p^{(1)}. Alternatively, we can substitute the Fourier series for both f and T^{(1)} into Eqs. (5), where the Jacobian \bar{J} is taken to be constant; then we can solve for the Fourier coefficients of T_p^{(1)}. The result is

T_p^{(1)}(t) = \sum_{n=-\infty}^{\infty} \Big(\frac{2\pi i n}{P}\, I - \bar{J}\Big)^{-1} f_n\, e^{2\pi i n t/P}. \qquad (22)
The Fourier coefficients f_n are obtained by integration:

f_n = \frac{1}{P} \int_0^P f(t)\, e^{-2\pi i n t/P}\, dt. \qquad (23)

Given that f(t) is a real function,

f_{-n} = f_n^{*}. \qquad (24)

Furthermore, the vanishing mean of f implies

f_0 = 0. \qquad (25)

So T_p^{(1)} is defined by the sequence of Fourier coefficients f_n for positive n. This sequence must fulfill the requirement that f_n \to 0 as n \to \infty, so a limited number of the initial coefficients may suffice.
Actually, for numerical work, Eq. (23) can be conveniently replaced by the discrete Fourier transform

f_n = \frac{1}{M} \sum_{m=0}^{M-1} f\Big(\frac{mP}{M}\Big)\, e^{-2\pi i n m/M}, \qquad (26)

which only requires a sampling of the values of f at t = mP/M, m = 0, \ldots, M-1, but also only defines a finite number of independent Fourier coefficients, because f_{n+M} = f_n. Notice that we usually have available just a sampling of the heat inputs at regular time intervals, rather than the analytical form of f(t). To calculate the exact number of independent Fourier coefficients provided by Eq. (26), we must take into account Eqs. (24) and (25). If M is an odd number, the independent Fourier coefficients are the ones with 1 \leq n \leq (M-1)/2; that is to say, there are M-1 independent real numbers. If M is even, the independent Fourier coefficients are the ones with 1 \leq n \leq M/2, and

f_{M/2} = \frac{1}{M} \sum_{m=0}^{M-1} (-1)^m f\Big(\frac{mP}{M}\Big)

is real, so there are M-1 independent real numbers as well. For definiteness, let M be odd. Then, we can express T_p^{(1)} as

T_p^{(1)}(t) = 2\,\mathrm{Re} \sum_{n=1}^{(M-1)/2} \Big(\frac{2\pi i n}{P}\, I - \bar{J}\Big)^{-1} f_n\, e^{2\pi i n t/P}.

Of course, at the sampling times t = mP/M this expression involves just the sampled values employed in Eq. (26), but it is valid for any t and constitutes an interpolation of the sampled values. Naturally, the higher the sampling frequency M/P, the more independent Fourier coefficients we have and the more accurate the representation of T_p^{(1)} is.
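The whole Fourier route can be sketched numerically: sample an invented zero-mean driving function, compute its coefficients with the discrete transform of Eq. (26), and assemble the periodic solution mode by mode (invented 2x2 Jacobian; M odd, as in the text):

```python
import numpy as np

# Discrete-Fourier-transform computation of the periodic first-order
# solution; Jacobian and driving invented for illustration.
P, M = 6000.0, 129
Jbar = np.array([[-2.0, 0.5], [0.4, -1.5]]) * 1e-3   # s^-1
t = np.arange(M) * P / M
f_samples = np.vstack([1e-3*np.cos(2*np.pi*t/P),
                       5e-4*np.sin(4*np.pi*t/P)])    # zero-mean driving, K/s

f_n = np.fft.fft(f_samples, axis=1) / M              # Eq. (26), per node

def T_p(time):
    """Sum over the independent harmonics n = 1 .. (M-1)/2 (f_0 = 0)."""
    acc = np.zeros(2)
    for n in range(1, (M - 1)//2 + 1):
        wn = 2*np.pi*n/P
        Tn = np.linalg.solve(1j*wn*np.eye(2) - Jbar, f_n[:, n])
        acc = acc + 2*np.real(Tn*np.exp(1j*wn*time))
    return acc
```

For this band-limited driving, the result coincides with the exact stationary response built from the two driven harmonics, and the function is periodic by construction.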
As is well known, the Fourier series of a function that is piecewise smooth converges to the function, except at its points of discontinuity, where it converges to the arithmetic mean of the two one-sided limits Fourier (). However, the convergence is not uniform, so that partial sums oscillate about the true value of the function near each point of discontinuity and “overshoot” the two one-sided limits in opposite directions. This overshooting is known as the Gibbs phenomenon and, in our case, produces typical errors near the discontinuities of the driving function f. These discontinuities are due to the sudden obstructions of the radiation on parts of the spacecraft that occur at certain orbital positions, for example, when the Sun is eclipsed (strictly speaking, the function is always continuous but it undergoes sharp variations at some times; these sharp variations can be considered as discontinuities, especially if the function is sampled). Section IV.2 shows that the Gibbs phenomenon at eclipse points can be responsible for the largest part of the error of the linear method when the discrete Fourier transform is used.
The approximation of f(t) provided by the samples is, of course,

f(t) \approx \sum_{n=-(M-1)/2}^{(M-1)/2} f_n\, e^{2\pi i n t/P}, \qquad (27)

and is valid for any t. However, if we are only interested in T_p^{(1)} at the sampling times t = mP/M, we can compute these values with the inverse discrete Fourier transform of the coefficients T_n = \big(2\pi i n/P\; I - \bar{J}\big)^{-1} f_n:

T_p^{(1)}\Big(\frac{mP}{M}\Big) = \sum_{n=0}^{M-1} T_n\, e^{2\pi i n m/M}, \qquad (28)
where, for n > (M-1)/2, T_n = T^{*}_{\mathrm{mod}(M-n,\,M)}, and where \mathrm{mod}(k, M) gives the remainder of the integer division of k by M. This inverse discrete Fourier transform can be more convenient for a fast numerical computation. Regarding computational convenience, the discrete Fourier transform, be it direct or inverse, is best performed with a fast Fourier transform (FFT) algorithm. The classic FFT algorithm requires M to be a power of two Num_rec (); GolVan (); in particular, M has to be even.
The function T_p^{(1)}, computed by Fourier analysis from samples of f, is to be compared with the one computed by a numerical approximation of the integral formula, Eq. (10), in terms of the same samples. Naturally, we can use, instead of the integral over (0, \infty), the one-period integral below Eq. (10). This integral can be computed from the samples of f by an interpolation formula, say the trapezoidal rule. It is not easy to decide whether this procedure is more efficient than Fourier transforms. Considering that the substitution of the continuous Fourier transform, Eq. (23), by the discrete transform, Eq. (26), is equivalent to computing the former with the trapezoidal rule, the integral formula may seem more direct. In particular, this formula allows us to select the values of t for which we compute T_p^{(1)}(t) independently of the sampling frequency, so we can choose just a few distinguished orbital positions and avoid the computation of all the integrals (one is removed by the condition f_0 = 0). Note that the computation of all of the independent Fourier coefficients with Eq. (26) is equivalent to the computation of precisely M-1 integrals. However, the efficiency of the FFT reduces the natural operation count of this computation, of order M^2, to order M \log M; so its use can be advantageous, nevertheless.
IV Ten-node model of a Moon-orbiting satellite
To test the previously explained methods, we construct a small thermal model of a simple spacecraft, namely, a ten-node model of a Moon-orbiting satellite. Our ten-node satellite model supports a basic thermal structure and is simple enough to allow one to explicitly display the main mathematical entities, e.g., the coupling matrices and heat inputs. The satellite consists of a rectangular parallelepiped (a cuboid) of square base plus a small cylinder on one of its sides that simulates an observation instrument, as represented in Fig. 1. In addition, at a height of two thirds of the total height, there is an inner tray with the electronic equipment. The cylinder has a length of 0.1 m and a radius of 0.04 m. The satellite’s frame is made of aluminum alloy, using plates 1 mm thick, except the bottom plate, which is 2 mm thick. This plate plays the role of a radiator and its outer surface is painted white to have high solar reflectance. The cylinder is made of the same aluminum alloy, as well as the tray; they are 0.5 mm and 2 mm thick, respectively. The sides of the satellite, except the one with the instrument, are covered with solar cells, which increase the sides’ thickness to 2.25 mm.
The thermal model of the satellite assigns one node to each face of the cuboid, one more to the cylinder, and another to the tray, that is, eight nodes altogether. Furthermore, to split the total heat capacitance of the electronic equipment, it is convenient to add two extra nodes with (large) heat capacitance but with no surface that could exchange heat by radiation. Nodes of this type are called “non-geometrical nodes”. In the present case, they represent two boxes with equipment placed above and below the tray, respectively. We order the ten nodes as shown in Fig. 1. The lower box (node 10) is connected to the radiator by a thermal strap. Given the satellite’s structure and assuming appropriate values of the specific heat capacities, it is possible to compute the capacitances , with the result given in Table 1. Using the value of the aluminum-alloy heat conductivity and assuming perfect contact between plates, we compute the conduction coupling constants between nodes . The remaining conduction coupling constants are given reasonable values, shown in Eq. (29). The computation of the radiation coupling constants  and , and indeed the computation of the external radiation heat inputs, requires a detailed radiative model of the satellite, consisting of the geometrical view factors and the detailed thermo-optical properties of all surfaces. This radiative model allows us to compute the respective absorption factors Kreith ().
Node  Capacitance (J/K)  Mean external heat input (W)  Steady-state temperature ()

1  331.7  15.18  2.6 
2  147.4  2.30  3.6 
3  331.7  15.17  2.6 
4  331.7  14.80  2.3 
5  196.6  3.91  0.2 
6  98.3  0.63  2.2 
7  196.6  0  6.3 
8  31.9  1.70  4.7 
9  800.0  4.35  15.9 
10  1400.0  6.15  11.1 
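For illustration, a conduction coupling between two joined plates can be estimated from the standard formula G = kA/L for a conduction path of cross-section A, length L and conductivity k. The conductivity, plate dimensions and path length below are illustrative assumptions, not the model's actual data.

```python
# Minimal sketch of a plate-to-plate conduction coupling estimate.
# All numbers are assumed for illustration, not the satellite's data.
k_al = 150.0        # W/(m K), assumed aluminum-alloy conductivity
thickness = 1e-3    # m, plate thickness (1 mm side plates)
width = 0.3         # m, assumed length of the shared edge
L_path = 0.15       # m, assumed node-center to node-center distance

A = thickness * width     # cross-section of the heat-flow path, m^2
G = k_al * A / L_path     # conduction coupling constant, W/K
print(f"G = {G:.4f} W/K")   # -> G = 0.3000 W/K
```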
The thermo-optical properties of the surfaces are chosen to be as realistic as possible, given the simplicity of the thermal model. All radiation reflection is assumed to be diffuse, as is common for many types of surfaces. The inner surfaces are painted black and have high emissivity, , to promote a uniform interior temperature. The outer surfaces are of three types. The three sides covered with solar cells also have high emissivity, , to favor the cooling of the solar cells. On the other hand, they have high solar absorptivity, . Of this 0.75, 0.18 is converted into electricity and the remaining 0.57 is dissipated as heat in the solar cells. The top surface, the side with the cylinder, and the cylinder itself (its two sides) have low emissivity, , and low solar absorptivity, , chosen to simulate the effect of a multilayer insulator. In contrast, the bottom surface simulates a radiator, with  and  (like an optical solar reflector). All of these thermo-optical properties are summarized in Fig. 2. For the computation of the corresponding absorption factors, we employ the ray-tracing Monte Carlo simulation method provided by ESARAD (ESATAN’s radiation module) ESATAN ().
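The absorption factors themselves come from ESARAD, but the underlying ray-tracing Monte Carlo idea is easy to sketch: emit cosine-weighted (diffuse) rays from one surface and count the fraction intercepted by another. The geometry below (two parallel unit squares a unit distance apart) and the sample count are illustrative assumptions, unrelated to the satellite's surfaces.

```python
import numpy as np

# Monte-Carlo estimate of the diffuse view factor F_12 between two parallel
# unit squares separated by h. Assumed toy geometry, not the satellite's.
rng = np.random.default_rng(0)
a, h, n_rays = 1.0, 1.0, 100_000

# random emission points on plate 1 (the z = 0 plane)
x0 = rng.uniform(0, a, n_rays)
y0 = rng.uniform(0, a, n_rays)
# cosine-weighted (Lambertian) emission directions
u1 = rng.uniform(size=n_rays)
u2 = rng.uniform(0, 2 * np.pi, n_rays)
r = np.sqrt(u1)
dx, dy, dz = r * np.cos(u2), r * np.sin(u2), np.sqrt(1.0 - u1)

# propagate each ray to the plane z = h and test whether it lands on plate 2
t = h / dz
x1, y1 = x0 + t * dx, y0 + t * dy
hits = (x1 >= 0) & (x1 <= a) & (y1 >= 0) & (y1 <= a)
F12 = hits.mean()
print(f"estimated F_12 = {F12:.3f}")   # analytic value is about 0.200
```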
Taking into account the above information, one obtains the following conduction (in W/K) and radiation (in W/K) matrices:
(29)  
(30)  
(31) 
The satellite’s thermal characteristics are defined by the data set , but the radiation heat exchange depends on the nodal temperatures, which in turn depend on the heat input. As explained in Sect. II.1, the appropriate set of nodal temperatures corresponds to the steady state for averaged heat inputs, given by the algebraic equation that results from setting the time derivative to zero in Eq. (4). Since the external heat inputs depend on the orbit, we proceed to define the orbital characteristics.
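As a sketch of this steady-state computation, the following solves the nonlinear algebraic system obtained by zeroing the time derivative in a lumped-parameter nodal balance, using Newton's method. The two-node model and all numerical values below are illustrative assumptions, not the paper's ten-node data.

```python
import numpy as np

# Two-node toy balance: conduction + radiation couplings, radiation to space,
# and constant (averaged) heat inputs Q. Steady state: residual(T) = 0.
sigma = 5.670e-8                           # Stefan-Boltzmann, W/(m^2 K^4)
G = np.array([[0.0, 0.5], [0.5, 0.0]])     # conduction couplings, W/K
R = np.array([[0.0, 0.05], [0.05, 0.0]])   # radiation exchange areas, m^2
E = np.array([0.2, 0.1])                   # node-to-space radiation areas, m^2
Q = np.array([20.0, 5.0])                  # averaged heat inputs, W

def residual(T):
    cond = G @ T - G.sum(axis=1) * T
    rad = sigma * (R @ T**4 - R.sum(axis=1) * T**4) - sigma * E * T**4
    return cond + rad + Q

def jacobian(T):
    # off-diagonal: G_ik + 4 sigma R_ik T_k^3; diagonal from the loss terms
    J = G + 4.0 * sigma * R * T[np.newaxis, :]**3
    np.fill_diagonal(J, 0.0)
    J += np.diag(-G.sum(axis=1) - 4.0 * sigma * (R.sum(axis=1) + E) * T**3)
    return J

T = np.full(2, 300.0)                      # initial guess, K
for _ in range(50):
    step = np.linalg.solve(jacobian(T), -residual(T))
    T += step
    if np.max(np.abs(step)) < 1e-10:
        break
print(T)   # steady-state temperatures, K
```

The same Newton iteration applies unchanged to the ten-node system; only the matrices grow.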
We choose a circular equatorial orbit  m above the Moon’s surface, such that  s. The radiation heat input to the satellite consists, on the one hand, of the solar irradiation and the Moon’s albedo and, on the other hand, of the Moon’s constant IR radiation. We take 0.12 for the Moon’s mean albedo and 270 K for the blackbody-equivalent temperature of the Moon. There is also heat produced by the dissipation of electrical power in the equipment (nodes 9 and 10). For the sake of simplicity, the dissipation rate is assumed to be constant and equal to the mean electrical power generated in an orbit. Over part of the orbit, the Moon eclipses the Sun, so the satellite receives no direct sunlight or albedo, although there is always IR radiation from the Moon. The satellite is stabilized such that the cylinder (the “observation instrument”) always points to the Moon and the longer edges are perpendicular to the orbital plane. The radiation heat input can be computed from the given orbital characteristics and the satellite’s thermo-optical characteristics, in particular, the absorption factors. It has been computed with ESARAD, taking 111 positions on the orbit, that is, at intervals of one minute.
In Fig. 3, all 111 positions are plotted. The initial position of the satellite is at the subsolar point, and it moves towards the east. The total external radiation heat input to the first eight nodes (the ones that receive radiation) is plotted in Fig. 4 (only at every other position, for clarity). Note the symmetry between nodes 1 and 3, which correspond to the lateral faces covered with solar cells. Node 4 corresponds to the back side, also covered with solar cells, so the external radiation load on it has a similar time variation, but displaced in time. The solar radiation absorbed by all the solar cells results in an orbital mean power of 10.5 W, dissipated in the equipment and split between nodes 9 and 10, which receive 4.35 and 6.15 W, respectively. The external radiation absorbed by the side with the cylinder (node 2) is considerably smaller than the radiation absorbed by the sides with solar cells, due to the low value of  (and of , as well) for the corresponding surface. The bottom and top outer surfaces, which belong to nodes 5 and 6, respectively, have view factors for the external radiation that are much less favorable than those of the side surfaces. Nevertheless, the amount of lunar IR radiation absorbed by the bottom surface, due to its high , is such that the orbital mean of the external heat input to node 5 is, in fact, larger than that for node 2 (see Table 1). Naturally, node 7, with no outer surface, does not absorb any external radiation.
To determine the hot and cold cases of the orbit, we compute the total heat load on the satellite at each position in the orbit, finding a maximum of 90.59 W at position 14 and a minimum of 18.65 W at any position in the eclipse, during which all the heat loads stay constant. The solution of the corresponding steady-state problems, at position 14 and at a position in the eclipse, yields the two sets of nodal temperatures (in the given node order):
Hot case:
.
Cold case:
.
The results in Sect. IV.1 show that the periodic thermal state does not reach these extreme temperatures, which could endanger the performance of the satellite.
IV.1 Jacobian matrix and periodic solution
We compute the averages  and substitute them, together with the data set , into Eq. (4) to find the steady-state temperatures. The results are given in Table 1. Then, according to Eqs. (6) and (7), the Jacobian matrix is (in s⁻¹)
By inspection, one can check that it has nonnegative off-diagonal and negative diagonal elements; that is to say,  is a  matrix. It is also diagonally dominant, namely, . The eigenvalues of  are
Their inverses (in absolute value) give the typical relaxation times of the corresponding thermal modes. Thus, we deduce that the relaxation time of the fastest mode is about 55 s, whereas the relaxation time of the slowest one is  s. The latter time is similar to  s.
The eigenvalues are real numbers and, furthermore,  is diagonalizable, because its eigenvalues are distinct. Both properties also follow from  being almost symmetric: its antisymmetric part, , is relatively small, namely, , where the matrix norm is the Frobenius norm (other standard matrix norms yield similar values). Therefore, the notion of “radiation conductance” (Sect. II.2) is appropriate in this case, as regards its use in the linear equations. The thermal modes are almost normal; namely, the eigenvector matrix  is such that , with an error . The most interesting eigenvector of  is, of course, the positive (Perron) eigenvector, which corresponds to the slowest mode. The normalized positive eigenvector is
Note that the temperature increments are of a similar magnitude, except those of node 7 and, especially, nodes 9 and 10, which correspond, respectively, to the tray and to the boxes of electronic equipment. The next mode, corresponding to the eigenvalue , has one negative component (the ninth), and the remaining modes have more than one.
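The quantities discussed above, namely the relative Frobenius norm of the antisymmetric part, the near-orthogonality of the eigenvector matrix, and the relaxation times obtained from the eigenvalues, can all be computed in a few lines. The 3×3 matrix below is an illustrative stand-in for the ten-node Jacobian, with assumed values.

```python
import numpy as np

# Assumed nearly symmetric Jacobian with nonnegative off-diagonal and
# negative diagonal elements (a stand-in for the ten-node matrix), in 1/s.
J = np.array([[-2.00, 0.30, 0.10],
              [ 0.28, -1.50, 0.20],
              [ 0.12, 0.22, -1.00]])

# relative size of the antisymmetric part (Frobenius norms)
antisym = 0.5 * (J - J.T)
rel_asym = np.linalg.norm(antisym) / np.linalg.norm(J)

# eigen-decomposition: eigenvalues give the modes' relaxation times,
# and P^T P - I measures how far the modes are from normal (orthogonal)
eigvals, P = np.linalg.eig(J)
P = P / np.linalg.norm(P, axis=0)          # unit-norm eigenvector columns
nonnormality = np.linalg.norm(P.T @ P - np.eye(3))
relax_times = 1.0 / np.abs(eigvals)        # relaxation times, s

print(rel_asym, nonnormality, np.sort(relax_times))
```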
To calculate , we choose the Fourier series of Eq. (27) or, rather, the inverse discrete Fourier transform of Eq. (28), which can be computed with an FFT algorithm. The Fourier coefficients  can also be computed with the FFT, according to Eq. (26). Once the vector  at the 111 positions is available, the set of nodal temperatures corresponding to the first-order perturbative solution is plotted in Fig. 5. A measure of the accuracy of this perturbative calculation is given by the second-order calculation in the next section. The truncation of the Fourier series imposed by the sampling of  is also a source of error, unrelated to perturbation theory. The piecewise smoothness of the function  suggests that this error is small (but see Sect. IV.2).
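A minimal sketch of this first-order computation, under assumed data (a 2-node Jacobian and toy driving functions standing in for the satellite model): obtain the Fourier coefficients of the sampled driving vector with the FFT, solve each harmonic's linear system, and resynthesize the periodic stationary solution with the inverse FFT.

```python
import numpy as np

# Assumed toy data: a 2-node Jacobian (1/s), an orbital period (s), and
# periodic driving functions g(t) (K/s) sampled N times over one orbit.
J = np.array([[-2e-3, 5e-4],
              [ 5e-4, -1e-3]])
P = 6.7e3
N = 128
w = 2 * np.pi / P
t = np.linspace(0.0, P, N, endpoint=False)
g = np.vstack([1e-2 * np.cos(w * t),
               5e-3 * np.sin(2 * w * t)])

g_hat = np.fft.fft(g, axis=1) / N        # Fourier coefficients per node
n = np.fft.fftfreq(N, d=1.0 / N)         # signed harmonic indices
T_hat = np.empty_like(g_hat)
Id = np.eye(2)
for k in range(N):
    # periodic stationary solution of dT/dt = J T + g, harmonic by harmonic:
    # (i n w I - J) T_n = g_n
    T_hat[:, k] = np.linalg.solve(1j * n[k] * w * Id - J, g_hat[:, k])

T1 = np.fft.ifft(T_hat * N, axis=1).real   # first-order temperatures, K
print(T1.max(axis=1) - T1.min(axis=1))     # peak-to-peak swing per node
```

The same loop over harmonics works for any node count; for the satellite model the 111 samples simply replace the toy sampling.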
It is also interesting to examine whether the first-order perturbative calculation is affected by neglecting the fastest modes: according to the analysis at the end of Sect. II.1, these modes are expected to contribute in proportion to their relaxation times. The fastest mode relaxes in about 55 s, a short but non-negligible time. As a consequence, its contribution to , which we find to have a maximum magnitude of 0.8 K, is small but not negligible. We can deduce, however, that still faster modes, which would appear in a thermal model of the satellite with more nodes, are hardly necessary.
From the engineering standpoint, note that this satellite thermal model is successful, insofar as it predicts that all nodal temperatures stay within adequate ranges. In particular, nodes 9 and 10, corresponding to the boxes with electronic equipment, stay within the range from  to 23 . These are inner nodes with large thermal capacitance and, hence, are protected from the larger changes in the external heat inputs. In contrast, the outer nodes are very exposed and undergo considerable variations in temperature, with especially sharp changes at the beginning and end of the eclipse.
IV.1.1 Second-order correction
According to Sect. II.1.1, the second-order perturbative correction  to the periodic stationary solution is obtained by the same procedure as that for , but using a different driving function , computed from  and from  itself. The computations are straightforward, and they yield the correction plotted in Fig. 6. This correction is always negative, because the negative term in the expression for , Eq. (13), dominates over the positive term. The equation (10) for  and the corresponding equation for  are both linear, so