Approximated segmentation considering technical and dosimetric constraints in intensity-modulated radiation therapy with electrons
In intensity-modulated radiation therapy, optimal intensity distributions of incoming beams are decomposed into linear combinations of leaf openings of a multileaf collimator (segments). In order to avoid inefficient dose delivery, the decomposition should satisfy a number of dosimetric constraints owing to the suboptimal dose characteristics of small segments. However, an exact decomposition with dosimetric constraints is possible only in limited cases. The present work introduces new heuristic segmentation algorithms for the following optimization problem: find a segmentation of an approximated matrix using only allowed fields and minimize the approximation error. Finally, the decomposition algorithms were implemented into an optimization programme in order to examine the assumptions of the algorithms for a clinical example. As a result, identical dose distributions with far fewer segments and a significantly smaller number of monitor units could be achieved using dosimetric constraints. Consequently, the dose delivery is more efficient and less time consuming.
Institute for Mathematics, University of Rostock, Rostock, Germany
Department of Radiotherapy and Radio-Oncology, University Medical Center
Hamburg-Eppendorf, Hamburg, Germany
Keywords: IMRT planning, intensity matrix, approximated segmentation, dosimetric and technical constraints, multileaf collimator
Mathematics Subject Classification (2000): MSC 90C90, MSC 92C50, MSC 49M25, MSC 49M27
In intensity-modulated radiation therapy (IMRT), intensity matrices with nonnegative integer entries are computed for each irradiation field. After discretization of the field into bixels, each entry of the matrix corresponds to the required intensity within this bixel. The segmentation step consists in decomposing the matrix into a linear combination of subfields (segments) shaped by a multileaf collimator (MLC). The first intuition is that a treatment plan is optimal if the linear combination of the chosen segments equals the matrix. Such a plan consists of various segments, possibly including segments in which most of the irradiation field is covered and only a few bixels receive radiation.
For dosimetric reasons, however, this model assumption does not hold in practice. Irradiation of small photon or electron segments results in a much lower dose output compared to conventional conformal fields. Therefore, the linearity assumption, that irradiating one segment is equivalent to dividing it into two parts and irradiating them separately, only holds if the two parts are still sufficiently large. In addition, the penetration depth of electrons decreases with decreasing field size and is almost independent of the beam energy for approximately cm cm fields. However, the energy dependence of the penetration depth is necessary for our new IMRT technique with electron beams to adjust the dose to the target volume by use of various beam energies. Figure 1 shows that electron fields of approximately cm cm are necessary to keep an output factor of nearly and an energy-dependent penetration depth.
As a consequence, a treatment plan should consist of segment shapes satisfying certain constraints that ensure a minimum field size. For practical purposes it is also necessary that the field openings are connected and do not degenerate into two or more parts. Besides these dosimetric constraints, there are also technical constraints reducing the number of allowed shapes. One is the leaf overtravel constraint, which accommodates the fact that the left (respectively right) leaf of the MLC cannot be shifted further than a threshold to the right (respectively left). These constraints have the consequence that not every intensity matrix is decomposable into segments satisfying the constraints. This leads us to the task of finding an approximation matrix that differs from the given intensity matrix as little as possible, together with its decomposition into “good” segments. The aim is to generate equivalent treatment plans with good segments, leading to a reduction in the number of segments and monitor units, respectively.
The decomposition problem for the exact case, without considering any additional constraints, is well studied. Algorithms for the minimization of the beam-on time can be found in [2, 3, 6, 14, 15, 21]. Approaches for minimizing the number of used segments are given in [7, 18, 26]. A variety of technical constraints are considered, see [5, 16] for the interleaf collision constraint, which prohibits an overlap of adjacent leaf pairs, and [16, 17, 22, 23, 27] for the tongue-and-groove constraint. Kamath et al.  also investigate the minimum separation constraint that requires a minimum leaf opening in each row and develop a criterion for a matrix being decomposable under this constraint. Engelbeen and Fiorini  deal with the interleaf distance constraint, where the allowed difference between two left (respectively right) leaf positions is bounded by some given threshold.
An approximation problem with the aim of reducing the total beam-on time was first formulated in  and generalized to approximated decomposition with interleaf-collision constraint in  and . The dependence between field size and output factors, penetration depth and depth-dose fall-off is outlined in . These considerations lead to the decomposition problem using segments that satisfy some minimum field size constraints. Under these constraints, an exact decomposition of the intensity matrices is, in general, no longer possible (cf. ) and an approximation problem has to be formulated.
Another algorithmic approach that aims at minimizing the number of segments while keeping the quality of the treatment plan is direct aperture optimization, which combines the choice of beams, apertures and weights without computing a leaf sequencing step. Shepard et al.  allow only a limited number of apertures for each beam; Bedford and Webb  also integrate constraints on the segment shape and size in the direct aperture optimization approach. Our algorithm is applicable if one uses intensity profile segmentation and wants to compute segmentations satisfying certain field size constraints while reducing the complexity of the plan. Matuszak et al.  deal with the minimization of the monitor units by smoothing the intensity profiles.
The paper is organized as follows. Section 2 gives two definitions of what we call a segment with good dosimetric properties, one basic definition and an extended one including one further constraint. We concretely define the approximation problem and in Section 3 propose heuristic algorithms for both definitions, each of them consisting of seven different steps. The different parts of the algorithm and their properties are analyzed in Section 4. We outline in particular that the solutions of the subproblems in steps and are indeed optimal. Section 5 introduces the clinical case we used for testing the quality of our segmentations. Section 6 gives computational results for the test case and detailed numerical results for the segmentation of clinical matrices from different IMRT treatment plans.
2 Problem formulation and definitions
Throughout the paper we use the notation
for integers and , . Let denote the given fluence matrix of size . Feasible leaf positions of the MLC are modeled as binary matrices , called segments, that satisfy the consecutive-ones property in each row. In other words, is a segment if there are integral intervals for all , representing the positions of the left and the right leaf, such that
Furthermore, for each segment , we define for all . For the described reasons, we introduce five parameters , , , and representing the following constraints:
Left Leaf Overtravel Constraint: For all , we require . In each row the left leaf cannot be shifted more to the right than to the bixel with index .
Right Leaf Overtravel Constraint: For all , we require . In each row the right leaf cannot be shifted more to the left than to the bixel with index .
Minimum Separation Constraint and Row Overlap: If a row is not totally covered, we require . Similarly, if rows and are not completely covered, we require . At least consecutive bixels in each row receive radiation and the irradiated areas of two consecutive rows overlap in at least bixels.
Minimum Vertical Gap: We require a minimum vertical field size and a minimum vertical size of the covered regions, i.e. in each column consecutive ones or zeros should have a minimum number . In detail, if , and for some column , we have . Analogously, we require the same for consecutive zeros framed by ones.
Minimum Total Field Height: At least consecutive rows of the field are not totally covered, i.e. there are at least consecutive rows with . This ensures, that the total size of the field is reasonably large.
Of course, these parameters only make sense if , as well as and .
The case that one row of the field is totally covered and receives no radiation at all, is throughout this paper represented by the leaf positions and . In practice, one will of course choose leaf positions of the form with and that respect the leaf overtravel constraints (i) and (ii).
Remark 1. The Minimum Vertical Gap can be formulated in terms of the leaf positions as follows: If for some , then we also require . Analogously, if for some , we also have . This ensures that in the vertical direction we always have at least bixels open. Therefore, we additionally do not allow that or for and also forbid as well as for . Similarly, we require at least bixels closed in the vertical direction if there are open bixels above and below in this column of the matrix. Thus, we make sure that thin shapes in the vertical direction, which have negative dosimetric properties as discussed in the introduction, are also forbidden.
A segment is called connected if the irradiated area that corresponds to its leaf positions does not resolve into two or more parts, i.e. if the corresponding rectilinear polygon (considered as an open set) is connected.
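This connectedness condition can be checked directly from the leaf positions: the open rows must form one consecutive block, and the open intervals of any two consecutive open rows must share at least one bixel. A minimal sketch, assuming a segment is stored as a per-row list with `None` for a closed row and an inclusive column pair `(a, b)` for an open row (a representation chosen here for illustration):

```python
def is_connected(open_rows):
    """Check connectedness of a segment given as a list over rows:
    None for a fully closed row, (a, b) = inclusive open column range otherwise."""
    idx = [i for i, iv in enumerate(open_rows) if iv is not None]
    if not idx:
        return False                       # no open bixel at all
    # the open rows must be consecutive
    if idx != list(range(idx[0], idx[-1] + 1)):
        return False
    # consecutive open rows must overlap in at least one bixel
    for i, j in zip(idx, idx[1:]):
        a1, b1 = open_rows[i]
        a2, b2 = open_rows[j]
        if min(b1, b2) < max(a1, a2):
            return False
    return True
```

For instance, two open rows whose intervals touch in one column form a connected rectilinear polygon, whereas row intervals with a horizontal gap between them do not.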
As the realization of the minimum vertical gap turns out to be the most difficult task, we introduce two different definitions of “good” segments.
Given the parameters with , as well as , a segment with good dosimetric properties is a connected segment satisfying the constraints (i), (ii), (iii) and (v).
Given the parameters with , as well as and , a segment with very good dosimetric properties is a connected segment satisfying the constraints (i)-(v).
For brevity of notation we will call the segments with good dosimetric properties from now on simply segments and the segments with very good dosimetric properties advanced segments.
All in all, we have two optimization problems: Given a matrix with positive integer entries and suitable parameters , find a segmentation
where the are positive integers and
Approximated Segmentation into segments (ASS): the are segments
Approximated Segmentation into advanced segments (ASAS): the are advanced segments
The value of the objective function of the optimization problem is called total change. The delivery time of the segmentation is and the number of segments is .
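These three quantities can be read off directly from a segmentation. The following sketch (representation assumed: segments as binary matrices, weights as positive integers) computes the total change, the delivery time and the number of segments:

```python
import numpy as np

def segmentation_stats(A, weights, segments):
    """A: target intensity matrix; weights: positive integer coefficients;
    segments: binary matrices with the consecutive-ones property per row."""
    B = sum(u * S for u, S in zip(weights, segments))   # approximation matrix
    total_change = int(np.abs(A - B).sum())             # objective of ASS/ASAS
    delivery_time = sum(weights)                        # segments counted with multiplicity
    return total_change, delivery_time, len(segments)
```

An exact segmentation has total change zero; any deviation between the weighted sum of segments and the given matrix contributes its absolute value to the objective.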
3 Approximated segmentation
Now we introduce two basic algorithms for ASS and ASAS, each consisting of seven different steps. Two of the steps are identical in both algorithms, as they are computed before the segmentation step and therefore do not affect the parameter . Five of the steps differ depending on whether the constraint is regarded or not (namely steps -). Steps and will be solved exactly, whereas steps - are heuristic.
The basic structure of our algorithms is as follows:
Solve the Leaf Overtravel Constraint Problem (LOC) on : Given with , find an approximation with nonnegative integer entries that can be decomposed with respect to the Leaf Overtravel Constraint such that the total change is minimal.
Solve the Minimum Separation Constraint Problem (MSC) on : Given , find an approximation that can be decomposed into segments with for all such that the total change is minimal.
ASS: Compute an approximated segmentation into connected segments satisfying (i)-(iii), but not necessarily (v).
ASAS: Compute an approximated segmentation into connected segments satisfying (i)-(iv), but not necessarily (v).
The approximated segmentation may violate (v) and belongs to an approximation matrix that might have a large total change.
Combination of Fields: Combine stepwise two disjoint fields, if this is possible with small total change. For ASAS, make sure that the new field satisfies (iv).
Make-two-of-two: For , where violates (v) and satisfies (v), compute a substitution , such that and satisfy (v). For ASS, make sure that and satisfy (i)-(iii). For ASAS, and must also satisfy (iv).
Handle Critical Segments: If there are still segments violating (v), try to combine them with a feasible segment such that the total change of the combination is smaller than omitting the critical segment. Take care, that (i)-(iii) (resp. (i)-(iv)) hold.
Total Change Improvement: For all segments and all rows , check whether an increase or decrease of (respectively ) reduces the total change. For ASS, change or only if (i)-(iii) still hold. For ASAS, change or only if (i)-(iv) still hold. Look at all the segments cyclically until no more changes are possible.
The output of the algorithm is a segmentation of an approximation matrix that consists only of segments for ASS and only of advanced segments for ASAS. The LOC- and the MSC-approximation aim at producing a first approximation that can be decomposed better than the initial matrix under the given constraints (i)-(iii) in the segmentation step. The combination of fields, the make-two-of-two step and the handling of critical segments try to provide segments satisfying (v) without producing too much total change. Finally, the total change improvement is computed in order to improve the approximation.
A very important feature of our heuristic is that further constraints can easily be taken into consideration. For example, if adjacent left and right leaves may not overlap (interleaf collision constraint), one can allow only leaf positions that respect this constraint in the optimization steps -.
Now we describe in detail the subalgorithms and outline some of their basic properties. Throughout the steps that follow the leaf overtravel approximation, we allow only leaf positions that respect the leaf overtravel constraint. For simplicity, we will not mention this basic fact in every step.
4.1 Leaf Overtravel Constraint
As the leaf overtravel constraint only affects a single row of the matrix, the problem LOC can be solved for each row independently. Thus, we compute an optimal approximation of a vector . Segmentations reduce to sums of intervals . Segments are simply 0-1-vectors with consecutive ones.
A vector has a segmentation with corresponding leaf positions and iff for all and for all .
Let and . On the one hand, the algorithm of Bortfeld (see ) provides a segmentation where the left leaf position is for exactly segments. Analogously, the right leaf position is for segments and no other leaf positions occur. On the other hand, it is obvious that if (respectively ) there will be a segment with left (respectively right) leaf position in every segmentation. This concludes the proof. ∎
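One standard way to obtain such a per-row decomposition, with beam-on time equal to the number of up-steps, is the level-run construction: at every intensity level, each maximal run of sufficiently large entries becomes one unit-weight interval. This is a sketch of that idea, not the cited algorithm verbatim; the left leaf positions produced are exactly the up-step indices, as in the lemma:

```python
def row_decompose(a):
    """Decompose a nonnegative integer vector into unit-weight intervals:
    at each level t, every maximal run of entries >= t yields one interval.
    The interval count equals the number of up-steps (optimal beam-on time)."""
    intervals = []
    for t in range(1, max(a, default=0) + 1):
        j = 0
        while j < len(a):
            if a[j] >= t:
                start = j
                while j < len(a) and a[j] >= t:
                    j += 1
                intervals.append((start, j - 1))   # inclusive end
            else:
                j += 1
    return intervals
```

For the vector (1, 2, 1) this yields the intervals (0, 2) and (1, 1): two intervals, matching the two up-steps, and their indicator vectors sum back to the input.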
Therefore, we have to find an approximation vector that has no up-steps after index and no down-steps before index . As we assume , we can use symmetry to solve the approximation problem for the right leaf positions. Besides, the criterion from Lemma 1 shows that for each optimal solution of LOC. We simply need to solve the following problem for the subvector and the left overtravel constraint:
LOC-left: Given a vector , find an approximation vector with for such that .
The algorithm for solving the problem LOC-left is described in Algorithm 1 in the appendix. It uses a graph theoretical approach and computes a shortest path in a layered digraph, where the -th layer consists of nodes representing the possible entries of the -th component of the approximation vector. The problem LOC-left is similar to the Monotone Discrete Approximation Problem (MDAP) formulated in  and the algorithm follows the same idea.
Let and and let be the objective value of an optimal solution of LOC-left with . Let be the corresponding predecessor . With respect to Algorithm 1 (which uses the notation above) we obtain the following
Algorithm 1 computes an optimal solution of LOC-left.
The initial values are trivially correct. Let now and let be an optimal approximation of with . By induction, is computed correctly and thus
Therefore is a lower bound for the total change. The choice of makes sure that the optimal value of is chosen and, obviously, the approximation vector from Algorithm 1 realizes the lower bound for the total change of . ∎
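The exact data of LOC-left are given above only symbolically. As an illustration of the layered shortest-path idea, the following sketch solves the closely related problem of L1-approximating a row so that no up-steps occur after a given index l (i.e. the suffix starting at l is non-increasing), with the layer states restricted to candidate values drawn from the input; the function name and interface are assumptions for this sketch:

```python
def loc_left(a, l):
    """L1-approximate a so that b[j] >= b[j+1] for all j >= l;
    entries before position l are copied unchanged (cost 0).
    DP over candidate values with a cheapest-feasible-predecessor pass per layer."""
    INF = float("inf")
    vals = sorted(set(a) | {0})            # an optimal suffix uses only these values
    n = len(a)
    dp = [abs(a[l] - v) for v in vals]     # cost of position l taking value vals[k]
    choice = []                            # back-pointers, one list per later position
    for j in range(l + 1, n):
        # value vals[k] at position j is feasible after vals[w] at j-1 iff w >= k,
        # so take suffix minima of dp from the top
        best, arg = [INF] * len(vals), [0] * len(vals)
        run, run_arg = INF, 0
        for k in range(len(vals) - 1, -1, -1):
            if dp[k] < run:
                run, run_arg = dp[k], k
            best[k], arg[k] = run, run_arg
        choice.append(arg)
        dp = [best[k] + abs(a[j] - vals[k]) for k in range(len(vals))]
    # reconstruct the optimal suffix from the back-pointers
    k = min(range(len(vals)), key=lambda i: dp[i])
    suffix = [vals[k]]
    for arg in reversed(choice):
        k = arg[k]
        suffix.append(vals[k])
    suffix.reverse()
    return a[:l] + suffix
```

For a = (1, 3, 2, 4) and l = 1 the suffix (3, 2, 4) is replaced by a non-increasing vector at total change 2, e.g. (3, 2, 2).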
4.2 Minimum separation constraint
Like the Overtravel Constraint, the Minimum Separation Constraint can be handled independently for each row of the matrix. Thus the task is the following:
MSC-Row: Given a vector with nonnegative integer entries, find an approximation vector with nonnegative integer entries such that has a decomposition into intervals of length and .
We know from Kamath et al. (see ) that a vector can be decomposed without violating the minimum separation constraint if the optimal decomposition of their algorithm SINGLEPAIR does not violate the minimum separation constraint. For example, the vector cannot be decomposed with , as the optimal decomposition is and the used intervals do not have a minimum length of . This motivates the approximation problem defined above.
Obviously, the problem MSC-Row can be formulated as an integer linear programming problem as follows:
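One natural formulation of this kind (a hedged reconstruction, with $x_{l,r}$ counting how often the interval $[l,r]$ is used, $\sigma$ denoting the minimum separation parameter, and $p_j, q_j$ linearizing the absolute values) is:

```latex
\min \sum_{j=1}^{n} (p_j + q_j)
\quad \text{s.t.} \quad
\sum_{\substack{l \le j \le r \\ r-l+1 \ge \sigma}} x_{l,r} \;-\; a_j \;=\; p_j - q_j
\qquad (j = 1,\dots,n),
\qquad x_{l,r} \in \mathbb{Z}_{\ge 0},\; p_j, q_j \ge 0.
```

Here $p_j$ and $q_j$ capture over- and underdose at bixel $j$, so the objective is exactly the total change of the row.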
Let be the approximation matrix resulting from the MSC-step. The basic idea of the segmentation is to consider the current total change to be the sum of the absolute values of the entries of and to iteratively compute a segment whose subtraction reduces the current total change. In each step, the matrix is updated by setting . At the end of the segmentation, a positive entry in represents a bixel with underdose and a negative entry a bixel with overdose.
4.3.1 Segmentation for ASS
Let denote the -th row of for all . A segmentation consists of segments each represented by its leaf positions for . The main body for the segmentation step is described in Algorithm 2 in the appendix. This algorithm uses the subroutine Find interval ASS that is precisely described in Algorithm 3.
The idea behind this heuristic choice of the segment subtracted from in each step is that we compute the first interval from a sliding window segmentation (see again ), with as the index of the first up-step and as the index of the first down-step in the corresponding row. If these values already satisfy all requirements, we stop. Otherwise, we lengthen the interval by changing such that the overlap with the previous row increases. In order to keep the total change small, we refrain from this lengthening and close the row if there is no overlap at all.
The segmentation resulting from Algorithm 2 satisfies the constraints (i)-(iii) and the connectedness, but may contain segments that do not have the minimum total field height .
4.3.2 Segmentation for ASAS
The segmentation step for ASAS differs slightly from the ASS segmentation. For ASS, we always find a segment whose first nonzero row is exactly the row in which the current matrix has its first nonzero row. Going through the rows, we add further ones to the segment if the current sliding window interval overlaps with the previous row.
Computational tests have shown that for ASAS a different technique makes sense because the vertical criterion (iv) plays a role. In detail, whenever we decide for (respectively ), this immediately implies (respectively ) for . Additionally, as we need at least consecutive ones in each row, we also know (respectively ). Therefore, we use a matrix to store unavoidable ones, i.e. if , we put
and if , we define
As we require consecutive ones in each row, we also put
if and at the same time.
We also have to take care that the covered regions have a vertical minimum size. If (respectively ), we analogously put unavoidable zeros into our matrix using the corresponding rules to (2) and (3). (4) is not necessary here, as zeros do not have to be consecutive in the rows.
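The bookkeeping of unavoidable ones can be sketched as follows. Rules (2)-(4) are given above only symbolically, so this sketch illustrates just the underlying idea that a column which newly opens in row i must stay open for the next γ − 1 rows; the function name, the set-based column representation and the dictionary U of forced-open columns are assumptions of this sketch:

```python
def mark_unavoidable_ones(open_cols_prev, open_cols_curr, i, gamma, U):
    """Columns open in row i but closed in row i-1 start a vertical run of ones,
    so they must remain open in rows i+1, ..., i+gamma-1 to satisfy the
    minimum vertical gap. U maps row index -> set of forced-open columns."""
    for j in open_cols_curr - open_cols_prev:
        for k in range(i + 1, i + gamma):
            U.setdefault(k, set()).add(j)
```

A symmetric routine would record unavoidable zeros for columns that newly close while open bixels lie above and below.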
Example 1. Let and let the -entries of the matrix above denote the open bixels for row and . Before choosing the leaf positions for row , there are some unavoidable ones and zeros that have to be respected.
Thus, the choice of the leaf positions in one row produces unavoidable ones or zeros in other rows. Our algorithm will choose and such that the total change of this row and the corresponding unavoidable ones is minimal. Therefore it might happen that we do not use the first nonzero row, and we also no longer use the sliding window technique, as the minimum vertical gap constraint (iv) prohibits so many leaf positions that we can compare the remaining ones with regard to the resulting total change. Algorithms 4 and 5 in the appendix show the corresponding segmentation steps.
The idea behind this algorithm is that for each segment and in each row we look at all feasible leaf positions. For each pair , we compute the value , which is the difference between the number of positive entries and the number of nonpositive entries in this row as well as in the corresponding unavoidable ones. The larger this value is, the better the pair suits the segmentation. The unavoidable zeros are not taken into account because it is not necessarily bad if an entry is closed, as this entry can be part of the following segments. We close a row if the corresponding optimal value of benchmark is zero.
Finally, our procedure leads to a segmentation satisfying (i)-(iv) and the connectedness, but it does not necessarily satisfy the minimum total field height .
Remark 4. One might argue that for ASS it is possible to compute exactly the segment that reduces the total change in this step as much as possible. For example, one can consider a layered digraph with layers of nodes. In layer , we have nodes representing feasible leaf positions, and we draw an edge between and if the combination of these two leaf positions satisfies all constraints. Furthermore, we draw edges from a source to all nodes in the first layer and from all nodes in the last layer to a sink. The edge weights are just the total change reductions caused by the leaf positions of the end node of the edge, and zero for all edges whose end node is the sink. For a detailed description of the graph see . The optimal segment can then be found by a shortest path computation in the digraph. Such a choice, however, is not a good idea, because reducing the total change as much as possible leads to badly decomposable residual matrices. For example, for , would be reduced by and the residuum is badly decomposable. The sliding window technique we use is better and leads to .
For ASAS, the constraints are in any case too complex to compute in one step the optimal segment that reduces the total change by a maximum value.
4.4 Combination of fields
Given two segments and , let us consider the open regions and of and , precisely
If and , we merge and and get one new segment with
We iterate this step, until no two segments can be merged by this procedure. Obviously, this step does not affect the total change.
Afterwards, we compute a second combination step and merge segments if and . This means that there is only one closed row between the two segments. We compute leaf positions and such that putting ones into the interval in row produces the smallest total change with respect to the current approximation matrix.
Again, we drop and out of our segmentation and this time add with
Again, we iterate this procedure until no more such merges are possible. Obviously, this second combination step affects the total change, as we increase the approximation matrix by adding ones to the segments. After both of our combination steps, all segments still satisfy (i)-(iii), while even more segments now satisfy (v). Another positive consequence is a reduction of the total number of (not necessarily pairwise different) segments, called the Delivery Time. The combination step is demonstrated in Figure 3.
For ASAS, we only combine two segments according to one of the two steps described above, if the criterion with the minimum vertical zeros (iv) is not violated after the combination.
We now define a substitution step . For ASAS, the same step is computed if and still satisfy (iv).
Note that in this case, we have and due to . Thus, the set of open rows of is a subset of the set of open rows of . If
we substitute and by segments and
defined as follows
The result is that we add the upper part of segment to segment in order to enlarge , while remains sufficiently large. If (7) is not satisfied, we can have a second try and check whether
If this condition is true, we can analogously add the lower part of to and close all rows of . The Make-two-of-two-procedure is illustrated in Figure 4.
The make-two-of-two step is computed for all critical segments. If we find a partner, we compute the substitution immediately. If no partner is found throughout the segmentation, the segment is dropped and stored in a new list of critical segments violating (v). Whereas the substitution with a partner does not affect the total change, the elimination of segments that find no partner leads to an increase of the total change.
4.6 Handle Critical Segments
Let be a critical segment stored in the make-two-of-two step and let be its open region defined by (5). We try to combine with a partner from the segmentation, with open region defined by (6), if this combination step causes a lower total change than simply omitting .
More precisely, if and , we define a new segment with
The resulting segment emanates from by attaching certain (or all) parts of and possibly also adding connecting ones. We now drop from our segmentation and from the list of critical segments and add to the segmentation, if the total change that is caused by this decision is smaller than simply omitting . If no partner for is found, we simply delete from the list of critical segments and accept the corresponding total change.
For ASAS, we execute the step if (iv) is not violated.
4.7 Total Change Improvement
As the previous segmentation steps are indeed not optimal, it may happen that some leaf positions can be increased (respectively decreased) without violating the constraints while the total change becomes smaller.
Let be the approximation matrix that corresponds to the current segmentation. The improvement step is computed using Algorithm 6 which is described in the appendix and adjusts the segmentation such that gets closer to with regard to the total change.
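A minimal sketch of such a cyclic improvement pass: each leaf position is moved by one bixel whenever this reduces the total change, until no single move helps. The feasibility re-checks of (i)-(iv) are omitted here for brevity, and the interval-per-row representation is an assumption of the sketch:

```python
import numpy as np

def total_change(A, weights, segs):
    """segs: per-segment lists over rows (None = closed, (a, b) = open interval)."""
    B = np.zeros_like(A)
    for u, s in zip(weights, segs):
        for i, iv in enumerate(s):
            if iv is not None:
                a, b = iv
                B[i, a:b + 1] += u
    return int(np.abs(A - B).sum())

def improve(A, weights, segs):
    """Cyclic local search: shift one leaf by one bixel whenever it helps."""
    n = A.shape[1]
    best = total_change(A, weights, segs)
    changed = True
    while changed:                         # revisit all segments until a fixpoint
        changed = False
        for s in segs:
            for i, iv in enumerate(s):
                if iv is None:
                    continue
                a, b = iv
                for na, nb in ((a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1)):
                    if 0 <= na <= nb < n:
                        s[i] = (na, nb)
                        c = total_change(A, weights, segs)
                        if c < best:
                            best, a, b = c, na, nb
                            changed = True
                        else:
                            s[i] = (a, b)  # revert the trial move
    return best
```

Each accepted move strictly decreases the total change, which is bounded below by zero, so the loop terminates.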
5 Clinical case
A clinical case was set up to examine the efficiency of our proposed segmentation algorithms. For a patient with cancer of the right breast, electron irradiation plans using various segmentation settings and different optimization constraints were created with a self-designed IMRT optimization programme based on our previous studies . The planning target volume was the right breast, which should receive a total dose of 50.4 Gy (1.8 Gy per fraction). In addition, the target volume should be covered by the isodose line ( of the prescribed dose). The ipsilateral lung was considered to be the organ at risk.
The optimization programme provides simultaneous optimization of beam orientation, energy and intensity for dose delivery with an add-on MLC for electrons (Euromechanics, Schwarzenbruck, Germany) presented in Figure 6 and [12, 13]. Electron dose calculation was performed by Monte Carlo simulations with the treatment planning system Pinnacle from Philips (Version 8.1s). Final dose calculation of the treatment plans was conducted using a dose grid size of mm and a dose calculation uncertainty of . The following optimization steps are necessary to generate an electron IMRT plan:
Simultaneous optimization of beam orientation, energy and intensity: A set of radiation incidence angles (typically 10–15) is determined given by table and gantry angles [9, 24]. For each configuration, the algorithm calculates the optimal fluence distribution, given by a nonnegative integer matrix.
The intensity matrices are approximately decomposed into a superposition of allowed segments such that the deviation between desired and actual fluence is minimal. The result is a set of segments, each of which is given by the corresponding MLC leaf positions and its dose weight.
The segments from step 2 are treated as candidates for the treatment plan. In a third optimization step, the doses of the candidates are calculated for all beam energies and then optimized for a given weight proportion between best target coverage and minimum dose to critical organs, in order to find the final set of segments with optimal beam energies and their corresponding monitor units.
In this paper, we have focused on step 2 and introduced optimization algorithms for an approximate decomposition of intensity matrices.
Until now, the segmentation step consisted in exactly decomposing the intensity matrices using all deliverable segments. In our approach, we admit a decrease in the decomposition accuracy in order to obtain segments which satisfy the dosimetric and technical constraints. Step 3 justifies the approximation approach in Step 2, as a larger approximation error does not necessarily result in a suboptimal treatment plan. Indeed, larger segments produce homogeneous dose distributions and thus the same final fluence can be generated using fewer, larger segments. The acceptability of a treatment plan is decided after step 3 by means of dose volume histograms (see Section 6), and a plan is only accepted if the required dose constraints are not exceeded. Therefore, the danger of cumulative deviation in the approximation step does not really exist, as the computed segments are just candidates for the treatment plan that pass through a further optimization step.
At first, we compare electron IMRT plans created with different segmentation settings for the clinical case described in Section 5. The comparison was conducted using two different optimization settings: one setting to achieve a better dose coverage of the breast and the other to reach a better sparing of the lung. Note that the segmentation settings refer to parameters in the segmentation step, which is discussed here, whereas the optimization settings play a role in the final optimization step, which is not part of this paper. Finally, we give a detailed evaluation of the results of the decomposition step.
A treatment plan with a segmentation setting uses the decomposition algorithm with a minimum total field height of , a minimum separation constraint and row overlap of and a minimum vertical gap of . The decomposed matrices vary in their vertical size and their horizontal size , as they describe only those parts of the beam head where the target volume is located. Thus, in practice, the overtravel parameters and depend on the positioning of the matrix and are set individually for each matrix. Our electron MLC is capable of shifting the leaf edges to of the radiation field.
The plan quality was evaluated by means of dose volume histograms that indicate the amount of dose delivered to a certain volume of the patient (here: the right breast and the right lung). Thus, dose homogeneity in the target volume and dose exposure to the organs at risk can be examined. In Figure 7, the dose volume histograms for both optimization settings demonstrate that almost identical dose distributions can be achieved using smaller or larger minimum MLC openings (cf. setting and ). In fact, the treatment plan could be slightly improved by use of a minimum vertical gap parameter of which avoids single leaf openings and closings.
Tables 1 and 2 illustrate the main benefit of our approach, as identical results can be achieved with approximately two thirds fewer segments and a significantly smaller number of monitor units by use of dosimetric constraints. As a result, the dose delivery is more efficient and less time consuming. The minimum number of segments is reached for setting , and computational tests have shown that this configuration produces the optimal results. As the leaf width is cm, fields with a horizontal and vertical height of bixels have a size of approximately cm cm, which confirms our dosimetric constraint of cm cm minimum segment size (cf. Figure 1). It can also be demonstrated that minimum segment sizes greater than setting do not necessarily result in fewer segments (cf. Table 1), although the number of segments in Table 2 is slightly lower for setting . For both optimization settings, the dose volume histograms were considerably better when using minimum segment sizes smaller than setting .
It is important to underline that the number of segments and the number of monitor units in Tables 1 and 2 belong to the final IMRT plan and result from the third optimization step, not from the decomposition step of our algorithms. In fact, the monitor units have a different scale here and are not directly comparable with the delivery time from the segmentation. In contrast, the total change information stems from the decomposition step. Note that the total change of a segmentation is by itself not a meaningful quantity, because if the matrix entries are large, a larger total change is acceptable. Therefore, we compute the total sum of entries for each intensity matrix and then calculate the relative total change, i.e. the ratio between total change and total sum of entries. The smaller the relative total change, the better the decomposition.
Table 1:

| Setting | Mean Relative Total Change | Number of Segments | Number of Monitor Units |
| --- | --- | --- | --- |
Table 2:

| Setting | Mean Relative Total Change | Number of Segments | Number of Monitor Units |
| --- | --- | --- | --- |
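The relative total change defined above is simple to compute; the following is a minimal sketch (our own helper, not code from the planning system), with invented toy matrices.

```python
import numpy as np

def relative_total_change(A, B):
    """Total change (entrywise L1 distance between the intensity matrix A
    and its approximation B) divided by the total sum of entries of A."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.abs(A - B).sum() / A.sum()

A = [[3, 2], [4, 1]]   # total sum of entries: 10
B = [[3, 1], [5, 1]]   # total change: 2
rtc = relative_total_change(A, B)
```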
For the detailed evaluation of our algorithms, we use a set of clinical intensity matrices that originate from electron treatment plans for different patients and beam angles. The matrices are produced during optimization step 1 of the treatment planning that was introduced in Section 5 and uses the algorithm from . As examples, we compute segmentations for the settings , and as well as and . The values of the overtravel parameters are also produced in the pre-segmentation step. The results are shown in Tables 3 and 4 and demonstrate how much total change is caused or avoided by the individual steps of the algorithms. The overtravel-approximation and the MSC-approximation lead to a certain total change of the matrix that is put into the segmentation step. As an exact decomposition in the segmentation step is impossible, the total change increases here again. Both the combination step and the make-two-of-two step try to eliminate segments not satisfying the parameter and again cause some total change. Finally, the last two steps of the algorithm improve the performance and reduce the approximation error as much as possible. One can see that the combination step and the make-two-of-two step are performed more often for ASS, as for ASAS the vertical gap ensures that the fields already have a reasonable size after the segmentation step. Of course, the larger the parameters, and thus the minimum field size, the larger the total change becomes.
The first column in Tables 3 and 4 gives the average results over the matrices, while the second (respectively third) column represents the individual results for the matrix with the smallest (respectively largest) relative total change. Homogeneous matrices with large nonzero areas can be decomposed quite well, while matrices with only a few nonzero entries that do not span connected areas lead to unacceptable results. As a treatment plan is a superposition of several intensity profiles from different beam angles, the approximation errors balance each other and lead to applicable treatment plans, as described above. Furthermore, a certain part of the total change is unavoidable if one requires the constraints (i)-(v); e.g., the total change after the MSC-approximation is a good lower bound for the achievable total change. All in all, taking the vertical gap into account increases the total change while reducing the number of used segments and the monitor units.
Table 3:

| | Average | Rel. TC min. | Rel. TC max. |
| --- | --- | --- | --- |
| Total sum of entries | 886.4 | 1329 | 152 |
| Number of segments | 15 | 13 | 3 |
| TC after Overtravel-Approximation | 26.34 | 6 | 75 |
| TC after MSC-Approximation | 57.09 | 11 | 88 |
| TC after Segmentation | 74.33 | 13 | 91 |
| TC after combination | 80.96 | 28 | 91 |
| TC after make-two-of-two | 93.77 | 28 | 101 |
| TC after handle critical segments | 88.72 | 28 | 94 |
| TC after improvement | 82.24 | 12 | 94 |
| Relative total change | 0.16 | 0.009 | 0.62 |
Table 4:

| | Average | Rel. TC min. | Rel. TC max. |
| --- | --- | --- | --- |
| Total sum of entries | 886.4 | 997 | 384 |
| Number of segments | 6.82 | 3 | 4 |
| TC after Overtravel-Approximation | 26.34 | 7 | 89 |
| TC after MSC-Approximation | 44.39 | 7 | 134 |
| TC after Segmentation | 144.8 | 46 | 267 |
| TC after combination | 144.8 | 46 | 267 |
| TC after make-two-of-two | 159.2 | 46 | 270 |
| TC after handle critical segments | 158.8 | 46 | 270 |
| TC after improvement | 157.7 | 46 | 269 |
| Relative total change | 0.27 | 0.05 | 0.70 |
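To illustrate the bookkeeping behind the "handle critical segments" step, the sketch below flags segments whose number of open rows falls below a minimum. The representation of a segment as a 0/1 matrix and the threshold `min_open_rows` are our own modelling assumptions, not the paper's data structures.

```python
import numpy as np

def open_rows(segment):
    """Number of rows in which the segment has at least one open bixel."""
    return int((np.asarray(segment).sum(axis=1) > 0).sum())

def critical_segments(segments, min_open_rows):
    """Indices of nonempty segments with fewer open rows than required;
    such segments are candidates for merging or removal, at the cost
    of some additional total change."""
    return [i for i, s in enumerate(segments)
            if 0 < open_rows(s) < min_open_rows]

segments = [np.array([[1, 1, 0], [0, 1, 1]]),  # two open rows
            np.array([[0, 1, 1], [0, 0, 0]])]  # one open row: critical
crit = critical_segments(segments, min_open_rows=2)
```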
In the present study, dosimetric and technical constraints have been taken into consideration in intensity-modulated radiation therapy (IMRT). A set of parameters has been introduced, two of them modelling the leaf overtravel constraint, the other three ensuring a minimum field size and avoiding thin field shapes. We proposed algorithms for the approximate segmentation of intensity matrices using segments that satisfy the constraints. We basically distinguish between two approximation problems, depending on whether the vertical gap parameter is considered or not. The objective function of the optimization is the deviation between the desired and the approximated intensity profile, which has to be minimized. The segmentation step is part of an IMRT optimization process, which was examined by comparing dose volume histograms of treatment plans with small and large segments as well as with and without thin segment shapes. The histograms show that the use of larger segments results in equivalent IMRT plans with fewer segments and fewer monitor units. Although the approximation error of the segmentations rises with increasing minimum field size, equivalent or even better dose distributions could be achieved. In conclusion, this first approach to approximated segmentation in IMRT planning shows the potential of these ideas, and there is a need for further research on related approximation problems.
-  T. Achterberg. Constraint Integer Programming. PhD thesis, Technische Universität Berlin, 2007. http://opus.kobv.de/tuberlin/volltexte/2007/1611/.
-  R.K. Ahuja and H.W. Hamacher. A network flow algorithm to minimize beam-on time for unconstrained multileaf collimator problems in cancer radiation therapy. Networks, 45(1):36–41, 2005.
-  D. Baatar, H.W. Hamacher, M. Ehrgott, and G.J. Woeginger. Decomposition of integer matrices and multileaf collimator sequencing. Discrete Appl. Math., 152(1-3):6–34, 2005.
-  J.L. Bedford and S. Webb. Constrained segment shapes in direct-aperture optimization for step-and-shoot IMRT. Med. Phys., 33(4):944–958, 2006.
-  N. Boland, H. W. Hamacher, and F. Lenzen. Minimizing beam-on time in cancer radiation treatment using multileaf collimators. Networks, 43(4):226–240, 2004.
-  T.R. Bortfeld, D.L. Kahler, T.J. Waldron, and A.L. Boyer. X-ray field compensation with multileaf collimators. Int. J. Radiat. Oncol. Biol. Phys., 28:723–730, 1994.
-  K. Engel. A new algorithm for optimal multileaf collimator field segmentation. Discrete Appl. Math., 152(1-3):35–51, 2005.
-  K. Engel and A. Kiesel. Approximated matrix decomposition for IMRT planning with multileaf collimators. OR Spectrum, DOI 10.1007/s00291-009-0168-5, 2009.
-  K. Engel and E. Tabbert. Fast simultaneous angle, wedge, and beam intensity optimization in inverse radiotherapy planning. Optimization and Engineering, 6(4):393–419, 2005.
-  C. Engelbeen and S. Fiorini. Constrained decompositions of integer matrices and their applications to intensity modulated radiation therapy. Networks, DOI 10.1002/net.20324, 2009.
-  C. Engelbeen, S. Fiorini, and A. Kiesel. A closest vector problem arising in radiation therapy planning. Journal of Combinatorial Optimization, DOI 10.1007/s10878-010-9308-8, 2010.
-  T. Gauer, D. Albers, F. Cremers, R. Harmansa, R. Pellegrini, and R. Schmidt. Design of a computer-controlled multileaf collimator for advanced electron radiotherapy. Phys. Med. Biol., 53:5987–6003, 2006.
-  T. Gauer, J. Sokoll, F. Cremers, R. Harmansa, M. Luzzara, and R. Schmidt. Characterization of an add-on multileaf collimator for electron beam therapy. Phys. Med. Biol., 53:1071–1085, 2008.
-  T. Kalinowski. A duality based algorithm for multileaf collimator field segmentation with interleaf collision constraint. Discrete Appl. Math., 152(1-3):52–88, 2005.
-  T. Kalinowski. Reducing the number of monitor units in multileaf collimator field segmentation. Phys. Med. Biol., 50(6):1147–1161, 2005.
-  T. Kalinowski. Multileaf collimator shape matrix decomposition. In G.J. Lim and E.K. Lee, editors, Optimization in Medicine and Biology, pages 253–286. Auerbach Publishing, 2008.
-  T. Kalinowski. Reducing the tongue-and-groove underdosage in MLC shape matrix decomposition. Algorithmic Operations Research, 3(2), 2008.
-  T. Kalinowski. The complexity of minimizing the number of shape matrices subject to minimal beam-on time in multileaf collimator field decomposition with bounded fluence. Discrete Appl. Math., 157:2089–2104, 2009.
-  T. Kalinowski. A min cost network flow formulation for approximated MLC segmentation. Networks, DOI 10.1002/net.20394, 2010.
-  T. Kalinowski and A. Kiesel. Approximated MLC shape matrix decomposition with interleaf collision constraint. Algorithmic Operations Research, 4(1):49–57, 2009.
-  S. Kamath, S. Sahni, J. Li, J. Palta, and S. Ranka. Leaf sequencing algorithms for segmented multileaf collimation. Phys. Med. Biol., 48(3):307–324, 2003.
-  S. Kamath, S. Sahni, J. Palta, S. Ranka, and J. Li. Optimal leaf sequencing with elimination of tongue–and–groove underdosage. Phys. Med. Biol., 49:N7–N19, 2004.
-  S. Kamath, S. Sahni, S. Ranka, J. Li, and J. Palta. A comparison of step–and–shoot leaf sequencing algorithms that eliminate tongue–and–groove effects. Phys. Med. Biol., 49:3137–3143, 2004.
-  J. Lim, M.C. Ferris, S.J. Wright, D.M. Shepard, and M.A. Earl. An optimization framework for conformal radiation treatment planning. Informs Journal on Computing, 19(3):366–380, 2007.
-  M.M. Matuszak, E.W. Larsen, K. Jee, D.L. McShan, and B.A. Fraass. Adaptive diffusion smoothing: A diffusion based method to reduce imrt field complexity. Med. Phys., 35(4):1532–1546, 2008.
-  M. Nußbaum. Min cardinality C1 decomposition of integer matrices. Master’s thesis, Faculty for Mathematics, TU Kaiserslautern, 2006.
-  W. Que, J. Kung, and J. Dai. ‘Tongue-and-groove’ effect in intensity modulated radiotherapy with static multileaf collimator fields. Phys. Med. Biol., 49:399–405, 2004.
-  D.M. Shepard, M.A. Earl, X.A. Li, S. Naqvi, and C. Yu. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT. Med. Phys., 29(6):1007–1018, 2002.
-  R. Wunderling. Paralleler und objektorientierter Simplex-Algorithmus. PhD thesis, Technische Universität Berlin, 1996. http://www.zib.de/Publications/abstracts/TR-96-09/.
Appendix A
Remark 2. The interval for the first open row is computed analogously to the interval for the other rows, only ignoring the overlap constraint and instead requiring .