
# Convergecast and Broadcast by Power-Aware Mobile Agents

*A preliminary version of this paper appeared in Proc. 26th International Symposium on Distributed Computing (DISC 2012).*

Julian Anaya, Jérémie Chalopin, Jurek Czyzowicz,
Arnaud Labourel, Andrzej Pelc (partially supported by NSERC discovery grant and by the Research Chair in Distributed Computing at the Université du Québec en Outaouais), Yann Vaxès
Université du Québec en Outaouais, C.P. 1250, Gatineau, Qc. J8X 3X7 Canada.
E-mails: ingjuliananaya@gmail.com, jurek@uqo.ca, pelc@uqo.ca
LIF, CNRS & Aix-Marseille University, 13288 Marseille, France.
E-mails: {jeremie.chalopin,arnaud.labourel,yann.vaxes}@lif.univ-mrs.fr
###### Abstract

A set of identical mobile agents is deployed in a weighted network. Each agent has a battery – a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which, at the beginning, each agent has some initial piece of information, and the information of all agents has to be collected by some agent; and broadcast, in which the information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet.

The objective of this paper is to investigate the minimal value of power, initially available to all agents, with which convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting, every agent has to execute its algorithm while being unaware of the network.

In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs.

In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no $(2-\epsilon)$-competitive algorithm for convergecast or for broadcast in the class of lines, for any $\epsilon > 0$.

## 1 Introduction

### 1.1 The model and the problem

A set of agents is deployed in a network represented by a weighted graph $G$. An edge weight is a positive real representing the length of the edge, i.e., the distance between its endpoints along the edge. The agents start simultaneously at different nodes of $G$. Every agent has a battery: a power source allowing it to move in a continuous way along the network edges. An agent may stop at any point of a network edge (i.e., at any distance from the edge endpoints, up to the edge weight). The movements of an agent use its battery proportionally to the distance traveled. We assume that all agents move at the same speed, equal to one, i.e., we can interchange the notions of the distance traveled and the time spent while traveling. In the beginning, the agents start with the same amount of power, denoted $P$, allowing each of them to travel the same distance $P$.

We consider two tasks: convergecast, in which, at the beginning, each agent has some initial piece of information, and the information of all agents has to be collected by some agent, not necessarily predetermined; and broadcast, in which the information of one specified agent has to be made available to all other agents. In both tasks, agents notice when they meet (at a node or inside an edge) and they exchange the currently held information at every meeting.

The task of convergecast is important, e.g., when agents have partial information about the topology of the network and the aggregate information can be used to construct a map of it, or when individual agents hold measurements performed by sensors located at their initial positions and the collected information serves to make some global decision based on all measurements. The task of broadcast is used, e.g., when a preselected leader has to share some information with other agents in order to organize their collaboration in future tasks.

Agents try to cooperate so that convergecast (respectively broadcast) is achieved with the smallest possible initial battery power of the agents, i.e., minimizing the maximum distance traveled by an agent. We investigate these two problems in two possible settings, centralized and distributed.

In the centralized setting, the optimization problems must be solved by a central authority knowing the network and the initial positions of all the agents. We call a strategy a finite sequence of movements executed by the agents. During each movement, starting at a specific time, an agent walks between two points belonging to the same network edge. A strategy is a convergecast strategy if the sequence of movements results in one agent getting the initial information of every agent. A strategy is a broadcast strategy if the sequence of movements results in all agents getting the initial information of the source agent. We consider two different versions of the problem: the decision problem, i.e., deciding if there exists a convergecast strategy or a broadcast strategy using power $P$ (where $P$ is part of the input of the problem), and the optimization problem, i.e., computing the smallest amount of power that is sufficient to achieve convergecast or broadcast.

In the distributed setting, the task of convergecast or broadcast must be approached individually by each agent. Each agent is unaware of the network, of its position in the network and of the positions (or even the presence) of any other agents. The agents are anonymous and they execute the same deterministic algorithm. Each agent has a very simple sensing device allowing it to detect the presence of other agents at its current location in the network. Each agent is also aware of the degree of the node at which it is located, as well as the port through which it enters a node, called an entry port. We assume that the ports of a node of degree $d$ are represented by integers $1, \dots, d$. Agents can meet at a node or inside an edge. When two or more agents meet at a node, each of them is aware of the direction from which the other agent is coming, i.e., the last entry port of each agent.

Since the measure of efficiency in this paper is the battery power (or the maximum distance traveled by an agent, which is proportional to the battery power used) we do not try to optimize the other resources (e.g. global execution time, local computation time, memory size of the agents, communication bandwidth, etc.). In particular, we conservatively suppose that, whenever two agents meet, they automatically exchange the entire information they hold (rather than the new information only). This information exchange procedure is never explicitly mentioned in our algorithms, supposing, by default, that it always takes place when a meeting occurs. The efficiency of a distributed solution is expressed by the competitive ratio, which is the worst-case ratio of the amount of power necessary to solve the convergecast or the broadcast problem by the distributed algorithm with respect to the amount of power computed by the optimal centralized algorithm, which is executed for the same agents’ initial positions.

It is easy to see that, in the optimal centralized solution for the case of the line and the tree, the original network may be truncated by removing some portions and leaving only the connected part of it containing all the agents (this way, all leaves of the remaining tree contain initial positions of agents). We make this assumption also in the distributed setting, since no finite competitive ratio is achievable if this condition is dropped. Indeed, two nearby anonymous agents inside a long line need to travel, in the worst case, a long distance to one of its endpoints in order to meet.

### 1.2 Related work

The rapidly developing network and computer industries have fueled research interest in mobile agent computing. Mobile agents are often interpreted as software agents, i.e., programs migrating from host to host in a network, performing some specific tasks. However, recent developments in computer technology bring up problems related to physical mobile devices. These include robots, motor vehicles and various wireless gadgets. Examples of agents also include living beings: humans (e.g., soldiers in the battlefield or disaster relief personnel) or animals (e.g., birds, swarms of insects).

In many applications the involved mobile agents are small and have to be produced at low cost in massive numbers. Consequently, in many papers, the computational power of mobile agents is assumed to be very limited and feasibility of some important distributed tasks for such collections of agents is investigated. For example [AA06] introduced population protocols, modeling wireless sensor networks by extremely limited finite-state computational devices. The agents of population protocols move according to some mobility pattern totally out of their control and they interact randomly in pairs. This is called passive mobility, intended to model, e.g., some unstable environment, like a flow of water, chemical solution, human blood, wind or unpredictable mobility of agents’ carriers (e.g. vehicles or flocks of birds). On the other hand, [SY] introduced anonymous, oblivious, asynchronous, mobile agents which cannot directly communicate, but they can occasionally observe the environment. Gathering and convergence [AOSY, CFPS, CP, C15], as well as pattern formation [DFSY, FPSW, SY, YS] were studied for such agents.

Apart from the feasibility questions for limited agents, optimization problems related to the efficient usage of agents' resources have also been investigated. Energy management of (not necessarily mobile) computational devices has been a major concern in recent research papers (cf. [Albers]). Fundamental techniques proposed to reduce power consumption of computer systems include power-down strategies (see [Albers, AIS, ISG]) and speed scaling (introduced in [YDS]). Several papers proposed centralized [Bunde, SL, YDS] or distributed [Albers, Ambuhl, AIS, ISG] algorithms. However, most of this research on power efficiency concerned optimization of the overall power used. Similarly to our setting, assigning charges to system components in order to minimize the maximal charge has the flavor of another important optimization problem, load balancing (cf. [Azar]).

In wireless sensor and ad hoc networks, power awareness has often been related to data communication via efficient routing protocols (e.g., [Ambuhl, SL]). However, in many applications of mobile agents (e.g., those involving actively mobile, physical agents) the agent's energy is mostly used for its mobility rather than for communication, since active moving often requires running some mechanical components, while communication mostly involves less energy-consuming electronic devices. Consequently, in most tasks involving moving agents, like exploration, searching or pattern formation, the distance traveled is the main optimization criterion (cf. [AH, AG, ABRS, BCR, BeRS, BlRS, DP, DKS, FGKP, MMS]). Single agent exploration of an unknown environment has been studied for graphs, e.g., [AH, DP], or geometric terrains, [BCR, BlRS].

While a single agent cannot explore a graph of unknown size unless pebble (landmark) usage is permitted (see [BFRSV]), a pair of robots are able to explore and map a directed graph of maximal degree in time with high probability (cf. [BS]). In the case of a team of collaborating mobile agents, the challenge is to balance the workload among the agents so that the time to achieve the required goal is minimized. However this task is often hard (cf. [FHK]), even in the case of two agents in a tree, [AB]. On the other hand, the authors of [FGKP] study the problem of agents exploring a tree, showing competitive ratio of their distributed algorithm provided that writing (and reading) at tree nodes is permitted.

Assumptions similar to ours have been made in [ABRS, BlRS, DKS], where the mobile agents are constrained to travel a fixed distance to explore an unknown graph [ABRS, BlRS] or tree [DKS]. In [ABRS, BlRS], a mobile agent has to return to its home base to refuel (or recharge its battery) so that the same maximal distance may repeatedly be traversed. [DKS] gives an 8-competitive distributed algorithm for a set of agents with the same amount of power exploring a tree starting at the same node.

The convergecast problem is sometimes viewed as a special case of the data aggregation question (e.g., [KEW, RV]) and it has been studied mainly for wireless and sensor networks, where the battery power usage is an important issue (cf. [KK, AGS]). Recently, [CJABL] considered the online and offline settings of the scheduling problem when data has to be delivered to mobile clients while they travel within the communication range of wireless stations. [KK] presents a randomized distributed convergecast algorithm for geometric ad-hoc networks and studies the trade-off between the energy used and the latency of convergecast. The broadcast problem for stationary processors has been extensively studied both for the message passing model, see e.g. [AGP], and for the wireless model, see e.g. [BGI]. To the best of our knowledge, the problem of the present paper, when the mobile agents perform convergecast or broadcast by exchanging the held information when meeting, while optimizing the maximal power used by a mobile agent, has never been investigated before.

### 1.3 Our results

In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. In fact, the respective decision problem is strongly NP-complete. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs.

In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no $(2-\epsilon)$-competitive algorithm for convergecast or for broadcast in the class of lines, for any $\epsilon > 0$.

The following table gives the summary of our results.

In Section 2, we show that we can restrict the search for the optimal strategy for convergecast or broadcast on the line to some smaller subclass of strategies called regular strategies. In Section 3, we present our centralized algorithms for convergecast and broadcast on lines. Section 4 is devoted to centralized convergecast and broadcast on trees and graphs. In Section 5, we investigate convergecast and broadcast in the distributed setting. Section 6 contains conclusions and open problems.

## 2 Regular strategies for convergecast and broadcast on lines

In this section, we show that if we are given a convergecast (respectively broadcast) strategy for some initial positions of agents on the line, then we can always modify it in order to get another convergecast (respectively broadcast) strategy, using the same amount of maximal power for every agent and satisfying some simple properties. Such strategies will be called regular. These observations permit us to restrict the search for the optimal strategy to a smaller and easier to handle subclass of strategies.

We order the agents according to their positions on the line. Hence we can assume w.l.o.g. that agent $a_i$, for $1 \le i \le n$, is initially positioned at point $Pos[i]$ of the line of length $\ell$, and that $Pos[1] \le Pos[2] \le \dots \le Pos[n]$. The set of points $Pos[1], \dots, Pos[n]$ will be called a configuration for the line of length $\ell$.

### 2.1 Regular carry strategies

Given a configuration, a starting point $s$, a target point $t$ ($s < t$), and an amount of power $P$, we want to know if there exists a strategy for the agents enabling them to move the information from $s$ to $t$ so that the amount of power spent by each agent is at most $P$. Strategies that move information from point $s$ to point $t$ will be called carry strategies for $(s,t)$. We restrict attention to configurations in which no agent is useless and carrying the information from $s$ to $t$ is not impossible for trivial reasons. A regular carry strategy for $(s,t)$ is the set of moves for the agents defined as follows: each agent $a_i$ first goes back to a point $b_i$, getting there the information from the previous agent (except $a_1$, which has to go to $s$), then it goes forward to a point $f_i$. Moreover, we require that each agent travels the maximal possible distance, i.e., it spends all its power.

###### Lemma 1.

If there exists a carry strategy for $(s,t)$, then there exist the following two regular carry strategies.

The pull strategy that can be computed iteratively (in linear time) starting with the last agent:

1. $f_n = t$, $b_n = \frac{Pos[n] + t - P}{2}$,

2. $f_i = b_{i+1}$ for $i < n$,

3. $b_i = \frac{Pos[i] + f_i - P}{2}$ for $i < n$.

The push strategy that can be computed iteratively (in linear time) starting with the first agent:

1. $b_1 = s$, $f_1 = 2s + P - Pos[1]$,

2. $b_i = f_{i-1}$ for $i > 1$,

3. $f_i = 2f_{i-1} + P - Pos[i]$ for $i > 1$.

###### Proof.

We first show that there exists a pull strategy. Consider with the minimum number of agents such that there exists a carry strategy, but no pull strategy. We consider the smallest value such that admits a carry strategy but no pull strategy.

If , then either , or . In the first case, cannot move the information between and , and then admits a carry strategy but not a pull strategy and has fewer agents. In the second case, is also a carry strategy for and there is no pull strategy for , contradicting our choice of .

Hence, we may suppose that . Since there exists a carry strategy , let be the first agent that reaches . The rightmost point where can move the information from is . Since is a carry strategy, when considering all the agents except , is a carry strategy for . By minimality of the number of agents, the pull strategy solves the subproblem on . Consequently, we can assume that is a pull strategy on . If , by minimality of , we have and thus is a pull strategy which is a contradiction. Hence, suppose that . Note that if , we can exchange the roles of and and we are in the previous case. Hence, suppose that and let be the interval that traverses with the information when is applied; by minimality of , and consequently we have , and thus . Consider now the strategy where we exchange the roles of and : gets the information from , gives it to , and goes to . More formally, let , , and . From our definition of and and the first part of the proof, there exists a carry strategy for . However, , contradicting the minimality of .

Consequently, if there exists a carry strategy for , then there exists a pull strategy on .

Now suppose that admits a carry strategy. From the first part of the proof, we know that it admits a pull strategy. The push strategy for can be obtained inductively from the pull strategy. Let for be the set of intervals that induces the pull strategy for Notice that for induces the pull strategy for By induction, there exists a set of intervals that induces a push strategy for with We define and Since we deduce that and therefore the set of intervals induces a push strategy for . ∎

###### Remark 1.

Note that the pull strategy is uniquely defined by a configuration, a target point $t$, and an amount of power $P$, and enables us to compute the smallest starting point $s$ such that the configuration admits a carry strategy for $(s,t)$.

Similarly, the push strategy is uniquely defined by a configuration, a starting point $s$, and an amount of power $P$, and enables us to compute the largest target $t$ such that the configuration admits a carry strategy for $(s,t)$.

Note that carry strategies are defined for a target $t$ larger than the starting point $s$. A carry strategy will be called reverse if the target $t$ is smaller than $s$, and all moves to the right are replaced by moves to the left and vice-versa.
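As a concrete illustration, the push strategy described in Lemma 1 can be sketched in code, under the assumption that each agent walks back to a pickup point and then carries the information forward, spending all of its power $P$. The function name, the pair representation of moves, and the None-on-failure convention are ours, not the paper's.

```python
# Hedged sketch of the push strategy of Lemma 1: agent i walks back to a
# pickup point b_i (never past the information), then forward to f_i,
# spending all of its power P.

def push_points(pos, s, P):
    """pos: sorted agent positions, with s <= pos[0]; returns the list of
    (b_i, f_i) pairs, or None if some agent cannot reach the information."""
    points = []
    carry = s                     # rightmost point reached by the information
    for x in pos:
        b = min(carry, x)         # no need to walk right of the information
        if x - b > P:             # the agent cannot even fetch the information
            return None
        f = b + (P - (x - b))     # spend the remaining power going forward
        points.append((b, f))
        carry = f
    return points

# Two agents at 1 and 3, information starting at s = 0, power P = 3:
# agent 1 fetches at 0 and carries to 2; agent 2 fetches at 2 and carries to 4.
print(push_points([1, 3], 0, 3))   # [(0, 2), (2, 4)]
```

Note that the forward point satisfies $f_i = 2b_i + P - Pos[i]$, which is exactly the recurrence reappearing in the reachability computations of Section 3.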

### 2.2 Regular convergecast strategies

We now define the notion of a regular convergecast strategy for a configuration on the segment $[0,\ell]$, using power at most $P$. Without loss of generality, we suppose that $Pos[1] = 0$ and $Pos[n] = \ell$. Intuitively, a regular convergecast strategy divides the set of all agents into the set $L$ of left agents and the set $R$ of right agents such that left agents execute a push strategy from $0$ and right agents execute a reverse push strategy from $\ell$.

More formally, a regular convergecast strategy is given by a partition of the agents into two sets $L$ and $R$, and by two points of the segment for each agent, such that

1. if , and ,

2. if , and ,

3. .

Suppose that we are given a partition of the agents into two disjoint sets and and values for each agent satisfying conditions (1)-(3). Then the following moves define a regular convergecast strategy: first, every agent moves to ; subsequently, every agent in moves to once it learns the initial information of ; then, every agent in moves to once it learns the initial information of . Let be an agent from such that is maximum. Once has moved to , it knows the initial information of all the agents such that . If , convergecast is achieved. Otherwise, since , we know that there exists an agent such that . When reaches it knows the initial information of all the agents such that and thus, and know the initial information of all agents, which accomplishes convergecast.

The following lemma shows that we can restrict attention to regular convergecast strategies.

###### Lemma 2.

If there exists a convergecast strategy for a configuration using power at most $P$, then there exists a regular convergecast strategy for the configuration using power at most $P$.

###### Proof.

Consider a convergecast strategy for a configuration using power at most . Suppose that convergecast occurred at time at some point . If an agent does not get the initial information of , then at time it must have been in the segment . Hence, by time , it must have learned the initial information of . It follows that every agent , for , must learn either the initial information of agent or of . Therefore, we can partition the set of agents performing a convergecast strategy into two subsets and , such that each agent learns the initial information of agent before learning the initial information of agent (or not learning at all the information of ). All other agents belong to . We denote by the interval of all points visited by and by the interval of points visited by .

Let and . Since is a convergecast strategy, we have . Observe that the agents in move the initial information of from to and that the agents in move the initial information of from to . From Lemma 1, we can assume that the agents in (resp. ) execute a push strategy (resp. a reverse push strategy) and thus conditions (1)-(3) hold.

Suppose now that there exists an agent such that . Let and ; note that and . Consider the strategy where we exchange the roles of and , i.e., we put and . Let , , and .

If , then . If , then . In both cases, we still have a convergecast strategy.

If and , then , and . Consequently, we still have a convergecast strategy.

Applying this exchange a finite number of times, we get a regular convergecast strategy. ∎

We now define the notion of a regular broadcast strategy for a configuration with a specified source agent, on the segment $[0,\ell]$, using power at most $P$. Without loss of generality, we suppose that $Pos[1] = 0$ and $Pos[n] = \ell$. Intuitively, a regular broadcast strategy divides the set of all agents into the set $L$ of left agents and the set $R$ of right agents such that left agents execute a reverse pull strategy from the source's position and right agents execute a pull strategy from the source's position.

More formally, a regular broadcast strategy is given by points of segment defined for each agent such that

1. , ,

2. if , and ,

3. if , and ,

4. and

Suppose that we are given points for each agent , satisfying conditions (1)-(4). Then the following moves define a regular broadcast strategy: initially every agent moves to . Once learns the source information, moves to . Since (1)-(4) hold, this is a broadcast strategy and the maximum amount of power spent is at most .

Before proving that it is enough to only consider regular broadcast strategies, we need to prove the following technical lemma.

###### Lemma 3.

There exists a broadcast strategy for a configuration if and only if for every , there exist positions such that

1. for each ,

2. ;

3. for each , .

4. for each , if (resp. ), there exists such that and (resp. ).

###### Proof.

Consider a broadcast strategy where the maximum amount of power spent is . For every agent , let be the position where learns the information that has to be broadcast, and let (resp. ) be the leftmost (resp. rightmost) position reached by once it got the information. By definition of , (1) and (2) hold. Since the maximum amount of power spent by an agent is at most , and since the agent has to go from to and then to and , (3) holds. Since every agent learns the information, for every agent , either , or meets an agent in such that already has the information. Assume that (the other case is symmetric). If , then (4) holds for . Suppose now that and let be the non-empty set of agents such that and learns the information before . Let be the agent that is first to learn the information. Since , learns the information from an agent that does not belong to . Consequently, and thus . Thus (4) holds for .

Conversely, if we are given values satisfying (1)-(4), we can exhibit a strategy for broadcast: initially every agent moves to . Once learns the information, if , then moves to and to and if , then moves to and to . Since (4) holds, this is a broadcast strategy and since (3) holds, the maximum amount of power spent is at most . ∎

The following lemma shows that we can restrict attention to regular broadcast strategies.

###### Lemma 4.

If there exists a broadcast strategy for a configuration with a given source agent, using power at most $P$, then there exists a regular broadcast strategy for the configuration with the same source agent, using power at most $P$.

###### Proof.

Suppose that there exists a broadcast strategy for . For every agent , we define as in the definition of a regular broadcast strategy. Note that the agents execute a reverse pull strategy between and . Similarly, the agents execute a pull strategy between and . By Remark 1, it means that there exists (resp. ) such that reaches (resp. ) with the information from . Moreover, since the agents execute either a reverse pull strategy or a pull strategy, we have , and .

Suppose the lemma does not hold. This means that , and . Consequently, cannot reach both and , i.e., there exists such that reaches , or there exists such that reaches . If , it implies that , and consequently, there cannot exist a broadcast strategy since there is no carry strategy on . Consequently, we can assume that . Using a similar argument we can also assume that .

Among all broadcast strategies, consider the strategy that minimizes the size of . Without loss of generality, assume that does not reach , and let such that reaches . For each agent , let be defined as in Lemma 3. Note that and . Moreover, .

Consider the new strategy defined as follows: for each agent , let and ; let , and ; let and . Note that and . Since , this is still a broadcast strategy, in view of Lemma 3. However, in this new strategy, there is one agent less in than in , contradicting the choice of our strategy.

Consequently, either , or and the lemma holds. ∎

## 3 Centralized convergecast and broadcast on lines

### 3.1 Centralized convergecast on lines

In this section we consider the centralized convergecast problem for lines. We give an optimal, linear-time, deterministic centralized algorithm, computing the optimal amount of power needed to solve convergecast for line networks and we provide a regular convergecast strategy for this amount of power. As the algorithm is quite involved, we start by observing some properties of the optimal strategies.

#### 3.1.1 Properties of a convergecast strategy

In the following, we only consider regular convergecast strategies. Note that a regular convergecast strategy is fully determined by the value of $P$ and by the partition of the agents into the two sets $L$ and $R$. For $1 \le i \le n$, we denote by $Reach^{c}_{LR}(i,P)$ the rightmost point on the line to which the set of agents at initial positions $Pos[1], \dots, Pos[i]$, each having power $P$, may transport their total information. Similarly, $Reach^{c}_{RL}(i,P)$ is the leftmost such point for the agents at positions $Pos[i], \dots, Pos[n]$.

Lemma 2 permits us to construct a linear-time decision procedure verifying if a given amount of battery power $P$ is sufficient to design a convergecast strategy for a given configuration of agents. We first compute the two lists of values $Reach^{c}_{LR}(i,P)$, for $1 \le i \le n$, and $Reach^{c}_{RL}(i,P)$, for $1 \le i \le n$. Then we scan them to determine if there exists an index $i$ such that $Reach^{c}_{LR}(i,P) \ge Reach^{c}_{RL}(i+1,P)$. In such a case, we set $L = \{a_1, \dots, a_i\}$ and $R = \{a_{i+1}, \dots, a_n\}$ and we apply Lemma 2 to obtain a regular convergecast strategy in which agents $a_i$ and $a_{i+1}$ meet and exchange their information, which at this time is the entire initial information of the set of agents. If there is no such index $i$, no convergecast strategy is possible. This implies

###### Corollary 1.

In time $O(n)$ we can decide if a configuration of $n$ agents on the line, each having a given maximal power $P$, can perform convergecast.
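A hedged sketch of this decision procedure follows, assuming the reach values obey the recurrence used in Lemma 5 (an agent walks back to the current reach point, at distance at most $P$, and then carries the information forward with its remaining power). The helper names are ours, and $-\infty$ marks agents that can never obtain the information.

```python
def reach_LR(pos, P):
    """reach[i] = rightmost point to which agents 0..i can carry their
    joint information, each using power at most P (-inf if impossible)."""
    reach, R = [], None
    for x in pos:
        if R is None:                 # the first agent holds the information
            R = x + P
        elif R >= x - P:              # the agent can walk back to the information
            R = 2 * min(R, x) + P - x
        else:                         # the information can never reach this agent
            R = float("-inf")
        reach.append(R)
    return reach

def reach_RL(pos, P):
    """Leftmost point to which agents i..n-1 can carry their information:
    solve the mirrored instance and flip the signs back."""
    mirrored = reach_LR([-x for x in reversed(pos)], P)
    return [-r for r in reversed(mirrored)]

def can_convergecast(pos, P):
    """Decide whether power P suffices for convergecast on the line."""
    if len(pos) == 1:
        return True
    L, R = reach_LR(pos, P), reach_RL(pos, P)
    return any(L[i] >= R[i + 1] for i in range(len(pos) - 1))

print(can_convergecast([0, 1, 9], 3))   # False: the two groups cannot meet
print(can_convergecast([0, 1, 9], 4))   # True: they can meet at point 5
```

Both lists are computed in one left-to-right and one right-to-left pass, so the whole procedure is linear in the number of agents, matching Corollary 1.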

The remaining lemmas of this subsection bring up observations needed to construct an algorithm finding the optimal power and designing an optimal convergecast strategy.

Note that if the agents are not given enough power, then it can happen that some agent may never learn the information from (resp. from ). In this case, cannot belong to (resp. ). We denote by the minimum amount of power needed to ensure that can learn the information from : if , . Similarly, we have .

Given a strategy using power , for each agent , we have and either , or . In the first case, , while in the second case, .

We define threshold functions and that compute, for each index , the minimal amount of power ensuring that agent does not go back when (respectively ), i.e., such that (respectively ). For each , let and . Clearly, .

The next lemma shows how to compute and if we know and for every agent .

###### Lemma 5.

Consider an amount of power and an index . If , then . Similarly, if , then .

###### Proof.

We prove the first statement of the lemma; the proof of the other statement is similar. We first show the following claim.

Claim. If for every , , then

$$Reach^{c}_{LR}(q,P) = 2^{q-p}\,Reach^{c}_{LR}(p,P) + \left(2^{q-p}-1\right)P - \sum_{i=p+1}^{q} 2^{q-i}\,Pos[i].$$

We prove the claim by induction on $q$. Note that for $q = p$ the sum is empty and the formula reduces to $Reach^{c}_{LR}(p,P)$, so the statement holds in this case. Suppose now that $q > p$. Since $q - 1 \ge p$, by the induction hypothesis, we have

$$Reach^{c}_{LR}(q-1,P) = 2^{q-1-p}\,Reach^{c}_{LR}(p,P) + \left(2^{q-1-p}-1\right)P - \sum_{i=p+1}^{q-1} 2^{q-1-i}\,Pos[i].$$

Consequently, we have

$$\begin{aligned} Reach^{c}_{LR}(q,P) &= 2\,Reach^{c}_{LR}(q-1,P) + P - Pos[q] \\ &= 2^{q-p}\,Reach^{c}_{LR}(p,P) + \left(2^{q-p}-2\right)P - \sum_{i=p+1}^{q-1} 2^{q-i}\,Pos[i] + P - Pos[q] \\ &= 2^{q-p}\,Reach^{c}_{LR}(p,P) + \left(2^{q-p}-1\right)P - \sum_{i=p+1}^{q} 2^{q-i}\,Pos[i]. \end{aligned}$$

This concludes the proof of the claim.

If the hypothesis of the lemma holds, then the claim applies for each $i$ with $p < i \le q$, and $Reach^{c}_{LR}(p,P) = Pos[p] + P$. Consequently,

$$Reach^{c}_{LR}(q,P) = 2^{q-p}\,Pos[p] + \left(2^{q-p+1}-1\right)P - \sum_{i=p+1}^{q} 2^{q-i}\,Pos[i].$$
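As a quick numerical sanity check, unrolling the one-step recurrence $Reach^{c}_{LR}(q,P) = 2\,Reach^{c}_{LR}(q-1,P) + P - Pos[q]$ from the base value $Reach^{c}_{LR}(p,P) = Pos[p] + P$ reproduces the closed form above. The function names and the 0-based indexing are our assumptions.

```python
# Compare the unrolled recurrence with the closed form from the proof,
# for agents a_p, ..., a_q (0-based indices into pos).

def reach_by_recurrence(pos, p, q, P):
    r = pos[p] + P                      # base case: Reach(p, P) = Pos[p] + P
    for i in range(p + 1, q + 1):
        r = 2 * r + P - pos[i]          # Reach(i) = 2*Reach(i-1) + P - Pos[i]
    return r

def reach_closed_form(pos, p, q, P):
    s = sum(2 ** (q - i) * pos[i] for i in range(p + 1, q + 1))
    return 2 ** (q - p) * pos[p] + (2 ** (q - p + 1) - 1) * P - s

pos = [1, 2, 5, 7]
print(reach_by_recurrence(pos, 0, 3, 3))   # 28
print(reach_closed_form(pos, 0, 3, 3))     # 28
```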

In the following, we denote and .

###### Remark 2.

For every , we have .

We now show that for an optimal convergecast strategy, the last agent of $L$ and the first agent of $R$ meet at some point between their initial positions, and that they need to use all the available power to meet.

###### Lemma 6.

Suppose there exists an optimal convergecast strategy for a configuration , where the maximum power used by an agent is . Then, there exists an integer such that

Moreover, , and , .

###### Proof.

In the proof we need the following claim.

Claim. For every , the function which assigns the value for any argument , is an increasing, continuous, piecewise linear function with at most pieces on .

For every , the function which assigns the value for any argument , is a decreasing continuous piecewise linear function with at most pieces on .

We prove the first statement of the claim by induction on . For ,