Arbitrary Pattern Formation by Asynchronous Opaque Robots
The Arbitrary Pattern Formation problem asks for a distributed algorithm that moves a set of autonomous mobile robots so that they form any arbitrary pattern given as input. The robots are assumed to be autonomous, anonymous and identical. They operate in Look-Compute-Move cycles under a fully asynchronous scheduler. The robots do not have access to any global coordinate system. The existing literature that investigates this problem considers robots with unobstructed visibility. This work considers the problem in the more realistic obstructed visibility model, where the view of a robot can be obstructed by the presence of other robots. The robots are assumed to be punctiform and equipped with visible lights that can assume a constant number of predefined colors. We study the problem in two settings based on the level of consistency among the local coordinate systems of the robots: two axis agreement (the robots agree on the direction and orientation of both coordinate axes) and one axis agreement (they agree on the direction and orientation of only one coordinate axis). In both settings, we provide a full characterization of the initial configurations from which any arbitrary pattern can be formed.
Keywords: Distributed algorithm · Arbitrary pattern formation · Leader election · Autonomous robots · Opaque robots · Luminous robots · Obstructed visibility · Asynchronous scheduler · Look-compute-move cycle.
One of the recent trends of research in robotics is to use a swarm of simple and inexpensive robots to collaboratively execute complex tasks, as opposed to using one or a few powerful and expensive robots. Robot swarms offer several advantages over single-robot systems, such as scalability, robustness and versatility. Algorithmic aspects of decentralized coordination of robot swarms have been extensively studied in the literature over the last two decades. In theoretical studies, the traditional framework models the robot swarm as a set of autonomous, anonymous and identical computational entities freely moving in the plane. The robots do not have access to any global coordinate system. Each robot is equipped with sensor capabilities to perceive the positions of other robots. If the robots are equipped with camera sensors, then it is unrealistic to assume that each robot can observe all other robots in the swarm, as the line of sight of a robot can be obstructed by the presence of other robots. This setting is known as the opaque robot or obstructed visibility model, where it is assumed that a robot is able to see another robot if and only if no other robot lies on the line segment joining them.
Arbitrary Pattern Formation, or APF for short, is a fundamental coordination problem in swarm robotics where the robots are required to form any specific but arbitrary geometric pattern given as input. In this work, we study the problem in the obstructed visibility model. The majority of the literature that studies this problem considers robots with unobstructed visibility. Two recent works [13, 23] investigated formation problems in the obstructed visibility model. One of them studied the Uniform Circle Formation problem, where the robots are required to form a circle by positioning themselves on the vertices of a regular polygon. Their approach is to first solve the Mutual Visibility problem as a subroutine, in which the robots arrange themselves in a configuration where each robot can see all other robots, and then to solve the original problem from a mutually visible configuration. The more general Arbitrary Pattern Formation problem was first studied in the obstructed visibility model by Vaidyanathan et al. Their algorithm first solves Mutual Visibility and then elects a leader by a probabilistic method. In this work, our aim is to provide deterministic solutions. For robots with obstructed visibility and only partial agreement on the coordinate system, deterministic Leader Election is difficult and of independent interest. Also, Leader Election and APF are deterministically unsolvable from some symmetric configurations. Therefore, trying to first solve Mutual Visibility may create new symmetries, from which APF is deterministically unsolvable. Hence, if we want to first bring the robots to a mutually visible configuration, then we have to design an algorithm that does not create such symmetries. The existing algorithms in the literature for Mutual Visibility do not have this feature. In both of these works, the robots are assumed to be luminous, i.e., equipped with persistent visible lights that can assume a constant number of predefined colors.
1.1 Earlier Works
The study of Arbitrary Pattern Formation was initiated by Suzuki and Yamashita in [21, 22]. In these papers, a complete characterization of the class of formable patterns was provided for autonomous and anonymous robots with an unbounded amount of memory. The problem was first studied in the weak setting of oblivious and asynchronous robots by Flocchini et al. They showed that if the robots have no common agreement on the coordinate system, they cannot form an arbitrary pattern. If the robots have one axis agreement, then any odd number of robots can form an arbitrary pattern, but an even number of robots cannot, in the worst case. If the robots agree on both x and y axes, then any set of robots can form any pattern. They also proved that whenever it is possible to form any pattern, it is possible to elect a leader. In [11, 12], the authors studied the relationship between Arbitrary Pattern Formation and Leader Election. They proved that any arbitrary pattern can be formed from any initial configuration in which leader election is possible. More precisely, their algorithms work for four or more robots with chirality and for at least five robots without chirality. Combined with the earlier result, it follows that Arbitrary Pattern Formation and Leader Election are equivalent, i.e., it is possible to solve Arbitrary Pattern Formation for four or more robots with chirality (resp. five or more robots without chirality) if and only if Leader Election is solvable. In [7, 15], the problem was studied allowing the pattern to have multiplicities. Recently, the case of four robots was fully characterized in the asynchronous setting, with and without chirality: the authors proposed a new geometric invariant that exists in any configuration of four robots and, using this invariant, presented an algorithm that forms any target pattern from any solvable initial configuration. The problem of forming a sequence of patterns in a given order has also been studied, as have randomized algorithms for pattern formation.
In [8, 15], the so-called Embedded Pattern Formation problem was studied, where the pattern to be formed is provided as a set of fixed and visible points in the plane. The problem has also been considered in a grid-based terrain, where the movements of the robots are restricted along grid lines and to a unit distance in each step; there, it was shown that a set of fully asynchronous robots without any agreement on the coordinate system can form any arbitrary pattern if their starting configuration is asymmetric.
All the aforementioned works considered robots with unlimited and unobstructed visibility. The problem has also been studied in the limited, but unobstructed, visibility setting. In obstructed visibility, the Gathering problem has been studied for fat robots in the plane, and for point robots in three-dimensional space. A related problem in the obstructed visibility model is the Mutual Visibility problem, in which, starting from an arbitrary configuration, the robots have to reposition themselves to a configuration where every robot can see all other robots in the team. The problem has been extensively studied in the literature under various settings [10, 20, 19, 4, 18, 1]. Arbitrary Pattern Formation in the obstructed visibility model was first studied only recently, where the authors proved runtime bounds in terms of the time required to solve Leader Election. However, they did not provide any deterministic solution for Leader Election, which is yet to be studied in the literature in the obstructed visibility model.
1.2 Our Contribution
We study the Arbitrary Pattern Formation (APF) problem for a system of opaque and luminous robots in a fully asynchronous setting. We show that the problem can be solved from any initial configuration if the robots agree on the direction and orientation of both x and y axes. If the robots agree on the direction and orientation of only the x-axis, APF is unsolvable when the initial configuration has a reflectional symmetry with respect to a line ℓ which is parallel to the x-axis and has no robots lying on it. The same result holds even if the robots have unobstructed visibility. From all other initial configurations, APF is solvable. Our algorithms require 3 and 6 colors respectively for two axis and one axis agreement.
The paper is organized as follows. In Section 2, the robotic model under study is presented. In Section 3, the formal definition of the problem is given along with some basic notations and terminology. In Section 4, we present the algorithm for APF under one axis agreement. In Section 5, the main results of the paper are presented along with formal proofs.
2 Robot Model
A set of mobile computational entities, called robots, are initially placed at distinct points in the Euclidean plane. Each robot can move freely in the plane. The robots are assumed to be:
anonymous: they have no unique identifiers that they can use in a computation,
identical: they are indistinguishable by their physical appearance,
autonomous: there is no centralized control,
and they all execute the same deterministic algorithm.
The robots are modeled as points in the plane, i.e., they do not have any physical extent. The robots do not have access to any global coordinate system. Each robot is provided with its own local coordinate system centered at its current position, and its own notion of unit distance and handedness. However, the robots may have an a priori agreement about the direction and orientation of the axes of their local coordinate systems. Based on this, we consider the following two models.
- Two axis agreement:
They agree on the direction and orientation of both axes.
- One axis agreement:
They agree on the direction and orientation of only one axis.
The opaque robot or obstructed visibility model assumes that the visibility of a robot can be obstructed by the presence of other robots. We assume that the robots have unlimited but obstructed visibility: each robot is equipped with a 360° vision camera sensor that enables it to take snapshots of the entire plane, but its vision can be obstructed by the presence of other robots. Formally, a point p in the plane is visible to a robot r if and only if the line segment joining r and p does not contain any other robot. Hence, two robots r and r' are able to see each other if and only if no other robot lies on the line segment joining them.
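This visibility rule reduces to a collinearity test. The following is a minimal sketch (function names are ours, not from the paper) that checks whether two robots can see each other in a given configuration, assuming exact coordinates with a small tolerance:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def on_open_segment(p: Point, a: Point, b: Point, eps: float = 1e-9) -> bool:
    """True iff point p lies strictly inside the open segment (a, b)."""
    ax, ay = a; bx, by = b; px, py = p
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    if abs(cross) > eps:
        return False  # p is not collinear with a and b
    dot = (px - ax) * (bx - ax) + (py - ay) * (by - ay)
    length_sq = (bx - ax) ** 2 + (by - ay) ** 2
    return eps < dot < length_sq - eps  # strictly between the endpoints

def can_see(a: Point, b: Point, robots: List[Point]) -> bool:
    """Robots a and b see each other iff no third robot lies on segment ab."""
    return not any(on_open_segment(r, a, b)
                   for r in robots if r != a and r != b)
```

For example, with three collinear robots the middle one blocks the outer two, while a robot off the line remains visible to all.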
This paper studies the pattern formation problem in the robots with lights or luminous robot model, introduced by Peleg. In this model, each robot is equipped with a visible light which can assume a constant number of predefined colors. The lights serve both as a weak explicit communication mechanism and as a form of internal memory. We denote the set of colors available to the robots by C. Notice that a robot having a light with only one possible color has the same capability as one with no light. Therefore, the luminous robot model generalizes the classical model.
The robots, when active, operate according to the so-called LOOK-COMPUTE-MOVE (LCM) cycle. In each cycle, a previously idle or inactive robot wakes up and executes the following steps.
- LOOK: The robot takes a snapshot of the current configuration, i.e., it obtains the positions, expressed in its local coordinate system, of all robots visible to it, along with their respective colors. The robot also knows its own color.
- COMPUTE: Based on the perceived configuration, the robot performs computations according to a deterministic algorithm to decide a destination point x (expressed in its local coordinate system) and a color c ∈ C. As mentioned earlier, the deterministic algorithm is the same for all robots.
- MOVE: The robot sets its light to c and moves towards the point x.
After executing a LOOK-COMPUTE-MOVE cycle, a robot becomes inactive. Then, after some finite time, it wakes up again to perform another LOOK-COMPUTE-MOVE cycle. Notice that after a robot sets its light to a particular color in the MOVE phase of a cycle, it maintains that color until the MOVE phase of its next LCM cycle. The robots are oblivious in the sense that, when a robot transitions from one LCM cycle to the next, all of its local memory (past computations and snapshots) is erased, except for the color of its light.
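The cycle structure, and the fact that only the light persists across cycles, can be sketched as follows; the types and names here are illustrative, not from the paper:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]
Snapshot = List[Tuple[Point, str]]  # visible positions (local coords) with colors

@dataclass
class Robot:
    position: Point
    light: str = "off"  # the only state that survives between LCM cycles

def lcm_cycle(robot: Robot,
              look: Callable[[Robot], Snapshot],
              compute: Callable[[Snapshot, str], Tuple[Point, str]]) -> None:
    """One LOOK-COMPUTE-MOVE cycle of an oblivious luminous robot."""
    snapshot = look(robot)                               # LOOK
    destination, color = compute(snapshot, robot.light)  # COMPUTE (deterministic)
    robot.light = color                                  # MOVE: set light, then move
    robot.position = destination
    # snapshot and destination go out of scope here: the robot keeps no
    # memory of them in its next cycle, only the light remains.
```

The `compute` callback stands for the common deterministic algorithm; it may only use the snapshot and the robot's own color.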
Based on the activation and timing of the robots, there are three types of schedulers considered in literature.
- Fully synchronous:
In the fully synchronous setting (FSync), time can be logically divided into global rounds. In each round, all the robots are activated. They take the snapshots at the same time, and then perform their moves simultaneously.
- Semi-synchronous:
The semi-synchronous setting (SSync) coincides with the FSync model, with the only difference that not all robots are necessarily activated in each round. However, every robot is activated infinitely often.
- Fully asynchronous:
The fully asynchronous setting (ASync) is the most general model. The robots are activated independently, and each robot executes its cycles independently. The amount of time spent in the LOOK, COMPUTE, MOVE and inactive states is finite but unbounded, unpredictable and not necessarily the same for different robots. As a result, the robots do not have a common notion of time. Moreover, a robot can be seen while moving, and hence computations can be made based on obsolete information about positions. Also, the configuration perceived by a robot during the LOOK phase may change significantly before it makes its move, and therefore the move may cause a collision.
The scheduler that controls the activations and the durations of the operations can be thought of as an adversary. In the Non-Rigid movement model, the adversary also has the power to stop the movement of a robot before it reaches its destination. However, there exists a fixed δ > 0 such that each robot traverses at least the distance δ in every move, unless its destination is closer than δ. This restriction imposed on the adversary is necessary, because otherwise the adversary could stop a robot after it moves a distance of 2^-i in its i-th attempt, and thus never allow any robot to traverse a total distance of more than 1. In the Rigid movement model, each robot is able to reach its desired destination without any interruption. In this paper, the robots are assumed to have Rigid movements. The adversary also has the power to choose the local coordinate systems of the individual robots (obeying the agreement assumptions). However, for each robot, the local coordinate system set by the adversary when it is first activated remains unchanged. In other words, the local coordinate systems of the robots are persistent. Of course, in any COMPUTE phase, if instructed by the algorithm, a robot can logically define a different coordinate system based on the snapshot taken, and transform the coordinates of the positions of the observed robots into the new coordinate system. This coordinate system is obviously not retained by the robot in the next LCM cycle, since it is oblivious.
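The necessity of the minimum-progress guarantee can be checked with a one-line computation. If the adversary may halt moves arbitrarily, it can stop the i-th movement attempt after distance 2^-i (one illustrative choice of halting distances), so the total distance ever traversed stays below 1, and a destination at distance 1 is never reached:

```python
def total_distance(attempts: int) -> float:
    """Total distance traversed if the adversary halts the robot's i-th
    movement attempt after distance 2**-i; the geometric sum stays below 1
    no matter how many attempts are made."""
    return sum(2.0 ** -i for i in range(1, attempts + 1))
```

With a fixed δ > 0, by contrast, a destination at distance d is reached after at most ⌈d/δ⌉ activations.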
3.1 Definitions and Notations
We assume that a set of n robots are placed at distinct positions in the Euclidean plane. For any time t, C(t) will denote the configuration of the robots at time t. For a robot r, its position at time t will be denoted by r(t). When there is no ambiguity, r will represent both the robot and the point in the plane occupied by it. By u_r, we shall denote the unit distance according to the local coordinate system of r. At any time t, r.light(t), or simply r.light, will denote the color of the light of r at time t.
Suppose that a robot r, positioned at point x, takes a snapshot at time t. Based on this snapshot, suppose that the deterministic algorithm run in the COMPUTE phase instructs the robot to change its color (Case 1), move to a different point (Case 2), or both (Case 3). In Case 1, assume that it changes its color at time t' > t. In Case 2, assume that it starts moving at time t' > t. Note that when we say that it starts moving at t', we mean that r(t') = x, but r(t' + ε) ≠ x for sufficiently small ε > 0. For Case 3, assume that r changes its color at t' and starts moving at t'' > t'. Then we say that r has a pending move at time T if t < T < t' in Cases 1 and 2, or t < T < t'' in Case 3. A robot r is said to be stable at time t if r is stationary and has no pending move at t. A configuration C(t) is said to be a stable configuration if every robot is stable at t. A configuration C(t) is said to be a final configuration if
every robot is stable at t,
any robot taking a snapshot at t will not decide to move or to change its color.
With respect to the local coordinate system of a robot, the positive and negative directions of the x-axis will be referred to as right and left respectively, while the positive and negative directions of the y-axis will be referred to as up and down respectively. For a robot r, L_V(r) and L_H(r) are respectively the vertical and the horizontal line passing through r. We denote by H_O^U(r) (resp. H_C^U(r)) and H_O^B(r) (resp. H_C^B(r)) the upper and bottom open (resp. closed) half-planes delimited by L_H(r), respectively. Similarly, H_O^L(r) (resp. H_C^L(r)) and H_O^R(r) (resp. H_C^R(r)) are the left and right open (resp. closed) half-planes delimited by L_V(r), respectively. For two robots r and r', we define S_O(r, r') (resp. S_C(r, r')) as the horizontal open (resp. closed) strip delimited by L_H(r) and L_H(r'). For a robot r and a straight line L passing through it, r will be called non-terminal on L if it lies between two other robots on L, and otherwise it will be called terminal on L.
3.2 Problem Definition
A swarm of n robots is arbitrarily deployed at distinct positions in the Euclidean plane. Initially, the lights of all the robots are set to a specific color called off. Each robot is given a pattern P as input, which is a list of n distinct elements from R^2. Without loss of generality, we assume that
the elements in P are arranged in (ascending) dictionary order, i.e., (x_i, y_i) precedes (x_j, y_j) iff either x_i < x_j, or x_i = x_j and y_i < y_j,
for each element in .
The goal of Arbitrary Pattern Formation, or APF in short, is to design a distributed algorithm guaranteeing that there is a time T such that
C(T) is a final configuration,
the lights of all the robots at T are set to the same color,
C(T) can be obtained from P by a sequence of translation, reflection, rotation and uniform scaling operations,
at any time t, no two robots occupy the same position in the plane, i.e., the movements are collision-free.
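The third condition is a similarity test on point sets. One way to check it, sketched here under the assumption of exact arithmetic with a small tolerance (all names are ours), is to model points as complex numbers: translation is removed by centering at the centroid, rotation plus uniform scaling corresponds to multiplication by a nonzero complex number, and reflection corresponds to conjugation:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def _normalize(points: List[Point]) -> List[complex]:
    zs = [complex(x, y) for x, y in points]
    c = sum(zs) / len(zs)              # move the centroid to the origin
    return [z - c for z in zs]

def _match(a: List[complex], b: List[complex], eps: float = 1e-9) -> bool:
    """Is there a nonzero w with w * b == a as multisets?"""
    ref = next((z for z in a if abs(z) > eps), None)
    if ref is None:                    # all points of a coincide with the centroid
        return all(abs(z) <= eps for z in b)
    for z in b:
        if abs(z) <= eps:
            continue
        w = ref / z                    # candidate rotation + scaling
        image = [w * y for y in b]
        used = [False] * len(a)
        ok = True
        for p in image:
            hit = next((i for i, q in enumerate(a)
                        if not used[i] and abs(p - q) <= eps), None)
            if hit is None:
                ok = False
                break
            used[hit] = True
        if ok:
            return True
    return False

def similar(A: List[Point], B: List[Point]) -> bool:
    """True iff B maps onto A by translation, rotation, uniform scaling,
    and possibly a reflection."""
    if len(A) != len(B):
        return False
    if not A:
        return True
    a, b = _normalize(A), _normalize(B)
    return _match(a, b) or _match(a, [y.conjugate() for y in b])
```

Each candidate transformation is obtained by aligning one nonzero centered point of B with a fixed nonzero centered point of A, so only O(n) candidates need to be tested.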
4 Arbitrary Pattern Formation under One Axis Agreement
In this section, we shall discuss APF under the one axis agreement model in ASync. We assume that the robots agree on the direction and orientation of only the x-axis. Before presenting the algorithm, we shall discuss some basic limitations of this model.
APF is unsolvable in SSync if the initial configuration has a reflectional symmetry with respect to a line ℓ which is parallel to the x-axis, with no robots lying on ℓ. The same result holds even if the robots have unobstructed visibility.
Proof. Assume that the initial configuration has a reflectional symmetry with respect to a horizontal line ℓ and there are no robots lying on ℓ. Let H_1 and H_2 be the two open half-planes delimited by ℓ. In the initial configuration, the set of robots can be partitioned into disjoint pairs (r, r'), where r is in H_1 and r' is its specular partner in H_2. For each pair, suppose that the adversary sets the y-axes of their local coordinate systems in opposite directions (see Fig. 1(a)). Suppose that at each round, the adversary activates exactly one pair of specular robots. In the first round, suppose r and r' are activated. Assume that r decides to move to a point p.
Case 1: Let p ∈ ℓ. Since p is its own mirror image with respect to ℓ, r' also decides to move to p, as the two robots have the same view, the same color, and execute the same deterministic algorithm. Hence, they will collide at p.
Case 2: Let p ∈ H_2. By the same logic, r' will decide to move to p', the mirror image of p with respect to ℓ. The two trajectories cross ℓ at the same point, so if the adversary makes both robots move at the same time with the same speed, they will collide on ℓ (see Fig. 1(b)).
Case 3: Therefore, the only possible case is p ∈ H_1. Again, r' will decide to move to p', the mirror image of p with respect to ℓ. Also, if they change their lights, both will set the same color. Hence, after the move, the configuration remains symmetric with respect to ℓ, with ℓ containing no robots and the specular partners having the same colors (see Fig. 1(c)). Therefore, the adversary can keep the configuration symmetric forever, and it is easy to see that it is impossible to form an arbitrary pattern, for example, an asymmetric pattern. Hence, APF is unsolvable. Obviously, the same result holds even if the robots have unobstructed visibility. ∎
APF is unsolvable in ASync if the initial configuration has a reflectional symmetry with respect to a line ℓ which is parallel to the x-axis, with no robots lying on ℓ, since every SSync schedule is also available to the ASync adversary. The same result holds even if the robots have unobstructed visibility.
Therefore, we shall assume that in the one axis agreement setting, the initial configuration does not have such a symmetry. We shall prove that under this assumption, APF is solvable in ASync. Our algorithm requires six colors, namely off, terminal, candidate, symmetry, leader, and done. As mentioned earlier, initially the lights of all the robots are set to off.
The goal of APF is that the robots arrange themselves into a configuration which is similar to P with respect to translation, rotation, reflection and uniform scaling. Since the robots do not have access to any global coordinate system, there is no agreement regarding where and how the pattern is to be embedded in the plane. To resolve this ambiguity, we shall first elect a robot of the team as the leader. The relationship between leader election and arbitrary pattern formation is well established in the literature. Once a leader is elected, it is not difficult to reach an agreement on a suitable coordinate system. Then the robots have to reconfigure themselves to form the given pattern with respect to this coordinate system. Thus, the algorithm is divided into two stages, namely leader election and pattern formation from a leader configuration. The leader election stage is again logically divided into two phases, phase 1 and phase 2. Since the robots are oblivious, in each LCM cycle a robot has to infer in which stage or phase it currently is from certain geometric conditions and the lights of the robots in the perceived configuration. These conditions are described in Algorithm 1. Notice that due to the obstructed visibility, two robots taking snapshots at the same time can have quite different views of the configuration. Therefore, it may happen that they decide to execute instructions corresponding to different stages or phases of the algorithm.
4.1 Leader Election
For a group of anonymous and identical robots, leader election is solved on the basis of the relative positions of the robots in the configuration. But this is only possible if the robots can see the entire configuration. Therefore, the naive approach would be to first bring the robots to a mutually visible configuration where each robot can see all other robots. But as mentioned earlier, this can create unwanted symmetries in the configuration from where arbitrary pattern formation may be unsolvable. Therefore, we shall employ a different strategy that does not require solving mutual visibility.
The aim of the leader election stage is to obtain a stable configuration where there is a unique robot with its light set to leader and the remaining robots have their lights set to off. But we would like the configuration to satisfy some additional properties that will be useful in the next stage. Formally, our aim is to obtain a stable configuration where there is a unique robot r such that
r.light = leader, and r'.light = off for all robots r' ≠ r,
all robots other than r lie in one of the open half-planes delimited by the horizontal line passing through r.
We shall call this a leader configuration, and call this unique robot the leader. As mentioned earlier, the leader election algorithm consists of two phases, namely phase 1 and phase 2 (see Algorithm 1). We describe the phases in detail in the following.
4.1.1 Phase 1
Since the robots agree on left and right, if there is a unique leftmost robot in the configuration, then it can identify this from its local view and elect itself as the leader. However, there might be more than one leftmost robot in the configuration. So assume that there is more than one leftmost robot, all lying on the vertical line L. Our aim in this phase is to reduce the number of leftmost robots to two, or if possible, to one. We shall ask the two terminal robots on L, say r_1 and r_2, to move horizontally towards left. If both robots move synchronously by the same distance, then the new configuration will have exactly two leftmost robots. However, r_1 and r_2 cannot distinguish between this configuration and the initial one: due to the obstructed visibility, they cannot ascertain from their local views whether they are the only robots on their vertical line. To resolve this, we shall ask r_1 and r_2 to change their lights to terminal before moving. Now, consider the case where r_1 and r_2 move by different distances, with r_2 moving further. Suppose that r_1 reaches its destination first and, when it takes its next snapshot, finds that r_2 (with light terminal) is currently on the same vertical line as itself. Then r_1 incorrectly concludes that the two terminal robots have been brought onto the same vertical line, while actually r_2 is still moving leftwards. To avert this situation, we shall use the color candidate: after moving, the robots will change their lights to candidate to indicate that they have completed their moves. So, if we have two robots with light candidate on the same vertical line, then we are done. On the other hand, if we end up with the two robots with light candidate not on the same vertical line, then the one on the left will become the leader.
Before describing the algorithm, let us give a definition. We define a candidate configuration to be a stable configuration where there are two robots r_1 and r_2 such that
r_1.light = r_2.light = candidate, and r.light = off for all robots r ∉ {r_1, r_2},
r_1 and r_2 are on the same vertical line,
all robots other than r_1 and r_2 lie strictly to the right of this vertical line.
Our aim in this phase is to create either a leader configuration or a candidate configuration. A pseudocode description of phase 1 is given in Algorithm 2.
Suppose that a robot r with r.light = off takes a snapshot at time t and finds that
there are no robots other than itself in the closed half-plane to its left delimited by its own vertical line, i.e., r is the unique leftmost robot in the configuration,
all robots have their lights set to off.
Then we shall say that the robot finds itself eligible to become the leader at time t. In this case, the robot will not immediately change its light to leader. Instead, it will start executing BecomeLeader(), which makes it move downwards (according to its local coordinate system) until there are no robots other than itself in the closed half-plane below its own horizontal line. This ensures that when r changes its light to leader, we obtain a leader configuration.
In the case where r with r.light = off is not a unique leftmost robot, it calls the function LeftMostTerminal(). If LeftMostTerminal() returns True, r will move leftwards, and otherwise it will stay put. For a robot r with r.light = off, LeftMostTerminal() returns True if one of the following holds.
There are no robots in the open half-plane to the left of r, i.e., it is a leftmost robot. Furthermore, it is terminal on its vertical line.
There is exactly one robot r' in the open half-plane to the left of r, and r' has its light set to candidate. Furthermore, the open half-plane delimited by the horizontal line through r that does not contain r' must not contain any robot.
The second condition is necessary in the asynchronous setting. Let r_1 and r_2 be the leftmost robots terminal on their vertical line L in the initial configuration. Suppose that r_1 takes its first snapshot earlier and hence decides to move due to the first condition. Suppose that it leaves L at time t. If r_2 takes a snapshot before t, it will decide to move. But in the absence of the second condition, r_2 will not move if it is activated after t, as it is no longer a leftmost robot. So it would become impossible to know whether r_2 will move in the future or not. The second condition ensures that both r_1 and r_2 eventually leave L.
Therefore, if the initial configuration has more than one leftmost robot, then both leftmost terminal robots will eventually change their lights to terminal and move left; after completing their moves, they will set their lights to candidate. Now, let us see what might happen if we do not specify how far the robots should move. Consider the situation shown in Fig. 2. The initial configuration has exactly two leftmost robots r_1 and r_2. Consider the configuration shown in Fig. 2(b): r_1 has completed its move and changed its light to candidate. Clearly, LeftMostTerminal() will return True for r_2, as it can see only one robot in the open half-plane to its left, with light set to candidate. Therefore, r_2 will also decide to move left. It can be shown that such a situation does not occur if the robots move to the points computed by the function ComputeDestination() (see Algorithm 2).
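The predicate described above can be sketched as follows, evaluated by a robot with light off on the robots visible to it (the observer itself is excluded from the snapshot). This is our illustrative reconstruction from the textual description, not the paper's Algorithm 2, and it omits ComputeDestination():

```python
from typing import List, Tuple

Point = Tuple[float, float]
VisibleRobot = Tuple[Point, str]  # (position in the observer's frame, light)

def leftmost_terminal(me: Point, visible: List[VisibleRobot]) -> bool:
    """Sketch of LeftMostTerminal() for a robot with light off. The x-axis
    is the globally agreed one; the y-axis is the observer's own."""
    x, y = me
    left = [(p, c) for (p, c) in visible if p[0] < x]
    same_line = [p for (p, _) in visible if p[0] == x]
    if not left:
        # Condition 1: leftmost, and terminal on its vertical line, i.e.,
        # not strictly between two visible robots on that line.
        return not (any(p[1] > y for p in same_line)
                    and any(p[1] < y for p in same_line))
    if len(left) == 1 and left[0][1] == "candidate":
        # Condition 2: exactly one robot on the left, with light candidate;
        # the open half-plane (w.r.t. the observer's horizontal line) not
        # containing that robot must be empty.
        cand_y = left[0][0][1]
        if cand_y >= y:
            return not any(p[1] < y for (p, _) in visible)
        return not any(p[1] > y for (p, _) in visible)
    return False
```

Note that with obstructed visibility, `visible` contains only the unobstructed robots, which is exactly why the terminal and candidate lights are needed in the protocol above.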
4.1.2 Phase 2
Phase 1 will end with either a leader configuration or a candidate configuration. In the first case, leader election is done, and in the second case we enter phase 2. So assume that we have two leftmost robots, r_1 and r_2, lying on the vertical line L with their lights set to candidate. Let C' be the configuration of the remaining robots. The idea is to elect the leader by inspecting C'. If C' is asymmetric (Case 1), then it is possible to deterministically elect one of r_1 and r_2 as the leader from the asymmetry of C'. The problem is that r_1 and r_2 may have only a partial view of C' due to obstructions. However, in that case a robot is aware that it cannot see the entire configuration, as it knows the total number of robots n (from the input P). Suppose that r_1 finds that it cannot see all the robots in the configuration. Then we can ask r_1 to move along the vertical line L (in the direction opposite to r_2) in order to get an unobstructed view of the configuration. It can be proved that after finitely many steps, r_1 will be able to see all the robots in the configuration. However, this strategy may create a symmetry with the axis of symmetry not containing any robots, from where arbitrary pattern formation is deterministically unsolvable. This can happen only in the case where C' has such a symmetry (Case 2). But r_1 and r_2 cannot ascertain this without having a full view of the configuration. Consider the leftmost vertical line L' containing a robot of C'. Let K be the horizontal line passing through the mean of the positions of the robots lying on L'. Clearly, if C' is symmetric, then K is the axis of symmetry. Note that both r_1 and r_2 can see all the robots lying on L'. Hence, they can determine K. Therefore, although they cannot ascertain whether C' is symmetric, they can still determine the only possible axis with respect to which a symmetry may exist. Now r_1 and r_2 will determine their movements based on their distances from K: only the one farther away from K will move, and in case of a tie, both will move.
We can prove that, following this strategy, it is possible to elect one among r_1 and r_2 as the leader in Case 1 and Case 2. However, this may not be possible if C' is symmetric with respect to K with K containing at least one robot (Case 3). In this case, the movements of r_1 and r_2 can make the entire configuration symmetric with respect to the axis K (which contains at least one robot). In that case, r_1 and r_2 will change their lights to symmetry. It is crucial for the correctness of the algorithm that we coordinate the movements of r_1 and r_2 in such a way that when both set their lights to symmetry, they are 1) visible to all the robots in C', and 2) equidistant from K. If this is ensured, then all the robots can determine K, which is the horizontal line passing through the mid-point of the line segment joining r_1 and r_2. Then the leftmost robot on K will move towards left to become the leftmost robot in the configuration and will eventually become the leader.
A pseudocode description of phase 2 is presented in Algorithm 3. Let r_1 and r_2 be the two candidate robots in the candidate configuration, and let C' be the configuration of the remaining robots. As defined earlier, L is the vertical line containing r_1 and r_2, L' is the leftmost vertical line containing a robot of C', and K is the horizontal line passing through the mean of the positions of the robots lying on L'. Let H_1 and H_2 be the two open half-planes delimited by K. First assume that both r_1 and r_2 lie in the closure of the same half-plane, say H_1. Then the robot farther from K, say r_1, will move left. Then clearly we are back to phase 1 and, as described in Section 4.1.1, r_1 will eventually become the leader. However, this movement can create a situation similar to the one shown in Fig. 2(b). To avoid this, we specify the extent of the movement by the function ComputeDestination2 (see Algorithm 3).
Now assume that r1 and r2 are in different open half-planes, and let H1 and H2 be the half-planes containing r1 and r2 respectively. For each i ∈ {1, 2}, we define a coordinate system Zi in the following way. The point of intersection between L and H is the origin, the direction along L towards ri is the positive y-axis, and the positive direction of the x-axis is according to the global agreement. We express the positions of all the robots in C′ with respect to the coordinate system Zi and arrange these positions in the dictionary order. Let Si denote the string thus obtained (see Fig. 3). Each term of the string is an element of ℝ². To make the lengths of the strings S1 and S2 equal, null elements ⊥ may be appended to the shorter string. For any non-null term t of a string, we set t > ⊥. We shall write Si > Sj iff Si is lexicographically larger than Sj. For each string Si, let Ŝi be the string obtained from Si by deleting all terms that do not correspond to robots on L′; that is, the terms of Ŝi correspond to the robots on L′. Again, null elements may be appended to make the lengths of Ŝ1 and Ŝ2 equal. The plan is to choose a leader by comparing these strings. First, the robots will compare Ŝ1 and Ŝ2. Clearly, both candidates can see all the robots on L′, and hence can compute Ŝ1 and Ŝ2. If Ŝ1 ≠ Ŝ2, the candidate determined by the comparison will move left; as before, it will become the leader. If Ŝ1 = Ŝ2, the robots have to compare the full strings S1 and S2. But in order to compute these strings, the complete view of the configuration is required. Let d1 and d2 be the distances of r1 and r2 from H respectively. The candidate farther from H will move vertically away from H by a prescribed distance to get the full view of the configuration. When it can see all the robots in the configuration, it computes S1 and S2. If S1 ≠ S2, the candidate determined by the comparison moves left (and will become the leader) or right (and the other candidate will become the leader). In the case where S1 = S2 (C′ is symmetric), the moving candidate will move left if H has no robots on it, or otherwise will change its light to symmetry.
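The string comparison driving this case analysis can be sketched as follows. The padding with null elements ⊥ (here encoded so that every real position compares larger than ⊥) and the dictionary order on coordinate pairs come from the description above; the function name and encoding are our assumptions.

```python
# Sketch of comparing the candidate strings S1 and S2. Each string is a
# list of (x, y) positions; the shorter string is padded with a null
# element that compares smaller than any real term, as in the text.

def compare_strings(s1, s2):
    """Return 1 if S1 > S2, -1 if S1 < S2, 0 if S1 = S2 (symmetry)."""
    null = (float('-inf'), float('-inf'))   # the null element, t > null
    n = max(len(s1), len(s2))
    a = sorted(s1) + [null] * (n - len(s1))  # dictionary order + padding
    b = sorted(s2) + [null] * (n - len(s2))
    return (a > b) - (a < b)                 # lexicographic comparison
```

A tie (return value 0) corresponds to the symmetric case in which the candidates either move left or switch their lights to symmetry.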
4.2 Pattern Formation from a Leader Configuration
In a leader configuration, all non-leader robots lie in one of the open half-planes delimited by the horizontal line passing through the leader ℓ. This leads to an agreement on the direction of the y-axis, as we can set the empty open half-plane to correspond to the negative direction of the y-axis, or ‘down’. Hence, we have an agreement on ‘up’, ‘down’, ‘left’ and ‘right’. However, we still do not have a common notion of unit distance. Notice that all the non-leader robots lie above the horizontal line through ℓ. Now, we shall ask one of the non-leader robots to move to the vertical line through ℓ. The distance of this robot from the leader will be set as the unit distance. Now it only remains to fix the origin. We shall set the origin at the point that gives ℓ a fixed, agreed pair of coordinates. Now that we have a common fixed coordinate system, we embed the pattern on the plane. Let us call these points the target points. We have to bring the robots to these points without causing any collisions. Take the projection of the target points on the horizontal line through ℓ. Our plan is to first sequentially bring the robots to these projected points and then sequentially move them to the corresponding target points. The problem is that there might be multiple target points on the same vertical line, and thus we may have one projected point corresponding to multiple target points. Therefore, to accommodate multiple robots, we shall assign pockets of space of a fixed size around each projected point.
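Once the leader and the unit-distance robot are placed, any robot seeing both can translate its local observations into the common frame. The following is a minimal sketch under the assumption that ℓ sits at the agreed origin; the exact coordinates assigned to ℓ in the algorithm are not specified here, so this choice is illustrative.

```python
# Sketch: converting a locally observed point into the agreed coordinate
# system. Assumes (for illustration) that the leader is the agreed
# origin and that the axes' directions are already globally agreed, so
# only a translation and a uniform scaling are needed.

def agreed_coords(p, leader, unit_robot):
    """p, leader, unit_robot: points in a robot's local frame.
    Returns p expressed in the agreed coordinate system."""
    # the unit distance is the leader-to-unit-robot distance
    u = ((unit_robot[0] - leader[0]) ** 2 +
         (unit_robot[1] - leader[1]) ** 2) ** 0.5
    return ((p[0] - leader[0]) / u, (p[1] - leader[1]) / u)
```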
A pseudocode description of this stage is given in Algorithm 4. At the start of this stage, we have a leader configuration. Let ℓ be the leader, having its light set to leader. Any robot that can see ℓ starts executing Algorithm 4.
Initially all robots other than ℓ are inside the open half-plane above the horizontal line through ℓ. Order these robots from bottom to top, and from left to right in case multiple robots lie on the same horizontal line (see Fig. 3(a)). Let us denote these robots by r1, r2, …, r_{n−1} in this order. The robots will move sequentially according to this order: r1 will move to the vertical line through ℓ, while the remaining robots will move to the points p2, …, p_{n−1} defined below. Each ri will decide to move when it finds that 1) there are no robots with light off below its own horizontal line, and 2) it is the leftmost robot on its own horizontal line. These conditions ensure that the robots execute their moves sequentially according to the ordering in Fig. 3(a). First r1 will move horizontally to the vertical line through ℓ; after it reaches this line, we shall refer to it as the unit robot. Any robot that can see both ℓ and the unit robot sets its coordinate system as shown in Fig. 3(b), so that ℓ and the unit robot are placed at fixed coordinates. We shall refer to this coordinate system as the agreed coordinate system. Now embed the pattern in the plane with respect to this coordinate system. Let ti denote the point in the plane corresponding to the ith pattern point. Let us call these points the target points. The pattern points are sorted in dictionary order; therefore, the target points are ordered from left to right, and from bottom to top in case there are multiple target points on the same vertical line (see Fig. 3(c)). For each i, we define a point pi on the horizontal line through ℓ in the following way. Let Vi be the vertical line passing through ti, and suppose that ti is the jth target point on Vi from the bottom. Then pi is placed on the horizontal line through ℓ, displaced from Vi by an offset determined by j, k and d, where k is the total number of target points on Vi, and d is the smallest horizontal distance between any two target points not on the same vertical line, or 1 if all target points are on the same vertical line (see Fig. 3(d)). Notice that the points pi are calculated from the input pattern alone. The robots will sequentially move to the points pi respectively (see Fig. 3(e)). When an ri has to move, it can see the robots with light off already placed on the horizontal line through ℓ, and thus decides to move to its own point pi. Clearly, ri can see both ℓ and the unit robot and therefore can determine the point pi.
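One plausible realization of the pockets described above is sketched below. The grouping of target points by vertical line and the role of k and d come from the text; the exact offset formula is an assumption made for illustration.

```python
# Sketch of the "pockets": for each vertical line containing k target
# points, reserve k evenly spaced slots of width d/(k+1) to the right of
# the line's foot on the horizontal line through the leader. The precise
# offsets are assumed; only the grouping idea is from the text.

def pocket_points(targets, y_line=0.0):
    """targets: target points (x, y). Returns one point on the line
    y = y_line per target, all with distinct x-coordinates."""
    xs = sorted({x for x, _ in targets})
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    d = min(gaps) if gaps else 1.0   # smallest horizontal gap, or 1
    points = []
    for x in xs:
        k = sum(1 for tx, _ in targets if tx == x)  # targets on this line
        for j in range(k):                          # j-th from the bottom
            points.append((x + (j + 1) * d / (k + 1), y_line))
    return points
```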
Once these moves are completed, the robots occupy the points p2, …, p_{n−1} on the horizontal line through ℓ. Now they will sequentially move to their corresponding target points (see Fig. 3(f)). To make the robots move in this order, we ask a robot to move when it can see no robot with light off before it on the same horizontal line and can also see ℓ. But now the robot has to decide to which target point it is supposed to go. Since it can see both ℓ and the unit robot, it can compute its coordinates in the agreed coordinate system. If it finds its coordinates to be equal to pi, then it decides that it has to move to ti. But if ti is not the lowest target point on its vertical line, it will wait until the robots assigned to the target points below ti complete their moves. Hence, it will move only when it sees a robot with light done at the target point immediately below ti. When all these conditions are satisfied, it will change its light to done and move to its destination.
Now r1 and ℓ have to move to their respective target points. r1 moves when it finds that there is no other robot with light off in the configuration and no robot remains on the horizontal line through ℓ except ℓ itself. However, in this case, it will change its light to done after the move (see line 4 in Algorithm 4). When ℓ sees no robots with light off, it decides to move to its own target point. However, since there is no longer a robot on the vertical line through ℓ, it cannot ascertain the unit distance of the agreed coordinate system and therefore cannot determine its target point in the plane. Hence, it has to locate the point by some other means. Let r be the leftmost (and bottommost in case of a tie) robot that ℓ can see. Clearly, r is at its own target point, whose agreed coordinates ℓ knows. Hence, ℓ knows one point in the plane together with its agreed coordinates, and it also knows its own position together with its own agreed coordinates. From these two points, it can easily determine the point in the plane whose agreed coordinates are those of its target. Therefore, it will change its light to done and move to it.
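The two-point localization used by ℓ can be sketched as follows: since the axes are already agreed, two (plane point, agreed coordinates) pairs determine the scale and origin of the agreed frame, and hence every point in it. The function name and interface are illustrative.

```python
# Sketch: recover a plane point from its agreed coordinates using two
# reference points whose agreed coordinates are known. No rotation is
# needed because the axes' directions are globally agreed.

def locate(plane_a, agreed_a, plane_b, agreed_b, agreed_target):
    """plane_*: points in the plane; agreed_*: their agreed coordinates.
    Returns the plane point whose agreed coordinates are agreed_target."""
    dx = agreed_b[0] - agreed_a[0]
    dy = agreed_b[1] - agreed_a[1]
    # unit length = plane separation / agreed separation (either axis)
    if dx != 0:
        u = (plane_b[0] - plane_a[0]) / dx
    else:
        u = (plane_b[1] - plane_a[1]) / dy
    ox = plane_a[0] - u * agreed_a[0]   # plane position of the origin
    oy = plane_a[1] - u * agreed_a[1]
    return (ox + u * agreed_target[0], oy + u * agreed_target[1])
```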
5 Correctness and the Main Results
5.1 Correctness of Algorithm 1
Lemma 1. Suppose that at time t,
1) a robot r with light set to off finds itself eligible to become the leader, and
2) for every robot other than r, its position is stable and its light is off.
Then there exists a time t′ > t such that the configuration at t′ is a leader configuration with leader r.
Clearly, r will start executing BecomeLeader(). The robot will move downwards according to its local coordinate system whenever it finds any robot other than itself in the relevant region. Clearly, after finitely many moves, it will find that this region contains no robot other than itself. Notice that no livelock is created, as its local coordinate system (and hence its perception of up and down) is unchanged in each LCM cycle. So r will change its light to leader at some time t′, and the configuration at t′ will be a leader configuration. ∎
If the initial configuration has a unique leftmost robot, then there exists a time t such that the configuration at t is a leader configuration.
Follows from Lemma 1. ∎
If the initial configuration has more than one leftmost robot, then there exists a time t such that the configuration at t is a candidate configuration.
Let L denote the vertical line passing through the leftmost robots in the initial configuration, and let r and r′ be the terminal robots on L. Partition the remaining robots as A ∪ B, where A is the set of non-terminal robots on L and B is the set of robots not lying on L. Clearly , as