TuringMobile: A Turing Machine of Oblivious Mobile Robots with Limited Visibility and its Applications
In this paper we investigate the computational power of a set of mobile robots with limited visibility. At each iteration, a robot takes a snapshot of its surroundings, uses it to compute a destination point, and moves toward that destination. Each robot is punctiform and memoryless, operates in Euclidean space, has a local reference system independent of those of the other robots, and is activated asynchronously by an adversarial scheduler. Moreover, robots are non-rigid, in that they may be stopped by the scheduler during each move before reaching their destination (but are guaranteed to travel at least a fixed unknown distance before being stopped).
We show that despite these strong limitations, it is possible to arrange a small team of these weak entities in the plane to simulate the behavior of a stronger robot that is rigid (i.e., it always reaches its destination) and is endowed with registers of persistent memory, each of which can store a real number. We call this arrangement a TuringMobile. In its simplest form, a TuringMobile consisting of only three robots can travel in the plane and store and update a single real number. We also prove that this task is impossible with fewer than three robots.
Among the applications of the TuringMobile, we focus on Near-Gathering (all robots have to gather in a small-enough disk) and Pattern Formation (of which Gathering is a special case) with limited visibility. Interestingly, our investigation implies that both problems are solvable in Euclidean spaces of any dimension, even if the visibility graph of the robots is initially disconnected, provided that a small number of these robots is arranged to form a TuringMobile. In the special case of the plane, a basic TuringMobile of only three robots is sufficient.
Framework and background
The investigations of systems of autonomous mobile robots have long moved outside the boundaries of the engineering, control, and AI communities. Indeed, the computational and complexity issues arising in such systems are important research topics within theoretical computer science, especially in distributed computing. In these theoretical investigations, the robots are usually viewed as computational entities that live in a metric space, typically R^2 or R^3, in which they can move. Each robot operates in “Look-Compute-Move” (LCM) cycles: it observes its surroundings, it computes a destination within the space based on what it sees, and it moves toward the destination. The only means of interaction between robots are observations and movements: that is, communication is stigmergic. The robots, identical and outwardly indistinguishable, are oblivious: when starting a new cycle, a robot has no memory of its activities (observations, computations, and moves) from previous cycles (“every time is the first time”). In other words, the robots have no persistent memory; for this reason, they are sometimes said to be memoryless. Clearly, obliviousness is a desirable property, as it builds a degree of self-stabilization and fault-tolerance into the system and its computations. Equally clear is that being memoryless severely constrains the computational capabilities of the robots.
There have been intensive research efforts on the computational issues arising with such robots, and an extensive literature has been produced in particular in regard to the important class of Pattern Formation problems [10, 12, 13, 18, 19] as well as for Gathering [1, 3, 4, 5, 11, 14]; see also [6, 20]. The goal of the research has been to understand the minimal assumptions needed for a team (or swarm) of such robots to solve a given problem, and to identify the impact that specific factors have on feasibility and hence computability.
The most important factor is the power of the adversarial scheduler that decides when each activity of each robot starts and when it ends. The main adversaries (or “environments”) considered in the literature are: synchronous, in which the computation cycles of all active robots are synchronized, and at each cycle either all (in the fully synchronous case) or a subset (in the semi-synchronous case) of the robots are activated, and asynchronous, where computation cycles are not synchronized, each activity can take a different and unpredictable amount of time, and each robot can be independently activated at each time instant. Another important factor is whether a robot moving toward a computed destination is guaranteed to reach it (rigid robot), or whether it can be stopped on the way (non-rigid) at a point decided by an adversary. In all the above cases, the power of the adversaries is limited by some basic fairness assumption. All the existing investigations have concentrated on the study of (a-)synchrony, several on the impact of rigidity, and some on other relevant factors, such as agreement on local coordinate systems or on their orientation; we refer the reader to the literature for a review.
From a computational point of view, there is another crucial factor: the visibility range of the robots, that is, how much of the surrounding space they are able to observe in a Look operation. In this regard, two basic settings are considered: unlimited visibility, where the robots can see the entire space (and thus all other robots), and limited visibility, where the robots have a fixed visibility radius. While the investigations on (a-)synchrony and rigidity have covered all aspects of those assumptions, this is not the case with respect to visibility. In fact, almost all research has assumed unlimited visibility; the few exceptions are the algorithms for Convergence, Gathering [7, 8, 11], and Near-Gathering when the visibility range of the robots is limited. The unlimited visibility assumption clearly greatly simplifies the computational universe under investigation; at the same time, it neglects the more general and realistic setting, which is still largely unexplored.
Let us also stress that, in the existing literature, all results on oblivious robots concern at most two-dimensional spaces; the only exception is the recent result on plane formation in R^3 by semi-synchronous rigid robots with unlimited visibility. No results exist for robots in higher dimensions.
In this paper we contribute several constructive insights on the computational universe of oblivious robots with limited visibility, especially asynchronous non-rigid ones, in any dimension.
The first and main contribution is a technique to construct a “moving Turing Machine” made solely of asynchronous oblivious non-rigid robots in R^d with limited visibility, for any d ≥ 2. More precisely, we show how to arrange identical non-rigid oblivious robots in R^d with a fixed visibility radius, and how to program them so that they can collectively behave as a single rigid robot in R^d with persistent registers and a suitable visibility radius would. This team of identical robots is informally called a TuringMobile.
We obtain this result by using as fundamental construction a basic component, which is able to move in the plane while storing and updating a single real number. Interestingly, we show that three robots are necessary and sufficient to build such a machine. The TuringMobile will then be built by arranging multiple copies of this basic component side by side.
We stress that robots forming the TuringMobile are asynchronous, that is, the scheduler makes them move at independent arbitrary speeds, and each robot takes the next snapshot an arbitrary amount of time after terminating each move; furthermore, they are anonymous, in that they are indistinguishable from each other, and they all execute the same program to compute their destination points. Notably, this program only performs arithmetic operations, square roots, and comparisons (hence no transcendental function has to be computed by the robots).
A TuringMobile is a powerful construct that, once deployed in a swarm of robots, can act as a rigid leader with persistent memory, allowing the swarm to overcome many handicaps imposed by obliviousness, limited visibility, and asynchrony. As examples, we present a variety of applications in R^d, with d ≥ 2.
First of all we show how a TuringMobile can explore and search the space. We then show how it can be employed to solve the long-standing open problem of (Near-)Gathering with limited visibility in spite of an asynchronous non-rigid scheduler and disagreement on the axes, a problem still open without a TuringMobile. Interestingly, the presence of the TuringMobile allows Gathering to be done even if the initial visibility graph is disconnected (this does not change the fact that there are cases in which Gathering is impossible, as remarked in [3, 11]). Finally we show how the arbitrary Pattern Formation problem can be solved under the same conditions (asynchrony, limited visibility, possibly disconnected visibility graph, etc.).
There is a limitation to the use of a TuringMobile when deployed in a swarm of robots. Namely, the TuringMobile must be always recognizable (e.g., by its unique shape) so that other robots cannot interfere by moving too close to the machine, disrupting its structure.
The paper is organized as follows: In Section 2 we give formal definitions, introducing mobile robots with or without memory as oracle semi-oblivious real RAMs. In Section 3 we illustrate our implementation of the TuringMobile. The correctness of the proposed construction is proved in Section 4. In Section 5 we show how to apply the TuringMobile to solve fundamental problems. In Section 6 we conclude with some extra remarks and open problems.
2 Definitions and Preliminaries
2.1 Oracle Semi-Oblivious Real RAMs
Real random-access machines
A real RAM, as defined by Shamos [15, 17], is a random-access machine that can operate on real numbers. That is, instead of just manipulating and storing integers, it can handle arbitrary real numbers and do infinite-precision operations on them. It has a finite set of internal registers and an infinite ordered sequence of memory cells; each register and each memory cell can hold a single real number, which the machine can modify by executing its program. (Nonetheless, the constant operands in a real RAM’s program cannot be arbitrary real numbers, but have to be integers.)
A real RAM’s instruction set contains at least the four arithmetic operations, but it may also contain k-th roots, trigonometric functions, exponentials, logarithms, and other analytic functions, depending on the application. The machine can also compare two real numbers and branch depending on which one is larger.
The initial contents of the memory cells are the input of the machine (we stipulate that only finitely many of them contain non-zero values), and their contents when the machine halts are its output. So, each program of a real RAM can be viewed as a partial function mapping tuples of reals into tuples of reals.
Real RAMs can compute at least the Turing-computable partial functions over the integers. Indeed, it is well known that all these functions can be computed by traditional RAMs whose programs only contain integer additions, subtractions, and comparisons. It is obvious that a real RAM running such a program on an integer-valued input behaves exactly as a traditional RAM, and therefore computes the same partial function.
Oracles and semi-obliviousness
We introduce the oracle semi-oblivious real RAM, which is a real RAM with an additional “ASK” instruction. Whenever this instruction is executed, the contents of all the memory cells are replaced with new values, which are a function of the numbers stored in the registers.
In other words, the machine can query an external oracle by putting a question in its registers in the form of real numbers. The oracle then reads the question and writes the answer in the machine’s memory cells, erasing all pre-existing data. The term “semi-oblivious” comes from the fact that, every time the machine invokes the oracle, it “forgets” everything it knows, except for the contents of the registers, which are preserved. (Observe that, in general, the machine cannot salvage its memory by encoding its contents in the registers: since its instruction set has only analytic functions, it cannot injectively map a tuple of arbitrary real numbers into a single real number.)
In spite of their semi-obliviousness, these real RAMs with oracles are at least as powerful as Turing Machines with oracles.
Given an oracle Turing Machine, there is an oracle semi-oblivious real RAM with one register that computes the same partial function.
Following Rogers, we define an oracle Turing Machine as a Turing Machine with an additional read-only tape containing the answers to all possible oracle queries. The i-th cell of the oracle tape contains a symbol that is read by the machine whenever the head of the oracle tape is in position i.
Given such a machine T, we construct an oracle semi-oblivious real RAM M with one register that “simulates” T step by step. As already observed, a real RAM can compute any Turing-computable partial function, and M behaves as a real RAM as long as it does not invoke its oracle. So, M can encode and decode the entire state of T, including the contents of its non-oracle tapes and the positions of its heads on the tapes, as a single integer: indeed, the functions that encode and decode a Turing Machine’s state are themselves Turing-computable.
To simulate one step of T, M encodes the current state of T in its register and executes an “ASK” instruction. The oracle of M reads the register, decodes the state of T, fetches the position of T’s head on the oracle tape, and answers with the symbol s that T would read on its oracle tape at that position. Next, M finds s in the first cell of its own memory. So, M decodes the contents of the register to retrieve the state of T, and uses it along with s to compute the next state of T. ∎
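The key fact used above is that a machine’s whole configuration can be packed into one integer by Turing-computable functions. The following sketch (our own illustration, not the paper’s construction) does this for a toy configuration with the classic Cantor pairing function:

```python
# Sketch: encode a (state, head position, tape) configuration as one integer
# using the Cantor pairing function, which is injective and computable.

def pair(a: int, b: int) -> int:
    """Cantor pairing: injective map from pairs of naturals to naturals."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z: int) -> tuple[int, int]:
    """Inverse of the Cantor pairing function."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    # Guard against floating-point rounding at the boundary.
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    while w * (w + 1) // 2 > z:
        w -= 1
    b = z - w * (w + 1) // 2
    return w - b, b

# Encode a toy configuration into a single integer and recover it.
state, head, tape = 4, 17, 0b101101
code = pair(state, pair(head, tape))
s, rest = unpair(code)
h, t = unpair(rest)
assert (s, h, t) == (state, head, tape)
```

A real RAM restricted to integer arithmetic can evaluate both directions of this map, which is all the proof above requires.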
2.2 Mobile Robots as Real RAMs
Our oracle semi-oblivious real RAM model can be reinterpreted in the realm of mobile robots. A mobile robot is a computational entity that lives in a metric space, typically R^2 or R^3. It can observe its surroundings and move within the space based on what it sees. The same space may be populated by several mobile robots and static objects.
To compute its next destination point, a mobile robot executes a real RAM program whose input is a representation of its local view of the space. After moving, its entire memory is erased, but the content of its registers is preserved. Then it makes a new observation; from the observation data and the contents of the registers, it computes another destination point, and so on. If the robot has no registers, it is said to be oblivious.
The actual movement of a mobile robot is controlled by an external scheduler. The scheduler decides how fast the robot moves toward its destination point, and it may even interrupt its movement before the destination point is reached. If the movement is interrupted midway, the robot makes the next observation from there and computes a new destination point as usual. The robot is not notified that an interruption has occurred, but it may be able to infer it from its next observation and the contents of its registers. For fairness, the scheduler is only allowed to interrupt a robot after it has covered a distance of at least δ in the current movement, where δ is a positive constant. This guarantees, for example, that if a robot keeps computing the same destination point, it will reach it in a finite number of iterations. If the scheduler never interrupts its movements, the robot always reaches its destinations, and is said to be rigid.
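The fairness guarantee can be sketched concretely. In the toy simulation below (the value of δ and the adversary’s stopping policy are illustrative assumptions), a robot that keeps computing the same destination reaches it in finitely many cycles, because each interrupted move still covers at least δ:

```python
import math

# A worst-case scheduler that always stops the robot as early as allowed
# (after distance delta) cannot keep it from a fixed destination forever.

def move_non_rigid(pos, dest, delta, adversary_fraction):
    """One movement: the scheduler stops the robot somewhere on the segment
    [pos, dest], but not before it has covered distance delta."""
    d = math.dist(pos, dest)
    if d <= delta:
        return dest                      # too close to be interrupted
    travelled = max(delta, adversary_fraction * d)
    t = travelled / d
    return (pos[0] + t * (dest[0] - pos[0]), pos[1] + t * (dest[1] - pos[1]))

pos, dest, delta = (0.0, 0.0), (10.0, 0.0), 0.3
cycles = 0
while pos != dest:
    pos = move_non_rigid(pos, dest, delta, adversary_fraction=0.0)
    cycles += 1
assert pos == dest and cycles < 40
```

With δ = 0.3 and a destination 10 units away, at most ⌈10/0.3⌉ = 34 cycles are needed, matching the finiteness claim above.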
Mobile robots, revisited
A mobile robot in R^d with k registers can be modeled as an oracle semi-oblivious real RAM with 2d + k + 1 registers, as follows.
d position registers hold the absolute coordinates of the robot in R^d.
d destination registers hold the destination point of the robot, expressed in its local coordinate system.
One timestamp register contains the time of the robot’s last observation.
k true registers correspond to the k registers of the robot.
As the RAM’s execution starts, it ignores its input, erases all its registers, and executes an “ASK” instruction. The oracle then fills the RAM’s memory with the robot’s initial position p, the time t of its first observation, and a representation of the geometric entities and objects surrounding the robot, as seen from p at time t.
The RAM first copies p and t into its position registers and timestamp register, respectively. Then it executes the program of the mobile robot, using its true registers as the robot’s registers and adding d + 1 to all memory addresses. This effectively makes the RAM ignore the values of p and t, which indeed are not supposed to be known to the mobile robot.
When the robot’s program terminates, the RAM’s memory contains the output, which is the next destination point q, expressed in the robot’s coordinate system. The RAM copies q into its destination registers, and the execution jumps back to the initial “ASK” instruction.
Now the oracle reads p, t, and q from the RAM’s registers (it ignores the true registers), converts q into absolute coordinates (knowing p and the orientation of the local coordinate system of the robot) and replies with a new position p′, a timestamp t′ > t, and observation data representing a snapshot taken from p′ at time t′. To comply with the mobile robot model, p′ must lie on the segment between p and q (in absolute coordinates), and either p′ = q or the distance between p and p′ is at least δ. The execution then proceeds in the same fashion, indefinitely.
Note that in this setting the oracle represents the scheduler. The presence of a timestamp in the query allows the oracle to model dynamic environments in which several independent robots may be moving concurrently (without a timestamp, two observations from the same point of view would always be identical).
Snapshots and limited visibility
In the mobile robot model we consider in this paper, an observation is simply an instantaneous snapshot of the environment taken from the robot’s position. In turn, each entity and object that the robot can see is modeled as a dimensionless point in R^d. A mobile robot has a positive visibility radius V: it can see a point in R^d if and only if it is located at distance at most V from its current position. If V = ∞, the robot is said to have unlimited visibility.
As we hinted at earlier in this section, a mobile robot has its own local reference system in which all the coordinates of the objects in its snapshots are expressed. The origin of a robot’s local coordinate system always coincides with the robot’s position (hence it follows the robot as it moves), and its orientation and handedness are decided by the scheduler (and remain fixed). Different mobile robots may have coordinate systems with a different orientation or handedness. (However, when two robots have the same visibility radius, they also implicitly have the same unit of distance.)
So, a snapshot is just a (finite) list of points, each of which is a d-tuple of real numbers.
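Snapshot formation under limited visibility can be sketched as follows in the plane (the function name and parameters are our own; we assume the local frame differs from the global one by a translation to the robot plus a rotation by an angle theta fixed by the scheduler):

```python
import math

# Sketch of a limited-visibility snapshot: points within the visibility
# radius V are reported in the robot's local frame, whose origin is the
# robot itself and whose orientation (theta) is fixed by the scheduler.

def snapshot(robot, others, V, theta):
    """Return the visible points as tuples in the robot's local coordinates."""
    c, s = math.cos(theta), math.sin(theta)
    local = []
    for p in others:
        dx, dy = p[0] - robot[0], p[1] - robot[1]
        if math.hypot(dx, dy) <= V:
            # rotate the world frame into the local frame
            local.append((c * dx + s * dy, -s * dx + c * dy))
    return local

# A robot at (5, 5) with V = 3 sees only the nearby point.
snap = snapshot((5.0, 5.0), [(6.0, 5.0), (50.0, 50.0)], V=3.0, theta=0.0)
assert snap == [(1.0, 0.0)]
```

Note that the robot’s own position maps to the origin of every snapshot, as required by the model.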
Simulating memory and rigidity
The main contribution of this paper, loosely speaking, is a technique to turn non-rigid oblivious robots into rigid robots with persistent memory, under certain conditions. More precisely, if identical non-rigid oblivious robots in R^d with a fixed visibility radius are arranged in a specific pattern and execute a specific algorithm, they can collectively act in the same way as a single rigid robot in R^d with persistent registers and a suitable visibility radius would. This team of identical robots is informally called a TuringMobile.
We stress that the robots of a TuringMobile are asynchronous, that is, the scheduler makes them move at independent arbitrary speeds, and each robot takes the next snapshot an arbitrary amount of time after terminating each move. The robots are also anonymous, in that they are indistinguishable from each other, and they all execute the same program.
3 Implementing the TuringMobile
3.1 Basic Implementation
We will first describe how to construct a basic version of the TuringMobile with just three oblivious non-rigid robots in the plane. This TuringMobile can remember a single real number and rigidly move in the plane by fixed-length steps: its layout is sketched in Figure 2. In Section 3.2, we will show how to combine several copies of this basic machine to obtain a full-fledged TuringMobile.
Position at rest
The elements of the basic TuringMobile are three robots: a Commander robot, a Number robot, and a Reference robot. These robots have the same visibility radius, which is large compared to the size of the machine: there is always a disk of much smaller radius containing all three of them. When the machine is “at rest”, the angle formed at the Reference robot by the Commander and the Number robot is a right angle, the distance between the Commander and the Reference robot is some fixed value, and the distance between the Reference robot and the Number robot is approximately some other fixed value. More precisely, the Number robot lies on a short segment of fixed length whose endpoints are at prescribed distances from the Reference robot.
The distance between the Reference robot and the Number robot when the TuringMobile is at rest is a representation of the real number that the machine is currently storing. There are several possible ways of defining such a code: an easy one is to map the number bijectively into a bounded interval by means of a transcendental function such as the arctangent, and to decode it with the inverse map. A different method that does not use transcendental functions is discussed in Section 6.
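As an illustration, one concrete code of this kind (our own example, not necessarily the paper’s exact formula; the segment length MU is an assumed parameter) squashes an arbitrary real into a sub-interval of the segment with the arctangent:

```python
import math

# Illustrative encoding: map an arbitrary real x into the open interval
# (0, MU) with the arctangent, and use the result as the Number-Reference
# offset along the segment. Decoding inverts the map with the tangent.

MU = 0.01  # assumed length of the segment on which the Number robot lies

def encode(x: float) -> float:
    return MU * (math.atan(x) / math.pi + 0.5)   # strictly between 0 and MU

def decode(d: float) -> float:
    return math.tan(math.pi * (d / MU - 0.5))

for x in (-1234.5, 0.0, 0.25, 7.0):
    assert abs(decode(encode(x)) - x) < 1e-6
```

Since encode is a bijection from R onto (0, MU), every real number corresponds to exactly one rest position of the Number robot on its segment.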
The Commander’s role is to decide in which direction the machine should move next, and to initiate the movement. When the machine is at rest, the Commander may choose among three possible final destinations, shown in Figure 2. The segments joining the Commander to these three points all have the same fixed length, small compared to the visibility radius, and form angles of 120° with one another, in such a way that one of them is collinear with the segment joining the Commander and the Reference robot.
Around the center of each segment there is a midway triangle, drawn in gray in Figure 2. This is an isosceles triangle whose base lies on the segment itself and whose height and base have the same small length. When the Commander decides on one of the three points as its final destination, it moves along the corresponding segment, but it takes a small detour through the midway triangle, as we will explain shortly.
Structure of the algorithm
Algorithm 1 is the program that each element of the basic TuringMobile executes every time it computes its next destination point.
Since the robots are anonymous, they first have to determine their roles, i.e., who is the Commander, etc. (line 1 of the algorithm). We make the assumption that there exists a disk of a certain fixed radius containing only the TuringMobile (close to its center) and no other robot. Using the fact that the two closest robots must be the Commander and the Reference robot and that the two farthest robots must be the Commander and the Number robot, it is then easy to determine who is who (these properties will be preserved throughout the execution, as we will see in the next section).
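The role-identification step can be sketched directly from the two distance properties just stated (the function name and the sample coordinates are our own; the coordinates are made up to satisfy those properties, not taken from Figure 2):

```python
import math
from itertools import combinations

# Sketch of role identification from a snapshot of the three members:
# the closest pair is {Commander, Reference}, the farthest pair is
# {Commander, Number}, so the Commander is the robot common to both pairs.

def identify_roles(p0, p1, p2):
    pts = [p0, p1, p2]
    pairs = list(combinations(range(3), 2))
    closest = min(pairs, key=lambda ij: math.dist(pts[ij[0]], pts[ij[1]]))
    farthest = max(pairs, key=lambda ij: math.dist(pts[ij[0]], pts[ij[1]]))
    commander = (set(closest) & set(farthest)).pop()
    reference = (set(closest) - {commander}).pop()
    number = (set(farthest) - {commander}).pop()
    return {"Commander": pts[commander],
            "Reference": pts[reference],
            "Number": pts[number]}

roles = identify_roles((0.0, 0.0), (1.0, 0.0), (4.0, 2.0))
assert roles["Commander"] == (0.0, 0.0)
assert roles["Reference"] == (1.0, 0.0)
assert roles["Number"] == (4.0, 2.0)
```

Because each robot computes the same deterministic rule on the same three points, all members agree on the role assignment without communicating.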
Once it has determined its role, each robot executes a different branch of the algorithm (cf. lines 2, 13, and 23). The general idea is that, when the Commander realizes that the machine is in its rest position, it decides where to move next, i.e., it chooses a final destination. This choice is based on the number stored in the machine’s “memory” (i.e., the number encoded by the distance between the Reference and Number robots), the relative positions of the visible robots external to the machine, and also on the application, i.e., the specific program that the TuringMobile is executing.
When the Commander has decided its final destination, the entire machine translates by the corresponding vector, and the Number robot also updates its distance from the Reference robot to represent a different real number. Again, this number is computed based on the number the machine was previously representing, the relative positions of the visible robots external to the machine, and the specific program: in general, the new distance between the Reference and Number robots is a function of the old distance.
When this is done, the machine is in its rest position again, so the Commander chooses a new destination, and so on, indefinitely.
Note that it is not possible for all three robots to translate by the same vector at the same time, because they are non-rigid and asynchronous. If the scheduler stops them at arbitrary points during their movement, after the structure of the machine has been destroyed, they will be incapable of recovering all the information they need to resume their movement (recall that they are oblivious and they have to compute a destination point from scratch every time).
To prevent this, the robots employ various coordination techniques. First the Commander moves to the midway triangle, and precisely to one of its base vertices, as shown in Figure 3(a) (cf. line 5 of Algorithm 1). Then it positions itself on the triangle’s altitude, in such a way as to indicate the new number that the machine is supposed to represent: that is, the Commander picks the point on the altitude whose distance from the base encodes the new number (lines 6 and 7). Even if it is stopped by the scheduler before reaching such a point, it can recover its destination by simply drawing a ray from the base vertex through its current position and intersecting it with the altitude (lines 8 and 9).
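Under our reading of this step (names B, M, u, T are our own labels; the geometry is an illustrative assumption), the Commander travels from the base vertex B straight toward its target T on the altitude line through M with direction u, so a ray-line intersection recovers T from any stopping point P:

```python
# Illustrative sketch of destination recovery: the target lies on the
# altitude line of the midway triangle, so intersecting the ray from the
# base vertex B through the current position P with that line gives the
# target back, even after an adversarial interruption.

def intersect_ray_line(B, P, M, u):
    """Intersect the ray B->P with the line M + t*u (u not parallel to B->P)."""
    dx, dy = P[0] - B[0], P[1] - B[1]          # ray direction
    det = dx * (-u[1]) - dy * (-u[0])          # solve B + s*(P-B) = M + t*u
    s = ((M[0] - B[0]) * (-u[1]) + (M[1] - B[1]) * u[0]) / det
    return (B[0] + s * dx, B[1] + s * dy)

# Altitude along the y-axis from M = (0, 0); base vertex at B = (1, 0);
# intended target T = (0, 0.4). Stop the robot 30% of the way to T ...
B, M, u, T = (1.0, 0.0), (0.0, 0.0), (0.0, 1.0), (0.0, 0.4)
P = (B[0] + 0.3 * (T[0] - B[0]), B[1] + 0.3 * (T[1] - B[1]))
# ... and recover T from B, P, and the altitude line alone.
R = intersect_ray_line(B, P, M, u)
assert abs(R[0] - T[0]) < 1e-9 and abs(R[1] - T[1]) < 1e-9
```

The recovery uses only points that stay fixed during the move (B and the altitude line) plus the Commander’s current position, which is exactly the information an oblivious robot can re-read from a snapshot.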
When the Commander has reached its point on the altitude, it waits to let the Number robot adjust its position on its own segment to match that of the Commander on the altitude, as in Figure 3(b) (lines 21 and 22). This effectively makes the Number robot represent the new number. Note that the Number robot can do this even if it is stopped by the scheduler several times during its march, because the Commander’s position keeps reminding it of the correct new value: since the new number depends on the old one, the Number robot would be unable to re-compute it after it has forgotten the old number.
When the Commander has reached its final destination, the Number robot realizes it and makes the corresponding move (lines 14–18) while the other two robots wait. The destination point of the Number robot is the one shown in Figure 2. Finally, when the Number robot has reached its destination, the Reference robot realizes it and makes the final move to bring the TuringMobile back into a rest position (lines 23–27).
Computing the Virtual Commander
After the Commander has left its rest position and is on its journey, the TuringMobile loses its initial shape, and identifying the candidate destinations and the midway triangles becomes a non-trivial task. To simplify this task, the robots try to guess where the Commander’s original rest position may have been by computing a point called the Virtual Commander.
Assuming that the Reference and Number robots have not moved from their rest positions, the Virtual Commander is easily computed: draw the line through the Reference robot perpendicular to the segment joining the Reference and Number robots; then, the Virtual Commander is the point on this line at the prescribed rest distance from the Reference robot that is closest to the Commander. Once we have the Virtual Commander, we can construct the candidate destination points with respect to it (in the same way as we did in Figure 2 with respect to the Commander). This technique is used by Algorithm 1 at lines 3 and 20.
In the special case where the Commander has reached its final destination and the Reference robot has not moved from its rest position (but perhaps the Number robot has moved), the Virtual Commander can also be computed. This situation is recognized because the distance between the Commander and the Reference robot is either maximum or minimum (as given by the law of cosines), as Figure 2 shows. If the distance is maximum, the Commander must coincide with one specific candidate destination; otherwise, it coincides with one of the other two, and the two cases are told apart by checking whether a certain angle is obtuse or acute. Since we know the position of the Reference robot and the destination reached by the Commander, it is then easy to determine the other candidate destinations. This technique is used at line 15.
The Reference robot’s behavior
To know when it has to start moving, the Reference robot simply executes Algorithm 1 from the perspective of the Commander and the Number robot: if neither of them is supposed to move, then the Reference robot starts moving (line 24).
We have seen that the Number robot can determine its destination solely by looking at the positions of the Commander and the Reference robot, which remain fixed as it moves. For the Reference robot the destination point is not as easy to determine, because the distance between the Reference and Number robots varies depending on what number is stored in the TuringMobile.
However, the Reference robot knows that its move must put the TuringMobile in a rest position. The condition for this to happen is that its destination point be at the fixed rest distance from the Commander (line 25) and form a right angle with the Commander and the Number robot (line 26). There are exactly two such points in the plane, but one of them is much farther from the Reference robot’s current position than the other, and hence the Reference robot will pick the closer one (line 27).
As the Reference robot moves toward such a point, all the above conditions must be preserved, due to the asynchronous and non-rigid nature of the robots. This is not a trivial requirement, and we will prove that it is indeed fulfilled in Section 4.
3.2 Complete Implementation
We have shown how to implement a basic component of the TuringMobile in the plane, consisting of three robots: a Commander, a Number, and a Reference. The basic component is able to rigidly move by a fixed distance in three fixed directions, 120° apart from one another. It can also store and update a single real number.
We can obtain a full-fledged TuringMobile in the plane by putting several tiny copies of the basic component side by side, as in Figure 4.
For the machine to work, we stipulate that there exists a disk of a suitable radius that contains all the robots constituting the TuringMobile and no extraneous robot. The distance between two consecutive basic components of the TuringMobile is roughly fixed, and is large compared to the size of a single component. This makes it easy for the robots to tell the basic components apart and determine the role of each robot within its basic component.
Since a basic component of the TuringMobile is a scalene triangle, which is chiral, all its members implicitly agree on a clockwise direction even if they have different handedness. Similarly, all robots in the TuringMobile agree on a “leftmost” basic component, whose Commander is said to be the Leader of the whole machine.
All the basic components of the TuringMobile are always supposed to agree on their next move and proceed in a roughly synchronous way. To achieve this, when all the basic components are in a rest position, the Leader decides the next direction among the three possible, and executes line 4 of Algorithm 1. Then all the other Commanders see where the Leader is going, and copy its movement.
When all the Commanders have reached their respective base vertices, they execute line 7 of the algorithm, and so on. At any time, each robot executes a line of the algorithm only if all its homologous robots in the other basic components of the TuringMobile are ready to execute that line or have already executed it; otherwise, it waits.
When the last Reference robot has completed its movement, the machine is in a rest position again, and the coordinated execution repeats with the Leader choosing another direction, etc.
Simulating a non-oblivious rigid robot
Let a program for a rigid robot with persistent registers and a given visibility radius be given; call this robot T. We want the TuringMobile described above to act as T, even though its constituting robots are non-rigid and oblivious.
Our TuringMobile consists of k + 2 basic components, where k is the number of registers of T, each component dedicated to memorizing and updating one real number. These numbers are the x coordinate and the y coordinate of the destination point of T and the contents of the k registers of T. We will call the first two numbers the x variable and the y variable, respectively.
When the TuringMobile is in a rest position, its x and y variables represent the coordinates of the destination point of T relative to the Leader of the machine. Whenever the TuringMobile moves by a fixed-length step in some direction, these values are updated by subtracting the components of the corresponding vector from them. Of course, this update is computed by the Commanders of the first two basic components of the machine, which communicate it to their respective Number robots, as explained in Section 3.1.
Let P be the destination point of T. Since the TuringMobile can only move by fixed-length vectors in three possible directions, it may be unable to reach P exactly. So, the Leader always plans the next move trying to reduce its distance from P until this distance is small enough (this is possible because the three directions are 120° apart, so one of them always forms an angle of at most 60° with the direction of P).
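The Leader’s step selection can be sketched as follows (STEP and the coordinates are illustrative; we assume moves of a fixed length in three directions 120° apart). Since the best direction is within 60° of the goal, each move shortens the distance by at least STEP/2 whenever the goal is more than STEP away:

```python
import math

# Sketch of the Leader's greedy strategy: among the three allowed
# directions, repeatedly pick the one that most reduces the distance
# to the destination, until the Leader is within one step of it.

STEP = 1.0
DIRS = [(math.cos(a), math.sin(a))
        for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]

def approach(pos, goal):
    moves = 0
    while math.dist(pos, goal) > STEP and moves < 1000:
        pos = min(((pos[0] + STEP * dx, pos[1] + STEP * dy)
                   for dx, dy in DIRS),
                  key=lambda q: math.dist(q, goal))
        moves += 1
    return pos, moves

pos, moves = approach((0.0, 0.0), (7.3, -4.1))
assert math.dist(pos, (7.3, -4.1)) <= STEP
```

The cap of 1000 iterations is a safety guard for the sketch; the geometric argument above guarantees termination well before it is hit.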
When the Leader is close enough to P, it “pretends” to be in P, and the TuringMobile executes the program of T to compute the next destination point. Recall that the visibility radius of T is smaller than that of the robots of the TuringMobile; since the difference exceeds the distance of each member from P, every member of the TuringMobile can see everything that would be visible to T if it were in P, and compute the output of the program of T independently of the other members. The only thing it should do when it executes the program of T is subtract the values of the x and y variables from everything it sees in its snapshot, discard whatever has distance greater than T’s visibility radius from the center, and of course discard the robots of the TuringMobile and replace them with a single robot in the center. Then, the robots that are responsible for updating the x and y variables add the relative coordinates of the new destination point of T to these variables. Similarly, the robots responsible for updating the registers of T do so.
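This snapshot transformation can be sketched as follows (function and parameter names are our own; V_T stands for the simulated robot’s visibility radius):

```python
import math

# Sketch: turn a machine member's own snapshot into the snapshot the
# simulated robot T would see from its destination point: shift by the
# stored (x, y) variables, drop what lies beyond T's visibility radius,
# drop the machine's own members, and add one robot standing for T.

def simulated_snapshot(points, machine_members, xy_vars, V_T):
    shifted = [(p[0] - xy_vars[0], p[1] - xy_vars[1])
               for p in points if p not in machine_members]
    visible = [p for p in shifted if math.hypot(p[0], p[1]) <= V_T]
    return [(0.0, 0.0)] + visible

snap = simulated_snapshot(
    points=[(0.1, 0.0), (2.0, 1.0), (9.0, 9.0)],   # raw snapshot
    machine_members=[(0.1, 0.0)],                  # the machine's own robots
    xy_vars=(1.0, 1.0),                            # simulated position offset
    V_T=2.0)
assert snap == [(0.0, 0.0), (1.0, 0.0)]
```

Because every member applies the same transformation to the same data, all members of the TuringMobile feed identical inputs to the program of T and hence agree on its output.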
Note that the above simulation works also in the special case where is supposed to update its registers without moving. The Leader will move by in any direction, followed by the entire machine (because this is the only way the TuringMobile can update its registers), and the x and y variables will be updated with the old position of the Leader.
The above TuringMobile correctly simulates under certain conditions. The first one is that, if all robots are indistinguishable, then no robot extraneous to the TuringMobile may get too close to it (say, within a distance of of any of its members). This kind of restriction cannot be dispensed with: whatever strategy a team of oblivious robots employs to simulate a single non-oblivious robot’s behavior is bound to fail if extraneous robots join the team, creating ambiguities among its members. Nevertheless, the restriction can be removed if we stipulate that the members of a TuringMobile are distinguishable from all other robots.
Another difficulty comes from the fact that, if the TuringMobile is made of more than one basic component and its Commanders are all in their respective ’s and ready to update the values represented by the machine, they may take their snapshots at different times, due to asynchrony. If the environment moves in the meantime, the snapshots they get are different, and this may cause the machine to compute an incorrect destination point or put inconsistent values in its simulated registers.
There are several possible solutions to this problem: we will only mention two trivial ones. We could for instance assume the Commanders to be synchronous, that is, make the scheduler activate them in such a way that all of them take their snapshots at the same time. This way, all Commanders get compatible snapshots and compute consistent outputs. Another possible solution is to make the TuringMobile operate in an environment where everything else is static, i.e., no moving entities are present other than the TuringMobile’s members.
We stress that these restrictions make sense if a perfect simulation of is sought. As we will see in Section 5, there are several other applications of the TuringMobile technique where no such restriction is required.
Let us now generalize the above construction of a planar TuringMobile to , for any . We start with the same TuringMobile with basic components laid out on a plane . Since has only two basic components for the and variables, we will add basic components to it, positioned as follows.
Let vectors and be two orthonormal generators of , and let us complete to an orthonormal basis of . Now, for all , we make a copy of the basic component of containing the Leader, we translate it by , and we add it to the TuringMobile ( is the same value used in the construction of the planar TuringMobile at the beginning of Section 3.2). Note that the Leader of this new TuringMobile is still easy to identify, as well as the plane when is at rest.
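Completing two orthonormal vectors to an orthonormal basis of the whole space, as used above, is a standard Gram–Schmidt computation. A minimal sketch, assuming the input vectors are already orthonormal:

```python
def complete_orthonormal_basis(vs, n, eps=1e-9):
    """Extend a list of orthonormal n-dimensional vectors to a full
    orthonormal basis of R^n by orthogonalizing the standard basis
    vectors against what has been collected so far (Gram-Schmidt)."""
    basis = [list(v) for v in vs]
    for i in range(n):
        e = [1.0 if j == i else 0.0 for j in range(n)]
        for b in basis:  # subtract the projection onto each basis vector
            dot = sum(x * y for x, y in zip(e, b))
            e = [x - dot * y for x, y in zip(e, b)]
        norm = sum(x * x for x in e) ** 0.5
        if norm > eps:   # keep e only if it is linearly independent
            basis.append([x / norm for x in e])
        if len(basis) == n:
            break
    return basis
```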
Clearly, basic components allow the machine to record a destination point in , as opposed to . Additionally, the positions of the basic components with respect to give the machine an -dimensional sense of direction.
For instance, say that , is a horizontal plane, and points upward. Then, when the Leader decides to move up, it moves by in the direction of the basic component of the TuringMobile not lying on (first stopping in a midway triangle, as per Algorithm 1). The rest of can reconstruct the direction of , for instance by inspecting the relative positions of the Reference robots, and move as required when the time comes. In the subsequent moves, the Leader still retains a consistent notion of up and down, and can therefore lead close enough to the destination point.
The same restrictions that apply to the planar TuringMobile as a simulator of course extend to its higher-dimensional versions. The next section will be devoted to proving the following theorem, which summarizes the results obtained so far.
Under the aforementioned restrictions, a rigid robot in with persistent registers and visibility radius can be simulated by a team of non-rigid oblivious robots in with visibility radius .
This section is devoted to the proof of Theorem 2. The crux of the proof is the following lemma, which states that a single basic component of the TuringMobile, as described in Section 3.1, works as intended.
The fundamental lemma
Let a TuringMobile in consisting of a single basic component execute Algorithm 1, and assume that throughout the execution no object extraneous to the machine approaches any of its members by less than . If at some point in time the TuringMobile is in a rest position and none of its members is moving anywhere, then, at a point in time , the TuringMobile is in a rest position again, its Commander and Reference robot have translated by a vector of length in one of three predefined directions (as in Figure 2), its Number robot has correctly updated its distance from the Reference robot (according to some function of the previous distance and the TuringMobile’s surrounding environment as observed by the Commander in a single snapshot taken between times and ), and no member of the TuringMobile is moving anywhere.
A robot is “not moving anywhere” at time if it has already reached the last destination point that it has computed before time , and it has not taken the next snapshot before time (although it may be taking the snapshot exactly at time ).
If Lemma 1 holds, then a TuringMobile in with only one basic component correctly performs a single step of the execution, rigidly moving by and updating the real number that it is storing. By repeatedly applying this lemma, we have the correctness of the entire execution of a basic component.
Let us prove Lemma 1. The intended behavior of the machine is for the execution to go through the following five phases in chronological order:
During each phase, only one robot is supposed to move, while the other two wait. If we can ensure this behavior, then Lemma 1 follows.
Recall that the robots constituting the TuringMobile are asynchronous and non-rigid. This means that we have to guarantee two things for each of the above phases:
If a robot moves as per phase and another robot sees it at any time before it has finished (due to asynchrony), the second robot does not mistakenly think that the current phase is not , and hence it waits.
If a robot moves as per phase and the scheduler stops it before it has reached its destination (due to non-rigidity), the robot takes another snapshot, and correctly resumes phase .
If the assumptions of Lemma 1 are satisfied, the first robot to take a snapshot after time (or exactly at time ) will see a TuringMobile in a rest position. As a consequence, the Virtual Commander coincides with the Commander, and therefore only the Commander is allowed to move toward some .
While the Commander moves, its distance from the Reference robot never gets as small as or as large as (cf. Figure 2), hence the conditions of line 14 of Algorithm 1 are never satisfied. Also, the Virtual Commander computed with respect to and always coincides with the starting position of the Commander, which means that the Commander will be seen on the segment , implying that only the Commander will be allowed to move.
Since the Commander approaches by at least at every movement (cf. Section 2.2), it eventually reaches it. When it reaches , it chooses a destination point on based on a single snapshot of the environment (as required by Lemma 1): once a destination point has been chosen, it never changes even if the Commander is stopped before reaching it, due to lines 8 and 9. Since the Number robot and the Reference robot have not moved yet, the number stored in the machine is still the same as it was at time , and therefore the point on chosen by the Commander is correctly computed by applying function to . Again, only the Commander is allowed to move, and it eventually reaches , for the same reasons as before.
When the Commander is on , it waits until the Number robot has a distance from of . Observe that, as the Number robot moves within , the slope of the line does not change, and therefore the Virtual Commander computed with respect to and is always the position that occupied at time . So, the point is always the same, and the Number robot keeps consistently moving toward the same destination point on .
As for phase 1, never becomes as small as or as large as , and therefore the Number robot always executes line 21 until it reaches the correct point on .
When , the Commander knows it has to start moving again, first to and then to , due to lines 10–12. Again, while this happens the Virtual Commander computed based on and is always the same point , so the positions of , , , etc. remain consistent, and the distance between and never gets as small as or as large as until the Commander has reached . In particular the Number robot never sees the Commander on after it has left it, and so it does not move. Eventually, the Commander reaches .
When the Commander reaches , its distance from finally becomes (if ) or (if or ), and so the Number robot executes lines 15–18 and starts moving. While the Number robot moves, the Commander does not: indeed, as long as the Number robot is tasked to move, the Reference robot never moves (cf. line 24), and hence remains the same. Therefore, if the Commander computes a Virtual Commander based on and , and then computes the points and the midway triangles with respect to , it will never believe itself to be in or in the interior of the segment or in , no matter where is. This is because all such points have distance greater than and smaller than from (cf. Figure 2). So, the conditions of lines 5–12 are never satisfied.
Suppose that the Commander is in . This configuration is correctly identified by the Number robot no matter how it moves, because it is the only one in which . So the Number robot computes the point correctly (it does so only based on and ) and keeps moving toward until it reaches it (line 16).
Suppose now that the Commander is in . The Number robot recognizes this configuration because and . Again, it correctly computes and moves toward it (line 17). As the Number robot moves, the angle grows (cf. Figure 2), and so the condition of line 17 keeps being satisfied. Eventually, the Number robot reaches .
Finally, suppose that the Commander robot is in . Now and , and so the Number robot starts moving toward (cf. Figure 2). We have to prove that, if it stops on its way to and gets a new screenshot, the inequality keeps being satisfied, and so the Number robot re-computes as its destination point, until it reaches it. This is not trivial, since the angle grows as approaches .
The situation is illustrated in Figure 5, where represents the starting position of the Commander and the starting position of the Number robot. Since , proving that is equivalent to proving that (where ).
By the law of sines applied to triangle ,
Again, by the law of sines applied to triangle ,
where . Hence
Observe that , because lies on , and therefore (i.e., the numerator of (1) is greater than that of (2)). Recall that , and so . Moreover, since (cf. Figure 2) and , it follows that . Also observe that , from which we obtain that (i.e., the denominator of (1) is smaller than that of (2)). As a consequence, . Since , both and are acute, which means that (the function increases monotonically when ).
Since in the previous phases either the Commander or the Number robot was always tasked to move, the condition of line 24 was never satisfied, and hence the Reference robot never moved. Now that the Commander is in and the Number robot is in , they are no longer tasked to move, and so the Reference robot executes lines 25–27.
Ideally, the Reference robot should complete the translation of the TuringMobile in order to put it in a rest position again. This is achieved by moving by vector , where is the initial position of the Commander (i.e., its position when phase 1 starts). Instead of trying to reconstruct , the Reference robot constructs two circumferences and and moves to their nearest intersection point. Note that passes through the center of , and hence it has at most two intersection points with it. At least one intersection point exists: this is the point , where is the initial position of the Reference robot (which coincides with its position when phase 5 starts). If there is another intersection point between the two circles, it must be symmetric to with respect to line , because this line passes through the centers of both circles (recall that the segment is a diameter of ). So, assuming that the Commander and the Number robot do not move in this phase, remains the destination point of the Reference robot as long as the robot never crosses the line . But this is impossible, since the segment has length , and therefore it cannot cross the line , whose distance from is roughly .
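The intersection of two circles that the Reference robot computes can be checked against the standard radical-axis construction. A sketch (the specific centers and radii used by the machine are elided here):

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Intersection points of two circles with centers c0, c1 and
    radii r0, r1. Returns a list of 0, 1, or 2 points."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.dist(c0, c1)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []  # coincident centers, or circles too far apart/nested
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)  # distance from c0 to the chord
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))   # half-length of the chord
    xm = x0 + a * (x1 - x0) / d                # midpoint of the chord
    ym = y0 + a * (y1 - y0) / d
    if h == 0:
        return [(xm, ym)]                      # tangent circles
    return [(xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)]
```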
It remains to prove that, as the Reference robot moves toward , the Commander and the Number robot remain still. Recall that, when phase 5 starts, either or . As approaches (and does not move), converges monotonically to . It follows that never becomes or again, and so the condition of line 14 is never satisfied.
Consider now the Virtual Commander computed with respect to and when is strictly between and , and construct the three segments and the three midway triangles around . If we can prove that, no matter where is located in the interior of the segment , never lies on any of these segments and triangles, we are finished: indeed, this would mean that the conditions of lines 5–12 and line 21 are never satisfied.
Suppose that the Number robot has moved to during phase 4, which means that at the beginning of phase 5 we have : this case is illustrated in Figure 6. We have to show that does not lie on any of the solid gray lines and triangles around the Virtual Commander . It is obvious that the lines and are not parallel and intersect each other at . Also, and are always on opposite sides of . This already implies that cannot be on the segments and or on their respective midway triangles.
To show that does not lie on or , consider the circle through centered at . Note that is always outside the circle, because its radius is , but . Since can be arbitrarily small compared to , the angle between and can be made arbitrarily small, as well (cf. Figure 6). If we take a small-enough , the segment is entirely contained in the circle, and hence it cannot contain . Moreover, since has height , by taking a small-enough we ensure that is contained in the circle, too.
Suppose now that the Number robot has moved to during phase 4: then, at the beginning of phase 5, and . On the other hand, when reaches , we have and . It follows that, when is strictly between and , and (because both quantities change monotonically), as Figure 7 shows. Since , cannot be located on or or , because all their points satisfy . Also, all the points in have distance at least from (recall that ), and so cannot lie in , because .
Let us show that does not lie on or . Observe that , that , and that and are parallel (cf. Figure 7). It follows that the line is obtained by rotating line about by some angle . These two lines are not parallel, and hence they intersect in a single point . As approaches , tends monotonically to , and approaches the foot of the altitude from to the line . So, if we take small enough with respect to , we can keep as close as we want to . The distance between and is obviously minimum when , in which case . It follows that, for small-enough values of , is always as close as we want to . Hence we have , proving that , and so cannot be on . Also, since , and are always on opposite sides of (cf. Figure 7), and so cannot be in .
Lastly, suppose that the Number robot has moved to during phase 4: then, at the beginning of phase 5, and , as shown in Figure 8. Similarly to the previous case, we can prove that cannot lie on or because the line is obtained by rotating line about by some angle that can be made arbitrarily small by just decreasing . Again, this implies that the intersection point between the lines and can be kept at a distance from arbitrarily close to , and can therefore never coincide with , which is only away from . Also, because ( increases monotonically and converges to as converges to ), and are always on opposite sides of , and so cannot be in .
To conclude the proof, it suffices to show that and lie on strictly opposite sides of line : indeed, this would imply that is not on the segments and or in their respective midway triangles, because these lie on the same side of as or on the line itself (cf. Figure 8). To prove this claim, consider a Cartesian coordinate system with origin in and axis oriented as . Let . Since the line forms an angle of with the axis, the coordinates of are
We therefore have
We also have
It follows that the line has equation
Since the line is orthogonal to , it has equation
Observe that passes through the origin and its slope is positive. Hence lies above this line, because its coordinate is negative and its coordinate is positive.
Let us now plug the coordinate of in (3):
Recall from the discussion on phase 4 that (it corresponds to in Figure 5), and therefore the in (4) is considerably greater than . On the other hand, the coordinate of is , which is smaller than , implying that lies below the line . We conclude that and lie on opposite sides of .
We have just proved that the Reference robot keeps moving until it reaches , thus bringing the TuringMobile in a rest position again, say at time . We ultimately observe that the real number stored in the machine at time is the same as the one the Commander computed in phase 1 and that the Number robot copied during phase 2. This is because the Number robot and the Reference robot, during phases 4 and 5 respectively, have moved by in the same direction: so, at the end of phase 5, their distance is the same as it was at the end of phase 2.
In this section we discuss some applications of the TuringMobile. We also prove that the basic TuringMobile constructed in Section 3.1 is minimal, in the sense that no smaller team of oblivious robots can accomplish the same tasks.
5.1 Exploring the Plane
The first elementary task a basic TuringMobile in can fulfill is that of exploring the plane. The task consists in making all the robots in the TuringMobile see every point in the plane in the course of an infinite execution. We first assume that the three members of the TuringMobile are the only robots in the plane. Later in this section, we will extend our technique to other types of scenarios and more complex tasks.
A basic TuringMobile consisting of three robots in can explore the plane.
Recall that a basic TuringMobile can store a single real number and update it at every move as a result of executing a real RAM program with input . In particular, the TuringMobile can count how many times it has moved by simply starting its execution with and computing at each move.
Moreover, the Commander chooses the direction of the next move (in the form of a point , see Figure 2) by executing another real RAM program with input . If is an integer, the Commander can therefore compute any Turing-computable function on , and so it can decide to move to the first time, then to twice, then to three times, to four times, and so on. This pattern of moves is illustrated in Figure 9, and of course it results in the exploration of the plane, because the visibility radius of the robots is much greater than the step . ∎
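A hedged sketch of such a counter-to-direction rule follows. The four compass directions and the run lengths 1, 2, 3, ... are chosen purely for illustration (the paper's actual directions are those of Figure 9); the point is that the direction of the n-th move is computable from the move counter alone.

```python
# Assumed cycling order of directions (illustrative, not the paper's).
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def spiral_direction(n):
    """Direction of the n-th move (n = 0, 1, 2, ...): the k-th leg of
    the spiral consists of k consecutive moves, k = 1, 2, 3, ..."""
    k, start = 1, 0
    while start + k <= n:  # find the leg containing move n
        start += k
        k += 1
    return DIRS[(k - 1) % 4]

def position_after(n):
    """Position reached after n moves from the origin."""
    x = y = 0
    for i in range(n):
        dx, dy = spiral_direction(i)
        x, y = x + dx, y + dy
    return (x, y)
```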
5.2 Minimality of the Basic TuringMobile
We can use the previous result to prove indirectly that our basic TuringMobile design is minimal, because no team of fewer than three oblivious robots in can explore the plane.
If only one or two oblivious identical robots with limited visibility are present in , they cannot explore the plane, even if the scheduler lets them move synchronously and rigidly.
Assume that a single oblivious robot is given in . Since it always gets the same snapshot, it always computes the same destination point in its local coordinate system, and so it always translates by the same vector. As a consequence, it just moves along a straight ray, and therefore it cannot explore the plane.
Let two oblivious robots be given, and suppose that their local coordinate systems are oriented symmetrically. Whether the robots see each other or not, if they take their snapshots simultaneously, they get identical views, and so they compute destination points that are symmetric with respect to . If they keep moving synchronously and rigidly, remains their midpoint. So, if the robots have visibility radius , they see each other if and only if they are in the circle of radius centered in .
Let be the midpoint of the robots’ locations, and consider a Cartesian coordinate system with origin . Without loss of generality, when the robots do not see each other, they move by vectors and , respectively. Let be the half-plane , and observe that lies completely outside .
It is obvious that the robots cannot explore the entire plane if neither of them ever stops in . The first time one of them stops in , it takes a snapshot from there, and starts moving parallel to the axis, thus never seeing the other robot again, and never leaving . Of course, following a straight line through is not enough to explore all of it. ∎
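The one-robot half of the argument can be checked with a toy simulation. The destination vector v below is an arbitrary illustrative choice: an oblivious robot alone always sees the same (empty) snapshot, so it repeats the same local move every cycle, and the visited positions are collinear.

```python
def run(steps, v=(0.7, 0.3)):
    """Simulate an oblivious robot alone in the plane: every snapshot
    is identical, so it translates by the same vector v every cycle."""
    pos = (0.0, 0.0)
    trail = [pos]
    for _ in range(steps):
        pos = (pos[0] + v[0], pos[1] + v[1])
        trail.append(pos)
    return trail

def collinear(trail, eps=1e-9):
    """Check that all points of the trail lie on a single line."""
    (x0, y0), (x1, y1) = trail[0], trail[-1]
    return all(abs((x - x0) * (y1 - y0) - (y - y0) * (x1 - x0)) < eps
               for x, y in trail)
```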
5.3 Near-Gathering with Limited Visibility
The exploration technique can be applied to several more complex problems. The first we describe is the Near-Gathering problem, in which all robots in the plane must get in the same disk of a given radius (without colliding) and remain there forever. It does not matter if the robots keep moving, as long as there is a disk of radius that contains them all.
It is clear that solving this problem from every initial configuration is not possible, and hence some restrictive assumptions have to be made. The usual assumption is that the initial visibility graph of the robots be connected [11, 14]. Here we make a different assumption: there are three robots that form a basic TuringMobile somewhere in the plane, and each robot not in the TuringMobile has distance at least from all other robots. (Actually we could weaken this assumption much more, but this simple example is good enough to showcase our technique.)
Say that all robots in the plane have a visibility radius of , and that the TuringMobile moves by at each step. The TuringMobile starts exploring the plane as above, and it stops in a rest position as soon as it finds a robot whose distance from the Commander is smaller than and greater than . On the other hand, if a robot is not part of the TuringMobile, it waits until it sees a TuringMobile in a rest position at distance smaller than . When it does, it moves to a designated area in the proximity of the Commander. Such an area has distance at least from the Commander, so no confusion can arise in the identification of the members of the TuringMobile. If several robots are eligible to move to , only one at a time does so: note that the layout of the TuringMobile itself gives an implicit total order to the robots around it. Observe that the robots cannot form a second TuringMobile while they move to : in order to do so, two of them would have to move to at the same time and get close enough to a third robot. Once they enter , the robots position themselves on a segment much shorter than , so they cannot possibly be mistaken for a TuringMobile.
Once the eligible robots have positioned themselves in , the TuringMobile resumes its exploration of the plane, and the robots in copy all its movements. Now, if the total number of robots in the plane is known, the TuringMobile can stop as soon as all of them have joined it. Otherwise, the machine simply keeps exploring the plane forever, eventually collecting all robots. In both cases, the Near-Gathering problem is solved.
5.4 Pattern Formation with Limited Visibility
Suppose the robots are exactly , and they are tasked to form a given pattern consisting of a multiset of points: this is the Pattern Formation problem, which becomes the Gathering problem in the special case in which the points are all coincident. For this problem, it does not matter where the pattern is formed, nor does its orientation or scale.
Again, the Pattern Formation problem is unsolvable from some initial configurations, so we make the same assumptions as with the Near-Gathering problem. The algorithm starts by solving the Near-Gathering problem as before. The only difference is that now there is a second tiny area , attached to (and still far enough from the TuringMobile), which the robots avoid when they join . This is because this second area will be used to form the pattern.
Since is known, the TuringMobile knows when it has to interrupt the exploration of the plane because all robots have already been found. At this point, the robots switch algorithms: one by one, they move to and form the pattern. This task is made possible by the presence of the TuringMobile, which gives an implicit order to all robots, and also unambiguously defines an embedding of the pattern in . So, each robot is implicitly assigned one point in , and it moves there when its turn comes.
If or , there are uninteresting ad-hoc algorithms to do this: so, let us assume that . The first to move are the robots in : this part is easy, because they all lie on a small segment, which already gives them a total order. The robots only have to be careful enough not to collide with other robots before reaching their final positions.
When this part is done, there are at least two robots in , all of which have distance much smaller than from each other. Then the members of the TuringMobile join as well, in order from the closest to the farthest. Each of them chooses a position in based on the robots already there and the remnants of the TuringMobile. Moreover, the members of the TuringMobile that have not started moving to yet cannot be mistaken for robots in , because they are at a greater distance from all others (and vice versa).
Note that, when the last robot leaves the TuringMobile and joins , it is able to find its final location because there are already at least four robots there, which provide a reference frame for the pattern to be formed. When this last robot has taken position in , the pattern is formed.
5.5 Higher Dimensions
Everything we said in this section pertained to robots in the plane. However, we can generalize all our results to robots in , for . Recall that, at the end of Section 3.2, we have described a TuringMobile for robots in , which can move within a specific plane exactly as a bidimensional TuringMobile, but can also move back and forth by in all other directions orthogonal to .
Now, extending our results to actually boils down to exploring the space with a TuringMobile: once we can do this, we can easily adapt our techniques for the Near-Gathering and the Pattern Formation problem, with negligible changes.
There are several ways a TuringMobile can explore : we will only give an example. Consider the exploration of the plane described at the beginning of this section, and let be the point reached by the Commander after its th move along the spiral-like path depicted in Figure 9 ( is the initial position of the Commander).
Our -dimensional TuringMobile starts exploring as if it were . Whenever it visits a for the first time, it goes back to . From , it keeps making moves orthogonal to until it has seen all points in whose projection on is and whose distance from is at most . Then it goes back to , moves to , and repeats the same pattern of moves in the section of whose projection on is . It then does the same thing with , etc. When it reaches (for the first time), it goes back to , and proceeds in the same fashion. By doing so, it explores the entire space .
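The covering property behind this schedule can be illustrated with a simple dovetailing enumeration of grid points: at stage n, visit every not-yet-visited point whose coordinates are bounded by n. The paper's concrete schedule differs, but the principle is the same: every fixed point is reached after finitely many stages.

```python
def explore_3d(limit):
    """Dovetailing sketch: enumerate the points of Z^3 in stages, so
    that every fixed point is visited after finitely many stages."""
    seen = set()
    order = []
    for n in range(limit + 1):
        for x in range(-n, n + 1):
            for y in range(-n, n + 1):
                for z in range(-n, n + 1):
                    if (x, y, z) not in seen:
                        seen.add((x, y, z))
                        order.append((x, y, z))
    return order
```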
Note that this algorithm only requires the TuringMobile to count how many moves it has made since the beginning of the execution: thus, the machine only has to memorize a single integer. The direction of the next move according to the above pattern is then obviously Turing-computable given the move counter.
We have introduced the TuringMobile as a special configuration of oblivious non-rigid robots that can simulate a rigid robot with memory. We have also applied the TuringMobile to some typical robot problems in the context of limited visibility, showing that the assumption of connectedness of the initial visibility graph can be dropped if a unique TuringMobile is present in the system. Our results hold not only in the plane, but also in Euclidean spaces of higher dimensions.
The simplest version of the TuringMobile (Section 3.1) consists of only three robots, and is the smallest possible configuration with these characteristics (Theorems 3 and 4). Our generalized TuringMobile (Section 3.2), which works in and simulates registers of memory, consists of robots (Theorem 2). We believe we can decrease this number to by putting all the Number robots in the same basic component and adopting a more complicated technique to move them. However, minimizing the number of robots in a general TuringMobile is left as an open problem.
Our basic TuringMobile design works if the robots have the same radius of visibility, because that allows them to implicitly agree on a unit of distance. We could remove this assumption and let each of them have a different visibility radius, but we would have to add a fourth robot to the TuringMobile for it to work (as well as keep the TuringMobile small compared to all these radii).
Recall that, in order to encode and decode arbitrary real numbers we used the function and its inverse, which in turn are computed using the and the functions. However, using transcendental functions is not essential: we could achieve a similar result by using only comparisons and arithmetic operations. The only downside would be that such a real RAM program would not run in a constant number of machine steps, but in a number of steps proportional to the value of the number to encode or decode. With this technique, we would be able to dispense with the trigonometric functions altogether, and have our robots use only arithmetic operations and square roots to compute their destination points.
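For instance, assuming an arctan-based encoding (a hypothetical stand-in, since the paper's specific functions are not reproduced in this excerpt), any computable bijection between the reals and a bounded interval lets the machine store an arbitrary real number as a distance below a fixed bound:

```python
import math

def encode(x):
    """Map a real number into the open interval (0, 1), using atan,
    which maps R onto (-pi/2, pi/2). Illustrative choice only."""
    return math.atan(x) / math.pi + 0.5

def decode(y):
    """Inverse of encode: recover the real number from its encoding."""
    return math.tan((y - 0.5) * math.pi)
```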
-  N. Agmon and D. Peleg. Fault-tolerant gathering algorithms for autonomous mobile robots. SIAM Journal on Computing, 36(1):56–82, 2006.
-  A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Massachusetts, 1974.
-  H. Ando, Y. Oasa, I. Suzuki, and M. Yamashita. Distributed memoryless point convergence algorithm for mobile robots with limited visibility. IEEE Transactions on Robotics and Automation, 15(5):818–828, 1999.
-  M. Cieliebak, P. Flocchini, G. Prencipe, and N. Santoro. Distributed computing by mobile robots: Gathering. SIAM Journal on Computing, 41(4):829–879, 2012.
-  R. Cohen and D. Peleg. Convergence properties of the gravitational algorithm in asynchronous robot systems. SIAM Journal on Computing, 34(6):1516–1528, 2005.
-  S. Das, P. Flocchini, N. Santoro, and M. Yamashita. Forming sequences of geometric patterns with oblivious mobile robots. Distributed Computing, 28(2):131–145, 2015.
-  B. Degener, B. Kempkes, P. Kling, and F. Meyer auf der Heide. Linear and competitive strategies for continuous robot formation problems. ACM Transactions on Parallel Computing, 2(1): 2:1–2:18, 2015.
-  B. Degener, B. Kempkes, T. Langner, F. Meyer auf der Heide, P. Pietrzyk, and R. Wattenhofer. A tight runtime bound for synchronous gathering of autonomous robots with limited visibility. In 23rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 139–148, 2011.
-  P. Flocchini, G. Prencipe, and N. Santoro. Distributed Computing by Oblivious Mobile Robots. Morgan & Claypool, 2012.
-  P. Flocchini, G. Prencipe, N. Santoro, and G. Viglietta. Distributed computing by mobile robots: Uniform circle formation. Distributed Computing, 2016, to appear, doi:10.1007/s00446-016-0291-x.
-  P. Flocchini, G. Prencipe, N. Santoro, and P. Widmayer. Gathering of asynchronous robots with limited visibility. Theoretical Computer Science, 337(1–3):147–168, 2005.
-  P. Flocchini, G. Prencipe, N. Santoro, and P. Widmayer. Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theoretical Computer Science, 407(1–3):412–447, 2008.
-  N. Fujinaga, Y. Yamauchi, S. Kijima, and M. Yamashita. Pattern formation by oblivious asynchronous mobile robots. SIAM Journal on Computing, 44(3):740–785, 2015.
-  L. Pagli, G. Prencipe, and G. Viglietta. Getting close without touching: Near-Gathering for autonomous mobile robots. Distributed Computing, 28(5):333–349, 2015.
-  F. P. Preparata and M. I. Shamos. Computational Geometry. Springer-Verlag, Berlin and New York, 1985.
-  H. Rogers, Jr. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.
-  M. I. Shamos. Computational Geometry. Ph.D. thesis, Department of Computer Science, Yale University, 1978.
-  I. Suzuki and M. Yamashita. Distributed anonymous mobile robots: formation of geometric patterns. SIAM Journal on Computing, 28(4):1347–1363, 1999.
-  M. Yamashita and I. Suzuki. Characterizing geometric patterns formable by oblivious anonymous mobile robots. Theoretical Computer Science, 411(26–28):2433–2453, 2010.
-  Y. Yamauchi, T. Uehara, S. Kijima, and M. Yamashita. Plane formation by synchronous mobile robots in the three-dimensional Euclidean space. Journal of the ACM, 64(3):16:1–16:43, 2017.