Variable-Scaling Rate Control for Collision-Free Teleoperation of an Unmanned Aerial Vehicle

Dawei Zhang and Rebecca P. Khurshid

Dawei Zhang is with the Department of Mechanical Engineering, Boston University, Boston, MA 02215, USA (dwzhang@bu.edu). Rebecca P. Khurshid is with the Department of Mechanical Engineering and the Division of Systems Engineering, Boston University, Boston, MA 02215, USA (khurshid@bu.edu).
Abstract

We propose that automatically adjusting the scale factor in rate-control teleoperation could enable a human operator to better control the motion of a remote robot. In this paper, we present four new variable-scaling rate-control methods that adjust the scale factor depending on the state of the user’s input commands and/or the risk of a collision between the robot and its environment. Methods that depend on the risk of a collision are designed to guarantee collision avoidance by setting the scale factor to be zero if the operator issues a command that would result in a collision between the robot and its environment. A within-subject user study was conducted to determine the effects of the four newly designed rate-control methods and a traditional fixed-scale rate-control method on a person’s ability to complete a navigation task in a simulated two-dimensional environment. The results of this study indicate that well-designed variable-scale rate control can guarantee collision-free teleoperation without reducing task efficiency.

I Introduction

A small and agile unmanned aerial vehicle (UAV) can be used by rescue teams to search for survivors in the wake of a man-made or natural disaster. For example, a rescue worker could fly the UAV through a building that is unsafe or impossible for the rescue team to enter themselves. The rescue worker should be able to quickly fly the UAV, while deftly maneuvering around objects in its environment. The rescue worker must also be able to precisely control the UAV’s motion if he or she wants to carefully inspect a certain area.

To control the motion of the remote robot, the human operator issues commands using a control interface, such as a joystick. To enable the operator to control the motion of the robot over large distances using a much smaller control interface, it is necessary to modify the operator’s input commands through a forward-control method to calculate the desired state of the robot. Rate control, also known as velocity control, is the most common control method used to remotely control unmanned ground or aerial vehicles [15]. Under this method, the robot’s commanded velocity is proportional to the position of the control interface. While rate control enables an operator to span large areas with the remote robot, a major limitation is that it can be difficult for the operator to precisely control the position of the remote robot through rate-control teleoperation [2, 5]. If the remote robot can move quickly, as is the case for agile UAVs, the operator could easily crash the remote robot. Thus, although the methods developed in this paper could be applied to any remote mobile robot, our intended application is the remote control of agile UAVs.

Several researchers have implemented haptic feedback schemes as a means to reduce collisions between a remotely controlled UAV and its environment [10, 1, 12, 16]. Under these schemes, a grounded kinesthetic (force-reflecting) control interface is used to apply a force on the user when there is an increased risk of a collision. Typically, the magnitude of the force is related to the risk of a collision and the direction of the force is pointed directly away from the object that poses the greatest risk. For example, Brandt and Colton set the magnitude of the force of the haptic feedback to be proportional to the time that it would take the UAV to collide with an object in its environment, if the UAV continued flying with its current velocity [1]. The results of a user study showed that this method was effective in reducing the number of collisions between the robot and its environment, without sacrificing task efficiency [1]. Drawing on potential functions used in robotic path planning, Lam et al. proposed a parametric risk field to calculate the risk of a collision, which was then used to generate the magnitude of the force exerted on the user through the control interface [11]. Hou and Mahony implemented a similar method using an admittance-type haptic device to physically prevent the operator from issuing a command that would result in a collision [8].

Alternatively, changing the mapping between the operator’s input commands and the commanded state of the remote robot can help improve the operator’s ability to control the motion of a remote robot. Hybrid forward-control schemes have been implemented to automatically switch between enabling the operator to directly control the robot’s velocity, which is better for large movements, or enabling the operator to directly control the robot’s position, which enables more precise control of the robot’s position [5, 13]. Romano et al. proposed a hybrid control law that commands the robot’s velocity to be proportional to the square of the velocity of the control interface, so that slow movements of the control interface enable the operator to precisely control the position of the robot and fast movements of the control interface enable the operator to move the robot quickly [19].

In this paper, we present novel variable-scaling rate control methods to enable both fine and fast control of a robot by automatically changing the scale factor relating the position of the control interface to the commanded velocity of the remote robot. The scale factor is adjusted based on the human operator’s input commands to allow for finer motion control when the operator issues smaller input commands. The scale factor is also adjusted based on the risk of the UAV’s collision in a manner that guarantees that the robot cannot be commanded to collide with an object in its environment.

We note that shared-autonomy methods also use information related to the user’s input and the robot’s environment to help an operator control the motion of a remote robot (or virtual agent). A leading shared-autonomy paradigm involves the robot predicting the operator’s goal (or a probability distribution over the operator’s goals) online and acting semi-autonomously to help achieve the predicted goal [4, 7, 9, 14]. Another method helps ensure safety even when the operator commands an unsafe action, by having the robot move to the closest state to the human’s command that satisfies some safety criteria [20]. Another method uses a policy trained via deep reinforcement learning to alter the operator’s commands, if they are deemed to be sufficiently suboptimal, to a sufficiently near-optimal action closest to the user’s input [18]. In these state-of-the-art shared-autonomy systems, the human operator’s input is used to generate intermediate commands, which are then blended with or replaced by the commands generated by the shared-autonomy method. In this paper, we seek to improve a direct mapping between the user’s input and the commands sent to the robot.

A detailed description of our methods is presented in Section II. Section III describes the design and results of a user study investigating the effects of variable-scaling rate control on an operator’s ability to control a simulated robot in a two-dimensional environment. We interpret the results of this user study in Section IV. Finally, Section V presents the main conclusions of this paper and our plans for future research.

II Variable-Scaling Rate Control

Variable-scaling rate control builds on the classical rate-control method. Rate control in robotic teleoperation is used to map the position of the operator’s control interface to the desired velocity of the remote robot, which can be described as follows:

$\mathbf{v}_d = k\,\mathbf{x}_c$   (1)

In this equation the commanded velocity of the robot, $\mathbf{v}_d$, is proportional to the position of the control interface, $\mathbf{x}_c$, through a proportionality constant, $k$. Picking an appropriate scale factor, $k$, can be challenging. If the scale factor is too small, the motion of the robot will be slow and the operator will need to spend more time and energy to move the robot to the goal position, which may be frustrating. On the other hand, if the scaling is too large, it will be hard for the operator to precisely control the position of the robot. A large scale factor can also increase the likelihood that an operator would crash the remote robot, especially if the remote robot can move quickly, as is the case for agile UAVs.
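
As a concrete sketch, classical rate control (Equation 1) amounts to a single scaling operation; the function and variable names below are our own, and the scale-factor value is purely illustrative:

```python
K = 5.0  # constant scale factor (illustrative value)

def rate_control(joystick_pos):
    """Classical rate control (Equation 1): the commanded velocity of the
    robot is the control-interface position scaled by a constant factor."""
    return [K * axis for axis in joystick_pos]
```

With this fixed mapping, half of the joystick’s maximum displacement always commands half of the maximum speed, regardless of how close the robot is to an obstacle.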

In this paper, we automatically adjust the value of $k$ based on the user’s input and the risk of a collision between the remote robot and its environment. Namely, we multiply a constant scale factor, $k_c$, by a scale factor related to the human’s input, $k_h$, and a scale factor relating to the risk of a collision, $k_r$:

$k = k_c\,k_h\,k_r$   (2)

II-A User’s Input

The scale factor related to the human’s input should allow for fast control when the human commands a large velocity and should allow for fine control when the human commands a smaller velocity. In this implementation, we use $k_h$ to reduce the robot’s commanded velocity if the user is displacing the control interface less than some distance, $d_h$, indicating that the human is trying to precisely control the position of the robot. If the human displaces the control interface greater than $d_h$, then $k_h$ is equal to 1. The scale factor related to the human’s input is represented by:

$k_h = \begin{cases} |\mathbf{x}_c| / d_h, & |\mathbf{x}_c| < d_h \\ 1, & |\mathbf{x}_c| \geq d_h \end{cases}$   (3)
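
A minimal sketch of this input-dependent scale factor, assuming a linear ramp below the displacement threshold (both the functional form and the threshold value here are illustrative assumptions, not taken from the paper):

```python
D_H = 0.3  # displacement threshold (illustrative value)

def k_human(displacement):
    """Input-dependent scale factor: reduced when the joystick displacement
    is below the threshold D_H (fine control), and 1 otherwise (fast
    control). The linear ramp below the threshold is an assumed form."""
    d = abs(displacement)
    return d / D_H if d < D_H else 1.0
```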

II-B Risk of Collision

The scale factor relating to the risk of a collision, $k_r$, decreases the commanded velocity as the risk that the UAV will collide with another object in its environment increases. This can be described as:

$k_r = 1 - R$   (4)

where $R$ represents the likelihood, between 0 and 1 inclusive, that the human will command the robot to collide with another object in its environment. If the risk of a collision is 0, then $k_r$ will be equal to 1 and the commanded speed of the robot will not be reduced. If the robot is certain to collide with another object, then the risk of a collision will be equal to 1 and $k_r$ will be equal to 0. Thus, the robot’s commanded velocity will be equal to 0, preventing a collision from occurring.
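
Combining Equations 2 and 4, the overall scale factor can be sketched as follows (the names are ours):

```python
def overall_scale(k_c, k_h, risk):
    """Overall scale factor: the constant factor times the input-dependent
    factor times k_r = 1 - risk. A certain collision (risk = 1) drives the
    scale factor, and hence the commanded velocity, to zero."""
    assert 0.0 <= risk <= 1.0
    return k_c * k_h * (1.0 - risk)
```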

We adopt the parametric risk field, developed by Lam et al. [11], to calculate the risk factor, $R$. Following [11], we first calculate a critical region, in which a collision will be unavoidable, represented by the red region in Fig. 1. The critical region depends on the robot’s current velocity and maximum acceleration. For a UAV, the critical region includes the space directly around the UAV, which is circumscribed by a circle that has a radius, $r$, which is shown by the dashed black circle in Fig. 1. The critical region also includes the space swept out by this circumscribing circle if the UAV were to decelerate as quickly as possible. The length of the critical region can be calculated by:

$L = \dfrac{|\mathbf{v}|^2}{2 a_{\max}}$   (5)

where $\mathbf{v}$ is the UAV’s current velocity and $a_{\max}$ is the magnitude of the UAV’s maximum acceleration.
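
The length in Equation 5 is simply the UAV’s stopping distance, which can be sketched as:

```python
def critical_region_length(speed, a_max):
    """Length of the critical region (Equation 5): the distance swept out
    if the UAV decelerates from its current speed at maximum
    acceleration."""
    return speed ** 2 / (2.0 * a_max)
```

Note that the length grows with the square of the speed, so doubling the UAV’s speed quadruples the extent of the critical region.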

If an object is located just outside the critical region, there is a high risk of collision between the UAV and that object. If an object is located far away from the critical region, there is a low risk of collision between the UAV and the object. Thus, Lam et al. proposed determining the risk of a collision using a potential field in the space around the critical region [11]. In this implementation, we have chosen to compute the potential field at all points within a distance, $\rho$, from the critical region. Because objects pose a lower risk at low velocities and a higher risk at higher velocities, we compute $\rho$ as follows:

$\rho = c_1 |\mathbf{v}| + c_2$   (6)

where $c_1$ and $c_2$ are constant values. The region over which the potential field will be computed is shown by the transparent gray region in Fig. 1.

Fig. 1: A UAV flown with velocity, V, through a hallway (shown in black) would unavoidably collide with any object located in the red region and may collide with any object located in the gray region. Transparent gray regions represent areas where a line-of-sight sensor mounted on the UAV would not be able to gather data.

The risk of a collision varies from 0, at the far extent of the potential field, to 1, at the boundary between the potential field and the critical region. The risk of collision between the UAV and a point on an obstacle, $p$, that is some distance, $d_p$, outside the critical region, can be computed by:

$R_p = f(d_p)$   (7)

The function $f$ can be any smooth function that ranges from 0, when $d_p = \rho$, to 1, when $d_p = 0$ at the border of the critical region. In this formulation, the point of occupied space that is closest to the critical region poses the highest risk of a collision. To calculate the overall risk of a collision of the UAV, one option is to set $R$ to be the maximum value of $R_p$ over all occupied points in the environment, such that:

$R = \max_{p \in \mathcal{O}} R_p$   (8)

where $\mathcal{O}$ denotes the set of occupied points in the environment.

Note that because the risk of a collision will only be equal to 1 when an object is located at the boundary of the critical region, this method will allow the UAV to get arbitrarily close to objects in its environment, although it will force the UAV to approach these objects at slow speeds.
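
A sketch of the risk computation in Equations 7 and 8, assuming a linear falloff for the smooth function $f$ (the text only requires $f$ to range from 0 at distance $\rho$ to 1 at the boundary of the critical region, so the linear choice here is our assumption):

```python
def point_risk(d, rho):
    """Risk posed by one occupied point (Equation 7): 1 at the boundary of
    the critical region (d = 0), falling to 0 at distance rho. The linear
    falloff is an assumed choice of the smooth function f."""
    return 0.0 if d >= rho else 1.0 - d / rho

def overall_risk(distances, rho):
    """Overall risk (Equation 8): the maximum point risk over all occupied
    points, given their distances outside the critical region."""
    return max((point_risk(d, rho) for d in distances), default=0.0)
```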

Theoretically, the UAV could become stuck if an object is located exactly at the boundary of the critical region, so that the commanded velocity would become zero and the operator would not be able to move away from the object. This is not likely a concern in practice because the commanded velocity of the robot is very small when obstacles are near the boundary of the critical region, so it takes considerable effort to force an object to be just at the border of the critical region. However, a practical concern of this implementation is that when the overall risk of a collision becomes high, the commanded velocity of the robot will be small, even when the operator is trying to move the robot away from the object. We address this limitation by introducing direction-dependent scaling methods to calculate .

II-C Direction-Dependent Scaling

The risk of a collision between the UAV and its environment is dependent both on the location of objects around the UAV and the direction of the UAV’s commanded velocity. For example, consider the scenario shown in Fig. 1, in which the operator is flying the UAV in a hallway. In this scenario, the individual point with the highest risk factor is located on the wall closest to the UAV. However, the operator is commanding the robot to fly away from this direction, and thus the actual risk of a collision is lower than the maximum value of $R_p$. In this example, it is clear that it could be beneficial to independently change the mapping scale in the X and Y directions. If the scale factor is independently varied in the X and Y directions, then fast control can occur along the direction of the hallway while fine control occurs in the direction of the nearest obstacle, i.e. the wall.

At each time step, we define the X-axis to be in the direction of the point that has the maximum risk, $R_p$, as determined by Equation 7. The Y-axis is set to be perpendicular to the X-axis, such that the direction of the commanded velocity, the X-axis, and the Y-axis are all coplanar. We can then extend Equation 1 to:

$\mathbf{v}_d = k_x x_c\,\hat{\mathbf{x}} + k_y y_c\,\hat{\mathbf{y}}$   (9)

where $k_x$ and $k_y$ are computed as:

$k_x = k_c\,k_h\,k_{r,x}$   (10)
$k_y = k_c\,k_h\,k_{r,y}$   (11)

In the above equations, all components are with respect to the local X-Y frame, whose X-axis points in the direction of the occupied point with the highest $R_p$. We note that no scale factor is needed in the Z-axis, because there is no component of the commanded velocity in the Z-axis.

$k_{r,x}$ and $k_{r,y}$ are calculated as $1 - R_x$ and $1 - R_y$, respectively. Similar to the overall risk factor, $R_x$ is calculated as:

$R_x = \max_{p \in \mathcal{P}_x} R_{x,p}$   (12)

where $\mathcal{P}_x$ is the set of points such that a line through the point in the X-direction would intersect with the critical region. $R_{x,p}$ is calculated by:

$R_{x,p} = f_x(d_{x,p})$   (13)

Here, $d_{x,p}$ is the magnitude of the X-component of the vector between point $p$ and the point on the critical region closest to $p$.

Similarly, $R_y$ is calculated as:

$R_y = \max_{p \in \mathcal{P}_y} R_{y,p}$   (14)

where $\mathcal{P}_y$ is the set of points such that a line through the point in the Y-direction would intersect with the critical region. $R_{y,p}$ is calculated by:

$R_{y,p} = f_y(d_{y,p})$   (15)

Here, $d_{y,p}$ is the magnitude of the Y-component of the vector between point $p$ and the point on the critical region closest to $p$. In Equations 13 and 15, $f_x$ and $f_y$ are smooth functions analogous to $f$ in Equation 7.
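
The direction-dependent scaling of Equations 9 through 15 can be sketched as follows, again assuming a linear falloff for the per-axis risk functions (the names and the falloff form are ours):

```python
def axis_risk(axis_distances, rho):
    """Per-axis risk (Equations 12-15): the maximum per-point risk over
    the points whose line along this axis intersects the critical region,
    given their axis-aligned distances to the critical region."""
    return max((1.0 - d / rho for d in axis_distances if d < rho),
               default=0.0)

def commanded_velocity(joystick_xy, k_c, k_h, risk_x, risk_y):
    """Direction-dependent rate control (Equations 9-11): independent
    scale factors along the local X-axis (toward the highest-risk point)
    and the perpendicular Y-axis."""
    k_x = k_c * k_h * (1.0 - risk_x)
    k_y = k_c * k_h * (1.0 - risk_y)
    return (k_x * joystick_xy[0], k_y * joystick_xy[1])
```

In the hallway of Fig. 1, the risk along the X-axis (toward the near wall) is high while the risk along the Y-axis (along the hallway) is low, so motion along the hallway remains fast while motion toward the wall is slowed.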

The direction of the commanded velocity, $\mathbf{v}_d$, can also be taken into account when calculating $R_x$ and $R_y$. Again, referring to Fig. 1, we see that the X-component of the commanded velocity is in the opposite direction of the point with the highest overall $R_p$. Therefore, it may make sense to calculate $R_x$ using only objects in the negative X-direction from the critical region. We can formalize this as:

$R_x = \begin{cases} \max_{p \in \mathcal{P}_x^{+}} R_{x,p}, & \mathbf{v}_d \cdot \hat{\mathbf{x}} \geq 0 \\ \max_{p \in \mathcal{P}_x^{-}} R_{x,p}, & \mathbf{v}_d \cdot \hat{\mathbf{x}} < 0 \end{cases}$   (16)

where $\mathcal{P}_x^{+}$ and $\mathcal{P}_x^{-}$ are the subsets of points with X position coordinates in the positive and negative X-directions, respectively, and $\hat{\mathbf{x}}$ is the unit vector in the X-direction. Similarly, we have that

$R_y = \begin{cases} \max_{p \in \mathcal{P}_y^{+}} R_{y,p}, & \mathbf{v}_d \cdot \hat{\mathbf{y}} \geq 0 \\ \max_{p \in \mathcal{P}_y^{-}} R_{y,p}, & \mathbf{v}_d \cdot \hat{\mathbf{y}} < 0 \end{cases}$   (17)
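
The velocity-direction-dependent variant described above considers only the obstacle points on the side of the critical region that the commanded velocity points toward; a sketch (names are ours):

```python
def directional_axis_risk(risks_positive, risks_negative, v_axis):
    """Per-axis risk accounting for the commanded velocity direction
    (Equations 16-17): only points on the side the robot is commanded
    toward contribute. risks_positive / risks_negative hold the per-point
    risks for points on the positive and negative sides of the axis."""
    risks = risks_positive if v_axis >= 0.0 else risks_negative
    return max(risks, default=0.0)
```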

III User Study

We conducted a user study to understand the effects of variable-scaling rate control on a user’s ability to control a UAV. This study was approved as exempt by the Boston University Institutional Review Board under protocol number 5070E.

III-A Experimental Setup

As shown in Fig. 2, each subject controlled the motion of a virtual robot in a simulated 2D environment. The control interface was a Logitech Extreme 3D Pro joystick. The position of the joystick is measured in terms of its maximum displacement, so that a reading of 0 corresponds to the neutral position and a reading of 1 corresponds to maximum displacement. Robot Operating System (ROS) [17] was used to create a 2D simulated environment and a simulated point robot. The radius of the point robot and its maximum acceleration were set based on reported values for a high-power quadrotor UAV [3]. All the values of the parameters used in the user study are listed in Table I.

Four simulated environments, shown in Fig. 3, were used in this study. Each environment contains open space near the starting location of the simulated robot, a constrained, fixed-width hallway with two turns, and free space on the other side of the hallway. There is one goal location in the free space close to the starting location, two target locations in the corners of the hallway, and a final target position at the end of the hallway. Participants had a full overhead view of the simulated environment. The simulated walls did not constrain the robot’s position, meaning that the person could command the position of the robot to penetrate the virtual wall.

The velocity of the simulated robot was set to be the desired velocity determined by the forward-control method, unless doing so required the simulated robot to accelerate faster than its maximum acceleration. In that case, the acceleration of the simulated robot was set to be the maximum acceleration in the direction needed to achieve the desired velocity. The position of the simulated robot was updated at a rate of 100Hz through Euler integration.
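The simulation update described above can be sketched as a single Euler-integration step (a simplified stand-in for the study’s ROS implementation; the names are ours):

```python
import math

def step(pos, vel, vel_desired, a_max, dt):
    """One 100 Hz Euler step: adopt the desired velocity directly unless
    reaching it would exceed the maximum acceleration, in which case
    accelerate toward it at a_max."""
    dvx = vel_desired[0] - vel[0]
    dvy = vel_desired[1] - vel[1]
    dv = math.hypot(dvx, dvy)
    if dv <= a_max * dt:
        vel = vel_desired
    else:
        s = a_max * dt / dv  # clamp the velocity change to a_max * dt
        vel = (vel[0] + s * dvx, vel[1] + s * dvy)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel
```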

Parameter
Value 5.0 0.5 0.2 35 0.3 1.0
Units - - -
TABLE I: Parameters in User Study

III-B Evaluated Rate-Control Methods

Each subject tested the following five rate-control methods:

  • (C) Constant scale factor:

    $k_x = k_y = k_c$   (18)

    where $k_x$ and $k_y$ refer to the scale factors in Equation 9.

  • (H) Scale factor based on the user’s input:

    $k_x = k_y = k_c\,k_h$   (19)
  • (R1) Scale factor based on the user’s input and the risk of collision:

    $k_x = k_y = k_c\,k_h\,(1 - R)$   (20)

    where $R$ is calculated from Equation 8.

  • (R2) Scale factor based on the user’s input and the risk of collision in the X- and Y- directions, separately:

    $k_x = k_c\,k_h\,(1 - R_x)$   (21)
    $k_y = k_c\,k_h\,(1 - R_y)$   (22)

    where $R_x$ is determined by Equation 12 and $R_y$ is determined by Equation 14.

  • (R3) Scale factor based on the user’s input and the risk of collision in the X- and Y- directions, separately, accounting only for the risk of collision with objects in the direction of the commanded velocity:

    $k_x = k_c\,k_h\,(1 - R_x)$   (23)
    $k_y = k_c\,k_h\,(1 - R_y)$   (24)

    where $R_x$ is determined by Equation 16 and $R_y$ is determined by Equation 17.

Fig. 2: Each participant used a joystick with their right hand to control the motion of a simulated robot in a 2D environment.
Fig. 3: The four simulated 2D environments used in this study. The simulated robot was shown to the users as a green circle. In the above figures, the robot is at its starting location. During each trial, the subject uses a joystick to move the simulated robot to a target area in free space, then to two target locations in the simulated hallway, then to a final goal location near the exit of the hallway. Target locations were shown to the user as red circles.

III-C Subject Population

Fifteen subjects participated in this user study (three female and twelve male). All of the subjects were right-handed and between the ages of 21 and 27.

III-D Experimental Procedure

A within-subject experimental protocol was used. Each subject completed 8 trials with each of the five rate-control methods. For each rate control method, the subject navigated the simulated robot through each of the four simulated environments two times. The presentation order of the rate-control methods was counterbalanced using a Latin Square. The presentation order of the environments was randomized. Participants used the joystick with their right hand to navigate the simulated robot to each of the four target locations. Subjects pressed a button on the control interface to indicate when they felt they had reached each target location. The trial began when the participant issued the first command to the simulated robot. The trial ended when the participant pushed the button at the final target location. Participants were told to complete the task as quickly as possible, without colliding the robot with the simulated walls in the experiment.

After completing 8 trials for each method, subjects provided subjective measures of their experience using NASA Task Load Index (NASA-TLX) [6]. After all five forward-control methods were tested, subjects completed a final survey ranking each of the methods according to:

  • their favorite method

  • the method they would choose to do a more complicated task

  • the method they would choose to accomplish a task quickly.

Methods were referred to by their presentation order. No information about the control methods was given, beyond the fact that the position of the joystick would be mapped to the robot’s velocity.

Fig. 4: Results of the user study as measured by (a) trial time, (b) total distance traveled by the robot, (c) duration of collision, and (d) the overshoot distance past the final target. Significant pairwise differences between the different rate-control methods are marked with brackets. For all metrics, lower values indicate better performance.
Fig. 5: Subjective workload and performance ratings as measured via the NASA-TLX survey.

III-E Data Analysis

During each trial we recorded the simulated robot’s position at a rate of 100Hz. We use the following metrics to evaluate the user’s performance when using each of the five rate-control methods:

  • Trial duration: the time in seconds that each subject took to complete the task. A shorter trial duration implies faster performance.

  • Total distance: the distance traveled by the virtual UAV during the trial. A smaller distance means better economy of motion.

  • Collision duration: the total time that the simulated robot was in contact with the simulated walls. A smaller value is preferable.

  • Overshoot: the distance that the robot traveled past the final target, after it left the simulated hallway. A smaller value indicates better performance.

These four metrics were averaged over the eight trials conducted by each subject for each rate-control method. Repeated-measures analysis of variance (rANOVA) was used to determine whether the rate-control method had any effect on task performance. When a significant difference in subject performance was found, Tukey’s test was performed to determine which methods led to significant differences in the metric. All data analysis was performed using MATLAB’s built-in statistical functions.

III-F Experimental Results

The quantitative results of the user study are shown in Fig. 4. In these plots, significant pairwise differences are marked with brackets. The rate-control method used has a significant effect on the trial time (F(4,56) = 99.82, p < 0.0001). Participants took significantly longer to complete the task under Method R1, which is the most conservative method, as compared to the other four methods. It also took participants significantly longer to complete the task using Method R2, which sets different scale factors in the X- and Y-directions, than when using Method R3, which accounts for the direction of the desired velocity when setting the X- and Y-scale factors.

The rate-control method used has a significant effect on the total distance traveled by the simulated robot (F(4,56) = 8.52, p < 0.0001). The distance traveled by the robot under Method R2 was significantly longer than the distance traveled under Methods H, R1, and R3. There are no significant differences in the distance traveled when comparing Methods C, H, R1, and R3 against each other.

There were no collisions between the robot and its environment when the participants used Methods R1, R2, and R3 because these methods guarantee collision-free teleoperation. For the collision-duration metric, a Student’s t-test was performed to evaluate any difference between Methods C and H, because the collision duration was exactly equal to zero for Methods R1, R2, and R3. No significant difference was found between Methods C and H in terms of collision duration.

As shown in Fig. 4(d), only three operators overshot the target just after the exit of the hallway when using Methods C and H. Because the overshoot for Methods C and H was heavily saturated at zero, only differences among R1, R2, and R3 were analyzed. The results of the rANOVA run on these three methods show that the rate-control method did have an effect on overshoot (F(2,28) = 26.183, p < 0.0001). Participants were less prone to overshoot the final target when using Method R3, as compared to Method R1.

The different rate-control methods did not result in a difference in the subjective rating of workload and task performance, as measured by subject responses to the NASA-TLX, which are shown in Fig. 5. The subjects’ rankings of the five rate-control methods are shown in Fig. 6. Nine of the fifteen study participants chose Method R3 as their favorite method and seven participants indicated that they would choose Method R3 to do a more complicated task. No one ranked Method R3 as their least or second to least favorite method. Furthermore, no one ranked Method R3 as fourth or fifth choice to complete a more complicated task. Method C and Method R3 were the top two choices for the preferred method to complete a task quickly. Fourteen of fifteen users indicated that Method R1 would be their last choice to complete a task quickly. While many subjects ranked Method R1 as their least favorite, two participants indicated that it was their favorite.

Fig. 6: User rankings of the five rate-control methods according to their favorite method (top), the method they would choose to do a more complicated task (middle), and the method they would choose to do a task quickly (bottom). The bars represent the number of participants who chose each method as their first choice, second choice, third choice, fourth choice, and fifth choice.

IV Discussion

The results of this study indicate that variable-scaling rate control can improve a person’s ability to control a UAV. Importantly, we note that by taking into account the risk that the robot will collide with its environment, variable-scaling rate control can guarantee that the human operator will not crash the remote robot. Calculating a single scale factor based on the overall highest risk of a collision, as is done in Method R1, reduces the speed at which the operator can fly the UAV in constrained areas. This is reflected by the increased time it took the operators to complete the task when using Method R1. Calculating separate scale factors for the direction pointing to the object in the UAV’s environment posing the biggest collision risk and a direction perpendicular to this, as is done in Method R2, can significantly increase the speed with which the user can fly the UAV. This is especially true for a hallway scenario, where the distance between the UAV and a wall will typically be much smaller than the distance between the UAV and an object along the length of the hallway. Moreover, calculating separate scale factors for two perpendicular directions while accounting only for the risk of collision with objects in the direction of the commanded velocity, as is done in Method R3, resulted in faster task completion times than considering the overall risk in these directions. Notably, the results of the user study indicated that Method R3 did not affect the user’s task completion time, as compared to Methods C and H, which did not slow the UAV down when the risk of a collision was high. The finding that Method R3 did not decrease task efficiency is also reflected in the fact that nearly as many study participants indicated that they would most prefer to complete a task quickly using this method as the number of participants who selected the method that never reduced the scale factor.

One limitation of reducing the scale factor when the risk of collision is high is that our results indicate that it may be difficult for people to control the position of the robot when the risk of collision transitions from high to low. In this study, under Methods R1, R2, and R3, when the UAV exited the hallway, the same position of the joystick would suddenly result in a higher commanded speed for the robot. Nearly all participants overshot the target located near the hallway’s exit under Methods R1 and R2, and about half of the participants overshot this target when using Method R3. Only three participants overshot this target when using the conditions that did not reduce the scale factor based on the risk of a collision between the UAV and an object in its environment. This indicates that there should be limits on the rate with which the overall rate-control scale factor can be increased.

There were no differences between user performance when using the rate-control method that reduced the scale factor based on the user’s input, H, and user performance when using the rate-control method that never reduced the scale factor, C. However, participants could likely still perceive a difference between these conditions, as indicated by the fact that seven of the fifteen participants indicated that they would most prefer to use Method C to complete a task quickly, while no participants indicated that they would most prefer to use Method H.

Based on the results of the user study, Method R3, which adjusts the rate-control scale factor based on the user’s input and the risk of collision in the X- and Y-directions while accounting only for the risk of collision with objects in the direction of the commanded velocity, had the best overall performance.

V Conclusions and Future Work

In this paper, we tested the hypothesis that automatically adjusting the scale factor in rate-control teleoperation would better enable the operator to control the motion of a robot. We developed methods that reduce the scale factor when the user is issuing small velocity commands and the risk of a collision between the robot and objects in its environment is high. The results of the user study show that variable-scale rate control can successfully improve an operator’s ability to control the position of the robot, without sacrificing the speed with which the operator can complete a navigation task. However, as noted in Section IV, a limitation of the developed method is that it is difficult for the operator to control the motion of the robot when the risk of a collision transitions from high to low, which rapidly increases the scale factor. In the future, we will improve our method by setting limits on the rate with which the scale factor can increase.

In this paper, we investigated the operator’s ability to control the robot through a two-dimensional environment, viewed from a bird’s-eye perspective. In the future, we will test the developed methods in a three-dimensional simulated environment and will provide operators with a first-person view via a simulated camera on the robot. We will also test the developed methods using a real UAV. The developed variable-scale rate-control methods should, in principle, work with data from LIDAR sensors or with point-cloud data from a map of the robot’s environment generated online. However, error in real sensor measurements could allow the human operator to collide the UAV with an object whose position is not accurately measured by the sensor. This may make it necessary to enlarge the critical region to ensure collision-free teleoperation. On the other hand, variable-scale rate control could prevent the operator from flying the UAV to a location that the on-board sensors incorrectly perceive as occupied. This may make it necessary to allow the human operator to override the variable-scale rate control.
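Enlarging the critical region to absorb sensor error could be done by inflating it in proportion to the assumed measurement noise. The check below is a hedged sketch under a simple Gaussian range-noise assumption; the function name, the noise model, and the factor k are illustrative, not part of the evaluated methods.

```python
def inflated_critical(measured_dist, critical_dist, sensor_sigma, k=3.0):
    """Treat an obstacle as inside the critical region if it could be
    there within k standard deviations of the assumed range noise."""
    return measured_dist - k * sensor_sigma <= critical_dist
```

With k = 3, an obstacle measured slightly outside the nominal critical distance would still trigger the scale-factor reduction, trading some task speed for robustness to sensor error.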

Finally, we note that the risk factor used in the variable-scaling rate-control methods in this paper is based on the risk factors previously used to generate haptic feedback that informs a human operator about the location of objects in the robot’s environment [11]. If haptic feedback were implemented alongside variable-scaling rate control, it could inform the user both about the state of the robot’s environment and about the magnitude of the scale factor. Therefore, we believe that haptic feedback could enhance the utility of variable-scaling rate control for teleoperation.

References

  • [1] A. M. Brandt and M. B. Colton (2010) Haptic collision avoidance for a remotely operated quadrotor UAV in indoor environments. In Proc. International Conference on Systems, Man and Cybernetics, pp. 2724–2731.
  • [2] F. Conti and O. Khatib (2005) Spanning large workspaces using small haptic devices. In Proc. World Haptics, pp. 183–188.
  • [3] B. Dirk. Extreme high power drone with highest climb rate. Online.
  • [4] A. D. Dragan and S. S. Srinivasa (2012) Formalizing assistive teleoperation. In Proc. Robotics: Science and Systems.
  • [5] I. Farkhatdinov, J. Ryu, and J. Poduraev (2009) A user study of command strategies for mobile robot teleoperation. Intelligent Service Robotics 2 (2), pp. 95–104.
  • [6] S. G. Hart and L. E. Staveland (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In Advances in Psychology, Vol. 52, pp. 139–183.
  • [7] K. Hauser (2013) Recognition, prediction, and planning for assisted teleoperation of freeform tasks. Autonomous Robots 35 (4), pp. 241–254.
  • [8] X. Hou and R. Mahony (2013) Dynamic kinesthetic boundary for haptic teleoperation of aerial robotic vehicles. In Proc. International Conference on Intelligent Robots and Systems, pp. 4549–4950.
  • [9] S. Javdani, S. S. Srinivasa, and J. A. Bagnell (2015) Shared autonomy via hindsight optimization. In Proc. Robotics: Science and Systems.
  • [10] T. M. Lam, M. Mulder, and M. R. van Paassen (2009) Haptic interface in UAV teleoperation using force-stiffness feedback. In Proc. International Conference on Systems, Man and Cybernetics, pp. 835–840.
  • [11] T. M. Lam, H. W. Boschloo, M. Mulder, and M. M. van Paassen (2009) Artificial force field for haptic feedback in UAV teleoperation. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 39 (6), pp. 1316–1330.
  • [12] A. Y. Mersha, S. Stramigioli, and R. Carloni (2012) Bilateral teleoperation of underactuated unmanned aerial vehicles: the virtual slave concept. In Proc. International Conference on Robotics and Automation, pp. 4614–4620.
  • [13] A. Y. Mersha, S. Stramigioli, and R. Carloni (2012) Switching-based mapping and control for haptic teleoperation of aerial robots. In Proc. International Conference on Intelligent Robots and Systems, pp. 2629–2634.
  • [14] K. Muelling, A. Venkatraman, J. Valois, J. Downey, J. Weiss, S. Javdani, M. Hebert, A. B. Schwartz, J. L. Collinger, and J. A. Bagnell (2015) Autonomy infused teleoperation with application to BCI manipulation. In Proc. Robotics: Science and Systems.
  • [15] G. Niemeyer, C. Preusche, and G. Hirzinger (2008) Telerobotics. In Springer Handbook of Robotics, pp. 741–757.
  • [16] S. Omari, M. Hua, G. Ducard, and T. Hamel (2013) Bilateral haptic teleoperation of VTOL UAVs. In Proc. International Conference on Robotics and Automation, pp. 2393–2399.
  • [17] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng (2009) ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software.
  • [18] S. Reddy, S. Levine, and A. Dragan (2018) Shared autonomy via deep reinforcement learning. In Proc. Robotics: Science and Systems, Vol. 14.
  • [19] J. M. Romano, R. J. Webster, and A. M. Okamura (2007) Teleoperation of steerable needles. In Proc. International Conference on Robotics and Automation, pp. 934–939.
  • [20] W. Schwarting, J. Alonso-Mora, L. Pauli, S. Karaman, and D. Rus (2017) Parallel autonomy in automated vehicles: safe motion generation with minimal intervention. In Proc. International Conference on Robotics and Automation, pp. 1928–1935.