Vision-based Control of a Quadrotor in User Proximity: Mediated vs End-to-End Learning Approaches
We consider the task of controlling a quadrotor to hover in front of a freely moving user, using input data from an onboard camera. On this specific task we compare two widespread learning paradigms: a mediated approach, which learns a high-level state from the input and then uses it to derive control signals; and an end-to-end approach, which skips high-level state estimation altogether. We show that, despite their fundamental difference, both approaches yield equivalent performance on this task. Finally, we qualitatively analyze the behavior of a quadrotor implementing both approaches.
Videos, Datasets, and Code
Videos, data, and code to reproduce our results are available at: https://github.com/idsia-robotics/proximity-quadrotor-learning.
I Introduction
Robot control systems are traditionally structured in two distinct modules: perception and control. Perception processes the robot's input in order to derive a high-level state, which represents meaningful and relevant information for the task the robot needs to solve. State information is then used by a controller, which determines the low-level control signals to be provided to the hardware.
For mobile robots operating in real-world unstructured environments, perception is often challenging. This is especially true when this step involves the interpretation of complex, high-dimensional data such as images. Many recent successful systems deal with this problem by adopting supervised machine learning (ML) techniques which operate on sensing data as input.
When designing such a system, we face the choice between at least two approaches.
A mediated approach, in which one trains a supervised ML model to predict the high-level state given the inputs; then, control signals are derived from the state using a designed controller (or another learned model).
An end-to-end approach, in which a supervised ML model is trained to directly predict the control signals from the sensing data, without passing through an intermediate high-level representation of the state.
Which architecture is preferable? In some situations, one might not have a choice: for example, if one cannot (or does not want to) collect ground truth information about the high-level state, training a perception model for a mediated approach is not possible (this fact motivates many end-to-end systems). However, sometimes datasets annotated with such a ground truth can be acquired, potentially at a cost. Is it worth it?
There are several advantages to a mediated approach: 1) in many mobile robotics applications, hand-designing a controller, given a high-level state representation, is feasible and gives the designer explicit control over the resulting robot behavior; 2) a mediated approach is more transparent and may be easier to inspect and debug: given an unexpected robot behavior, the designer can inspect the high-level state to determine whether the problem lies in the perception or in the controller.
In contrast, end-to-end approaches are appealing because they are conceptually very simple and can potentially be more computationally efficient, especially if the high-level state representation is complex and high-dimensional. Moreover, as mentioned above, end-to-end approaches do not depend on ground truth for the high-level state during training.
The considerations above disregard one key issue, i.e., the difficulty of learning models for the two approaches. Is learning a perception model (which outputs high-level state) easier than learning an end-to-end model (which directly outputs control signals)? Does one of the two models require a larger amount of training data to reach the same performance?
This paper investigates this question. After reviewing related literature (Section I-A), we model and formalize the end-to-end architecture and two variants of the mediated architecture (Section II-A); we instantiate these three architectures for one specific task, which we consider in the remainder of the paper: controlling a quadrotor so that it hovers in front of a person who is freely moving (Section II); such a behavior could be implemented by a quadrotor tasked with monitoring a person or awaiting commands from them. The main contribution of the paper is a set of experiments showing that, for this specific task, training models for the three architectures has the same difficulty (the setup and results of these experiments are described in Sections III and IV, respectively). The main limitation is that we restrict our study to a single reactive control task; nonetheless, it is a challenging, real-world task which shares several characteristics with other important subproblems in mobile robotics. A secondary contribution of this paper is the design, implementation and validation of the drone control system described above, which constitutes a useful component in applications involving proximal interaction between humans and quadrotors; the collected datasets, source code and trained models are available for download.
I-A Related Work
End-to-end learning approaches map raw sensor data (lidar, camera images, ego-motion sensors) to control actions: either as direct low-level control outputs, or as target points (e.g., desired velocities) for optimal low-level controllers.
Supervised end-to-end approaches have been successfully applied to a variety of challenging control scenarios: off-road obstacle avoidance [1, 3]; autonomous driving [4, 5, 6, 7]; vision-based manipulation; quadrotor control in forested environments [9, 10, 11], cities [12, 13], and cluttered environments.
The ground truth used for learning may have various origins: skilled drivers or operators [9, 13, 1, 3]; people walking [10, 11] or driving a vehicle other than the target robot; random controllers that sometimes lead to collisions, which the model learns to avoid; hand-designed controllers; controllers learned through reinforcement learning; or the future positions of vehicles in a large driving dataset. Most of these approaches are stateless and reactive, but deep recurrent neural networks make it possible to capture temporal dynamics in end-to-end approaches.
Compared to mediated approaches, end-to-end learning has been found in some cases to be slower to converge [15, 16] and to require more training samples. Direct perception methods, which learn higher-level representations of the environment than those used in mediated methods, are a further alternative.
The task we consider in this paper is aimed at proximal human-robot interaction [19, 20]: a drone should be able to fly at an appropriate distance in front of people, following them while waiting for possible command gestures. The use of visual markers simplifies this task, but in general more sophisticated techniques, like Tracking-Learning-Detection, are needed. For instance, several deep learning approaches have been proposed to estimate the 6D pose of a person's head or body from monocular cameras (as well as that of generic objects). Adopting such perception modules would be a reasonable alternative for solving our task; in this paper, however, the task acts as a model of a larger class of tasks.
II Task and Model
We consider one specific task: controlling a quadrotor to stay at a fixed distance and at eye-level height in front of a user who is free to move in an environment.
The available inputs are the video feed from a forward-pointing camera and the current linear velocity of the quadrotor, obtained through ego-motion sensors; the controller outputs are composed of:
the desired pitch and roll of the drone, which map to accelerations a_x and a_y along the drone's x and y axes, respectively;
the desired velocity along the z axis (v_z);
the desired angular velocity around the z axis (ω_z).
The state for the given task can be compactly represented using two pieces of information.
A 3D transformation representing the pose (location and orientation) of the user's head with respect to the quadrotor; more specifically, only the heading component of the head's orientation is relevant to the task (in fact, tilting the head or looking up or down should not affect the drone behavior). Therefore, we represent it with a single angle Δ ∈ (−π, π], the relative angle between the user's and the drone's orientation, such that Δ = 0 if the user's face points along the negative x axis of the drone (i.e., the user is in front of the drone and faces towards it).
In addition, the state also contains the quadrotor's current measured velocity, which is available directly as an input.
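As an illustrative sketch, the relative heading angle described above (call it Delta) can be computed from the drone's and head's yaw angles and wrapped to (−π, π]; the function name and the yaw-based convention below are assumptions, not the paper's implementation.

```python
import math

def relative_heading(drone_yaw, head_yaw):
    # Illustrative computation of the relative angle Delta between the
    # user's head orientation and the drone's orientation.  Delta = 0 when
    # the user faces the drone, i.e. the head points along the drone's
    # negative x axis (head_yaw = drone_yaw + pi).
    delta = head_yaw - drone_yaw + math.pi
    # wrap to (-pi, pi] so the angle is continuous around zero
    return math.atan2(math.sin(delta), math.cos(delta))
```

For example, a user standing in front of the drone and facing it (head_yaw = drone_yaw + π) yields Δ ≈ 0.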
In this specific task, we can also design a controller f that produces control signals given the state, as detailed in Section III-B. Note that for other tasks (e.g., grasping), designing such a controller might be very complex.
II-A General Approaches
We now describe the three approaches that we will compare in the following sections (see Figure 1). Even though for clarity we use the task-specific notation introduced above, these approaches can be easily generalized to other control tasks; then, x corresponds to the subset of the inputs that we want to learn how to interpret in order to obtain high-level state information, o represents inputs that we can (directly or through some processing step) turn into meaningful high-level state information, s denotes the high-level state, and u the control signals. For example, in an autonomous driving scenario: x could correspond to lidar data; s to the relative position of other cars on the road and their velocities; o to the car odometry and to the car's position on the lane (obtained through some existing modules); u to steering and acceleration/brake controls.
In approach A1 (mediated), we learn a model M1 mapping x to s; we denote the resulting estimate of s as ŝ. ŝ is then joined to the odometry o (available directly in the input) to form the full state. The control signals are then computed as u = f(ŝ, o). In order to train M1, we require ground truth information on s. Moreover, this approach requires the availability of a controller f.
In approach A2 (end-to-end), we directly learn a model M2 to predict u from x and o. We denote the result as û. Training M2 requires that a ground truth for u is available. Such a ground truth can be obtained in two ways. Learning strategy 1: acquiring the ground truth for u directly (e.g., recording a skilled human pilot); learning strategy 2: obtaining the ground truth for u through a designed controller f, which is given ground truth state information.
In approach A3 (mediated with learned controller), we learn a model M1 just like in approach A1; this model yields a state estimate ŝ. However, instead of using a hand-designed controller to produce the control signals from the estimated state, we learn a model M3 mapping ŝ and o to u. The control signals are then computed as û = M3(ŝ, o). In order to train M1, we require ground truth information on s; moreover, in order to train M3 we need a ground truth on u. Differently from A1, this approach does not require one to design the controller f.
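The data flow of the three approaches can be sketched with stand-in stubs; every function body below is an illustrative placeholder (a perfect perception estimate and a toy proportional controller with an assumed 1.5 m target distance), not one of the trained models.

```python
def perception(frame):
    # M1 (used by A1 and A3): learned perception, frame -> estimated state.
    # Stub: pretends the estimate is perfect.
    return {"pose": frame["true_pose"]}

def controller(state):
    # f (A1): hand-designed controller, state -> control signals.
    # Stub: proportional control towards an assumed 1.5 m target distance.
    return {"u": state["pose"] - 1.5}

def learned_controller(state):
    # M3 (A3): learned mapping from estimated state to control signals.
    # Stub: behaves exactly like the hand-designed controller it imitates.
    return controller(state)

def end_to_end(frame, odom):
    # M2 (A2): maps inputs directly to control signals, no explicit state.
    return controller({"pose": frame["true_pose"], "odom": odom})

frame, odom = {"true_pose": 2.0}, {"v": 0.0}
u_a1 = controller({**perception(frame), "odom": odom})          # A1
u_a2 = end_to_end(frame, odom)                                  # A2
u_a3 = learned_controller({**perception(frame), "odom": odom})  # A3
```

With these idealized stubs the three pipelines produce identical outputs; the experiments below test whether the learned counterparts behave equivalently in practice.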
III Experimental Setup
III-A Hardware and Infrastructure
All experiments take place in a room fitted with an optical motion capture (mocap) system. Within the room, a flying area is defined and fenced by a black lightweight net.
We use a Parrot Bebop 2 quadrotor; the quadrotor is networked via Wi-Fi to a server and controlled through a ROS interface, which exposes the velocity of the drone, computed from the visual odometry of a bottom-looking camera, and the front-facing camera feed. The drone uses a sonar and an IMU to estimate altitude and attitude; an onboard low-level controller accepts the control inputs and updates the motor speeds.
The quadrotor is outfitted with a mocap target, so that its exact 6D pose is acquired in real time. Moreover, we outfit the user's head with another mocap target; this is implemented by having the user wear one of two objects on their head, on top of which the mocap target is fixed: either a black baseball hat or a thin elastic band. While the former is clearly visible when the user is seen from the front, the latter is almost invisible.
III-B Baseline Controller
We implement a simple baseline controller as a stateless function mapping the state to the control signals. From the estimate of the person's pose, we compute a target point p at a fixed distance in front of them, along the head's heading direction. Then, we compute a desired velocity v_des to reach p in a fixed time τ, limiting the magnitude of its components. Finally, we compute the control output to reach velocity v_des and to rotate towards the user in time τ:
u = ((v_des,x − v_x)/τ, (v_des,y − v_y)/τ, v_des,z, φ/τ),
where φ is the azimuth of vector p. All parameters are kept fixed.
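The baseline controller can be sketched as follows; the parameter values (target distance, time constant, velocity limit) and the per-component clamping are assumptions, since the paper's values are not reproduced here.

```python
import math

# Assumed parameter values (the paper fixes its own): target distance (m),
# time constant (s), and velocity limit (m/s).
DELTA, TAU, V_MAX = 1.5, 1.0, 1.0

def baseline_controller(head_pos, head_dir, drone_vel):
    # head_pos: user's head position relative to the drone (x, y, z);
    # head_dir: unit vector of the head's heading;
    # drone_vel: measured drone velocity (x, y, z).
    # Target point at distance DELTA in front of the user's face.
    target = [p + DELTA * d for p, d in zip(head_pos, head_dir)]
    # Desired velocity to reach the target in TAU, components clamped.
    v_des = [max(-V_MAX, min(V_MAX, t / TAU)) for t in target]
    # Accelerations (mapped to pitch/roll) to reach v_des in TAU.
    a_x = (v_des[0] - drone_vel[0]) / TAU
    a_y = (v_des[1] - drone_vel[1]) / TAU
    v_z = v_des[2]
    # Yaw rate to face the user in TAU; phi is the azimuth of head_pos.
    phi = math.atan2(head_pos[1], head_pos[0])
    w_z = phi / TAU
    return a_x, a_y, v_z, w_z
```

With the user 2 m straight ahead and facing the hovering drone, this sketch commands a gentle forward acceleration and no vertical motion or rotation.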
III-C Dataset Acquisition
In order to ease the acquisition of the dataset, the drone follows a controller that, given the pose of the user and the pose of the quadrotor provided by the mocap system, generates control signals that keep the quadrotor at a parametrized distance (which we adjusted between 1.0 and 2.0 meters during the acquisitions) in front of the user's head. Such a controller is similar to the one described in Section III-B. The outputs of this controller are not part of the dataset: they are only used to facilitate the acquisition of the sessions and make for an engaging experience for our test subjects.
We recorded 15 different sessions, each with a different user (ages 23 to 38, different ethnicities, heights ranging from 160 to 197 cm, variable levels of physical fitness, different clothing styles, hairstyles and hair colors). During the acquisition, users are instructed to move around the room freely: the quadrotor then flies to stay in front of their head. After an initial period of cautious motion, users start challenging the controller and moving rather aggressively (videos of the recording sessions are available as supplementary material), so that the quadrotor struggles to keep up; this ensures that many different relative positions of the head and the quadrotor are represented in the data. In some recordings the users wear the hat, in others the headband (which is almost invisible in the camera frames). The room is equipped with 4 distinct light sources (2 overhead, 2 movable spotlights placed in different corners of the room), which we toggle and move at different times of the recording to add variability to the scene. During some of the recordings, more people are present in the flying area; moreover, the quadrotor camera often also sees people and various background objects outside of the flying area, such as computers, screens, desks, and windows. While acquiring the data, we take care that, if more than one person is visible in the frame, the person wearing the mocap target is the one closest to the camera. On occasion, the video link is temporarily corrupted and some frames are acquired with visual artifacts; we purposefully do not remove such frames from the recordings, as most of them can still be understood by a human observer. On average each session is minutes long, totaling more than minutes of flight.
All sessions are recorded in ROS bag files, from which we extract 79k dataset instances at a rate of 30 Hz. Each instance contains: the video frame, the pose of the user's head relative to the drone, and the drone velocity. The baseline controller is then applied to this state to obtain the ground truth control value (containing the four control variables). Note that the ground truth control signal does not necessarily correspond to the control signal that the drone was receiving during the acquisition, which came from a different controller.
We use all data from three sessions (16k instances) as a test set to quantitatively evaluate the performance of the three approaches. We randomly split the 63k instances from the remaining 12 sessions into a training set (50k instances) and a validation set (13k instances).
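A minimal sketch of this split, with illustrative session ids, equal-length synthetic sessions, and an assumed roughly 80/20 train/validation ratio (the paper's actual split is 50k/13k):

```python
import random

random.seed(0)

# Synthetic stand-in for the recorded data: 15 sessions of 100 instances.
instances = [{"session": s, "frame": i} for s in range(15) for i in range(100)]

# Whole sessions are held out for testing, so test users never appear in
# training; which three sessions are held out is illustrative here.
test = [x for x in instances if x["session"] < 3]
rest = [x for x in instances if x["session"] >= 3]

# The remaining instances are split at random into training and validation.
random.shuffle(rest)
split = int(0.8 * len(rest))
train, valid = rest[:split], rest[split:]
```

Splitting by whole sessions (rather than by random frames) is what makes the test set measure generalization to unseen users.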
III-D Machine Learning Models
Model M1 accepts an RGB image as input and produces the 4 components of the estimated state (the head's 3D position and the relative angle Δ). The internal architecture is a ResNet-8 followed by two dense layers with 256 and 128 neurons, respectively.
Model M2 has a similar architecture to M1, but additionally accepts the quadrotor's measured velocity through two additional input neurons that skip the ResNet-8 layers. These neurons, concatenated to the ResNet-8 output, feed the two dense layers with 256 and 128 neurons. The outputs correspond to the 4 components of the predicted control signal.
Model M3 is implemented as a simple multilayer perceptron that maps 6 input values (the 4 components of the estimated state and the 2 velocity inputs) to the 4 components of the control signal; it contains 2 hidden layers, with 256 and 128 neurons, respectively.
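A forward pass of an MLP with the layer sizes given for M3 can be sketched as follows; the ReLU activation and the random placeholder weights are assumptions (the paper does not restate them here), so this illustrates only the shape of the computation, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # Random placeholder weights (He-style scaling) and zero biases.
    w = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
    return w, np.zeros(n_out)

# 6 inputs -> 256 -> 128 -> 4 outputs, as described in the text.
W1, b1 = dense(6, 256)
W2, b2 = dense(256, 128)
W3, b3 = dense(128, 4)

def m3_forward(state):                      # state: (batch, 6)
    h = np.maximum(state @ W1 + b1, 0.0)    # hidden layer 1 (256 units)
    h = np.maximum(h @ W2 + b2, 0.0)        # hidden layer 2 (128 units)
    return h @ W3 + b3                      # linear output: 4 control values
```

In the actual system these weights would be fitted with the training setup described next.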
We train the three models with the same setup: 1) we use Mean Absolute Error as loss function and ADAM  as optimizer with a learning rate of 0.001; 2) we speed up learning by reducing the learning rate when the validation loss plateaus for more than 5 epochs; 3) we use early stopping (10 epochs of patience on the validation loss, with a maximum of 200 epochs).
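The schedule in points 2) and 3) can be sketched as a plain training loop; `train_epoch` and `validation_loss` stand in for the actual model-fitting code, and the 0.1 learning-rate reduction factor is an assumption.

```python
def fit(train_epoch, validation_loss, max_epochs=200):
    # Initial learning rate 0.001; reduce it when the validation loss
    # plateaus for more than 5 epochs; stop early after 10 epochs without
    # improvement (patience), up to 200 epochs in total.
    lr, best = 0.001, float("inf")
    since_best = since_drop = 0
    for epoch in range(max_epochs):
        train_epoch(lr)                  # one pass over the training data
        loss = validation_loss()
        if loss < best:
            best, since_best, since_drop = loss, 0, 0
        else:
            since_best += 1
            since_drop += 1
        if since_drop > 5:               # plateau: reduce the learning rate
            lr, since_drop = lr * 0.1, 0
        if since_best >= 10:             # early stopping
            break
    return best
```

In practice this corresponds to standard plateau-reduction and early-stopping callbacks in common deep learning frameworks.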
IV Experimental Results
We report three sets of experiments. First, we quantitatively evaluate the prediction quality of the different approaches on all instances in the testing set, against the ground truth control signals, also evaluating the impact of the training set size; second, we compare the trajectories of a quadrotor controlled by an ideal controller with the trajectories obtained by A1, A2 and A3. Finally, we perform qualitative robustness tests on the system controlled by each of the three approaches.
IV-A Quantitative results on testing instances
For each component of the estimated state and of the predicted control signal, we compute the coefficient of determination R² of the estimate. This measure corresponds to the proportion of variance in the target variable that is explained by the model: a perfect estimator yields R² = 1; a dummy estimator that always returns the mean of the variable to be estimated yields R² = 0; even worse estimators yield R² < 0. R² allows us to compare the quality of our estimates for different components even though each has a different variance.
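The coefficient of determination used here reduces to a one-line formula, sketched below for a single component:

```python
def r2_score(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: the proportion of variance in y_true
    # that is explained by the predictions y_pred.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y = [0.2, -0.1, 0.4, 0.3]
r2_score(y, y)                          # perfect estimator: 1.0
r2_score(y, [sum(y) / len(y)] * 4)      # dummy (mean) estimator: 0.0
```

An estimator worse than the mean predictor (SS_res > SS_tot) yields a negative value, matching the three cases described above.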
We compute these metrics for the three approaches trained on different amounts of training data; in particular, we are interested in comparing how hard it is for the different approaches to achieve a given performance. For a given training set size N, we randomly sample N out of the 50k training instances, which we use to train the models for each of A1, A2 and A3. The validation and testing datasets remain the same in all cases. To account for the variability due to the sampling of the training set (which is very large for low values of N), we repeat each experiment for up to 50 replicas.
Figure 3 reports the R² values resulting from the experiments described above. We observe that prediction quality increases with N but eventually plateaus; there is no clear difference in prediction performance among A1, A2 and A3.
ω_z and v_z yield the highest coefficients of determination. Prediction quality is significantly worse for a_x and especially for a_y. This is easily explained: ω_z mostly depends on the horizontal position of the body in the frame, which is easy to perceive in the image; it is learned significantly better than a dummy regressor with as few as 128 training instances. v_z mostly depends on the vertical position of the head in the frame, which is also easy to perceive, but has less variability in the datasets. Predicting a_x and a_y is harder: the former relies on an accurate perception of the distance of the user (which is confounded by their height and body size), and the latter on an estimate of the relative orientation of the head (Δ), which is arguably hard to obtain from low-resolution inputs.
Note that prediction quality is not necessarily related to the quality of the resulting robot behavior. For example, a model that always yields a tiny but systematic overestimate of a_x would score an R² value close to 1, but would cause the quadrotor to crash into the user in a very short time. On the contrary, a model whose predictions are affected by large amounts of uncorrelated zero-mean noise could yield acceptable behaviors (especially with short control timesteps) but very disappointing quantitative metrics. To provide a better idea of the usability of the predictions for control, Figure 4 illustrates the outputs of the three approaches (trained on the full training set) on a short segment extracted from a recording belonging to the testing set. We observe that the predicted signals closely track the ground truth; the outputs for a_x and a_y exhibit a larger amount of high-frequency noise, which appears non-systematic.
IV-B Analysis of flying performance
Figure 5 compares the trajectories resulting from the ground truth controller with those resulting from A1, A2, and A3. The quadrotor approaches a user (not in the dataset), who is standing still, from different relative poses: the final pose of the robot always faces the user and is reached in about 5 seconds, after which the robot stabilizes in a short time.
In Figure 6, the quadrotor’s approach towards a different user (also not in the dataset) was run 5 times for A1 and A2. This is the only case in our tests where we could notice a (small) difference in the behavior of different approaches, with A2 being somewhat smoother and closer to the ground truth trajectory.
Supplementary videos report more details on this experiment, as well as extensive tests in which we challenge the system's robustness with multiple people in the frame, quick movements, distracting objects, and sudden, extreme lighting variations. During the tests, we cycled the control between the three approaches and found that they behaved indistinguishably from each other; in all cases the drone behavior was predictable and perceived as safe by the users.
V Conclusions
We considered the task of controlling a quadrotor to hover in front of a freely moving user using input data from an onboard camera. On this task, we compared mediated approaches to end-to-end approaches; we found equivalent quantitative performance, learning difficulty, perceived quality of the robot behaviors, and robustness to challenging inputs. On only one occasion did we measure repeatably different trajectories, which were slightly smoother in the end-to-end approach, but correct in all cases and perceived as very similar.
References
- Y. LeCun, U. Muller, J. Ben, E. Cosatto, and B. Flepp, “Off-road obstacle avoidance through end-to-end learning,” in NIPS, 2005.
- E. Kaufmann, A. Loquercio, R. Ranftl, A. Dosovitskiy, V. Koltun, and D. Scaramuzza, “Deep Drone Racing: Learning Agile Flight in Dynamic Environments,” ArXiv e-prints, 6 2018.
- M. Bajracharya, A. Howard, L. H. Matthies, B. Tang, and M. Turmon, “Autonomous off-road navigation with end-to-end learning for the lagr program,” Journal of Field Robotics, vol. 26, no. 1, pp. 3–25, 2008.
- Z. Chen and X. Huang, “End-to-end learning for lane keeping of self-driving cars,” in 2017 IEEE Intelligent Vehicles Symposium (IV), 6 2017, pp. 1856–1860.
- H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3530–3538, 2017.
- L. Chi and Y. Mu, “Deep Steering: Learning End-to-End Driving Model from Spatial and Temporal Visual Cues,” ArXiv e-prints, 8 2017.
- J. Heylen, S. Iven, B. D. Brabandere, J. O. M., L. V. Gool, and T. Tuytelaars, “From pixels to actions: Learning to drive a car with deep neural networks,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 3 2018, pp. 606–615.
- S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” J. Mach. Learn. Res., vol. 17, no. 1, pp. 1334–1373, 1 2016.
- S. Ross, N. Melik-Barkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert, “Learning monocular reactive uav control in cluttered natural environments,” in 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013, pp. 1765–1772.
- A. Giusti, J. Guzzi, D. Ciresan, F.-L. He, J. P. Rodriguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, D. Scaramuzza, and L. Gambardella, “A machine learning approach to visual perception of forest trails for mobile robots,” IEEE Robotics and Automation Letters, 2016.
- N. Smolyanskiy, A. Kamenev, J. Smith, and S. T. Birchfield, “Toward low-flying autonomous mav trail navigation using deep neural networks for environmental awareness,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 4241–4247.
- A. Loquercio, A. I. Maqueda, C. R. del Blanco, and D. Scaramuzza, “Dronet: Learning to fly by driving,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1088–1095, 4 2018.
- S. Kumaar, A. Sangotra, S. Kumar, M. Gupta, N. B, and S. Omkar, “Learning to Navigate Autonomously in Outdoor Environments : MAVNet,” ArXiv e-prints, 9 2018.
- D. Gandhi, L. Pinto, and A. Gupta, “Learning to fly by crashing,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 9 2017, pp. 3948–3955.
- T. Glasmachers, “Limits of end-to-end learning,” in Proceedings of the Ninth Asian Conference on Machine Learning, vol. 77. PMLR, Nov 2017, pp. 17–32.
- S. Shalev-Shwartz, O. Shamir, and S. Shammah, “Failures of Gradient-Based Deep Learning,” ArXiv e-prints, 3 2017.
- S. Shalev-Shwartz and A. Shashua, “On the Sample Complexity of End-to-end Training vs. Semantic Abstraction Training,” ArXiv e-prints, 4 2016.
- C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “Deepdriving: Learning affordance for direct perception in autonomous driving,” in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). IEEE Computer Society, 2015, pp. 2722–2730.
- W. S. Ng and E. Sharlin, “Collocated interaction with flying robots,” in 2011 RO-MAN, 7 2011, pp. 143–149.
- T. Naseer, J. Sturm, and D. Cremers, “Followme: Person following and gesture recognition with a quadrocopter,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 9 2013, pp. 624–630.
- M. Jahidul Islam, J. Hong, and J. Sattar, “Person Following by Autonomous Robots: A Categorical Overview,” ArXiv e-prints, 3 2018.
- E. Peshkova, M. Hitz, and B. Kaufmann, “Natural interaction techniques for an unmanned aerial vehicle system,” IEEE Pervasive Computing, vol. 16, no. 1, pp. 34–42, 1-3 2017.
- F. Vasconcelos and N. Vasconcelos, “Person-following uavs,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 3 2016, pp. 1–9.
- Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 7 2012.
- R. Barták and A. Vykovský, “Any object tracking and following by a flying drone,” in 2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI), 10 2015, pp. 35–41.
- M. Patacchiola and A. Cangelosi, “Head pose estimation in the wild using convolutional neural networks and adaptive gradient methods,” Pattern Recognition, vol. 71, pp. 132–143, 2017.
- F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black, “Keep it smpl: Automatic estimation of 3d human pose and shape from a single image,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Springer International Publishing, 2016, pp. 561–578.
- Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes,” ArXiv e-prints, 11 2017.
- M. Monajjemi, “Bebop autonomy,” http://bebop-autonomy.readthedocs.io.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.
- N. Draper and H. Smith, Applied regression analysis. Wiley, 1998.