RISCuer: A Reliable Multi-UAV Search and Rescue Testbed

Abstract

We present the Robotics Intelligent Systems & Control (RISC) Lab multiagent testbed for reliable search and rescue and aerial transport in outdoor environments. The system consists of a team of three multirotor unmanned aerial vehicles (UAVs), which are capable of autonomously searching, picking up, and transporting randomly distributed objects in an outdoor field. The method involves vision based object detection and localization, passive aerial grasping with our novel design, GPS based UAV navigation, and safe release of the objects at the drop zone. Our cooperative strategy ensures safe spatial separation between UAVs at all times and we prevent any conflicts at the drop zone using communication enabled consensus. All computation is performed onboard each UAV. We describe the complete software and hardware architecture for the system and demonstrate its reliable performance using comprehensive outdoor experiments, and by comparing our results with some recent, similar works.

keywords:
Search and rescue, multiagent systems, unmanned aerial vehicles (UAVs), UAV testbeds, autonomous aerial grasping, reliable aerial transport
Figure 1: RISCuer: The RISC Lab cooperative multi-UAV testbed for search and rescue and autonomous aerial transport in outdoor environments.

1 Introduction

Unmanned aerial vehicles (UAVs) are used extensively in many areas of interest to both academia and industry. Hence, there is growing enthusiasm among scientists and engineers to push the operation and performance capabilities of these robots to their limits. Many of these efforts have resulted in significant advancements in airframe design, flight control, reliable propulsion systems, and efficient power management for drones. UAVs serve as an ideal testbed for some of the recently proposed multiagent control algorithms (Mohammadi et al., 2020), (Fiaz and Baras, 2019), (Abdelkader et al., 2017), and are shown to have a major impact on many traditional industries as well. Examples include agriculture (Grenzdörffer, 2008), (Zhang and Kovacs, 2012), infrastructure monitoring (Adams and Friedland, 2011), (Ro et al., 2007), public utility inspection (Agha-mohammadi et al., 2014), and land surveying and construction (d’Oleire-Oltmanns et al., 2012). Thus, the significance of UAVs in modern industry cannot be overstated.

Despite this wealth of existing literature, it is quite noticeable that most existing implementations of multi-UAV systems are carried out in indoor environments, i.e., in the presence of perfect positioning and precise localization, optimal lighting conditions, and a robust communication infrastructure. Implementing a multi-UAV system outdoors is considerably more challenging because of several external factors and disturbances in the environment. Therefore, in this chapter, we focus on the implementation and integration of a multi-UAV system (see Fig. 1) designed to complete a complex task cooperatively and autonomously in an outdoor environment. For our case study, we tackle the challenge of outdoor multi-UAV search and rescue and autonomous aerial transport. Another constraint that greatly hinders the autonomous operation of UAVs outdoors is the need for onboard computation, because of the power and payload limitations of UAVs. In the majority of the existing literature, the computation is performed off-board, which is acceptable for indoor lab experiments; however, for realistic outdoor applications where a complete or substantially high degree of autonomy is desired, the onboard computation requirement must be satisfied. Hence, throughout this chapter, we only deal with and propose strategies that admit fully onboard control and computation capabilities for the UAVs involved.

The rest of the chapter is organized as follows. Section 2 provides a brief literature survey on the existing state of the art for multiagent mission planning, aerial grasping, and search and rescue using UAVs. In Section 3, we describe the problem and the underlying assumptions, and discuss our solution approach. In Section 4, we describe the complete system architecture and the various hardware/software components involved. Section 5 demonstrates the finite state machine (FSM) for the mission. In Section 6, we discuss strategies for object detection, localization and tracking using vision. Section 7 details the aerial grasping mechanism, its actuation routine and our picking strategy for autonomous object transport. In Section 8, we elaborate the communication framework for our multi-UAV system. Next, we demonstrate results from simulations and experiments in Section 9, and provide a quick comparison with some recent, similar works. Finally, we conclude with a brief discussion and some future directions in Section 10.

2 Related Work

There have been extensive efforts toward design enhancements, improved flight controls, and efficient path planning for UAVs over the past decade (Almurib et al., 2011), (Lin and Goodrich, 2009). Recent developments have encouraged roboticists to design and build UAVs that are capable of several useful operations which include but are not limited to, aerial grasping and transport (Pounds et al., 2011), (Mellinger et al., 2013), (Fiaz, 2017), collaborative construction using flying robots (Augugliaro et al., 2014), (Durrant-Whyte et al., 2012), aerial perching on unstructured surfaces (Thomas et al., 2015), and drone assisted search and rescue missions (Gholami et al., 2019) etc.

Many of these aforementioned applications typically require more than one robot in order to accomplish the task efficiently; for example, consider the problem of aerial coverage (Yazıcıoğlu et al., 2013). Clearly, it is a multiagent distributed optimization problem that essentially requires multiple agents for communication-less coverage of a networked system (Yazıcıoğlu et al., 2017). Again, many contributions have been made over the past decade in cooperative and collaborative implementations of UAVs for tasks such as simultaneous localization and mapping (SLAM) (Weiss et al., 2011), vision-based autonomous UAV landing on moving platforms (Saripalli et al., 2002), (Beul et al., 2017), and cooperative aerial transport of objects with multiple UAVs (Michael et al., 2011), (Nieuwenhuisen et al., 2017).

It is evident from the above that aerial grasping is among the top research interests in the field of aerial robotics. Besides, it can also be considered an integral component of UAV-based search and rescue missions (Fiaz and Baras, 2020). Several useful techniques have been proposed for UAVs to grasp objects of various shapes, textures, weights, and sizes, such as (Kessens et al., 2016), (Hawkes et al., 2015), and (Pounds et al., 2011). All these works focus on the versatility of aerial grasping rather than its reliability and precision, which is indeed an interesting direction of research. However, for many practical and industrial applications, the need to grasp and transport objects reliably still remains a key objective. This is where ferrous aerial grasping comes into play, owing to the well-known reliability and strength of ferrous enclosures and their decades-long use in transporting sensitive payloads and electronic components. In addition to the apparent physical protection, these enclosures also provide electromagnetic shielding to the transported payloads. Therefore, in this work, we specifically use our novel passive magnetic gripper design (Fiaz et al., 2019) for outdoor multiagent aerial transport. The mechanism uses the concept of passive aerial grasping of ferrous objects and enclosures (Fiaz et al., 2017), combined with the dual impulsive release (Fiaz et al., 2018) of the payload at the drop zone.

A bulk of recent multi-UAV search and rescue, cooperative aerial transport, and treasure hunt literature comes from Mohamed Bin Zayed International Robotics Challenge (MBZIRC) (MBZIRC, 2020). This work is also motivated by the participation of Team KAUST at the inaugural version of MBZIRC, and is closely related to recent contributions from other participant teams such as (Nieuwenhuisen et al., 2017), (Lee et al., 2019), and (Beul et al., 2019). The key differences lie in our different approach to the mission, distinct system architecture, our novel and passive grasping mechanism, differences in actuation routine and communication protocols, and the mission execution itself. Therefore, throughout the rest of this chapter, we continue to highlight and compare these works with our method.

3 Problem Description

We now describe the problem in detail along with the underlying assumptions. We then proceed with a summary of our approach for solving the problem.

As mentioned before, this work is motivated by one of the challenges posed in the inaugural version of MBZIRC. The problem setup considered in this work is as follows. A team of three UAVs has to collaborate in order to autonomously search for, localize, track, and pick up a set of static objects. The objects are known to be made of ferrous material and may come in various sizes, shapes, and colors; they need to be transported to a single dedicated drop zone within a limited time. The search area is an open outdoor space defined by a set of GPS coordinates which are known a priori. This problem brings up a set of practical research and system design questions regarding multi-UAV coordinated control, aerial grasping, and vision-based object detection and localization.

3.1 Assumptions

Based on the problem statement, we consider the following assumptions.

  • Each payload has a maximum weight of 500 g. This implies that a single UAV can pick up an object on its own and that cooperative lifting of payloads is not necessary.

  • A dedicated wireless network is available (on demand) for each UAV to share information with each other as desired. In practice, a 2.4 GHz WiFi network is used for experiments.

  • All computation and decision making needs to be performed onboard each UAV; i.e., a centralized system is not allowed.

  • The top surface of the payloads is known to be flat. Furthermore, to simplify the detection of objects, we assume the geometry of all objects to be circular. Thus, the payloads considered are circular colored ferrous disks (see Fig. 2).

  • It is assumed that the search area has a rectangular geometry, with a known rectangular drop zone inside. This was specified in the MBZIRC challenge description as well.

  • The camera on each UAV is always facing downwards, i.e., a mechanical stabilization for the camera is present. This simplifies the problem of object localization using vision.

Figure 2: A sample payload used in this work. It is a 500 g disk of ferrous material with a diameter of 10 cm.

3.2 Approach

There are several ways to approach this challenging problem. One possible way could be to scan the whole search area for the objects using one or more UAVs. This results in a map of the area with the detected object locations. One can then assign a given number of objects per UAV to transport them to the drop zone (Nieuwenhuisen et al., 2017). It turns out, however, that this approach is not the most efficient way of solving the problem. A better approach could be to partition the workspace into several search areas and assign a UAV to each of them separately (Beul et al., 2019).

In this work, we use partitioning of the search area as well, to increase the speed of the search and rescue mission at hand. As shown in Fig. 3, we divide the workspace into three trapezoidal partitions of equal area. Each of the three UAVs is assigned to scan its respective partition for the objects. The scanning is performed in a uniform zig-zag fashion. Unlike (Lee et al., 2019), as soon as a UAV detects an object, it proceeds to pick it up and transports it to the drop zone. After dropping the object, it returns to the same object location to restart its scanning routine. As is shown by simulation and experiments, this change enables our system to complete the mission faster than similar works, which also use partitioning methods.

If a payload lies exactly on the boundary of two partitions, then whichever UAV detects it first picks it up. Further details on this partitioning, mission execution, and collision avoidance at the drop zone are provided in the following sections of the chapter.
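As an illustration of the zig-zag scanning pattern mentioned above, the following minimal sketch generates lawnmower-style waypoints for one partition, approximated here by its rectangular bounding box in a local ENU frame. The corner coordinates, lane spacing, and altitude are illustrative assumptions, not the parameters used in the actual missions.

```python
# Minimal sketch: zig-zag (lawnmower) waypoints for one search partition.
# A rectangular bounding box is used for simplicity; the real partitions
# are trapezoidal. All numeric values below are illustrative.

def zigzag_waypoints(x_min, x_max, y_min, y_max, lane_spacing, altitude):
    """Return a list of (x, y, z) waypoints sweeping the area in parallel lanes."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints.append((x_min, y, altitude))
            waypoints.append((x_max, y, altitude))
        else:
            waypoints.append((x_max, y, altitude))
            waypoints.append((x_min, y, altitude))
        left_to_right = not left_to_right
        y += lane_spacing
    return waypoints

if __name__ == "__main__":
    # Example: a 40 m x 30 m partition scanned at 8 m altitude with 5 m lanes.
    for wp in zigzag_waypoints(0.0, 40.0, 0.0, 30.0, 5.0, 8.0):
        print(wp)
```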

Figure 3: Partitioning of the rectangular field map. The three green partitions represent the respective search areas for the three UAVs, while the red area represents the drop zone. Any conflicts at the drop zone are avoided using communication enabled consensus.

4 System Architecture

In this section, we provide a description of the hardware and software components used in the testbed.

4.1 Hardware

The testbed comprises three identical hexarotors. Each of them is equipped with an autopilot for UAV control and navigation, a companion computer for high-level computation, a camera-enabled vision system for object detection and localization, and a communication system for information exchange between the UAVs and with the ground control station (GCS) for monitoring.

Figure 4: Fully equipped DJI F550 hexarotor platform.

Hexarotor Platform

Multirotor UAVs, e.g., hexarotors, are known for their short flight times compared to fixed-wing and other vertical take-off and landing (VTOL) platforms. That is because multirotors rely heavily on the thrust generated by their power-hungry propulsion systems to stay airborne. However, multirotors are more agile and can hover in place, a trait which fixed-wing UAVs cannot generally achieve. We use an off-the-shelf hexarotor frame, the DJI Flamewheel F550, with customized onboard components (see Fig. 4). Although we have tested a good number of quadrotor platforms as well, we decided to work with hexarotors as they provide more stability, agility, and an adequate payload capacity with a decent flight time of 20 minutes for the mission. The DJI E310 propulsion system was selected because it provides enough thrust to carry a maximum payload of kg. A list of the main UAV components is given in Table 1.

Item Description
Frame: DJI flamewheel F550 hexacopter
Propulsion system: DJI E310 with inch propellers
Battery: 10Ah 4S LiPo battery
Flight controller: Pixhawk 2 (the cube)
On-board computer: Odroid XU4
Altitude sensor: LiDAR Lite v3 sensor with m range
Camera: ELP fish-eye camera
Gripper: Custom passive design
Table 1: UAV Hardware Components

Autopilot

We use the open-source Pixhawk 2 flight controller (see Fig. 5) along with the PX4 autopilot firmware for autonomous control and navigation of the UAVs. The PX4 software also allows us to use a companion computer, which performs high-level algorithmic computations (for example, vision processing) and sends high-level commands such as attitude, velocity, and position set-points, which the autopilot can then track. This control scheme allows the companion computer to focus on mission planning by leaving the low-level control load to the PX4 autopilot.

Figure 5: Autopilot: Pixhawk2 flight controller.

Companion Computer

A companion computer is an embedded low-power computing module that usually runs a version of the Linux OS onboard a UAV. In our system, we use an Odroid XU4 (see Fig. 6) to: (1) execute onboard vision algorithms for object detection and localization, and (2) execute the state machine which manages the overall system transitions. The Odroid board weighs around 70 g and is powered by a regulated 5V supply from the main battery.

Figure 6: Odroid XU4: Onboard companion computer.

Sensors

In addition to the inertial measurement unit (IMU) which is embedded in the flight controller for attitude stabilization, we use the following three main sensors for localization and object detection (see Fig. 7):

  • LiDAR Lite v3: A distance sensor which provides a much more precise altitude estimate than a barometer-based altitude sensor; this allows precise altitude control at low altitudes during object picking.

  • Here+ GPS receiver: We used this model as it provides better global positioning accuracy than many other receivers which we tested before.

  • 170-degree FoV fish-eye camera: An ELP wide-angle camera; it helps in object detection at low altitudes for accurate aerial grasping. The camera is mounted onto a customized ultra-nano stabilization gimbal to provide a horizontal image capture, which makes the object localization process much easier.

Figure 7: Onboard sensors: (a) LiDAR Lite altitude sensor, (b) Here+ GPS receiver, and (c) ELP fish-eye camera module.

The Gripper

A customized gripper is designed to grasp ferrous objects with a reliable pick up and drop confirmation message using our novel design (Fiaz et al., 2019). This feedback information is critical for autonomy of the aerial grasping operation. The gripper uses a specific configuration of permanent magnets embedded with a proximity sensor for grasping, and a dual impulsive release mechanism for drop. The utilization of permanent magnets gives our design numerous advantages over other grasping techniques discussed in Section 2. We cover the essentials of aerial grasping and release mechanism in Section 7. Further details on the gripper design can be found in our previous work (Fiaz et al., 2018).

4.2 Software

The system software is distributed over two main components. The first component is the flight controller, which receives set-point commands from the onboard computer, the second component. The onboard computer runs a state machine which manages the drone strategies from takeoff until the end of the mission. It also receives image frames from a USB wide-angle camera and runs an OpenCV-based vision algorithm which detects the closest objects and converts their locations in the image frame to relative position estimates. Finally, velocity set-points are generated and sent to the flight controller to guide the drone for object search, picking, or dropping.

The onboard computer software runs on the Ubuntu Linux operating system, and we use the robot operating system (ROS)3 to conveniently interface the different software components. Figure 8 shows the software architecture for each of the three UAVs in the system.
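As a concrete illustration of the companion-computer side of this split, the minimal sketch below streams velocity set-points to the PX4 flight controller. It assumes the standard MAVROS bridge between ROS and PX4 (the topic name is the MAVROS default); the rate and the example velocity values are illustrative only.

```python
#!/usr/bin/env python
# Minimal sketch: publish ENU velocity set-points to the PX4 autopilot.
# Assumes the standard MAVROS bridge; values below are illustrative.

import rospy
from geometry_msgs.msg import TwistStamped

def publish_velocity(pub, vx, vy, vz):
    """Send one ENU velocity set-point to the autopilot via MAVROS."""
    cmd = TwistStamped()
    cmd.header.stamp = rospy.Time.now()
    cmd.twist.linear.x = vx   # East
    cmd.twist.linear.y = vy   # North
    cmd.twist.linear.z = vz   # Up
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("setpoint_streamer")
    pub = rospy.Publisher("/mavros/setpoint_velocity/cmd_vel",
                          TwistStamped, queue_size=1)
    rate = rospy.Rate(20)  # PX4 offboard control expects a steady set-point stream
    while not rospy.is_shutdown():
        publish_velocity(pub, 0.5, 0.0, 0.0)  # drift slowly east (example only)
        rate.sleep()
```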

Figure 8: Software components of the system are distributed over two main parts: (1) A dedicated flight controller that handles real-time low-level vehicle stabilization and command tracking, and (2) a high-level companion computer which executes the remaining mission planning software.

5 State Machine Description

A finite state machine (FSM) is required in order to manage autonomous transitions of the system during the mission, from auto-takeoff, object search and transportation, to landing. The flow diagram of the FSM is shown in Fig. 9. Now, we provide a brief description of each of the states of the FSM.

Figure 9: Flow diagram of the state machine for the mission.

5.1 Takeoff and Go to a Predefined Position

This is an initialization state, where UAVs go to a predefined start location in their assigned operational area or partition.

5.2 Object Search

Once each UAV arrives at the predefined initial position, it automatically switches to Object Search state. In this state, each UAV scans its own assigned area looking for objects. The scanning trajectories are designed to allow maximum distance between the UAVs to avoid collisions during the object search phase. If an object is detected, the state machine switches to the Object Picking state.

5.3 Object Picking

In case an object is detected, the UAV switches to the Object Picking state. It keeps trying to pick up the object until it succeeds. In each trial, it descends gradually and checks whether the object is well placed for picking; otherwise it ascends gradually to get a wider field of view. These steps are repeated until the UAV succeeds in picking up the object. If the object has been successfully picked, a sensor attached to the gripper is activated and the drone switches to the Go to Drop state. If picking is not successful, it switches back to Object Search.

5.4 Go to Drop

Once an object is collected, the drone switches to the Go to Drop state, in which it goes to a predefined spot around the perimeter of the drop zone. Then, it starts communicating with other UAVs to negotiate its eligibility to enter the drop zone, and this is done in the next state.

5.5 Waiting to Drop

For each UAV, there is a pre-assigned waiting spot where it must wait until no other UAV is operating inside the drop zone. This is the only state that requires communication between the agents; if two agents are waiting for access permission, the permission is granted according to a priority policy, i.e., first come, first served. This simple yet effective strategy ensures that all agents operate without any risk of collision.
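A minimal sketch of this first-come-first-served arbitration is given below. The broadcast fields (state names, a waiting timestamp) and the tie-break on UAV id are assumptions standing in for the actual negotiation logic.

```python
# Minimal sketch of first-come-first-served drop-zone arbitration, assuming
# each UAV broadcasts its id, mission state, and the time it started waiting.

def may_enter_drop_zone(my_id, my_waiting_since, peers):
    """peers: list of dicts with keys 'id', 'state', 'waiting_since'."""
    for p in peers:
        if p["state"] == "DROP":               # someone is already inside
            return False
        if p["state"] == "WAITING_TO_DROP":
            if p["waiting_since"] < my_waiting_since:
                return False                    # they asked first
            if p["waiting_since"] == my_waiting_since and p["id"] < my_id:
                return False                    # deterministic tie-break (assumed)
    return True
```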

5.6 Drop

In case none of the agents is inside the drop zone, the drone navigates to the drop spot inside. Once the drop spot is reached, the drone sends a command to the gripper to release the object. Sensors on the gripper send a feedback signal to confirm whether the drop was successful. If the operation is indeed successful, the drone switches to the Object Search state.

5.7 Go Home and Land

After scanning its whole area without finding any new object, the UAV switches from the Object Search state to the Go Home and Land state, during which it flies towards a position called the home spot, where it lands. By doing so, the mission can be declared accomplished for this particular UAV.
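To make the transitions described above concrete, the following Python skeleton mirrors the FSM of Fig. 9. The predicates on the `uav` object (e.g., `object_detected()`, `gripper_feedback()`) are hypothetical stand-ins for the real perception and gripper checks.

```python
# Skeleton of the mission finite state machine; transition conditions are
# placeholder predicates, not the actual onboard implementation.

from enum import Enum, auto

class State(Enum):
    TAKEOFF = auto()
    OBJECT_SEARCH = auto()
    OBJECT_PICKING = auto()
    GO_TO_DROP = auto()
    WAITING_TO_DROP = auto()
    DROP = auto()
    GO_HOME_AND_LAND = auto()

def step(state, uav):
    """One FSM update; 'uav' is assumed to expose the boolean checks used below."""
    if state == State.TAKEOFF and uav.reached_start_position():
        return State.OBJECT_SEARCH
    if state == State.OBJECT_SEARCH:
        if uav.object_detected():
            return State.OBJECT_PICKING
        if uav.partition_fully_scanned():
            return State.GO_HOME_AND_LAND
    if state == State.OBJECT_PICKING:
        if uav.gripper_feedback():            # push-button pressed: object attached
            return State.GO_TO_DROP
        if uav.lost_object():
            return State.OBJECT_SEARCH
    if state == State.GO_TO_DROP and uav.at_waiting_spot():
        return State.WAITING_TO_DROP
    if state == State.WAITING_TO_DROP and uav.drop_zone_clear():
        return State.DROP
    if state == State.DROP and not uav.gripper_feedback():  # button released: dropped
        return State.OBJECT_SEARCH
    return state
```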

6 Object Detection and Localization

Object localization is an essential step to guide the UAV to an accurate picking spot. For an object to be localized, it first needs to be detected. A monocular camera is used along with a blob detection algorithm (OpenCV, 2015) to detect objects of a specific color and report their pixel coordinates with respect to the image frame. If more than one object is detected, then the closest object is selected. In order to know how close the object is to the UAV, we use an empirical model which fuses the UAV altitude above ground with the reported object pixels in the image to provide an accurate estimate of the object location with respect to the UAV. Such a model can be obtained by a camera calibration process at a specific altitude. In this section, we explain the camera calibration process and the UAV-to-object control set-point calculations.
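A minimal sketch of the detection and closest-object selection step is given below, using the OpenCV simple blob detector on a binary color mask (the mask itself comes from the thresholding procedure of Section 6.3). The detector parameter values are illustrative assumptions, not the tuned values used onboard.

```python
# Minimal sketch: detect circular blobs in a binary color mask and return the
# pixel offset of the blob closest to the image center. Parameter values are
# illustrative; real thresholds are tuned per color (see Section 6.3).

import cv2

def closest_blob(binary_mask):
    """Return (u, v), the pixel offset of the nearest blob center from the
    image center, or None if nothing is detected."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255          # blobs are white in the binary mask
    params.filterByArea = True
    params.minArea = 150            # reject small noise blobs (assumed value)
    params.filterByCircularity = True
    params.minCircularity = 0.6     # the payloads are circular disks
    detector = cv2.SimpleBlobDetector_create(params)

    keypoints = detector.detect(binary_mask)
    if not keypoints:
        return None
    h, w = binary_mask.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    nearest = min(keypoints,
                  key=lambda k: (k.pt[0] - cx) ** 2 + (k.pt[1] - cy) ** 2)
    return nearest.pt[0] - cx, nearest.pt[1] - cy
```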

6.1 Camera Calibration

There have been several works related to aerial object tracking using different methodologies depending on the mission requirements and the available tools. In particular, vision-based aerial object tracking has been an active field of research in the computer vision community over the past decade (Redding et al., 2006), (Yue et al., 2016), (Wu et al., 2017).

We use a fusion of vision-based object detection and UAV altitude information to accurately localize colored ground objects relative to the UAV coordinate frame. The approach mainly relies on an empirical model based on camera calibration with respect to a certain fixed altitude. The empirical model takes as inputs the pixel coordinates of the center of the detected object and the current altitude of the UAV, and outputs a position estimate of the coordinates of the object relative to the UAV coordinate frame.

Figure 10: Camera calibration setup: The camera is fixed at a specific calibration height. In each trial, the object is placed at a different location and the corresponding physical displacement from the camera center is recorded. In addition, the pixel displacement of the corresponding object center with respect to the center of the image frame is recorded. Finally, an empirical model is derived.

The calibration process (see Fig. 10) proceeds as follows. A camera sensor is fixed at a known altitude $h_c$ above an object of interest. Then, the object is horizontally displaced with known distances from the camera center (see Fig. 10). At each displacement, the reported radial pixel displacement of the object center is recorded against the corresponding actual displacement in meters. Using a specific camera sensor, a table of several measurements is constructed and a fitting function is derived. The following is a quadratic approximation of the relationship between the radial pixel displacement $\rho = \sqrt{u^2 + v^2}$ in the image frame and the estimated position of the object, in meters, relative to the UAV coordinate frame:

$r_c = a\,\rho^{2} + b\,\rho + c, \qquad (x_c,\, y_c) = \dfrac{r_c}{\rho}\,(u,\, v), \qquad (1)$

where $h_c$ is the calibration altitude, $(a, b, c)$ are the empirically fitted coefficients, $(x_c, y_c)$ are the estimated object distances relative to the UAV in meters at the calibration altitude, and $(u, v)$ is the detected object center in the horizontal image frame in pixels, measured from the image center. In order to adapt to object localization at different altitudes, the estimated distances are linearly scaled according to the ratio of the actual altitude $h$ to the calibration altitude. That is,

$(x_b,\, y_b) = \dfrac{h}{h_c}\,(x_c,\, y_c). \qquad (2)$
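The sketch below illustrates how such an empirical model could be fitted and applied, following Eqs. (1)-(2). The sample pixel/meter measurements are placeholders, not the recorded calibration data.

```python
# Minimal sketch of the empirical calibration model of Eqs. (1)-(2): fit a
# quadratic from radial pixel displacement to metric distance at the
# calibration altitude, then scale linearly with the actual altitude.
# The sample measurements below are placeholders.

import numpy as np

# (radial pixel displacement, measured ground distance in meters) at h_c
pixels = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
meters = np.array([0.0, 0.18, 0.37, 0.58, 0.82, 1.10])
a, b, c = np.polyfit(pixels, meters, 2)      # quadratic fit, Eq. (1)

def object_offset(u, v, altitude, calib_altitude):
    """Estimate the (x, y) offset of the object from the UAV, in meters."""
    rho = np.hypot(u, v)                     # radial pixel displacement
    if rho < 1e-6:
        return 0.0, 0.0
    r = a * rho**2 + b * rho + c             # metric distance at h_c, Eq. (1)
    r *= altitude / calib_altitude           # linear altitude scaling, Eq. (2)
    return r * u / rho, r * v / rho
```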
Figure 11: UAV position defined in a local fixed ENU frame.

6.2 Obtaining Control Set-points

We use position set-points to navigate the UAV towards a detected object center. The UAV position is defined with respect to a fixed local coordinate frame of ENU convention, i.e., East, North, Up (see Fig. 11). For the UAV, we also define a body-fixed frame. The flight controller takes position set-points in the local fixed frame. However, the estimated position of the object is expressed in the body frame. Therefore, a transformation of the object position from the body frame to the local frame is required in order to obtain a valid UAV position set-point. This is achieved by:

  • A rotational transformation using the body yaw angle $\psi$ (see Fig. 12), to align the body frame with the local frame.

  • A translational transformation to finally express the object position in the local fixed frame.

Figure 12: UAV body frame with y-axis pointing in the forward direction. The object is detected with respect to the body frame.

Let $p^{L}_{uav}$ denote the UAV position in the local fixed ENU frame, $p^{B}_{obj}$ the object position with respect to the UAV body frame (whose horizontal components are the estimates $(x_b, y_b)$ from Eq. 2), $R_z(\psi)\,p^{B}_{obj}$ the object position with respect to the body frame after a rotation of $\psi$ around the body frame $z$-axis, and $p^{L}_{obj}$ the object position expressed in the local fixed frame after the translational transformation. Both transformations are combined in Eq. 3 as given below:

$p^{L}_{obj} = R_z(\psi)\, p^{B}_{obj} + p^{L}_{uav}, \qquad R_z(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}. \qquad (3)$

Finally, the flight control set-point is simply $p^{L}_{obj}$, which drives the UAV to a new position towards the detected object.
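A minimal sketch of this transformation is shown below. The sign convention of the yaw rotation depends on how the body frame is defined (Fig. 12 places the body y-axis forward), so the exact convention here is an assumption.

```python
# Minimal sketch of Eq. (3): rotate the body-frame object offset by the UAV
# yaw and translate by the UAV position to obtain a local-frame set-point.
# The yaw sign convention is an assumption.

import numpy as np

def object_to_local_setpoint(p_uav_local, p_obj_body, yaw):
    """p_uav_local, p_obj_body: length-3 arrays (ENU, body); yaw in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    R_z = np.array([[c,  -s,  0.0],
                    [s,   c,  0.0],
                    [0.0, 0.0, 1.0]])
    return R_z @ np.asarray(p_obj_body) + np.asarray(p_uav_local)

# Example: object 1 m ahead of the body frame, UAV at (10, 5, 8) with yaw 90 deg.
print(object_to_local_setpoint([10.0, 5.0, 8.0], [0.0, 1.0, 0.0], np.pi / 2))
```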

6.3 Color Thresholding

Another essential part of solving this problem is a reliable and versatile methodology for detecting colored objects. To this end, we have developed a simple yet effective strategy that also allows for user input for very fast online calibration of the vision algorithm for object detection.

The appearance of the objects outdoors can vary significantly due to environmental variations such as time of day, weather conditions, etc. Hence, we design a method that does not require any training data but only requires tuning of a few threshold parameters. In essence, we simply threshold the input image in different color spaces and then merge the results. The thresholds for each color space are determined in a semi-automatic fashion: the user points the camera at a colored object and provides a tolerance value that controls the sensitivity of the resulting thresholds, and the thresholds for each color space are then computed automatically. This procedure is repeated for each color, and the determined thresholds are saved to a local configuration file and synchronized with the ROS parameter server.

Extensive experiments show that the LAB color space provides the best separation of the colors used in this challenge (blue, green, red, yellow, orange). In addition, we use the HLS color space, which provides some invariance to illumination, and lastly the RGB color space in which the images are captured. We combine the thresholded images for each color space into a single RGB image where each channel now corresponds to a thresholded image. We then convert this RGB image to a grayscale image. The color channels are weighted when converting to grayscale, effectively providing automatic weights for the different color spaces (HLS - 0.2989, LAB - 0.5870, RGB - 0.1140). The merged result now contains a thresholded image for a specific color with very little noise, thanks to this smart combination of different color spaces. We then find the contours in this thresholded image and fit the appropriate shapes (e.g., circles in our case). This methodology is very efficient and achieves close to real-time performance on an embedded platform such as the Odroid XU4. It can be tuned very quickly, and it works well not only for detecting and tracking the colored objects but also for localizing the rectangular drop zone precisely (see Fig. 13).
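The following sketch illustrates the multi-color-space thresholding and weighted merge described above. The per-color lower/upper bounds and the final binarization threshold are placeholders; in the real system they come from the saved tuning configuration.

```python
# Minimal sketch of multi-color-space thresholding. Masks from LAB, HLS, and
# RGB are stacked as image channels so the standard grayscale weights act as
# per-color-space weights (B=RGB: 0.114, G=LAB: 0.587, R=HLS: 0.299).
# Bound values and the 127 binarization threshold are assumptions.

import cv2

def detect_color(bgr, bounds):
    """bounds: dict with 'lab', 'hls', 'rgb' -> (lower, upper) uint8 triples."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

    m_lab = cv2.inRange(lab, *bounds["lab"])
    m_hls = cv2.inRange(hls, *bounds["hls"])
    m_rgb = cv2.inRange(rgb, *bounds["rgb"])

    # Stack the three binary masks as B, G, R channels, then convert to gray.
    merged = cv2.merge([m_rgb, m_lab, m_hls])
    gray = cv2.cvtColor(merged, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Fit circles to the surviving contours (OpenCV 4 return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.minEnclosingCircle(c) for c in contours]
```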

Figure 13: Object detection using color thresholding in various color spaces.

7 Aerial Grasping and Transport

In this section, we present a simple, light-weight gripping mechanism for ferrous objects with feedback on the picking state. The mechanism is based on our novel design for passive aerial grasping and transport of ferrous objects (Fiaz et al., 2019). We also describe its actuation routine and a reliable picking strategy for grasping objects outdoors, even in the presence of high wind disturbances.

7.1 The Grasping and Release Mechanism

Payload capacity is an important consideration while designing a gripper for drones. We would like to keep the grip as strong as possible while keeping the mechanism weight to a minimum. Thus, for the ferrous grasping application, we investigated various options including electromagnets, electro-permanent magnets (EPMs), and permanent magnets. Low power consumption compared to electromagnets, high payload capability, and convenient commercial availability apparently make EPMs a default choice. However, EPMs are shown to have problems sitting flush against the surface of the objects on touchdown, since they require a few seconds to activate in order to grasp a ferrous payload with full strength and need perfect alignment with the payload surface (Fiaz et al., 2018). Therefore, we instead designed our own magnetic gripper with permanent magnets and a novel impulsive, servo-actuated release mechanism, which outperforms EPM based designs. Figure 14 shows the complete gripper assembly mounted to the hexarotor frame.

Figure 14: Gripper design: (a) side view, and (b) bottom view. The 3D printed gripper enclosure holds together two servo motors, four permanent magnets, push-button for feedback, gimbaled camera with its holder, and Arduino Nano for actuation control and ROS interface.

All the assembly parts were designed and printed on the Objet30 Prime 3D printer at the RISC Lab. The whole gripper, when assembled, weighs around 250 g. The servo mount holds everything in place. The square magnetic pad at the heart of the mechanism is the key to spontaneous grasping. It employs four 6.33 mm cubes of N42 Neodymium magnets, which are collectively capable of providing a net lift of around 0.76 kg. For our experiments, the test objects we used weigh 500 g at maximum; thus, one pad does the job for us. The pad also contains, at its center, a push-button that is pressed every time the gripper picks up an object and released when it drops one. As described later in this section, this little feature is vital for ensuring flawless autonomous flow of the finite state machine (FSM) during the grasping operation.

The release mechanism, as shown in Fig. 14, consists of two high-speed servo actuators, which, when activated, push the object off the magnetic pad using their respective horns. The two servos are mounted at right angles to each other, ensuring a counter-torque (see Fiaz et al. (2018)) when activated at the same time. This concept of dual impulsive release (see Fig. 15) is quite efficient in terms of design simplicity as well as power consumption, since the only time the gripper consumes power is in the drop phase. The average power consumption over a complete pick and drop cycle of the gripper operation is only 3.48 W.

Figure 15: Dual impulsive release mechanism with counter-torque. Two equal and opposing torques of the servo actuators double the release force on the ferrous payload attached to the magnets while preventing any torsional effect in the gripper assembly (Fiaz et al., 2018).

An Arduino Nano serves as a dedicated ROS node for controlling the gripper actuation. It reads the push-button feedback from the magnetic pad and publishes the pick/drop status to Odroid (i.e., the companion computer) in real time. It is subscribed to pick/drop commands from the Odroid as well, in response to which it either activates or deactivates the release (servo) mechanism.

7.2 Camera Stabilization

In addition to the grasping and release mechanism, the gripper assembly also has a built-in ultra-nano servo gimbal for the camera module (see Fig. 16). This customized 3D printed gimbal uses two Hitech ultra-nano servos to stabilize the roll and pitch of the camera as the UAV flies and carries out various maneuvers. This keeps the camera facing down, aligned with the ground at all times, which makes object detection and localization convenient.

Figure 16: CAD animation of the 3D printed ultra-nano stabilization gimbal for the camera.

7.3 Actuation Routine

Each of the three UAVs in our testbed is equipped with an identical gripper assembly. The actuation and grasping routine for any UAV proceeds as follows. The magnets, being permanent, are active by default. In the picking state, the servos are deactivated, i.e., the horns rest above the magnetic pad. Thus, as a UAV detects, descends onto, and picks up an object, the feedback signal from the push-button switches from 0 to 1. A 0 means an object is not picked, while a 1 means that an object has been picked up successfully. Thus, a 1 message serves as a pick up confirmation for the FSM. Now, when a UAV reaches the drop zone, the Arduino (ROS node) receives a drop signal from the Odroid (FSM), and hence it activates the release mechanism. As the object is dropped, the push-button feedback switches from 1 to 0. Similar to the picking routine, a 0 message serves as the drop confirmation for the FSM. Once the FSM gets the confirmation, it proceeds to the next state and also sends a pick up signal to the Arduino, which deactivates the release mechanism again, and the process continues.
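A minimal sketch of the companion-computer side of this routine is given below. The topic names and message types are assumptions standing in for the actual interface exposed by the Arduino ROS node.

```python
# Minimal sketch of the Odroid-side gripper interface: read the push-button
# pick/drop feedback and command the dual-servo impulsive release.
# Topic names and message types are assumed placeholders.

import rospy
from std_msgs.msg import Bool

class GripperInterface(object):
    def __init__(self):
        self.attached = False   # push-button state: True = payload attached (1)
        rospy.Subscriber("/gripper/attached", Bool, self._feedback_cb)
        self._drop_pub = rospy.Publisher("/gripper/drop", Bool, queue_size=1)

    def _feedback_cb(self, msg):
        self.attached = msg.data

    def release(self):
        """Fire the release servos and wait for the drop confirmation (0)."""
        self._drop_pub.publish(Bool(data=True))
        while self.attached and not rospy.is_shutdown():
            rospy.sleep(0.1)    # wait until the push-button reports release
        self._drop_pub.publish(Bool(data=False))  # re-arm for the next pick
```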

7.4 Picking Strategy

One of the main contributions of this work is our simple yet reliable picking strategy, i.e., the way the drone approaches the object to be picked. The proposed picking strategy relies on accurate tracking of the estimated object position, based on vision and the UAV altitude above ground. As stated earlier, a LiDAR sensor is used for accurate altitude estimation. For vision-based object localization, however, objects cannot always be detected in every image frame due to environmental conditions and disturbances. For this reason, we adopt a confidence-based approach and descend towards an object only if there is high confidence that it is detected and centered within a certain region.

Figure 17: Descending cone for a UAV during the picking state.

Our picking strategy works as follows. First, once an object is detected, the UAV is commanded to perform lateral tracking of the object based on the estimated object position from vision. A confidence parameter is updated based on the frequency of detection in image frames. Next, if the confidence is higher than a predefined threshold and the UAV is within a certain vicinity around the object, the altitude is decreased gradually (see the code for details). The vicinity threshold at which the UAV is considered safe to descend is defined by a cone with decreasing radius, as shown in Fig. 17. This allows the UAV to descend more quickly when at a high altitude while being conservative at low altitude, for accurate positioning onto the object center. If the confidence is low, the UAV falls back to an altitude at which it last saw the object. This approach is encoded in yet another finite state machine shown in Fig. 18. It provided accurate and smooth centering over the detected objects in the field experiments, which we discuss in Section 9.
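The sketch below captures the confidence update and the descending-cone check described above. All gains, thresholds, and cone dimensions are illustrative assumptions rather than the tuned onboard values.

```python
# Minimal sketch of the confidence-based descent used in the picking state.
# Cone parameters, gains, and thresholds are illustrative values.

import numpy as np

class PickingController(object):
    def __init__(self):
        self.confidence = 0.0

    def update_confidence(self, detected, gain=0.2):
        # Low-pass update: rises with repeated detections, decays otherwise.
        target = 1.0 if detected else 0.0
        self.confidence += gain * (target - self.confidence)

    def allowed_radius(self, altitude, r_top=1.5, r_bottom=0.15, h_top=8.0):
        """Radius of the descending cone at the current altitude (Fig. 17)."""
        frac = np.clip(altitude / h_top, 0.0, 1.0)
        return r_bottom + frac * (r_top - r_bottom)

    def vertical_command(self, lateral_error, altitude,
                         conf_threshold=0.7, descent_rate=-0.4, climb_rate=0.3):
        """Descend only when confident and inside the cone; otherwise climb."""
        if self.confidence < conf_threshold:
            return climb_rate            # fall back towards the last good viewpoint
        if lateral_error <= self.allowed_radius(altitude):
            return descent_rate          # safe to descend
        return 0.0                       # hold altitude, keep centering laterally
```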

Figure 18: Picking state machine for a UAV.

8 Communication

In our multi-UAV testbed, we use WiFi-enabled communication between the UAVs over a dedicated 2.4 GHz outdoor network. Thanks to the partitioning approach, we require communication between the UAVs only during the dropping phase. Even then, the UAVs only need to share simple data, such as their current state (e.g., takeoff, picking, dropping, etc.) and position, with one another to avoid collisions. In this section, we describe a simple software application that uses a custom MAVLink message for intercommunication between the three UAVs. The MAVLink protocol and its simple message customization provide a reliable encoding/decoding mechanism and make the handling of communicated messages rather trivial.

As emphasized earlier, due to the limited space of the drop zone, it is necessary to guarantee a collision-free drop of the objects in case more than one UAV is in the drop state at the same time. Although a vision-based approach may be feasible for a UAV to identify a partner drone (Lin et al., 2014), (Sapkota et al., 2016) in the drop zone, it would require extra computation and tuning to reach a satisfactory level of robustness and reliability. To simplify this task, we use communication enabled consensus to share simple information, e.g., current position and current mission state, between the three UAVs. The role of this communication is to provide the UAVs with the information needed to perform a coordinated and collision-free drop. In our system, we use customized communication programs (ROS nodes) in order to allow the UAVs to have their independent ROS master nodes, which greatly reduces the chances of failure of the overall system. Figure 19 elaborates this idea in pictorial form.

Figure 19: Inter-UAV communication architecture: Figure (a) shows the standard ROS communication way with a centralized master node, which is prone to single point of failure issue. Figure (b) shows our customized communication method with distributed master nodes, to mitigate this issue.

Our software architecture runs on top of ROS, which, in principle, allows for setting up a distributed system. However, for that to work, one machine has to be defined as the master node which runs the ROS core communication interface responsible for connecting the other nodes together, either on the same machine or on others (see Fig. 19(a)). If the communication structure is reliable, e.g., a reliable transmission through the physical WiFi setup, the standard ROS communication architecture works perfectly fine, and all nodes can easily share their information through the master node. However, if one node fails to communicate with the master node at some point in time, the node execution is affected and can be interrupted, eventually leading to a node crash. In fact, we faced such problems when only one master node was used in our experiments; i.e., the ground control station (GCS) computer was the master node, and all three drones were connected to it as slave nodes. A major problem would arise whenever a drone lost connection to the master node on the GCS: the node execution would be interrupted, which in turn would lead to mission interruptions.

In order to solve this issue, we resort to a distributed master node architecture (see Fig. 19(b)). This is achieved by letting each drone run its own ROS core master node locally, in order to avoid the dependency on a remote master and the occasional disconnections resulting from it. However, with this method, the other UAVs' information (topics, in ROS terminology) is no longer available, and a special communication pipeline is needed. Therefore, we customized a simple ROS node on each drone to handle the communication of the required information, i.e., positions and mission states, using the UDP protocol. We chose the UDP protocol as it does not involve handshaking mechanisms, which reduces latency and increases the data throughput.

Each communication node performs two tasks. First, it subscribes to its own UAV position and mission state, encodes them in a customized MAVLink message, and sends the message to the other UAVs. Second, it listens to messages from the other UAVs, decodes them using the same definition of the custom MAVLink message, and publishes them locally as ROS topics to be used by other local nodes. The MAVLink protocol is used because it provides a simple way of defining custom messages as well as simple encoding and decoding functionalities. It also includes a checksum in the low-level message construction that helps detect corrupted messages. The custom MAVLink message packet has a payload of bytes, which includes a UAV-ID, latitude and longitude, and mission state information (e.g., takeoff, picking, dropping, etc.). Figure 20 shows the contents of this message.
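A minimal sketch of the UDP broadcast/listen pattern is shown below. For brevity, a plain struct-packed packet is used here as a stand-in for the custom MAVLink message; the packet layout, port number, and broadcast address are assumptions.

```python
# Minimal sketch of the inter-UAV broadcaster: each UAV sends its id, GPS
# position, and mission state over UDP and polls for its peers. A plain
# struct packet stands in for the custom MAVLink message; all constants
# below are assumed values.

import socket
import struct

PACKET_FMT = "<Bddb"   # uav_id (uint8), lat (double), lon (double), state (int8)
PORT = 14650           # assumed port on the dedicated 2.4 GHz network

def make_socket():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    sock.setblocking(False)
    return sock

def broadcast_state(sock, uav_id, lat, lon, state):
    packet = struct.pack(PACKET_FMT, uav_id, lat, lon, state)
    sock.sendto(packet, ("255.255.255.255", PORT))

def poll_peers(sock):
    """Drain the socket and return the latest packet from each peer."""
    peers = {}
    while True:
        try:
            data, _ = sock.recvfrom(64)
        except BlockingIOError:
            return peers
        uav_id, lat, lon, state = struct.unpack(PACKET_FMT, data)
        # A real node would skip its own id and republish these as ROS topics.
        peers[uav_id] = {"lat": lat, "lon": lon, "state": state}
```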

Figure 20: Content of custom MAVLink message used in inter-UAV communication.

9 Experiments and Results

We now describe the simulation environment used for verifying the mission execution, followed by results from outdoor experiments on the real system.

9.1 Simulations

Before experimenting with the physical testbed outdoors, we verified our approach inside a simulation environment. We used the V-REP simulator for testing the successful completion of the mission using an identical three-UAV system. The only key difference between the simulation and reality is that we used quadrotor UAVs in the V-REP environment, which does not affect the mission results significantly. These simulations greatly helped us in tuning the color thresholding parameters for detecting objects of various colors and in determining efficient scanning schemes for the UAV partitions. They also enabled us to verify the correct execution of the FSM for the mission.

After several successful simulation runs in V-REP, and after fine tuning of the FSM and the vision parameters, we were ready to do outdoor experiments with the real testbed. A snapshot of the V-REP multi-UAV simulation is shown in Fig. 21. A link to the video of a successful simulation run is also available in Section 11.

Figure 21: An example of the system setup while being simulated in the V-REP robotic simulator, where three identical UAVs are considered. The search area is divided into three partitions as before, and colored objects are placed randomly in the search space. The downward facing camera view for each drone is shown in separate windows on the top.

9.2 Outdoor Experiments

In the following, we show results from a single UAV test and a complete experimental run for the search and rescue mission, which demonstrates autonomous exploration, grasping and coordinated transport. A video of the experiment is available in Section 11 as well.

Figure 22: Single UAV testing; (a) a snapshot of the drone in the search phase, (b) a snapshot of the drone after an object is found and selected, (c) a snapshot of the drone while descending over the object, (d) a snapshot of the drone while aligning over the object to prepare for picking, (e) a snapshot of the drone while picking the object, and (f) a snapshot of the drone after picking the object, going to the drop zone.

Each drone was tested individually in order to verify correct execution of each operational task during the mission. This included area exploration, object detection, object picking test, dropping test, and eventually the overall autonomous mission which is managed by the FSM. Figure 22 shows snapshots of testing the individual tasks during an autonomous mission for a single UAV.

Figure 23: A screenshot overlay of field partitioning in the outdoor experiments.

A multi-UAV experiment with the full mission, i.e., the RISCuer system, was then executed, where the field was divided into three partitions as described before. The corner points of each partition were provided to the corresponding UAV only (see Fig. 23). In each operation area, two colored ferrous discs with cm radius were placed on wooden stands of cm height, at random locations. Then, the UAVs were given a start signal and executed the complete mission autonomously afterwards. A Linux computer (i.e., the GCS) was used to monitor the mission execution and UAV states remotely. Several runs were performed to confirm the reliable operation of the testbed and the successful execution of the mission. All runs were successful, with an average completion time of about minutes. Figure 24 shows a screenshot from these outdoor experiments.

Figure 24: Snapshot from the outdoor experiments.

9.3 Discussion

In our experiments, we used rosbags (the data logging system in ROS) for data logging, as they provide convenient tools for data visualization and time-stamped mission replays. Figure 25 shows a snapshot of a log replay of one of the three drones during a complete mission. The logged data include the time-stamped processed grayscale image, in which an object is encircled if it is detected, the mission state, the distance error to the currently detected object, and the gripping status.

Figure 25: Log replay of a drone for a complete mission.

Across several autonomous missions, a main factor of success is accurate object centering with respect to the UAV gripper, which is a result of accurate object localization using vision. In Fig. 26, a smooth descent can be seen as the object is being centered with respect to the gripper center, while Fig. 27 shows the distance error between the UAV and the detected object during the picking state. These plots validate the effectiveness of our approach in comparison with other recent works, such as (Lee et al., 2019) and (Beul et al., 2019).

Figure 26: Smooth altitude trajectory of the UAV during the picking state.
Figure 27: Distance error between the UAV position and the detected object during the picking state.

The experiments also showed the effectiveness of our proposed grasping mechanism over the EPM based solution. In particular, we performed a comparative study with EPMs in terms of power consumption as well as payload handling capabilities. Based on this study, the success rates for autonomous pick ups were observed to be 53% for EPMs and 97% for our passive design, respectively. In addition, the study also showed our mechanism to be more power efficient (see Fiaz et al. (2018) for details). This further strengthens the claim that our system is more reliable than several recent works that use EPMs as their solution for autonomous aerial transport.

We would also like to highlight one of the main challenges that we faced during the course of this work. For the sake of simplicity of the system, we used blob detection methods on low-computation modules for vision-based object detection. Such methods are usually tuned for particular colors under specific environmental conditions, e.g., light intensity. Therefore, it is challenging to use the same parameters to detect the same colors in different lighting conditions, which we encountered during the outdoor field tests. More complex methods can be used, but at the expense of more computation power. One possible solution is to use adaptive vision parameters (i.e., color thresholds) according to a pre-trained model which accounts for environmental changes such as light intensity. The trained model can then be executed rapidly on low-computation modules.

10 Conclusions and Prospects

In this work, we presented a fully integrated multiagent UAV system for searching, collecting, and transporting objects with unknown locations in an outdoor environment. The proposed system simplifies such complex tasks by introducing full autonomy, which extends its application domains to real-life situations such as search and rescue missions and commercial package delivery. Objects were localized based on a monocular camera and the drone altitude, and picked up using our customized novel passive grasping mechanism with feedback. The overall system architecture was implemented and tested successfully in an outdoor environment using a simple yet effective approach with low-cost hardware, which makes it an appealing research testbed for future multiagent control algorithms. Further enhancements can be made in the design as well as the cooperative control techniques to achieve robust performance of the system under varying environmental conditions.

11 Supplementary Material

Footnotes

  1. Journal: Unmanned Aerial Systems, Elsevier
  2. Supported by funding from King Abdullah University of Science & Technology (KAUST)
  3. http://www.ros.org

References

  1. A distributed framework for real time path planning in practical multi-agent systems. IFAC-PapersOnLine 50 (1), pp. 10626–10631.
  2. A survey of unmanned aerial vehicle (UAV) usage for imagery collection in disaster research and management. In 9th International Workshop on Remote Sensing for Disaster Response, pp. 8.
  3. Health aware stochastic planning for persistent package delivery missions using quadrotors. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3389–3396.
  4. Control and path planning of quadrotor aerial vehicles for search and rescue. In SICE Annual Conference (SICE), 2011 Proceedings of, pp. 700–705.
  5. The flight assembled architecture installation: cooperative construction with flying machines. IEEE Control Systems 34 (4), pp. 46–64.
  6. Fast autonomous landing on a moving target at MBZIRC. In 2017 European Conference on Mobile Robots (ECMR), pp. 1–6.
  7. Team NimbRo at MBZIRC 2017: fast landing on a moving target and treasure hunting with a team of micro aerial vehicles. Journal of Field Robotics 36 (1), pp. 204–229.
  8. Unmanned aerial vehicle (UAV) for monitoring soil erosion in Morocco. Remote Sensing 4 (11), pp. 3390–3416.
  9. Construction of cubic structures with quadrotor teams. In Robotics: Science and Systems VII, pp. 177–184.
  10. An intelligent gripper design for autonomous aerial transport with passive magnetic grasping and dual-impulsive release. In 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 1027–1032.
  11. A hybrid compositional approach to optimal mission planning for multi-rotor UAVs using metric temporal logic. arXiv preprint arXiv:1904.03830.
  12. Fast, composable rescue mission planning for UAVs using metric temporal logic. arXiv preprint arXiv:1912.07848.
  13. Passive aerial grasping of ferrous objects. IFAC-PapersOnLine 50 (1), pp. 10299–10304.
  14. Passive magnetic latching mechanisms for robotic applications. Master’s Thesis, KAUST.
  15. Impulsive release mechanism and method. Google Patents, US Patent App. 16/266,800.
  16. Drone-assisted communications for remote areas and disaster relief. arXiv preprint arXiv:1909.02150.
  17. The photogrammetric potential of low-cost UAVs in forestry and agriculture.
  18. Three-dimensional dynamic surface grasping with dry adhesion. The International Journal of Robotics Research, pp. 0278364915584645.
  19. Versatile aerial grasping using self-sealing suction. In IEEE International Conference on Robotics and Automation, Stockholm.
  20. A mission management system for complex aerial logistics by multiple unmanned aerial vehicles in MBZIRC 2017. Journal of Field Robotics 36 (5), pp. 919–939.
  21. Vision-based formation for UAVs. In 11th IEEE International Conference on Control Automation (ICCA), pp. 1375–1380.
  22. UAV intelligent path planning for wilderness search and rescue. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 709–714.
  23. Mohammed Bin Zayed International Robotics Challenge. http://www.mbzirc.com/
  24. Cooperative grasping and transport using multiple quadrotors. In Distributed Autonomous Robotic Systems, pp. 545–558.
  25. Cooperative manipulation and transportation with aerial robots. Autonomous Robots 30 (1), pp. 73–86.
  26. Control of multiple quad-copters with a cable-suspended payload subject to disturbances. IEEE/ASME Transactions on Mechatronics.
  27. Collaborative object picking and delivery with a team of micro aerial vehicles at MBZIRC. In 2017 European Conference on Mobile Robots (ECMR), pp. 1–6.
  28. Blob detection using OpenCV. https://www.learnopencv.com/blob-detection-using-opencv-python-c/
  29. Practical aerial grasping of unstructured objects. In 2011 IEEE Conference on Technologies for Practical Robot Applications, pp. 99–104.
  30. Vision-based target localization from a fixed-wing miniature air vehicle. In 2006 American Control Conference, 6 pp.
  31. Lessons learned: application of small UAV for urban highway traffic monitoring. In 45th AIAA Aerospace Sciences Meeting and Exhibit, pp. 2007–596.
  32. Vision-based unmanned aerial vehicle detection and tracking for sense and avoid systems. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1556–1561.
  33. Vision-based autonomous landing of an unmanned aerial vehicle. In Robotics and Automation, 2002. Proceedings. ICRA’02. IEEE International Conference on, Vol. 3, pp. 2799–2804.
  34. Planning and control of aggressive maneuvers for perching on inclined and vertical surfaces. In IDETC/CIE, Boston, pp. 1–10.
  35. Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments. Journal of Field Robotics 28 (6), pp. 854–874.
  36. Vision-based real-time aerial object localization and tracking for UAV sensing system. arXiv preprint arXiv:1703.06527.
  37. A game theoretic approach to distributed coverage of graphs by heterogeneous mobile agents. IFAC Proceedings Volumes 46 (27), pp. 309–315.
  38. Communication-free distributed coverage for networked systems. IEEE Transactions on Control of Network Systems 4 (3), pp. 499–510.
  39. A fast target localization method with multi-point observation for a single UAV. In 2016 Chinese Control and Decision Conference (CCDC), pp. 5389–5394.
  40. The application of small unmanned aerial systems for precision agriculture: a review. Precision Agriculture 13 (6), pp. 693–712.