FisheyeMultiNet: Real-time Multi-task Learning Architecture for Surround-view Automated Parking System
Automated parking is a low-speed manoeuvring scenario which is quite unstructured and complex, requiring full 360° near-field sensing around the vehicle. In this paper, we discuss the design and implementation of an automated parking system from the perspective of camera-based deep learning algorithms. We provide a holistic overview of an industrial system covering the embedded system, use cases and the deep learning architecture. We demonstrate a real-time multi-task deep learning network called FisheyeMultiNet, which detects all the objects necessary for parking on a low-power embedded system. FisheyeMultiNet runs at 15 fps for 4 cameras and has three tasks: object detection, semantic segmentation and soiling detection. To encourage further research, we release a partial dataset of 5,000 images containing semantic segmentation and bounding box detection ground truth via the WoodScape project.
Keywords: Automated Parking, Visual Perception, Embedded Vision, Object Detection, Deep Learning.
1 Introduction
Recently, Autonomous Driving (AD) has gained huge attention, backed by significant progress in deep learning and computer vision algorithms, and is considered one of the most trending technologies across the globe. Within the next 5-10 years, AD is expected to be deployed commercially. Currently, most automotive original equipment manufacturers (OEMs) around the world are running development projects focused on AD technology. The complexity of the system must remain acceptable for producing commercial cars, which places limitations on the hardware used in production. Fisheye cameras offer a distinct advantage for automotive applications: given their extremely wide field of view, they can observe the full surroundings of a vehicle with a minimal number of sensors. Typically, four cameras are all that is required for full 360° coverage of a car (Figure 1). This advantage comes at a cost in the significantly more complex projection geometry exhibited by fisheye cameras.
Convolutional neural networks (CNNs) have become the standard building block for the majority of visual perception tasks in autonomous vehicles. Bounding-box object detection was one of the first successful applications of CNNs, detecting not only pedestrians and vehicles but also their positions. Recently, semantic segmentation has matured, starting with the detection of roadway objects such as road surface, lanes, road markings and curbs. CNNs are also becoming competitive for geometric vision tasks like depth estimation and Visual SLAM. Despite rapid progress in the computational power of embedded systems and in specialized CNN hardware accelerators, real-time performance of semantic segmentation remains challenging. In this paper, we focus on a deep learning architecture for an automated parking system, which is relatively unexplored in the literature.
The rest of the paper is structured as follows. Section 2 provides an overview of parking system use cases and necessary visual perception modules. Section 3 details a concrete implementation of efficient multi-task architecture with results and discusses how it fits into the overall system architecture. Finally, Section 4 summarizes the paper and provides potential future directions.
2 Automated Parking System
2.1 Parking Use cases
Parallel parking: The system attempts to align the vehicle in parallel to the curb or the road, as illustrated in Figure 2(a). In such a strategy, the vehicle usually enters the slot in one maneuver, and further maneuvers are required for alignment with the curb and the surrounding vehicles. Robust object detection and curb classification have to be implemented to minimize the distance between the vehicle and the curb while ensuring the vehicles in front and behind are avoided. Conventional ultrasonic sensors are capable of detecting curbs; however, fusion with cameras greatly enhances classification and position accuracy.
Perpendicular parking: The system tries to find a lateral parking slot whose width is sufficient for the vehicle, with additional room for opening the doors and for safety distances. If a slot of the required size is found, a trajectory that minimizes the number of maneuvers is planned to reach the slot target. This parking strategy can be performed in the backward direction, as illustrated in Figure 2(b), or in the forward direction, as shown in Figure 2(c). Ultrasonic sensors are quite unreliable in detecting the corners of other vehicles due to missing and incorrect reflections of the ultrasonic waves, resulting in multiple re-measurements to improve the detection. This may lead to additional maneuvers caused by the error introduced from using ultrasonic sensors only. Moreover, ultrasonic sensors are only useful for parking between two objects, as they are unable to detect road markings. Fusion with a camera sensor improves performance in multiple ways. For instance, computer vision techniques can provide complementary information for depth estimation using Structure from Motion (SfM). Cameras are also able to detect white line markings, which allows detection of individual slots where multiple empty slots occur in a group.
Ambiguous parking: This parking scenario is neither parallel nor perpendicular; the orientation must be inferred from the surrounding vehicles, as in Figure 2(d). Due to the increased detection range and the complete sensor coverage around the vehicle that cameras provide, computer vision enables a more appropriate reaction of the ego-vehicle in such situations. For instance, ultrasonic sensors do not provide information about the ego-vehicle's flank, so objects have to be tracked blindly in that area using the vehicle's motion, whereas fisheye cameras provide this information in a 360° surround view. By using the complementary color information provided by cameras, systems can also detect suddenly occurring objects with higher confidence and thus react in a more timely manner than with ultrasonics alone.
Fishbone parking: Figure 2(e) shows an example of fishbone parking, where ultrasonic sensors are severely limited. To detect the slot orientation using ultrasonic sensors only, the vehicle has to drive inside the slot to infer the orientation from the surrounding vehicles, as the density of reflections is too low while the vehicle is outside the slot. Therefore, detection of such a slot during the search phase is not possible, and this use case cannot be covered using ultrasonic sensors alone. Fusion with a camera extends the detection range using both object detection and slot marking detection.
Home parking: Thanks to the huge progress in computer vision and self-parking technology, higher-level applications have been introduced for more comfort and a better driving experience. One such application is "Home Parking", where the system is trained by the driver to follow a set trajectory and park in a particular spot. The surrounding area is stored on the system and particular landmarks are recorded. The vehicle is thereby capable of localizing itself within the environment in the future, driving completely autonomously onto the stored trajectory and following it to its regular parking space.
Valet parking: Significant progress has been made in automated parking even without a stored trajectory. In this case, the system is completely autonomous in its slot search, selection, and parking, without any prior knowledge of the environment or a predefined trajectory.
2.2 Necessary Vision Modules
Parking slot detection: The first and foremost step in automated parking is the selection of a valid parking space in which the car can be safely parked. An ideal parking slot detection algorithm should detect the several types of parking slots shown in Figure 2. Parking slot detection can be broken down into several stages: it involves detection of line markings, curbs, vehicles, shrubs and walls, as all of these are necessary for recognizing an open parking slot. Additionally, it is of vital importance that an accurate measurement of the width and length of the slot can be made, to ensure the vehicle can safely fit within it.
Freespace detection: The final objective of an autonomous parking system, or of a complete autonomous driving system, is navigating the car to a target. Therefore, information about the freespace (the area free of pedestrians, vehicles, cyclists or any other objects that pose a risk of damage or injury when driven over) or "driveable" area is critical. Such information is also crucial in situations where evasive maneuvers are needed in real time to minimize the risk of collision.
Pedestrian detection: Collision risk usually arises from object classes that can move; one such class is pedestrians. Pedestrian detection is a challenging task for several reasons. For instance, pedestrians are very difficult to track because their motion can be erratic and hard to predict. A pedestrian may suddenly appear behind a vehicle while it is attempting to park. Knowing the object belongs to the pedestrian class, the system should expect it to move away and thus should not abort the maneuver at that moment. Pedestrian classification is very helpful in other autonomous driving situations as well, e.g. when a child suddenly crosses the street and the vehicle has to brake sharply. Infrared cameras can be used to maximize the performance of pedestrian detection systems due to their capability to capture thermal energy, but this can be costly in production systems.
Vehicle detection: Vehicle detection is one of the most important automotive computer vision tasks, and it is helpful in the scope of autonomous parking for many reasons. For example, it enables the system to distinguish between high static obstacles, such as shrubs or walls, and vehicles. In a parking situation it is of vital importance that the system can differentiate between a vehicle, which has the ability to move and obstruct the planned trajectory of our car, and a wall which we plan to park alongside, knowing it will remain stationary throughout the manoeuvre. Typically, in the AD scenario, the system has to react to dynamic vehicles surrounding the ego-vehicle. Such vehicles have to be tracked so that vehicles reappearing after occlusion are handled; the first step towards this is vehicle classification.
Cyclist detection: Cyclists could simply be classified as pedestrians; however, cyclists move faster and with less maneuverability. Thus, distinguishing between cyclists and pedestrians provides additional information that helps the system track such objects.
Soiling detection: Cameras mounted on the vehicle are directly exposed to the external environment, and there is a good chance that they become soiled in bad weather conditions such as rain, fog or snow. Moreover, dust and mud strongly degrade computer vision performance. Compared to other types of sensors, cameras suffer a much higher performance degradation due to soiling. Thus, it is critical to robustly detect soiling on the cameras, especially for higher levels of autonomous driving. Soiling detection was first implemented to alert the driver that the environment perception system will perform with degraded accuracy. In a high-level autonomous system, relying on information from soiled cameras without prior knowledge that it may be incorrect could have fatal consequences.
3 Parking System Architecture
3.1 Overall Software Architecture
The block diagram of our system is illustrated in Figure 3. The first step in an industrial system is the selection of the SOC (System on Chip) for the embedded platform, based on criteria including performance (tera operations per second (TOPS), utilisation, bandwidth), cost, power consumption, heat dissipation, high-to-low-end scalability and programmability. The SOC choice defines the computational bounds for the design of the algorithms. A typical embedded system is shown in the top left of the block diagram. In computer vision, deep learning plays a dominant role in various recognition tasks and, gradually, in geometric tasks such as depth and motion estimation as well. The progress in CNNs has also led hardware manufacturers to include custom hardware intellectual property cores that provide high throughput, measured in TOPS. The current system on which we develop our algorithms offers a limited TOPS compute budget at low power consumption.
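As a rough illustration of how such a TOPS budget constrains the network design, the sketch below estimates the fraction of an SOC's compute budget consumed by running one network on every camera. All numbers here (MACs per frame, peak TOPS, achievable efficiency) are hypothetical placeholders, not the figures of the actual system.

```python
def soc_utilization(macs_per_frame, fps, num_cameras, soc_tops, efficiency=0.5):
    """Fraction of the SOC compute budget consumed by the network.

    macs_per_frame: multiply-accumulate ops for one forward pass (hypothetical).
    soc_tops:       the accelerator's peak throughput in TOPS (hypothetical).
    efficiency:     fraction of peak TOPS achievable in practice (assumption).
    """
    ops_per_second = 2 * macs_per_frame * fps * num_cameras  # 1 MAC = 2 ops
    return ops_per_second / (soc_tops * 1e12 * efficiency)

# e.g. a 1 GMAC network at 15 fps on 4 cameras against a 1 TOPS SOC
# at 50% efficiency consumes 24% of the budget.
```

A designer can use such a back-of-the-envelope check to decide how many channels, tasks, or cameras fit within the chosen SOC before any training takes place.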
The necessary object detection modules were discussed in Section 2.2. In previous systems, some modules, for instance pedestrian detection, were implemented using machine learning techniques, while others, like parking slot detection, used classical computer vision techniques. Due to recent advancements in deep learning, all of the necessary vision modules can now be realized using deep learning models. Thus, we propose a unified multi-task architecture performing all of these tasks that runs on a hardware accelerator (green in the block diagram, Fig. 3). This is discussed in more detail in the next section. The deep learning model provides the functionality necessary for parking. However, to add robustness, additional cues like motion estimation and depth estimation can be used, along with other sensors such as ultrasonics and radar. In this paper, we focus on the basic solution for a parking system using deep learning only. Objects detected in the four cameras are recorded in image coordinates, mapped to world coordinates to create a common representation, and fed into a virtual map to plan the maneuvering of the car for automated parking. Road markings and curbs are handled in the same way, also being sent to the map, building a viable model of the world around us. Bounding boxes can be established around objects such as pedestrians and vehicles by assuming a flat ground plane and mapping the foot-point (the intersection of the object with the ground plane) to a world position using the vehicle and camera calibration. Depth estimation can handle cases where the foot-point is occluded or the road is not flat.
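The flat-ground foot-point mapping described above can be sketched as follows. This is a minimal illustration using a simplified rectified pinhole model rather than the full fisheye calibration used in the real system; the intrinsic matrix and camera mounting height below are hypothetical.

```python
import numpy as np

def footpoint_to_world(u, v, K, cam_height):
    """Project a pixel foot-point (u, v) to ground-plane coordinates.

    Assumes a rectified pinhole camera looking along +Z, mounted at
    cam_height above a flat ground plane, with the camera y axis
    pointing downwards (so the ground is the plane y = cam_height).
    K is the 3x3 intrinsic matrix. This is a simplification of the
    fisheye projection geometry used in the actual system.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    if ray[1] <= 0:
        raise ValueError("ray does not intersect the ground plane")
    scale = cam_height / ray[1]                     # intersect y = cam_height
    point = ray * scale
    return point[0], point[2]                       # lateral offset, forward distance
```

With vehicle extrinsics applied on top, each camera's foot-points land in a common world frame, which is what the virtual map consumes.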
3.2 Proposed Multi-task Architecture
Various visual perception tasks like semantic segmentation, object detection, motion segmentation, depth estimation and soiling detection are commonly addressed using an encoder-decoder style architecture in deep learning. Many works have focused on solving these tasks independently. However, multi-task learning [20, 5, 21] enables solving these tasks with a single model. The main advantage of a multi-task network is its high computational efficiency, which makes it well suited to a low-cost embedded device. In a simple scenario, a multi-task network solving two tasks with a common encoder is far more efficient than two independent networks, each spending processing power on encoder computation that could be shared. An additional task can then be solved with the remaining computing resources, which offers scalability for adding new tasks at minimal computational complexity. Prior work has provided a detailed overview of the negligible incremental computational complexity incurred as the number of tasks jointly solved by a multi-task network increases. Furthermore, using a pre-trained encoder (say, ResNet) as the common encoder stage in multi-task networks reduces training time and alleviates the daunting requirement for massive amounts of training data. Reusing the encoder also provides regularization across the different tasks.
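The efficiency argument can be made concrete with a toy cost model, where the encoder dominates and decoders are comparatively cheap. The cost units and numbers below are arbitrary and purely illustrative:

```python
def mtl_cost(encoder_cost, decoder_costs):
    """Compare the compute cost of a shared-encoder multi-task network
    against independent single-task networks (arbitrary cost units).

    Returns (shared, independent): the shared design pays for the
    encoder once, the independent design pays for it per task.
    """
    shared = encoder_cost + sum(decoder_costs)
    independent = sum(encoder_cost + d for d in decoder_costs)
    return shared, independent

# With an encoder costing 100 units and three decoders of 10 units each,
# the shared network costs 130 units versus 330 for independent networks.
```

The gap widens with every added task, which is exactly the scalability property noted above.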
Network architecture: We propose a multi-task network called FisheyeMultiNet, with a shared encoder and three independent decoders that perform joint semantic segmentation, object detection and soiling detection, as shown in Figure 4. The semantic segmentation decoder provides valuable lane marking, road and sidewalk information, while the object detection decoder provides bounding boxes of pedestrians, cyclists, vehicles, etc. These two tasks primarily provide solutions to the major vision modules discussed in Section 2. The soiling detection decoder outputs the presence of external contamination on the camera lens, providing a classification per tile so that the soiling can be localized in the image. We treat camera soiling detection as a mixed multilabel-categorical classification problem: the classifier jointly labels a single image with a binary indicator array, where each 0 or 1 corresponds to a missing or present class respectively, and simultaneously assigns a categorical label. Typically, opaque soiling arises from mud and dust, while transparent soiling arises from water and ice.
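The per-tile soiling output described above can be decoded as follows. This is a hypothetical post-processing sketch (the tile grid shape, class count and threshold are illustrative, not the network's actual configuration):

```python
import numpy as np

def decode_soiling(tile_scores, threshold=0.5):
    """Turn per-tile class scores into soiling labels.

    tile_scores: array of shape (H_tiles, W_tiles, n_classes) with
                 per-class scores in [0, 1] for each image tile.
    Returns (present, dominant):
      present  - binary indicator array (multilabel part: which
                 soiling classes are present in each tile),
      dominant - index of the highest-scoring class per tile
                 (the categorical part).
    """
    present = (tile_scores >= threshold).astype(int)
    dominant = tile_scores.argmax(axis=-1)
    return present, dominant
```

Localizing soiling per tile lets downstream modules discount only the affected image regions rather than discarding the whole camera.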
The raw fisheye images are passed to a common encoder built using the ResNet10 encoder. This encoder is pre-trained on ImageNet and then trained on raw fisheye WoodScape images. The semantic segmentation decoder is built using the FCN8 decoder with skip connections from the ResNet10 encoder. The object detection decoder is built using a grid-level softmax layer, while the soiling decoder is built using a grid-level softsign layer. Categorical cross-entropy is used as the loss for semantic segmentation and soiling detection, and a separate detection loss is used for the object detection task. The total loss of the network is expressed as a weighted arithmetic combination of the individual task losses and optimized using the Adam optimizer. The intent is a drastic improvement in available memory and computational efficiency at the cost of only a small reduction in accuracy.
We make use of several standard optimization techniques to further improve the runtime and achieve 15 fps for four cameras on an automotive-grade low-power SOC. Examples include: (1) reducing the number of channels in each layer, (2) reducing the number of skip connections for memory efficiency, and (3) restricting the segmentation decoder to the image region below the horizon line (only for roadway objects).
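The third optimization lends itself to a quick estimate. Under the simplifying assumption that the segmentation decoder's cost scales linearly with the number of image rows it processes, the saved fraction is just the share of rows above the horizon (the resolution and horizon row below are hypothetical):

```python
def segmentation_crop_savings(image_height, horizon_row):
    """Fraction of segmentation-decoder compute saved by restricting it
    to the image region below the horizon line (roadway objects only).

    Assumes decoder cost scales linearly with the rows processed,
    which ignores fixed per-image overheads.
    """
    processed_rows = image_height - horizon_row
    return 1.0 - processed_rows / image_height

# e.g. cropping at row 160 of a 480-row image saves one third
# of the segmentation decoder's compute.
```

Since roadway classes such as road surface, lanes and curbs rarely appear above the horizon in near-field parking views, this saving comes at little cost in accuracy.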
Datasets: The development of our architecture was primarily done on our internal parking dataset, which originates from three distinct geographical regions: USA, Europe, and China. While the majority of the data was obtained from saloon vehicles, a significant subset comes from a sports utility vehicle (SUV), ensuring a strong mix of sensor mechanical configurations. The setup consists of four megapixel RGB fisheye cameras with a wide horizontal field of view. After the collection of images, an instance selection algorithm is applied to remove redundancy and produce the final dataset. To the best of the authors' knowledge, this is the first public dataset for automated parking. The dataset is split into training, validation, and testing chunks. This dataset and the baseline multi-task model will be made available to the research community via our WoodScape project.
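A practical detail of such splits is keeping the assignment deterministic, so that re-running the pipeline never leaks a sample between chunks. One common way, sketched below, is to hash each sample identifier; the split ratios shown are hypothetical and not the ratio used in the paper:

```python
import hashlib

def assign_split(sample_id, ratios=(0.6, 0.2, 0.2)):
    """Deterministically assign a sample to train/val/test.

    Hashes the sample id to a pseudo-uniform value in [0, 1) and
    buckets it by the given (hypothetical) ratios, so the same id
    always lands in the same chunk across runs.
    """
    bucket = int(hashlib.md5(sample_id.encode()).hexdigest(), 16) % 1000 / 1000
    if bucket < ratios[0]:
        return "train"
    if bucket < ratios[0] + ratios[1]:
        return "val"
    return "test"
```

Hash-based assignment also stays stable when new samples are appended to the dataset, unlike a random shuffle.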
3.3 Results and Discussion
In this section, we explain the experimental settings, including the datasets used and training algorithm details, and discuss the results. We used our fisheye parking dataset described above. We implemented our baseline object detection and semantic segmentation networks, as well as our proposed multi-task network, using Keras. All input images were downscaled because of the memory requirements of running multiple tasks. Table 1 summarizes the results obtained for the single-task (STL) independent networks and the multi-task (MTL) networks on our parking fisheye datasets.
One of the main challenges of MTL is balancing the loss functions of all three tasks, as the magnitudes of the losses vary at different scales; this led to faster convergence of certain tasks and divergence of others. To handle this, we make use of a weighted loss function to normalize the losses, updating the task weights every epoch based on the observed loss gradients, in a similar fashion to GradNorm. We tested three configurations of the MTL loss: the first uses a simple sum of the segmentation loss and the detection loss, while the other two use a weighted sum of the task losses in which the segmentation loss is given a higher weight. This compensates for the difference in task loss scaling and consistently improves the performance of the segmentation task across all three datasets. Experimental results show that the performance of the MTL networks is marginally lower than that of the STL networks. However, the computational gains offered by multi-task networks, and the potential to improve performance by further fine-tuning, make multi-task networks the more suitable option for future embedded deployment.
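A simplified version of this per-epoch rebalancing can be sketched as follows. Note that true GradNorm normalizes gradient norms at a shared layer, whereas this sketch uses loss ratios relative to the initial losses as a cheap proxy; the function and its parameters are an illustrative assumption, not the paper's exact scheme:

```python
def update_task_weights(weights, losses, initial_losses, alpha=1.0):
    """Rebalance task-loss weights from relative training rates.

    Tasks whose loss has dropped less than average (higher loss ratio)
    get a larger weight, slower-converging tasks are thus boosted.
    alpha controls how aggressively weights are pushed apart.
    Weights are renormalized to sum to the number of tasks.
    """
    ratios = [l / l0 for l, l0 in zip(losses, initial_losses)]
    mean_ratio = sum(ratios) / len(ratios)
    targets = [(r / mean_ratio) ** alpha for r in ratios]
    new_weights = [w * t for w, t in zip(weights, targets)]
    norm = len(new_weights) / sum(new_weights)
    return [w * norm for w in new_weights]
```

Running this once per epoch keeps the weighted task losses on comparable scales, preventing one task's gradient from drowning out the others.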
4 Conclusion
In this paper, we provided a high-level overview of a commercial-grade automated parking system. We covered various aspects of the system in detail, including the embedded system architecture, the parking use cases which need to be handled, and the vision algorithms which solve these use cases. We focused on a minimal system which can be designed via an efficient multi-task learning architecture using four fisheye cameras providing a 360° view surrounding the vehicle. We provided detailed quantitative results for the proposed deep learning architecture and showed that the accuracy of an MTL network is only marginally lower than that of an STL network, despite the reduction in memory consumption and computational power. In addition, we released a dataset of 5,000 images with semantic segmentation and bounding box annotations to encourage further research.
References
- (2017) Efficient pedestrian detection at nighttime using a thermal camera. Sensors 17(8), 1850.
- (2017) GradNorm: gradient normalization for adaptive loss balancing in deep multitask networks. arXiv preprint arXiv:1711.02257.
- (2019) MultiNet++: multi-stream feature aggregation and geometric loss strategy for multi-task learning. arXiv preprint arXiv:1904.08492.
- (2019) AuxNet: auxiliary tasks enhanced semantic segmentation for automated driving. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP).
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2017) Computer vision in automated parking systems: design, implementation and challenges. Image and Vision Computing 68, pp. 88–101.
- (2015) Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA.
- (2018) Near-field depth estimation using monocular fisheye camera: a semi-supervised learning approach using sparse LiDAR data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Deep Vision: Beyond Supervised Learning.
- (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- (2018) Visual SLAM for automated driving: exploring the applications of deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 247–257.
- (2016) ENet: a deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147.
- (2016) You Only Look Once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
- (2017) Computer vision for driver assistance. Springer, Cham, Switzerland.
- (2019) A factor analysis of consumer expectations for autonomous cars. Journal of Computer Information Systems 59(1), pp. 52–60.
- (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115(3), pp. 211–252.
- (2017) Deep semantic segmentation for automated driving: taxonomy, roadmap and challenges. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–8.
- (2018) RTSeg: real-time semantic segmentation comparative study. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 1603–1607.
- (2018) MODNet: motion and appearance based moving object detection network for autonomous driving. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC).
- (2019) NeurAll: towards a unified model for visual perception in automated driving. arXiv preprint arXiv:1902.03589.
- (2018) MultiNet: real-time joint semantic reasoning for autonomous driving. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), pp. 1013–1020.
- (2019) Challenges in designing datasets and validation for autonomous driving. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), pp. 653–659.
- (2019) SoilingNet: soiling detection on automotive surround-view cameras. In 2019 22nd International Conference on Intelligent Transportation Systems (ITSC). To appear.
- (2019) WoodScape: a multi-task, multi-camera fisheye dataset for autonomous driving. arXiv preprint arXiv:1905.01489.