In this paper, we propose a design for novel and experimental cloud computing systems. The proposed system aims at enhancing computational, communicational and annalistic capabilities of road navigation services by merging several independent technologies, namely vision-based embedded navigation systems, prominent Cloud Computing Systems (CCSs) and Vehicular Ad-hoc NETwork (VANET). This work presents our initial investigations by describing the design of a global generic system. The designed system has been experimented with various scenarios of video-based road services. Moreover, the associated architecture has been implemented on a small-scale simulator of an in-vehicle embedded system. The implemented architecture has been experimented in the case of a simulated road service to aid the police agency. The goal of this service is to recognize and track searched individuals and vehicles in a real-time monitoring system remotely connected to moving cars. The presented work demonstrates the potential of our system for efficiently enhancing and diversifying real-time video services in road environments.
Preprint: Design, Implementation and Simulation of a Cloud Computing System for Enhancing Real-time Video Services by using VANET and Onboard Navigation Systems
K. Hammoudi, N. Ajam, M. Kasraoui, F. Dornaika, K. Radhakrishnan, K. Bandi, Q. Cai, S. Liu
Research Institute on Embedded Electronic Systems (IRSEEM), IIS Group, Technopôle du Madrillet, St-Etienne-du-Rouvray, France
ESIGELEC School of Engineering, Department of ICT (MS Students), Technopôle du Madrillet, St-Etienne-du-Rouvray, France
Department of Computer Science and Artificial Intelligence, University of the Basque Country, San Sebastián, Spain
IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
Keywords: Vehicular Network (VANET), Vehicular Cloud Computing (VCC), Image-based Recognition, Fusion of Multi-source Imagery, Real-time Video Services, Cooperative Monitoring System, Sensor Networks.
In this work, we propose to exploit cloud computing systems for developing real-time road video services from embedded navigation systems and VANETs (Vehicular Ad-hoc NETworks). The proposed system is ultimately intended to be experimented on a vehicle fleet. More particularly, this paper presents the design, implementation and simulation of a cloud-based recognition system for extending real-time road video services. Indeed, the proposed global generic system exploits a cloud-based embedded recognition system and VANET technologies, on the one hand for analyzing the road traffic (e.g., vehicular or navigation information) and, on the other hand, for mutualizing computational resources as well as for sharing relevant visually extracted information. Notably, the designed system will be useful for identifying dynamical Points Of Interest (POIs) from embedded cameras (e.g., traffic-based POIs) and then sharing the identified POIs with potentially interested external stakeholders (e.g., surrounding vehicles or road agencies).
For instance, these technologies can be exploited for improving road traffic, emergency mapping or citizen security by cooperatively analyzing acquired georeferenced road images. We present below some scenarios based on the detection of dynamical POIs:
Sc. 1: a vehicle can detect an available parking area and transmit its GPS location within a pre-defined neighborhood to inform surrounding drivers, by exploiting a cloud computing system and VANET,
Sc. 2: each vehicle can similarly transmit images for analyzing and mapping road weather conditions in real-time. Drivers can then define an itinerary based on meteorological criteria, notably to avoid driving through areas with bad weather (e.g., snowy roads),
Sc. 3: a vehicle can extract on-the-fly the license plates of preceding vehicles and then send the extracted plate characters to police services seeking to localize stolen vehicles by matching the extracted data against their reference databases,
Sc. 4: similarly, a vehicle can extract on-the-fly people's faces from street scenes and then send the extracted face images to police services that aim to localize searched individuals.
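Each of these scenarios ultimately amounts to a vehicle publishing a dynamical POI to the cloud. As a minimal sketch, such a POI could be serialized as a small JSON message; the field names and schema below are purely illustrative assumptions, not part of the paper:

```python
import json
import time

def make_poi_message(poi_type, lat, lon, payload):
    """Build an illustrative POI message that a vehicle could publish
    to the cloud. The schema (field names, JSON encoding) is hypothetical."""
    return json.dumps({
        "type": poi_type,        # e.g. "free_parking", "weather", "plate"
        "lat": lat,              # GPS latitude of the detection
        "lon": lon,              # GPS longitude of the detection
        "timestamp": time.time(),
        "payload": payload,      # scenario-specific data
    })

# Example: a vehicle reporting a free parking area (Scenario-1-style message)
msg = make_poi_message("free_parking", 49.385, 1.066, {"spots": 2})
```

Surrounding vehicles or road agencies subscribed to a given POI type and geographic neighborhood would then receive and decode such messages.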
In this study, we have experimented the proposed cloud-based system by considering the last two scenarios, related to the police service application.
Nowadays, cloud computing developments are revolutionizing the world by providing companies with increasingly powerful services. In particular, many companies tend to store their data on external servers or data centers. Indeed, this technology improves the Quality of Service (QoS), notably for data management, data security and data distribution. In this way, providers of cloud computing systems allow many companies to develop services specifically focused on their principal activities. More precisely, cloud computing can be defined as a technology providing resources at three levels, namely infrastructure, software platforms and services [WSGB14]. Cloud computing was initially deployed over wired Internet networks and has been progressively extended to mobile networks (e.g., cellular networks). Notably, cloud computing technologies facilitate the development of hybrid systems as well as the mutualizing of computational resources.
In this work, we are particularly interested in the development of cloud computing systems based on VANETs for enhancing and diversifying real-time road services. VANETs have the particularity of exploiting Ad-hoc systems. In other words, these systems are self-organizing in the sense that each node can communicate with the others without relying on a pre-defined infrastructure. VANETs were primarily developed to support Intelligent Transport Systems through Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications (e.g., [Mas11]).
Besides, the new generation of consumer vehicles is equipped with computer-aided embedded navigation and vision systems such as Advanced Driver Assistance Systems (ADAS). In particular, ADAS are increasingly employed for detecting road obstacles (e.g., self-parking) or for estimating road visibility (e.g., automatic lighting systems). In parallel, experimental multi-camera vehicle systems are actively developed for research in the fields of cartography and machine vision, in order to reconstruct urban environments in 3D as well as to develop fully autonomous vehicles [HM13, HDS13].
To the best of our knowledge, video services in vehicular clouds remain little developed. In [GWP13], Gerla et al. presented an image-on-demand service named “Pics-on-wheels”, in which selected vehicles send their acquired images, for example to document detected accidents; these images can then be used for insurance claims. In our case, we present a generic cloud computing system that can be used for developing various real-time video services by exploiting a distributed computing system. Notably, this system will be employed for sharing traffic information (e.g., in aided navigation or road safety) by exploiting embedded vision-based systems (e.g., recognition systems), CCSs and VANETs (see Figure 1).
In our case, it is assumed that the vehicles are equipped with an embedded camera system, a GPS module and a VANET connecting system. Notably, new generation vehicles are equipped with various types of sensors, such as cameras located at the front and rear ends. The proposed vision-based cloud computing system will take advantage of the distributed computing and storage capabilities of conventional CCSs and VANETs (see Figure 1) for providing video services requiring high resources in terms of data processing. In particular, the proposed system will be useful for visually recognizing dynamical objects of interest, such as stolen vehicles or searched individuals.
More precisely, the proposed system will exploit vehicular networks or external data centers according to the needs. Yu et al. classify cloud-based systems related to VANET [YZG13]. First, a vehicular cloud is exclusively composed of vehicles; it allows vehicles to dynamically schedule computational and storage resources on demand. Second, a roadside cloud is composed of dedicated servers and Road Side Units (RSUs). The latter provide access to the cloud, which is exclusively used by vehicles located within the radio coverage of an RSU; vehicles roam between successive RSUs to continuously benefit from the service. Third, a central cloud is based either on dedicated servers in the Internet or on data centers in the VANET itself. In our case, we use the concept of Hybrid Vehicular Cloud (HVC), which shares the processing between the Vehicular Cloud (VC) and the central cloud.
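The HVC idea of sharing processing between the vehicular cloud and the central cloud can be sketched as a simple dispatch rule. The thresholds and the three-way fallback below are our illustrative assumptions; the paper does not specify a scheduling policy:

```python
def dispatch(job_cpu_load, vc_free_cpu, rsu_in_range):
    """Illustrative HVC dispatch rule: prefer the Vehicular Cloud (VC)
    when it has spare capacity, fall back to the roadside cloud when an
    RSU is in radio range, and otherwise offload to the central cloud."""
    if job_cpu_load <= vc_free_cpu:
        return "vehicular_cloud"
    if rsu_in_range:
        return "roadside_cloud"
    return "central_cloud"
```

For instance, a light recognition task would stay in the VC, while a heavy task requested outside RSU coverage would be pushed to the central cloud.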
Moreover, Figure 2 shows the architecture that has been developed for supporting the various data transfers and data processing. First, vehicles communicate with Internet access points by using Vehicle-to-Infrastructure (V2I) or Vehicle-to-Vehicle (V2V) communications. RSUs are exploited for removing redundancy in captured images and GPS information. Second, the collected georeferenced raw data are sent to a customized storage cloud (e.g., the Amazon cloud). Computing machines continuously run the face extraction, GPS extraction and number plate recognition algorithms in parallel. The extracted license plate numbers as well as the extracted GPS information are saved in a database (textual information), and the extracted images are copied to file servers. Users access the service by connecting to a load balancing server, which distributes the requests among several working web servers.
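The RSU's redundancy-removal step can be approximated with a perceptual hash: images whose hash was already forwarded are dropped. The average-hash below and its tiny grayscale-thumbnail input are our illustrative choice; the paper does not specify which deduplication algorithm is used:

```python
def average_hash(pixels):
    """Tiny average-hash over a grayscale thumbnail given as a list of
    rows of ints: emits 1 where a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def deduplicate(images, seen=None):
    """Keep only images whose hash has not already been forwarded
    by this RSU; `seen` persists across batches."""
    seen = set() if seen is None else seen
    kept = []
    for img in images:
        h = average_hash(img)
        if h not in seen:
            seen.add(h)
            kept.append(img)
    return kept
```

In a real deployment the thumbnails would be downsampled camera frames, and a tolerance on the hash distance would absorb small viewpoint changes between consecutive frames.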
Figure 3 shows the global dataflow diagram of the experimented scenarios (Sc. 3 and Sc. 4). It is worth mentioning that our architecture can also be used for processing other scenarios related to new real-time road video services (e.g., Sc. 1 and Sc. 2).
Figure 4 illustrates the developed embedded vehicular monitoring simulator. This simulator is composed of a car prototype (a rigid mock-up) and its associated vision-based embedded system (see Sub-figure 4(a)). The embedded system is equipped with a Logitech HD camera (see Sub-figure 4(b)) connected to a Raspberry Pi micro-computer (see Sub-figure 4(c)). The micro-computer includes an SD card for storing the acquired images. To simulate the motion of the car prototype, a screen has been placed in front of the webcam, displaying a video corresponding to a vehicle path acquired by an external Mobile Mapping System (e.g., videos from the KITTI research dataset, http://www.cvlibs.net/datasets/kitti/ [GLU12, FKG13]). For such datasets, the GPS information related to the images is provided. The micro-computer includes a WiFi adapter that was used for simulating the VANET network. This embedded car prototype is connected to three workstations: one simulating the RSU and the other two simulating the cloud nodes.
More precisely, two Python scripts run on the Raspberry Pi: one captures images and geo-tags them, and the other transfers the images to the RSU by FTP. On the RSU, a bash script sends those images to the two simulated cloud nodes using SSH. In this way, the data flow is evenly distributed to the cloud nodes over WiFi. The computing machines (also cloud nodes) process the images held on the storage servers and extract faces, license plates and GPS information by running a Python script invoking the corresponding algorithms. On the web server, we implemented a RESTful API to access the database. The extracted images are archived on file servers, while the license numbers, GPS coordinates and timestamps are updated in the database. Thus, the updated information can be visualized. Moreover, new extraction algorithms can be developed for various query applications.
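The geo-tagging and FTP-transfer steps on the Raspberry Pi can be sketched as follows. This is not the paper's actual script: the camera capture is omitted, and encoding the GPS fix in the filename (rather than, say, in EXIF metadata) is our simplification; host and credentials are placeholders:

```python
import io
from ftplib import FTP

def geo_tag(image_bytes, lat, lon, ts):
    """Pair raw JPEG bytes with their GPS fix. We encode the fix in the
    filename so the RSU can recover it without parsing image metadata
    (an illustrative simplification)."""
    name = f"img_{ts}_{lat:.6f}_{lon:.6f}.jpg"
    return name, image_bytes

def upload_to_rsu(host, user, password, name, image_bytes):
    """Push one geo-tagged image to the RSU over FTP
    (hypothetical host and credentials)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {name}", io.BytesIO(image_bytes))

# Example: tag one captured frame with its GPS fix and a Unix timestamp
name, data = geo_tag(b"\xff\xd8...", 49.385123, 1.066042, 1700000000)
```

The second script would then call `upload_to_rsu` for each file in the capture directory, after which the RSU's bash script fans the images out to the cloud nodes over SSH.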
In this study, the applications related to the police services previously mentioned (Sc. 3 and Sc. 4) have been experimented by deploying computer vision approaches well known for their efficiency on the proposed generic processing architecture (one simulated mobile node). Notably, open-source C# Emgu CV routines (http://www.emgu.com/) have been exploited for carrying out the face extraction as well as the OCR-based license plate extraction. Data matching has been experimented by comparing extracted features with a reference database generated by an operator. The proposed experimentation pipeline distributes the flow of collected images, and extracted features are localized and labeled on a Google Maps-based application in quasi real-time. Timing information associated with the data transfer and data processing for one image can be observed in Table 1.
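While the extraction itself relies on the Emgu CV routines above, the data-matching step against the operator-generated reference database can be sketched in Python. The normalization rules (uppercasing, dropping separators, mapping OCR-confusable characters) are our illustrative assumptions, not the paper's exact procedure:

```python
def normalize_plate(text):
    """Normalize an OCR'd plate string: uppercase, drop separators, and
    map easily confused characters (O->0, I->1) -- illustrative rules."""
    text = text.upper().replace("-", "").replace(" ", "")
    return text.translate(str.maketrans({"O": "0", "I": "1"}))

def match_plates(ocr_results, wanted_db):
    """Return the OCR'd plates that match the police reference database,
    comparing both sides in normalized form."""
    wanted = {normalize_plate(p) for p in wanted_db}
    return [p for p in ocr_results if normalize_plate(p) in wanted]
```

A hit would then be pushed, together with its GPS fix and timestamp, to the Google Maps-based visualization described above.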
This paper presents our initial investigations into the design, implementation and simulation of a cloud computing system for enhancing and diversifying real-time video services through VANET and onboard navigation systems. A vehicular monitoring simulator has been developed for carrying out indoor experiments. A generic hardware and software architecture is proposed for experimenting with new video service applications.
Accordingly, the next stage will consist of transferring this technology onto two modular chassis that will be fixed on vehicle windshields for experiments in real mobile conditions (i.e., two moving nodes). Moreover, research will be pursued indoors for improving the architecture of the developed simulator, and simulations of the network architecture will be implemented under the NS-2 (http://nsnam.isi.edu/nsnam/) and NS-3 (http://www.nsnam.org/) Network Simulators. Furthermore, we will tackle research in imagery for the detection of available parking areas in order to develop parking services. The corresponding targeted application was described in Scenario 1 and is illustrated in Figure 5.
This work is part of the SAVEMORE project (http://www.savemore-project.eu/). The SAVEMORE project has been selected in the context of the INTERREG IVA France (Channel) - England European cross-border co-operation programme, which is co-financed by the ERDF.
- [FKG13] Fritsch J., Kuehnl T., Geiger A.: A new performance measure and evaluation benchmark for road detection algorithms. In International Conference on Intelligent Transportation Systems (ITSC) (2013).
- [GLU12] Geiger A., Lenz P., Urtasun R.: Are we ready for autonomous driving? the KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR) (2012).
- [GWP13] Gerla M., Weng J. T., Pau G.: Pics-on-wheels: Photo surveillance in the vehicular cloud. In IEEE International Conference on Computing, Networking and Communications (ICNC) (2013), pp. 1123–1127.
- [HDS13] Hammoudi K., Dornaika F., Soheilian B., Vallet B., McDonald J., Paparoditis N.: A synergistic approach for recovering occlusion-free textured 3D maps of urban facades from heterogeneous cartographic data. International Journal of Advanced Robotic Systems. Vol. 10 (2013), 10p.
- [HM13] Hammoudi K., McDonald J.: Design, implementation and simulation of an experimental multi-camera imaging system for terrestrial and multi-purpose mobile mapping platforms: A case study. Applied Mechanics and Materials, Trans Tech Publications, Selected papers from the International Conference on Optimization of the Robots. Vol. 332 (2013), 139–144.
- [Mas11] Maslekar N.: Adaptative traffic signal control system based on inter-vehicular communication. Ph.D. thesis, University of Rouen, Esigelec School of Engineering, 2011.
- [WSGB14] Whaiduzzaman M., Sookhak M., Gani A., Buyya R.: A survey on vehicular cloud computing. Journal of Network and Computer Applications. Vol. 40 (2014), 325–344.
- [YZG13] Yu R., Zhang Y., Gjessing S., Xia W., Yang K.: Toward cloud-based vehicular networks with efficient resource management. In IEEE Network (2013), pp. 48–55.