Object Detection based on LIDAR Temporal Pulses using Spiking Neural Networks


Shibo Zhou
Department of Electrical and Computer Engineering
Binghamton University, The State University of New York
Binghamton, NY 13902
szhou19@binghamton.edu

Wei Wang
Department of Computer Science and Engineering
University at Buffalo, The State University of New York
Buffalo, NY 14260
wwang49@buffalo.edu

Neural networks have been successfully used to process Lidar data, especially in the scenario of autonomous driving. However, existing methods heavily rely on pre-processing of the pulse signals derived from Lidar sensors and therefore incur high computational overhead and considerable latency. In this paper, we propose an approach that utilizes a Spiking Neural Network (SNN) to address the object recognition problem directly with raw temporal pulses. To support evaluation and benchmarking, a comprehensive temporal-pulse data-set was created to simulate Lidar reflections in different road scenarios. Tested for recognition accuracy and time efficiency under different noise conditions, the proposed method shows remarkable performance, with inference accuracy up to 99.83% (with 10% noise) and an average recognition delay as low as 265 ns. These results highlight the potential of SNNs in autonomous driving and related applications. In particular, to the best of our knowledge, this is the first attempt to use an SNN to perform object recognition directly on raw Lidar temporal pulses.



Keywords: Spiking Neural Network · Object Detection · LIDAR

1 Introduction

In computer vision for autonomous driving, one of the most difficult problems is to detect objects in three-dimensional space with low delay at long distance [1]. Traditional methods for 2D image processing usually perform poorly in this task because a single camera cannot obtain accurate depth information [2]. Generating a depth map from multiple cameras can improve depth accuracy; however, its high computational complexity prevents it from meeting real-time requirements [3]. Radio detection and ranging (Radar) is immune to lighting variations, as it uses radio waves to determine the range, angle, or velocity of objects. However, its relatively long wavelength allows neither the detection of small objects nor the delivery of precise object images [4]. By using laser signals with a wavelength much shorter than that of radio waves, a Light detection and ranging (Lidar) system provides better range, higher spatial resolution, and a larger field of view than Radar, helping to detect obstacles on curves [5].

Compared to other sensors, Lidar has a significant advantage in detecting and recognizing objects at long distance with a wide field of view, so that self-driving cars at high speed can take evasive actions in time.

Although a significant amount of research has been done on object classification from 3D point clouds, it remains an open question how to perform object detection and recognition with raw Lidar temporal pulses. Lidar uses active sensors that emit their own energy source for illumination and detect/measure the energy reflected from objects. To analyze the sensor output, a traditional Lidar system needs multiple stages of signal processing, including analog-to-digital conversion, averaging, and filtering, in order to produce high-quality digital 3D point clouds for later object detection and recognition by different learning algorithms, such as traditional feature-extraction methods [6, 7, 8, 9, 10] and neural-network methods [11, 12, 13, 14, 15, 16, 17]. The signal-processing flow between the Lidar sensor and object detection consumes substantial computing power and causes non-negligible delays. It is therefore important to find a solution that can realize object detection and recognition based directly on the analog temporal pulses from Lidar sensors.

Spiking Neural Networks (SNNs) catch our attention because they can directly take temporal pulses as input and output a classification result. Moreover, SNNs have a special advantage over regular neural networks in processing temporal information [18]: in regular neural networks, all neurons are synchronized, so every neuron in the same layer must be evaluated before information can be passed to the next layer. In contrast, an SNN processes information in an event-based, asynchronous manner. Event-based processing has several advantages. First, when a neuron is addressed by an event, only that neuron is activated, which makes the SNN much more energy efficient [19]. Second, a neuron can respond to an event directly; it does not have to wait until all the neurons in its layer are evaluated, nor until the next discrete time step, to fire its response. This shows the ability of SNNs to process information without delay [20]. Third, an event can be processed using a relatively small number of spikes, reducing computational complexity [21]. In short, SNNs can implement real-time pulse-signal processing. Given these advantages and their inherently spiking nature, SNNs are well suited to processing Lidar temporal pulse signals.

In this paper, we explore the use of a Spiking Neural Network to perform object detection and recognition based on raw temporal pulses from Lidar sensors. In our simulations, the Lidar system fires laser signals toward the front of the vehicle at a certain frequency. Raw temporal pulses are generated by the Lidar single-photon detector array when it receives photons reflected from the object; in this way, depth information is recorded in the different delays of the temporal pulses. The spiking neural network circuit takes the temporal pulses as input and directly outputs the classification result.

Our contributions are:

  • We created a complete temporal-pulse data-set of Lidar signals covering different road conditions and target objects in different noise environments. Although Lidar data-sets already exist, such as Udacity and KITTI [22], they are pre-processed through DSP. Our data-set directly simulates the Lidar's temporal pulse signals, so it not only meets the requirements of our simulations but also has practical significance.

  • We are the first to apply an SNN to directly process raw temporal pulse signals from Lidar sensors. We carefully studied existing SNN models and adapted an adequate model for this task. The temporal pulse signals from the Lidar do not require any pre-processing and are used directly as the input of the SNN.

  • The performance of the SNN-based object detection system is carefully evaluated under different noise conditions and shows high accuracy as well as ultra-low delay. The simulation results show inference accuracy up to 99.83% with 10% noise, and the average recognition delay is only 265 ns after the first temporal pulse arrives.

The paper is organized as follows. Section 2 introduces the background. Section 3 reviews related work. Section 4 details the Lidar temporal pulses and the SNN with temporal coding. Section 5 evaluates and analyzes the detection performance of the whole system, and Section 6 concludes the paper.

2 Background

2.1 Object detection approaches using 3D-Lidar

Figure 1: The current pattern-recognition pipeline for Lidar, and our proposed approach.

In recent years, Lidar has begun to gain attention due to its high resolution and the 3D monochromatic image it provides of an object. Generally, it works in the following manner: the device emits laser pulses that travel outwards in various directions until they reach an object, then reflect and return to the receiver. An internal processor saves each reflection point and generates a 3D point cloud of the environment. The time interval between a pulse leaving the device and returning to the Lidar sensor is measured by the same processor, and the distance between a detected object and the Lidar receiver can be determined from this round-trip time.

However, the whole process requires a series of transformations, as illustrated in Figure 1. When the signal returns to the receiver, the single-photon detector array (SPDA) records the different temporal pulse information. Next, the times corresponding to the temporal pulses must be converted into digital form, followed by histogramming and DSP denoising; finally, the 3D point cloud is generated. In some cases the 3D point cloud needs further processing, such as a 3D-to-2D mapping. Based on the 3D point clouds of different objects, a post-processing system can differentiate objects. Besides point clouds, the intensity of the returned light can also be used for object detection: the light intensity is directly related to the reflectivity of the object, but the same chain of transformations is required before the intensity enters the post-processing system. After these transformations, the post-processing system plays the key role in detecting objects quickly and accurately, and multiple post-processing systems have been designed to execute object detection.

These post-processing systems follow one of two approaches. One is based on traditional techniques such as hierarchical segmentation and sliding windows; the other employs neural networks. For more details, refer to Section 3.
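As a quick sanity check on the round-trip-time relationship described above, the distance recovery can be sketched in a few lines (an illustration of the physics only, not part of the paper's pipeline):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s):
    """Distance to the reflecting object from a pulse's round-trip
    time: the pulse travels out and back, so divide by two."""
    return C * round_trip_s / 2.0

# e.g. a 1 microsecond round trip corresponds to roughly 150 m
d = tof_distance(1e-6)
```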

2.2 Spiking Neural Network model

(a) SNN
(b) Neuron
Figure 2: (a) A 3-layer SNN; (b) A Neuron

Spiking Neural Networks (SNNs) are regarded as the third generation of Artificial Neural Networks (ANNs) [23] and have been successfully applied to different domains. These applications include pattern generation and control for biological models of neuro-prosthetic systems [24], adaptive robot path planning [1], obstacle recognition and avoidance by modeling and classifying spatio-temporal video data [25], financial data forecasting based on a Polychronous Spiking Network in an unsupervised manner [26], composer classification of musical compositions [27], processing of spatio- and spectro-temporal brain data [28], and real-time gait-event detection in a supervised manner [1]. At the same time, the ROLLS microprocessor coupled with a Dynamic Vision Sensor [29] and the TrueNorth chip [30] have demonstrated superior performance in detection and classification.

In SNNs, neurons communicate with spikes, or action potentials, through the layers of the network. When the membrane potential of a neuron reaches its firing threshold, the neuron fires a spike to the connected neurons. SNN topologies can be classified into three general categories: feedforward networks, recurrent networks, and hybrid networks [18]. The topology used in this work is a feedforward network. Figure 2(a) shows an example of a fully connected feedforward SNN; it includes three layers: an input layer, a hidden layer, and an output layer. The number of hidden layers can be larger than one for more powerful and complex networks. Figure 2(b) illustrates the model of a spiking neuron, which involves an accumulation operation and a threshold comparison. In this work, we use the non-leaky integrate-and-fire (n-LIF) neuron model with exponentially decaying synaptic current kernels [31]. The neuron's membrane dynamics are described by:


dV_j(t)/dt = Σ_i w_{ji} κ(t − t_i)    (1)

where V_j(t) is the membrane potential of neuron j, w_{ji} is the weight of the synaptic connection from neuron i to neuron j, t_i is the time of the spike from neuron i, and κ(·) is the synaptic current kernel function given below:


κ(x) = Θ(x) exp(−x/τ_syn)    (2)

where τ_syn is the only time constant and Θ(·) is the Heaviside step function. When the neuron receives a spike, the synaptic current jumps instantaneously and then decays exponentially with time constant τ_syn. Figure 3 shows how this SNN works. Each spike is transformed by the synaptic current kernel, and a neuron is allowed to spike only once. The linear relation between input and output spike times changes as the set of causal input spikes changes [31].
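To make the dynamics above concrete, the following sketch numerically integrates a single n-LIF neuron driven by exponentially decaying synaptic kernels. The spike times, weights, and threshold are illustrative values, not parameters from the paper:

```python
import numpy as np

def simulate_nlif(spike_times, weights, tau_syn=1.0, threshold=1.0,
                  dt=1e-3, t_max=5.0):
    """Forward-Euler simulation of a non-leaky integrate-and-fire
    neuron: the membrane integrates a sum of exponentially decaying
    synaptic currents and fires when it crosses the threshold.
    Returns the first firing time, or None if the neuron never fires."""
    t, v = 0.0, 0.0
    while t < t_max:
        # synaptic current: one decaying kernel per spike received so far
        i_syn = sum(w * np.exp(-(t - ts) / tau_syn)
                    for ts, w in zip(spike_times, weights) if t >= ts)
        v += i_syn * dt          # non-leaky: the membrane only integrates
        if v >= threshold:
            return t
        t += dt
    return None

# four illustrative input spikes (times and weights are made up)
t_fire = simulate_nlif([0.1, 0.4, 0.9, 1.3], [0.6, 0.4, 0.3, 0.5])
```

Because the total input weight (1.8) exceeds the threshold, the neuron is guaranteed to fire eventually; with a smaller total weight it never would, mirroring the "causal set" discussion above.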

3 Related work

Figure 3: The working principle of the model in two situations. In (a), the membrane potential of the neuron reaches the threshold and fires after receiving four spikes with their respective weights and arrival times. In (b), the membrane potential reaches the threshold and the neuron fires before the fourth input spike arrives, in contrast with (a). Note that a neuron is allowed to spike only once, unless the network is reset or a new input pattern is presented.

This section gives a concise overview of 3D-Lidar-based object detection methods, including traditional methods and neural-network methods. The so-called traditional methods realize object detection using a sliding-window framework or segmentation techniques, and mainly include three steps: first, use sliding windows or segmentation to locate regions of interest as candidates; next, extract visual features from the candidate areas; finally, use a classifier for identification. As traditional object detection methods encountered bottlenecks, object detection based on deep learning was developed, giving rise to the neural-network methods.

Among the traditional approaches, Behley et al. [6] propose a hierarchical segmentation of laser range data to realize object detection. Wang and Posner [7] apply a voting scheme to Lidar range data and reflectance values to enable 3D object detection. González et al. [8] explore the fusion of RGB and Lidar-based depth maps (front view) obtained by high-definition light detection and ranging. Tatoglu and Pochiraju [9] present techniques that model the intensity of the laser reflection returned from a point during Lidar scanning to determine the diffuse and specular reflection properties of the scanned surface. Hernandez et al. [10] take advantage of the reflection of the laser beam to identify lane markings on a road surface.

Recently, neural networks have been applied to process Lidar data, as they show high accuracy and low latency. Li et al. [11] use a single 2D end-to-end fully convolutional network on a 2D point map obtained by projecting 3D-Lidar range data. Li [12] extends fully-convolutional-network-based detection techniques to 3D and processes 3D boxes from Lidar point cloud data. Chen et al. [13] provide a top-view representation of the point cloud from Lidar range data and combine it with a ConvNet-based fusion network for 3D object detection. Oh et al. [14] use one of two independent ConvNet-based classifiers on the depth map from the Lidar point cloud. Kim and Ghosh [15] propose a framework using Fast R-CNN to integrate Lidar range data, improving the detection of regions of interest and the subsequent identification of road users. Asvadi et al. [16] introduce a deep convolutional neural network that processes the 3D-Lidar point cloud to predict 2D bounding boxes in the proposal phase. Asvadi et al. [17] use a deep convolutional neural network to process a dense reflection map, generated from 3D-Lidar reflection intensity, for object detection.

However, all the aforementioned object detection systems operate on range data or reflection intensity, not on raw Lidar signals. As shown in Figure 1, all existing methods require heavy pre-processing of the Lidar signal before it can be used for object detection or other purposes, which introduces non-negligible delay. In contrast, our SNN features spike-based communication and can directly process the temporal pulses from Lidar sensors to implement object detection and recognition.

4 Object detection with Spiking Neural Network

Figure 4: The pipeline of the SNN-based object detection system.

Figure 4 shows the architecture of our proposed SNN-based object detection system for Lidar signals. As shown in Figure 1, the laser emits a pulse that is reflected by the target, after which a photon reaches the detector. The SNN can directly process the temporal pulses from the raw SPDA and thus achieve high-precision and faster target recognition. The architecture consists of three parts: (1) the temporal pulses from the single-photon detector array (SPDA), which contain the object information; (2) an SNN with temporal coding, which directly processes the temporal pulse signals; (3) the SNN model, which provides reliable computation and implements object detection and classification.

4.1 Spiking Neural Network with Temporal Coding

The temporal pulses with different delays from the Lidar's single-photon detector array contain the object information: different objects produce different delay sequences. If this sequence can be represented directly by a suitable coding, the pulse-signal processing can achieve lower latency and higher accuracy. Therefore, how to implement this coding plays a crucial role in our research.

We develop a simulation method based on an SNN with temporal coding. Although multiple coding schemes exist for SNNs, our temporal coding has some unique advantages, detailed as follows. First, for computer simulation, the pulse itself cannot serve as the input; we need a value corresponding to each pulse to act as the input and output of the network. In most cases this value is a spike count or spike rate within a particular time window, but such quantities are discrete, which makes the training phase a great challenge. To avoid this problem, we chose the continuous and more precise spike times as the information-carrying quantities. Thus, the spike times both relate directly to the temporal pulses and serve as the coding of the SNN. Second, although several temporal-coding algorithms have been proposed, they still have drawbacks. For example, the SpikeProp algorithm [32], which describes the cost function in terms of the difference between desired and actual spike times, is limited to learning a single spike. Supervised Hebbian learning [33] and ReSuMe [34] are primarily suitable for training single-layer networks and cannot perform more complex computation. To address this problem, we adopt a network model that relies only on simple neural and synaptic dynamics instead of the complex and discontinuous dynamics of spiking neurons. In this way, we not only avoid the discreteness of spikes but can also extend naturally to multi-layer networks. Therefore, the proposed temporal coding allows the SNN to directly process the temporal pulse signals from the Lidar, with the activation function derived from the non-leaky integrate-and-fire neuron [31]:


exp(t_out) = Σ_{i∈C} w_i exp(t_i) / (Σ_{i∈C} w_i − 1)    (3)

where t_out is the response (firing) time of the neuron in the next layer, t_i is the firing time of the i-th source neuron, w_i is the weight corresponding to the i-th source neuron, and C = {i : t_i < t_out} is the set of causal input spikes.
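Equation 3 can be solved by growing the causal set C in order of input arrival and working in the z = exp(t) domain, as in [31]. The sketch below assumes τ_syn = 1 and a unit threshold; the input times and weights in the usage example are illustrative:

```python
import numpy as np

def first_spike_time(t_in, w):
    """Solve Eq. 3: grow the causal set C in order of input arrival
    and, working with z = exp(t), accept the first candidate output
    time consistent with C (i.e. it falls after the last spike in C
    and before the next input spike)."""
    order = np.argsort(t_in)
    z = np.exp(np.asarray(t_in, dtype=float))
    num = 0.0     # running sum over C of w_i * exp(t_i)
    den = -1.0    # running (sum over C of w_i) - 1
    for k, i in enumerate(order):
        num += w[i] * z[i]
        den += w[i]
        if den <= 0:
            continue          # accumulated weight not yet above threshold
        z_out = num / den
        t_next = t_in[order[k + 1]] if k + 1 < len(order) else np.inf
        if z[i] < z_out <= np.exp(t_next):
            return float(np.log(z_out))
    return float('inf')       # the neuron never fires for this input
```

For example, with input spikes at times [0.1, 0.4, 0.9, 1.3] and weights [0.6, 0.4, 0.3, 0.5] (made-up values), the neuron fires at about t = 1.57; if the total causal weight never exceeds 1, the function returns infinity, i.e. the neuron stays silent.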

4.2 The SNN process temporal pulses signal

Figure 5: The implementation of object detection based on Spiking Neural Network.

An SNN based on temporal coding can process the temporal pulse signals with high accuracy in real time. First, the activation function, Eq. 3, expresses the relationship between the input spike times and the time of the first spike of the output neuron; based on this relationship, we obtain the trained weights. The trained weights are loaded into the network as illustrated in Figure 5, and for real pulse input we set the origin of time to zero. When the first spike reaches the SNN, it is multiplied by its weight and the result is compared with the threshold. If the result is less than the threshold, the neuron does not fire, and the results of subsequent spikes are accumulated until the sum exceeds the threshold. Once a neuron in a subsequent layer fires, it receives no further spikes until the network is reset and a new input pattern is presented. Therefore, to finish a pattern recognition, the SNN may need only a few input spikes rather than all of them, which allows the SNN to respond faster. Furthermore, the classification is determined by the first output-layer neuron to fire, which further accelerates the result. Thus, the trained SNN can directly process the temporal pulse signals from the Lidar and implement object detection with high accuracy in real time.
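The event-driven inference loop described above can be sketched as follows. This is a deliberately simplified illustration: neurons accumulate plain weight sums rather than the exponential-kernel dynamics of Eq. 1, but it captures the control flow that matters here, namely that spikes are consumed in time order, each neuron fires at most once, and classification stops at the first output spike:

```python
import heapq

def classify_first_to_spike(input_times, w1, w2, threshold=1.0):
    """Simplified event-driven inference for a 3-layer feedforward
    SNN.  Returns (class index, decision time, input spikes consumed).
    NOTE: membrane potentials here are plain weight sums, not the
    paper's exponential-kernel dynamics."""
    n_hidden, n_out = len(w1[0]), len(w2[0])
    v_h, v_o = [0.0] * n_hidden, [0.0] * n_out
    fired_h = [False] * n_hidden
    # min-heap of (spike time, layer, source neuron index)
    events = [(t, 0, i) for i, t in enumerate(input_times)]
    heapq.heapify(events)
    consumed = 0
    while events:
        t, layer, i = heapq.heappop(events)
        if layer == 0:                       # input spike -> hidden layer
            consumed += 1
            for j in range(n_hidden):
                if not fired_h[j]:
                    v_h[j] += w1[i][j]
                    if v_h[j] >= threshold:
                        fired_h[j] = True    # a neuron spikes only once
                        heapq.heappush(events, (t, 1, j))
        else:                                # hidden spike -> output layer
            for k in range(n_out):
                v_o[k] += w2[i][k]
                if v_o[k] >= threshold:
                    return k, t, consumed    # first output spike wins
    return None, None, consumed
```

With a toy 3-2-2 network, e.g. w1 = [[0.6, 0.2], [0.6, 0.2], [0.2, 0.2]] and w2 = [[1.0, 0.3], [0.4, 0.9]], the decision is reached after only two of the three input spikes, mirroring the point that a pattern can be recognized before all spikes arrive.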

4.3 Database

Figure 6: The source data-set, which includes different road conditions, cars, pedestrians, and trucks.
Figure 7: A pattern with different range of noise.
(a) Pulses distribution of different noise ranges.
(b) Recognition time of every pattern.
Figure 8: The recognition performance of the SNN Object Detection System, reflected by (a) the number of spikes and (b) the recognition time.

We had to create a complete temporal-pulse database because no such database exists. Existing Lidar data-sets, such as Udacity and KITTI, have been pre-processed and do not contain the raw temporal pulses from the Lidar, so they do not meet our requirements. To satisfy both the experimental requirements and the actual situation, we use the Velodyne VLP-16 as the simulated Lidar. At a given time, the Lidar emits pulses that are reflected by an obstacle, and these pulses reach the sensor cells with different delays; the pulses thus carry a time-delay property and form a delay-time array. Two rules apply: for the same obstacle, different parts of the obstacle have different delay times; for different obstacles, the farther an obstacle is from the Lidar, the longer the delay time, and vice versa. Based on these rules, the source data were generated with MATLAB and include 30 classes, as shown in Figure 6. From left to right, the source data cover different road conditions, cars, pedestrians, and trucks. The road conditions include 6 classes: tunnel, road, road with walls, lower bridge, upper bridge, and road with wall and street lamp. The target objects (car, pedestrian, and truck) can be combined with different road conditions to produce 24 combinations, giving 30 classes in total, each corresponding to one image. Each image uses the Lidar's 3D projection to represent a test scenario, where the Lidar is located at the center of the bottom edge of the image and different gray scales represent different time delays. Because the image size is 16×16, it cannot contain too many kinds of obstacles; placing too many would cause severe overlap, so we designed only eight combinations for each target object. On the basis of the source data, we can adjust the position information of each class and thus generate training and test data.

Consider the source data: it consists of delay times, and the delay times relate to the type and position of the object. If we add a random offset to the delay times of the whole object, we produce a new pattern whose delay-time array corresponds to a different object position, which also simulates the dynamic characteristics of the object. If we then add different noise to each pattern, we produce scenes with different noise levels. Because of this construction, the data-set has three advantages: each class has a variety of patterns; different noise scenarios are generated; and the test data differ from the training data. Therefore, the data-set satisfies the application requirements for testing the performance of our system.
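The position-shift augmentation described above can be sketched as follows. The 16×16 source map, the shift range, and the clipping bound are placeholders for this sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pattern(source, max_shift=0.05):
    """Derive a new pattern from a source delay map by shifting the
    whole object's delay times by one random offset, i.e. the same
    object at a different position/distance."""
    shift = rng.uniform(0.0, max_shift)
    return np.clip(source + shift, 0.0, 1.0)

# a toy 16x16 source map of normalized delay times (placeholder data)
source = rng.uniform(0.2, 0.8, size=(16, 16))
pattern = make_pattern(source)
```

Repeated calls with the same source yield many distinct patterns per class, which is how the training and test data diverge from the 30 source images.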

4.4 Noise

Regarding the noise: first of all, the noise here is not sensor noise in the usual sense; it simply refers to detections without a label, or suspended matter in the air.

  • If the delay time of the noise is greater than that of the object, we do not need to worry about its effect on object classification, because the network we designed processes only the first pulse of each input, and later pulses are not fed into the network. This is why our method enables ultra-fast information processing and higher accuracy.

  • If the delay time of the noise is less than that of the object, we have to consider its effect on object classification. In practice, this means some noise affects the pulses from the objects even though the object itself is unchanged. Figure 7 shows the impact of noise of different ranges on a pattern. In Figure 7(a) the noise range is 0 to 0.1, and the influence on the pattern is small. In Figure 7(b) the range is 0 to 0.2, and the influence is slight. In Figure 7(c) the range is 0 to 0.33, and the influence is considerable. In Figure 7(d) the range is 0 to 0.5, and the influence is severe; moreover, Figures 7(c) and 7(d) show distortion. In general, the greater the noise range, the greater the impact on object detection: since the total number of channels is fixed, as the effect of the noise increases, the information from the object decreases. To add the noise to the data-set, we first generate a matrix of uniformly distributed values of the same size as a pattern, and then add this matrix to the pattern. The matrix is unlabeled and cannot be classified; when it is added to a pattern, the pattern's values are perturbed and contain unlabeled values, so a new noisy pattern is generated. Because the uniformly distributed values are random and their range can be varied, within the same range every new pattern differs from the others, and across different ranges the new patterns both carry different amounts of noise and reflect the extent of the noise's effect. With this method we obtain data-sets with different noise ranges and can test our system's noise immunity.
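The noise-injection step can be sketched in the same way: a uniform matrix of the pattern's size is drawn and added elementwise. The clean pattern here is a placeholder; 0.10 corresponds to the mildest range (0 to 0.1) tested in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(pattern, noise_range):
    """Add an unlabeled-noise matrix: uniformly distributed values in
    [0, noise_range), the same shape as the pattern, added elementwise.
    Repeated calls give different noisy patterns from one source."""
    return pattern + rng.uniform(0.0, noise_range, size=pattern.shape)

clean = rng.uniform(0.2, 0.8, size=(16, 16))   # placeholder pattern
noisy = add_noise(clean, 0.10)                 # mildest tested range, 0-0.1
```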

5 Experimental Results and Analysis

5.1 Database and Neuron Network parameter

The SNN with temporal coding is evaluated on object detection using this database, which is built around the Velodyne VLP-16 Lidar. The maximum time delay in the SNN algorithm is limited to 1 μs. The database contains 3000 training patterns and 600 test patterns over 30 different categories: pedestrian, car, truck, building, bridge, and so on. The fully connected feedforward network has three layers: an input layer of 256 neurons, a hidden layer of 400 neurons, and an output layer of 30 neurons.

5.2 Training phase

During the training phase, we tried different learning rates and batch sizes; with a learning rate of 0.01 and a batch size of 60, the network converged fastest. In addition, standard stochastic gradient descent (SGD) with L2 regularization and a maximum of 100 epochs was employed for SNN training. The trend of the test accuracy as the epochs increase is shown in Figure 9, and quantitative inference results are provided in Table 1. As the table shows, the average accuracy reaches 99.83% with 10% noise and drops by about 31.7 percentage points from the narrow noise range (0–0.1) to the wide range (0–0.5). When the noise range is 0 to 0.5 and the scene is badly distorted, the average accuracy falls to 68.16%. Therefore, the maximum noise range the network can tolerate is 0 to 0.5.

Noise range Average accuracy
[0 0.10] 99.83%
[0 0.20] 96.16%
[0 0.33] 82.66%
[0 0.50] 68.16%
Table 1: The average accuracy of Object detection in different noise ranges
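A loss suitable for this first-to-spike readout, of the kind used for temporal-coded classifiers in [31], is a cross-entropy over softmax(−t_out): minimizing it pushes the correct class's output neuron to fire earliest. The sketch below is an assumed form; the paper's exact regularization terms are not specified here and are omitted:

```python
import numpy as np

def first_spike_loss(t_out, label):
    """Cross-entropy over softmax of the negated output spike times:
    earlier spikes get larger logits, so the gradient drives the true
    class to fire first."""
    logits = -np.asarray(t_out, dtype=float)
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[label]))
```

For output spike times [0.2, 0.9, 0.8], the loss is small when the true class is neuron 0 (the earliest to fire) and larger otherwise.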

5.3 Object detection under different noise conditions

Figure 9: The average accuracy of test.

We tested four different noise ranges. Figure 9 shows four average-accuracy curves (red, green, blue, and black) corresponding to the different noise ranges. At the beginning the four curves have different convergence rates: the latter three converge faster than the first. The fastest-converging part of the red curve spans epochs 14 to 20, whereas for the other curves it spans epochs 1 to 8. This suggests that, under certain conditions, noise contributes to the convergence of our network. It does not mean, however, that a greater noise range always brings faster convergence; for example, the black curve (noise range 0 to 0.5) converges more slowly than the blue curve (noise range 0 to 0.33). Figure 9 also shows that, even though we set the maximum number of epochs to 100, after 32 epochs the four curves are almost stable and the average accuracy levels off. These results indicate that our network has both good noise immunity and fast convergence.

5.4 Performance Analysis of SNN Object Detection System

The SNN Object Detection System was evaluated in terms of recognition time, recognition-time distribution, and number of spikes, based on the recognition of 600 patterns. Figure 8 shows the recognition performance. The four boxes in Figure 8(a) represent the distribution of the number of spikes under different noise conditions when the 600 patterns are recognized: for noise ranges 0–0.1, 0–0.2, 0–0.33, and 0–0.5, the number of spikes ranges from 16 to 180, 13 to 180, 39 to 180, and 15 to 180, respectively. A traditional network would need all 256 input spikes to complete a pattern recognition, compared with our network's maximum of 180 input spikes. Figure 8(b) shows the recognition time and its distribution. The maximum delay time is 1 μs and the total number of patterns is 600. The four curves in Figure 8(b) represent the recognition time of each pattern; the four groups correspond to the four noise ranges, and the three bars display three time ranges, the height of each bar being the number of recognitions falling in the corresponding time range. As seen in Figure 8(b), most recognition times lie between 0.23 μs and 0.3 μs, with counts of 495, 517, 476, and 440, respectively. These spike counts and recognition times demonstrate that the SNN Object Detection System achieves very low latency.

6 Conclusions

In this paper we introduced the SNN Object Detection System, which implements object detection using a Spiking Neural Network with temporal coding, a natural match for the temporal pulses produced by a Lidar. We also demonstrated the benefits of the system through quantitative and qualitative experiments. As future work, we plan to explore networks deeper than three layers, and the detection of more object classes can be considered. Moreover, the SNN can be combined with a crossbar to implement further acceleration.


The authors would like to thank the support of Binghamton University.

