Glidar3DJ: A View-Invariant Gait Identification via Flash Lidar Data Correction

Abstract

Gait recognition is a leading remote-based identification method, suitable for real-world surveillance and medical applications. Model-based gait recognition methods have been particularly recognized for their scale- and view-invariant properties. We present Glidar3DJ, the first model-based gait recognition methodology that uses a skeleton model extracted from sequences generated by a single flash lidar camera. Existing successful model-based approaches take advantage of high-quality skeleton data collected by, for example, Kinect and Mocap, and are not practicable for application outside the laboratory. The low resolution and noisy imaging process of lidar negatively affect the performance of state-of-the-art skeleton-based systems, generating a significant number of outlier skeletons. We propose a rule-based filtering mechanism that adopts robust statistics to correct skeleton joint measurements. Quantitative experiments validate the efficacy of the proposed method in improving gait recognition.


Nasrin Sadeghzadehyazdi,  Tamal Batabyal, A. Glandon, Nibir K. Dhar, B. O. Familoni, K. M. Iftekharuddin, Scott T. Acton
Department of Electrical and Computer Engineering, University of Virginia,
Charlottesville, USA, Night Vision and Electronic Sensors Directorate Fort Belvoir, USA,
Electrical and Computer Engineering, Old Dominion University, Norfolk, USA


This paper is accepted to be published in: 2019 IEEE International Conference on Image Processing, Sept 22-25, 2019, Taipei, Taiwan. IEEE Copyright Notice: © IEEE 2019. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Index Terms—  gait recognition, lidar, feature correction

1 Introduction

Gait recognition has been an active area of research in the last decade due to its widespread application in forensic cases, surveillance, and medical studies of patients affected by motion-related diseases like Parkinson's disease [1]. Gait recognition uses features from both structure and motion for person identification. Previous studies have shown the relative uniqueness of gait for individuals [2]. Unlike other biometric features such as the iris, face, and fingerprint, gait recognition does not require a subject's cooperation, nor does it require high-quality data. Under uncontrolled real-world conditions, there are many scenarios in which direct contact between subjects and sensors is not possible, or in which a considerable distance between cameras and subjects makes reliable data acquisition difficult or impossible. Under such conditions, many biometric methods fail, whereas several studies have shown promising results for person identification with gait-based biometric features [3, 4]. Furthermore, unlike color and texture, which are among the prevalent features in many identification studies, features extracted from gait are resilient to changes in clothing and lighting conditions.

Fig. 1: Sample frames of lidar data. From top, first row: intensity data, second row: range data, third row: sample frames with correctly detected skeleton, last row: frames with faulty skeletons

In recent years, depth-sensing cameras such as Kinect and lidar have become popular for gait analysis due to their ability to provide range (depth) and intensity data [5, 6, 7, 8, 9]. Each pixel in the range data provides distance information, so three-dimensional information (with the additional range dimension) can be recorded in time. Unlike ordinary cameras, the performance of depth cameras is not affected by changes in lighting conditions. In this work, we use data that was collected by a single flash lidar camera. A lidar sensor is a time-of-flight camera that uses a laser beam to measure the distance of targets from the camera. A laser beam can be focused into small spots suited to the objects of interest and is not expanded considerably by the object's surface, which gives a lidar sensor the capability of providing detailed images of a scene. As a result of such properties, lidar sensors have found application in areas such as archaeology, forestry, geology, geography, space missions, transportation, and autonomous vehicles.

Fig. 2: Pipeline for person identification using joint location correction

Gait recognition methods using video data are generally divided into two main categories: model-free methods and model-based methods. Model-free approaches require features from clean silhouettes [10, 11]. Model-based methods fit a model, like a skeleton, to the human body and exploit features from the fitted model for recognition. Unlike model-free methods, model-based approaches are view- and scale-invariant, making them suited for real-world scenarios. On the other hand, model-based methods are computationally expensive. But with depth-sensing modalities like Kinect that provide a direct estimation of joint positions, this expense is not an issue. However, Kinect sensors suffer from limited range and unreliable range information in outdoor environments, in particular under direct sunlight, where the high-intensity infrared of the environment cannot be easily differentiated from the infrared light of the sensor [12]. Compared with Kinect, a flash lidar camera has a drastically extended range, and its performance is not affected in outdoor environments due to the high irradiance power of the pulsed laser compared with the background [13].

Fig. 3: Left: Skeleton model, Right: features. Each green arrow shows one 3-dimensional vector in the feature vector.

Existing state-of-the-art model-based methods avoid the challenge of erroneous features by adopting high-quality skeleton data provided by Kinect or Mocap. In contrast, the data collected by a flash lidar camera is noisy and has low resolution that negatively affects the skeleton extraction performance. Faulty skeleton models result in features that are plagued with missing and erroneous measurements that in turn present a major challenge for a successful gait identification. This work takes on the challenging task of gait identification using flash lidar data. Our main contributions can be described as follows. First, we present the first model-based approach for gait recognition using flash lidar data. The only existing lidar-based person identification methods are model-free, and rely on silhouette extraction from a point cloud [14, 15]. Second, we propose a rule-based filtering mechanism to correct for erroneous skeleton joint coordinates by modeling each joint location as a time sequence and adopting robust statistics measures of the nearest neighbors. Third, experiments are performed using the flash lidar data to evaluate the performance of the proposed methodology.

2 Proposed methodology

Figure 2 demonstrates the workflow of the proposed methodology. For a video sequence V with T frames, the inputs to the proposed gait recognition system are the intensity data I and range data R recorded by a single flash lidar camera, where the images are preprocessed to reduce noise. Figure 1 shows sample intensity and range data in the first and second rows. OpenPose, a state-of-the-art real-time pose detector [16], is leveraged to extract a skeleton model from the intensity information of the lidar. The employed skeleton model is illustrated on the left side of Figure 3. For each frame, we represent the 2-dimensional coordinates of the skeleton joints in the vectorized form

s(t) = [x_1(t), y_1(t), x_2(t), y_2(t), ..., x_J(t), y_J(t)]^T    (1)

where J is the number of joints and (x_j(t), y_j(t)) are the coordinates of the j-th joint in the image frame of reference. Sample frames with correctly detected skeleton models can be seen in the third row of Figure 1. Next, the range data is used to project the 2D joint locations provided by OpenPose into a real-world coordinate system:

X_j = (u_j − N_x/2) · (2 r_j tan(α/2) / N_x)    (2)

where X_j is the real-world location of joint j in the x direction, u_j is the corresponding location in the image coordinate system, N_x is the number of pixels in the x direction, α is the angle of view, and r_j is the range value of joint j. Several factors in the data negatively affect the quality of the features computed from the resulting joints. As the subjects loom closer to the camera, the range data are affected by noise. The intensity data lack color, and there is similarity between human clothing, skin, and the background. The last row in Figure 1 shows a few examples of detected faulty skeletons, which result from the noisy nature of the lidar data and erroneous joint localization by OpenPose. Features computed from such faulty skeletons contain missing and erroneous values that present a major challenge for successful gait recognition. To resolve this problem, we present a filtering mechanism to correct joint location values and extract features after joint correction. Furthermore, to incorporate the dynamics of the motion, we perform feature concatenation with a new criterion.
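As a rough sketch of the projection step, the following assumes a symmetric model with the angle of view measured across the full sensor axis; the function name and per-axis parameters are illustrative, not from the paper:

```python
import numpy as np

def image_to_world(u, v, r, n_x, n_y, alpha_x, alpha_y):
    """Project a joint's image coordinates (u, v), with range value r,
    into real-world X, Y coordinates. At range r, an N-pixel axis with
    angle of view alpha spans 2*r*tan(alpha/2), so each pixel offset
    from the image center is scaled by 2*r*tan(alpha/2)/N."""
    x_world = (u - n_x / 2.0) * (2.0 * r * np.tan(alpha_x / 2.0) / n_x)
    y_world = (v - n_y / 2.0) * (2.0 * r * np.tan(alpha_y / 2.0) / n_y)
    return x_world, y_world
```

A joint at the image center maps to X = Y = 0, and pixels farther from the center map to proportionally larger real-world offsets at the same range.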

Fig. 4: Sample joint location time sequence in one direction before (top), after joint correction (middle), and after smoothing (bottom).

3 Joint location correction

The top row of Figure 4 shows a typical joint location time sequence in one direction. The significant number of missing values and sudden jumps in the joint location sequence results in features that are plagued with erroneous and outlier measurements, and in turn in gait identification with lower accuracy. To resolve this problem, we propose a rule-based short-memory median filter to correct the joint location time sequence. S_x, the joint location time sequence in the x direction, extended over T frames, is defined as follows

S_x = {x(1), x(2), ..., x(T)}    (3)

where S_y and S_z can be defined in the same way. We correct x(t) at time t if there is a missing value (x(t) = 0) or a sudden jump. We define the sudden jump using the relative distance of the current and previous values of the joint location:

|ẋ(t)| / |x(t−1)| = |x(t) − x(t−1)| / |x(t−1)| > ε    (4)

Each instant t corresponds with one frame, and ẋ(t) is the derivative of the joint location in the x direction with respect to time. If either of the above conditions is satisfied, the corrected joint location value at frame t is calculated according to the following equation:

x̂(t) = median{ x(i) : i ∈ A(t) },  |A(t)| = k    (5)

where k is the number of previous closest nonzero neighbors of the joint location values at time t, and x(i), i ∈ A(t), are the joint location values at those neighbors of frame t. We define A(t) as the array of the k closest nonzero neighbors at time t. A(t) is updated continuously, containing at each instant the previous nonzero values of the joint location sequence that are closest in time to the current instant, including corrected joint location values.

The length of A(t) is selected to follow the local pattern of S_x. We used k = 3 as the smallest proper number of previous nearest neighbors; k = 1 and k = 2 are not proper choices for this problem. Choosing k = 1 corresponds with taking the single previous nearest nonzero neighbor; however, if that neighbor is noisy, the error propagates as a result of correction. On the other hand, choosing k = 2 for the length of A(t) transforms the median into the mean. The median is selected as a robust statistic that is less affected by outliers; by choosing the median over the mean, the effect of erroneous joint location values is ameliorated. Furthermore, median filtering is effective in removing impulse-like signal features, which occur in this application due to the pattern of missing joints and jumps in the joint location sequence. The middle row of Figure 4 shows a sample joint location sequence along one direction after applying joint correction.
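The correction rule above can be sketched as a single left-to-right pass over the sequence; a zero marks a missing joint, and the threshold ε and the policy of measuring jumps against the most recent nonzero (possibly corrected) value are assumptions for illustration:

```python
import numpy as np

def correct_joint_sequence(seq, k=3, eps=0.5):
    """Rule-based short-memory median filter (sketch).
    seq: 1D joint-location time sequence; 0.0 marks a missing value.
    k:   number of previous closest nonzero neighbors (k = 3 per the text).
    eps: relative-jump threshold (illustrative value)."""
    corrected = np.asarray(seq, dtype=float).copy()
    neighbors = []  # A(t): the k most recent nonzero (possibly corrected) values
    for t in range(len(corrected)):
        x = corrected[t]
        missing = x == 0
        # jump measured against the most recent nonzero value (assumption
        # when x(t-1) itself is missing)
        jump = (
            len(neighbors) > 0
            and abs((x - neighbors[-1]) / neighbors[-1]) > eps
        )
        if (missing or jump) and len(neighbors) == k:
            corrected[t] = np.median(neighbors)
        if corrected[t] != 0:
            neighbors.append(corrected[t])
            if len(neighbors) > k:
                neighbors.pop(0)  # keep only the k closest in time
    return corrected
```

Because corrected values are pushed back into A(t), a run of consecutive missing frames is filled from the same local level, which is also what causes the flattening visible in Figure 4.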

Figure 5 shows a comparison between the proposed rule-based median filter and the moving median. Our rule-based filter is particularly successful in correcting missing values when they occur over consecutive frames. This is mainly the result of the way each of the two filters works. In particular, the moving median uses the neighborhood information irrespective of the values. In contrast, our rule-based median filter uses the previous neighbors' values only if they are nonzero and there is no sudden jump between consecutive values. However, as can be seen in Figures 4 and 5, this can also cause flattening of the signal in some regions.

Neck to R Shoulder Neck to L Shoulder Neck to R Hip Neck to L Hip
R Shoulder to R Elbow L Shoulder to L Elbow R Hip to R Knee L Hip to L Knee
R Elbow to R Wrist L Elbow to L Wrist R Knee to R Ankle L Knee to L Ankle
Table 1: List of three-dimensional vectors in the feature vector (L refers to the left joints and R refers to the right joints)

Finally, in order to alleviate the effects of signal flattening and lower-amplitude impulses, both the result of joint correction in regions with consecutive missing values or sudden jumps, we employ RLowess (robust locally weighted scatterplot smoothing) [17], which locally fits a first-order polynomial using weighted linear regression, where the regression weights are estimated through a robust procedure. The robustness of the employed method is essential due to the existence of low-amplitude impulses that act as outliers. The last row of Figure 4 shows the smoothed joint location time sequence.
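A minimal robust locally weighted smoother in the spirit of RLowess [17] might look like the following; the tricube distance weights and bisquare robustness weights follow Cleveland's formulation, but the window fraction and number of robustifying iterations here are illustrative, not the paper's settings:

```python
import numpy as np

def rlowess(y, frac=0.3, iters=2):
    """Robust locally weighted first-order regression (sketch).
    At each frame, fit a line by weighted least squares over the
    nearest neighbors (tricube distance weights), then iteratively
    downweight outliers with bisquare robustness weights."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    r = max(2, int(np.ceil(frac * n)))  # neighborhood size
    robust = np.ones(n)
    fitted = y.copy()
    for _ in range(iters + 1):
        for i in range(n):
            d = np.abs(x - x[i])
            idx = np.argsort(d)[:r]           # r nearest frames
            h = d[idx].max() or 1.0
            w = np.clip(1 - (d[idx] / h) ** 3, 0, None) ** 3 * robust[idx]
            sw = np.sqrt(w)
            A = np.vstack([np.ones(r), x[idx]]).T
            beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
            fitted[i] = beta[0] + beta[1] * x[i]
        # bisquare robustness weights from the residuals
        res = y - fitted
        s = np.median(np.abs(res)) or 1.0
        robust = np.clip(1 - (res / (6 * s)) ** 2, 0, 1) ** 2
    return fitted
```

The robustifying iterations are what suppress the low-amplitude impulses left behind by joint correction: an impulse produces a large residual, gets a near-zero bisquare weight, and barely influences the next local fit.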

Fig. 5: From top to bottom: original joint location in one direction, after joint correction with the proposed rule-based median, after applying moving median filter of the same window length

4 feature vectors

Like [18], our feature vector is composed of a set of 3-dimensional vectors measured between selected joints of the skeleton model. Compared to features that describe the distance between joints [19], or features that only consider angles between selected joints [20], we implicitly encode both the distance and the angles of selected joints in the skeleton in different postures. Each three-dimensional vector in our feature vector is defined according to Equation (6):

v_ij = p_i − p_j    (6)

where i and j are the indices of selected joints and p_i is the 3D real-world location of joint i. Unlike the features in [18], which were defined with respect to a reference joint, our features are formulated between different joints. Table 1 lists the joints that form each of the three-dimensional vectors, and the right side of Figure 3 illustrates the described vectors.
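Computing the per-frame feature vector from Table 1 can be sketched as below; the joint indices follow a hypothetical OpenPose-style ordering and should be adjusted to the skeleton model actually used:

```python
import numpy as np

# Joint-index pairs mirroring Table 1 (hypothetical ordering:
# 1 = neck, 2/5 = R/L shoulder, 3/6 = R/L elbow, 4/7 = R/L wrist,
# 8/11 = R/L hip, 9/12 = R/L knee, 10/13 = R/L ankle).
BONE_PAIRS = [
    (1, 2), (1, 5), (1, 8), (1, 11),    # neck to shoulders / hips
    (2, 3), (5, 6), (8, 9), (11, 12),   # shoulders to elbows, hips to knees
    (3, 4), (6, 7), (9, 10), (12, 13),  # elbows to wrists, knees to ankles
]

def frame_features(joints):
    """joints: (J, 3) array of 3D joint positions for one frame.
    Returns the twelve 3D difference vectors p_i - p_j of Equation (6)
    concatenated into a single 36-dimensional feature vector."""
    joints = np.asarray(joints, dtype=float)
    return np.concatenate([joints[i] - joints[j] for i, j in BONE_PAIRS])
```

Each difference vector carries both the bone length (its norm) and its orientation (its direction), which is why distance and angle are encoded implicitly.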

Method     [4]       [19]      ** (proposed)
Accuracy   43.07%    45.67%    56.26%
F-Score    42.41%    43.72%    57.24%
Table 2: Correct identification scores (accuracy and F-score) for the proposed features (**) and the methods in [4] and [19]. Features are computed without joint correction.

Method     [4]       [19]      Glidar3DJ   Glidar3DJ (F-C)
Accuracy   61.20%    70.59%    81.24%      85.11%
F-Score    57.41%    65.15%    80.30%      84.33%
Table 3: Correct identification scores for Glidar3DJ, Glidar3DJ with feature concatenation (F-C), and the methods in [4] and [19]. Features are computed from corrected joints.

5 feature concatenation

When two individuals have similar body measurements in multiple postures, motion dynamics can play a crucial role in gait identification. A common practice to include motion dynamics in many model-based methods is to compute features like speed and step length from the ankle-to-ankle distance sequence [21, 22], or to calculate moments like the variance, maximum, and average of selected features in each gait cycle [4, 19, 23]. The gait cycle is estimated by looking at the time sequence of the distance between the two ankle joints, which is straightforward with clean joint position data. Figure 6 shows a comparison between ankle-to-ankle distance sequences, one from Kinect and one from joint-corrected samples of our lidar data set. While a clear cyclic pattern can be observed for the Kinect sample, it is difficult to determine a gait cycle in the lidar time sequence. To compensate for such shortcomings, instead of calculating feature moments, we concatenate feature vectors over consecutive frames to encode the dynamics of the motion.

To find a proper window length for feature concatenation, we use the idea of the gait cycle; however, the gait cycle can vary from subject to subject and even during the course of a walking sequence. To resolve this issue, we first remove the small and large values of the gait cycle from each ankle-to-ankle distance curve. These small and large values occur at the beginning and end of the motion, as well as when the subjects change their motion direction. Once such outlier gait cycles are removed, majority voting is performed on the remaining gait cycles. The gait cycle length that appears most often is selected as the window length for feature concatenation.

Fig. 6: Ankle-to-ankle distance for Kinect (left) vs. lidar (right).

6 Results

The data set recorded by a single flash lidar camera includes 34 walking sequences from 10 subjects, in which the camera is fixed during all the actions. The walking action is performed in three different ways, capturing multiple views of the subjects: walking toward and away from the camera, walking in a diamond shape, and walking in a diamond shape while holding a yard stick in one hand. We used 70% of the sequences for training and the rest for testing, where the classifier is tested on a type of walking that it was not trained on. K-nearest neighbors with the Manhattan distance is adopted as our classifier. The performance of the proposed approach is compared with the works in [4] and [19], which are among the state-of-the-art model-based methods. Table 2 shows the correct identification scores for the proposed features and the features in [4] and [19] without joint correction. Table 3 reports identification scores with the proposed joint correction, as well as the scores after applying feature concatenation to Glidar3DJ. Comparing the results, it is clear that joint correction drastically improves gait identification accuracy in all cases. The results also demonstrate the advantage of feature concatenation over model-based approaches that rely on a combination of static anthropometric attributes and statistical moments that describe dynamic features [4, 19].
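For reference, a minimal K-nearest-neighbors classifier with the Manhattan (L1) distance could look like the following; the value of k is illustrative, not the paper's setting:

```python
import numpy as np

def knn_manhattan(train_X, train_y, query, k=3):
    """Classify a query feature vector by majority vote among the k
    training samples closest in Manhattan (L1) distance."""
    train_X = np.asarray(train_X, dtype=float)
    dists = np.abs(train_X - np.asarray(query, dtype=float)).sum(axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In the experiments here, each concatenated gait feature vector would play the role of `query`, with subject identities as labels.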

7 Conclusion

In this work, we introduce Glidar3DJ, the first model-based approach for gait recognition using flash lidar data. Our model-based approach is scale- and view-invariant, which is essential for real-world application. We address a major limitation of current state-of-the-art model-based methods, which require high-quality Mocap or Kinect data. The noisy, low-resolution flash lidar data used in this study challenge the performance of state-of-the-art skeleton detectors, degrading joint localization and gait identification accuracy. A rule-based short-memory median filter is presented that improves the quality of the features and gait recognition accuracy. The proposed median filter also has potential application in predicting missing values in time series. Furthermore, we introduce a new feature concatenation criterion to incorporate the dynamics of motion and improve gait recognition. Experimental results support the effectiveness of Glidar3DJ for gait recognition despite noisy data with faulty and missing features.

References

  • [1] Silvia Del Din, Alan Godfrey, and Lynn Rochester, “Validation of an accelerometer to quantify a comprehensive battery of gait characteristics in healthy older adults and Parkinson’s disease: Toward clinical and at home use,” IEEE J. Biomedical and Health Informatics, vol. 20, no. 3, pp. 838–847, 2016.
  • [2] James E Cutting and Lynn T Kozlowski, “Recognizing friends by their walk: Gait perception without familiarity cues,” Bulletin of the psychonomic society, vol. 9, no. 5, pp. 353–356, 1977.
  • [3] Lily Lee and W Eric L Grimson, “Gait analysis for recognition and classification,” in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on. IEEE, 2002, pp. 155–162.
  • [4] Aniruddha Sinha, Kingshuk Chakravarty, and Brojeshwar Bhowmick, “Person identification using skeleton information from kinect,” in Proc. Intl. Conf. on Advances in Computer-Human Interactions, 2013, pp. 101–108.
  • [5] Tamal Batabyal, Scott T Acton, and Andrea Vaccari, “Ugrad: A graph-theoretic framework for classification of activity with complementary graph boundary detection,” in Image Processing (ICIP), 2016 IEEE International Conference on. IEEE, 2016, pp. 1339–1343.
  • [6] Tamal Batabyal, Andrea Vaccari, and Scott T Acton, “Ugrasp: A unified framework for activity recognition and person identification using graph signal processing,” in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 3270–3274.
  • [7] Ross A Clark, Kelly J Bower, Benjamin F Mentiplay, Kade Paterson, and Yong-Hao Pua, “Concurrent validity of the microsoft kinect for assessment of spatiotemporal gait variables,” Journal of biomechanics, vol. 46, no. 15, pp. 2722–2725, 2013.
  • [8] Ferda Ofli, Rizwan Chaudhry, Gregorij Kurillo, René Vidal, and Ruzena Bajcsy, “Sequence of the most informative joints (smij): A new representation for human skeletal action recognition,” Journal of Visual Communication and Image Representation, vol. 25, no. 1, pp. 24–38, 2014.
  • [9] Tamal Batabyal, Tanushyam Chattopadhyay, and Dipti Prasad Mukherjee, “Action recognition using joint coordinates of 3d skeleton data,” in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 4107–4111.
  • [10] Jinguang Han and Bir Bhanu, “Individual recognition using gait energy image,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 2, pp. 316–322, 2006.
  • [11] Imad Rida, Xudong Jiang, and Gian Luca Marcialis, “Human body part selection by group lasso of motion for model-free gait recognition,” IEEE Signal Processing Letters, vol. 23, no. 1, pp. 154–158, 2016.
  • [12] Péter Fankhauser, Michael Bloesch, Diego Rodriguez, Ralf Kaestner, Marco Hutter, and Roland Y Siegwart, “Kinect v2 for mobile robot navigation: Evaluation and modeling,” in 2015 International Conference on Advanced Robotics (ICAR). IEEE, 2015, pp. 388–394.
  • [13] Radu Horaud, Miles Hansard, Georgios Evangelidis, and Clément Ménier, “An overview of depth cameras and range scanners based on time-of-flight technologies,” Machine vision and applications, vol. 27, no. 7, pp. 1005–1020, 2016.
  • [14] Bence Gálai and Csaba Benedek, “Feature selection for lidar-based gait recognition,” in Computational Intelligence for Multimedia Understanding (IWCIM), 2015 International Workshop on. IEEE, 2015, pp. 1–5.
  • [15] Csaba Benedek, “3d people surveillance on range data sequences of a rotating lidar,” Pattern Recognition Letters, vol. 50, pp. 149–158, 2014.
  • [16] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” arXiv preprint arXiv:1611.08050, 2016.
  • [17] William S Cleveland, “Robust locally weighted regression and smoothing scatterplots,” Journal of the American statistical association, vol. 74, no. 368, pp. 829–836, 1979.
  • [18] MS Kumar and R Venkatesh Babu, “Human gait recognition using depth camera: a covariance based approach,” in Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing. ACM, 2012, p. 20.
  • [19] Ke Yang, Yong Dou, Shaohe Lv, Fei Zhang, and Qi Lv, “Relative distance features for gait recognition with kinect,” Journal of Visual Communication and Image Representation, vol. 39, pp. 209–217, 2016.
  • [20] Adrian Ball, David Rye, Fabio Ramos, and Mari Velonaki, “Unsupervised clustering of people from ‘skeleton’ data,” in Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. ACM, 2012, pp. 225–226.
  • [21] Johannes Preis, Moritz Kessel, Martin Werner, and Claudia Linnhoff-Popien, “Gait recognition with kinect,” in 1st international workshop on kinect in pervasive computing. New Castle, UK, 2012, pp. 1–4.
  • [22] Kenji Koide and Jun Miura, “Identification of a specific person using color, height, and gait features for a person following robot,” Robotics and Autonomous Systems, vol. 84, pp. 76–87, 2016.
  • [23] Wenzheng Chi, Jiaole Wang, and Max Q-H Meng, “A gait recognition method for human following in service robots,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 9, pp. 1429–1440, 2018.