On-chip Face Recognition System Design with Memristive Hierarchical Temporal Memory

Abstract

Hierarchical Temporal Memory is a new machine learning algorithm intended to mimic the working principle of the neocortex, the part of the human brain responsible for learning, classification, and making predictions. Although many works illustrate its effectiveness as a software algorithm, hardware design for HTM remains an open research problem. Hence, this work proposes an architecture for the HTM Spatial Pooler and Temporal Memory with a learning mechanism that creates a single image for each class based on the important and unimportant features of all images in the training set. In turn, the reduced number of templates within the database lowers the memory requirements and increases the processing speed. Moreover, face recognition analysis indicates that, for a large number of training images, the proposed design provides higher accuracy results (83.5%) compared to the Spatial-Pooler-only design presented in previous works.

1 Introduction

Hierarchical Temporal Memory (HTM) is a cognitive learning algorithm developed by Numenta Inc. [1]. HTM was designed based on various principles of neuroscience and is therefore said to emulate the working principle of the neocortex, the part of the human brain responsible for learning, classification, and making predictions [2].

After the successful software realization of this learning algorithm, several works such as [3] have been conducted to realize its hardware implementation, including [7], one of the latest works, which proposes an analog circuit design of the HTM Spatial Pooler based on the combination of memristive crossbar circuits with CMOS technology. One of the main advantages of that work is that the input data is processed in the analog domain, which can offer higher processing speed, primarily due to the absence of the analog-to-digital and digital-to-analog converters used in digital systems. Thus, inspired by the design described in [7] and by the idea of creating a new analog add-on system that may move processing from the digital domain to the analog domain at the sensory level, this work proposes a system design of Hierarchical Temporal Memory for face recognition applications.

In particular, this work proposes a system-level design for HTM that combines a memristive-crossbar-based Spatial Pooler [7] with a conceptual analog Temporal Memory whose learning mechanism is inspired by the Hebbian rule. Further, we present a face recognition algorithm for the proposed system and provide its performance results and analysis.

2 Background

2.1 Hierarchical Temporal Memory

The core of the proposed system is HTM, which consists of two parts: the Spatial Pooler and the Temporal Memory [2]. The Spatial Pooler (SP) is responsible for generating sparse distributed representations of input data and can be used on its own for feature extraction and pattern recognition applications, whereas the Temporal Memory (TM) is responsible for learning input patterns and making predictions based on temporal changes in the given input stream [8].

HTM was initially developed as a software algorithm [8], and research works such as [9] were presented to illustrate and verify the capabilities of the algorithmic implementation of HTM for performing classification, learning patterns, detecting abnormalities, and making predictions.

Since HTM is a new machine learning algorithm, few attempts have been made to implement it at the hardware level. For instance, [3] presented an HTM hardware design for a digital application-specific integrated circuit (ASIC) architecture, [4] depicted a design for an FPGA implementation of digital HTM, [5] proposed computing blocks for HTM using memristive crossbar arrays and spin-neurons, which process data in both digital and analog domains, and one of the latest works [6] proposed circuits for the HTM Spatial Pooler based on a memristive crossbar architecture.

2.2 Hebbian theory

Introduced by Donald Hebb in 1949, Hebbian theory (also known as the Hebbian rule or Hebbian postulate) serves as one of the many learning mechanisms used in the design of artificial neural networks. In particular, it describes the basic idea behind synaptic plasticity and states that synaptic efficacy increases when a presynaptic cell takes part in repeated or persistent stimulation and firing of a neighboring postsynaptic cell [12]. Following the Hebbian rule, the activation of postsynaptic units in neural networks depends on the weighted activations of presynaptic units, which can be represented by

$$y_j = \sum_i w_{ij}\, x_i \qquad \text{(Equation 1)}$$

where $y_j$ represents the output of neuron $j$, $x_i$ stands for the input, and $w_{ij}$ is the weight of the connection from neuron $i$ to $j$ [12].

In other words, Hebbian theory claims that the synaptic weight between two neurons increases when both neurons simultaneously experience activation or deactivation, and decreases when they activate or deactivate separately. The change in the synaptic weight of the connection is given by

$$\Delta w_{ij} = \eta\, x_i y_j \qquad \text{(Equation 2)}$$

which is known as the learning mechanism of Hebbian theory [12], where $\eta$ is the learning rate. This learning mechanism is used in artificial neural networks to alter the weights between neurons.
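For concreteness, the following minimal NumPy sketch applies (Equation 1) and (Equation 2); the vector sizes, learning rate, and variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)     # presynaptic activations x_i (illustrative size)
w = rng.random(4)     # weights w_ij feeding one postsynaptic neuron j
eta = 0.1             # learning rate (assumed value)

y = np.dot(w, x)      # Equation 1: y_j = sum_i w_ij * x_i
dw = eta * x * y      # Equation 2: weight change is eta * x_i * y_j
w += dw               # weight grows when x_i and y_j are co-active
```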

Figure 1: High-level block diagram of the proposed system illustrating the operating principle of the HTM Spatial Pooler

2.3 Memristor models

Currently, several memristor models are available that not only incorporate the characteristics of real existing devices, but also provide the possibility to switch from one device's parameters to another's, so that the suitability of these devices can be assessed. For example, [13] proposed a model with nanosecond switching time, which is crucial for designing real-time systems. Recent models proposed in [14] also allow the simulation of large-scale networks of memristors, since the parallelism and scalability of the system play an important role in processing huge amounts of data.

3 Proposed HTM System

3.1 System Design

A high-level block diagram of the proposed system is illustrated in Figure 1. The input data controller reads the input image, places it in data storage, and sends it to the HTM Spatial Pooler by partially retrieving it from the data storage. The data controller is essential because partial sending ensures that the selected size of the HTM SP is capable of processing an entire image. In turn, the HTM SP is responsible for feature extraction from the input image and thus provides a binary output. If the input image is a training image, the output data controller directs its extracted features to the HTM TM, which creates a single image (template) for each class holding the common features of all the training images belonging to that particular class. During the testing phase, the resulting images stored within the TM are used by the pattern matcher to calculate a similarity score between the input testing image and each of the trained classes. A toy sketch of this dataflow is given below.
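The following self-contained Python sketch mimics the partial-processing dataflow of Figure 1 with a placeholder stage function; none of the names or parameter values come from the paper, and the real blocks are the analog circuits of Section 4.

```python
import numpy as np

def spatial_pooler(patch):
    # Placeholder feature extraction: binarize the patch at its mean value.
    return (patch > patch.mean()).astype(int)

def extract_features(image, rows_per_part=16):
    # Input data controller: retrieve the stored image in parts, so that an
    # SP block of fixed size can process an arbitrarily large image.
    parts = np.array_split(image, max(1, image.shape[0] // rows_per_part))
    # Output data controller: reassemble the binary SP outputs into one
    # feature-extracted image for the TM (training) or matcher (testing).
    return np.vstack([spatial_pooler(p) for p in parts])
```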

3.2 System Algorithm

In this work, we also propose an algorithm that can be used to analyze the effectiveness of the proposed system; it shows the interconnections between the main processing stages of the entire system: pre-processing, HTM SP, HTM TM, and pattern matching. The pre-processing stage (lines 2-3 of the algorithm) converts the input image into a system-compatible format: the input image is converted to grayscale and its quality is enhanced using standard deviation filtering. This is achieved either by external means or by the input data controller.
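A minimal sketch of this pre-processing stage is shown below, assuming a 3x3 filtering window (the window size and grayscale conversion are not specified in the paper).

```python
import numpy as np
from scipy.ndimage import generic_filter

def preprocess(rgb):
    # Convert to grayscale by channel averaging (a common convention; the
    # exact conversion used in the paper is not specified).
    gray = rgb.mean(axis=2) if rgb.ndim == 3 else rgb.astype(float)
    # Standard deviation filtering: replace each pixel with the standard
    # deviation of its 3x3 neighborhood, emphasizing edges and texture.
    return generic_filter(gray, np.std, size=3)
```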

The HTM SP stage (lines 5-21 of the algorithm) models the feature extraction process performed by the Spatial Pooler block. A random weight matrix W is initially generated so that each weight in W has an analog value between 0 and 1; the dimensions of W also define the dimensions of each column within the Spatial Pooler. The following lines define the connectivity of each synapse: if its weight is higher than a threshold, the synapse is connected and represented by 1; otherwise, it is disconnected and represented by 0. The synapse connectivity is used to determine the overlap value of each column, computed as the sum of the products of the synaptic connectivity matrix and the bits of the image within the column's region. This overlap value represents the importance of the bits connected to each particular column. A minimal sketch of this step is given below.
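In the sketch below, every column is assumed, for simplicity, to see the same flattened input region, and the connectivity threshold value is an assumption.

```python
import numpy as np

def column_overlaps(region, weights, theta=0.5):
    # weights: one random analog weight in [0, 1] per synapse per column,
    # shaped (n_columns, n_pixels). theta is an assumed threshold value.
    connected = (weights > theta).astype(float)  # connected -> 1, else 0
    # Overlap of each column: sum of products of its connected synapses
    # and the corresponding binary input bits within the region.
    return connected @ region.ravel().astype(float)
```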

The next lines of the algorithm define the inhibition rule implemented in the proposed system. According to this rule, inhibition is performed in a block-by-block manner, where each inhibition block spans a fixed number of columns, and is based on the overlap values achieved by the columns lying within that block. The individual overlap values of the columns are compared with a threshold determined as the maximum overlap detected within that particular inhibition block. The column or columns with an overlap value greater than or equal to this threshold are considered important and represented by a logic high value; otherwise, the columns are considered unimportant and represented by a logic low value. As a result, the binary feature-extracted output image after HTM SP processing is formed by concatenating all inhibition blocks, as in the sketch below.
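The inhibition rule can be sketched as follows for a one-dimensional arrangement of columns; the inhibition block size is an assumed parameter.

```python
import numpy as np

def inhibit(overlaps, block=4):
    out = np.zeros(len(overlaps), dtype=int)
    for i in range(0, len(overlaps), block):
        seg = overlaps[i:i + block]
        gamma = seg.max()                  # threshold: max overlap in block
        out[i:i + block] = seg >= gamma    # winning column(s) -> logic high
    return out                             # concatenation of all blocks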

The HTM TM stage defines a learning mechanism that is activated during the training stage, when the binary feature-extracted image from the HTM SP is moved by the output data controller block to the HTM TM block. The proposed TM creates a class map for each of the image classes, which reflects the temporal variations of the spatial features. This is done by making the TM update every time a new feature-extracted image is fetched to the TM block: depending on whether a bit within the image has the value 1 or 0, the corresponding memory cell within the class map is respectively increased or decreased by a weight update value. At the end of the training phase, the class map of each class is binarized. A sketch of this update rule follows.
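In this sketch, the magnitude of the weight update is an assumed value; the binarization threshold follows the mean-value thresholding described in Section 4.3.

```python
import numpy as np

def update_class_map(class_map, sp_output, delta=0.01):
    # Cells under bits equal to 1 (important features) grow by +delta;
    # cells under bits equal to 0 shrink by -delta. delta is assumed.
    return class_map + np.where(sp_output == 1, delta, -delta)

def binarize(class_map):
    # At the end of training, each class map is binarized; the mean value
    # is used as the threshold, as in the pattern matching stage.
    return (class_map >= class_map.mean()).astype(int)
```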

The recognition and image classification stage defines a pattern matching process that is active during the testing phase, when the binary feature-extracted image is moved by the output data controller block to the Pattern Matcher block. The similarity score between the extracted features of the testing image and each of the class maps stored within the HTM TM is defined as the sum of the XOR logic high outputs. Since the XOR operation produces a logic high output at places where the two compared bits differ, the class of the tested image is given by the class map that produces the least score, as sketched below.
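In code form, the scoring reduces to counting XOR mismatches, as in this minimal sketch.

```python
import numpy as np

def classify(test_features, class_maps):
    # Similarity score: number of XOR logic-high outputs (mismatched bits);
    # the tested image belongs to the class map with the least score.
    scores = [np.sum(np.logical_xor(test_features, cm)) for cm in class_maps]
    return int(np.argmin(scores))
```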

4 Circuits for the Proposed System

4.1 Spatial Pooler

In the proposed system, HTM SP processing can be implemented with the memristive-crossbar-based SP [7]. Memristor devices, owing to their ability to memorize and to mimic neurons, find various applications [15]. Figure 2 illustrates a single memristive crossbar processing unit. Memristive-CMOS circuits allow precise realization of the feature extraction process described in the algorithm, with additional advantages in terms of parallel synaptic processing and compact storage of synaptic weights. Figure 3 illustrates a modified version of the WTA circuit designed in [18], which, when combined with the SP circuits presented in [7], allows implementation of the inhibition processing described in the algorithm.

Figure 2: Single memristive crossbar processing unit as presented in [7]
Figure 3: Winner-Take-All circuit taken from [18]

4.2 Temporal Memory

Instead of saving all the feature-extracted images, as was done in the previous SP design [7], the proposed work incorporates a conceptual analog TM into the entire system. It is intended to reduce memory requirements and processing time by creating a single training image, called a class map in this work. Such a class map incorporates the features of all training images belonging to a single class and allows pattern matching to be performed by comparing the testing image with only a single image for each of the memorized (trained) classes.

This is realized by making the TM learn by focusing on both important and unimportant features and by reflecting how the features change with time.

The focus is achieved by placing the Temporal Memory circuitry after the Spatial Pooler, so that its inputs are not the original training images but the feature-extracted images provided at the output of the Spatial Pooler. Since each of these outputs is binary in nature, such placement allows the Temporal Memory to differentiate important and unimportant features.

The reflection is achieved by changing the weights of the Temporal Memory cells according to the importance of the corresponding input bit. This is realized by implementing the learning mechanism of Hebbian theory given by (Equation 2), which, in general, is used to determine the weight change between presynaptic and postsynaptic units. In the proposed design of the Temporal Memory, the binary pixels of the feature-extracted images are used as presynaptic units, whereas the postsynaptic units are represented by a matrix of ones of the same size as the image. The realization of the postsynaptic units as a matrix of ones ensures that every pixel of the feature-extracted image is treated as equally important. Moreover, such an arrangement allows the weights of the Temporal Memory cells to be altered with respect to their importance.

In particular, if the input bit of the feature-extracted image is 1, meaning that it represents an important feature, then the weight of the corresponding Temporal Memory cell increases by a positive weight update (+Δ) value. On the contrary, if the input bit of the feature-extracted image is 0, meaning that it represents an unimportant feature, then the weight of the corresponding Temporal Memory cell decreases by a negative weight update (-Δ) value. This algorithm of differentiating important and unimportant features using the learning mechanism of the Hebbian rule is illustrated in Figure 4.

Figure 4: Example of determining the required weight updates (positive or negative) using the Hebbian learning mechanism at each particular pixel within the Temporal Memory

As a result, instead of keeping multiple binary images with extracted features, the TM creates a single analog image that incorporates the important and unimportant features of, and has the same dimensions as, each of the input images belonging to a single class. Figure 5 illustrates the formation of the class map for the first class by fetching the feature-extracted binary images belonging to that class into the TM. All of the TM cells, initially having the same weight, eventually become distinguishable by the end of the training sequence.

Figure 5: The main principle of single class map formation using the Temporal Memory and the feature-extracted images obtained from the Spatial Pooler

However, such a learning mechanism requires the TM to be multi-valued. This ensures that the weights can take values not only of 1 and 0, as the feature-extracted images do, but can be changed according to the weight update value ±Δ.

Hence, Figure 6 illustrates the design of the TM required to memorize a single class map using multi-valued memory cells. The total number of required memory cells corresponds to the product of the number of memorized classes and the number of bits in a single class map. For example, for 13 class maps, each having dimensions of 120 × 160 bits, the required number of memory cells is 13 × 19,200 = 249,600. The multi-valued memory cells, in turn, can be realized using the n-bit memristor-based memory described in [19].

Figure 6: The design of Temporal Memory consisting of 120 × 160 = 19,200 memory cells, used for storing a single class map trained by fetching input images having dimensions of 120 bits × 160 bits

4.3 Pattern Matcher

After the class maps have been formed within the TM during the training phase, testing is performed by comparing each input testing image with all the class maps learned by the system. In the proposed system, this is achieved by fetching the feature-extracted input testing image and all the class maps into the memristive pattern matcher, which is realized by the memristive XOR gates described in [7].

Figure 7 illustrates the principle used to determine the similarity score between any feature-extracted image and two arbitrary class maps. The class maps are thresholded at their mean value so that the XOR logic can be applied. For two input images, the memristive XOR gates produce an output image having logic 0 in the regions where both images represent important or unimportant features (i.e., both have 1 in that region or both have 0 in that region) and logic 1 in the regions where the two images represent different features. Hence, the class of an input testing image is determined by the class map that yields the least number of white bits (or the greatest number of black bits) at the corresponding XOR output.

The described pattern matching process emphasizes the advantage of the proposed design in terms of processing speed: the time required for the system to determine the similarity score is reduced, since the number of templates to be compared with the input testing image decreases to a single image (a class map) per class.

Figure 7: The face recognition process performed by calculating the similarity score between the feature-extracted testing image and two class maps using memristive XOR gates

5 Results and Discussion

The performance metrics were based on face recognition accuracy on the AR database [20]. The database consists of 100 classes, each having 26 images taken in two sessions. These images were divided into two separate sets: the first set consisted of the 13 images of each class taken during the first session and was used to train the Temporal Memory, whereas the second set consisted of the 13 images of each class taken during the second session and was used for testing.

Based on the above-mentioned setup, the first analysis aimed to determine the optimal delta (Δ) required to update the weights of the Temporal Memory for different numbers of training images. Figure 8 illustrates the recognition accuracy results achieved for different combinations of Δ and the size of the training set. As can be seen, for a number of training images between 1 and 13, the maximum recognition accuracy is achieved when the Δ value is small. Moreover, as the size of the training set increases, the maximum achieved accuracy increases for small values of Δ and decreases for large values of Δ.

This result indicates that achieving maximum recognition accuracy with a large number of training images is possible when the weight update value is small. Another point to take into account is that the state of a memristor changes in proportion to the duration of an applied constant voltage. Together, these two statements imply that an increased number of training images decreases the duration of the applied voltage required to update the weights, which means that consecutive input images can be processed at a higher speed.

Figure 8: Optimal delta estimation based on recognition accuracy results
Table 1: Recognition accuracy of classifying the test images in each category of the AR database by two different architectures, using a single template or class map per class.

Architecture                          | Emotions | Light conditions | Occlusions (glasses) | Occlusions (scarf) | Total
Spatial Pooler                        |          |                  |                      |                    |
Spatial Pooler and Temporal Memory    |          |                  |                      |                    | 83.5%

After the optimal weight update value was determined, an analysis was performed to compare the effectiveness of the architecture based on the Spatial Pooler only [7] with the proposed architecture combining the Spatial Pooler and the Temporal Memory on the face recognition task. To establish common settings, for the Spatial-Pooler-only architecture the training images belonging to a single class were first averaged, and the averaged images were then processed by the Spatial Pooler to provide feature-extracted training templates. For the proposed architecture, the training images were processed by the Spatial Pooler, and the extracted feature outputs were used to create class maps within the Temporal Memory. Table 1 presents the recognition accuracy results for the condition in which both architectures had one template or class map for each of the 100 classes (100 templates or class maps in total), with which all 13 testing images of each class (1300 testing images in total) were compared. Comparing these results with those reported in [7], it can be seen that as the number of training images increases, the architecture incorporating the Temporal Memory provides higher recognition accuracy at lower memory requirements and faster processing in the pattern matching stage.

6 Conclusion

In this paper, we proposed a system realization of the HTM Spatial Pooler and Temporal Memory using memristor-CMOS circuits. The main difference from the existing memristive HTM system is that the proposed system incorporates a Temporal Memory with learning capability. The learning process involves collecting the important features from the training data of a given class and creating its class map: a single image based on the extracted features. Hence, the main advantage of the system is the lower memory occupation of the HTM, which in turn provides higher processing speed. The results of the performance analysis indicate that, for a large set of training images, the proposed recognition system provides higher accuracy compared to the results presented in the previous work.

References

  1. D. George and J. Hawkins, “A hierarchical bayesian model of invariant pattern recognition in the visual cortex,” in Neural Networks, 2005. IJCNN ’05. Proceedings. 2005 IEEE International Joint Conference on, vol. 3, July 2005, pp. 1812–1817 vol. 3.
  2. J. Hawkins and D. George, “Hierarchical temporal memory: Concepts, theory and terminology,” Technical report, Numenta, Tech. Rep., 2006.
  3. W. J. Melis, S. Chizuwa, and M. Kameyama, “Evaluation of the hierarchical temporal memory as soft computing platform and its vlsi architecture,” in 39th International Symposium on Multiple-Valued Logic. IEEE, 2009, pp. 233–238.
  4. A. M. Zyarah, “Design and analysis of a reconfigurable hierarchical temporal memory architecture,” Master’s thesis, 2015.
  5. D. Fan, M. Sharad, A. Sengupta, and K. Roy, “Hierarchical temporal memory based on spin-neurons and resistive memory for energy-efficient brain-inspired computing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 9, pp. 1907–1919, Sept 2016.
  6. T. Ibrayev, A. P. James, C. Merkel, and D. Kudithipudi, “A design of htm spatial pooler for face recognition using memristor-cmos hybrid circuits,” in 2016 International Symposium on Circuits and Systems (ISCAS). IEEE, 2016.
  7. A. P. James, I. Fedorova, T. Ibrayev, and D. Kudithipudi, “Htm spatial pooler with memristor crossbar circuits for sparse biometric recognition,” IEEE Transactions on Biomedical Circuits and Systems, vol. PP, no. 99, pp. 1–12, 2017.
  8. J. Hawkins, S. Ahmad, and D. Dubinsky, “Hierarchical temporal memory including htm cortical learning algorithms,” Technical report, Numenta, Inc., Palo Alto, http://www.numenta.com/htmoverview/education/HTM_CorticalLearningAlgorithms.pdf, 2010.
  9. N. Farahmand, M. H. Dezfoulian, H. GhiasiRad, A. Mokhtari, and A. Nouri, “Online temporal pattern learning,” in 2009 International Joint Conference on Neural Networks, June 2009, pp. 797–802.
  10. I. Ramli and C. Ortega-Sanchez, “Pattern recognition using hierarchical concatenation,” in Computer, Control, Informatics and its Applications (IC3INA), 2015 International Conference on, Oct 2015, pp. 109–113.
  11. A. B. Csapo, P. Baranyi, and D. Tikk, “Object categorization using vfa-generated nodemaps and hierarchical temporal memories,” in Computational Cybernetics, 2007. ICCC 2007. IEEE International Conference on, Oct 2007, pp. 257–262.
  12. C. Fyfe, Hebbian Learning and Negative Feedback Networks. Springer Science & Business Media, 2007.
  13. C. Yakopcic, T. M. Taha, G. Subramanyam, and R. E. Pino, “Memristor spice model and crossbar simulation based on devices with nanosecond switching time,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–7.
  14. D. Biolek, Z. Kolka, V. Biolkova, and Z. Biolek, “Memristor models for spice simulation of extremely large memristive networks,” in 2016 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2016, pp. 389–392.
  15. A. K. Maan, D. A. Jayadevi, and A. P. James, “A survey of memristive threshold logic circuits,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 8, pp. 1734–1746, Aug 2017.
  16. A. K. Maan, A. P. James, and S. Dimitrijev, “Memristor pattern recogniser: isolated speech word recognition,” Electronics Letters, vol. 51, no. 17, pp. 1370–1372, 2015.
  17. O. Krestinskaya, T. Ibrayev, and A. James, “Hierarchical temporal memory features with memristor logic circuits for pattern recognition,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. PP, 2017.
  18. J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, “Winner-take-all networks of O(n) complexity,” California Institute of Technology, Pasadena, Dept. of Computer Science, Tech. Rep., 1988.
  19. H. Mostafa and Y. Ismail, “Process variation aware design of multi-valued spintronic memristor-based memory arrays,” IEEE Transactions on Semiconductor Manufacturing, vol. 29, no. 2, pp. 145–152, 2016.
  20. A. Martinez and R. Benavente, “The ar face database,” Rapport technique, vol. 24, 1998.