Convolutional Neural Networks for Automatic Meter Reading

Rayson Laroca and Victor Barroso, Federal University of Paraná, Laboratory of Vision, Robotics and Imaging, Department of Informatics, Av. Coronel Francisco Heráclito dos Santos 100, Curitiba, Brazil, 81530-000; Matheus A. Diniz, Gabriel R. Gonçalves, and William Robson Schwartz, Federal University of Minas Gerais, Smart Surveillance Interest Group, Department of Computer Science, Av. Antônio Carlos 6627, Belo Horizonte, Brazil, 31270-010; David Menotti, Federal University of Paraná, Laboratory of Vision, Robotics and Imaging, Department of Informatics, Av. Coronel Francisco Heráclito dos Santos 100, Curitiba, Brazil, 81530-000
Abstract

In this paper, we tackle \gls*amr by leveraging the high capability of \glspl*cnn. We design a two-stage approach that employs the Fast-YOLO object detector for counter detection and evaluates three different \gls*cnn-based approaches for counter recognition. In the \gls*amr literature, most datasets are not available to the research community since the images belong to a service company. In this sense, we introduce a new public dataset, called UFPR-AMR dataset, with 2,000 fully and manually annotated images. This dataset is, to the best of our knowledge, three times larger than the largest public dataset found in the literature and contains a well-defined evaluation protocol to assist the development and evaluation of \gls*amr methods. Furthermore, we propose the use of a data augmentation technique to generate a balanced training set with many more examples to train the \gls*cnn models for counter recognition. In the proposed dataset, impressive results were obtained and a detailed speed/accuracy trade-off evaluation of each model was performed. In a public dataset, state-of-the-art results were achieved using less than 200 images for training.

automatic meter reading; convolutional neural networks; deep learning; public dataset
\newacronym{amr}{AMR}{Automatic Meter Reading}
\newacronym{ann}{ANN}{Artificial Neural Network}
\newacronym{bflop}{BFLOP}{billion floating-point operations}
\newacronym{cca}{CCA}{Connected Components Analysis}
\newacronym{cnn}{CNN}{Convolutional Neural Network}
\newacronym{copel}{Copel}{Energy Company of Paraná}
\newacronym{crnn}{CRNN}{Convolutional Recurrent Neural Network}
\newacronym{ctc}{CTC}{Connectionist Temporal Classification}
\newacronym{fps}{FPS}{frames per second}
\newacronym{hog}{HOG}{Histogram of Oriented Gradients}
\newacronym{iou}{IoU}{Intersection over Union}
\newacronym{lpr}{LPR}{License Plate Recognition}
\newacronym{lstm}{LSTM}{long short-term memory}
\newacronym{ocr}{OCR}{Optical Character Recognition}
\newacronym{roi}{ROI}{Region of Interest}
\newacronym{fcn}{FCN}{Fully Convolutional Network}
\newacronym{mlp}{MLP}{Multilayer Perceptron}
\newacronym{mser}{MSER}{Maximally Stable Extremal Regions}
\newacronym{stn}{STN}{Spatial Transformer Network}
\newacronym{svm}{SVM}{Support Vector Machine}
\newacronym{yolo}{YOLO}{You Only Look Once}

*Rayson Laroca, rblsantos@inf.ufpr.br

1 Introduction

\glsresetall \gls*amr refers to automatically recording the consumption of electric energy, gas and water for both monitoring and billing [1, 2, 3]. Despite the existence of smart readers [4], they are not widespread in many countries, especially underdeveloped ones, and the reading is still performed manually on site by an operator who takes a picture as reading proof [5, 2]. Since this operation is prone to errors, another operator needs to check the proof image to confirm the reading [5, 2]. This offline checking is expensive in terms of human effort and time, and has low efficiency [6]. Moreover, due to the large number of images to be evaluated, the inspection is usually done by sampling [7] and errors might go unnoticed.

Performing the meter inspection automatically would reduce mistakes introduced by the human factor and save manpower. Furthermore, the reading could also be executed fully automatically using cameras installed in the meter box [1, 8]. Image-based \gls*amr has advantages such as lower cost and fast installation since it does not require renewal or replacement of existent meters [9].

A common \gls*amr approach includes three phases, namely: (i) counter detection, (ii) digit segmentation and (iii) digit recognition. Counter detection is the fundamental stage, as its performance largely determines the overall accuracy and processing speed of the entire \gls*amr system.

Despite the importance of a robust \gls*amr system and the major advances achieved in computer vision using deep learning [10], to the best of our knowledge, only in Ref. 11, published very recently, were \glspl*cnn employed at all \gls*amr stages. Previous works relied, in at least one stage, on handcrafted features that capture certain morphological and color attributes of the meters/counters. These features are easily affected by noise and might not be robust to different types of meters.

Deep learning approaches are particularly dependent on the availability of large quantities of training data to generalize well and yield high classification accuracy for unseen data [12]. Some previous works [2, 6, 11] employed large datasets (e.g., more than , images) to train and evaluate their systems. However, these datasets were not made public. In the \gls*amr literature, the datasets are usually not publicly available since the images belong to the service company (electricity, gas, or water). In this sense, we introduce a new public dataset, called UFPR-AMR dataset, with 2,000 fully annotated images to assist the development and evaluation of \gls*amr methods. The proposed dataset is three times larger than the largest public dataset [13] found in the literature.

In this paper, we design a two-stage approach for \gls*amr. We first detect the counter region and then tackle the digit segmentation and recognition stages jointly by leveraging the high capability of \glspl*cnn. We employ a smaller version of the YOLO object detector, called Fast-YOLO [14], for counter detection. Afterward, we evaluate three CNN-based approaches, i.e. CR-NET [15], Multi-Task Learning [16] and \gls*crnn [17], for the counter recognition stage (i.e., digit segmentation and recognition). CR-NET is a YOLO-based model proposed for license plate character detection and recognition, while Multi-Task and \gls*crnn are segmentation-free approaches designed respectively for the recognition of license plates and scene text. These approaches were chosen since promising results have been achieved through them in these applications. Finally, we propose the use of a data augmentation process to train the \gls*cnn models for counter recognition to explore different types of counter/digit deformations and their influence on the models’ performance.

The experimental evaluation demonstrates the effectiveness of the \gls*cnn models for \gls*amr. First, all counter regions were correctly located through Fast-YOLO in the proposed dataset and also in two public datasets found for this task [5, 13]. Second, the CR-NET model yielded promising recognition results, outperforming both the Multi-Task and \gls*crnn models in the UFPR-AMR dataset. Finally, an impressive recognition rate of % was achieved using Fast-YOLO and CR-NET in a set of images proposed for end-to-end evaluations of \gls*amr systems, called the Meter-Integration subset [5], against % and % achieved by the baselines [5, 2]. In addition, the CR-NET and Multi-Task models are able to achieve outstanding \gls*fps rates on a high-end GPU, making it possible to process and \gls*fps, respectively.

Considering the aforementioned discussion, the main contributions of our work are summarized as follows:

  • A two-stage \gls*amr approach with \glspl*cnn being employed for both counter detection and recognition. In the latter, three different types of \gls*cnn are evaluated;

  • A public dataset for \gls*amr with 2,000 fully and manually annotated images/meters (i.e., 10,000 digits) with a well-defined evaluation protocol, allowing a fair comparison between different approaches for this task;

  • The \gls*cnn-based approaches outperformed all baselines in public datasets and achieved impressive results in both accuracy and computational time in the proposed UFPR-AMR dataset.

The remainder of this paper is organized as follows. We briefly review related works in Section 2. The UFPR-AMR dataset is introduced in Section 3. The methodology is presented in Section 4. We report and discuss the results in Section 5. Conclusions and future work are given in Section 6.

2 Related Work

\gls*amr intersects with other \gls*ocr applications, such as license plate recognition [18] and robust reading [19], as it must reliably extract text information from images taken under different conditions. Although \gls*amr is not as widespread in the literature as these applications, a satisfactory number of works has been produced in recent years [20, 11, 6, 3, 7]. Here, we briefly survey these works, first describing the approaches employed for each \gls*amr stage. Next, we present papers that address two stages jointly or with the same method. Then, we discuss the deep learning approaches and datasets used so far. Finally, we conclude this section with final remarks.

Counter Detection: Many pioneering approaches exploited vertical and horizontal pixel projection histograms for counter detection [21, 1, 8]. Projection-based methods can be easily affected by the rotation of the counter. Refs. 22, 7, 20, 2, 6, 13 took advantage of prior knowledge such as the counter's position and/or its colors (e.g., green background and red decimal digits). A major drawback of these techniques is that they might not work on all meter types and the color information might not be stable when the illumination changes. Other works include the use of template matching [7] and the AdaBoost classifier [3]. In the latter, normalized gradient magnitude, \gls*hog and LUV color channels were adopted as low-level feature descriptors.

Digit Segmentation: Projection- and color-based approaches have also been widely employed for digit segmentation [22, 9, 23]. The use of morphological operations with \gls*cca was considered in Refs. 6, 20. However, it has the drawback of depending largely on the result of binarization, as it cannot segment digits correctly if they are connected or broken. In Ref. 8, a binary digit/non-digit \gls*svm was applied in a sliding-window fashion, while Gallo et al. [2] exploited \gls*mser. In Ref. 2, the \gls*mser algorithm failed to segment digits in images with problems such as blurring and perspective distortions.

Digit Recognition: Template matching [22, 23, 21] along with simple measures of similarity has been widely used for digit recognition. Nevertheless, it is known that if a digit differs from the template due to any font change, rotation or noise, this approach produces incorrect recognition [18]. Thus, many authors have employed an \gls*svm classifier for digit recognition. In Refs. 8, 5, simple features such as pixel intensity were used in training, while \gls*hog descriptors were adopted as features in Refs. 7, 2. Although some promising results have been attained, it should be noted that it is not trivial to find the appropriate hyperparameters of \gls*svm classifiers as well as the best features to be extracted. The open-source Tesseract \gls*ocr Engine [24] was applied in Refs. 25, 6, 5; however, satisfactory results were not obtained in any of them. Cerman et al. [6] achieved a remarkable improvement in digit recognition when using a \gls*cnn inspired by the LeNet architecture instead of Tesseract.

\gls*amr presents an unusual challenge in \gls*ocr: rotating digits. Typically, this is the major cause of errors, even when robust approaches are employed for digit recognition [26, 3]. In Ref. 23, this problem was addressed using a Hausdorff distance-based algorithm, achieving excellent recognition results in real time. Note that all images were extracted from a single meter and, as pointed out by the authors, a controlled environment was required since there was no preprocessing stage and no algorithm for angle correction.

Miscellaneous: Nodari & Gallo [25] exploited an ensemble of \gls*mlp networks to perform the counter detection and digit segmentation without preprocessing and postprocessing stages. Since low F-measure rates were achieved, extra techniques were added in Ref. 5, an extension of Ref. 25. In summary, a watershed algorithm was applied to improve counter detection and Fourier analysis was employed to avoid false positives in digit segmentation. Although better results were attained, only images were used to evaluate their system performance, which may not be representative enough. It should be noted that, to the best of our knowledge, this was the first work to make the images used in the experiments publicly available.

Gao et al. [3] designed a bidirectional \gls*lstm network for counter recognition. In their approach, a feature sequence is first generated by a network that combines convolutional and recurrent layers. Then, an attention decoder predicts, recurrently, one digit at each step according to the feature representation. A promising accuracy rate was reported, with most of the errors appearing in cases of half digits.

Gómez et al. [11] presented a segmentation-free \gls*amr system able to output readings directly, without explicit counter detection. A \gls*cnn architecture was trained in an end-to-end manner in which the initial convolutional layers extract visual features of the whole image and the fully connected layers predict the probabilities for each digit. Even though an impressive overall accuracy was achieved, their approach was evaluated only on a large private dataset that has almost training samples and consists mostly of images with the counter well centered and occupying a good portion of the image. Thus, as pointed out by the authors, small-meter images pose difficulties to their system.

Datasets: To the best of our knowledge, only Refs. 13, 5 made available the datasets used in their experiments. These datasets are composed of gas meter images with resolution of pixels (mostly) and the counter occupying a large portion of the image, which facilitates its detection. Additionally, both datasets are small ( and images, respectively) and the cameras used to capture them were not specified. It is important to note that in the dataset introduced in Ref. 5, images are divided into different subsets for the evaluation of each stage and only images are for the end-to-end evaluation of the \gls*amr system. Also, there is no split protocol in Ref. 13, which prevents a fair comparison between different approaches.

Deep Learning: Recently, deep learning approaches have won many machine learning competitions and challenges, even achieving superhuman visual results in some domains [27]. Such a fact motivated us to employ deep learning for \gls*amr, since we could find only three works [6, 3, 11] employing \glspl*cnn in this context and all of them made use of large private datasets, overlooking the public datasets. This suggests that these models are able to generalize only with many training samples (e.g., , images in the segmentation-free system proposed in Ref. 11). Moreover, (i) conventional image processing with handcrafted features was used in at least one stage in Refs. 6, 3, (ii) the images used in Ref. 3 are mostly sharp and very similar, which does not represent real-world conditions, and (iii) the poor digit segmentation accuracy obtained in Ref. 6, i.e. %, through a sequence of conventional image processing methods, discourages its use in real-world applications.

Final Remarks: The approaches developed for \gls*amr are still limited. In addition to the aforementioned points (i.e., private datasets and handcrafted features), many authors do not report the computational time of their approaches, which makes an accurate analysis of their speed/accuracy trade-off, as well as of their applicability, difficult. In this paper, \glspl*cnn are used for both counter detection and recognition. We evaluate the \glspl*cnn that achieved state-of-the-art results in other applications on both the proposed and public datasets, reporting the accuracy and the computational time to enable fair comparisons in future works.

3 The UFPR-AMR Dataset

Figure 1: Sample images of the UFPR-AMR dataset (some images were slightly resized for display purposes). Note the diversity of meter types and conditions, as well as the existence of several textual blocks similar to the counter region.

The proposed dataset contains 2,000 images taken from inside a warehouse of the \gls*copel, which directly serves more than million consuming units in the Brazilian state of Paraná [28]. Therefore, our dataset presents electric meters of different types and in different conditions. The diversity of the dataset is shown in Fig. 1. One can see that (i) the counter occupies a small portion of the image, which makes its location more difficult; and (ii) there are several textual blocks (e.g., meter specifications and serial number) similar to the counter region. The UFPR-AMR dataset is publicly available to the research community at https://web.inf.ufpr.br/vri/databases/ufpr-amr/.

Meter images commonly have artifacts (e.g., blur, reflections, low contrast, broken glass, dirt, among others) due to the meter's condition and the misuse of the camera by the human operator, which may impair the reading of electric energy consumption. In addition, in some types of counters the digits may be rotating, i.e., in between two positions (e.g., a digit transitioning to the next value). In such cases, we consider the lowest digit as the ground truth, since this is the protocol adopted at \gls*copel. The exception, to keep the rule reasonable, is between digits 9 and 0, where the reading should be labeled as 9.

The images were acquired with three different cameras and are available in the JPG format with resolution between ,, and ,, pixels. The cameras used were: LG G3 D855, Samsung Galaxy J7 Prime and iPhone 6s. As the cameras (cell phones) belong to different price ranges, the images presumably have different levels of quality. Additional information can be seen in Table 1.

Camera Images
LG G3
J7 Prime
iPhone 6s
Total ,
(a)
Info Counters Digits
Minimum Size
Maximum Size
Average Size
Aspect Ratio
(b)
Table 1: Additional information about the UFPR-AMR dataset: (a) how many images were captured with each camera; (b) dimensions of counters and digits (width × height, in pixels). Note the large variation in the sizes of counters and digits.

Every image has the following annotations available in a text file: the camera with which the image was taken, the counter position, the reading, as well as the position of each digit. All counters in the dataset (regardless of meter type) have five digits, and thus 10,000 digits were manually annotated.

Remark that a brand-new meter starts at 00000 and the most significant digit positions take longer to be incremented. Thus, it is natural that the lowest digit values (i.e., 0 and 1) have many more instances than the others. Nonetheless, the remaining digits have a fairly similar number of examples. Fig. 2 shows the distribution of the digits in the UFPR-AMR dataset.

Figure 2: Frequency distribution of digits in the UFPR-AMR dataset. It is worth noting that the first position (i.e., the most significant) consists almost exclusively of 0s and 1s. On the other hand, the frequency of digits in the other positions is very well balanced.

The dataset is split into three sets: training ( images), validation ( images) and test ( images). We adopt this protocol (i.e., with a larger test set) since it has already been adopted in other datasets [29, 30] and to provide more samples for analysis of statistical significance. It should be noted that this division was made randomly and the sets generated are explicitly available along with the UFPR-AMR dataset. Additionally, experiments carried out by us suggested that dividing the dataset multiple times and then averaging the results is not necessary, as the proposed division is representative.

4 Methodology

Meters have many textual blocks that can be confused with the counter’s reading. Moreover, the \gls*roi (i.e., the counter) usually occupies a small portion of the image and its position varies according to the meter type. Therefore, we propose to first locate the counter region and then perform its recognition in the detected patch. We tackle both stages by leveraging the high capability of state-of-the-art \glspl*cnn. It is remarkable that, to the best of our knowledge, this is only the second work in which both stages are addressed using \glspl*cnn [11] and the first with the experiments being performed on public datasets.

In the following sections, we describe the \gls*cnn models employed for counter detection and counter recognition. It is worth noting that all parameters (e.g., \glspl*cnn input size, number of epochs, among others) specified here are defined based on the validation set and presented in Section 5, where the experiments are reported.

4.1 Counter Detection

Recently, great progress has been made in object detection through models inspired by YOLO [14, 31, 32], a \gls*cnn-based object detection system that (i) reframes object detection as a single regression problem; (ii) achieved outstanding and state-of-the-art results in the PASCAL VOC and COCO detection tasks [33]. For that reason, we decided to fine-tune it for counter detection. However, as we want to detect only one class and the computational cost is one of our main concerns, we chose to use a smaller model, called Fast-YOLO [14], which uses fewer convolutional layers than YOLO and fewer filters in those layers. Despite being smaller, Fast-YOLO (architecture shown in Table 2) yielded outstanding results, i.e. detections with \gls*iou  with the ground truth, in preliminary experiments. The \gls*iou is often used to assess the quality of predictions in object detection tasks [34] and can be expressed by the formula

$$ \mathrm{IoU} = \frac{\mathrm{area}(B_p \cap B_{gt})}{\mathrm{area}(B_p \cup B_{gt})} \qquad (1) $$

where $B_p$ and $B_{gt}$ are the predicted and ground-truth bounding boxes, respectively. The closer the \gls*iou is to 1, the better the detection. For this reason, we believe that very deep models are not necessary to handle the detection of a single class of objects.
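For illustration, a minimal sketch of how Eq. (1) can be computed is given below; it is not the authors' code, and the (x_min, y_min, x_max, y_max) box format is an assumption.

```python
# Minimal sketch of the IoU from Eq. (1). Boxes are assumed to be
# (x_min, y_min, x_max, y_max) tuples; this is not the paper's code.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```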

Layer Filters Size Input Output
conv
max
conv
max
conv
max
conv
max
conv
max
conv
max
conv
conv
conv
detection
Table 2: Fast-YOLO network used to detect the counter region.

For counter detection, we use the weights pre-trained on ImageNet [35] and perform two minor changes in the Fast-YOLO model. First, we recalculate the anchor boxes for the UFPR-AMR dataset using the algorithm available in Ref. 36. Anchors are initial shapes that serve as references at multiple scales and aspect ratios. Instead of predicting arbitrary bounding boxes, YOLO only adjusts the size of the nearest anchor to the size of the object. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn [33]. Second, we reduce the number of filters in the last convolutional layer so that the network outputs a single class (the counter region) instead of the original number of classes. The number of filters in the last layer is given by

$$ \mathrm{filters} = (C + 5) \times A \qquad (2) $$

where $A$ is the number of anchor boxes used to predict bounding boxes and $C$ is the number of classes. Each bounding box has four coordinates $(x, y, w, h)$ and an objectness value [37] (i.e., how likely the bounding box is to contain an object), along with the probability of that object belonging to each of the $C$ classes; in our case, $C = 1$ (i.e., only the counter region) [33]. Remark that the choice of appropriate anchor boxes is very important, and thus our boxes are similar to counters in size and aspect ratio.
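As a quick illustration of Eq. (2), the snippet below computes the number of filters for a single-class detector; the anchor count used here is a placeholder, not necessarily the value adopted in the paper.

```python
# Sketch of Eq. (2): last-layer filters of a YOLOv2-style detector.
# Each anchor predicts 4 coordinates + 1 objectness score + C class scores.
def last_layer_filters(num_classes: int, num_anchors: int) -> int:
    return (num_classes + 5) * num_anchors

# Single class (the counter) with a placeholder of 5 anchors -> 30 filters.
print(last_layer_filters(num_classes=1, num_anchors=5))
```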

We employ Fast-YOLO's multi-scale training [33]. In short, every batches, the network randomly chooses a new image dimension from to pixels (default values). These dimensions were chosen considering that the Fast-YOLO model downsamples the image by a factor of 32. As pointed out in Ref. 33, this approach forces the network to learn to predict well across a variety of input dimensions. We then use images as input, since the best results (speed/accuracy trade-off on the validation set) were obtained with this dimension. It is remarkable that, although YOLO networks have a 1:1 input aspect ratio, previous works [30, 15] have attained excellent object detection results (over % recall) on images with different aspect ratios (e.g., ,,). All image resizing operations were performed using bilinear interpolation.
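The multi-scale rule can be sketched as below; the resolution range is a placeholder taken from the YOLOv2 defaults, not necessarily the exact values used here.

```python
import random

# Sketch of multi-scale training: every few batches a new square input
# resolution is drawn, always a multiple of the network's downsampling
# factor (32). The 320-608 range is a placeholder (YOLOv2 defaults).
def sample_input_size(low=320, high=608, stride=32):
    return random.randrange(low, high + stride, stride)
```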

In cases where more than one counter is detected, we consider only the detection with the highest confidence since each image/meter has only one counter. To avoid losing digits in cases where the counter is not very well detected, we add a margin (with size chosen based on the validation set) on the detected patch so that all digits are within it for the recognition stage. A negative recognition result is given in cases where no counter is found.
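A sketch of this post-processing is shown below; it is not the authors' implementation, and the detection format and the 10% margin are assumptions for illustration.

```python
# Sketch of the counter post-processing described above: keep the single
# highest-confidence detection and enlarge it by a relative margin so no
# digit is cut off. Detections are assumed to be (confidence, (x, y, w, h)).
def select_counter(detections, margin=0.10, image_size=None):
    if not detections:
        return None  # negative recognition result: no counter found
    conf, (x, y, w, h) = max(detections, key=lambda d: d[0])
    # Expand the box by `margin` of its size on every side.
    x, y = x - margin * w, y - margin * h
    w, h = w * (1 + 2 * margin), h * (1 + 2 * margin)
    if image_size is not None:  # clip to the image boundaries
        img_w, img_h = image_size
        x, y = max(0.0, x), max(0.0, y)
        w, h = min(w, img_w - x), min(h, img_h - y)
    return x, y, w, h
```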

4.2 Counter Recognition

We employ three \gls*cnn-based approaches for performing counter recognition: CR-NET [15], Multi-Task Learning [16] and \gls*crnn [17]. These models were chosen because promising results were obtained through them in other \gls*ocr applications, such as license plate recognition and scene text recognition. It is noteworthy that, unlike CR-NET, the last two models do not need the coordinates of each digit in the training phase. In other words, Multi-Task Learning and \gls*crnn approaches only need the counter’s reading. This is of paramount importance in cases where a large number of images is available for learning (e.g., millions or hundreds of thousands), since manually labeling each digit is very costly and prone to errors.

The remainder of this section is organized into four parts, one to describe the data augmentation method, which is essential to effectively train the deep models, and one part for each \gls*cnn approach employed for counter recognition.

4.2.1 Data Augmentation

It is well known that unbalanced data is undesirable for neural network classifiers since the learning of some patterns might be biased. For instance, some classifiers may learn to always classify the first digit as 0, but this is not always the case (see Fig. 2), although it is by far the most common. To address this issue, we employ the data augmentation technique proposed in Ref. 16. Using this technique, we are able to create a new set of images in which each digit class is equally represented in every position. This set consists of permutations of the original images. The order and frequency of the digits in the generated counters are chosen to distribute the digits uniformly along the positions. Note that the location of each digit (i.e., its bounding box) is required to apply this data augmentation technique.
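The balancing idea can be sketched as follows; this is a conceptual illustration of the technique from Ref. 16 rather than its actual implementation, and the rendering step is only outlined.

```python
import random

# Conceptual sketch of the balanced data augmentation: draw each of the
# five positions uniformly over the ten classes, so every digit class is
# (approximately) equally represented in every position. The resulting
# readings are then rendered by pasting annotated digit crops onto the
# original counter (rendering step not shown here).
def generate_balanced_readings(num_images, num_positions=5, num_classes=10):
    return [[random.randrange(num_classes) for _ in range(num_positions)]
            for _ in range(num_images)]
```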

Some images artificially generated by applying the method to the UFPR-AMR dataset are shown in Fig. 3. We also perform random variations of brightness, rotation and crop coordinates to further increase the robustness of our augmented images, creating new training examples for the \glspl*cnn. As can be seen, the data augmentation approach works on different types of meters.

Figure 3: Data augmentation examples, where the images in the upper-left corner of (a) and (b) are the originals, and the others were generated automatically. In (a) and (b), counters of different types and aspect ratios are shown.

The adjustment of parameters is of paramount importance for the effectiveness of this technique since the presence of very large variations in brightness, rotation or cropping, for instance, might impair the recognition through the generation of images that do not match real scenarios. Therefore, the parameter ranges were empirically determined based on experiments performed on the validation set, i.e., brightness variation of the pixels , rotation angles between  and  and cropping from % to % of the counter size. Once these ranges were established, new counter images were generated using random values within those ranges for each parameter.
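A minimal sketch of these random perturbations is given below using Pillow; the parameter ranges are placeholders, as the exact values were tuned on the validation set.

```python
import random
from PIL import Image, ImageEnhance

# Sketch of the random brightness/rotation/crop jitter applied to each
# generated counter image. All ranges below are placeholders.
def jitter_counter(img: Image.Image) -> Image.Image:
    # Random brightness variation.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
    # Small random rotation (in degrees).
    img = img.rotate(random.uniform(-5.0, 5.0), resample=Image.BILINEAR)
    # Random crop removing a small border of the counter.
    w, h = img.size
    dx, dy = int(0.05 * w * random.random()), int(0.05 * h * random.random())
    return img.crop((dx, dy, w - dx, h - dy))
```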

4.2.2 CR-NET

CR-NET is a YOLO-based model proposed for license plate character detection and recognition [15]. This model consists of the first eleven layers of YOLO and four other convolutional layers added to improve non-linearity. In Ref. 15, CR-NET (with an input size of pixels) was capable of detecting and recognizing license plate characters at \gls*fps. Laroca et al. [30] also achieved great results applying CR-NET for this purpose.

The CR-NET architecture is shown in Table 3. As in the counter detection stage, we recalculate the anchors for our data and make adjustments in the number of filters in the last layer. Furthermore, we adapt the input image size taking into account the aspect ratio of the counters, which have a different aspect ratio when compared to license plates in Ref. 15. Then, we use as input an image with resolution of pixels since the results obtained when using other sizes (e.g., and ) were worse or similar, but with a higher computational cost.

Layer Filters Size Input Output
conv
max
conv
max
conv
conv
conv
max
conv
conv
conv
conv
conv
conv
conv
detection
Table 3: CR-NET with some modifications for counter recognition: input size of pixels and filters in the last layer.

We consider only the five digits detected/recognized with the highest confidence, since more than five digits are commonly predicted. However, we noticed that the same digit might be detected more than once by the network. Therefore, we first apply a non-maximal suppression algorithm to eliminate redundant detections. Although highly unlikely (i.e., %), it is also possible that fewer than five digits are detected by CR-NET, as shown in Fig. 4. In such cases, we reject the counter's recognition.

Figure 4: A counter where fewer than five digits were detected/recognized by CR-NET. We could employ leading zeros; however, this could result in a large error in the meter reading.
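The digit post-processing can be sketched as below; this is not the authors' code, and the detection format and the 0.5 overlap threshold are assumptions for illustration.

```python
# Sketch of the digit post-processing: suppress overlapping detections,
# keep at most the five most confident digits, and reject the reading if
# fewer than five remain. Detections are assumed to be
# (confidence, (x, y, w, h), digit_class) tuples.
def box_iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def read_counter(detections, max_digits=5, overlap_thr=0.5):
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    kept = []
    for det in detections:  # greedy non-maximal suppression
        if all(box_iou(det[1], k[1]) < overlap_thr for k in kept):
            kept.append(det)
    kept = kept[:max_digits]
    if len(kept) < max_digits:
        return None  # reject the counter's recognition
    kept.sort(key=lambda d: d[1][0])  # order digits left to right
    return ''.join(str(d[2]) for d in kept)
```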

4.2.3 Multi-Task Learning

Multi-Task Learning is another approach for character string recognition developed for license plates [16, 38]. This method skips the character segmentation stage and directly recognizes the character string of an image (here, the cropped counter). Since there might be multiple characters, each character is modeled as a task on the network.

For the UFPR-AMR dataset, we use a similar architecture, adding the constraint that each character must be a digit, which reduces the output space from 36 classes (their work considers numbers and letters) to 10 for each digit. The architecture holistically segments and recognizes all five characters due to its multi-task output.

Table 4 shows the architecture of the model, which is very compact with only convolutional layers followed by a fully connected shared layer and two fully connected layers for each digit, indexed from to . Each output represents the classification of one of the digits. Thus, no explicit segmentation is performed in this approach.

Layer Filters Size Input Output
conv
max
conv
conv
max
conv
max
flatten

Layer
Neurons Input Output

dense
dense[]
dense[]
Table 4: Multi-Task layers and hyperparameters.
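A conceptual sketch of this multi-task output in Keras is shown below; the layer sizes are placeholders and do not reproduce the exact architecture of Table 4.

```python
from tensorflow.keras import layers, models

# Sketch of a multi-task counter recognizer: a shared convolutional trunk
# followed by one 10-way softmax head per digit position. All layer sizes
# are placeholders, not the values from Table 4.
def build_multitask_model(input_shape=(64, 128, 3), num_digits=5):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    shared = layers.Dense(256, activation='relu')(x)
    # One independent classification head (task) per digit position.
    outputs = [layers.Dense(10, activation='softmax', name=f'digit_{i}')(shared)
               for i in range(num_digits)]
    return models.Model(inp, outputs)
```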

4.2.4 Convolutional Recurrent Neural Network

\gls*crnn [17] is a model designed for scene text recognition that consists of convolutional layers followed by recurrent layers, in addition to a custom transcription layer that converts the per-frame predictions into a label sequence. Given the counter patch containing the digits, the convolutional layers act as a feature extractor, whose output is transformed into a sequence of feature vectors and fed into an \gls*lstm [39] recurrent layer. This layer handles the input as a sequence labeling problem, predicting a label distribution for each feature vector of the feature map.

The \gls*ctc [40] cost function is adopted for sequence decoding. The \gls*ctc output has a softmax layer with one more label than the ten digit classes. The activation of each feature vector corresponds to a label that can be one of the ten digits or a 'blank' (i.e., the absence of a digit). Thus, this model is able to predict a variable number of digits, differently from Multi-Task, where five digits are always predicted. As the classification is done over the whole feature map from the convolutional layers, digit segmentation is not required.
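A minimal sketch of greedy CTC decoding is given below; it assumes an 11-way per-step softmax output with the blank as the last class, which is an illustrative choice rather than the paper's exact configuration.

```python
import numpy as np

# Sketch of greedy CTC decoding: take the most likely label per time step,
# collapse consecutive repeats, and drop the 'blank' symbol. `probs` is
# assumed to be a (time_steps, 11) array with the blank as class 10.
def ctc_greedy_decode(probs, blank=10):
    best_path = np.argmax(probs, axis=1)
    reading, prev = [], None
    for label in best_path:
        if label != prev and label != blank:
            reading.append(int(label))
        prev = label
    return reading
```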

We evaluate different network architectures with variations in the input size and in the number of filters and convolutional layers. As shown in Table 5, the input size is pixels and there is only one \gls*lstm layer (instead of two, as in Ref. 17), since the best results (considering the speed/accuracy trade-off) on the validation set were obtained with these parameters.

Layer Filters Size Input Output
conv
max
conv
max
conv
conv
max
conv
batch
conv
batch
max
conv
Layer Input Hidden Layer Output
13 \gls*lstm
Table 5: \gls*crnn layers and hyperparameters.

5 Experimental Results

In this section, we report the experiments carried out to verify the effectiveness of the \gls*cnn-based methods in the UFPR-AMR dataset and also in public datasets. All experiments were performed on a computer with an AMD Ryzen Threadripper X GHz CPU, GB of RAM and an NVIDIA Titan Xp GPU (, CUDA cores and GB of RAM).

We first assess counter detection since the counter regions used for recognition are from the detection results, rather than cropped directly from the ground truth. This is done to provide a realistic evaluation of the entire \gls*amr system, where well-performed counter detection is essential to achieve outstanding recognition results. Next, each approach for counter recognition is evaluated and a comparison between them is presented.

Counter detection is evaluated in the UFPR-AMR dataset and also in two public datasets [5, 13], while counter recognition is assessed only in the UFPR-AMR dataset. This is because (i) two different sets of images were used to evaluate digit segmentation and recognition in Ref. 5, and thus it is not possible to use these sets in the counter recognition approaches (where these stages are performed jointly); (ii) Ref. 13 performed digit recognition on a subset of their dataset which was not made publicly available.

Finally, we evaluate the entire \gls*amr pipeline on a subset of images taken from the public dataset introduced by Vanetti et al. [5]. This subset, called Meter-Integration, was used to perform an overall evaluation of the \gls*amr methods proposed in Refs. 5, 2. It should be noted that other subsets of the dataset, containing different images, were used to evaluate each \gls*amr stage independently and the training images (in the overall evaluation) come from these subsets [5]. Aiming at a fair comparison, we employ the same protocol.

5.1 Counter Detection

For evaluating counter detection, we employ the bounding box evaluation defined in the PASCAL VOC Challenge [34], in which the predicted bounding box is considered correct if its \gls*iou with the ground truth is greater than 50% (\gls*iou $> 0.5$). This metric was also used in previous works [25, 5], and is interesting since it penalizes both over- and under-estimated objects.

According to the detection evaluation described above, the network correctly detected % of the counters with an average \gls*iou of , failing to locate the counter in just two images (/). However, in these two cases, it is still possible to recognize the digits from the detected counters, since they were actually detected (with \gls*iou ) and all digits are within the \gls*roi after adding a margin (as explained in Section 4.1). In the validation set, a margin of % (of the bounding box size) is required so that all digits are within the \gls*roi. Thus, we applied a % margin in the test set as well. Fig. 5 shows both cases where the counters were detected with \gls*iou   before and after adding this margin. Note that, in this way, all counter digits are within the located region using Fast-YOLO.

Figure 5: Bounding boxes predicted by the Fast-YOLO model before (a) and after (b) adding the margin (% of the bounding box size).

Some detection results achieved by the Fast-YOLO model are shown in Fig. 6. As can be seen, well-located predictions were attained on counters of different types and under different conditions.

Figure 6: Samples of counter detection obtained with the Fast-YOLO model in the UFPR-AMR dataset.

In terms of computational speed, the Fast-YOLO model takes about ms per image ( \gls*fps). The model was trained using the Darknet framework [41] and the following parameters were used for training the network: iterations (max batches) and learning rate = [---] with steps at and iterations.

5.1.1 Counter Detection on Public Datasets

To demonstrate the robustness of Fast-YOLO for counter detection, we employ it on the public datasets found in the literature [5, 13] and compare the results with those reported in previous works. Vanetti et al. [5] employed a subset of images of their dataset specifically for the evaluation of counter detection, split into training and test images. In Ref. 13, a larger dataset (with images) was introduced, but no split protocol was defined.

As the dataset introduced in Ref. 5 has a split protocol, we employed the same division in our experiments. We randomly removed images from the training set and used them as validation. For the experiments performed on the dataset introduced in Ref. 13, we perform -fold cross-validation with the images assigned to folds randomly in order to achieve a fair comparison. Thus, in each run, we used images () for training and images () each for validation and testing, i.e., a split protocol.

As mentioned in the related work section, both datasets are composed of gas meter images. Such a fact is relevant since gas meters usually have red decimal digits that should be discarded in the reading process [5, 2, 13, 11]. Therefore, we manually labeled, in each image, a bounding box containing only the significant digits for training Fast-YOLO. These annotations are also publicly available to the research community at https://web.inf.ufpr.br/vri/databases/ufpr-amr/.

Approach F-measure
Dataset [13] Dataset [5]
Nodari & Gallo [25] %
Gonçalves [13] % %
Vanetti et al. [5] %
Fast-YOLO 100.00% 100.00%
\hdashline
Fast-YOLO (\gls*iou ) % %
Table 6: F-measure values obtained by Fast-YOLO and previous works in the public datasets found in the literature.

The Fast-YOLO model correctly detected 100% of the counters in both datasets, outperforming the results obtained in previous works, as shown in Table 6. The outstanding \gls*iou values attained are also noteworthy: on average % in the dataset proposed in Ref. 5 and % in the dataset introduced in Ref. 13. We believe that these excellent results are due to the fact that, in these datasets, the counter occupies a large portion of the image and the meters/counters are quite similar when compared with the UFPR-AMR dataset. Fig. 7 shows a counter from each dataset detected using Fast-YOLO.

Figure 7: Examples of counter detection obtained with the Fast-YOLO model. Note that the counter region in the images of the Dataset [13] (left) and Dataset [5] (right) is considerably larger than in the UFPR-AMR dataset.

Additionally, we reported the result with a higher detection threshold (i.e., \gls*iou  ). It is remarkable that more than % of the counters were located with an \gls*iou (with the ground truth) greater than in both datasets. We noticed that the detections with a lower \gls*iou occurred mainly in cases where the meter/counter was inclined or tilted, as illustrated in Fig. 8.

Figure 8: Samples of counters detected with a lower \gls*iou with the ground truth. (a) and (b) show images of the datasets proposed in Refs. 13 and 5, respectively. The predicted position and ground truth are outlined in red and green, respectively. Observe that all digits would be within the \gls*roi with the addition of a small margin.

5.2 Counter Recognition

For this experiment, we report the mean of runs for both digit and counter recognition accuracy. While the former is the number of correctly recognized digits divided by the number of digits in the test set, the latter is defined as the number of correctly recognized counters divided by the test set size, since each image has only a single meter/counter. Additionally, all \gls*cnn models were trained with and without data augmentation, so that we can analyze how data augmentation (described in Section 4.2.1) affects the performance of each model.
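For clarity, the two metrics can be sketched as follows; readings are assumed to be strings of digits, which is an illustrative choice.

```python
# Sketch of the two metrics reported in this section: digit accuracy
# (correct digits over all annotated digits) and counter accuracy
# (readings that match the ground truth exactly). Readings are assumed
# to be strings such as '04523'.
def amr_accuracy(predictions, ground_truths):
    correct_digits = total_digits = correct_counters = 0
    for pred, gt in zip(predictions, ground_truths):
        pred = pred or ''  # a rejected reading counts as all wrong
        total_digits += len(gt)
        correct_digits += sum(p == g for p, g in zip(pred, gt))
        correct_counters += int(pred == gt)
    return (correct_digits / total_digits,
            correct_counters / len(ground_truths))
```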

For a fair comparison, we (i) generated , images and used them for training all \glspl*cnn (more images were not generated due to hardware limitations); (ii) disabled Darknet's (hence CR-NET's) built-in data augmentation, which creates images with randomly changed colors (hue, saturation, exposure) that are randomly cropped and resized; and (iii) evaluated different margin values (less than the % applied previously) in the predictions obtained by Fast-YOLO, since each approach might work better with different margin values.

The recognition rates achieved by all models are shown in Table 7. We performed statistical paired t-tests at a significance level , which showed that there is a significant difference in the results obtained with different models. As expected, the results are greatly improved when taking advantage of data augmentation. The best results were achieved with the CR-NET model, which correctly recognized % of the counters with data augmentation against % and % through \gls*crnn and Multi-Task Learning, respectively. This suggests that segmentation-free approaches require a lot of training data to achieve promising recognition rates, as in Ref. 11 where , images were used for training.

Approach Accuracy (%)
Digits Counters
Multi-Task (original training set)
\gls*crnn (original training set)
CR-NET (original training set)
Multi-Task (with data augmentation)
\gls*crnn (with data augmentation)
CR-NET (with data augmentation)
Table 7: Recognition rates obtained in the UFPR-AMR dataset using Fast-YOLO for counter detection and each of the \gls*cnn models for counter recognition.

It is important to highlight that it was not possible to recognize any counter when training the Multi-Task model without data augmentation. We performed several experiments reducing the size of the Multi-Task network to verify whether a smaller network could learn a better discriminant function. However, better results were not achieved. This is because the dataset is biased and so is the recognition. Even though the first digit has the strongest bias (given the large amount of 0s and 1s in that position), the other digits still have a considerable bias due to the low number of training samples. For example, the Multi-Task network may learn to predict the last digit/task as '' on every occasion it sees a particular combination of the other digits that is present in the training set. In other words, the network may learn correlations between the outputs that do not exist in practice (in other applications this may be beneficial, but in this case it is not). Such a fact explains why the segmentation-free approaches had a higher performance gain with data augmentation, which balanced the training set and eliminated the undesired correlation between the outputs.

To assess the speed/accuracy trade-off of the three \gls*cnn models, we list in Table 8 the time required for each approach to perform the recognition stage. We report the \gls*fps rate achieved by each approach considering only the recognition stage and also considering the detection stage (in parenthesis), which takes about ms per image using Fast-YOLO. The reported time is the average time spent processing all images, assuming that the network weights are already loaded. For completeness, for each network, we also list the number of parameters as well as the number of \gls*bflop required for a single forward pass over a single image.

Approach \acrshort*bflop Parameters Time (ms) \gls*fps Accuracy (%)
Multi-Task M ()
\gls*crnn M ()
CR-NET M ()
Table 8: Results obtained in the UFPR-AMR dataset and the computational time required for each approach to perform counter recognition. In parentheses is shown the \gls*fps rate when considering the detection stage.

The CR-NET and Multi-Task approaches achieved impressive \gls*fps rates. Looking at Table 8, the difference between using each one of them is clear. The CR-NET model achieved an accuracy of % at \gls*fps, while the Multi-Task model was capable of processing \gls*fps with a recognition rate of %. When considering the time spent in the detection stage, it is possible to process and \gls*fps using the CR-NET and Multi-Task models, respectively.

It is worth noting that: (i) even though the Multi-Task network has many more parameters than CR-NET and \gls*crnn, it is still the fastest one; (ii) the \gls*crnn model requires fewer floating-point operations for a single forward pass than the CR-NET and Multi-Task networks, yet it is still the model that takes the longest to process a single image. In this sense, we claim that there are several factors (in addition to those mentioned above) that affect the time it takes for a network to process a frame, e.g., the input size, its specific characteristics and the framework in which it is implemented. For example, two networks may require exactly the same number of floating-point operations (or have the same number of parameters) and still one can be much faster than the other. Although much effort was made to ensure fairness in our experiments, the comparison might not be entirely fair since we used different frameworks to implement the networks and there are probably differences in implementation and optimization between them. The CR-NET model was trained using the Darknet framework [41], whereas the \gls*crnn and Multi-Task models were trained using PyTorch [42] and Keras [43], respectively.

Fig. 9 illustrates some of the recognition results obtained in the UFPR-AMR dataset when employing the CR-NET model (i.e., the one with the best accuracy). It is noticeable that the model is able to generalize well and correctly recognize counters from meters of different types and in different conditions. Regarding the errors, we noticed that they occurred mainly due to rotating digits and artifacts in the counter region, such as reflections and dirt.

Figure 9: Results obtained by the CR-NET model in the UFPR-AMR dataset. The first three rows show examples of successfully recognized counters, while the last two rows show samples of incorrectly recognized counters. Some images were slightly resized for display purposes.

5.3 Overall Evaluation on the Meter-Integration Subset

The Meter-Integration subset [5] was used to evaluate the \gls*amr methods proposed in Refs. 5, 2. Thus, we decided to perform experiments on this subset and compare the results with those obtained in both works. As previously mentioned, the training images come from other subsets of the dataset proposed in Ref. 5. Remark that there are only and training images for counter detection and recognition, respectively.

We employ only the CR-NET model in this experiment since it outperformed both the Multi-Task and \gls*crnn models in the UFPR-AMR dataset. The mean accuracy of runs is reported for both digit and counter recognition. As the counters in the training set have from to digits rather than a fixed number of digits, we adopted a confidence threshold (reported for the sake of reproducibility) to deal with a variable number of digits, instead of always considering five digits per counter. This threshold was chosen based on validation images (i.e., %) randomly taken from the training set. Table 9 shows the results obtained in previous works and using the Fast-YOLO and CR-NET networks for counter detection and recognition, respectively.

Approach Accuracy (%)
Digits Counters
Gallo et al. [2] (original training set)
Vanetti et al. [5] (original training set)
Fast-YOLO & CR-NET (original training set)
Fast-YOLO & CR-NET (data augmentation)
Table 9: Results obtained in the Meter-Integration subset by previous works and using Fast-YOLO & CR-NET.
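The confidence-threshold rule described above can be sketched as below; the detection format and the threshold value are assumptions for illustration.

```python
# Sketch of reading counters with a variable number of digits: instead of
# always keeping the five most confident detections, keep every digit whose
# confidence exceeds a threshold tuned on validation data. Detections are
# assumed to be (confidence, (x, y, w, h), digit_class); 0.5 is a placeholder.
def read_variable_counter(detections, conf_thr=0.5):
    kept = [d for d in detections if d[0] >= conf_thr]
    kept.sort(key=lambda d: d[1][0])  # order digits left to right
    return ''.join(str(d[2]) for d in kept)
```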

As expected, the recognition rate accomplished by our deep learning approach was considerably better than those obtained in previous works (% ), which employed methods based on conventional image processing with handcrafted features. The ability of both the Fast-YOLO and CR-NET models to generalize with very few training images in each stage (i.e., for counter detection and for counter recognition) is noteworthy.

The results were improved when using data augmentation, as in the experiments carried out on the UFPR-AMR dataset. The accuracy achieved was %, significantly outperforming the baselines. It is worth noting that, on average, only - counters were incorrectly classified and generally the error occurred in the rightmost digit of the counter. Two samples of errors are shown in Fig. 10: the last digit was incorrectly labeled as in one of the cases, probably due to some noise in the image, while in the other case the last digit was detected/recognized with confidence lower than , apparently due to the m text touching the digit (there were no similar examples in the training set).

Figure 10: Incorrect readings obtained with the Fast-YOLO & CR-NET approach, where the last digit was incorrectly classified (left), and the last digit was detected/recognized with a confidence value below the threshold (right).

6 Conclusions

In this paper, we presented a two-stage \gls*amr approach with \glspl*cnn being employed for both counter detection and recognition. The Fast-YOLO [14] model was employed for counter detection, while three CNN-based approaches (CR-NET [15], Multi-Task Learning [16] and \gls*crnn [17]) were employed for counter recognition. In addition, we proposed the use of data augmentation for training the \gls*cnn models for counter recognition, in order to construct a balanced training set with many more examples.

We also introduced a public dataset, called the UFPR-AMR dataset, that includes 2,000 images (with 10,000 manually labeled digits) from electric meters of different types and in different conditions. It is three times larger than the largest dataset found in the literature for this task and contains a well-defined evaluation protocol, allowing a fair comparison of different methods. Furthermore, we labeled the region containing the significant digits in two public datasets [5, 13], and these annotations are also publicly available to the research community.

The counter detection stage was successfully tackled using the Fast-YOLO model, which was able to detect the region containing the significant digits in all images of every dataset evaluated in this work. For counter recognition, the CR-NET model yielded the best recognition results in the UFPR-AMR dataset (i.e., %), outperforming both the Multi-Task and \gls*crnn models, which achieved % and %, respectively. These results were attained by taking advantage of data augmentation, which was essential to accomplish promising results. In a public dataset [5], outstanding results (i.e., an overall accuracy of %) were achieved using less than 200 images for training the Fast-YOLO and CR-NET models, significantly outperforming both baselines.

The CR-NET and Multi-Task models achieved impressive \gls*fps rates on a high-end graphic card. When considering the time spent in the detection stage, it is possible to process and \gls*fps using the CR-NET and Multi-Task models, respectively. Therefore, these approaches can be employed (taking a few seconds) in low-end setups or even in some mobile phones.

As future work, we intend to create an extension of the UFPR-AMR dataset with more than , images of meters of different types and under different conditions acquired by the company’s employees to perform a more realistic analysis of deep learning techniques in the \gls*amr context. Additionally, we plan to explore the meter’s model in the \gls*amr pipeline and investigate in depth the cases where the counter has rotating digits since this is one of the main causes of errors in \gls*amr.

Acknowledgements

This work was supported by grants from the National Council for Scientific and Technological Development (CNPq) (# 428333/2016-8, # 311053/2016-5 and # 313423/2017-2), the Minas Gerais Research Foundation (FAPEMIG) (APQ-00567-14 and PPM-00540-17) and the Coordination for the Improvement of Higher Education Personnel (CAPES) (Deep-Eyes Project). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. We also thank the \acrfull*copel for allowing one of the authors (Victor Barroso) to collect the images for the UFPR-AMR dataset.

References

  • [1] D. Shu, S. Ma, and C. Jing, “Study of the automatic reading of watt meter based on image processing technology,” in IEEE Conference on Industrial Electronics and Applications, 2214–2217 (2007).
  • [2] I. Gallo, A. Zamberletti, and L. Noce, “Robust angle invariant GAS meter reading,” in International Conference on Digital Image Computing: Techniques and Applications, 1–7 (2015).
  • [3] Y. Gao, C. Zhao, J. Wang, et al., “Automatic watermeter digit recognition on mobile devices,” in Internet Multimedia Computing and Service, 87–95, Springer Singapore (2018).
  • [4] Y. Kabalci, “A survey on smart metering and smart grid communication,” Renewable and Sustainable Energy Reviews 57, 302 – 318 (2016).
  • [5] M. Vanetti, I. Gallo, and A. Nodari, “Gas meter reading from real world images using a multi-net system,” Pattern Recognition Letters 34(5), 519–526 (2013).
  • [6] M. Cerman, G. Shalunts, and D. Albertini, “A mobile recognition system for analog energy meter scanning,” in Advances in Visual Computing, 247–256, Springer International Publisher (2016).
  • [7] D. Quintanilha et al., “Automatic consumption reading on electromechanical meters using HoG and SVM,” in Latin American Conference on Networked and Electronic Media, 11–15 (2017).
  • [8] V. C. P. Edward, “Support vector machine based automatic electric meter reading system,” in IEEE International Conference on Computational Intelligence and Computing Research, 1–5 (2013).
  • [9] Y. Zhang, S. Yang, X. Su, et al., “Automatic reading of domestic electric meter: an intelligent device based on image processing and ZigBee/Ethernet communication,” Journal of Real-Time Image Processing 12, 133–143 (2016).
  • [10] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436 (2015).
  • [11] L. Gómez, M. Rusiñol, and D. Karatzas, “Cutting sayre’s knot: Reading scene text without segmentation. application to utility meters,” in 13th IAPR International Workshop on Document Analysis Systems (DAS), 97–102 (2018).
  • [12] J. Salamon and J. P. Bello, “Deep convolutional neural networks and data augmentation for environmental sound classification,” IEEE Signal Processing Letters 24, 279–283 (2017).
  • [13] J. C. Gonçalves, “Reconhecimento de dígitos em imagens de medidores de consumo de gás natural utilizando técnicas de visão computacional,” Master’s thesis, Universidade Tecnológica Federal do Paraná - UTFPR (2016).
  • [14] J. Redmon, S. Divvala, R. Girshick, et al., “You only look once: Unified, real-time object detection,” in IEEE Conference on Computer Vision and Pattern Recognition, 779–788 (2016).
  • [15] S. Montazzolli and C. R. Jung, “Real-time brazilian license plate detection and recognition using deep convolutional neural networks,” in 30th Conference on Graphics, Patterns and Images (SIBGRAPI), 55–62 (2017).
  • [16] G. R. Gonçalves, M. A. Diniz, R. Laroca, et al., “Real-time automatic license plate recognition through deep multi-task networks,” in 31th Conference on Graphics, Patterns and Images (SIBGRAPI), 110–117 (2018).
  • [17] B. Shi, X. Bai, and C. Yao, “An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 2298–2304 (2017).
  • [18] S. Du, M. Ibrahim, M. Shehata, et al., “Automatic license plate recognition (ALPR): A state-of-the-art review,” Trans. on Circuits and Systems for Video Technology 23, 311–325 (2013).
  • [19] D. Karatzas et al., “ICDAR 2015 competition on robust reading,” in International Conference on Document Analysis and Recognition (ICDAR), 1156–1160 (2015).
  • [20] A. Anis, M. Khaliluzzaman, M. Yakub, et al., “Digital electric meter reading recognition based on horizontal and vertical binary pattern,” in International Conference on Electrical Information and Communication Technology, 1–6 (2017).
  • [21] S. Zhao, B. Li, J. Yuan, et al., “Research on remote meter automatic reading based on computer vision,” in IEEE PES Transmission and Distribution Conference and Exposition, 1–4 (2005).
  • [22] L. A. Elrefaei, A. Bajaber, S. Natheir, et al., “Automatic electricity meter reading based on image processing,” in IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), 1–5 (2015).
  • [23] M. Rodriguez, G. Berdugo, D. Jabba, et al., “HD MR: a new algorithm for number recognition in electrical meters,” Turkish Journal of Electrical Engineering & Computer Sciences 22, 87–96 (2014).
  • [24] R. Smith, “An overview of the Tesseract OCR Engine,” in International Conference on Document Analysis and Recognition, 2, 629–633 (2007).
  • [25] A. Nodari and I. Gallo, “A multi-neural network approach to image detection and segmentation of gas meter counter,” in IAPR Conference on Machine Vision Applications, 239–242 (2011).
  • [26] L. Zhao, Y. Zhang, Q. Bai, et al., “Design and research of digital meter identifier based on image and wireless communication,” in International Conference on Industrial Mechatronics and Automation, 101–104 (2009).
  • [27] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks 61, 85–117 (2015).
  • [28] Copel, “Energy Company Of Paraná.” http://www.copel.com/hpcopel/english/. Accessed: 2018-04-24.
  • [29] G. R. Gonçalves, S. P. G. da Silva, D. Menotti, et al., “Benchmark for license plate character segmentation,” Journal of Electronic Imaging 25(5) (2016).
  • [30] R. Laroca, E. Severo, L. A. Zanlorensi, et al., “A robust real-time automatic license plate recognition based on the YOLO detector,” in 2018 International Joint Conference on Neural Networks (IJCNN), 1–10 (2018).
  • [31] B. Wu, F. Iandola, P. H. Jin, et al., “SqueezeDet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 446–454 (2017).
  • [32] S. Tripathi, G. Dane, B. Kang, et al., “LCDet: Low-complexity fully-convolutional neural networks for object detection in embedded systems,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 411–420 (2017).
  • [33] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6517–6525 (2017).
  • [34] M. Everingham, L. Van Gool, C. K. I. Williams, et al., “The pascal visual object classes (VOC) challenge,” International Journal of Computer Vision 88, 303–338 (2010).
  • [35] J. Deng, W. Dong, R. Socher, et al., “ImageNet: A large-scale hierarchical image database,” in Conference on Computer Vision and Pattern Recognition, 248–255 (2009).
  • [36] AlexeyAB, “YOLOv2 and YOLOv3: how to improve object detection.”
  • [37] B. Alexe, T. Deselaers, and V. Ferrari, “Measuring the objectness of image windows,” IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 2189–2202 (2012).
  • [38] J. Špaňhel, J. Sochor, R. Juránek, et al., “Holistic recognition of low quality license plates by CNN using track annotated data,” in IEEE International Conference on Advanced Video and Signal Based Surveillance, 1–6 (2017).
  • [39] F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to forget: continual prediction with LSTM,” in International Conference on Artificial Neural Networks, 2, 850–855 (1999).
  • [40] A. Graves, S. Fernández, F. Gomez, et al., “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in International Conference on Machine Learning (ICML), 369–376 (2006).
  • [41] J. Redmon, “Darknet: Open source neural networks in C.” http://pjreddie.com/darknet/ (2013–2019).
  • [42] A. Paszke et al., “Automatic differentiation in PyTorch,” (2017).
  • [43] F. Chollet et al., “Keras.” https://keras.io (2015).

Rayson Laroca received his bachelor’s degree in software engineering from the State University of Ponta Grossa, Brazil. Currently, he is a master’s student at the Federal University of Paraná, Brazil. His research interests include machine learning, pattern recognition and computer vision.

Victor Barroso is an undergraduate student in computer science at the Federal University of Paraná, Brazil. His research interests include machine learning, computer vision, pattern recognition, and their applications.

Matheus A. Diniz is a master’s student at the Federal University of Minas Gerais, Brazil, where he also received his bachelor’s degree in computer science. His research focuses on deep learning techniques applied to computer vision and surveillance.

Gabriel R. Gonçalves is a PhD student at the Federal University of Minas Gerais, Brazil. He received his bachelor’s degree in computer science from the Federal University of Ouro Preto, Brazil, and his master’s degree in computer science from the Federal University of Minas Gerais, Brazil. His research interests include machine learning, computer vision and pattern recognition, especially applied to smart surveillance tasks.

William Robson Schwartz is an associate professor in the Department of Computer Science at the Federal University of Minas Gerais, Brazil. He received his PhD from the University of Maryland, College Park, Maryland, USA. His research interests include computer vision, smart surveillance, forensics, and biometrics, areas in which he has authored more than 100 scientific papers and coordinated projects sponsored by several Brazilian funding agencies. He is also the head of the Smart Surveillance Interest Group.

David Menotti is an associate professor at the Federal University of Paraná, Brazil. He received his BS and MS degrees in computer engineering and applied informatics from the Pontifical Catholic University of Paraná, Brazil, in 2001 and 2003, respectively, and his PhD degree in computer science from the Federal University of Minas Gerais, Brazil, in 2008. His research interests include machine learning, image processing, pattern recognition, computer vision, and information retrieval.
