Rethinking Object Detection in Retail Stores
The conventional standard for object detection uses a bounding box to represent each individual object instance. However, this is not practical in industry-relevant applications in the context of warehouses, due to severe occlusions among groups of instances of the same categories. In this paper, we propose a new task, i.e., simultaneous object localization and counting, abbreviated as Locount, which requires algorithms to localize groups of objects of interest together with the number of instances. However, no existing dataset or benchmark is designed for such a task. To this end, we collect a large-scale object localization and counting dataset with rich annotations in retail stores, which consists of images with more than million object instances in categories. Together with this dataset, we provide a new evaluation protocol and divide the dataset into training and testing subsets to fairly evaluate the performance of algorithms for Locount, developing a new benchmark for the Locount task. Moreover, we present a cascaded localization and counting network as a strong baseline, which gradually classifies and regresses the bounding boxes of objects along with the predicted numbers of instances enclosed in the bounding boxes, trained in an end-to-end manner. Extensive experiments are conducted on the proposed dataset to demonstrate its significance, and analyses and discussions of failure cases are provided to indicate future directions. The dataset is available at https://isrc.iscas.ac.cn/gitlab/research/locount-dataset.
Keywords: Object localization and counting, benchmark dataset, retail, cascade network.
Object detection is one of the most fundamental tasks in the computer vision community, which aims to answer the question: “where are the instances of the particular object classes?”. It is extremely useful in retail scenarios, such as identifying commodities on shelves to provide review or price information, and navigation in supermarkets to promote sales. The conventional standard is to use a bounding box to represent each object instance. However, this is not achievable in industry-relevant applications in the context of warehouses, due to the severe occlusions among groups of instances of the same categories. For example, as shown in Fig. 1(c), it is extremely difficult to annotate the stacked dinner plates even for a well-trained annotator. Meanwhile, it is almost impossible for object detectors to detect all stacked dinner plates accurately, even for the state-of-the-art detectors.
Inspired by the definitions of object detection [4, 34] and crowd counting [17, 35], we propose a new task, i.e., simultaneous object localization and counting, abbreviated as Locount, which requires algorithms to localize groups of objects of interest together with the number of instances. Specifically, as shown in Fig. 1(d), if some object instances severely occlude each other (e.g., the bowls and dinner plates in Fig. 2(l)) or belong to the same product semantically (e.g., the carbonated drinks and chopsticks in Fig. 6 (g) and (h)), we merge their annotated bounding boxes and use the minimum enclosing bounding box with an associated instance number to indicate this group of instances. To the best of our knowledge, there does not exist a dataset or benchmark attempting to solve this issue. That is, object detection and crowd counting problems are considered individually with their own evaluation protocols, as shown in Fig. 1(a)(b).
To solve the above issues, we collect a large-scale object localization and counting dataset at different stores and apartments, which consists of images with the JPEG image resolution of pixels. We hire over domain experts to annotate more than million object instances in categories (including Jacket, Shoes, Oven, etc.) for more than two months, and conduct several rounds of cross-checking to ensure the annotation quality. To facilitate data usage, we divide the dataset into two subsets, i.e., training and testing sets, including images for training and images for testing. Meanwhile, to fairly evaluate the performance of algorithms in the Locount task, we design a new evaluation protocol inspired by conventional object detection and counting protocols [21, 35]. It can penalize algorithms for missing object instances, for duplicate detections of one instance, for false positive detections, and for false counting numbers of detections.
Moreover, we present a cascaded localization and counting network (CLCNet) as a strong baseline, to solve object localization and counting simultaneously. Specifically, inspired by Cascade R-CNN , our CLCNet gradually classifies and regresses the bounding boxes of objects and counts the number of instances enclosed in the predicted bounding boxes with increasing IoU and count thresholds, respectively. As shown in Fig. 2(I), for the counting problem, it is challenging to predict the accurate numbers of instances enclosed in the bounding boxes due to similar appearance, especially for the stacked objects (e.g., bowls and dinner plates). To that end, we design a coarse-to-fine multi-stage classification process to gradually narrow the ranges of instance numbers instead of directly regressing instance numbers, to generate accurate results. We define the quality of a hypothesis as its localization intersection-over-union (IoU) and counting accuracy (CA) with the ground-truth, and use the increasing IoU thresholds and more accurate counting partition to generate positives/negatives for training. The whole CLCNet is trained in an end-to-end manner with the multi-task loss, formed by three terms, i.e., classification loss, regression loss, and counting loss. Extensive experiments are conducted on the proposed dataset to demonstrate its effectiveness for Locount. We also provide the analysis and discussions on failure cases to indicate future directions and improvements.
Contributions. (1) We propose a new task, i.e., Locount, which aims to localize groups of objects of interest with the numbers of enclosed instances. (2) We construct a large-scale object localization and counting dataset in retail stores and a new evaluation protocol to evaluate the performance of algorithms for Locount. (3) We present the CLCNet method to solve the Locount task, which uses a coarse-to-fine multi-stage process to gradually classify and regress the bounding boxes of objects and narrow the ranges of instance numbers, instead of directly regressing them, to generate accurate results. (4) Extensive experiments are conducted on the proposed dataset to validate the effectiveness of the proposed methods, and some analyses and discussions of failure cases are provided to indicate future directions.
2 Related work
We briefly discuss some prior work in constructing object detection datasets in retail scenarios, and the state-of-the-art object detection and counting methods.
Existing datasets. Commodity detection is critical for several applications in retail scenarios. Several datasets have been collected to boost research and development in this field. The SOIL-47 dataset contains only images with product categories for object recognition. The Supermarket dataset focuses on recognizing fruits and vegetables, and consists of images in categories. The images in the dataset contain one or more items belonging to the same category with pure-color backgrounds in various poses, see Fig. 2(b). D2S is designed for product detection and recognition, and includes images in categories. Each image contains several items belonging to different categories with various poses, illumination conditions, and backgrounds. The RPC dataset considers commodity detection in automatic checkout scenarios, and consists of images in categories. However, the aforementioned datasets focus on image classification or commodity detection in constrained environments, which are much easier than commodity detection in supermarkets or shopping malls in mobile shooting views, see Fig. 2.
Recently, some attempts focus on the commodity detection task in supermarket or shopping mall scenarios in mobile shooting views, see Fig. 2 (g), (h), and (i). Merler et al. collect the Grozi-120 dataset, which is formed by images in categories for grocery recognition. Grozi-3.2k contains images collected from the Internet for training, and images acquired from real-world supermarket shelves for testing. The SHORT dataset contains several high-resolution grocery product images captured by hand-held smart phones under various illumination conditions. The Freiburg Groceries dataset consists of images covering different classes of groceries, including images for training and images for testing. The grocery shelves dataset uses cameras to acquire images in product categories from the shelves in approximate stores, which includes groceries. Karlinsky et al. collect two datasets, i.e., the GameStop dataset and the Retail-121 dataset, for fine-grained recognition. The former consists of video clips including categories of game chunks acquired from retail stores, while the latter contains video clips with several products in retail product categories. The TGFS dataset contains images in fine-grained categories, which is acquired from self-service vending machines for automatic self-checkout. The Sku110k dataset provides images with more than million annotated bounding boxes captured in densely packed scenarios, including images for training, images for validation, and images for testing. In contrast to the aforementioned datasets, our dataset focuses on commodity detection on the shelf, where some groceries severely occlude each other and are densely packed, such as the stacked plates in Fig. 1(d). Meanwhile, we focus on commodity detection of different categories, which is much more challenging than the one-class grocery detection task in . The detailed comparisons of the proposed dataset with other related datasets are presented in Table 1.
Object detection algorithms. Object detection requires algorithms to produce a series of bounding boxes with category scores, and existing methods can be roughly divided into two categories, i.e., anchor-based approaches and anchor-free approaches. The anchor-based approach uses anchor boxes to generate object proposals, and then determines the accurate object regions and the corresponding class labels using convolutional networks. For example, Faster R-CNN designs the region proposal network to generate proposals and uses Fast R-CNN to produce accurate bounding boxes and class labels of objects. FPN uses a multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids for object detection. Cascade R-CNN proposes a multi-stage object detection architecture, which is formed by a sequence of detectors trained with increasing IoU thresholds. Considering efficiency, SSD, RetinaNet, and RefineDet omit the proposal generation step and tile multi-scale anchors at different layers, which run very fast and produce competitive detection accuracy. Recently, the anchor-free approach has attracted much attention from researchers, including CornerNet, CenterNet, FCOS, and RepPoints, which generally produce the bounding boxes of objects by learning the features of several object key-points. The anchor-free approach has shown great potential to surpass the anchor-based approach in terms of both accuracy and efficiency.
Object counting algorithms. Object counting methods aim to predict the total number of objects of different categories existing in images, such as pedestrian counting [17, 32, 35, 23], vehicle counting [9, 33], goods counting [18, 8], and general object counting [2, 15, 3]. In contrast to the Locount task, crowd counting and localization are always based on image-level statistics, which only require algorithms to produce the centers of objects, see Fig. 1(a). The count numbers associated with the bounding boxes in our dataset (see Fig. 1(d)) are used to indicate the number of instances enclosed in the bounding boxes, designed to bypass the severe occlusion challenge in real-world applications.
3 The Locount Dataset
The Locount dataset is formed by JPEG images with the resolution of pixels. Notably, to ensure the diversity, we acquire the dataset at different stores and apartments with various illumination conditions and shooting angles.
Data collection and annotation. As mentioned above, we acquire the dataset at different stores and apartments. The dataset contains common commodities, including big subclasses, i.e., Baby Stuffs (e.g., Baby Diapers and Baby Slippers), Drinks (e.g., Juice and Ginger Tea), Food Stuff (e.g., Dried Fish and Cake), Daily Chemicals (e.g., Soap and Shampoo), Clothing (e.g., Jacket and Adult Hats), Electrical Appliances (e.g., Microwave Oven and Socket), Storage Appliances (e.g., Trash and Stool), Kitchen Utensils (e.g., Forks and Food Box), and Stationery and Sporting Goods (e.g., Skate and Notebook). Please see Fig. 3 for more details. Various factors challenge the performance of algorithms, such as scale changes, illumination variations, occlusion, similar appearance, cluttered background, blurring, and deformation.
More than object instances are annotated in the proposed Locount dataset. Specifically, we hire experts to label the bounding boxes with the instance numbers using the Colabeler tool.
Object categories. We group the object categories in our Locount dataset into a hierarchical structure, which is formed by big sub-groups including Baby Stuffs, Drinks, Foodstuff, Daily Chemicals, Clothing, Electrical Appliances, Storage Appliances, Kitchen Utensils, and Stationery and Sporting Goods, as shown in Fig. 3. Each sub-group is further divided into several sub-classes, and the common products in retail stores are covered by our Locount dataset. The numbers of instances in these sub-groups in the training and testing subsets are presented in Fig. 4(a). The detailed category distributions are summarized in the Appendix.
Object scales. We use the square root of the area of a bounding box in pixels to indicate its scale, and divide the objects into three subsets based on their scales, i.e., the small scale subset ( pixels), the medium scale subset (- pixels), and the large scale subset ( pixels). The distribution of object scales in the training and testing subsets is presented in Fig. 4(b).
Object numbers. As described above, we associate an integer with each bounding box to indicate the number of instances enclosed in the bounding box, see Fig. 2 (i), (j), (k), and (l). To facilitate analysis, we divide the dataset into three subsets based on the instance numbers associated with the bounding boxes, i.e., the individual number subset (number equals to ), the medium number subset (number is ), and the large number subset (number ). The instance number distribution in the training and testing subsets is presented in Fig. 4(c).
Evaluation protocol. To fairly compare algorithms on the Locount task, we design a new evaluation protocol, which penalizes algorithms for missing object instances, for duplicate detections of one instance, for false positive detections, and for false counting numbers of detections. Inspired by MS COCO , we evaluate detectors using the newly designed metrics AP, AP, AP, and AR. Specifically, a correct detection should satisfy two criteria: (1) the localization intersection over union, , between the predicted bounding box and the ground-truth bounding box is larger than the threshold , i.e., ; and (2) the counting accuracy, , between the predicted instance number enclosed in the predicted bounding box and the ground-truth instance number is larger than the threshold , i.e., . After that, AP is computed by averaging over all IoU thresholds, i.e., with the uniform step size , and AC thresholds, i.e., with the uniform step size , of all categories, which is used as the primary metric for ranking algorithms.
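The two criteria above can be sketched as follows. This is a minimal illustration, not the released evaluation code: the exact counting-accuracy formula is not reproduced in this excerpt, so the relative-error form of `counting_accuracy` below, as well as all function names and default thresholds, are assumptions.

```python
# Sketch of the Locount matching criterion: a detection is a true positive
# only if BOTH the localization IoU and the counting accuracy (CA) clear
# their thresholds. Names, defaults, and the CA formula are illustrative.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def counting_accuracy(pred_count, gt_count):
    """One possible CA definition: 1 minus the relative counting error."""
    return max(0.0, 1.0 - abs(pred_count - gt_count) / max(gt_count, 1))

def is_true_positive(pred, gt, iou_thr=0.5, ca_thr=0.5):
    """pred and gt are (box, count) pairs; both criteria must hold."""
    return (iou(pred[0], gt[0]) >= iou_thr
            and counting_accuracy(pred[1], gt[1]) >= ca_thr)
```

A detector that localizes a stack of plates perfectly but predicts a count of 1 against a ground truth of 10 would fail the second criterion, which is exactly what distinguishes this protocol from conventional detection metrics.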
We design a cascaded localization and counting network (CLCNet) to solve the Locount task, which gradually classifies and regresses the bounding boxes of objects, and estimates the number of instances enclosed in the predicted bounding boxes, with increasing IoU and count thresholds in the training phase. The architecture of the proposed CLCNet is shown in Fig. 5. The entire image is first fed into the backbone network to extract features. A proposal sub-network (denoted as “”) is then used to produce preliminary object proposals. After that, given the detection proposals from the previous stage, multiple stages for localization and counting, i.e., , are cascaded to generate the final object bounding boxes with classification scores and the numbers of instances enclosed in the bounding boxes, where  is the total number of stages. The -th stage  takes the features generated by the ROIAlign operation  to produce the intermediate classification score, object bounding box, and number of instances. That is, the features are fed into three sibling fully connected (FC) layers, i.e., a box-regression layer, a box-classification layer, and an instance counting layer, to generate the final results. Notably, the localization IoU threshold in the -th stage used to generate the positive/negative samples in the training phase is set to , where  is a pre-defined incremental parameter. The counting accuracy threshold for positive/negative sample generation is determined by the architecture design of our CLCNet, which is described as follows.
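The cascaded refinement described above can be sketched conceptually as follows. The concrete threshold schedule (base 0.5, increment 0.1) follows the Cascade R-CNN convention and is an assumption, since the exact values are elided in this excerpt; all names are illustrative.

```python
# Conceptual sketch of the CLCNet cascade. Each stage re-scores, re-regresses,
# and re-counts the boxes produced by the previous stage; the per-stage IoU
# thresholds (used only to pick training positives) grow by a fixed increment.
# The base/delta values are assumed, following the Cascade R-CNN convention.

def cascade_thresholds(base_iou=0.5, delta=0.1, num_stages=3):
    """IoU threshold used to define positives at stage t (1-based):
    base_iou + (t - 1) * delta."""
    return [base_iou + t * delta for t in range(num_stages)]

def run_cascade(proposals, stages):
    """stages: list of callables, each mapping boxes -> (boxes, scores, counts).
    The refined boxes of one stage become the proposals of the next."""
    boxes, scores, counts = proposals, None, None
    for stage in stages:
        boxes, scores, counts = stage(boxes)
    return boxes, scores, counts
```

The key design point is that later stages see progressively better-localized proposals, so they can be trained against stricter IoU thresholds without starving for positive samples.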
We use the same architecture and configuration as  for the box-regression and box-classification layers. For the instance counting layer, a direct strategy is to use an FC layer to regress a floating-point number indicating the number of instances, called the count-regression strategy. However, the numbers of instances enclosed in the bounding boxes are integers, which makes it challenging for the network to regress the number accurately. For example, if the ground-truth numbers of instances are  and  for two bounding boxes, and both of the predictions are , it is difficult for the network to choose the right direction in the training phase. To that end, we design a classification strategy to handle this issue, called the count-classification strategy. Specifically, we assume the maximal number of instances is  and construct  bins to indicate the number of instances. Thus, the counting task is formulated as a multi-class classification task, which uses an FC layer to determine the bin index indicating the instance number.
Notably, as mentioned above, we use the cascade architecture to gradually estimate the instance number with more accurate counting partitions, i.e., the network approaches the accurate number of instances in a coarse-to-fine process. We denote  to be the newly divided number of classes in the -th stage. We have  number of classes up to the -th stage, where . To cover all possible numbers of instances, we need to ensure  in the design. For convenience, we can use a digital base representation to determine the counting division (i.e., the number of bins for the classification task) in each stage. Take the binary representation as an example. Let the maximal number of instances be , and use  stages in our CLCNet. Thus,  digits are more than enough to cover all possibilities of instance numbers (i.e., ). For each stage, we gradually cover more digits (, where ), i.e., partitioning the value space of the instance number into more parts. To be specific, in the first stage, we only focus on the first  digits, i.e., , , , and , of the instance number to generate positive/negative samples. In the second stage, we cover more digits, and use the first  digits, i.e., , for sample generation. The rest can be done in the same manner. In this way, the value space of the instance number can be partitioned into , , and  different parts, and the coarse-to-fine process can be constructed for more accurate counting results. Obviously, octal, decimal, or other base representations can also be used to determine the counting division in the cascade architecture.
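The coarse-to-fine label construction can be sketched as follows. This is an illustration of the base-representation idea under assumed conventions (1-based counts mapped to 0-based class indices by keeping the leading digits); the function name and label convention are not from the paper's code.

```python
# Sketch of coarse-to-fine count-classification labels. With base b and T
# stages, stage t classifies counts into b**t coarse bins determined by the
# leading t base-b digits of the count, so each stage refines the previous
# one. The zero-based label convention here is an assumption.

def stage_label(count, stage, num_stages, base=2):
    """Class index for `count` (1-based) at `stage` (1-based): keep the
    leading `stage` base-`base` digits and drop the rest."""
    return (count - 1) // base ** (num_stages - stage)

# With base=2 and 3 stages, counts 1..8 fall into 2, then 4, then 8 bins:
#   stage 1: counts 1-4 -> class 0, counts 5-8 -> class 1
#   stage 2: 1-2 -> 0, 3-4 -> 1, 5-6 -> 2, 7-8 -> 3
#   stage 3: every count gets its own class.
```

Each stage therefore solves an easier classification problem than direct fine-grained counting, and a mistake at a coarse stage can still be corrected only within its bin, which is what motivates the constraint that the per-stage partitions jointly cover the full count range.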
Loss function. We use the multi-task loss to train the network in an end-to-end manner, which is formed by three terms, i.e., the classification loss, the regression loss, and the counting loss. The overall loss function is computed as
where , , and are the classification, regression, and instance counting losses, is the number of positive anchors in the training phase, and and are the predefined parameters used to balance these three loss terms. Similar to , we use the cross-entropy loss and smooth L1 loss to compute the classification loss and the regression loss , respectively. Meanwhile, for the count-regression and count-classification strategies, the smooth L1 and cross-entropy losses are used to compute the counting loss, respectively.
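The three loss terms can be written out in a minimal, framework-free sketch. The balancing weights and all function names are illustrative assumptions; the cross-entropy and smooth-L1 forms follow the standard definitions cited above.

```python
# Minimal sketch of the CLCNet multi-task loss: cross-entropy for box
# classification and count classification, smooth L1 for box regression.
# The lam_reg / lam_cnt balancing weights are assumed placeholders.
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the ground-truth class."""
    return -math.log(max(probs[label], 1e-12))

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 summed over the box offsets: quadratic below beta,
    linear above."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total

def multi_task_loss(cls_probs, cls_label, box_pred, box_target,
                    cnt_probs, cnt_label, lam_reg=1.0, lam_cnt=1.0):
    """Weighted sum of classification, regression, and counting losses."""
    return (cross_entropy(cls_probs, cls_label)
            + lam_reg * smooth_l1(box_pred, box_target)
            + lam_cnt * cross_entropy(cnt_probs, cnt_label))
```

Under the count-regression strategy, the last term would instead be a smooth L1 on the predicted count, mirroring the regression branch.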
| Method | MS COCO protocol | | | | Proposed protocol | | | |
|---|---|---|---|---|---|---|---|---|
| | AP | AP50 | AP75 | AR | AP | AP50 | AP75 | AR |
| Faster R-CNN | 45.3 | 64.3 | 53.2 | 55.9 | 39.7 | 56.7 | 46.8 | 50.2 |
| Cascade R-CNN | 46.8 | 63.2 | 54.7 | 56.2 | 40.9 | 55.7 | 47.8 | 50.5 |
We conduct several experiments with state-of-the-art object detectors and the proposed CLCNet method on the proposed dataset, to demonstrate the effectiveness of CLCNet. In addition, we provide some analyses and discussions of failure cases to describe the challenges of the collected dataset.
All the evaluated methods are implemented based on the mmdetection platform.
Quantitative results. As presented in Table 2, we compare our CLCNet method with state-of-the-art object detectors (e.g., FCOS, RepPoints, SSD, RetinaNet, Faster R-CNN, and Cascade R-CNN), for both the conventional object detection task and the proposed Locount task. Notably, for the Locount task, each detected bounding box of the conventional detectors is regarded as enclosing only one instance. We use CLCNet-s-reg to denote the CLCNet method with  stages and the count-regression strategy for counting, and CLCNet-s-cls to denote the CLCNet method with  stages and the  digital representation in the count-classification strategy for counting. Notably, if we use only one stage, CLCNet reduces to Faster R-CNN with a counting head.
For the conventional object detection task, we use the evaluation protocol in MS COCO  to indicate the localization accuracy. As shown in Table 2, our CLCNet method produces localization accuracy comparable to its baselines with the count-classification strategy, e.g., CLCNet-s(3)-cls(2) vs. Cascade R-CNN  and CLCNet-s(1)-cls(10) vs. Faster R-CNN . This indicates that the count-classification strategy does not affect the accuracy of object localization. Meanwhile, it is worth mentioning that with the count-regression strategy, the localization accuracy is affected to some extent, e.g., CLCNet-s(3)-reg vs. Cascade R-CNN , and CLCNet-s(1)-reg vs. Faster R-CNN , demonstrating that the floating-point prediction of the counting layer confuses the network and degrades accuracy (see Section 4).
For the Locount task, we use the proposed protocol to evaluate the performance of algorithms, as shown in Table 2. The conventional object detection methods assume that there is only one instance enclosed in each bounding box, resulting in inferior accuracy in terms of . Among them, Cascade R-CNN  produces the best score of . Meanwhile, our CLCNet method, based on either the count-regression strategy or the count-classification strategy, can produce the accurate number of instances in the bounding box in some scenarios, see the qualitative results shown in Fig. 6. Notably, CLCNet-s-reg performs worse than its counterpart CLCNet-s-cls, which further validates the effectiveness of the proposed count-classification strategy. Overall, the CLCNet-s(2)-cls(10) method achieves the state-of-the-art results with  AP score on our Locount dataset, surpassing all other methods.
Ablation study on the number of stages. We further perform experiments to study the influence of the number of stages in CLCNet in terms of the object scale and object number attributes in Table 3 and Table 4. We conclude that using multiple stages generally achieves better results. For example, CLCNet-s-cls performs better than CLCNet-s-cls in terms of the AP and AP scores on all subsets, see Table 3 and Table 4. This indicates the effectiveness of the coarse-to-fine process in our method. However, using too many stages (more than  stages) may cause over-fitting, since too many parameters are introduced into the network, resulting in inferior results. For example, CLCNet-s-reg produces the  AP score compared to CLCNet-s-reg with the  AP score.
Failure analysis. We also analyze some failure cases of our CLCNet on the collected Locount dataset, shown in Fig. 7 and Fig. 8. As shown in Fig. 7, our CLCNet misses some small objects such as baby tableware and rubber balls. Besides, if objects have similar appearance (see the facial cleanser in Fig. 7(g) and the electric frying pan in Fig. 7(h)), our CLCNet method may produce incorrect classification results. Meanwhile, as shown in Fig. 8, our method still does not perform well under severe occlusion and background clutter. For example, it is difficult to accurately count a group of small pens (see Fig. 8(b)) or severely occluded basins (see Fig. 8(c)). As shown in Fig. 8(f), our method is prone to outputting a count of only one per bounding box when heavy occlusion occurs. Note that there still remains much room for improvement of algorithms on the Locount task.
In this paper, we define a new task Locount to localize groups of objects with the instance numbers, which is more practical in retail scenarios. Meanwhile, we collect a large-scale object localization and counting dataset, formed by images with more than million annotated object instances in categories. A new evaluation protocol is designed to fairly compare the performance of algorithms on the Locount task. We also present the CLCNet method, which uses a coarse-to-fine multi-stage process to gradually classify and regress the bounding boxes, and predict the instance numbers enclosed in the bounding boxes. Finally, we carry out several experiments on the proposed dataset to validate the effectiveness of the proposed method, and present some analysis and discussions on failure cases to indicate future directions.
The numbers of instances in the object categories of the training and testing subsets are shown in Fig. 10. Notably, to reduce the chance that algorithms overfit to particular scenarios, the images in the training and testing sets are acquired at different locations, but share similar conditions and attributes. Thus, the numbers of instances in some categories, such as Bowl, Pen, and Toothpaste, are uneven between the training and testing sets. Meanwhile, the numbers of instances in some categories are much smaller than in others, e.g., Facial Cleanser vs. Electromagnetic Furnace, and Dinner Plate vs. Cutter, which is another factor challenging the performance of algorithms on the proposed dataset. In addition, we present more qualitative results of our CLCNet method on the Locount dataset in Fig. 9. The numbers attached to the bounding boxes indicate the predicted instance numbers enclosed in the bounding boxes.
- Most state-of-the-art object detectors use non-maximum suppression (NMS) to post-process object proposals into final detections. Specifically, NMS filters the proposals based on the intersection-over-union (IoU) between them, so most of the stacked dinner plates may fail to be detected.
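The failure mode described in this note can be seen directly in a minimal greedy-NMS sketch (names and the default threshold are illustrative, not from any particular detector's code): stacked items produce near-coincident boxes, and all but the highest-scoring one are suppressed.

```python
# Sketch of greedy NMS: boxes are visited in descending score order, and any
# box whose IoU with an already-kept box exceeds iou_thr is discarded. Boxes
# of stacked items almost coincide, so they collapse to a single detection.

def nms(boxes, scores, iou_thr=0.5):
    """boxes: list of (x1, y1, x2, y2); returns indices of kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

For two stacked plates with boxes (0, 0, 10, 10) and (0, 0, 10, 9), the IoU is 0.9, so the lower-scoring plate is suppressed regardless of how well it was detected, which is the motivation for grouping such instances under one box with a count.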
- (2018) Cascade R-CNN: delving into high quality object detection. In CVPR, pp. 6154–6162.
- (2017) Counting everyday objects in everyday scenes. In CVPR, pp. 4428–4437.
- (2019) Object counting and instance segmentation with image-level supervision. CoRR abs/1903.02494.
- (2005) Histograms of oriented gradients for human detection. In CVPR, pp. 886–893.
- (2018) MVTec D2S: densely segmented supermarket dataset. In ECCV, pp. 581–597.
- (2014) Recognizing products: a per-exemplar multi-label image classification approach. In ECCV, pp. 440–455.
- (2015) Fast R-CNN. In ICCV, pp. 1440–1448.
- (2019) Precise detection in densely packed scenes. CoRR abs/1904.00853.
- (2015) Extremely overlapping vehicle counting. In IbPRIA, pp. 423–431.
- (2019) Take goods from shelves: a dataset for class-incremental object detection. In ICMR, pp. 271–278.
- (2017) Mask R-CNN. In ICCV, pp. 2980–2988.
- (2016) The Freiburg Groceries dataset. CoRR abs/1611.05799.
- (2017) Fine-grained recognition of thousands of object categories with single-example training. In CVPR, pp. 965–974.
- (2002) Evaluating colour-based object recognition algorithms using the SOIL-47 database.
- (2018) Where are the blobs: counting by localization with point supervision. In ECCV, pp. 560–576.
- (2018) CornerNet: detecting objects as paired keypoints. In ECCV, pp. 765–781.
- (2010) Learning to count objects in images. In NeurIPS, pp. 1324–1332.
- (2019) Data priming network for automatic check-out. In ACM MM.
- (2017) Feature pyramid networks for object detection. In CVPR, pp. 936–944.
- (2017) Focal loss for dense object detection. In ICCV, pp. 2999–3007.
- (2014) Microsoft COCO: common objects in context. In ECCV, pp. 740–755.
- (2016) SSD: single shot multibox detector. In ECCV, pp. 21–37.
- (2018) Leveraging unlabeled data for crowd counting by learning to rank. In CVPR, pp. 7661–7669.
- (2007) Recognizing groceries in situ using in vitro training data. In CVPR.
- (2017) Faster R-CNN: towards real-time object detection with region proposal networks. TPAMI 39 (6), pp. 1137–1149.
- (2014) Small hand-held object recognition test (SHORT). In WACV, pp. 524–531.
- (2010) Automatic fruit and vegetable classification from images. Computers and Electronics in Agriculture 70 (1), pp. 96–104.
- (2019) FCOS: fully convolutional one-stage object detection. CoRR abs/1904.01355.
- (2015) Toward retail product recognition on grocery shelves. In ICGIP, Vol. 9443, pp. 944309.
- (2019) RPC: a large-scale retail product checkout dataset. CoRR abs/1901.07249.
- (2019) RepPoints: point set representation for object detection. CoRR abs/1904.11490.
- (2015) Cross-scene crowd counting via deep convolutional neural networks. In CVPR, pp. 833–841.
- (2017) FCN-rLSTM: deep spatio-temporal neural networks for vehicle counting in city cameras. In ICCV, pp. 3687–3696.
- (2018) Single-shot refinement neural network for object detection. In CVPR, pp. 4203–4212.
- (2016) Single-image crowd counting via multi-column convolutional neural network. In CVPR, pp. 589–597.
- (2019) Objects as points. CoRR abs/1904.07850.