Benchmark for Generic Product Detection: A strong baseline for Dense Object Detection
Object detection in densely packed scenes is a new area where standard object detectors fail to train well. We show that a standard object detector performs better on densely packed scenes when it is trained on normal scenes rather than on dense scenes. We train a standard object detector on a small, normally packed dataset with data augmentation techniques, and achieve significantly better results than state-of-the-art methods trained on densely packed scenes. We obtain 68.5% mAP on the SKU110K dataset, 19.3% higher (1.4x better) than the previous state-of-the-art. We also create a varied benchmark for generic SKU product detection by providing full annotations for multiple public datasets; it can be accessed at this URL. We hope that this benchmark helps in building robust detectors that perform reliably across different settings.
Dense Object Detection, Grocery Products, Retail Products, Benchmark, Generic SKU Detection
[Table 1. Per-dataset statistics: Dataset, #Images, #Objects, #Obj/Img, Object Size, Avg Img Size.]
The real-world applications of computer vision span multiple industries like banking, agriculture, governance, healthcare, automotive, retail, and manufacturing. A few prominent ones include self-driving cars, automated retail stores like Amazon Go, and automated surveillance. Object detectors are a critical component of such real-world products. Research on object detectors has been quite vibrant, with a considerable number of datasets spanning various domains. However, the sub-topic of object detection in dense scenes is rarely explored. A recent study  showed that standard object detectors do not train well on dense scenes. This topic is quite relevant to multiple applications, for example in the surveillance and retail industries. Some of these applications are crowd counting, monitoring and auditing of retail shelves, and insights into brand presence for sales and marketing teams.
Exemplar-based object detection refers to the detection and classification of objects in scene images with the supervision of an exemplar image of the object. Most object detection datasets are quite large, with a sufficient number of instances of every object category. Most object detection methods  depend on large, balanced object detection datasets to perform well on every category. Such guarantees cannot be made for real-world applications, where object categories vary widely in both variety and availability. For example, in the retail domain, gathering data to train an end-to-end object detection model is highly time-consuming and costly, because collecting enough data that covers all variants of objects, with equal representation of each, is much harder. For example, ensuring that our dataset contains a specific rare Mercedes logo design requires searching across multiple showrooms or marketplaces. We cannot even be sure that availability across all retail classes is sufficient to create a balanced object detection dataset. A similar case arises when monitoring retail shelves, which hold thousands of SKUs with different availability and frequency.
Moreover, in a dynamic world where new products, new marketing materials, and new logos keep getting introduced, incorporating incremental learning into real-world applications becomes greatly necessary. Unfortunately, current methods of incremental learning for object detection lead to a vast and unacceptable drop in performance . Many of these applications also involve distinguishing between extremely fine-grained classes, e.g., retail shelf monitoring, logo monitoring, and face recognition. Building an end-to-end detector that performs both dense object detection and fine-grained recognition is a very challenging task, and its real-world performance is quite poor. Hence, to tackle this problem, we propose decoupling detection and classification. We use a generic object detector that predicts bounding boxes for all objects of interest; the detected objects can then be classified by a suitable fine-grained classifier.
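The decoupled pipeline described above can be sketched as follows. This is an illustrative skeleton, not the paper's code: `generic_detector` and `fine_grained_classifier` are hypothetical stand-ins for a trained class-agnostic detector and a separately trained fine-grained classifier.

```python
def generic_detector(image):
    """Class-agnostic detector: returns (x1, y1, x2, y2) boxes for SKU-like
    objects. A real system would run e.g. a trained Faster R-CNN here."""
    return [(10, 10, 50, 80), (60, 12, 100, 85)]  # dummy boxes

def fine_grained_classifier(crop):
    """Labels one cropped object. A real system would use a fine-grained
    classifier trained independently of the detector."""
    return "sku-unknown"  # dummy label

def detect_then_classify(image):
    """Decoupled pipeline: detect class-agnostic boxes, crop, classify each crop."""
    results = []
    for (x1, y1, x2, y2) in generic_detector(image):
        crop = [row[x1:x2] for row in image[y1:y2]]  # simple 2D-list crop
        results.append({"box": (x1, y1, x2, y2),
                        "label": fine_grained_classifier(crop)})
    return results
```

The benefit of this split is that new SKUs only require updating the classifier; the detector, being class-agnostic, does not need retraining.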
This brings us to the current work of generic object detection in densely packed scenes. Previous work has shown that standard detectors trained on dense scenes do not perform well on dense scenes. In this work, we show that standard detectors trained on small datasets of normal scenes are quite sufficient and work much better than detectors trained on large datasets of dense scenes. We observe a 19.3% higher mAP, 1.4x better than the previous state-of-the-art, on the SKU110K dataset.
We also create a varied benchmark for generic SKU product detection by annotating every SKU in multiple datasets. The motivation behind this is to create detectors that are robust across different settings. It is quite common for deep-learning-based detectors to perform well only on data similar to their training set, while in industry there is a strong need for robust detectors that perform reliably in the wild. This benchmark consists of 6 datasets (Table 1) used solely as test sets. Models trained on any dataset can be evaluated on this benchmark to measure progress in robust generic SKU detection. The benchmark datasets, evaluation code, and leaderboard are available at this URL.
2 Benchmark Datasets
A huge benchmark dataset for product detection in densely packed scenes was recently released by . To add diversity to the task of generic product detection, we release a benchmark of datasets. Details of the datasets are shown in Table 1. Please note that all of these datasets are used as test sets on which we benchmark our models. We welcome the community to participate in this benchmark by submitting their results.
A database of 3153 supermarket images was released by . The authors also provided information about which product is present in each image. We annotate every object in the entire dataset to provide ground truth for evaluating generic object detection. The average number of objects per image in this dataset is 37, while the average object area is roughly 0.052 megapixels.
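The per-dataset statistics reported throughout this section (#Obj/Img and average object area in megapixels) can be computed from box annotations with a few lines. A minimal sketch, assuming a hypothetical annotation mapping from image id to a list of pixel-coordinate boxes:

```python
def dataset_stats(annotations):
    """annotations: dict image_id -> list of (x1, y1, x2, y2) boxes in pixels.
    Returns (#Obj/Img, average object area in megapixels)."""
    n_images = len(annotations)
    n_objects = sum(len(boxes) for boxes in annotations.values())
    total_area = sum((x2 - x1) * (y2 - y1)
                     for boxes in annotations.values()
                     for (x1, y1, x2, y2) in boxes)
    return n_objects / n_images, total_area / n_objects / 1e6
```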
2.2 Grocery products (GP)
A multi-label classification approach was proposed by , accompanied by 680 annotated shop images from their GP dataset. Their annotations covered a group of identical products with a single bounding box rather than a bounding box per individual product. A subset of 70 images was chosen by  and annotated with the desired object-level annotations. We provide individual bounding box annotations for every product in all 680 images of this dataset. The average number of objects per image in this dataset is 13, while the average object size is roughly 0.293 megapixels.
2.3 CAPG-GP

A fine-grained grocery product dataset, CAPG-GP, was released by . It consists of 234 test images taken in 2 stores. The authors annotated only the products belonging to certain categories. To create ground truth for generic object detection, we annotate every product in the entire dataset. The average number of objects per image in this dataset is 20, while the average object area is roughly 0.377 megapixels.
| Type of Images | Avg Img Size | #Images |
| Type 1 (HoloLens) | 1.11 | 30 |
| Type 3 (OnePlus-6T) | 16.03 | 208 |
2.4 Existing General Product Datasets
A retail product dataset was released by , containing 345 images of tobacco shelves collected in 40 stores with four cameras. The authors also released annotations of every product.
Recently,  released a huge dataset, SKU110K, for precise object detection in densely packed scenes. This dataset contains 11,762 images split into train, validation, and test sets. The test set consists of 2941 images, with an average of 146 objects per image. The average object area is roughly 0.27% of the image, the lowest among all the datasets in this benchmark.
We collected close to 300 images from the public domain (e.g., Google Images, Open Images), encompassing the various shapes in which retail products occur. The average number of objects per image in this training set is 14.6. This is in contrast with the data on which  was trained, where the average number of objects per image is 147.4, as shown in Table 4. We apply standard object detection augmentations from . We train a standard object detector, Faster R-CNN , on the training set described above as a baseline for this benchmark.
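One subtlety of detection augmentations like those used above is that geometric transforms must be applied to the bounding boxes as well as the pixels (libraries such as the cited augmentation package handle this internally). A minimal stdlib sketch of the box side of a horizontal flip, with illustrative names:

```python
def hflip_boxes(image_width, boxes):
    """Box coordinates (x1, y1, x2, y2) after horizontally flipping an image
    of the given width: x-coordinates mirror about the image's vertical axis."""
    return [(image_width - x2, y1, image_width - x1, y2)
            for (x1, y1, x2, y2) in boxes]
```

Applying the flip twice recovers the original boxes, a handy sanity check when wiring up box-aware augmentations.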
[Table 4. For each dataset: #Images and #Images with SKU count greater than various thresholds.]
4 Implementation & Results
We apply standard post-processing steps like Non-Max Suppression (NMS) to our detections. We use multi-scale testing for scenes where object sizes are quite large (e.g., GP, CAPG-GP); for others, like SKU110K, only a single scale is used.
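For reference, greedy NMS keeps the highest-scoring box and discards any remaining box whose IoU with an already-kept box exceeds a threshold. A self-contained sketch (real pipelines would use an optimized library implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: returns indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

In densely packed scenes the IoU threshold matters: too low a threshold suppresses true neighboring products, which is one reason dense detection is hard.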
We use the same evaluation metric as the recent work . This is the standard evaluation metric used by COCO . Average precision (AP) and average recall (AR) are reported at IoU=.50:.05:.95 (averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05). The 300 in AR300 denotes the maximum number of detections considered per image. AP and AR at specific IoUs (0.50 and 0.75) are also reported.
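The threshold-averaging convention above can be made concrete with a short sketch; `ap_at` is a hypothetical callable that returns AP at a single IoU threshold (in practice computed by the COCO evaluation code):

```python
# The COCO IoU thresholds .50:.05:.95 -- ten values from 0.50 to 0.95.
IOU_THRESHOLDS = [round(0.5 + 0.05 * i, 2) for i in range(10)]

def average_ap(ap_at):
    """Overall AP: the mean of per-threshold APs over the ten IoU thresholds.
    ap_at: callable mapping an IoU threshold to AP at that threshold."""
    return sum(ap_at(t) for t in IOU_THRESHOLDS) / len(IOU_THRESHOLDS)
```

AP@0.50 and AP@0.75 reported separately are simply `ap_at(0.5)` and `ap_at(0.75)` under this convention.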
Our results on the SKU110K dataset (Table 2) clearly show that standard detectors trained on normal scenes work much better than sophisticated methods trained on dense scenes. They also show that large-scale datasets are not needed for generic SKU detection. This method serves as a simple baseline, while methods exploiting the shape of the objects and the structure of densely packed scenes look promising.
Qualitative outputs of our method on different datasets are shown in Figures 1, 2, and 3. The performance on TobaccoShelves was somewhat low (Table 2), which can be seen qualitatively in Figure 5. This shows that evaluating on a varied benchmark is necessary for a detector to perform reliably in the wild. One can see that our method performs well when homogeneous objects are present throughout the image, and it comfortably detects objects of different aspect ratios. Current limitations of this baseline model include handling multi-scale objects ranging from 0.1% to 20% of the scene; better scale-invariant object detectors  can be tried as future work.
6 Dense Object Detection
7 Related Work
Identifying real-world products (in situ) by training on exemplar template product images (in vitro) was initially proposed by . They released a database of 120 SKUs for product classification. Six in vitro images were collected from the web for each product and used for training, while the in situ images were taken from frames of videos captured in a grocery store. There have been a few related retail product checkout datasets by  and . Both are densely packed product datasets arranged in the fashion of a checkout counter, with many overlapping regions between objects.
- (2018) Albumentations: fast and flexible image augmentations. CoRR abs/1809.06839.
- (2018) MVTec D2S: densely segmented supermarket dataset. CoRR abs/1804.08292.
- (2019) Towards identification of packaged products via computer vision: convolutional neural networks for object detection and image classification in retail environments. In Proceedings of the 9th International Conference on the Internet of Things (IoT 2019), New York, NY, USA, pp. 26:1–26:8.
- (2018) Fine-grained grocery product recognition by one-shot learning. In Proceedings of the 26th ACM International Conference on Multimedia (MM ’18), New York, NY, USA, pp. 1706–1714.
- (2014) Recognizing products: a per-exemplar multi-label image classification approach. pp. 440–455.
- (2019) Precise detection in densely packed scenes. CoRR abs/1904.00853.
- (2014) Microsoft COCO: common objects in context. CoRR abs/1405.0312.
- (2007) Recognizing groceries in situ using in vitro training data. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8.
- (2015) Faster R-CNN: towards real-time object detection with region proposal networks. CoRR abs/1506.01497.
- (2017) Incremental learning of object detectors without catastrophic forgetting. CoRR abs/1708.06977.
- (2017) An analysis of scale invariance in object detection – SNIP. CoRR abs/1711.08189.
- (2017) Product recognition in store shelves as a sub-graph isomorphism problem. CoRR abs/1707.08378.
- (2015) Toward retail product recognition on grocery shelves. pp. 944309.
- (2019) RPC: a large-scale retail product checkout dataset. CoRR abs/1901.07249.
- (2007) Where’s the weet-bix?. In ACCV (1), Y. Yagi, S. B. Kang, I. Kweon, and H. Zha (Eds.), Lecture Notes in Computer Science, Vol. 4843, pp. 800–810.