\thesection Abstract


As an instance-level recognition problem, person re-identification (re-ID) relies on discriminative features, which not only capture different spatial scales but also encapsulate an arbitrary combination of multiple scales. We call features of both homogeneous and heterogeneous scales omni-scale features. In this paper, a novel deep re-ID CNN is designed, termed omni-scale network (OSNet), for omni-scale feature learning. This is achieved by designing a residual block composed of multiple convolutional streams, each detecting features at a certain scale. Importantly, a novel unified aggregation gate is introduced to dynamically fuse multi-scale features with input-dependent channel-wise weights. To efficiently learn spatial-channel correlations and avoid overfitting, the building block uses pointwise and depthwise convolutions. By stacking such blocks layer-by-layer, our OSNet is extremely lightweight and can be trained from scratch on existing re-ID benchmarks. Despite its small model size, OSNet achieves state-of-the-art performance on six person re-ID datasets, outperforming most large-sized models, often by a clear margin. Code and models are available at: https://github.com/KaiyangZhou/deep-person-reid.

\thesection Introduction

Person re-identification (re-ID), a fundamental task in distributed multi-camera surveillance, aims to match people appearing in different non-overlapping camera views. As an instance-level recognition problem, person re-ID faces two major challenges as illustrated in Fig. Document. First, the intra-class (instance/identity) variations are typically large due to the changes of camera viewing conditions. For instance, both people in Figs. Document(a) and (b) carry a backpack; the view change across cameras (frontal to back) brings large appearance changes in the backpack area, making matching the same person difficult. Second, there are also small inter-class variations – people in public space often wear similar clothes; viewed from a distance, as is typical in surveillance videos, they can look incredibly similar (see the impostors for all four people in Fig. Document).



Figure \thefigure: Person re-ID is a hard problem, as exemplified by the four triplets of images above. Each sub-figure shows, from left to right, the query image, a true match and an impostor/false match.

To overcome these two challenges, key to re-ID is to learn discriminative features. We argue that such features need to be of omni-scale, defined as the combination of variable homogeneous scales and heterogeneous scales, each of which is composed of a mixture of multiple scales. The need for omni-scale features is evident from Fig. Document. To match people and distinguish them from impostors, features corresponding to small local regions (e.g. shoes, glasses) and global whole body regions are equally important. For example, given the query image in Fig. Document(a) (left), looking at the global-scale features (e.g. young man, a white T-shirt + grey shorts combo) would narrow down the search to the true match (middle) and an impostor (right). Now the local-scale features come into play. The shoe region gives away the fact that the person on the right is an impostor (trainers vs. sandals). However, for more challenging cases, even features of variable homogeneous scales would not be enough and more complicated and richer features that span multiple scales are required. For instance, to eliminate the impostor in Fig. Document(d) (right), one needs features that represent a white T-shirt with a specific logo in the front. Note that the logo is not distinctive on its own – without the white T-shirt as context, it can be confused with many other patterns. Similarly, the white T-shirt is likely everywhere in summer (e.g. Fig. Document(a)). It is however the unique combination, captured by heterogeneous-scale features spanning both small (logo size) and medium (upper body size) scales, that makes the features most effective. Nevertheless, none of the existing re-ID models addresses omni-scale feature learning. In recent years, deep convolutional neural networks (CNNs) have been widely used in person re-ID to learn discriminative features [chang2018multi, li2018harmonious, liu2017hydraplus, si2018dual, sun2018beyond, xu2018attention, yang2019towards, zheng2019joint].
However, most of the CNNs adopted, such as ResNet [he2016deep], were originally designed for object category-level recognition tasks that are fundamentally different from the instance-level recognition task in re-ID. For the latter, omni-scale features are more important, as explained earlier. A few attempts at learning multi-scale features also exist [qian2017multi, chang2018multi]. Yet, none has the ability to learn features of both homogeneous and heterogeneous scales. In this paper, we present OSNet, a novel CNN architecture designed for learning omni-scale feature representations. The underpinning building block consists of multiple convolutional streams with different receptive field sizes (see Fig. Document). The feature scale that each stream focuses on is determined by an exponent, a new dimension factor that is linearly increased across streams to ensure that various scales are captured in each block. Critically, the resulting multi-scale feature maps are dynamically fused by channel-wise weights that are generated by a unified aggregation gate (AG). The AG is a mini-network sharing parameters across all streams with a number of desirable properties for effective model training. With the trainable AG, the generated channel-wise weights become input-dependent, hence the dynamic scale fusion. This novel AG design allows the network to learn omni-scale feature representations: depending on the specific input image, the gate could focus on a single scale by assigning a dominant weight to a particular stream or scale; alternatively, it can pick and mix and thus produce heterogeneous scales.



Figure \thefigure: A schematic of the proposed building block for OSNet. R: Receptive field size.

Apart from omni-scale feature learning, another key design principle adopted in OSNet is to construct a lightweight network. This brings a couple of benefits: (1) re-ID datasets are often of moderate size due to the difficulties in collecting across-camera matched person images. A lightweight network with a small number of parameters is thus less prone to overfitting. (2) In a large-scale surveillance application (e.g. city-wide surveillance using thousands of cameras), the most practical way for re-ID is to perform feature extraction at the camera end. Instead of sending the raw videos to a central server, only the extracted features need to be sent. For on-device processing, small re-ID networks are clearly preferred. To this end, in our building block, we factorise standard convolutions with pointwise and depthwise convolutions [howard2017mobilenets, sandler2018mobilenetv2]. The contributions of this work are thus both the concept of omni-scale feature learning and an effective and efficient implementation of it in OSNet. The end result is a lightweight re-ID model that is more than one order of magnitude smaller than the popular ResNet50-based models, but performs better: OSNet achieves state-of-the-art performance on six person re-ID datasets, beating much larger networks, often by a clear margin. We also demonstrate the effectiveness of OSNet on object category recognition tasks, namely CIFAR [krizhevsky2009learning] and ImageNet [deng2009imagenet], and a multi-label person attribute recognition task. The results suggest that omni-scale feature learning is useful beyond instance recognition and can be considered for a broad range of visual recognition tasks. Code and pre-trained models are available in Torchreid [torchreid].

\thesection Related Work


Deep re-ID architectures. Most existing deep re-ID CNNs [li2014deepreid, ahmed2015improved, varior2016gated, shen2018end, guo2018efficient, subramaniam2016deep, wang2018person] borrow architectures designed for generic object categorisation problems, such as ImageNet 1K object classification. Recently, some architectural modifications have been introduced to reflect the fact that images in re-ID datasets contain instances of only one object category (i.e., person) that mostly stand upright. To exploit the upright body pose, [sun2018beyond, zhang2017alignedreid, fu2019horizontal, wang2018learning] add auxiliary supervision signals to features pooled horizontally from the last convolutional feature maps. [si2018dual, song2018mask, li2018harmonious] devise attention mechanisms to focus feature learning on the foreground person regions. In [zhao2017spindle, su2017pose, xu2018attention, suh2018part, tian2018eliminating, zhang2019densely], body part-specific CNNs are learned by means of off-the-shelf pose detectors. In [li2017person, li2017learning, zhao2017deeply], CNNs are branched to learn representations of global and local image regions. In [yu2017devil, chang2018multi, liu2017hydraplus, wang2018resource], multi-level features extracted at different layers are combined. However, none of these re-ID networks learns multi-scale features explicitly at each layer of the networks as in our OSNet – they typically rely on an external pose model and/or hand-pick specific layers for multi-scale learning. Moreover, heterogeneous-scale features computed from a mixture of different scales are not considered. \keypointMulti-scale feature learning. As far as we know, the concept of omni-scale deep feature learning has never been introduced before. Nonetheless, the importance of multi-scale feature learning has been recognised recently and the multi-stream building block design has also been adopted.
Compared to a number of re-ID networks with multi-stream building blocks [chang2018multi, qian2017multi], OSNet is significantly different. Specifically, the layer design in [chang2018multi] is based on ResNeXt [xie2017aggregated], where each stream learns features at the same scale, while our streams in each block have different scales. Different from [chang2018multi], the network in [qian2017multi] is built on Inception [szegedy2015going, szegedy2016rethinking], where multiple streams were originally designed for low computational cost with a handcrafted mixture of convolution and pooling layers. In contrast, our building block uses a scale-controlling factor to diversify the spatial scales to be captured. Moreover, [qian2017multi] fuses multi-stream features with learnable but fixed-once-learned streamwise weights only at the final block, whereas we fuse multi-scale features within each building block using dynamic (input-dependent) channel-wise weights to learn combinations of multi-scale patterns. Therefore, only our OSNet is capable of learning omni-scale features with each feature channel potentially capturing discriminative features of either a single scale or a weighted mixture of multiple scales. Our experiments (see Sec. Document) show that OSNet significantly outperforms the models in [chang2018multi, qian2017multi]. \keypointLightweight network designs. With embedded AI becoming topical, lightweight CNN design has attracted increasing attention. SqueezeNet [iandola2016squeezenet] compresses feature dimensions using $1 \times 1$ convolutions. IGCNet [zhang2017interleaved], ResNeXt [xie2017aggregated] and CondenseNet [huang2018condense] leverage group convolutions. Xception [chollet2017xception] and the MobileNet series [howard2017mobilenets, sandler2018mobilenetv2] are based on depthwise separable convolutions. Dense convolutions are grouped with channel shuffling in ShuffleNet [zhang2018shufflenet].
In terms of lightweight design, our OSNet is similar to MobileNet by employing factorised convolutions, with some modifications that empirically work better for omni-scale feature learning.

\thesection Omni-Scale Feature Learning

In this section, we present OSNet, which specialises in learning omni-scale feature representations for the person re-ID task. We start with the factorised convolutional layer and then introduce the omni-scale residual block and the unified aggregation gate.

\thesubsection Depthwise Separable Convolutions

To reduce the number of parameters, we adopt depthwise separable convolutions [howard2017mobilenets, chollet2017xception]. The basic idea is to divide a convolution layer $\mathrm{ReLU}(\mathbf{w} * \mathbf{x})$ with kernel $\mathbf{w} \in \mathbb{R}^{k \times k \times c \times c'}$ into two separate layers $\mathrm{ReLU}((\mathbf{v} \circ \mathbf{u}) * \mathbf{x})$ with depthwise kernel $\mathbf{u} \in \mathbb{R}^{k \times k \times 1 \times c'}$ and pointwise kernel $\mathbf{v} \in \mathbb{R}^{1 \times 1 \times c \times c'}$, where $*$ denotes convolution, $k$ the kernel size, $c$ the input channel width and $c'$ the output channel width. Given an input tensor $\mathbf{x} \in \mathbb{R}^{h \times w \times c}$ of height $h$ and width $w$, the computational cost is reduced from $h \cdot w \cdot k^2 \cdot c \cdot c'$ to $h \cdot w \cdot (k^2 + c) \cdot c'$, and the number of parameters from $k^2 \cdot c \cdot c'$ to $(k^2 + c) \cdot c'$. In our implementation, we use $\mathrm{ReLU}((\mathbf{u} \circ \mathbf{v}) * \mathbf{x})$ (pointwise $\rightarrow$ depthwise instead of depthwise $\rightarrow$ pointwise), which turns out to be more effective for omni-scale feature learning. We call such a layer Lite $3 \times 3$ hereafter. The implementation is shown in Fig. Document.
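As a concrete illustration, below is a minimal PyTorch sketch of such a Lite $3 \times 3$ layer (pointwise $1 \times 1$ followed by depthwise $3 \times 3$), together with the parameter-count formulas above. The class and function names are our own, and the batch-norm placement is an assumption; this is not the released implementation.

```python
import torch
import torch.nn as nn

class Lite3x3(nn.Module):
    """Lite layer: 1x1 pointwise conv followed by 3x3 depthwise conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        # groups=c_out gives one 3x3 filter per channel, i.e. a depthwise conv
        self.depthwise = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1,
                                   groups=c_out, bias=False)
        self.bn = nn.BatchNorm2d(c_out)  # assumption: BN before the ReLU

    def forward(self, x):
        return torch.relu(self.bn(self.depthwise(self.pointwise(x))))

def standard_conv_params(k, c, c_out):
    return k * k * c * c_out            # k^2 * c * c'

def lite_params(k, c, c_out):
    return c * c_out + k * k * c_out    # (k^2 + c) * c'
```

For $k = 3$ and $c = c' = 256$, the parameter count drops from 589,824 to 67,840, matching the $(k^2 + c) \cdot c'$ formula.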



Figure \thefigure: (a) Standard $3 \times 3$ convolution. (b) Lite $3 \times 3$ convolution. DW: Depth-Wise.

\thesubsection Omni-Scale Residual Block

The building block in our architecture is the residual bottleneck [he2016deep], equipped with the Lite $3 \times 3$ layer (see Fig. Document(a)). Given an input $\mathbf{x}$, this bottleneck aims to learn a residual $\tilde{\mathbf{x}}$ with a mapping function $F$, i.e.

\begin{equation}
\mathbf{y} = \mathbf{x} + \tilde{\mathbf{x}}, \quad \text{s.t.} \quad \tilde{\mathbf{x}} = F(\mathbf{x}),
\end{equation}
where $F$ represents a Lite $3 \times 3$ layer that learns single-scale features (scale = 3). Note that here the $1 \times 1$ layers are ignored in notation as they are used to manipulate feature dimension and do not contribute to the aggregation of spatial information [he2016deep, xie2017aggregated]. \keypointMulti-scale feature learning. To achieve multi-scale feature learning, we extend the residual function $F$ by introducing a new dimension, exponent $t$, which represents the scale of the feature. For $F^t$, with $t \geq 1$, we stack $t$ Lite $3 \times 3$ layers, and this results in a receptive field of size $(2t+1) \times (2t+1)$. Then, the residual to be learned, $\tilde{\mathbf{x}}$, is the sum of incremental scales of representations up to $T$:

\begin{equation}
\tilde{\mathbf{x}} = \sum_{t=1}^{T} F^t(\mathbf{x}), \quad T \geq 1,
\end{equation}
When $T = 1$, Eq. Document reduces to Eq. Document (see Fig. Document(a)). In this paper, our bottleneck is set with $T = 4$ (i.e. the largest receptive field is $9 \times 9$) as shown in Fig. Document(b). The shortcut connection allows features at smaller scales learned in the current layer to be preserved effectively in the next layers, thus enabling the final features to capture a whole range of spatial scales.
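The multi-stream residual can be sketched as follows. This is a simplified rendering under our own naming: the $1 \times 1$ dimension-reduction/restoration layers are omitted, and the streams are summed directly (the unified aggregation gate described next replaces this plain sum in the full block).

```python
import torch
import torch.nn as nn

class LiteLayer(nn.Module):
    """1x1 pointwise conv followed by 3x3 depthwise conv."""
    def __init__(self, channels):
        super().__init__()
        self.pw = nn.Conv2d(channels, channels, 1, bias=False)
        self.dw = nn.Conv2d(channels, channels, 3, padding=1,
                            groups=channels, bias=False)

    def forward(self, x):
        return torch.relu(self.dw(self.pw(x)))

class MultiScaleResidual(nn.Module):
    """Residual y = x + sum_t F^t(x): stream t stacks t Lite layers,
    giving a (2t+1) x (2t+1) receptive field for t = 1..T."""
    def __init__(self, channels, T=4):
        super().__init__()
        self.streams = nn.ModuleList(
            nn.Sequential(*[LiteLayer(channels) for _ in range(t)])
            for t in range(1, T + 1)
        )

    def forward(self, x):
        residual = sum(stream(x) for stream in self.streams)
        return torch.relu(x + residual)
```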



Figure \thefigure: (a) Baseline bottleneck. (b) Proposed bottleneck. AG: Aggregation Gate. The first/last $1 \times 1$ layers are used to reduce/restore feature dimension.

Unified aggregation gate. So far, each stream can give us features of a specific scale, i.e., they are scale-homogeneous. To learn omni-scale features, we propose to combine the outputs of different streams in a dynamic way, i.e., different weights are assigned to different scales according to the input image, rather than being fixed after training. More specifically, the dynamic scale-fusion is achieved by a novel aggregation gate (AG), which is a learnable neural network. Let $\mathbf{x}^t$ denote $F^t(\mathbf{x})$; the omni-scale residual $\tilde{\mathbf{x}}$ is obtained by

\begin{equation}
\tilde{\mathbf{x}} = \sum_{t=1}^{T} G(\mathbf{x}^t) \odot \mathbf{x}^t,
\end{equation}
where $G(\mathbf{x}^t)$ is a vector with length spanning the entire channel dimension of $\mathbf{x}^t$ and $\odot$ denotes the Hadamard product. $G$ is implemented as a mini-network composed of a non-parametric global average pooling layer [lin2013network] and a multi-layer perceptron (MLP) with one ReLU-activated hidden layer, followed by the sigmoid activation. To reduce parameter overhead, we follow [woo2018cbam, hu2018senet] to reduce the hidden dimension of the MLP with a reduction ratio, which is set to 16. It is worth pointing out that, in contrast to using a single scalar-output function that provides a coarse scale-fusion, we choose to use channel-wise weights, i.e., the output of the AG network is a vector rather than a scalar for the $t$-th stream. This design results in a more fine-grained fusion that tunes each feature channel. In addition, the weights are dynamically computed by being conditioned on the input data. This is crucial for re-ID as the test images contain people of different identities from those in training; thus an adaptive/input-dependent feature-scale fusion strategy is more desirable. Note that in our architecture, the AG is shared for all feature streams in the same omni-scale residual block (dashed box in Fig. Document(b)). This is similar in spirit to the convolution filter parameter sharing in CNNs, resulting in a number of advantages. First, the number of parameters is independent of $T$ (the number of streams), thus the model becomes more scalable. Second, unifying the AG (sharing the same AG module across streams) has a nice property while performing backpropagation. Concretely, suppose the network is supervised by a loss function $\mathcal{L}$ which is differentiable and whose gradient can be computed; the gradient w.r.t. $G$, based on Eq. Document, is

\begin{equation}
\frac{\partial \mathcal{L}}{\partial \theta_G} = \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}} \cdot \sum_{t=1}^{T} \frac{\partial \left( G(\mathbf{x}^t) \odot \mathbf{x}^t \right)}{\partial \theta_G},
\end{equation}
where $\theta_G$ denotes the parameters of $G$.
The second term in Eq. Document indicates that the supervision signals from all streams are gathered together to guide the learning of $G$. This desirable property disappears when each stream has its own gate.
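A minimal sketch of such an AG, global average pooling followed by a two-layer MLP (reduction ratio 16) and a sigmoid, producing channel-wise weights that gate one stream's output; the same module is shared across all streams. Class and variable names are our own, not the released code.

```python
import torch
import torch.nn as nn

class AggregationGate(nn.Module):
    """Unified AG: GAP + 2-layer MLP (reduction ratio r) + sigmoid.
    Returns the input gated by input-dependent channel-wise weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)  # non-parametric global average pool
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, xt):
        b, c, _, _ = xt.shape
        w = self.mlp(self.gap(xt).view(b, c)).view(b, c, 1, 1)
        return w * xt  # G(x^t) ⊙ x^t, broadcast over spatial positions

# In the full block, one shared gate fuses all streams:
#   x_tilde = sum(gate(stream(x)) for stream in streams)
```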

\thesubsection Network Architecture

OSNet is constructed by simply stacking the proposed lightweight bottleneck layer-by-layer without any effort to customise the blocks at different depths (stages) of the network. The detailed network architecture is shown in Table Document. For comparison, the same network architecture with standard convolutions has 6.9 million parameters and 3,384.9 million mult-add operations, both considerably larger than those of our OSNet with the Lite convolution layer design. The standard OSNet in Table Document can be easily scaled up or down in practice, to balance model size, computational cost and performance. To this end, we use a width multiplier and an image resolution multiplier, following [howard2017mobilenets, sandler2018mobilenetv2, zhang2018shufflenet].

stage output OSNet
conv1 128×64, 64 7×7 conv, stride 2
conv1 64×32, 64 3×3 max pool, stride 2
conv2 64×32, 256 bottleneck × 2
transition 64×32, 256 1×1 conv
transition 32×16, 256 2×2 average pool, stride 2
conv3 32×16, 384 bottleneck × 2
transition 32×16, 384 1×1 conv
transition 16×8, 384 2×2 average pool, stride 2
conv4 16×8, 512 bottleneck × 2
conv5 16×8, 512 1×1 conv
gap 1×1, 512 global average pool
fc 1×1, 512 fc
# params 2.2M
Mult-Adds 978.9M
Table \thetable: Architecture of OSNet with input image size $256 \times 128$.

Relation to prior architectures. In terms of multi-stream design, OSNet is related to Inception [szegedy2015going] and ResNeXt [xie2017aggregated], but has crucial differences in several aspects. First, the multi-stream design in OSNet strictly follows the scale-incremental principle dictated by the exponent (Eq. Document). Specifically, different streams have different receptive fields but are built with the same Lite layers (Fig. Document(b)). Such a design is more effective at capturing a wide range of scales. In contrast, Inception was originally designed to have low computational costs by sharing computations with multiple streams. Therefore its structure, which includes mixed operations of convolution and pooling, was handcrafted. ResNeXt has multiple equal-scale streams thus learning representations at the same scale. Second, Inception/ResNeXt aggregates features by concatenation/addition while OSNet uses a unified AG (Eq. Document), which facilitates the learning of combinations of multi-scale features. Critically, it means that the fusion is dynamic and adaptive to each individual input image. Therefore, OSNet’s architecture is fundamentally different from that of Inception/ResNeXt. Third, OSNet uses factorised convolutions and thus the building block and subsequently the whole network is lightweight. Compared with SENet [hu2018senet], OSNet is conceptually different. Concretely, SENet aims to re-calibrate the feature channels by re-scaling the activation values for a single stream, whereas OSNet is designed to selectively fuse multiple feature streams of different receptive field sizes in order to learn omni-scale features (see Fig. Document).

\thesection Experiments

\thesubsection Evaluation on Person Re-Identification


Datasets and settings. We conduct experiments on six widely used person re-ID datasets: Market1501 [zheng2015scalable], CUHK03 [li2014deepreid], DukeMTMC-reID (Duke) [ristani2016performance, zheng2017unlabeled], MSMT17 [wei2018person], VIPeR [gray2007evaluating] and GRID [loy2009multi]. Detailed dataset statistics are provided in Table Document. The first four are considered as ‘big’ datasets even though their sizes (around 30K training images for the largest MSMT17) are fairly moderate; while VIPeR and GRID are generally too small to train without using those big datasets for pre-training. For CUHK03, we use the 767/700 split [zhong2017rerank] with the detected images. For VIPeR and GRID, we first train a single OSNet from scratch using training images from Market1501, CUHK03, Duke and MSMT17 (Mix4), and then perform fine-tuning. Following [li2017person], the results on VIPeR and GRID are averaged over 10 random splits. Such a fine-tuning strategy has been commonly adopted by other deep learning approaches [liu2017hydraplus, wei2017glad, zhao2017spindle, li2017person, zhao2017deeply]. Cumulative matching characteristics (CMC) Rank-1 accuracy and mAP are used as evaluation metrics.
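For clarity, the two metrics can be sketched as follows. This is a simplified version under our own naming: it ignores the standard same-camera/junk-image filtering used in full re-ID evaluation protocols.

```python
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """CMC Rank-1 accuracy and mean average precision (mAP)
    from a query-by-gallery distance matrix."""
    r1_hits, aps = [], []
    g_ids = np.asarray(g_ids)
    for i, qid in enumerate(q_ids):
        order = np.argsort(dist[i])        # gallery sorted by ascending distance
        matches = (g_ids[order] == qid)    # True where the ranked item is a match
        r1_hits.append(matches[0])         # Rank-1: is the top item correct?
        hits = np.where(matches)[0]
        # precision at each correct retrieval, averaged (= AP for this query)
        precisions = (np.arange(len(hits)) + 1) / (hits + 1)
        aps.append(precisions.mean())
    return float(np.mean(r1_hits)), float(np.mean(aps))
```

For example, one query of identity 1 against a gallery with identities [1, 2, 1] at distances [0.1, 0.2, 0.3] yields Rank-1 = 1.0 and AP = (1/1 + 2/3)/2 = 5/6.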

Dataset # IDs (T-Q-G) # images (T-Q-G)
Market1501 751-750-751 12936-3368-15913
CUHK03 767-700-700 7365-1400-5332
Duke 702-702-1110 16522-2228-17661
MSMT17 1041-3060-3060 30248-11659-82161
VIPeR 316-316-316 632-632-632
GRID 125-125-900 250-125-900
Table \thetable: Dataset statistics. T: Train. Q: Query. G: Gallery.
Method Publication Backbone Market1501 CUHK03 Duke MSMT17
R1 mAP R1 mAP R1 mAP R1 mAP
ShuffleNet [zhang2018shufflenet] CVPR’18 ShuffleNet 84.8 65.0 38.4 37.2 71.6 49.9 41.5 19.9
MobileNetV2 [sandler2018mobilenetv2] CVPR’18 MobileNetV2 87.0 69.5 \blue46.5 \blue46.0 75.2 55.8 \blue50.9 \blue27.0
BraidNet [wang2018person] CVPR’18 BraidNet 83.7 69.5 - - 76.4 59.5 - -
HAN [li2018harmonious] CVPR’18 Inception \blue91.2 \blue75.7 41.7 38.6 \blue80.5 \blue63.8 - -
OSNet (ours) ICCV’19 OSNet \red93.6 \red81.0 \red57.1 \red54.2 \red84.7 \red68.6 \red71.0 \red43.3
DaRe [wang2018resource] CVPR’18 DenseNet 89.0 76.0 63.3 59.0 80.2 64.5 - -
PNGAN [qian2018pose] ECCV’18 ResNet 89.4 72.6 - - 73.6 53.2 - -
KPM [shen2018end] CVPR’18 ResNet 90.1 75.3 - - 80.3 63.2 - -
MLFN [chang2018multi] CVPR’18 ResNeXt 90.0 74.3 52.8 47.8 81.0 62.8 - -
FDGAN [ge2018fd] NeurIPS’18 ResNet 90.5 77.7 - - 80.0 64.5 - -
DuATM [si2018dual] CVPR’18 DenseNet 91.4 76.6 - - 81.8 64.6 - -
Bilinear [suh2018part] ECCV’18 Inception 91.7 79.6 - - 84.4 69.3 - -
G2G [shen2018deep] CVPR’18 ResNet 92.7 82.5 - - 80.7 66.4 - -
DeepCRF [chen2018group] CVPR’18 ResNet 93.5 81.6 - - 84.9 69.5 - -
PCB [sun2018beyond] ECCV’18 ResNet 93.8 81.6 63.7 57.5 83.3 69.2 68.2 40.4
SGGNN [shen2018person] ECCV’18 ResNet 92.3 82.8 - - 81.1 68.2 - -
Mancs [wang2018mancs] ECCV’18 ResNet 93.1 82.3 65.5 60.5 84.9 71.8 - -
AANet [tay2019aanet] CVPR’19 ResNet 93.9 83.4 - - \blue87.7 \blue74.3 - -
CAMA [yang2019towards] CVPR’19 ResNet \blue94.7 84.5 \blue66.6 \blue64.2 85.8 72.9 - -
IANet [hou2019interaction] CVPR’19 ResNet 94.4 83.1 - - 87.1 73.4 75.5 46.8
DGNet [zheng2019joint] CVPR’19 ResNet \red94.8 \red86.0 - - 86.6 \red74.8 \blue77.2 \blue52.3
OSNet (ours) ICCV’19 OSNet \red94.8 \blue84.9 \red72.3 \red67.8 \red88.6 73.5 \red78.7 \red52.9
Table \thetable: Results (%) on big re-ID datasets. It is clear that OSNet achieves state-of-the-art performance on all datasets, surpassing most published methods by a clear margin. It is noteworthy that OSNet has only 2.2 million parameters, which is far fewer than the current best-performing ResNet-based methods. -: not available. : model trained from scratch. : reproduced by us. (Best and second best results in \redred and \blueblue respectively)

Implementation details. A classification layer (linear FC + softmax) is mounted on top of OSNet. Training follows the standard classification paradigm where each person identity is regarded as a unique class. Similar to [li2018harmonious, chang2018multi], cross entropy loss with label smoothing [szegedy2016rethinking] is used for supervision. For fair comparison against existing models, we implement two versions of OSNet. One is trained from scratch and the other is fine-tuned from ImageNet pre-trained weights. Person matching is based on the distance between 512-D feature vectors extracted from the last FC layer (see Table Document). Batch size and weight decay are set to 64 and 5e-4 respectively. For training from scratch, SGD is used to train the network for 350 epochs. The learning rate starts from 0.065 and is decayed by 0.1 at 150, 225 and 300 epochs. Data augmentation includes random flip, random crop and random patch. For fine-tuning, we train the network with AMSGrad [reddi2018on] and initial learning rate of 0.0015 for 150 epochs. The learning rate is decayed by 0.1 every 60 epochs. During the first 10 epochs, the ImageNet pre-trained base network is frozen and only the randomly initialised classifier is open for training. Images are resized to $256 \times 128$. Data augmentation includes random flip and random erasing [zhong2017random]. The code is based on Torchreid [torchreid]. \keypointResults on big re-ID datasets. From Table Document, we have the following observations. (1) OSNet achieves state-of-the-art performance on all datasets, outperforming most published methods by a clear margin. It is evident from Table Document that the performance on re-ID benchmarks, especially Market1501 and Duke, has been saturated lately. Therefore, the improvements obtained by OSNet are significant.
Crucially, the improvements are achieved with much smaller model size – most existing state-of-the-art re-ID models employ a ResNet50 backbone, which has more than 24 million parameters (considering their extra customised modules), while our OSNet has only 2.2 million parameters. This verifies the effectiveness of omni-scale feature learning for re-ID achieved by an extremely compact network. As OSNet is orthogonal to some methods, such as the image generation based DGNet [zheng2019joint], they can be potentially combined to further boost the re-ID performance. (2) OSNet yields strong performance with or without ImageNet pre-training. Among the very few existing lightweight re-ID models that can be trained from scratch (HAN and BraidNet), OSNet exhibits huge advantages. At R1, OSNet beats HAN/BraidNet by 2.4%/9.9% on Market1501 and 4.2%/8.3% on Duke. The margins at mAP are even larger. In addition, general-purpose lightweight CNNs are also compared without ImageNet pre-training. Table Document shows that OSNet surpasses the popular MobileNetV2 and ShuffleNet by large margins on all datasets. Note that all three networks have similar model sizes. These results thus demonstrate the versatility of our OSNet: It enables effective feature tuning from generic object categorisation tasks and offers robustness against model over-fitting when trained from scratch on datasets of moderate sizes. (3) Compared with re-ID models that deploy a multi-scale/multi-stream architecture, namely those with an Inception or ResNeXt backbone [li2018harmonious, su2017pose, chen2017person, wei2017glad, chang2018multi, si2018dual], OSNet is clearly superior. As analysed in Sec. Document, this is attributed to the unique ability of OSNet to learn heterogeneous-scale features by combining multiple homogeneous-scale features with the dynamic AG. \keypointResults on small re-ID datasets.
VIPeR and GRID are very challenging datasets for deep re-ID approaches because they have only hundreds of training images – training on the large re-ID datasets and fine-tuning on them is thus necessary. Table Document compares OSNet with six state-of-the-art deep re-ID methods. On VIPeR, it can be observed that OSNet outperforms the alternatives by a significant margin – more than 11.4% at R1. GRID is much more challenging than VIPeR because it has only 125 training identities (250 images) and extra distractors. Further, it was captured by real (operational) analogue CCTV cameras installed in busy public spaces. JLML [li2017person] is currently the best published method on GRID. It is noted that OSNet is marginally better than JLML on GRID. Overall, the strong performance of OSNet on these two small datasets is indicative of its practical usefulness in real-world applications where collecting large-scale training data is impractical.
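Returning to the from-scratch schedule given in the implementation details (SGD, initial learning rate 0.065, weight decay 5e-4, decayed by 0.1 at epochs 150/225/300 over 350 epochs), the optimiser setup can be sketched in PyTorch as follows. The momentum value and the stand-in linear model are our assumptions; the training-step body is elided.

```python
from torch import nn, optim

# stand-in for OSNet + the 751-way Market1501 classifier head
model = nn.Linear(512, 751)

optimizer = optim.SGD(model.parameters(), lr=0.065,
                      momentum=0.9,        # assumption: not stated in the text
                      weight_decay=5e-4)
scheduler = optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 225, 300], gamma=0.1)

for epoch in range(350):
    # ... one epoch of training with label-smoothed cross entropy ...
    scheduler.step()
```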

Method Backbone VIPeR GRID
MuDeep [qian2017multi] Inception 43.0 -
DeepAlign [zhao2017deeply] Inception 48.7 -
JLML [li2017person] ResNet 50.2 37.5
Spindle [zhao2017spindle] Inception 53.8 -
GLAD [wei2017glad] Inception 54.8 -
HydraPlus-Net [liu2017hydraplus] Inception 56.6 -
OSNet (ours) OSNet 68.0 38.2
Table \thetable: Comparison with deep learning approaches on VIPeR and GRID. Only Rank-1 accuracy (%) is reported. -: not available.
Model Architecture Market1501
R1 mAP
\ablStdModel + unified AG (primary model) 93.6 81.0
\ablStdModel w/ full conv + unified AG 94.0 82.7
\ablStdModel (same depth) + unified AG 91.7 77.9
\ablStdModel + concatenation 91.4 77.4
\ablStdModel + addition 92.0 78.2
\ablStdModel + separate AGs 92.9 80.2
\ablStdModel + unified AG (stream-wise) 92.6 80.0
\ablStdModel + learned-and-fixed gates 91.6 77.5
\ablStdModel 86.5 67.7
\ablStdModel + unified AG 91.7 77.0
\ablStdModel + unified AG 92.8 79.9
Table \thetable: Ablation study on architectural design choices.

Ablation experiments. Table Document evaluates our architectural design choices where our primary model is model Document. $T$ is the stream cardinality in Eq. Document. (1) Lite $3 \times 3$ vs. standard convolutions: Factorising convolutions reduces the R1 marginally by 0.4% (model Document vs. Document). This means our architecture design maintains the representational power even though the model size is reduced by more than $3\times$. (2) Omni-scale design vs. ResNeXt-like design: OSNet is transformed into a ResNeXt-like architecture by making all streams homogeneous in depth while preserving the unified AG, which refers to model Document. We observe that this variant is clearly outperformed by the primary model, with 1.9%/3.1% difference in R1/mAP. This further validates the necessity of our omni-scale design. (3) Multi-scale fusion strategy: To justify our design of the unified AG, we conduct experiments by changing how features of different scales are aggregated. The baselines are concatenation (model Document) and addition (model Document). The primary model is better than the two baselines by more than 1.6%/2.8% at R1/mAP. Nevertheless, models Document and Document are still much better than the single-scale architecture (model Document). (4) Unified AG vs. separate AGs: When separate AGs are learned for each feature stream, the model size is increased and the nice property in gradient computation (Eq. Document) is lost. Empirically, unifying the AG improves by 0.7%/0.8% at R1/mAP (model Document vs. Document), despite having fewer parameters. (5) Channel-wise gates vs. stream-wise gates: By turning the channel-wise gates into stream-wise gates (model Document), both the R1 and the mAP decline by 1%. As feature channels encapsulate sophisticated correlations and can represent numerous visual concepts [fong2018net2vec], it is advantageous to use channel-specific weights. (6) Dynamic gates vs.
static gates: In model Document, feature streams are fused by static (learned-and-then-fixed) channel-wise gates to mimic the design in [qian2017multi]. As a result, the R1/mAP drops by 2.0%/3.5% compared with that of the dynamic gates (primary model). Therefore, adapting the scale fusion to individual input images is essential. (7) Evaluation on stream cardinality: The results are substantially improved from $T = 1$ (model Document) to $T = 2$ (model Document) and gradually progress to larger $T$ (model Document). \keypointModel shrinking hyper-parameters. We can trade off between model size, computations and performance by adjusting the width multiplier and the image resolution multiplier. Table Document shows that by keeping one multiplier fixed and shrinking the other, the R1 drops off smoothly. It is worth noting that 92.2% R1 accuracy is obtained by a much shrunken version of OSNet with merely 0.2M parameters and 82M mult-adds (width multiplier 0.25, resolution multiplier 1.0). Compared with the results in Table Document, we can see that the shrunken OSNet is still very competitive against the latest proposed models, most of which are bigger in size. This indicates that OSNet has a great potential for efficient deployment in resource-constrained devices such as a surveillance camera with an AI processor.
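As a rough rule of thumb, the two multipliers trade off cost roughly quadratically: parameters scale with the square of the width multiplier, and mult-adds additionally with the square of the resolution multiplier. This is an approximation we infer from the trend in Table Document, not an exact cost model.

```python
def scaled_cost(params, mult_adds, width=1.0, resolution=1.0):
    """Approximate effect of the shrinking multipliers:
    params ~ width^2; mult-adds ~ width^2 * resolution^2."""
    return params * width ** 2, mult_adds * width ** 2 * resolution ** 2

# scaled_cost(2.2e6, 978.9e6, width=0.5) gives roughly 0.55M params and
# 244.7M mult-adds, close to the measured 0.6M / 272.9M in the table.
```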

Width mult. | # params | Res. mult. | Mult-Adds | Market1501 R1 | Market1501 mAP
1.0 | 2.2M | 1.0 | 978.9M | 94.8 | 84.9
0.75 | 1.3M | 1.0 | 571.8M | 94.5 | 84.1
0.5 | 0.6M | 1.0 | 272.9M | 93.4 | 82.6
0.25 | 0.2M | 1.0 | 82.3M | 92.2 | 77.8
1.0 | 2.2M | 0.75 | 550.7M | 94.4 | 83.7
1.0 | 2.2M | 0.5 | 244.9M | 92.0 | 80.3
1.0 | 2.2M | 0.25 | 61.5M | 86.9 | 67.3
0.75 | 1.3M | 0.75 | 321.7M | 94.3 | 82.4
0.75 | 1.3M | 0.5 | 143.1M | 92.9 | 79.5
0.75 | 1.3M | 0.25 | 35.9M | 85.4 | 65.5
0.5 | 0.6M | 0.75 | 153.6M | 92.9 | 80.8
0.5 | 0.6M | 0.5 | 68.3M | 91.7 | 78.5
0.5 | 0.6M | 0.25 | 17.2M | 85.4 | 66.0
0.25 | 0.2M | 0.75 | 46.3M | 91.6 | 76.1
0.25 | 0.2M | 0.5 | 20.6M | 88.7 | 71.8
0.25 | 0.2M | 0.25 | 5.2M | 79.1 | 56.0
Table \thetable: Results (%) of varying the width multiplier and the resolution multiplier for OSNet.
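The trade-off in the table can be sanity-checked with a rough cost model: for a convolutional layer, parameters scale roughly quadratically with the width multiplier, and mult-adds scale with both the width and the resolution multiplier squared. The layer sizes below are made up for illustration.

```python
def conv_cost(cin, cout, k, h, w, width_mult=1.0, res_mult=1.0):
    """Rough cost of one k x k conv layer under a width multiplier
    (scales channels) and a resolution multiplier (scales the feature
    map). Params grow ~width_mult^2; mult-adds ~width_mult^2 * res_mult^2."""
    cin_s, cout_s = int(cin * width_mult), int(cout * width_mult)
    h_s, w_s = int(h * res_mult), int(w * res_mult)
    params = cin_s * cout_s * k * k
    multadds = params * h_s * w_s
    return params, multadds

p_full, m_full = conv_cost(64, 64, 3, 64, 32)
p_half, m_half = conv_cost(64, 64, 3, 64, 32, width_mult=0.5, res_mult=0.5)
print(p_full // p_half, m_full // m_half)  # prints: 4 16
```

This matches the trend in the table: halving the width multiplier roughly quarters the parameter count (2.2M to 0.6M), and halving the resolution multiplier roughly quarters the mult-adds (978.9M to 244.9M) without touching parameters.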


Figure \thefigure: Image clusters of similar gating vectors. The visualisation shows that our unified aggregation gate is capable of learning the combination of homogeneous and heterogeneous scales in a dynamic manner.

Visualisation of the unified aggregation gate. As the gating vectors produced by the AG inherently encode how the omni-scale feature streams are aggregated, we can understand what the AG sub-network has learned by visualising images with similar gating vectors. To this end, we concatenate the gating vectors of the four streams in the last bottleneck, perform k-means clustering on the test images of Mix4, and select the 15 images closest to each cluster centre. Fig. Document shows four example clusters where images within the same cluster exhibit similar patterns, i.e., combinations of global-scale and local-scale appearance.

\keypointVisualisation of attention. To understand how our designs help OSNet learn discriminative features, we visualise the activations of the last convolutional feature maps to investigate where the network focuses to extract features, i.e., attention. Following [zagoruyko2017paying], the activation maps are computed as the sum of the absolute-valued feature maps along the channel dimension, followed by a spatial ℓ2 normalisation. Fig. Document compares the activation maps of OSNet and the single-scale baseline (model Document in Table Document). It is clear that OSNet captures the local discriminative patterns of Person A (e.g., the clothing logo) that distinguish Person A from Person B. In contrast, the single-scale model over-concentrates on the face region, which is unreliable for re-ID due to the low resolution of surveillance images. This qualitative result shows that our multi-scale design and unified aggregation gate enable OSNet to identify subtle differences between visually similar persons – a vital requirement for accurate re-ID.
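The attention computation described above is straightforward to reproduce; a minimal NumPy sketch (feature-map dimensions are illustrative):

```python
import numpy as np

def activation_map(feat):
    """Attention map as used for the visualisation: sum of absolute-valued
    feature maps along the channel dimension, followed by a spatial L2
    normalisation [zagoruyko2017paying]. `feat` has shape (C, H, W)."""
    amap = np.abs(feat).sum(axis=0)      # (H, W)
    norm = np.linalg.norm(amap)          # spatial L2 norm over the map
    return amap / (norm + 1e-12)

feat = np.random.randn(512, 16, 8)       # e.g. last conv feature map
amap = activation_map(feat)              # unit-norm (16, 8) attention map
```

The resulting map can be upsampled to the input resolution and overlaid on the image to produce figures like Fig. Document.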



Figure \thefigure: Each triplet contains, from left to right, original image, activation map of OSNet and activation map of single-scale baseline. These images indicate that OSNet can detect subtle differences between visually similar persons.

\thesubsection Evaluation on Person Attribute Recognition

Although person attribute recognition is a category-recognition problem, it is closely related to person re-ID in that omni-scale feature learning is also critical: some attributes, such as 'view angle', are global; others, such as 'wearing glasses', are local; and heterogeneous-scale features are needed for recognising attributes such as 'age'.

\keypointDatasets and settings. We use PA-100K [liu2017hydraplus], the largest person attribute recognition dataset. PA-100K contains 80K training images and 10K test images. Each image is annotated with 26 attributes, e.g., male/female, wearing glasses and carrying a hand bag. Following [liu2017hydraplus], we adopt five evaluation metrics: mean Accuracy (mA) and four instance-based metrics, namely Accuracy (Acc), Precision (Prec), Recall (Rec) and F1-score (F1). Please refer to [li2016richly] for the detailed definitions.

\keypointImplementation details. A sigmoid-activated attribute prediction layer is added on top of OSNet. Following [li2015multi, liu2017hydraplus], we use the weighted multi-label classification loss for supervision. For data augmentation, we adopt random translation and mirroring. OSNet is trained from scratch with SGD (momentum 0.9, initial learning rate 0.065) for 50 epochs. The learning rate is multiplied by 0.1 at epochs 30 and 40.

\keypointResults. Table Document compares OSNet with two state-of-the-art methods [li2015multi, liu2017hydraplus] on PA-100K. OSNet outperforms both alternatives on all five evaluation metrics. Fig. Document provides some qualitative results, showing that OSNet is particularly strong at predicting attributes that can only be inferred by examining features of heterogeneous scales, such as age and gender.
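A weighted multi-label classification loss of the kind cited above can be sketched as a sigmoid cross-entropy whose positive/negative terms are re-weighted by each attribute's positive ratio, so that rare attributes are not swamped. The exact weighting scheme of [li2015multi] may differ; the exponential weights below are an illustrative assumption.

```python
import numpy as np

def weighted_multilabel_loss(logits, labels, pos_ratio):
    """Weighted sigmoid cross-entropy for attribute recognition.
    logits, labels: (N, A); pos_ratio: (A,) fraction of positives per
    attribute in the training set. Rare positives are up-weighted via
    w_pos = exp(1 - p), w_neg = exp(p) (illustrative choice)."""
    p = np.clip(pos_ratio, 1e-6, 1.0 - 1e-6)
    prob = 1.0 / (1.0 + np.exp(-logits))
    prob = np.clip(prob, 1e-12, 1.0 - 1e-12)
    w_pos, w_neg = np.exp(1.0 - p), np.exp(p)
    loss = -(labels * w_pos * np.log(prob)
             + (1.0 - labels) * w_neg * np.log(1.0 - prob))
    return loss.mean()

# toy batch: 4 images, 26 attributes (as in PA-100K)
rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 26))
labels = (rng.random((4, 26)) < 0.3).astype(float)
loss = weighted_multilabel_loss(logits, labels, np.full(26, 0.3))
```

In practice the positive ratios are computed once from the training annotations and kept fixed during training.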

Method | mA | Acc | Prec | Rec | F1
DeepMar [li2015multi] | 72.7 | 70.4 | 82.2 | 80.4 | 81.3
HydraPlusNet [liu2017hydraplus] | 74.2 | 72.2 | 83.0 | 82.1 | 82.5
OSNet | 74.6 | 76.0 | 88.3 | 82.5 | 85.3
Table \thetable: Results (%) on pedestrian attribute recognition (PA-100K).


Figure \thefigure: Likelihoods of ground-truth attributes predicted by OSNet. Correct/incorrect classifications at a 50% threshold are shown in green/red.

\thesubsection Evaluation on CIFAR


Datasets and settings. CIFAR10/100 [krizhevsky2009learning] has 50K training images and 10K test images, each of size 32×32. OSNet is trained following the settings in [he2016identity, zagoruyko2016wide]. Apart from the default OSNet in Table Document, a deeper version is constructed by increasing the number of bottlenecks per stage from 2-2-2 to 3-8-6. Error rate is reported as the metric.

\keypointResults. Table Document compares OSNet with a number of state-of-the-art object recognition models. The results suggest that, although OSNet is originally designed for the fine-grained instance recognition task of re-ID, it is also highly competitive on object category recognition. Note that CIFAR100 is more difficult than CIFAR10 because it contains ten times fewer training images per class (500 vs. 5,000). However, OSNet's relative performance on CIFAR100 is stronger, indicating that it is better at capturing useful patterns from limited data – hence its excellent performance on the data-scarce re-ID benchmarks.

Method | Depth | # params | CIFAR10 | CIFAR100
pre-act ResNet [he2016identity] | 164 | 1.7M | 5.46 | 24.33
pre-act ResNet [he2016identity] | 1001 | 10.2M | 4.92 | 22.71
Wide ResNet [zagoruyko2016wide] | 40 | 8.9M | 4.97 | 22.89
Wide ResNet [zagoruyko2016wide] | 16 | 11.0M | 4.81 | 22.07
DenseNet [huang2017densely] | 40 | 1.0M | 5.24 | 24.42
DenseNet [huang2017densely] | 100 | 7.0M | 4.10 | 20.20
OSNet | 78 | 2.2M | 4.41 | 19.21
OSNet | 210 | 4.6M | 4.18 | 18.88
Table \thetable: Error rates (%) on the CIFAR datasets. All methods use translation and mirroring for data augmentation. Pointwise and depthwise convolutions are counted as separate layers.

Ablation study. We compare our primary model with model Document (single-scale baseline in Table Document) and model Document (four streams + addition) on CIFAR10/100. Table Document shows that both omni-scale feature learning and unified AG contribute positively to the overall performance of OSNet.

Architecture | CIFAR10 | CIFAR100
single-scale baseline | 5.49 | 21.78
multi-stream + addition | 4.72 | 20.24
multi-stream + unified AG | 4.41 | 19.21
Table \thetable: Ablation study of OSNet on CIFAR10/100.
Method | Width mult. | # params | Mult-Adds | Top1
SqueezeNet [iandola2016squeezenet] | 1.0 | 1.2M | - | 57.5
MobileNetV1 [howard2017mobilenets] | 0.5 | 1.3M | 149M | 63.7
MobileNetV1 [howard2017mobilenets] | 0.75 | 2.6M | 325M | 68.4
MobileNetV1 [howard2017mobilenets] | 1.0 | 4.2M | 569M | 70.6
ShuffleNet [zhang2018shufflenet] | 1.0 | 2.4M | 140M | 67.6
ShuffleNet [zhang2018shufflenet] | 1.5 | 3.4M | 292M | 71.5
ShuffleNet [zhang2018shufflenet] | 2.0 | 5.4M | 524M | 73.7
MobileNetV2 [sandler2018mobilenetv2] | 1.0 | 3.4M | 300M | 72.0
MobileNetV2 [sandler2018mobilenetv2] | 1.4 | 6.9M | 585M | 74.7
OSNet (ours) | 0.5 | 1.1M | 424M | 69.5
OSNet (ours) | 0.75 | 1.8M | 885M | 73.5
OSNet (ours) | 1.0 | 2.7M | 1511M | 75.5
Table \thetable: Single-crop top1 accuracy (%) on the ImageNet-2012 validation set. Width mult.: width multiplier. M: million.

\thesubsection Evaluation on ImageNet

In this section, results on the larger-scale ImageNet 1K-category dataset (LSVRC-2012 [deng2009imagenet]) are presented.

\keypointImplementation. OSNet is trained with SGD, an initial learning rate of 0.4, a batch size of 1024 and a weight decay of 4e-5 for 120 epochs. For data augmentation, we use random crops and random mirroring. For benchmarking, we report single-crop top1 accuracy on the LSVRC-2012 validation set [deng2009imagenet].

\keypointResults. Table Document shows that OSNet outperforms the alternative lightweight models by a clear margin. In particular, OSNet (1.0×) surpasses MobileNetV2 (1.0×) by 3.5% and MobileNetV2 (1.4×) by 0.8%. It is noteworthy that MobileNetV2 (1.4×) is around 2.5× larger than our OSNet (1.0×). OSNet (0.75×) performs on par with ShuffleNet (2.0×) and outperforms ShuffleNet (1.5×)/(1.0×) by 2.0%/5.9%. These results strongly indicate that OSNet has great potential for a broad range of visual recognition tasks. Note that although its model size is smaller, OSNet does incur more mult-add operations than its main competitors, mainly due to the multi-stream design. However, if both the model size and the number of mult-adds need to be small for a certain application, the latter can be reduced by implementing the pointwise convolutions as group convolutions with channel shuffling [zhang2018shufflenet]. Overall, the results on CIFAR and ImageNet show that omni-scale feature learning is beneficial beyond re-ID and should be considered for a broad range of visual recognition tasks.
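The mult-adds reduction from grouping mentioned above can be illustrated with a simple cost count for a 1×1 (pointwise) convolution: with `groups` groups, each output channel only sees `cin / groups` inputs, so the cost drops by a factor of `groups`. The layer sizes below are made up for illustration.

```python
def pointwise_multadds(cin, cout, h, w, groups=1):
    """Mult-adds of a 1x1 convolution over an h x w feature map.
    With groups > 1 the cost shrinks by a factor of `groups`, which is
    how ShuffleNet-style group convolutions with channel shuffling
    [zhang2018shufflenet] would reduce OSNet's mult-adds (rough model)."""
    return (cin // groups) * cout * h * w

# a 1x1 conv with 256 -> 256 channels on a 16 x 8 feature map
full = pointwise_multadds(256, 256, 16, 8)             # 8,388,608
grouped = pointwise_multadds(256, 256, 16, 8, groups=4)  # 2,097,152
print(full // grouped)  # prints: 4
```

Channel shuffling is needed alongside grouping so that information still flows between groups across consecutive layers.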

\thesection Conclusion

We presented OSNet, a lightweight CNN architecture that is capable of learning omni-scale feature representations. Extensive experiments on six person re-ID datasets demonstrated that OSNet achieved state-of-the-art performance, despite its lightweight design. The superior performance on object categorisation tasks and a multi-label attribute recognition task further suggested that OSNet is of wide interest to visual recognition beyond re-ID.


  1. We use scale and receptive field interchangeably.
  2. https://github.com/KaiyangZhou/deep-person-reid
  3. The subtle difference between the two orders lies in when the channel width is increased: the pointwise→depthwise order increases the channel width before spatial aggregation.
  4. Width multiplier with magnitude smaller than 1 works on all layers in OSNet except the last FC layer whose feature dimension is fixed to 512.
  5. RandomPatch works by (1) constructing a patch pool that stores randomly extracted image patches and (2) pasting a random patch selected from the patch pool onto an input image at random position.
  6. centre crop from .