As an instance-level recognition problem, person re-identification (re-ID) relies on discriminative features, which not only capture different spatial scales but also encapsulate an arbitrary combination of multiple scales. We call features of both homogeneous and heterogeneous scales omni-scale features. In this paper, a novel deep re-ID CNN is designed, termed omni-scale network (OSNet), for omni-scale feature learning. This is achieved by designing a residual block composed of multiple convolutional streams, each detecting features at a certain scale. Importantly, a novel unified aggregation gate is introduced to dynamically fuse multi-scale features with input-dependent channel-wise weights. To efficiently learn spatial-channel correlations and avoid overfitting, the building block uses pointwise and depthwise convolutions. By stacking such blocks layer-by-layer, our OSNet is extremely lightweight and can be trained from scratch on existing re-ID benchmarks. Despite its small model size, OSNet achieves state-of-the-art performance on six person re-ID datasets, outperforming most large-sized models, often by a clear margin. Code and models are available at: https://github.com/KaiyangZhou/deep-person-reid.
Person re-identification (re-ID), a fundamental task in distributed multi-camera surveillance, aims to match people appearing in different non-overlapping camera views. As an instance-level recognition problem, person re-ID faces two major challenges, as illustrated in Fig. Document. First, the intra-class (instance/identity) variations are typically large due to changes in camera viewing conditions. For instance, both people in Figs. Document(a) and (b) carry a backpack; the view change across cameras (frontal to back) brings large appearance changes in the backpack area, making matching the same person difficult. Second, there are also small inter-class variations – people in public space often wear similar clothes; from a distance, as is typical in surveillance videos, they can look incredibly similar (see the impostors for all four people in Fig. Document).
To overcome these two challenges, the key to re-ID is to learn discriminative features. We argue that such features need to be of omni-scale, defined as the combination of variable homogeneous scales and heterogeneous scales, each of which is composed of a mixture of multiple scales. The need for omni-scale features is evident from Fig. Document. To match people and distinguish them from impostors, features corresponding to small local regions (e.g. shoes, glasses) and global whole-body regions are equally important. For example, given the query image in Fig. Document(a) (left), looking at the global-scale features (e.g. young man, a white T-shirt + grey shorts combo) would narrow down the search to the true match (middle) and an impostor (right). Now the local-scale features come into play. The shoe region gives away the fact that the person on the right is an impostor (trainers vs. sandals). However, for more challenging cases, even features of variable homogeneous scales would not be enough; more complicated and richer features that span multiple scales are required. For instance, to eliminate the impostor in Fig. Document(d) (right), one needs features that represent a white T-shirt with a specific logo in the front. Note that the logo is not distinctive on its own – without the white T-shirt as context, it can be confused with many other patterns. Similarly, a white T-shirt is likely to be seen everywhere in summer (e.g. Fig. Document(a)). It is the unique combination, captured by heterogeneous-scale features spanning both small (logo size) and medium (upper body size) scales, that makes the features most effective.
Nevertheless, none of the existing re-ID models addresses omni-scale feature learning. In recent years, deep convolutional neural networks (CNNs) have been widely used in person re-ID to learn discriminative features [chang2018multi, li2018harmonious, liu2017hydraplus, si2018dual, sun2018beyond, xu2018attention, yang2019towards, zheng2019joint]. However, most of the CNNs adopted, such as ResNet [he2016deep], were originally designed for object category-level recognition tasks that are fundamentally different from the instance-level recognition task in re-ID. For the latter, omni-scale features are more important, as explained earlier. A few attempts at learning multi-scale features also exist [qian2017multi, chang2018multi]. Yet, none has the ability to learn features of both homogeneous and heterogeneous scales.
In this paper, we present OSNet, a novel CNN architecture designed for learning omni-scale feature representations.
The underpinning building block consists of multiple convolutional streams with different receptive field sizes.
Apart from omni-scale feature learning, another key design principle adopted in OSNet is to construct a lightweight network. This brings a couple of benefits:
(1) re-ID datasets are often of moderate size due to the difficulties in collecting across-camera matched person images. A lightweight network with a small number of parameters is thus less prone to overfitting.
(2) In a large-scale surveillance application (e.g. city-wide surveillance using thousands of cameras), the most practical way for re-ID is to perform feature extraction at the camera end. Instead of sending the raw videos to a central server, only the extracted features need to be sent. For on-device processing, small re-ID networks are clearly preferred. To this end, in our building block, we factorise standard convolutions with pointwise and depthwise convolutions [howard2017mobilenets, sandler2018mobilenetv2].
The contributions of this work are thus both the concept of omni-scale feature learning and an effective and efficient implementation of it in OSNet.
The end result is a lightweight re-ID model that is more than one order of magnitude smaller than the popular ResNet50-based models, but performs better: OSNet achieves state-of-the-art performance on six person re-ID datasets, beating much larger networks, often by a clear margin. We also demonstrate the effectiveness of OSNet on object category recognition tasks, namely CIFAR [krizhevsky2009learning] and ImageNet [deng2009imagenet], and a multi-label person attribute recognition task. The results suggest that omni-scale feature learning is useful beyond instance recognition and can be considered for a broad range of visual recognition tasks. Code and pre-trained models are available in Torchreid [torchreid].
\thesection Related Work
Deep re-ID architectures. Most existing deep re-ID CNNs [li2014deepreid, ahmed2015improved, varior2016gated, shen2018end, guo2018efficient, subramaniam2016deep, wang2018person] borrow architectures designed for generic object categorisation problems, such as ImageNet 1K object classification. Recently, some architectural modifications have been introduced to reflect the fact that images in re-ID datasets contain instances of only one object category (i.e., person) that mostly stand upright. To exploit the upright body pose, [sun2018beyond, zhang2017alignedreid, fu2019horizontal, wang2018learning] add auxiliary supervision signals to features pooled horizontally from the last convolutional feature maps. [si2018dual, song2018mask, li2018harmonious] devise attention mechanisms to focus feature learning on the foreground person regions. In [zhao2017spindle, su2017pose, xu2018attention, suh2018part, tian2018eliminating, zhang2019densely], body part-specific CNNs are learned by means of off-the-shelf pose detectors. In [li2017person, li2017learning, zhao2017deeply], CNNs are branched to learn representations of global and local image regions. In [yu2017devil, chang2018multi, liu2017hydraplus, wang2018resource], multi-level features extracted at different layers are combined. However, none of these re-ID networks learns multi-scale features explicitly at each layer of the network as our OSNet does – they typically rely on an external pose model and/or hand-pick specific layers for multi-scale learning. Moreover, heterogeneous-scale features computed from a mixture of different scales are not considered.

Multi-scale feature learning. As far as we know, the concept of omni-scale deep feature learning has not been introduced before. Nonetheless, the importance of multi-scale feature learning has been recognised recently, and the multi-stream building block design has also been adopted.
Compared to a number of re-ID networks with multi-stream building blocks [chang2018multi, qian2017multi], OSNet is significantly different. Specifically, the layer design in [chang2018multi] is based on ResNeXt [xie2017aggregated], where each stream learns features at the same scale, while the streams in our block have different scales. Different from [chang2018multi], the network in [qian2017multi] is built on Inception [szegedy2015going, szegedy2016rethinking], where the multiple streams were originally designed for low computational cost with a handcrafted mixture of convolution and pooling layers. In contrast, our building block uses a scale-controlling factor to diversify the spatial scales to be captured. Moreover, [qian2017multi] fuses multi-stream features with learnable but fixed-once-learned streamwise weights, and only at the final block, whereas we fuse multi-scale features within each building block using dynamic (input-dependent) channel-wise weights to learn combinations of multi-scale patterns. Therefore, only our OSNet is capable of learning omni-scale features, with each feature channel potentially capturing discriminative features of either a single scale or a weighted mixture of multiple scales. Our experiments (see Sec. Document) show that OSNet significantly outperforms the models in [chang2018multi, qian2017multi].

Lightweight network designs. With embedded AI becoming topical, lightweight CNN design has attracted increasing attention. SqueezeNet [iandola2016squeezenet] compresses feature dimensions using $1\times 1$ convolutions. IGCNet [zhang2017interleaved], ResNeXt [xie2017aggregated] and CondenseNet [huang2018condense] leverage group convolutions. Xception [chollet2017xception] and the MobileNet series [howard2017mobilenets, sandler2018mobilenetv2] are based on depthwise separable convolutions. Dense convolutions are grouped with channel shuffling in ShuffleNet [zhang2018shufflenet].
In terms of lightweight design, our OSNet is similar to MobileNet by employing factorised convolutions, with some modifications that empirically work better for omni-scale feature learning.
\thesection Omni-Scale Feature Learning
In this section, we present OSNet, which specialises in learning omni-scale feature representations for the person re-ID task. We start with the factorised convolutional layer and then introduce the omni-scale residual block and the unified aggregation gate.
\thesubsection Depthwise Separable Convolutions
To reduce the number of parameters, we adopt depthwise separable convolutions [howard2017mobilenets, chollet2017xception]. The basic idea is to divide a convolution layer $\mathrm{ReLU}(w * x)$ with kernel $w \in \mathbb{R}^{k \times k \times c \times c'}$ into two separate layers $\mathrm{ReLU}((v \circ u) * x)$ with depthwise kernel $v \in \mathbb{R}^{k \times k \times 1 \times c'}$ and pointwise kernel $u \in \mathbb{R}^{1 \times 1 \times c \times c'}$, where $*$ denotes convolution, $k$ the kernel size, $c$ the input channel width and $c'$ the output channel width. Given an input tensor of height $h$ and width $w$, the computational cost is reduced from $h \cdot w \cdot k^2 \cdot c \cdot c'$ to $h \cdot w \cdot (k^2 + c) \cdot c'$, and the number of parameters from $k^2 \cdot c \cdot c'$ to $(k^2 + c) \cdot c'$. In our implementation, we use pointwise $\rightarrow$ depthwise (instead of depthwise $\rightarrow$ pointwise), which turns out to be more effective for omni-scale feature learning.
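The cost arithmetic above can be sketched as a small helper; the function name is ours and the dimensions below are illustrative:

```python
def conv_costs(k, c_in, c_out, h, w):
    """Mult-add and parameter counts for a standard k x k convolution versus
    the factorised pointwise -> depthwise version used in the Lite layer."""
    standard = {
        "mult_adds": h * w * k * k * c_in * c_out,   # h * w * k^2 * c * c'
        "params": k * k * c_in * c_out,              # k^2 * c * c'
    }
    # pointwise 1x1 (c_in -> c_out) followed by a depthwise k x k on c_out channels
    factorised = {
        "mult_adds": h * w * (k * k + c_in) * c_out,  # h * w * (k^2 + c) * c'
        "params": (k * k + c_in) * c_out,             # (k^2 + c) * c'
    }
    return standard, factorised
```

For a $3\times 3$ layer with 256 input and output channels, the factorised form needs roughly $9\times$ fewer parameters.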
\thesubsection Omni-Scale Residual Block
The building block in our architecture is the residual bottleneck [he2016deep], equipped with the Lite $3\times 3$ layer (see Fig. Document(a)). Given an input $x$, this bottleneck aims to learn a residual $\tilde{x}$ with a mapping function $F$, i.e.
\[
y = x + \tilde{x}, \quad \text{with} \quad \tilde{x} = F(x),
\]
where $F$ represents a Lite $3\times 3$ layer that learns single-scale features (scale = 3). Note that here the $1\times 1$ layers are ignored in notation, as they are used to manipulate feature dimension and do not contribute to the aggregation of spatial information [he2016deep, xie2017aggregated].

Multi-scale feature learning. To achieve multi-scale feature learning, we extend the residual function $F$ by introducing a new dimension, exponent $t$, which represents the scale of the feature. For $F^t$, with $t \geq 1$, we stack $t$ Lite $3\times 3$ layers, and this results in a receptive field of size $(2t+1) \times (2t+1)$. Then, the residual to be learned, $\tilde{x}$, is the sum of incremental scales of representations up to $T$:
\[
\tilde{x} = \sum_{t=1}^{T} F^t(x), \quad T \geq 1.
\]
When $T = 1$, Eq. Document reduces to Eq. Document (see Fig. Document(a)). In this paper, our bottleneck is set with $T = 4$ (i.e. the largest receptive field is $9\times 9$), as shown in Fig. Document(b). The shortcut connection allows features at smaller scales learned in the current layer to be preserved effectively in the next layers, thus enabling the final features to capture a whole range of spatial scales.
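The receptive-field growth of stacked $3\times 3$ layers can be checked with one line (the helper name is ours):

```python
def receptive_field(t, k=3):
    """Receptive field of t stacked k x k convolutions with stride 1:
    each extra layer adds (k - 1) pixels, giving t * (k - 1) + 1."""
    return t * (k - 1) + 1
```

With $k = 3$ this reproduces the $(2t+1)$ rule, so $T = 4$ streams cover scales $3, 5, 7$ and $9$.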
Unified aggregation gate. So far, each stream can give us features of a specific scale, i.e., they are scale homogeneous. To learn omni-scale features, we propose to combine the outputs of different streams in a dynamic way, i.e., different weights are assigned to different scales according to the input image, rather than being fixed after training. More specifically, the dynamic scale-fusion is achieved by a novel aggregation gate (AG), which is a learnable neural network. Let $x^t$ denote $F^t(x)$; the omni-scale residual $\tilde{x}$ is obtained by
\[
\tilde{x} = \sum_{t=1}^{T} G(x^t) \odot x^t,
\]
where $G(x^t)$ is a vector of length $c'$ spanning the entire channel dimension of $x^t$ and $\odot$ denotes the Hadamard product. $G$ is implemented as a mini-network composed of a non-parametric global average pooling layer [lin2013network] and a multi-layer perceptron (MLP) with one ReLU-activated hidden layer, followed by the sigmoid activation. To reduce parameter overhead, we follow [woo2018cbam, hu2018senet] and reduce the hidden dimension of the MLP with a reduction ratio, which is set to 16. It is worth pointing out that, in contrast to using a single scalar-output function that provides a coarse scale-fusion, we choose to use channel-wise weights, i.e., the output of the AG network is a vector rather than a scalar for the $t$-th stream. This design results in a more fine-grained fusion that tunes each feature channel. In addition, the weights are dynamically computed by being conditioned on the input data. This is crucial for re-ID, as the test images contain people of identities different from those in training; an adaptive, input-dependent feature-scale fusion strategy is therefore more desirable. Note that in our architecture the AG is shared across all feature streams in the same omni-scale residual block (dashed box in Fig. Document(b)). This is similar in spirit to the convolution filter parameter sharing in CNNs, and brings a number of advantages. First, the number of parameters is independent of $T$ (the number of streams), so the model is more scalable. Second, unifying the AG (sharing the same AG module across streams) has a nice property during backpropagation. Concretely, suppose the network is supervised by a differentiable loss function $\mathcal{L}$ whose gradient can be computed; the gradient w.r.t. $G$, based on Eq. Document, is
\[
\frac{\partial \mathcal{L}}{\partial G} = \frac{\partial \mathcal{L}}{\partial \tilde{x}} \cdot \frac{\partial \tilde{x}}{\partial G} = \frac{\partial \mathcal{L}}{\partial \tilde{x}} \odot \sum_{t=1}^{T} x^t.
\]
The second term in Eq. Document indicates that the supervision signals from all streams are gathered together to guide the learning of $G$. This desirable property disappears when each stream has its own gate.
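A minimal numpy sketch of the shared gate and the fusion in Eq. Document follows; the class name and the random weight initialisation are ours, and the real module is of course trained end-to-end:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class UnifiedAG:
    """Sketch of the shared aggregation gate:
    global average pooling -> ReLU MLP (reduction ratio 16) -> sigmoid."""
    def __init__(self, channels, reduction=16, seed=0):
        rng = np.random.default_rng(seed)
        hidden = channels // reduction
        self.w1 = rng.standard_normal((channels, hidden)) * 0.01
        self.w2 = rng.standard_normal((hidden, channels)) * 0.01

    def __call__(self, x):            # x: (C, H, W), one stream output
        s = x.mean(axis=(1, 2))        # global average pooling -> (C,)
        g = sigmoid(np.maximum(s @ self.w1, 0.0) @ self.w2)  # channel-wise weights in (0, 1)
        return g[:, None, None] * x    # Hadamard product, broadcast over space

def omni_scale_residual(streams, gate):
    """Fuse the T stream outputs with one gate shared across all streams."""
    return sum(gate(x) for x in streams)
```

Because the same `gate` object is applied to every stream, the gradients from all $T$ streams flow into the same parameters, mirroring the backpropagation property above.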
\thesubsection Network Architecture
OSNet is constructed by simply stacking the proposed lightweight bottleneck layer-by-layer, without any effort to customise the blocks at different depths (stages) of the network. The detailed network architecture is shown in Table Document. For comparison, the same network architecture with standard convolutions has 6.9 million parameters and 3,384.9 million mult-add operations, which are considerably larger than our OSNet with the Lite $3\times 3$ convolution layer design. The standard OSNet in Table Document can be easily scaled up or down in practice, to balance model size, computational cost and performance. To this end, we use a width multiplier and an image resolution multiplier.
|stage||output||layer|
|conv1||128×64, 64||7×7 conv, stride 2|
|||64×32, 64||3×3 max pool, stride 2|
|conv2||64×32, 256||bottleneck × 2|
|transition||64×32, 256||1×1 conv|
|||32×16, 256||2×2 average pool, stride 2|
|conv3||32×16, 384||bottleneck × 2|
|transition||32×16, 384||1×1 conv|
|||16×8, 384||2×2 average pool, stride 2|
|conv4||16×8, 512||bottleneck × 2|
|conv5||16×8, 512||1×1 conv|
|gap||1×1, 512||global average pool|
Relation to prior architectures. In terms of multi-stream design, OSNet is related to Inception [szegedy2015going] and ResNeXt [xie2017aggregated], but differs in several crucial aspects. First, the multi-stream design in OSNet strictly follows the scale-incremental principle dictated by the exponent $t$ (Eq. Document). Specifically, different streams have different receptive fields but are built with the same Lite $3\times 3$ layers (Fig. Document(b)). Such a design is more effective at capturing a wide range of scales. In contrast, Inception was originally designed to have low computational costs by sharing computations across multiple streams; its structure, which includes mixed operations of convolution and pooling, was therefore handcrafted. ResNeXt has multiple equal-scale streams and thus learns representations at the same scale. Second, Inception/ResNeXt aggregates features by concatenation/addition, while OSNet uses a unified AG (Eq. Document), which facilitates the learning of combinations of multi-scale features. Critically, this means the fusion is dynamic and adaptive to each individual input image. Therefore, OSNet’s architecture is fundamentally different from that of Inception/ResNeXt. Third, OSNet uses factorised convolutions, so the building block, and subsequently the whole network, is lightweight. Compared with SENet [hu2018senet], OSNet is conceptually different. Concretely, SENet aims to re-calibrate the feature channels by re-scaling the activation values of a single stream, whereas OSNet is designed to selectively fuse multiple feature streams of different receptive field sizes in order to learn omni-scale features (see Fig. Document).
\thesubsection Evaluation on Person Re-Identification
Datasets and settings. We conduct experiments on six widely used person re-ID datasets: Market1501 [zheng2015scalable], CUHK03 [li2014deepreid], DukeMTMC-reID (Duke) [ristani2016performance, zheng2017unlabeled], MSMT17 [wei2018person], VIPeR [gray2007evaluating] and GRID [loy2009multi]. Detailed dataset statistics are provided in Table Document. The first four are considered as ‘big’ datasets even though their sizes (around 30K training images for the largest MSMT17) are fairly moderate; while VIPeR and GRID are generally too small to train without using those big datasets for pre-training. For CUHK03, we use the 767/700 split [zhong2017rerank] with the detected images. For VIPeR and GRID, we first train a single OSNet from scratch using training images from Market1501, CUHK03, Duke and MSMT17 (Mix4), and then perform fine-tuning. Following [li2017person], the results on VIPeR and GRID are averaged over 10 random splits. Such a fine-tuning strategy has been commonly adopted by other deep learning approaches [liu2017hydraplus, wei2017glad, zhao2017spindle, li2017person, zhao2017deeply]. Cumulative matching characteristics (CMC) Rank-1 accuracy and mAP are used as evaluation metrics.
|Dataset||# IDs (T-Q-G)||# images (T-Q-G)|
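As a reference for the evaluation metrics, a minimal single-camera sketch of CMC Rank-1 and mAP might look as follows (the standard cross-camera filtering of gallery candidates is omitted for brevity, and the function name is ours):

```python
def rank1_and_map(dist, q_ids, g_ids):
    """Rank-1 accuracy and mAP from a query x gallery distance matrix."""
    r1_hits, aps = 0, []
    for qi, q_id in enumerate(q_ids):
        # rank gallery by increasing distance to the query
        order = sorted(range(len(g_ids)), key=lambda gi: dist[qi][gi])
        matches = [1 if g_ids[gi] == q_id else 0 for gi in order]
        r1_hits += matches[0]
        # average precision over the ranked list
        hits, precisions = 0, []
        for rank, m in enumerate(matches, start=1):
            if m:
                hits += 1
                precisions.append(hits / rank)
        aps.append(sum(precisions) / max(hits, 1))
    return r1_hits / len(q_ids), sum(aps) / len(aps)
```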
A classification layer (linear FC + softmax) is mounted on the top of OSNet.
Training follows the standard classification paradigm where each person identity is regarded as a unique class. Similar to [li2018harmonious, chang2018multi], cross entropy loss with label smoothing [szegedy2016rethinking] is used for supervision.
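Cross entropy with label smoothing replaces the one-hot target with $(1-\epsilon)$ on the true class and $\epsilon / K$ elsewhere; a small sketch with the common $\epsilon = 0.1$ (our function name, scalar Python for clarity):

```python
import math

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross entropy against a label-smoothed target distribution."""
    k = len(logits)
    m = max(logits)                                   # for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    log_probs = [l - log_z for l in logits]
    # smoothed target: (1 - eps) on the true class, eps / k on every class
    smooth = [(1 - eps if i == target else 0.0) + eps / k for i in range(k)]
    return -sum(q * lp for q, lp in zip(smooth, log_probs))
```

Compared with the plain loss, smoothing penalises over-confident predictions, which helps on moderately sized re-ID datasets.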
For fair comparison against existing models, we implement two versions of OSNet. One is trained from scratch and the other is fine-tuned from ImageNet pre-trained weights.
Person matching is based on the $\ell_2$ distance of 512-D feature vectors extracted from the last FC layer (see Table Document).
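Matching then reduces to a pairwise Euclidean distance matrix between query and gallery features, as in this sketch (function name ours):

```python
def l2_distance_matrix(queries, gallery):
    """Pairwise Euclidean distances between query and gallery feature vectors."""
    return [[sum((a - b) ** 2 for a, b in zip(q, g)) ** 0.5 for g in gallery]
            for q in queries]
```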
Batch size and weight decay are set to 64 and 5e-4 respectively.
For training from scratch, SGD is used to train the network for 350 epochs. The learning rate starts at 0.065 and is decayed by 0.1 at epochs 150, 225 and 300. Data augmentation includes random flip, random crop and random patch.
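The staircase schedule above can be expressed as a one-line helper (our naming):

```python
def step_lr(epoch, base_lr=0.065, milestones=(150, 225, 300), gamma=0.1):
    """Learning rate at a given epoch: multiply by gamma at each milestone."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)
```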
|architecture||R1||mAP|
|+ unified AG (primary model)||93.6||81.0|
|w/ full conv + unified AG||94.0||82.7|
|(same depth) + unified AG||91.7||77.9|
|+ separate AGs||92.9||80.2|
|+ unified AG (stream-wise)||92.6||80.0|
|+ learned-and-fixed gates||91.6||77.5|
|+ unified AG||91.7||77.0|
|+ unified AG||92.8||79.9|
Ablation experiments. Table Document evaluates our architectural design choices, where our primary model is model Document. $T$ is the stream cardinality in Eq. Document. (1) Lite $3\times 3$ vs. standard convolutions: factorising convolutions reduces the R1 only marginally, by 0.4% (model Document vs. Document). This means our architecture design maintains the representational power even though the model size is substantially reduced. (2) OSNet vs. a ResNeXt-like design: OSNet is transformed into a ResNeXt-like architecture by making all streams homogeneous in depth while preserving the unified AG (model Document). We observe that this variant is clearly outperformed by the primary model, with a 1.9%/3.1% difference in R1/mAP. This further validates the necessity of our omni-scale design. (3) Multi-scale fusion strategy: to justify our design of the unified AG, we conduct experiments that change how features of different scales are aggregated. The baselines are concatenation (model Document) and addition (model Document). The primary model is better than the two baselines by more than 1.6%/2.8% at R1/mAP. Nevertheless, models Document and Document are still much better than the single-scale architecture (model Document). (4) Unified AG vs. separate AGs: when separate AGs are learned for each feature stream, the model size increases and the nice property in gradient computation (Eq. Document) is lost. Empirically, unifying the AG improves R1/mAP by 0.7%/0.8% (model Document vs. Document), despite having fewer parameters. (5) Channel-wise gates vs. stream-wise gates: by turning the channel-wise gates into stream-wise gates (model Document), both the R1 and the mAP decline by 1%. As feature channels encapsulate sophisticated correlations and can represent numerous visual concepts [fong2018net2vec], it is advantageous to use channel-specific weights. (6) Dynamic gates vs.
static gates: in model Document, feature streams are fused by static (learned-and-then-fixed) channel-wise gates to mimic the design in [qian2017multi]. As a result, the R1/mAP drops by 2.0%/3.5% compared with the dynamic gates (primary model). Therefore, adapting the scale fusion to individual input images is essential. (7) Evaluation on stream cardinality: the results improve substantially from $T = 1$ (model Document) to $T = 2$ (model Document) and progress gradually to $T = 4$ (model Document).

Model shrinking hyper-parameters. We can trade off model size, computations and performance by adjusting the width multiplier and the image resolution multiplier. Table Document shows that, by keeping one multiplier fixed and shrinking the other, the R1 drops off smoothly. It is worth noting that 92.2% R1 accuracy is obtained by a much-shrunken version of OSNet with merely 0.2M parameters and 82M mult-adds. Compared with the results in Table Document, we can see that the shrunken OSNet is still very competitive against the latest proposed models, most of which are far bigger. This indicates that OSNet has great potential for efficient deployment on resource-constrained devices such as a surveillance camera with an AI processor.
Visualisation of unified aggregation gate. As the gating vectors produced by the AG inherently encode how the omni-scale feature streams are aggregated, we can understand what the AG sub-network has learned by visualising images with similar gating vectors. To this end, we concatenate the gating vectors of the four streams in the last bottleneck, perform k-means clustering on the test images of Mix4, and select the top-15 images closest to the cluster centres. Fig. Document shows four example clusters where images within the same cluster exhibit similar patterns, i.e., combinations of global-scale and local-scale appearance.

Visualisation of attention. To understand how our designs help OSNet learn discriminative features, we visualise the activations of the last convolutional feature maps to investigate where the network focuses to extract features, i.e. its attention. Following [zagoruyko2017paying], the activation maps are computed as the sum of absolute-valued feature maps along the channel dimension, followed by a spatial $\ell_2$ normalisation. Fig. Document compares the activation maps of OSNet and the single-scale baseline (model Document in Table Document). It is clear that OSNet captures the local discriminative patterns of Person A (e.g., the clothing logo) that distinguish Person A from Person B. In contrast, the single-scale model over-concentrates on the face region, which is unreliable for re-ID due to the low resolution of surveillance images. This qualitative result therefore shows that our multi-scale design and unified aggregation gate enable OSNet to identify subtle differences between visually similar persons – a vital requirement for accurate re-ID.
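The activation-map computation described above is compact enough to sketch directly (function name ours):

```python
import numpy as np

def activation_map(feat, eps=1e-12):
    """Spatial attention map from a (C, H, W) feature tensor: sum |features|
    over the channel dimension, then l2-normalise the resulting map."""
    a = np.abs(feat).sum(axis=0)            # (C, H, W) -> (H, W)
    return a / (np.linalg.norm(a) + eps)    # spatial l2 normalisation
```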
\thesubsection Evaluation on Person Attribute Recognition
Although person attribute recognition is a category-recognition problem, it is closely related to person re-ID in that omni-scale feature learning is also critical: some attributes such as ‘view angle’ are global; others such as ‘wearing glasses’ are local; heterogeneous-scale features are also needed for recognising attributes such as ‘age’.

Datasets and settings. We use PA-100K [liu2017hydraplus], the largest person attribute recognition dataset. PA-100K contains 80K training images and 10K test images. Each image is annotated with 26 attributes, e.g., male/female, wearing glasses, carrying a hand bag. Following [liu2017hydraplus], we adopt five evaluation metrics: mean Accuracy (mA) and four instance-based metrics, namely Accuracy (Acc), Precision (Prec), Recall (Rec) and F1-score (F1). Please refer to [li2016richly] for the detailed definitions.

Implementation details. A sigmoid-activated attribute prediction layer is added on top of OSNet. Following [li2015multi, liu2017hydraplus], we use the weighted multi-label classification loss for supervision. For data augmentation, we adopt random translation and mirroring. OSNet is trained from scratch with SGD, momentum of 0.9 and an initial learning rate of 0.065 for 50 epochs. The learning rate is decayed by 0.1 at 30 and 40 epochs.

Results. Table Document compares OSNet with two state-of-the-art methods [li2015multi, liu2017hydraplus] on PA-100K. OSNet outperforms both alternatives on all five evaluation metrics. Fig. Document provides some qualitative results, which show that OSNet is particularly strong at predicting attributes that can only be inferred by examining features of heterogeneous scales, such as age and gender.
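A sketch of a weighted multi-label sigmoid loss follows. The particular re-weighting scheme (up-weight rare positives via $e^{1-r_j}$ and common negatives via $e^{r_j}$, with $r_j$ the positive ratio of attribute $j$) is one common choice and an assumption here, not necessarily the exact recipe of the cited works:

```python
import math

def weighted_multilabel_loss(logits, labels, pos_ratio):
    """Weighted sigmoid cross entropy over a set of binary attributes.
    pos_ratio[j] is the fraction of training images with attribute j positive;
    the exp-based weights are an assumed (illustrative) re-weighting scheme."""
    total = 0.0
    for z, y, r in zip(logits, labels, pos_ratio):
        p = 1.0 / (1.0 + math.exp(-z))                    # sigmoid probability
        w = math.exp(1.0 - r) if y == 1 else math.exp(r)  # rare cases weigh more
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(logits)
```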
\thesubsection Evaluation on CIFAR
Datasets and settings. CIFAR10/100 [krizhevsky2009learning] has 50K training images and 10K test images, each of size $32\times 32$. OSNet is trained following the settings in [he2016identity, zagoruyko2016wide]. Apart from the default OSNet in Table Document, a deeper version is constructed by increasing the number of staged bottlenecks from 2-2-2 to 3-8-6. Error rate is reported as the metric.

Results. Table Document compares OSNet with a number of state-of-the-art object recognition models. The results suggest that, although OSNet was originally designed for the fine-grained object instance recognition task in re-ID, it is also highly competitive on object category recognition tasks. Note that CIFAR100 is more difficult than CIFAR10 because it contains ten times fewer training images per class (500 vs. 5,000). However, OSNet’s performance on CIFAR100 is relatively stronger, indicating that it is better at capturing useful patterns with limited data – hence its excellent performance on the data-scarce re-ID benchmarks.
|model||depth||# params||CIFAR10 err (%)||CIFAR100 err (%)|
|pre-act ResNet [he2016identity]||164||1.7M||5.46||24.33|
|pre-act ResNet [he2016identity]||1001||10.2M||4.92||22.71|
|Wide ResNet [zagoruyko2016wide]||40||8.9M||4.97||22.89|
|Wide ResNet [zagoruyko2016wide]||16||11.0M||4.81||22.07|
Ablation study. We compare our primary model with model Document (single-scale baseline in Table Document) and model Document (four streams + addition) on CIFAR10/100. Table Document shows that both omni-scale feature learning and unified AG contribute positively to the overall performance of OSNet.
|+ unified AG||4.41||19.21|
\thesubsection Evaluation on ImageNet
In this section, the results on the larger-scale ImageNet 1K category dataset (LSVRC-2012 [deng2009imagenet]) are presented.
OSNet is trained with SGD, an initial learning rate of 0.4, batch size of 1024 and weight decay of 4e-5 for 120 epochs. For data augmentation, we use $224\times 224$ random crops and random mirroring. For benchmarking, we report single-crop results.
We presented OSNet, a lightweight CNN architecture that is capable of learning omni-scale feature representations. Extensive experiments on six person re-ID datasets demonstrated that OSNet achieved state-of-the-art performance, despite its lightweight design. The superior performance on object categorisation tasks and a multi-label attribute recognition task further suggested that OSNet is of wide interest to visual recognition beyond re-ID.
- We use scale and receptive field interchangeably.
- The subtle difference between the two orders lies in where the channel width is increased: pointwise $\rightarrow$ depthwise increases the channel width before spatial aggregation.
- Width multiplier with magnitude smaller than 1 works on all layers in OSNet except the last FC layer whose feature dimension is fixed to 512.
- RandomPatch works by (1) constructing a patch pool that stores randomly extracted image patches and (2) pasting a random patch selected from the pool onto an input image at a random position.
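The two steps of RandomPatch can be sketched as follows; patch size, pool capacity and paste probability are illustrative assumptions, and images are represented as plain 2-D lists for brevity:

```python
import random

def random_patch(img, pool, patch_hw=(8, 8), p=0.5, pool_cap=50, rng=random):
    """Sketch of the RandomPatch augmentation: harvest a patch from the current
    image into a pool, then maybe paste a pooled patch at a random position."""
    h, w = len(img), len(img[0])
    ph, pw = patch_hw
    # (1) harvest a random patch from this image into the pool
    if h >= ph and w >= pw:
        top, left = rng.randrange(h - ph + 1), rng.randrange(w - pw + 1)
        pool.append([row[left:left + pw] for row in img[top:top + ph]])
        if len(pool) > pool_cap:
            pool.pop(0)                       # keep the pool bounded
    # (2) paste a random pooled patch at a random position
    if pool and rng.random() < p:
        patch = rng.choice(pool)
        top, left = rng.randrange(h - ph + 1), rng.randrange(w - pw + 1)
        for i, prow in enumerate(patch):
            img[top + i][left:left + pw] = prow
    return img
```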
- $224\times 224$ centre crop from a $256\times 256$ image.