Multi-Scale Attention Network for Crowd Counting

Abstract

In crowd counting datasets, people appear at different scales, depending on their distance from the camera. To address this issue, we propose a novel multi-branch scale-aware attention network that exploits the hierarchical structure of convolutional neural networks and generates, in a single forward pass, multi-scale density predictions from different layers of the architecture. To aggregate these maps into our final prediction, we present a new soft attention mechanism that learns a set of gating masks. Furthermore, we introduce a scale-aware loss function to regularize the training of the different branches and guide them to specialize on a particular scale. As this new training requires annotations for the size of each head, we also propose a simple, yet effective technique to estimate them automatically. Finally, we present an ablation study on each of these components and compare our approach against the literature on 4 crowd counting datasets: UCF-QNRF, ShanghaiTech A & B and UCF_CC_50. Our approach achieves state-of-the-art results on all of them, with a remarkable improvement on UCF-QNRF (over 25% reduction in error).


1 Introduction

Crowd counting is the task of predicting the number of people present in an image and, in recent years, it has attracted growing interest in the research community. The computer vision community has tackled this task in a variety of ways: early works either counted based on the outputs of a body or head detector [1, 2, 3], or learned a mapping from the global or local features of an image to the predicted count [4, 5, 6]. More recently, thanks to the ability of convolutional neural networks to learn local patterns, works have started to learn density maps that predict not only the count, but also the spatial extent of the crowd [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18].

Despite this progress, crowd counting remains a challenging task due to background clutter, heavy occlusions and scale variations. Of these, scale is the issue that has received the largest amount of attention in recent literature [7, 10, 13, 11, 9, 8, 12, 18, 14].

Figure 1: One of the most impactful issues in crowd counting is scale variation. For example, two similar people appear visually different when they are not at the same distance from the camera (a-b); and the same person can appear small in one image, but much larger in another (c-d). We tackle the former case with a novel scale-aware convolutional neural network, and the latter with a simple, yet effective image size regularization approach.

In this paper, we tackle the notion of scale that deals with visual changes in people’s appearance with respect to their distance from the camera. As pictured in fig. 1a-b, two similar individuals can appear very different depending on their relative location in the scene. To solve this issue, we propose a novel scale-aware deep convolutional neural network. The hierarchical structure of convolutional neural networks progressively expands the receptive field of the network feature maps, implicitly capturing information at different scales. Inspired by the skip branches in FCN [19] and SSD [20], we propose to directly generate multiple density maps from these intermediate feature maps. As the feature map generated by the last convolutional layer has the largest receptive field, it carries high-level semantic information that can be used to localize large heads; on the other hand, feature maps generated by the intermediate layers are more accurate and robust in counting extremely small heads (i.e., the crowds), and they contain important details about the spatial layout of the people and low-level texture patterns.

In order to aggregate the density maps generated from different layers of our network, we propose a novel soft attention mechanism that learns a set of gating masks, one for each map. Our masks learn to attend to large heads from the density map predicted by the last convolutional layer and smaller ones from earlier layers. While this can be trained by only providing supervision to the final density estimate, we found that performance improves by supervising the intermediate density estimates as well. We propose a new scale-aware loss function to further regularize our multi-scale estimates and guide them to specialize on a particular head size. Furthermore, as head size information is not available in any crowd counting dataset, we also propose a novel approach to automatically estimate them. Our approach combines the geometry-adaptive technique of  [7] with a new bounding-box-adaptive technique.

In our experiments we show that our approach achieves state-of-the-art results on four major crowd counting datasets: UCF-QNRF [17], ShanghaiTech A & B [7] and UCF_CC_50 [21], with a substantial improvement on UCF-QNRF (over 25% reduction in error). Moreover, in our ablation study we analyze the density maps generated by different layers of our network and show that each specializes on a different range of scales.

To summarize, we make the following contributions:

  1. a new network architecture that generates multi-scale density maps from its intermediate layers (sec. 3);

  2. a new scale-aware attention mechanism to aggregate these maps into our final prediction (sec. 3.2);

  3. a new scale-aware loss function to further help regularize these maps during training (sec. 3.3);

  4. a simple, yet effective technique to estimate the size of each head in an image, in a completely automatic way (sec. 3.4).

2 Related work

Multi-scale models for crowd counting.

People appear at different sizes in crowd counting images due to large perspective changes in the scenes as well as varying image resolution. In order to address this issue, many recent works on crowd counting have focused on learning multi-scale models.

Most previous works use a multi-column architecture [7, 10, 13, 11, 9, 8, 22]. Zhang et al. trained a custom network with three CNN columns, each with a different receptive field to capture a specific range of head sizes (MCNN [7]). Running three CNN columns was, however, slow, and Sam et al. proposed to predict which column to run for each input image patch (Switch-CNN [10]). Later, Sam et al. further extended their previous work by training a mixture of experts (each one equivalent to a column) in an incrementally growing fashion (IG-CNN [13]). Furthermore, Sindagi et al. proposed a new architecture where MCNN is enriched with two additional columns capturing global and local context (CP-CNN [11]). Instead of designing each column with a different receptive field, Boominathan et al. proposed using columns of different depths, where the deep CNN captured large crowds and the shallow CNN smaller ones (CrowdNet [8]). Finally, Onoro-Rubio et al. (Hydra CNN [9]) and Kang et al. (AFS-FCN [22]) represented columns as pyramid levels, either over image patches at multiple scales (the former) or over the full image fed to the same network multiple times at different resolutions (the latter). While all these multi-column architectures have shown promising results, they present several disadvantages: they have a large number of model parameters, which often results in difficulties during training, and they are slow at inference, as multiple CNNs need to be run.

To overcome these limitations, recent works have focused on multi-scale, single-column architectures [12, 18, 14]. Zhang et al. proposed an architecture that combines the feature maps of two layers through a skip connection (saCNN [12]). Cao et al. proposed an encoder-decoder network, where the encoder learns scale diversity in its features by using an aggregation module that combines filters of different sizes (SANet [18]). Finally, Li et al. replaced some pooling layers in the CNN with dilated convolutional filters at different rates, which enlarge the receptive field of the feature maps without losing spatial resolution (CSRNet [14]).

In this paper, we present a single-column network architecture that mimics multi-columns by predicting multi-scale density maps from different layers of the network. Our architecture takes advantage of the multi-column approaches' ability to predict multi-scale density maps, yet it is much faster to compute and requires far fewer parameters. Moreover, differently from previous multi-column approaches, our architecture aggregates its predictions using a novel attention-based mechanism that selects each branch based on the size of each head in the image.

Figure 2: Our multi-branch architecture. Intermediate feature maps are used to generate density map predictions D_j through different branches. The scale-aware attention masks α_j are used to aggregate these maps and generate our final prediction D. Finally, a new scale-aware loss is used to regularize the training of our branches and further help them learn different scale variations.

Attention-based mechanism. Attention models have been widely used for many computer vision tasks like image classification [23, 24], object detection [25, 26], semantic segmentation [27, 28], saliency detection [29] and, very recently, crowd counting [22]. These models work by learning an intermediate attention map that is used to select the most relevant information for visual analysis. The works most similar to ours are those of Chen et al. [27] and Kang et al. [22]. Both approaches extract multi-scale features from several resized input images and use an attention mechanism to weight the importance of each pixel of each feature map. One clear drawback of these approaches is that their inference is slow, as each test image needs to be re-sized and fed through the CNN model multiple times. Instead, our approach is much faster: it requires a single input image and a single pass through the model, as our multi-scale features are pooled from different layers of the same network.

3 Our approach

In sec. 3.1 we present our baseline network for estimating the density map and its training loss. In sec. 3.2 we describe how we extend this baseline with (i) our novel multi-branch density prediction architecture and (ii) our attention mechanism for selecting between these branches. In sec. 3.3, we then describe our novel scale-aware loss function, which guides each density prediction branch to specialize on a particular head size. This loss requires a head size estimate during training. As head size information is not available in any public dataset for crowd counting, in sec. 3.4 we present a novel approach to estimate it automatically.

3.1 Baseline network for crowd counting

Like other density-based approaches [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] for crowd counting, given an image we feed it to a fully convolutional network and estimate a density map D (fig. 2). Then, we sum all the values in this map to obtain the final count.

Our baseline network consists of three components: a backbone network, a regression head and an upsampling layer. The image is fed into the backbone, which progressively downsamples the spatial resolution to produce a feature map with a large receptive field, but at a fraction of the input image resolution. These features are fed into the regression head to produce a density map. Bi-linear upsampling is then used to bring the density estimates back to the original image resolution.
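As a rough sketch (not the authors' released code), this baseline can be written in PyTorch as below. The VGG-16 backbone and the 128/64-channel regression head follow sec. 4.3; the 3x3 kernel sizes, the 1x1 output layer and the choice to cut the backbone at conv5_3 are our assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class BaselineCounter(nn.Module):
    """Backbone -> regression head -> bilinear upsampling back to input resolution."""

    def __init__(self):
        super().__init__()
        # VGG-16 convolutional layers up to conv5_3 + ReLU (assumed cut-off point).
        self.backbone = vgg16(weights=None).features[:30]
        # Regression head: two convolutions (128 and 64 channels), then a 1-channel output.
        self.head = nn.Sequential(
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.backbone(x)        # low resolution, large receptive field
        density = self.head(feats)      # per-pixel density estimate
        # Bring the density map back to the original image resolution.
        return F.interpolate(density, size=(h, w), mode="bilinear", align_corners=False)
```

The predicted count is simply the sum of the upsampled density map.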

During training, we use a pixel-wise Euclidean loss on the density map output:

\mathcal{L}_{den} = \frac{1}{2N} \sum_{x} \left( D(x) - D^{gt}(x) \right)^2    (1)

where D(x) is the estimated density at pixel location x, D^gt(x) is its corresponding ground truth value and N is the number of pixels. We follow the method of MCNN [7] to generate the ground truth density map D^gt, blurring each head point in an image with a Gaussian kernel.
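A minimal sketch of the ground truth generation and the loss of eq. 1, assuming a fixed Gaussian width sigma (the actual kernel choice follows MCNN [7] and is not restated here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def ground_truth_density(head_points, height, width, sigma=4.0):
    """Place a unit impulse at every annotated head and blur it with a Gaussian.

    `sigma` is a placeholder value. The resulting map sums to the number of heads.
    """
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in head_points:                 # (column, row) annotations
        density[int(y), int(x)] += 1.0
    return gaussian_filter(density, sigma)


def euclidean_loss(pred, gt):
    """Pixel-wise Euclidean loss of eq. 1."""
    return 0.5 * np.mean((pred - gt) ** 2)
```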

3.2 Scale-aware soft attention masks

Our approach enriches the baseline with S density estimates D_1, ..., D_S, with the idea that each map will specialize to perform well on a specific range of head sizes. Our network estimates all density maps in a single forward pass by branching the features from intermediate layers of our backbone and sending each into its own regression head. Then, to aggregate these density estimates and produce a single density estimate D, we use a soft attention mechanism that learns a set of gating masks α_1, ..., α_S, one for each branch. Each mask α_j is used to re-weight the pixels of its corresponding density estimate D_j to produce the final density estimate D as follows:

D = \sum_{j=1}^{S} \alpha_j \odot D_j    (2)

where ⊙ refers to the element-wise product. The attention masks are generated by the attention block, which takes as input the last feature map from our backbone network, passes it through an attention head and produces an S-channel map of logits z. These are then fed to a softmax layer to produce the masks:

\alpha_j(x) = \frac{\exp\left( z_j(x) \right)}{\sum_{k=1}^{S} \exp\left( z_k(x) \right)}    (3)

where α_j(x) and z_j(x) are the values of the corresponding maps at pixel location x. The softmax ensures that the attention masks act as a weighted average over the density predictions.

We train this network end-to-end with the same loss used in sec. 3.1, applied only to the final density estimate D. The intuition is that the attention masks α_j will learn to attend to large heads from the density map predicted by the last branch (D_S), as it is derived from a feature map with a large receptive field. Conversely, smaller heads will be attended by density estimates from earlier branches (D_1, ..., D_{S-1}), as these branches have smaller receptive fields and higher spatial resolution, thus capturing finer details in the image.
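The following sketch shows one way eqs. 2 and 3 can be implemented in PyTorch; the two-layer attention head is a hypothetical design, as the text only states that an attention head maps the last backbone feature map to S logit channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionAggregator(nn.Module):
    """Fuse S branch density maps with softmax gating masks (eqs. 2 and 3)."""

    def __init__(self, in_channels=512, num_branches=3):
        super().__init__()
        # Hypothetical attention head producing one logit map per branch.
        self.attention_head = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_branches, 1),
        )

    def forward(self, last_feature_map, branch_densities):
        # branch_densities: list of S tensors of shape (B, 1, H, W), already upsampled.
        logits = self.attention_head(last_feature_map)            # (B, S, h, w)
        logits = F.interpolate(logits, size=branch_densities[0].shape[-2:],
                               mode="bilinear", align_corners=False)
        masks = torch.softmax(logits, dim=1)                      # eq. 3: sums to 1 over S
        stacked = torch.cat(branch_densities, dim=1)              # (B, S, H, W)
        return (masks * stacked).sum(dim=1, keepdim=True)         # eq. 2
```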

3.3 Scale-aware loss regularization

In eq. 2, the error signal propagated back to the j-th branch is modulated by the attention mask α_j, i.e., ∂L_den/∂D_j = α_j ⊙ ∂L_den/∂D. Instead of propagating the whole error signal back to every branch, these masks force each branch to focus only on improving the crowd counting accuracy in selected areas. While fig. 3 shows that the attention masks mostly attend to heads of different sizes, as intended, the network has no explicit regularization that enforces this behavior.

Here we present a new scale-aware loss function to further regularize each branch estimate D_j and guide it to specialize on a particular head size. To achieve this, we add a scale-aware loss to each branch, which measures the distance between the branch's predicted density map and the ground truth density map only in areas of the image with heads in a target size range for that branch. In this way, each branch only needs to perform well on its own scale.

For each ground truth head point i we estimate its head size s_i and assign it to one of S head-size bins, one per branch. The method for estimating s_i is described in sec. 3.4. To generate each scale supervision mask M_j, we set to 1 the regions around each training head assigned to bin j, and to 0 everywhere else.

This supervision guides each map D_j to correctly predict the heads in its scale range, but it does not penalize heads outside of it. We compute the new scale-aware loss as follows:

\mathcal{L}^{scale}_{j} = \frac{1}{2N} \sum_{x} M_j(x) \left( D_j(x) - D^{gt}(x) \right)^2    (4)

Our final loss is the combination of the loss on the final density map (eq. 1) and our scale-aware losses on the intermediate branches of the network:

\mathcal{L} = \mathcal{L}_{den} + \lambda \sum_{j=1}^{S} \mathcal{L}^{scale}_{j}    (5)

where λ refers to the regularization weight.
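A sketch of the scale supervision masks and the combined loss of eqs. 4 and 5. The square region of side s_i marked around each head is our assumption about what "the regions around each training head" means, and λ = 0.1 is taken from sec. 4.3:

```python
import numpy as np


def scale_supervision_masks(head_points, head_sizes, bin_edges, height, width):
    """Binary masks M_j: 1 in a region around every head whose size falls in bin j."""
    num_bins = len(bin_edges) + 1
    masks = np.zeros((num_bins, height, width), dtype=np.float32)
    for (x, y), s in zip(head_points, head_sizes):
        j = int(np.digitize(s, bin_edges))             # which size bin this head falls in
        r = max(1, int(round(s / 2)))                  # assumed: square region of side s
        y0, y1 = max(0, int(y) - r), min(height, int(y) + r + 1)
        x0, x1 = max(0, int(x) - r), min(width, int(x) + r + 1)
        masks[j, y0:y1, x0:x1] = 1.0
    return masks


def total_loss(final_pred, branch_preds, gt_density, masks, lam=0.1):
    """Eq. 5: loss on the final map plus masked per-branch losses (eq. 4)."""
    loss = 0.5 * np.mean((final_pred - gt_density) ** 2)
    for pred_j, mask_j in zip(branch_preds, masks):
        loss += lam * 0.5 * np.mean(mask_j * (pred_j - gt_density) ** 2)
    return loss
```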

3.4 Estimating the size of each head

The scale-aware loss regularization presented in the previous section requires an estimate s_i of the diameter of each head i; however, head sizes are not annotated in any crowd counting dataset. In this section we present a new method to estimate them. We combine the popular geometry-adaptive technique of [7] with a new bounding-box-adaptive technique that estimates head sizes based on the output of a head detector. More specifically, given head i, we estimate its size as follows:

s_i = \min\left( s^{B}_{i},\; s^{G}_{i} \right)    (6)

We compute the bounding-box-adaptive estimate s^B_i by first running a person head detector. Then, for each ground truth head point, we estimate its scale as the median size of the k nearest head detections:

s^{B}_{i} = \operatorname{median}_{j \in \mathcal{N}^{B}_{k}(i)} \; \frac{w_j + h_j}{2}    (7)

where N^B_k(i) are the k detected bounding boxes whose centers are closest to head i, and w_j and h_j are the width and height of bounding box j, respectively. This estimate is only as good as the detector: we found that our detector works well most of the time, but it fails when people are too small and too close together. Thus, we augment this prediction with the geometry-adaptive estimate s^G_i from Zhang et al. [7]. For each head, this measure is computed as half the mean distance to the k nearest heads:

s^{G}_{i} = \frac{1}{2k} \sum_{j \in \mathcal{N}^{G}_{k}(i)} \left\| p_i - p_j \right\|_2    (8)

where k is the number of neighbors, N^G_k(i) are the k nearest ground truth heads and p_i is the location of ground truth head annotation i. This measure works well for crowded scenes, but not when people are further apart, thus complementing our bounding-box-adaptive measure well.
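A sketch of this head size estimation, using scipy k-d trees for the nearest-neighbor queries. The per-box size (w+h)/2, the value k=3 and the use of a per-head minimum to combine the two estimates are our assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree


def box_adaptive_size(head_points, det_boxes, k=3):
    """s^B_i: median size of the k detected boxes whose centers are closest (eq. 7)."""
    head_points = np.asarray(head_points, dtype=float)
    det_boxes = np.asarray(det_boxes, dtype=float)          # (M, 4) as x1, y1, x2, y2
    centers = np.stack([(det_boxes[:, 0] + det_boxes[:, 2]) / 2,
                        (det_boxes[:, 1] + det_boxes[:, 3]) / 2], axis=1)
    sizes = ((det_boxes[:, 2] - det_boxes[:, 0]) +
             (det_boxes[:, 3] - det_boxes[:, 1])) / 2       # assumed size: (w + h) / 2
    _, idx = cKDTree(centers).query(head_points, k=k)
    return np.median(sizes[idx], axis=1)


def geometry_adaptive_size(head_points, k=3):
    """s^G_i: half the mean distance to the k nearest annotated heads (eq. 8)."""
    head_points = np.asarray(head_points, dtype=float)
    dists, _ = cKDTree(head_points).query(head_points, k=k + 1)  # first hit is the point itself
    return 0.5 * dists[:, 1:].mean(axis=1)


def head_size(head_points, det_boxes, k=3):
    """Combined estimate of eq. 6 (taking the per-head minimum is our assumption)."""
    return np.minimum(box_adaptive_size(head_points, det_boxes, k),
                      geometry_adaptive_size(head_points, k))
```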

4 Experiments

4.1 Evaluation metrics

In crowd counting, the count accuracy is measured by two error metrics: Mean Absolute Error (MAE) and Mean Squared Error (MSE), which are defined as follows:

\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| C_i - C^{gt}_i \right|    (9)
\mathrm{MSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( C_i - C^{gt}_i \right)^2 }    (10)

where N is the number of test images, C_i the predicted count for image i and C^gt_i its ground truth count.
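These metrics are straightforward to compute from per-image counts; a short sketch:

```python
import numpy as np


def mae_mse(pred_counts, gt_counts):
    """MAE (eq. 9) and MSE (eq. 10, a root mean squared error over image counts)."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.abs(pred - gt).mean()
    mse = np.sqrt(((pred - gt) ** 2).mean())
    return mae, mse
```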

4.2 Datasets

We evaluate our approach on 4 publicly available crowd counting datasets: UCF-QNRF [17], ShanghaiTech A & B [7] and UCF_CC_50 [21].

UCF-QNRF (2018) is the most recently released dataset and it consists of 1,535 challenging images from Flickr, web search and Hajj footage. The number of people in an image (i.e., the count) varies from 49 to 12,865, making this the dataset with the largest crowd variation. Furthermore, the average image resolution is larger than in all other datasets, causing the absolute size of a person's head to vary drastically, from a few pixels to more than 1,500.

ShanghaiTech (2016) consists of two parts: A and B. Part A contains 482 images of dense scenes like stadiums and parades; its count varies from 33 to 3139. Part B contains 716 images of street scenes from fixed cameras capturing sparser crowds; its count varies from 12 to 578.

UCF_CC_50 (2013) consists of 50 black and white, low resolution images and its count varies from 94 to 4543. We follow the dataset instructions and evaluate our results using 5-fold cross-validation.

Figure 3: We show the attention masks α_j and their corresponding density maps D_j for branches 1, 2 and 3. In general, branch 1 attends to small people (the crowd), while branch 3 attends to larger ones and background regions. After our scale-aware loss regularization is introduced, α_1 and α_2 get higher weights for small and medium-size people, respectively.

4.3 Implementation details

Network architecture. Similar to other recent crowd counting works [8, 10, 11, 14], we use a VGG-16 backbone [30]. We use three branches (S = 3), taken from the VGG feature maps conv3_3, conv4_3 and conv5_3 of blocks 3, 4 and 5, respectively. Each regression head consists of two convolutional layers with 128 and 64 channels, respectively, followed by a final convolutional regression layer.
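A sketch of how the three branch inputs can be tapped from a torchvision VGG-16; the slice indices assume torchvision's layer ordering for vgg16().features, and the 3x3 kernels in the regression heads are our assumption:

```python
import torch.nn as nn
from torchvision.models import vgg16


def make_branched_backbone():
    """Split VGG-16 so that conv3_3, conv4_3 and conv5_3 features can be tapped."""
    features = vgg16(weights=None).features
    stage1 = features[:16]    # up to conv3_3 + ReLU  (1/4 resolution, 256 channels)
    stage2 = features[16:23]  # up to conv4_3 + ReLU  (1/8 resolution, 512 channels)
    stage3 = features[23:30]  # up to conv5_3 + ReLU  (1/16 resolution, 512 channels)
    return nn.ModuleList([stage1, stage2, stage3])


def regression_head(in_channels):
    """One head per branch: 128- and 64-channel convolutions, then a 1-channel output."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 1, 1),
    )
```

Running the stages sequentially and feeding each intermediate output to its own head yields the three density maps D_1, D_2 and D_3.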

Network hyper-parameters. We split the training set of the UCF-QNRF dataset into train (80%) and validation (20%) and chose the optimal model hyper-parameters on the validation set. We repeated this process on ShanghaiTech and, interestingly, we observed the optimal parameters to be very similar to those for UCF-QNRF. We initialized all the new layers in our network with random weights drawn from a Gaussian distribution with zero mean and a standard deviation of 0.0033. We used the Adam optimizer [31] and trained our networks for 120 epochs with an initial learning rate of 1e-4, which drops to 1e-5 after 80 epochs. We used a batch size of 64, feeding the network crops randomly sampled from different locations in each training image. The Gaussian kernel used to generate the ground truth density maps follows previous works [7]. To select our head size bins, we split all training heads by size (estimated as described in sec. 3.4) into three buckets with roughly the same number of training examples. This ensures that each intermediate branch has sufficient training data. In the supplementary material (fig. 1), we show visual examples of scale supervision masks generated by binning the head sizes. Finally, we set λ in eq. 5 to 0.1, which provides a good balance between relying only on the final loss (L_den) and relying only on the intermediate scale-aware losses (L^scale_j).
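The equal-count binning of head sizes can be done with quantiles; a one-function sketch:

```python
import numpy as np


def equal_count_bin_edges(training_head_sizes, num_bins=3):
    """Size thresholds that split the training heads into `num_bins` equally populated buckets."""
    quantiles = np.linspace(0, 1, num_bins + 1)[1:-1]     # e.g. [1/3, 2/3] for three bins
    return np.quantile(training_head_sizes, quantiles)
```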

Head size estimation. For both the geometry-adaptive and bounding-box-adaptive estimations, we use the same number of neighbors k. For the bounding-box-adaptive estimation, we trained a Faster R-CNN [32] head detector with a ResNet-50 backbone [33]. We used the same hyper-parameters as [34], but reduced the smallest anchor box size from 32 pixels to 8, in order to be able to localize extremely small heads. We trained our detector on the combination of two public datasets: SCUT-HEAD [35] and Pascal-Parts [36]. SCUT-HEAD contains annotations for around 111k heads, which are visually similar to those in crowd counting images. Pascal-Parts, on the other hand, contains annotations for only 7.5k heads, but it offers a large selection of extremely useful and difficult background examples. We found the combination of these two complementary datasets to lead to strong detection performance (fig. 4).

4.4 Analysis of our method

In this section we explore some of our model components and analyze their outputs. We conduct all experiments on the UCF-QNRF dataset, as it is the largest both in number of images and in diversity of crowd counts (sec. 4.2).

Multi-scale density maps (D_j). Li et al. [14] showed that the three columns of MCNN [7] learned similar information instead of specializing to a specific scale. Here we investigate the predictions of our branches and to what extent they learn different scale information (fig. 3). Interestingly, even without our scale-aware loss, our approach is capable of learning different scale information: branch 1 (output D_1) has stronger activations on smaller people, as it relates small-size people with low-level texture patterns, while branches 2 and 3 make far fewer errors on medium and large people, as they operate on larger receptive fields. This complementarity in scale information is essential for building better scale-aware representations.

Attention masks (α_j). In fig. 3 we also show the attention masks generated by our approach. Our network learns distinctive attention masks for each branch. In general, α_3 has higher weights for large-size people and background regions, while α_1 gives higher weights to small-scale people. Interestingly, without our scale-aware loss regularization, the masks learn to attend mostly to the density map predicted by branch 3 (i.e., the red regions in α_3). On the other hand, when the scale-aware loss is used, α_1 and α_2 get higher weights for small and medium-size people, respectively. This demonstrates the importance of using our regularization to help the branches better specialize to their respective scales.

Aggregating multi-scale maps. We compare our soft attention mechanism (sec. 3.2), used to aggregate our multi-scale density predictions, against other popular aggregation methods: ‘average’, which is popular in semantic segmentation [19], ‘max’, which is popular in human pose estimation [37], and ‘concatenation+conv’, which has been used in several multi-column works for crowd counting [7, 11, 12]. Results are presented in table 1. ‘Max’ produces the largest error, as it tends to over-estimate the count; ‘concatenation+conv’ and ‘average’ work better, but the best performing approach is our attention mechanism. This result demonstrates the effectiveness of our scale-aware aggregation mechanism for fusing multi-scale density maps.

Aggregation method MAE MSE
Average 123.2 206.9
Max 144.6 227.9
Concatenation + Conv 128.3 210.1
Attention (Ours) 116.7 184.5
Table 1: Evaluation of different pooling techniques to combine the different branches of the network.
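For reference, the three baselines in table 1 amount to simple pooling operations over the stacked branch maps; a sketch, where the learned convolution used for ‘concatenation+conv’ is an assumption:

```python
import torch


def aggregate(branch_densities, method="average", conv=None):
    """Aggregation baselines of table 1, given a list of S maps of shape (B, 1, H, W)."""
    stacked = torch.cat(branch_densities, dim=1)            # (B, S, H, W)
    if method == "average":
        return stacked.mean(dim=1, keepdim=True)
    if method == "max":
        return stacked.max(dim=1, keepdim=True).values
    if method == "concat_conv":
        return conv(stacked)                                 # learned S -> 1 convolution
    raise ValueError(f"unknown aggregation method: {method}")
```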
Figure 4: Head size estimation. Each map is normalized independently based on its largest (dark red) and smallest (dark blue) head. s^G tends to perform poorly in sparse regions where heads are far from each other, while s^B tends to predict slightly larger sizes than in reality on undetected heads. By combining these two signals, we are able to produce very accurate head-size maps (s).

Our head size estimation approach. We present some visual results (fig. 4) for our head size estimation approach (sec. 3.4). The figure shows two images with the bounding boxes detected by our head detector and the corresponding head sizes estimated by the popular s^G, our s^B and our final s. For visualization purposes, we color each head based on its size, where dark red is used for the largest head in each map and dark blue for the smallest. s^G performs relatively well on very crowded scenes (fig. 4, bottom row), but it performs rather poorly on sparse regions with small heads far away from each other (fig. 4, top row). This is probably the reason why CSRNet [14], among other works, uses the geometry-adaptive Gaussian for the extremely dense ShanghaiTech Part A dataset, but a fixed Gaussian for all other crowd counting datasets. On the other hand, s^B performs very well on both sparse and dense scenes, but it tends to predict slightly larger sizes than in reality on undetected heads (fig. 4, bottom image, end of the tail). By combining these two techniques into our s, we are able to overcome most of their limitations and produce highly accurate maps (fig. 4, last column).

We now quantitatively compare the performance of our scale-aware approach trained with head binnings defined by (i) our estimate s and (ii) the classic geometry-adaptive technique s^G. While both models achieve comparable MAE performance (around 97.5), the results show a difference in MSE (175.3 vs 167.8, in favor of our estimate). This is caused by the geometry-adaptive estimate making larger mistakes on images that contain sparse crowds with tiny heads. As these heads get binned with larger ones, they get assigned to the wrong branch, which then needs to predict heads at a scale outside of its domain (fig. 2), effectively making the training data noisy.

Finally, we would like to point out that the utility of our head size estimation technique goes beyond what we propose in this paper, and it can open new research directions for future crowd counting work.

4.5 Ablation study

In this section we present incremental results of our model and its components (sec. 3). Again, we use the UCF-QNRF dataset for these experiments.

Baseline results. As a baseline, we train a VGG-16 backbone with the loss of eq. 1 and the settings of sec. 3.1. This predicts a single density map from the last convolutional layer of the network and achieves an MAE of 128.5 (table 2, row 1), which is the highest error across all entries in the table. Still, this simple baseline achieves results comparable to the state-of-the-art (table 3).

Adding D_j and α_j. By enriching the baseline with three branches that predict multi-scale density maps and our novel scale-aware attention mechanism (sec. 3.2), the error decreases to 116.7 (table 2, row 2), which is a significant improvement. This indicates that (i) using multi-scale feature maps is beneficial for crowd counting and (ii) the inferred attention masks perform well at aggregating multi-scale predictions from our multi-branch network.

Adding L^scale. Adding our scale-aware loss regularization (sec. 3.3) also brings an improvement and the error further decreases to 113.3 (table 2, row 3). This indicates that our regularization helps each branch output accurate density maps for the people within its assigned scale range, which collectively contributes to the accuracy of the final crowd estimate.

Adding image resolution regularization. While scale variations can be learned during training, sometimes they exceed the capability of the network. We observed this to be the case for the UCF-QNRF dataset: some of its images are 6k×9k pixels and contain heads of 1.5k×1.5k pixels, which are clearly outside the range of our network's receptive field. To overcome this resolution issue, at inference we down-sample these large images to a maximum size of 1080p. This simple normalization improves performance considerably for all components, lowering the MAE of our full approach to 97.5 (table 2, last row). It is also worth noticing that while this improvement is generic, it still preserves the relative importance of each model component: adding D_j and α_j still improves substantially over the baseline, and adding L^scale still yields the best performing model. Interestingly, adding L^scale improves MSE considerably over D_j and α_j alone (175.6 to 167.8).
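A sketch of this inference-time resolution cap; interpreting "1080p" as a 1920×1080 bound is our reading of the text:

```python
from PIL import Image


def cap_resolution(img, long_max=1920, short_max=1080):
    """Downsample very large test images so they fit within a ~1080p bound."""
    w, h = img.size
    scale = min(long_max / max(w, h), short_max / min(w, h), 1.0)
    if scale == 1.0:
        return img                                          # already small enough
    new_size = (int(round(w * scale)), int(round(h * scale)))
    return img.resize(new_size, Image.BILINEAR)
```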

VGG16   +D_j,α_j   +L^scale   +ImgRes   MAE     MSE
✓       –          –          –         128.5   205.6
✓       ✓          –          –         116.7   184.5
✓       ✓          ✓          –         113.3   183.2
✓       –          –          ✓         109.7   186.5
✓       ✓          –          ✓         99.6    175.6
✓       ✓          ✓          ✓         97.5    167.8
Table 2: Results for the different components of our architecture.
UCF-QNRF ShanghaiTechA ShanghaiTechB UCF_CC_50
Method Venue & Year MAE MSE MAE MSE MAE MSE MAE MSE
MCNN [7] CVPR 2016 277 426 110.2 173.2 26.4 41.3 377.6 509.1
C-MTL [38] AVSS 2017 252 514 101.3 152.4 20.0 31.1 322.8 341.4
SwitchCNN [10] CVPR 2017 228 445 90.4 135.0 21.6 33.4 318.1 439.2
CP-CNN [11] ICCV 2017 - - 73.6 106.4 20.1 30.1 295.8 320.9
SaCNN [12] WACV 2018 - - 86.8 139.2 16.2 25.8 314.9 424.8
ACSCP [16] CVPR 2018 - - 75.7 102.7 17.2 27.4 291.0 404.6
IG-CNN [13] CVPR 2018 - - 72.5 118.2 13.6 21.1 291.4 349.4
Deep-NCL [39] CVPR 2018 - - 73.5 112.3 18.7 26.0 288.4 404.7
CSRNet [14] CVPR 2018 - - 68.2 115.0 10.6 16.0 266.1 397.5
CL-CNN [17] ECCV 2018 132 191 - - - - - -
SANet [18] ECCV 2018 - - 67.0 104.5 8.4 13.6 258.4 334.9
Our Baseline (from scratch) - 109.7 186.5 71.9 107.1 8.9 14.4 267.1 351.7
Our Approach (from scratch) - 97.5 167.8 65.6 102.1 8.3 13.3 248.7 327.2
Our Approach (w/ pre-training) - - - 62.1 98.5 7.6 12.4 238.2 310.8
Table 3: Quantitative results of our approach on four public datasets, against several approaches in the literature.
Figure 5: Qualitative results. Not only does our approach achieve considerably better count estimates than our baseline, it also produces sharper and better localized predictions.

4.6 Comparison to other crowd counting methods

We now compare our model against several approaches in the literature, on the datasets introduced in sec. 4.2. Results are presented in table 3 and fig. 5. Like most other works in the literature, we train our models from scratch and only use the image resolution regularization on the UCF-QNRF dataset (the only one with extremely large images). Overall, our approach always performs better than the baseline, showing the importance of learning multi-scale features. Furthermore, our approach also outperforms all previous methods in the literature, on all datasets and all metrics. Interestingly, we observe the biggest improvement on the UCF-QNRF dataset, which is the largest dataset and the one with the most diverse variation of head sizes. This further shows that our model is capable of handling such large scale variations (fig. 5).

Additionally, we also present results for our models pre-trained on the large UCF-QNRF dataset. As this dataset was only recently released, no previous work has addressed the impact of pre-training for crowd counting. Our results show that pre-training is very important and further improves our results by 5-10% on all datasets.

Finally, fig. 5 presents some visual results for our baseline and our approach. In addition to counting the number of people in an image more accurately, our approach also produces better localized predictions. Its density maps are much sharper than those output by the baseline, which tends to oversmooth regions with large crowds. This is especially evident in fig. 5d. It also validates our hypothesis that directly using low-level layers to output intermediate density maps is beneficial for localizing small-scale people, as these low-level feature maps retain a detailed spatial layout.

5 Conclusions

In this work, we proposed a novel multi-branch architecture that generates multi-scale density maps from its intermediate layers. To aggregate these density maps into our final prediction, we developed a new soft attention mechanism that learns a set of gating masks, one for each map. We further introduced a scale-aware loss to guide each branch to specialize on a different scale range. Finally, we proposed a simple, yet effective technique to estimate the size of each head in an image. Our approach achieved state-of-the-art results on four challenging crowd counting datasets, on all evaluation metrics.

References

  1. Bo Wu and Ram Nevatia. Detection of multiple, partially occluded humans in a single image by bayesian combination of edgelet part detectors. In ICCV, 2005.
  2. Meng Wang and Xiaogang Wang. Automatic adaptation of a generic pedestrian detector to a specific traffic scene. In CVPR, 2011.
  3. Mikel Rodriguez, Ivan Laptev, Josef Sivic, and Jean-Yves Audibert. Density-aware person detection and tracking in crowds. In ICCV, 2011.
  4. Antoni B Chan, Zhang-Sheng John Liang, and Nuno Vasconcelos. Privacy preserving crowd monitoring: Counting people without people models or tracking. In CVPR, 2008.
  5. Antoni B Chan and Nuno Vasconcelos. Bayesian poisson regression for crowd counting. In CVPR, 2009.
  6. David Ryan, Simon Denman, Clinton Fookes, and Sridha Sridharan. Crowd counting using multiple local features. In DICTA, 2009.
  7. Yingying Zhang, Desen Zhou, Siqin Chen, Shenghua Gao, and Yi Ma. Single-image crowd counting via multi-column convolutional neural network. In CVPR, 2016.
8. Lokesh Boominathan, Srinivas SS Kruthiventi, and R Venkatesh Babu. Crowdnet: A deep convolutional network for dense crowd counting. In ACM Multimedia, 2016.
  9. Daniel Onoro-Rubio and Roberto J López-Sastre. Towards perspective-free object counting with deep learning. In ECCV, 2016.
  10. Deepak Babu Sam, Shiv Surya, and R Venkatesh Babu. Switching convolutional neural network for crowd counting. In CVPR, 2017.
  11. Vishwanath A Sindagi and Vishal M Patel. Generating high-quality crowd density maps using contextual pyramid cnns. In ICCV, 2017.
  12. Lu Zhang and Miaojing Shi. Crowd counting via scale-adaptive convolutional neural network. In WACV, 2018.
  13. Deepak Babu Sam, Neeraj N Sajjan, R Venkatesh Babu, and Mukundhan Srinivasan. Divide and grow: Capturing huge diversity in crowd images with incrementally growing cnn. In CVPR, 2018.
  14. Yuhong Li, Xiaofan Zhang, and Deming Chen. Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes. In CVPR, 2018.
  15. Jiang Liu, Chenqiang Gao, Deyu Meng, and Alexander G Hauptmann. Decidenet: Counting varying density crowds through attention guided detection and density estimation. In CVPR, 2018.
  16. Zan Shen, Yi Xu, Bingbing Ni, Minsi Wang, Jianguo Hu, and Xiaokang Yang. Crowd counting via adversarial cross-scale consistency pursuit. In CVPR, 2018.
  17. Haroon Idrees, Muhmmad Tayyab, Kishan Athrey, Dong Zhang, Somaya Al-Maadeed, Nasir Rajpoot, and Mubarak Shah. Composition loss for counting, density map estimation and localization in dense crowds. In ECCV, 2018.
  18. Xinkun Cao, Zhipeng Wang, Yanyun Zhao, and Fei Su. Scale aggregation network for accurate and efficient crowd counting. In ECCV, 2018.
  19. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  20. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In ECCV, 2016.
  21. Haroon Idrees, Imran Saleemi, Cody Seibert, and Mubarak Shah. Multi-source multi-scale counting in extremely dense crowd images. In CVPR, 2013.
  22. Di Kang and Antoni Chan. Crowd counting by adaptively fusing predictions from an image pyramid. In BMVC, 2018.
  23. Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, and Zheng Zhang. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In CVPR, 2015.
  24. Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In ICCV, 2015.
  25. Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015.
  26. Donggeun Yoo, Sunggyun Park, Joon-Young Lee, Anthony S Paek, and In So Kweon. Attentionnet: Aggregating weak directions for accurate object detection. In CVPR, 2015.
  27. Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, and Alan L Yuille. Attention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
  28. Hanchao Li, Pengfei Xiong, Jie An, and Lingxue Wang. Pyramid attention network for semantic segmentation. In BMVC, 2018.
  29. Nian Liu, Junwei Han, and Ming-Hsuan Yang. Picanet: Learning pixel-wise contextual attention for saliency detection. In CVPR, 2018.
  30. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  31. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
  32. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  33. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  34. Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Dollár, and Kaiming He. Detectron. https://github.com/facebookresearch/detectron, 2018.
  35. Dezhi Peng, Zikai Sun, Zirong Chen, Zirui Cai, Lele Xie, and Lianwen Jin. Detecting heads using feature refine net and cascaded multi-scale architecture. In ICPR, 2018.
  36. Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, and Alan Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, 2014.
  37. Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
  38. Vishwanath A Sindagi and Vishal M Patel. Cnn-based cascaded multi-task learning of high-level prior and density estimation for crowd counting. In AVSS, 2017.
  39. Zenglin Shi, Le Zhang, Yun Liu, Xiaofeng Cao, Yangdong Ye, Ming-Ming Cheng, and Guoyan Zheng. Crowd counting with deep negative correlation learning. In CVPR, 2018.