Convolutional Oriented Boundaries

Abstract

We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient because it requires a single CNN forward pass for contour detection and uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state of the art, and it generalizes very well to unseen categories and datasets. In particular, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.

Keywords:
Contour detection, contour orientation estimation, hierarchical image segmentation, object proposals
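To make the abstract's mention of jointly estimated contour strength and orientation more concrete, the sketch below shows one simple way such per-pixel outputs could be split into per-orientation contour channels of the kind used by hierarchical segmentation pipelines. This is only an illustrative NumPy sketch under assumed inputs (a strength map in [0, 1] and an orientation map in [0, pi)); the function name and the choice of eight orientation bins are hypothetical and not taken from the paper.

```python
import numpy as np

def oriented_contour_maps(strength, orientation, num_bins=8):
    """Split a contour strength map into per-orientation channels.

    strength:    (H, W) array in [0, 1], per-pixel boundary probability
                 (e.g. the output of a single CNN forward pass).
    orientation: (H, W) array of estimated boundary angles in [0, pi).
    num_bins:    number of discrete orientation channels (assumed value).

    Returns a (num_bins, H, W) array where each channel holds the strength
    of the boundary pixels whose orientation falls into that bin.
    """
    bin_width = np.pi / num_bins
    # Assign each pixel's angle to its orientation bin (pi wraps back to 0).
    bins = np.floor(orientation / bin_width).astype(int) % num_bins
    oriented = np.zeros((num_bins,) + strength.shape, dtype=strength.dtype)
    for b in range(num_bins):
        mask = bins == b
        oriented[b][mask] = strength[mask]
    return oriented

if __name__ == "__main__":
    # Toy example with random maps standing in for CNN outputs.
    rng = np.random.default_rng(0)
    S = rng.random((4, 4))
    O = rng.random((4, 4)) * np.pi
    channels = oriented_contour_maps(S, O)
    print(channels.shape)  # (8, 4, 4)
```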
