Ship Detection and Segmentation using Image Correlation



Ship detection and segmentation have attracted intensive research interest over the last two decades, owing to high demand in a wide range of civil applications. However, existing approaches, which are mainly based on statistical properties of images, often fail to detect smaller ships and boats. Specifically, known techniques are not robust enough against the inevitable small geometric and photometric changes in images containing ships. In this paper a novel approach to ship detection is proposed, based on correlation of maritime images. The idea comes from the observation that the fine pattern of the sea surface changes considerably from moment to moment, whereas the appearance of a ship remains essentially unchanged. We want to examine whether the images have a common unaltered part, a ship in this case. To this end, we developed a method, Focused Correlation (FC), to achieve robustness to geometric distortions of the image content. Various experiments have been conducted to evaluate the effectiveness of the proposed approach.


978-1-4799-0652-9/13/$31.00 ©2013 IEEE       


Vessel detection, ship detection, object detection, phase correlation, orientation correlation, matching, registration.


1 Introduction

Robustly detecting ships plays a crucial role in civil applications such as the detection of drug-smuggling vessels. Ship detection has been researched intensively; a review [1] on this topic includes up to 500 literature entries. Generally speaking, two types of techniques have been used for ship detection [2]. The most popular one is the synthetic aperture radar (SAR) technique, which is fairly robust to various weather conditions. It is based on an invasive technology: the scene is illuminated with radio waves, and detection is achieved through a series of processing steps applied to the reflected signals. A side effect of this technique is that the airborne surveillance system reveals itself; as a result, people on the detected ship may become aware that they are under surveillance. For smaller ships and boats the SAR technique is less efficient [3], and their detection thus remains an open problem, commonly interpreted as the need for “accurate empirical modelling of sea” to separate the boat from the sea.

An alternative, optical or visual-based detection, is less developed [4, 2], though it is considered important: for example, in [5] it is stated that the UK will have “a range of maritime surveillance resources available in 2020, operating in the audio, visual and electronic spectra”. Such detection is non-invasive and requires no special equipment. It is therefore preferable for the non-invasive detection of small vessels by an unmanned aerial vehicle (UAV), and hence appropriate for improving maritime border surveillance of small ships and boats in order to detect illegal activities such as drug and human trafficking and illegal fishing.

To the best of our knowledge, most current ship detection methods operate with one image and apply thresholding after a preprocessing procedure [6]. The rationale behind this is straightforward: an experienced human operator is able to distinguish a ship from the surrounding sea based on the fact that a ship has a specific colour and shape, while the sea surface has a particular texture. Computers can rely upon the same assumptions. Hence those standard approaches are mainly based on a variety of segmentation techniques and shape analysis methods for suspected inclusions [7, 8, 9, 10, 11], where the most advanced techniques model sea patterns by specially designed random fields and model a ship as an elongated inclusion.


In this paper we propose a completely different paradigm. Instead of analysing a single picture, a pair of pictures is considered. The method is based on the following observation. Over a short time the wave pattern changes, and hence the water area cannot match its previous state. In contrast, a ship’s shape does not change much within this interval; the ship may merely be displaced and/or undergo a small rotation in the image, due to the movement of the ship and of the airborne camera. Hence the two images of the sea do not correlate, whereas the two images of the same boat do. This observation forms the principle of the research presented in this paper. We correlate the two images, and if the correlation value is significant, we conclude that a boat is present. In practice, however, a problem arises: changes in the ship’s appearance can cause the correlation algorithm to overlook the ship’s presence. To overcome this difficulty, we re-interpret the visual information by creating a controlled uncertainty to combat possible changes in the ship’s appearance in the video sequence.

The remainder of this paper is organised as follows: Section 2 presents an explanation of the task and scenarios of ship detection; Section 3 gives an overview of available correlation methods, shows the need for their improvement, and suggests such an improvement; Section 4 explains how a ship and the sea can be separated and the presence of the ship detected, although the information obtained there is not yet equivalent to segmentation of the ship; Section 5 compares different methods for ship detection; Section 6 eventually provides segmentation of the detected ship. Finally, we conclude the paper with a discussion in Section 7.

2 Task and problem formulation

Imagine an unmanned aerial vehicle (UAV) hovering over the sea as part of a maritime surveillance system. An on-board camera takes images and sends them to the base for human operators. In this scenario two problems arise. Firstly, most of the images carry inadequate information, the majority showing an empty sea or repeating information already observed previously. Secondly, the human operator is overloaded with a bulk of redundant information and thus can make mistakes when human attention is eventually required to analyse a non-standard situation. An ideal solution to these problems would be to enable the on-board computer to analyse the images automatically and send out only those that do require human attention. Moreover, an on-board computer can take part in controlling the flight of the UAV [12], taking more pictures of a discovered boat from different perspectives and sending them to the base. One of the most important steps in this procedure is the ability to detect and delineate ships and boats, because only images containing boats and ships can raise further interest. We present our work in this direction.

To narrow the task and to specify the visual information of interest, we point to the following advantages that a UAV can provide. The first advantage is a high-definition camera and a powerful computer, which means we can rely upon high-resolution images. Another advantage is the highly accurate positioning system employed by the UAV, with which we can estimate the position of the ship or boat in the image. If a boat is present in one image, it is assumed to be present in another image taken within a few seconds. Conversely, if the first image does not contain the boat, then the second image does not contain it either (except at the fringe of the image). The assumed high resolution allows us to process images by parts, in which boats have a better representation at their sizes. Thus we formulate our task as detecting whether a boat is present in both images, taken with a time interval of up to a threshold of a few seconds.

Since the two images of boats in the described scenarios are shifted and geometrically distorted relative to one another, a method is sought that is robust to small geometric distortions. In spite of their variety, the existing correlation methods surveyed in Section 3 share the same characteristics concerning geometric distortion of images, and a new modification is required to meet the needs of our task.

3 Correlation methods

3.1 Standard methods

To examine whether two images $f$ and $g$ have a common area, that is a boat in our case, a few standard options are available. The general procedure is as follows. The common area may be shifted from its original position, so one has to try all possible shift vectors, displacing the first image and comparing the displaced image with the second image. The comparison consists of computing a similarity or dissimilarity measure. Note that this is not excessively time consuming, since the computation uses the FFT algorithm. To specify the process, we denote the shift vector by $\mathbf{v}$ and a variable pixel in the picture by $\mathbf{x}$. The measure depends on $\mathbf{v}$ only, so we obtain a function $C(\mathbf{v})$ which is called a matching surface [13]. The usage of the matching surface is as follows. The position of the maximum (or minimum, in the case of a dissimilarity) of the matching surface gives the sought shift vector between the images. If the maximum is not high enough, this indicates that there is no common area in the two images.
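As a minimal sketch of this procedure (our own NumPy illustration, not the paper's code), the matching surface for all circular shifts can be computed at once with the FFT via the correlation theorem:

```python
import numpy as np

def matching_surface(f, g):
    """Cross-correlation C(v) = sum_x f(x) g(x+v) for all circular
    shifts v at once, via the correlation theorem and the FFT."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    # Correlation in the spatial domain corresponds to conj(F) * G
    # in the frequency domain.
    return np.fft.ifft2(np.conj(F) * G).real

def best_shift(C):
    """The position of the maximum of the matching surface gives
    the sought shift vector (wrapped to signed coordinates)."""
    idx = np.unravel_index(np.argmax(C), C.shape)
    return tuple(i if i <= n // 2 else i - n
                 for i, n in zip(idx, C.shape))
```

For example, if `g` is a circular shift of `f` by (5, -3) pixels, the peak of `matching_surface(f, g)` lands at exactly that shift.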

We consider standard ways of defining the matching surface. The first formula is the regular cross-correlation of the two images $f$ and $g$,

$$C(\mathbf{v}) = \sum_{\mathbf{x}} f(\mathbf{x})\, g(\mathbf{x}+\mathbf{v}), \qquad (1)$$

and this can be used as a similarity measure. It can be argued that this is not the best way to compare images, so we also consider its standard alternatives.

Instead of correlating the initial images, a more general approach is to define an operation $T$ that transforms the images into new functions $Tf$ and $Tg$ and to correlate these instead of the initial images. Thus, the generalisation of (1) is the formula

$$C_T(\mathbf{v}) = \sum_{\mathbf{x}} (Tf)(\mathbf{x})\, (Tg)(\mathbf{x}+\mathbf{v}). \qquad (2)$$
There are three well-known ways to define the operation $T$ in this context. They are

  • “Orientation correlation” [14]. The operation takes the gradient of the image at each pixel and normalizes it to unit length; with this $T$, formula (2), where multiplication means the scalar product of the gradient vectors, represents orientation correlation. We will refer to this particular $T$ as the “orientation operator”.

  • “Phase correlation” [15]. The operation retains only the phase information of the image and discards the amplitude information in the frequency domain, that is, $T$ replaces the Fourier transform $\hat f$ of $f$ by $\hat f / |\hat f|$; then (2) represents phase correlation. For better results one needs to take care of the image borders; this is discussed in [16].

  • “Normalized correlation” [17]. To define it, the size $w$ of a small sliding window should be chosen. The operation is defined as $(Tf)(\mathbf{x}) = \big(f(\mathbf{x}) - \mu(\mathbf{x})\big)/s(\mathbf{x})$, where $\mu(\mathbf{x})$ is the mean value of the function in the sliding square window with side $w$ centred at $\mathbf{x}$, and $s(\mathbf{x})$ is the standard deviation of all the values of the function in that square.
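The three operations can be sketched as follows (our own minimal NumPy/SciPy implementations; the complex-number encoding of the gradient follows the usual formulation of orientation correlation, and the window size `w` is an arbitrary choice):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_op(f):
    """Orientation operator: the image gradient at each pixel,
    encoded as a complex number and normalized to unit magnitude
    (zero where the gradient vanishes)."""
    gy, gx = np.gradient(f.astype(float))
    d = gx + 1j * gy
    mag = np.abs(d)
    out = np.zeros_like(d)
    np.divide(d, mag, out=out, where=mag > 0)
    return out

def phase_op(f):
    """Phase operator: keep only the phase of the spectrum and
    discard the amplitude; the result is in the frequency domain."""
    F = np.fft.fft2(f)
    mag = np.abs(F)
    out = np.zeros_like(F)
    np.divide(F, mag, out=out, where=mag > 0)
    return out

def normalized_op(f, w=7):
    """Local normalization: subtract the mean and divide by the
    standard deviation over a w-by-w sliding window."""
    f = f.astype(float)
    mean = uniform_filter(f, w)
    var = np.maximum(uniform_filter(f * f, w) - mean ** 2, 1e-12)
    return (f - mean) / np.sqrt(var)
```

For unit complex values the scalar product of the two gradient vectors equals the real part of the product of one value with the conjugate of the other, so (2) is evaluated as a complex correlation.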

The image $Tf$ looks more random than the initial $f$ (and this can be proved by statistical tests), so the operation $T$ randomises, or whitens, the underlying image; for details see [18, 19, 20, 21]. Other standard alternatives to formula (1) follow: the dissimilarity measures

$$\sum_{\mathbf{x}} \big(f(\mathbf{x}) - g(\mathbf{x}+\mathbf{v})\big)^2, \qquad \sum_{\mathbf{x}} \big|f(\mathbf{x}) - g(\mathbf{x}+\mathbf{v})\big|,$$

and even the general measure

$$\sum_{\mathbf{x}} \rho\big(f(\mathbf{x}) - g(\mathbf{x}+\mathbf{v})\big),$$

where $\rho$ is a loss function associated with robust statistics. For the sake of generality, the images $f$ and $g$ here can also be substituted by $Tf$ and $Tg$.

Let us analyse the surveyed methods of correlating images. Firstly, the sum of squared differences can be reduced to (1): opening the parentheses gives the cross-correlation term $-2\sum_{\mathbf{x}} f(\mathbf{x})\,g(\mathbf{x}+\mathbf{v})$ plus a term that is independent of $\mathbf{v}$ (for circular shifts), so minimising the former is equivalent to maximising the cross-correlation.

Secondly, the absolute-difference and robust measures also reduce to (1): they are expressed through cross-correlations in [13]. Therefore they can be approximated with any desired precision by a sum of a few cross-correlations of functions which are obtained from $f$ and $g$ by simple pointwise transformations.

It is concluded that the known standard methods for investigating the similarity of images can be presented in the form of the correlation (2), or as a sum of a few ($m$) such correlations, that is, in the form

$$\sum_{k=1}^{m} \sum_{\mathbf{x}} (T_k f)(\mathbf{x})\, (T_k g)(\mathbf{x}+\mathbf{v}), \qquad (3)$$

where $T_k f$ and $T_k g$ are modified images obtained from the initial $f$ and $g$ by applying an operation $T_k$, and each operation $T_k$ is shift-invariant, that is, it commutes with an arbitrary image displacement.

3.2 Drawback of standard methods and an idea of focusing: a heuristic consideration

The observation (3) allows us to judge all the considered methods critically in a unified scheme. We demonstrate a drawback of (1); the rest of the described methods inherit it through forms (2) and (3). The drawback is an over-sensitive reaction to geometric distortion of the ship between the two images, which does not suit our purposes: rather small rotations of the ship make the sought correspondence undetectable. This effect is also shown in the experiments in this paper.

The drawback is presented in a heuristic form. We start from the general observation that an actual image changes its values gradually from pixel to pixel, at least in most of its parts, and at some distance between two pixels these values become independent. We accept a simplification: the image consists of small squares of constant values, and these values are realizations of independent random variables. This is an approximation to a real image, illustrated in Fig. 1. Suppose we have such an image and its rotated version. Consider a square grid of the introduced small squares covering the first image; the squares are shown in Fig. 1(A) in white. After rotation, these squares change to those in Fig. 1(B).

Figure 1: Correlation of rotated images. (A) squares; (B) squares after rotation; (C) each square matches with its rotated version.

We examine how each square in Fig. 1(A) matches its rotated version in Fig. 1(B). The matched areas are depicted in Fig. 1(C). It can be seen from Fig. 1(C) that the number of matched squares depends on the angle of rotation only and is independent of the size of the squares. The correlation of the image and its rotated version (more precisely, its mathematical expectation) is proportional to the white area in Fig. 1(C) divided by the area of one square.

From this heuristic construction, we draw the following conclusions:

  1. The described matching methods are expected to be rather sensitive to rotation of a ship. This can be seen from Fig. 1(C), where rotation significantly diminishes the number of white parts.

  2. Improving the resolution of images will not improve detection of the ship. This is illustrated by the fact that the number of white parts in Fig. 1(C) is fixed.

  3. The matching can be improved if the square areas change their size: smaller in the center and gradually becoming bigger towards the fringe. This can be interpreted as making the image artificially smoother farther from the center, and then using such an unevenly smoothed image for further correlation. Applying this idea to the whole image, we call it “focusing”, because a chosen part is well focused and the other parts are out of focus, as shown in Fig. 2.

Property 1 is widely presumed. For example, in [22] it is said that correlation methods are too sensitive in applications due to “distortion of the object surface under test”. In applications a small window is usually used, as in [23], to make distortions less noticeable. In our opinion, property 1 is the reason why the general problem of image registration is not yet solved satisfactorily, and why, instead of relying on machine vision (which necessitates perfect registration), other techniques are developed [24].

Effect 2 was empirically discovered in other circumstances, such as [25], where it was soundly shown in experiments that the phase correlation method paradoxically benefits from down-sampling of the images where robustness to affine distortions is concerned.

Idea 3 is widely used for small neighbourhoods of feature points as an empirical technique for articulating a feature point, the purpose being to make its neighbourhood more resilient to small rotations [26]; it therefore formally belongs to the feature-based registration techniques [27]. In the next section we modify this idea for area-based techniques.

3.3 Focused correlation: a way of interpretation of spatial information

We define the focusing procedure with a parameter $c$ and a focus $\mathbf{x}_0$, which is the position of a pixel. The parameter $c$ determines the strength of the focusing. For each point $\mathbf{x}$ in the plane set a value $\sigma(\mathbf{x}) = c\,\|\mathbf{x} - \mathbf{x}_0\|$; then the definition is

$$(\Phi f)(\mathbf{x}) = \big(G_{\sigma(\mathbf{x})} * f\big)(\mathbf{x}), \qquad (4)$$

where $G_\sigma$ denotes a Gaussian smoothing kernel of width $\sigma$, so that the image is kept crisp at the focus and is increasingly smoothed towards the fringe.
An illustration follows on the example of the standard image “Lena” in Fig. 2. Note, however, that in the presented method we do not apply the focusing directly to the image but to its whitened (randomised) version, because then we have controlled blurring: we know to what degree the image is blurred in its different parts. If we applied the focusing to the initial image, the resulting variable smoothness would not be known, since the initial image already has different, unknown degrees of smoothness in its different parts. Moreover, our research demonstrates that applying focusing to the initial image produces a rather negligible benefit, and, as we suppose, this is the reason why the idea of [26] was used only locally.

In short, we introduce an artificial, controlled sensor measurement uncertainty in order to cope with the actually occurring uncertainty of the unknown geometric distortion.
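A sketch of such a focusing operation, under our assumption that it is a Gaussian blur whose width grows linearly with the distance from the focus (implemented here by per-pixel interpolation between a stack of uniformly blurred copies; the parameter values are our arbitrary choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focus(f, center, c=0.1, n_levels=8):
    """Spatially varying smoothing: crisp at `center`, increasingly
    blurred towards the fringe, with sigma(x) = c * |x - center|.
    Approximated by per-pixel linear interpolation between
    uniformly blurred copies of the image."""
    f = np.asarray(f, dtype=float)
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = c * np.hypot(yy - center[0], xx - center[1])
    levels = np.linspace(0.0, sigma.max(), n_levels)
    # Level 0 is the unblurred image itself.
    stack = np.stack([f if s == 0 else gaussian_filter(f, s)
                      for s in levels])
    step = max(levels[1] - levels[0], 1e-12)
    idx = np.clip(sigma / step, 0, n_levels - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n_levels - 1)
    t = idx - lo
    return (1 - t) * stack[lo, yy, xx] + t * stack[hi, yy, xx]
```

The pixel at the focus is left untouched, while regions far from it lose their fine variation, which is exactly the behaviour motivated by conclusion 3 above.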

We define a focused correlation as the cross-correlation of the images obtained by applying the focusing to $Tf$ and $Tg$, where $T$ is one of the whitening operations defined above. Two cases are considered in the paper:

  • $T$ is the orientation operator; then we have “focused orientation correlation”;

  • $T$ is the phase-retaining operation; then we have “focused phase correlation”.

Figure 2: Left: initial crisp image; Center: result of focusing, with the focus chosen at the center of an eye. Intuitively, it is obvious that such an image, rotated by a small angle, would coincide with itself better than the crisp image would, and therefore the image information is presented here in a way that better suits image registration in the case of a small rotation or, more generally, a linear distortion with fixed point at the focus; Right: result of focusing of the whitened image.

The focusing operation is not shift-invariant; therefore the focused correlation differs from (3), and thus we genuinely present a new method.

Intuitively, the bigger the parameter $c$, the bigger the distortion the method can tolerate; however, this comes at the expense of overall reliability, since for a bigger parameter information is lost through smoothing. A trade-off is therefore necessary, and in our experiments we use a value of $c$ determined empirically.

According to our task, we use two images and find the displacement that retains the most unchanged mutual information between them; this should be the displacement of the ship, since nothing else is expected in the open sea. The method is in some sense the opposite of vehicle detection on land and cannot use the advantage of an unchanged background [28], [29]. The focused correlation method correlates minuscule features in the image, and those change in the wave pattern while staying unchanged in the ship pattern. In contrast, coarser patterns, like a wake (i.e. long waves or a track left by a vessel), may remain stable; this is why we focus on fine structures. It follows that we prefer higher resolution of the images, less compression of the image information, and better randomised images, since all of these emphasize minuscule patterns. Another property of the focused correlation is that it is considerably more robust than ordinary correlation to small rotations and similar geometric distortions which the ship can undergo. All these properties are demonstrated in the experiments that follow.
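Putting the pieces together, a self-contained sketch of focused orientation correlation: whiten with the gradient-orientation operator, focus, FFT-correlate, and read off the peak and its SNR. The focusing here is simplified to blending a sharp and a single blurred copy, and all parameter values are our arbitrary choices, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focused_orientation_correlation(f, g, center, c=0.05):
    """Matching surface and SNR of the focused orientation
    correlation of two images (simplified sketch)."""
    def whiten(im):
        # Unit-magnitude complex gradient (the orientation operator).
        gy, gx = np.gradient(np.asarray(im, dtype=float))
        d = gx + 1j * gy
        mag = np.abs(d)
        out = np.zeros_like(d)
        np.divide(d, mag, out=out, where=mag > 0)
        return out

    def focus(im):
        # Crude focusing: blend the crisp image with one blurred
        # copy, the blurred weight growing away from `center`.
        h, w = im.shape
        yy, xx = np.mgrid[0:h, 0:w]
        t = np.clip(c * np.hypot(yy - center[0], xx - center[1]), 0, 1)
        blurred = (gaussian_filter(im.real, 2.0)
                   + 1j * gaussian_filter(im.imag, 2.0))
        return (1 - t) * im + t * blurred

    F = np.fft.fft2(focus(whiten(f)))
    G = np.fft.fft2(focus(whiten(g)))
    C = np.fft.ifft2(np.conj(F) * G).real
    return C, C.max() / C.std()
```

The peak position of the matching surface is the estimated displacement of the common (ship) area; a boat is declared present when the SNR exceeds the empirical threshold.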

4 Sea and ship separation in matching surface

To solve the task posed in Section 2, we have to determine how to let the water under the UAV be gathered into one place, so that a dry ship may appear. Firstly we do this not in the initial real images but in the matching surface; for the initial images the separation is introduced later in Section 6. Each point of the matching surface expresses a shift between the two initial images. However, a shift is meaningful, that is, an area with such a shift exists in the image, only if its value is a few times above the standard deviation of all the values of the matching surface. To demonstrate this idea, consider the images of a boat in Fig. 3. The first image was taken at time $t = 0$ sec, and the rest were taken at times 1.5, 2.3, 3.7, 4.7 and 12 seconds.

Figure 3: A video scene taken during 12 seconds. The boat rocks, and at some moments it is less rotated than at others. By courtesy of SAGEM.

We scrutinize the initial period of the sequence in Fig. 4, which shows the process of sea and ship separation. At the starting moment the sea and the ship both still have zero displacement; this is reflected in the first image in Fig. 4. Shortly afterwards they have different motions: the sea pattern moves down more quickly than the ship. In this, the second image of Fig. 4, one can see that the shift vector of the sea starts to lose its certainty, because different parts of the sea move differently. In the next image this effect is even more prominent. Eventually, in the last image of the matching surface, the sea shift vector has disappeared. The shift vector of the ship has also suffered: it is not as concentrated as before, because the ship has changed its geometric appearance. From this experiment we conclude that in about a second the sea vanishes from the matching surface, while the ship is still present.

Figure 4: Matching surface at different times, corresponding to the video in Fig. 3. The shift vector of the boat and the shift vector of the sea gradually separate; then the shift vector of the sea disappears, while the shift vector of the ship becomes represented by a blob due to rotation of the ship.

This experiment is also illustrated in Fig. 5, where two graphs are presented. Both graphs present the signal-to-noise ratio (SNR), defined as the ratio of the maximum value of the matching surface to the standard deviation of its values, that is,

$$\mathrm{SNR} = \frac{\max_{\mathbf{v}} C(\mathbf{v})}{\operatorname{std} C}. \qquad (5)$$
The upper graph is the SNR of the matching surface between the initial frame at $t=0$ and the frame at time $t$ from the video sequence partly shown in Fig. 3. It is seen that when the ship’s appearance in the two images differs less, the SNR is higher. Since the ship rocks, the graph has a periodic appearance. The lowest SNR occurred at the moment $t=12$ sec. The latter demonstrates the limit of the method, so for this particular image at $t=12$ sec we estimate the geometric transform between the images as follows. The ship’s rotation (relative to time $t=0$) is 0.21 radian, and the scale along its length is 0.88. A more precise description of the distortion of the ship over this interval is given by the affine transform, which we estimated as approximately

$$A \approx R(0.21)\,\operatorname{diag}(0.88,\,1), \qquad (6)$$

where $R(\theta)$ denotes the rotation by angle $\theta$. The strength of the distortion can be measured by the norm of the deviation of (6) from the identity transform.
The second, lower graph in Fig. 5 presents the SNR of the same images but without the boat. To eliminate the boat from the images, we retained only the left one-third of the images (see Fig. 3) and cut out the remaining two-thirds. This graph allows us to extract two bounding characteristics, a time bound and an SNR bound of the sea, defined by the following properties:

  • after the time bound has passed, the presence of the sea vanishes from the matching surface;

  • the SNR of the sea (without the boat) stays below the SNR bound.

Now we can formulate the results of this experiment. From comparison of the two graphs in Fig. 5 we can read off empirical values of these bounds: in this scene the sea vanishes in about one second, and its SNR stays below the threshold. We also observed that the maximum detectable deviation of the ship is described by (6). Therefore, the presence of a boat in the image is indicated by two conditions: the time bound has passed, and the SNR of the matching surface exceeds the SNR bound.

The value of the time bound is empirical, while the SNR bound has some theoretical backing. Assuming that the matching surface is a Gaussian random field, we can estimate the expected value of its maximal SNR as $\sqrt{2\ln N}$, see [30], where $N$ is the number of pixels in the matching surface. Taking the threshold slightly bigger than that, we again arrive at the adopted value of the SNR bound.
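The Gaussian-field estimate is easy to check numerically: the maximum of $N$ independent standard normal values lands near $\sqrt{2\ln N}$, so a pure-noise matching surface of $N$ pixels cannot be expected to produce an SNR much above that value (a sanity-check sketch; the surface size is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512 * 512                      # pixels in a 512x512 matching surface
noise = rng.standard_normal(N)     # a pure-noise "matching surface"
snr = noise.max() / noise.std()
expected = np.sqrt(2 * np.log(N))  # about 5.0 for N = 512*512
```

A threshold slightly above this level therefore separates a genuine common area from noise.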

Figure 5: SNR of the matching surface corresponding to the video in Fig. 3. The method used is focused orientation correlation. See the caption of Fig. 3 for the explanation of the moments.

5 Comparison of the methods

In the previous section we demonstrated how focused orientation correlation can benefit ship detection. In this section we compare different methods and come to the conclusion that a combination of two methods is necessary.

5.1 Example of prevalence of Focused Phase correlation

Using the same video as in Fig. 3, we consider four methods: orientation correlation, phase correlation, and their focused versions. The results are presented in Fig. 6.

Figure 6: Example of the prevalence of the focused correlation method. The graphs from Fig. 5 are also presented here for comparison of all the tried methods.

Each of the methods produces two graphs, as in Fig. 5, and the graphs from that figure are also presented in Fig. 6. The lowest four graphs are not labelled; they present the SNR of the boatless left one-third of the scene. The greatest difficulty for ship detection is the moment $t=12$ sec, which is demonstrated separately by four matching surfaces in Fig. 7. From these data it is concluded that for this particular maritime scene the focused phase correlation method outperforms the rest.

Figure 7: Matching surfaces for the four methods for the last frame (at $t=12$ sec) in Fig. 3. The focused versions of the methods could detect the presence of the ship, while the original methods could not.

5.2 Example of prevalence of Focused Orientation correlation

Consider an example of images with less high-frequency information, presented in Fig. 8.

Figure 8: A scene with lower frequency information in the ship area, and this leads to failure to detect presence/absence of a boat while using phase information for correlation; however, the gradient orientation information (in the form of focused orientation correlation) suffices for detecting. By courtesy of SAGEM.

For the four considered methods we obtain the graphs in Fig. 9, arranged as before. These graphs show that in this case focused orientation correlation is the most reliable, while focused phase correlation is unable to provide a proper solution for ship detection.

Figure 9: Four methods applied to the scene in Fig. 8. The upper graphs (thick lines) present SNR of the matching surfaces of the whole picture. For comparison, the lower graphs (thin lines) present SNR for the left part of the picture which does not contain the boat.

5.3 Conclusion: Reliable method

These two experiments illustrate the following findings:

  • Focused correlation can solve the task, while plain correlation alone does not, due to its sensitivity to geometric distortion of the boat. We also tried other, simpler methods, such as the dissimilarity measures listed in Section 3.1, and found that they cannot provide a decent result for these scenes.

  • For different kinds of scenes, one of the two variants of the focused correlation always works better, so we applied both of them.

  • The boat was detected if one of the focused correlations robustly exceeded the SNR bound of the boatless sea at a time between one and three seconds.

  • The maximum detectable deviation of the ship is described by (6); however, the reliable condition was found at a smaller distortion norm, corresponding to a smaller rotation. The bound of three seconds is an empirical value which guarantees that the ship normally does not rotate by more than this.

These findings were confirmed in our experiments with more than 20 available maritime scenes.

6 Sea and ship separation in images

To build an intelligent vision system we eventually need the location of the object [31], that is, to segment it in the image. Suppose a ship is detected, that is, the SNR of the matching surface exceeds its threshold at time $t$. The displacement between the ship images is found as the 2D vector $\mathbf{v}^\ast$ at which the matching surface reaches its maximal value (5). If we align the second image, that is, obtain the shifted image $g(\mathbf{x}+\mathbf{v}^\ast)$, then it should coincide with the first image $f(\mathbf{x})$ while $\mathbf{x}$ belongs to the (as yet unknown) ship area. With this observation we can determine the ship area.

Our solution is to use the orientation operator $T$ and to compare the aligned images $Tf$ and $Tg$. The ship area is then found as the set of points where the angle between the gradient orientations of the two images is less than a particular threshold, which was chosen empirically. The result is visible on the matchability map, which is defined as the cosine of the angle between the unit gradient vectors of the two images at each pixel. The matchability map shows which parts of the image can match their counterparts in the second image by displaying the degree of matching quality as a correlation coefficient ranging from $-1$ to $1$. The results of this automatic segmentation are presented in Figs. 10, 11 and 12. These scenes present different degrees of difficulty for a human operator: obviously, for a human it would be easiest to delineate the ship in Fig. 12.
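A sketch of this matchability computation (the function names are ours, and the angular threshold below is a placeholder for the paper's empirical value):

```python
import numpy as np

def orientation(im):
    """Unit-magnitude complex gradient (the orientation operator)."""
    gy, gx = np.gradient(np.asarray(im, dtype=float))
    d = gx + 1j * gy
    mag = np.abs(d)
    out = np.zeros_like(d)
    np.divide(d, mag, out=out, where=mag > 0)
    return out

def matchability_map(f, g_aligned):
    """Cosine of the angle between the gradient directions of the
    two aligned images at each pixel, ranging from -1 to 1."""
    o1, o2 = orientation(f), orientation(g_aligned)
    # For unit complex numbers, Re(o1 * conj(o2)) = cos(angle).
    return np.real(o1 * np.conj(o2))

def ship_mask(mmap, max_angle_deg=20.0):
    """Pixels where the two gradient directions agree within the
    threshold angle (placeholder value)."""
    return mmap > np.cos(np.radians(max_angle_deg))
```

An image matched against a perfectly aligned copy of itself yields cosine 1 wherever the gradient is nonzero; in a real pair, the sea decorrelates while the ship area stays close to 1.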

Figure 10: Upper row: two pictures of a boat by courtesy of SAGEM. Lower row: matchability map and the segmented common area.
Figure 11: Upper row: two pictures of a boat by courtesy of SAGEM. Lower row: matchability map and the segmented common area.
Figure 12: Upper row: two pictures of a boat by courtesy of SAGEM. Lower row: matchability map and the segmented common area.

7 Concluding Remarks

Due to the long history of matching and correlating images, it seems rather difficult to propose a better and still feasible approach for applications: papers on this topic appear at a rate of at least 100 per year [27]. The situation is the same for the ship detection topic, where many attempts have been made, which seems to leave room only for incremental further development.

We have, however, proposed a novel and substantial improvement of the best-known correlation method, phase correlation. This enables us to arrive at a new approach to boat detection and to resolve difficult cases when no prior information about the statistical properties of the sea is available. The novelty presented in the paper can therefore be listed as follows:

  1. A new correlation method was proposed, and it shows reinforced robustness to geometric distortions of the involved images;

  2. A new boat detection method was proposed, which is based entirely on comparison of images;

  3. The use of the observation that the fine pattern of the sea changes completely between the images was proposed.

The broader implication is that the proposed, more robust, modified phase correlation can be used wherever similar correlation techniques are used. The proposed focused correlation benefits from higher resolution, so it can substitute for the standard methods especially when larger images come about, for example medical images, which usually have very high resolution. The main advantage is that the method needs no tuning and can even cope with scenes that are difficult for a human operator to analyse.

We presented a novel correlation technique that is able to align geometrically distorted and mutually translated pairs of 2D images. The method recovers the translational component of the misalignment and is more robust to small geometric distortions than known similar techniques. It is based on a new way of interpreting spatial sensor information in the presence of geometric distortions.

We examined the performance of the method in several maritime scenes, compared against a few other methods as references. The proposed method showed low sensitivity to geometric distortion of the common areas and to the changing surrounding background. In the considered maritime scenes the common area is the ship (or boat) area, and the changing background is the image of the surrounding sea water. This enabled us to detect whether a ship is present in both images, since its presence manifests itself as an unchanged area that can be aligned. We also considered a direct extension of the method for further segmentation of the detected ship; this step proceeds by comparing the aligned images.

The method’s behaviour was stable, which is promising for its use on a large variety of data. The detection of the ship was conducted by computing only the movement between the two images, without taking the image content into consideration, as opposed to other methods. Therefore the proposed method for ship detection can serve as a complement to previously published work and can be added as a new element to already operating surveillance systems.


The authors would like to thank SAGEM (SAFRAN) for providing the maritime videos. Our work is supported by the SeaBILLA project, funded under the 7th Research Framework Programme of the European Commission.


  1. T. Arnesen and R. Olsen, “Literature review on vessel detection,” FFI/Rapport-2004/02619, Forsvarets Forskningsinstitutt, Kjeller, Norway, 1-168, 2004.
  2. C. Corbane, L. Najman, E. Pecoul, L. Demagistri, and M. Petit, “A complete processing chain for ship detection using optical satellite imagery,” International Journal of Remote Sensing, vol. 31, no. 22, pp. 5837–5854, 2010.
  3. P. Herselman, C. Baker, and H. De Wind, “An analysis of x-band calibrated sea clutter and small boat reflectivity at medium-to-low grazing angles,” International Journal of Navigation and Observation, vol. 2008, 2008.
  4. C. Zhu, H. Zhou, R. Wang, and J. Guo, “A novel hierarchical method of ship detection from spaceborne optical image based on shape and texture features,” Geoscience and Remote Sensing, IEEE Transactions on, vol. 48, no. 9, pp. 3446–3456, 2010.
  5. “Future maritime surveillance,” House of Commons, Defence Committee, UK, May 2012.
  6. I. Purohit, M. Islam, K. Asari, and M. Karim, “Target detection using adaptive progressive thresholding based shifted phase-encoded fringe-adjusted joint transform correlator,” International Journal of Electrical, Computer, and Systems Engineering, vol. 2, no. 4, 2008.
  7. W. Bingjie, W. Chao, Z. Bo, and W. Fan, “Ship detection based on radarsat-2 full-polarimetric images,” in Radar (Radar), 2011 IEEE CIE International Conference on, vol. 1.   IEEE, 2011, pp. 634–637.
  8. Y. Yu, X. Yang, S. Xiao, and J. Lin, “Automated ship detection from optical remote sensing images,” Key Engineering Materials, vol. 500, pp. 785–791, 2012.
  9. H. Sun, Y. Li, G. Liu, H. Long, and H. Wang, “A new ship detection method for massive data high-resolution remote sensing images,” Advanced Materials Research, vol. 532, pp. 1105–1109, 2012.
  10. F. Bi, B. Zhu, L. Gao, and M. Bian, “A visual search inspired computational model for ship detection in optical satellite images,” Geoscience and Remote Sensing Letters, IEEE, no. 99, pp. 1–5, July 2012.
  11. X. Xiangwei, J. Kefeng, Z. Huanxin, and S. Jixiang, “A fast ship detection algorithm in sar imagery for wide area ocean surveillance,” in Radar Conference (RADAR), 2012 IEEE.   IEEE, 2012, pp. 0570–0574.
  12. Y. Tang, H. Gao, J. Kurths, and J. Fang, “Evolutionary pinning control and its application in UAV coordination,” Industrial Informatics, IEEE Transactions on, pp. 828–838, 2012.
  13. A. Fitch, A. Kadyrov, W. Christmas, and J. Kittler, “Fast robust correlation,” Image Processing, IEEE Transactions on, vol. 14, no. 8, pp. 1063–1073, 2005.
  14. ——, “Orientation correlation,” in British Machine Vision Conference, vol. 1, 2002, pp. 133–142.
  15. C. D. Kuglin and D. C. Hines, “The phase correlation image alignment method,” in IEEE Conference on Cybernetics and Society, September 1975, pp. 163–165.
  16. L. Moisan, “Periodic plus smooth image decomposition,” Journal of Mathematical Imaging and Vision, vol. 39, no. 2, pp. 161–179, 2011.
  17. J. Lewis, “Fast template matching,” in Vision Interface, vol. 95, 1995, pp. 120–123.
  18. J. J. Pearson, D. C. Hines, S. Golosman, and C. D. Kuglin, “Video rate image correlation processor,” Proc. SPIE, vol. 119, no. 3, pp. 197–205, Aug. 1977.
  19. H. Foroosh, J. B. Zerubia, and M. Berthod, “Extension of phase correlation to subpixel registration,” Image Processing, IEEE Transactions on, pp. 188–200, 2002.
  20. J. Gluckman, “Higher order whitening of natural images,” Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, vol. 2, 2005.
  21. T. Soni, J. R. Zeidler, and W. H. Ku, “Adaptive whitening filters for small target detection,” Proc. SPIE, vol. 1698, 1992.
  22. D. Tsai, I. Chiang, and Y. Tsai, “A shift-tolerant dissimilarity measure for surface defect detection,” Industrial Informatics, IEEE Transactions on, vol. 8, no. 1, pp. 128–137, Feb. 2012.
  23. M. Nielsen, D. Slaughter, and C. Gliever, “Vision-based 3D peach tree reconstruction for automated blossom thinning,” Industrial Informatics, IEEE Transactions on, no. 1, pp. 188–196, 2012.
  24. J. Park and J. Lee, “A beacon color code scheduling for the localization of multiple robots,” Industrial Informatics, IEEE Transactions on, vol. 7, no. 3, pp. 467–475, Aug. 2011.
  25. O. Urhan, M. Gullu, and S. Erturk, “Modified phase-correlation based robust hard-cut detection with application to archive film,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 16, no. 6, pp. 753–770, 2006.
  26. T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, no. 99, pp. 1–1, 2011.
  27. B. Zitova and J. Flusser, “Image registration methods: a survey,” Image and vision computing, vol. 21, no. 11, pp. 977–1000, 2003.
  28. M. Kafai and B. Bhanu, “Dynamic bayesian networks for vehicle classification in video,” Industrial Informatics, IEEE Transactions on, vol. 8, no. 1, pp. 100–109, Feb. 2012.
  29. S. Chen, J. Zhang, and Y. Li, “A hierarchical model incorporating segmented regions and pixel descriptors for video background subtraction,” Industrial Informatics, IEEE Transactions on, vol. 8, no. 1, pp. 118–127, Feb. 2011.
  30. V. Koval’ and R. Schwabe, “Limit theorem for the maximum of dependent Gaussian random elements in a Banach space,” Ukrainian Mathematical Journal, vol. 49, no. 7, pp. 1129–1133, 1997.
  31. G. Wang, L. Tao, H. Di, X. Ye, and Y. Shi, “A scalable distributed architecture for intelligent vision system,” Industrial Informatics, IEEE Transactions on, vol. 8, no. 1, pp. 91–99, Feb. 2012.