
As an important detection tool, lidar has been widely used in remote sensing in recent decades. According to the measurement mode, traditional imaging lidar can be classified into two types: scanning imaging lidar and non-scanning imaging lidar (e.g., flash ladar) [27, 28, 29]. Scanning imaging lidar obtains the real-space image of the target by scanning the target region point by point with a pulsed laser [28]; it is therefore difficult to image high-speed moving targets. Non-scanning imaging lidar [29], which is characterized by a high-resolution imaging system and a pulsed flash laser, covers the whole field of the target and obtains the target's real-space image in a single exposure. However, because the light intensity reflected by the target is divided among many small pixels of a charge-coupled device (CCD) camera, the detection sensitivity is low, and the detection distance of non-scanning imaging lidar is limited by the signal-to-noise ratio (SNR) of the received photons distributed over the whole imaging plane and by the aperture of the imaging system. In addition, the imaging resolution of scanning imaging lidar is limited by the Rayleigh criterion of the emitting aperture [30], while that of non-scanning imaging lidar is limited by the numerical aperture of the receiving system.

Recently, there has been much research on remote sensing with ghost imaging (GI) technology [31, 32, 33, 34, 35, 36, 37]. Combined with sparsity constraints, super-resolution [38, 39, 40], compressive sensing [40, 41, 42, 43, 44, 45], compressive radar [46] and other techniques (http://ecos.maths.ed.ac.uk/SPARS11/) become possible. In this Letter, a practical ghost imaging lidar via sparsity constraints (GISC lidar) system is proposed, and high-resolution imaging is experimentally demonstrated at a range of about 1.0 km.

Figure 1: Experimental setup of GISC lidar.

Fig. 1 presents the setup of the proposed system. The source, which consists of a 532 nm coherent solid-state pulsed laser with 10 ns pulse width, a rotating diffuser (ground-glass disk) and a set of field lenses, forms a speckle field at stop 2 (the field stop). The light emitted by the source is divided by a beam splitter (BS) into an object path and a reference path. In the object path, the light propagates through the objective lens and then to the target. The photons reflected by the target are collected by a light concentrator (a Cassegrain telescope with a 420 mm aperture and a 5 m focal length) and then pass through an interference filter with 1 nm half-width into a photomultiplier tube (PMT). In the reference path, the light goes through the reference lens and then to a CCD camera. In this system, stop 2 is placed on the conjugate plane of both the target and the CCD camera, which controls the field of view (FOV) on the target plane. The field lenses are used to increase the utilization of the light energy and to generate a virtual aperture stop, marked as stop 3 in Fig. 1. The transverse size of the light beam at the objective and reference lenses is therefore controlled by stop 1, which also ensures that the entrance pupils of the objective lens and the reference lens are exactly the same. In addition, the PMT transforms the light signals reflected from the target into electric signals, and the interference filter suppresses the background light.
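The measurement process above can be mimicked numerically. The following toy model (all sizes and the target shape are illustrative, not the experimental values) treats the PMT as a bucket detector that returns one number per pulse, and also forms the conventional ghost image as the intensity correlation between the bucket signal and the reference speckle patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: an n x n grid with a bright bar as a stand-in target.
n = 16
target = np.zeros((n, n))
target[6:10, 3:13] = 1.0

# Stand-in for the pseudo-thermal speckle from the rotating diffuser:
# fully developed speckle has exponentially distributed intensity.
K = 600                                    # number of laser pulses
speckles = rng.exponential(scale=1.0, size=(K, n, n))

# Bucket (PMT) signal: one scalar per pulse, the total light
# reflected by the target under that speckle pattern.
bucket = np.einsum('kij,ij->k', speckles, target)

# Conventional ghost image: correlate the bucket signal with the
# reference patterns and subtract the uncorrelated background.
ghost = (bucket[:, None, None] * speckles).mean(axis=0) \
        - bucket.mean() * speckles.mean(axis=0)
```

The correlation estimate recovers the bar, but noisily; the CS reconstruction formulated below uses the same `speckles` and `bucket` data more efficiently.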

By exploiting the image’s sparsity constraints, CS reconstruction techniques usually yield, as predicted by theoretical analysis and confirmed by experiments, better results when the target is sparse in the representation basis [40, 41, 42, 43, 44, 45]. The reconstruction of GISC lidar will be formulated in the CS framework. By reshaping each of the K speckle intensity distributions (m × n pixels) recorded by the CCD camera (Fig. 1) into a row vector A_i (i = 1, 2, …, K), the measurement matrix A (K × mn) is obtained after K observations. Meanwhile, the intensities recorded by the PMT in the object path are arranged as a column vector y (K × 1). If we reshape the unknown target (m × n pixels) into a column vector x (mn × 1), and x can be represented as x = Ψx′ such that x′ is much sparser (Ψ denotes the transform operator to the sparse representation basis), then the target’s image can be reconstructed by solving the following convex optimization program [39, 44, 47]:

x̂′ = argmin_{x′} (1/2)‖y − AΨx′‖₂² + τ‖x′‖₁,    (1)

where τ is a nonnegative regularization parameter, and ‖·‖₂ and ‖·‖₁ denote the Euclidean norm and the ℓ₁-norm of the argument, respectively.
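A minimal numerical sketch of Eq. (1) in the pixel basis (Ψ = identity), solved with ISTA (a simple proximal-gradient method) rather than the GPSR solver used in the paper; all sizes, the sparsity level, and the speckle model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 16                                  # target is n x n pixels
K = 120                                 # measurements, K << n*n = 256
x_true = np.zeros(n * n)
x_true[rng.choice(n * n, size=10, replace=False)] = 1.0  # sparse scene

A = rng.exponential(scale=1.0, size=(K, n * n))  # speckle-like rows
A -= A.mean(axis=0)                              # remove the DC background
y = A @ x_true                                   # noiseless bucket data

tau = 1.0                                # nonnegative sparsity weight
L = np.linalg.norm(A, 2) ** 2            # step size from the Lipschitz constant
x = np.zeros(n * n)
for _ in range(1000):                    # ISTA: gradient step + soft-threshold
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)
```

With K far smaller than the pixel count, the ℓ₁ penalty still recovers the sparse scene, which is the property GISC lidar relies on.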

To experimentally demonstrate the characteristics of GISC lidar, the concrete parameters in the experiments are as follows: the focal length of the objective lens is 5 m and the magnification of the reference lens is 1. The FOV of the receiving system and of the emitting system at z = 900 m range is about 2 m and 1 m, respectively. The emitting aperture (stop 3) in the test path is D = 18 mm (measured value), so the theoretical resolution (Rayleigh criterion) for a traditional imaging system is 1.22λz/D ≈ 32.5 mm. According to the parameters of the emitting system, the pixel size of the CCD camera in the reference path is set to 27.6 μm × 27.6 μm. The image is reconstructed using the gradient projection for sparse reconstruction (GPSR) algorithm [47].
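The quoted system numbers follow from a line of arithmetic. The sketch below uses only parameters stated above; back-projecting the CCD pixel pitch onto the target plane via z/f is our simplifying assumption for the unit-magnification geometry:

```python
wavelength = 532e-9   # laser wavelength, m
z = 900.0             # target range, m
D = 18e-3             # emitting aperture (stop 3), m
f = 5.0               # focal length of the objective lens, m
pitch = 27.6e-6       # CCD pixel pitch, m

# Rayleigh criterion on the target plane for a conventional system.
rayleigh = 1.22 * wavelength * z / D      # ~32.5 mm

# One CCD pixel back-projected onto the target plane.
pixel_on_target = pitch * z / f           # ~4.97 mm

print(f"Rayleigh limit:  {rayleigh * 1e3:.1f} mm")
print(f"pixel on target: {pixel_on_target * 1e3:.2f} mm")
```

The ~5 mm effective pixel is consistent with the imaging areas reported below (e.g., 140 pixels × 4.97 mm ≈ 695 mm), and the 32.5 mm Rayleigh limit is the benchmark against which the 20 mm result counts as super-resolution.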

Figure 2: Experimental reconstruction results for the designed high-reflection targets at 900 m range (with 3000 measurements). (a, b) The original target plates imaged by a camera and a telescope, respectively; (c) the concrete sizes of a standard Chinese vehicle license plate; (e) the concrete sizes of a set of resolution panels; (d) and (f) the targets’ images reconstructed by GISC lidar, with the targets represented in the real-space (pixel) basis; (g) the cross-section of the rectangular selection box in (f).

The first demonstration of GISC lidar was performed using our designed targets mounted on a building located about z = 900 m away. The targets are highly reflective: a standard Chinese vehicle license plate and a set of resolution panels. The line width of the characters on the vehicle license plate, as shown in Fig. 2(c), is about 10 mm. The imaging area on the CCD camera is 140 × 72 pixels, which means that the imaging area on the target plane is 695 mm × 357 mm. Fig. 2(d) presents the image of the vehicle license plate reconstructed by GISC lidar. The resolution panels, as shown in Fig. 2(e), are divided into three groups (six-slit, three-slit and double-slit), and the center-to-center separations between slits are 20 mm, 40 mm and 60 mm, respectively. The imaging area on the CCD camera is 128 × 92 pixels, which means that the imaging area on the target plane is 635 mm × 457 mm. The reconstructed results of the resolution panels are illustrated in Fig. 2(f) and Fig. 2(g). From Fig. 2(f) and Fig. 2(g), the panels with 20 mm separation can be clearly differentiated, which also demonstrates that super-resolution imaging is achieved by our GISC lidar.

Figure 3: Reconstruction results of a natural target (the overhead target of a tower) at 720 m range (with 3000 measurements). (a) and (b) are images of the target taken by a camera and a telescope, respectively; (c) is the target’s image reconstructed by GISC lidar, with the target represented in the two-dimensional Discrete Cosine Transform (2D-DCT) basis.

Another demonstration of GISC lidar was conducted to image the overhead target of a 200 m tower located about z = 720 m away. The target’s images, taken by a camera and a telescope, are shown in Fig. 3(a) and Fig. 3(b). Fig. 3(c) presents the reconstruction result of the target (where the target is represented in the 2D-DCT basis). In Fig. 3(c), the three black antennas are invisible due to their ultra-low reflectivity and specular reflection.

Compared with traditional imaging lidar, GISC lidar combines the advantages of scanning and non-scanning imaging lidar. Similar to scanning imaging lidar, GISC lidar has high detection efficiency and long detection distance, since all photons collected by the concentrator illuminate the same PMT. Similar to non-scanning imaging lidar, the laser pulse emitted from GISC lidar covers the whole detection field; therefore, the image can be reconstructed without scanning the target, and imaging of high-speed targets is possible even when the number of samples is far fewer than the number of pixels required for the desired resolution [43, 44, 45]. In addition, the CCD camera in the reference path, which limits the sampling speed at present, can be omitted by using techniques such as computational ghost imaging [48, 49] or by encoding the pseudo-thermal light source [50], and approaches such as image separation reconstructions [51, 52] are also helpful for imaging high-speed targets. Furthermore, as indicated by the results in Fig. 2 and in Ref. [39], GISC lidar can also realize super-resolution imaging.
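The computational ghost imaging idea mentioned above [48, 49] replaces the measured reference patterns with computed ones: if the rotating diffuser is replaced by a programmable phase modulator, each reference speckle pattern can be obtained by numerically propagating the known phase pattern rather than recording it on the CCD. A minimal sketch under that assumption (all parameters illustrative; propagation via the paraxial angular-spectrum method):

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, pitch):
    """Propagate a sampled complex field a distance z (paraxial transfer function)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function (constant phase factor dropped)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n = 64
phase = rng.uniform(0, 2 * np.pi, size=(n, n))   # known modulator phase pattern
field = np.exp(1j * phase)                        # unit-amplitude field
# Computed reference speckle intensity at the target plane.
speckle = np.abs(fresnel_propagate(field, 532e-9, 900.0, 27.6e-6)) ** 2
```

Because |H| = 1, the propagation conserves total intensity, so the computed speckle can be used directly in place of a CCD-recorded reference pattern.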

In conclusion, we experimentally demonstrate a novel imaging lidar by combining the GI method with sparse and redundant representations. We show that GISC lidar has the advantages of long detection distance, high imaging speed and super-resolution imaging capability.

The work was supported by the Hi-Tech Research and Development Program of China under Grant Project No. 2011AA120101 and No. 2011AA120102.

References

  27. C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques (IEEE Press, New York, 1988).
  28. R. Ahola, T. Heikkinen, and M. Manninen, “3D-image acquisition by scanning time of flight measurements,” Proc. Int. Conf. Image Processing and Pattern Recognition 139 (1985).
  29. J. P. Anthes, P. Garcia, J. T. Piercs, and P. V. Dressendorfer, “Non-Scanned Ladar Imaging and Applications,” SPIE 1936, 11 (1993).
  30. P. Zhang, W. Gong, X. Shen, and S. Han, “Improving resolution by the second-order correlation of light fields,” Opt. Lett. 34, 1222 (2009).
  31. W. Gong, P. Zhang, X. Shen, and S. Han, “Ghost ‘pinhole’ imaging in Fraunhofer region,” Appl. Phys. Lett. 95, 071110 (2009).
  32. W. Gong and S. Han, “Lens ghost imaging with thermal light: from the far field to the near field,” Phys. Lett. A 374, 3723 (2010).
  33. J. Cheng, “Ghost imaging through turbulent atmosphere,” Opt. Express 17, 7916 (2009).
  34. P. Zhang, W. Gong, X. Shen, and S. Han, “Correlated imaging through atmospheric turbulence,” Phys. Rev. A 82, 033817 (2010).
  35. P. B. Dixon, G. A. Howland, K. W. C. Chan, C. O’Sullivan-Hale, B. Rodenburg, N. D. Hardy, J. H. Shapiro, D. S. Simon, A. V. Sergienko, R. W. Boyd, and J. C. Howell, “Quantum ghost imaging through turbulence,” Phys. Rev. A 83, 051803R (2011).
  36. N. D. Hardy and J. H. Shapiro, “Reflective ghost imaging through turbulence,” Phys. Rev. A 84, 063824 (2011).
  37. R. E. Meyers, K. S. Deacon, and Y. Shih, “Turbulence-free ghost imaging,” Appl. Phys. Lett. 98, 111115 (2011).
  38. D. L. Donoho, “Superresolution via sparsity constraints,” Siam. J. Math. Anal. 23, 1309 (1992).
  39. W. Gong and S. Han, “Super-resolution far-field ghost imaging via compressive sampling,” e-print arXiv:0911.4750 [quant-ph].
  40. E. J. Candès, J. K. Romberg, T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pur. Appl. Math. 59, 1207 (2006).
  41. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, 52, 1289 (2006).
  42. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21 (2008).
  43. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 131110 (2009).
  44. J. Du, W. Gong, and S. Han, “The influence of sparsity property of images on ghost imaging with thermal light,” Opt. Lett. 37, 1067 (2012).
  45. A. Stern, Y. Rivenson, and B. Javidi, “Single-shot compressive imaging,” Proc. SPIE 6778, 67780J (2007).
  46. M. Herman and T. Strohmer, “High-resolution radar via compressed sensing,” IEEE Trans. Signal Process. 57, 2275 (2009).
  47. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. in Sig. Proc. 1, 586 (2007).
  48. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802R (2008).
  49. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 053840 (2009).
  50. H. Li, Z. Chen, J. Xiong, and G. Zeng, “Periodic diffraction correlation imaging without a beam-splitter,” Opt. Express 20, 2956 (2012).
  51. J. L. Starck, D. L. Donoho, E. J. Candès, “Astronomical image representation by the curvelet transform,” Astron. Astrophys. 398, 785 (2003).
  52. G. Kutyniok and W. Lim, “Image separation using wavelets and shearlets,” e-print arXiv:1101.0553 [math.NA].