Study on Focusing of Area Array Camera by Using Frequency of Images

Abstract

Focusing of an area array camera is an important step in building a high-precision imaging camera, and its testing method deserves dedicated study. This paper introduces a camera focusing method in which the defocus distance of the camera is calculated from the frequency spectrum of a defocused image. The method is especially suitable for focusing plane (area) array cameras and avoids the complicated work of adjusting the focal plane of such a camera during the focusing process.


1. Introduction

Two kinds of methods, DFF (Depth from Focus) [1] [2] [3] [4] [5] and DFD (Depth from Defocus), have been described for measuring the defocus distance of a camera and for rapid autofocusing of a camera system. The DFF method depends on a search algorithm: by analyzing the quality of a series of images, the sharpness at different focus positions is obtained, and the optimal position of the optical focal plane of the camera is then calculated by fitting the defocus curve. During the actual adjustment, the collimator target should be parallel to the focal plane. For line-array CCD cameras this can be achieved by adjusting the horizontal direction of the target, which is simple and convenient, so the method is widely used for focal-plane adjustment of line-array cameras. For a plane (area) array camera, however, the method has limitations: to ensure parallelism, the target must be adjusted in two directions, which is difficult to carry out, and the algorithm needs many images, so the workload is heavy. Therefore, the DFF method is not well suited to plane array cameras.

This paper describes the DFD method for adjusting the focal plane of the camera. Using only two images taken with different camera parameters, such as lens position, focal length or aperture diameter, the depth information of the camera can be obtained from the frequency distribution of the images, and the defocus distance of the camera can then be calculated directly by the DFD method. Focusing of the camera system is realized by adjusting the thickness of the gasket at the lens position. Although the method is based on an analysis of the frequency domain of the images, the result does not need to be deduced in frequency space. The method is therefore simple and well suited to plane array cameras. The DFD method is explained in detail below, and the operating procedure is demonstrated.

2. The Principle of DFD Method

According to geometrical optics and Figure 1, we have

$\dfrac{D}{2R} = \dfrac{f}{v - f}$ (1)

R is positive when the receiver is behind the focal plane and negative when the receiver is in front of the focal plane. In a practical optical system, the point spread function (PSF) of the imaging system is not the ideal Airy pattern because of the influence of design residuals and fabrication errors. If the system transfer function is close to the diffraction limit and the system is lossless, the energy distribution can be described by a Gaussian distribution [1]

$h(x, y) = \dfrac{1}{2\pi\delta^{2}} \exp\left[-\dfrac{x^{2} + y^{2}}{2\delta^{2}}\right]$ (2)

and

$\displaystyle\iint h(x, y)\,\mathrm{d}x\,\mathrm{d}y = 1$ (3)

According to the conclusions of many experiments [2], δ is proportional to R, i.e.

Figure 1. Imaging geometry of the optical imaging system. L: lens; f: focal length; R: blur circle radius; D: aperture diameter; v: focusing image distance.

$\delta = \alpha R$ for $\alpha > 0$ (4)

where α is a constant of proportionality characteristic of the measured camera. In most practical cases, $\alpha = 1/\sqrt{2}$ is a good approximation.

Therefore

$\delta = R / \sqrt{2}$ (5)
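As a concrete illustration of Equations (1)-(5), the following minimal Python sketch computes the blur parameter δ for a given image distance and samples the corresponding Gaussian PSF. It assumes a collimated (infinity) target so that the in-focus plane lies at the focal length, consistent length units throughout, and that δ is interpreted in pixels when the discrete kernel is built; the function names and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def blur_sigma(v, f, D):
    """Gaussian blur parameter delta for a sensor at image distance v.

    Follows Equations (1), (4) and (5): R = D * (v - f) / (2 * f) and
    delta = R / sqrt(2).  All lengths share the same unit; delta is
    signed like R (negative in front of the focal plane)."""
    R = D * (v - f) / (2.0 * f)
    return R / np.sqrt(2.0)

def gaussian_psf(delta, size=15):
    """Discrete PSF of Equation (2) on a size-by-size pixel grid,
    normalized so its samples sum to 1 (discrete form of Equation (3)).
    Here |delta| is interpreted in pixels and must be non-zero."""
    half = size // 2
    x = np.arange(-half, half + 1)
    X, Y = np.meshgrid(x, x)
    h = np.exp(-(X**2 + Y**2) / (2.0 * delta**2))
    return h / h.sum()

# Example: blur parameter for a sensor 2 mm behind the focal plane of an
# f = 100 mm, D = 50 mm lens imaging a collimated (infinity) target.
print(blur_sigma(v=102.0, f=100.0, D=50.0))   # ~0.35 (same unit as the inputs)
```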

The imaging system can be treated as a linear system, so the imaging process can be expressed [6] as

$I_{i}(x_{i}, y_{i}) = \displaystyle\iint I_{g}(x_{0}, y_{0})\, h(x_{i} - x_{0},\, y_{i} - y_{0})\,\mathrm{d}x_{0}\,\mathrm{d}y_{0} = I_{g}(x, y) * h(x, y)$ (6)

where $I_g$ is the energy distribution of the image formed by the ideal optical system and $I_i$ is the energy distribution of the image formed by the real optical system. Applying the Fourier transform to Equation (6), we have

$G_{i}(\xi, \eta) = G_{g}(\xi, \eta)\, H(\xi, \eta) = G_{g}(\xi, \eta) \exp\left[-2\pi^{2}\delta^{2}\left(\xi^{2} + \eta^{2}\right)\right]$ (7)

where $G_g$ is the frequency distribution of the image formed by the ideal optical system and $G_i$ is the frequency distribution of the image formed by the real optical system. Because $H(\xi, \eta)$ is circularly symmetric, the radial frequency distribution of the real image can be expressed as

$D(r) = \dfrac{1}{2\pi r}\displaystyle\int_{0}^{2\pi} \left|G_{i}(r, \theta)\right| \mathrm{d}\theta = \dfrac{1}{2\pi r} \exp\left(-2\pi^{2}\delta^{2} r^{2}\right) \displaystyle\int_{0}^{2\pi} \left|G_{g}(r, \theta)\right| \mathrm{d}\theta$ (8)

where r is the radius in the radial frequency distribution of the image. From the radial frequency distribution of the blurred image, the defocus distance of the camera can be obtained as follows. Using two blurred images taken with different focusing image distances, i.e. $v_1$ and $v_2$, we obtain the radial frequency distributions of the two images at r = a. Comparing the two radial frequency distributions, we have

$D_{1}(a) / D_{2}(a) = \exp\left(-2\pi^{2} a^{2} \delta_{1}^{2}\right) / \exp\left(-2\pi^{2} a^{2} \delta_{2}^{2}\right)$ (9)

It might appear that Equation (9) requires the full frequency distribution of the images to be calculated; in fact only D(r) is needed,

$D(r) = \dfrac{1}{360}\displaystyle\sum_{\theta=0}^{359} \left|\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m, n) \exp\left[-j\dfrac{2\pi m}{M} r\cos\theta\right] \exp\left[-j\dfrac{2\pi n}{N} r\sin\theta\right]\right|$ (10)

where f(m, n) is the energy distribution of the image, m and n are the pixel indices along the x-axis and y-axis, and M and N are the sizes of the image in the x and y directions. From Equations (9) and (10), the quantity $\delta_1^2 - \delta_2^2$ can be obtained. The distance d is defined by

$d = v_{2} - v_{1}$ (11)

Using Equations (1), (5) and (9), we obtain the distance $v_1$:

$v_{1} = \dfrac{4 f^{2}\left(\delta_{2}^{2} - \delta_{1}^{2}\right)}{D^{2} d} + f - \dfrac{d}{2}$ (12)

where D is the aperture diameter.
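One way to see how Equation (12) follows from Equations (1), (5) and (11): substituting (1) into (5) gives, for each image distance,

$$\delta^{2} = \frac{R^{2}}{2} = \frac{D^{2}(v - f)^{2}}{8 f^{2}},$$

so that, with $v_2 = v_1 + d$,

$$\delta_1^{2} - \delta_2^{2} = \frac{D^{2}}{8 f^{2}}\left[(v_1 - f)^{2} - (v_1 + d - f)^{2}\right] = -\frac{D^{2} d}{8 f^{2}}\left[2(v_1 - f) + d\right],$$

and solving for $v_1$ yields Equation (12).

A minimal Python sketch of the computation described by Equations (9), (10) and (12) follows. It assumes grayscale images supplied as square NumPy arrays with equal pixel pitch in both directions, that both images show the same target, and that a pixel size is supplied so that the DFT-index radius can be converted to a physical spatial frequency consistent with δ, f, D and d; the function names and parameters are illustrative, not part of the paper.

```python
import numpy as np

def radial_spectrum(img, r, n_theta=360):
    """Radial frequency distribution D(r) of Equation (10): the average
    magnitude of the 2-D DFT of img, sampled on a circle of radius r
    (in frequency-index units) around the zero-frequency origin."""
    M, N = img.shape
    m = np.arange(M).reshape(-1, 1)          # pixel index along the first axis
    n = np.arange(N).reshape(1, -1)          # pixel index along the second axis
    total = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False):
        fu = r * np.cos(theta)               # frequency point on the circle
        fv = r * np.sin(theta)
        kernel = np.exp(-2j * np.pi * (m * fu / M + n * fv / N))
        total += np.abs(np.sum(img * kernel))
    return total / n_theta

def defocus_distance(img1, img2, r, pixel_size, d, f, D):
    """Focusing image distance v1 from two defocused images taken at image
    distances v1 and v2 = v1 + d, following Equations (9), (11) and (12).

    r          : DFT-index radius at which the two spectra are compared
    pixel_size : sensor pixel pitch, in the same length unit as d, f and D
    Square images (M = N) showing the same target are assumed."""
    M, N = img1.shape
    D1 = radial_spectrum(img1, r)
    D2 = radial_spectrum(img2, r)
    a = r / (M * pixel_size)                 # physical spatial frequency (cycles per length unit)
    # Equation (9): D1/D2 = exp(-2*pi^2*a^2*(delta1^2 - delta2^2))
    delta_sq_diff = -np.log(D1 / D2) / (2.0 * np.pi ** 2 * a ** 2)
    # Equation (12), with d = v2 - v1 as in Equation (11)
    return -4.0 * f ** 2 * delta_sq_diff / (D ** 2 * d) + f - d / 2.0
```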

3. Programs

3.1. The Program of the DFD Method

A plane array camera is used as an example to illustrate the process of realizing the fixed focus of the camera by the DFD method. The program of the camera measurement is shown in Figure 2. The camera is installed on a tripod with special tooling, and a collimator is placed in front of it. A target is placed at the focal plane of the collimator so that it is imaged at infinity; its shape is shown in Figure 3. The target is divided into 60 equal sectors, i.e. alternating black and white stripes at 6° intervals.
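For offline experimentation with the algorithm, a synthetic version of such a fan-shaped target can be generated. The sketch below assumes 60 alternating 6° black and white sectors on a white background and an illustrative image size; it is a stand-in for, not a reproduction of, the actual test chart.

```python
import numpy as np

def fan_target(size=512, sectors=60):
    """Synthetic fan-shaped (sector-star) target: alternating black and
    white wedges, 360/sectors degrees each, on a white background."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    angle = np.degrees(np.arctan2(y, x)) % 360.0
    wedge = np.floor(angle / (360.0 / sectors)).astype(int)
    target = (wedge % 2).astype(float)           # alternate black (0) / white (1)
    target[x**2 + y**2 > half**2] = 1.0          # white outside the disc
    return target
```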

The optical axes of the camera and the collimator should coincide so that the image of the target remains on the image surface of the CMOS at all times. The target position in the collimator is adjusted to simulate defocus. The focal position of the collimator is taken as the origin of coordinates in the experiment: the target is placed at the focal position of the collimator, and movement away from the camera is defined as positive, movement toward the camera as negative. Defocused images are acquired while the target is moved from −2 mm to 3 mm at intervals of 0.5 mm. The image signal of the camera is acquired through the video output of the imaging system.
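The acquisition sweep described above can be mirrored in code. The sketch below steps the target position from −2 mm to 3 mm in 0.5 mm increments, grabs one frame per position, and evaluates the radial frequency distribution at a fixed radius, which is the kind of data plotted in Figure 10. Here `grab_frame` is a hypothetical placeholder for the imaging system's video-capture interface, `radial_spectrum` is the function from the sketch at the end of Section 2, and the radius value is illustrative.

```python
import numpy as np

def grab_frame(target_position_mm):
    """Hypothetical placeholder for the camera's video-capture interface:
    return one grayscale frame with the target at the given position.
    Replace with the real acquisition call of the imaging system."""
    raise NotImplementedError("connect to the camera's video output here")

positions = np.arange(-2.0, 3.0 + 0.5, 0.5)    # target positions in mm
r = 40                                          # DFT-index radius (illustrative)

# radial_spectrum is defined in the sketch at the end of Section 2
spectra = [radial_spectrum(grab_frame(pos), r) for pos in positions]
best = positions[int(np.argmax(spectra))]
print(f"Highest frequency content near target position {best:+.1f} mm")
```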

Figure 2. Program of the DFD measurement.

Figure 3. Fan-shaped target.

3.2. Results

Figures 4-9 illustrate the concentration of high-frequency content in the images of the target, which is magnified 8 times by the optical system and defocused at different positions.

According to the results shown in Table 1, the optimal position of the focal plane of the camera is conjugate with the target at 1 mm. The focal plane should be adjusted so that it is conjugate with the target at 0 mm; therefore, the thickness of the gaskets should be reduced by 0.02 mm.

Comparing the defocus distances obtained by the DFD method with those obtained by the traditional method, the DFD results differ by less than 0.05 mm from the traditional results, which satisfies the accuracy requirements.

The radial frequency distribution curves are shown in Figure 10. The frequency data at the target position of 1 mm are the largest in the graph, so this target position corresponds to the location of the optimal focal plane, which is consistent with the result obtained by the traditional method. This confirms that the algorithm is correct.

According to the result, the measurement accuracy is within the range of the camera's depth of focus, so the result obtained by the DFD method is valid and effective.

Figure 4. The target at 2 mm.

Figure 5. The target at 1.5 mm.

Figure 6. The target at 1 mm.

Figure 7. The target at 0.5 mm.

Figure 8. The target at 0 mm.

Figure 9. The target at −0.5 mm.

Figure 10. Radial frequency distribution.

Table 1. Comparison of the defocus distances obtained by the traditional method and by the DFD method.

4. Conclusion

With the focusing method described above, the DFD procedure is much simpler and easier to operate than the traditional method. The DFD method can improve measurement efficiency and can be widely applied to plane array cameras.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Pei, X., Feng, H., Li, Q. and Xu, Z. (2003) A Depth from Defocus Auto-Focusing Method Based on Frequency Analysis. Opto-Electronic Engineering, 30, 62-65.
[2] Subbarao, M. and Surya, G. (1994) Depth from Defocus: A Spatial Domain Approach. International Journal of Computer Vision, 13, 271-294. https://doi.org/10.1007/BF02028349
[3] Kim, S.K., Paik, S.R. and Park, J.K. (1998) Simultaneous Out-of-Focus Blur Estimation and Restoration for Digital Auto-Focusing System. IEEE Transactions on Consumer Electronics, 44, 1071-1075. https://doi.org/10.1109/30.713236
[4] Subbarao, M. and Wei, T. (1992) Depth from Defocus and Rapid Autofocusing: A Practical Approach. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, Illinois, June 1992. https://doi.org/10.1109/CVPR.1992.223176
[5] Subbarao, M. and Surya, G. (1992) Application of Spatial-Domain Convolution/Deconvolution Transform for Determining Distance from Image Defocus. Proceedings of SPIE Conference, OE/TECHNOLOGY'92, Boston, November 1992, Vol. 1822, 159-167.
[6] Lai, S. and Fu, C. (1992) A Generalized Depth Estimation Algorithm with a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 405-411. https://doi.org/10.1109/34.126803
