Multiple Targets Recognition for Highly-Compressed Color Images in a Joint Transform Correlator

Abstract

In this paper, we propose a compression-based multiple color target detection scheme for practical near real-time optical pattern recognition applications. By compressing the color images to the greatest extent possible, the processing speed and storage efficiency of the system are greatly increased. We use the powerful fringe-adjusted joint transform correlation technique to successfully detect multiple compressed targets in color images. The color image is decomposed into its three fundamental color components (Red, Green, Blue), which are processed separately by a three-channel correlator. The outputs of the three channels are then combined into a single correlation output. To eliminate the false alarms and zero-order terms caused by multiple desired and undesired targets in a scene, we use the reference shifted phase-encoded and reference phase-encoded techniques. The performance of the proposed compression-based technique is assessed through many computer simulation tests on images polluted by strong additive Gaussian and Salt & Pepper noise as well as occluded reference images. The robustness of the scheme is demonstrated for severely compressed images (up to a 94% compression ratio), strong noise densities (up to 0.5), and large reference occlusions (up to 75%).

Share and Cite:

Cherri, A. and Nazar, A. (2022) Multiple Targets Recognition for Highly-Compressed Color Images in a Joint Transform Correlator. Optics and Photonics Journal, 12, 107-127. doi: 10.4236/opj.2022.125009.

1. Introduction

Optical correlation techniques have been investigated widely in the past decades as excellent potential architectures for target recognition and tracking applications, as well as for other optical information processing applications such as optical cryptosystems [1] - [20]. Since its introduction in the sixties of the last century [22], the optical joint transform correlator (JTC) has gained growing interest as a near real-time optical processor over the classical VanderLugt correlator [21], which suffers from poor light efficiency, large correlation sidelobes, and large autocorrelation width, in addition to the need for precise alignment of the optical elements and the fabrication of complex-valued filters. Interest in real-time JTC-based applications has grown with the fast development of electrically addressed spatial light modulators (EASLMs). The discrimination capability of target detection using the optical JTC was greatly improved by various proposed forms of the JTC technique [5] [6] [7] [8] [9]. Among them, the fringe-adjusted JTC (FJTC) technique has been proven to greatly enhance the JTC correlation peaks [3]. The JTC produces false alarms when the input scene contains many identical targets as well as many identical non-target objects. To alleviate these problems, several methods have been reported, such as Fourier plane subtraction, shifted phase encoding, and random phase masks.

Usually, target images captured by a CCD camera, as well as reference images, are uploaded to the EASLM at the input of the JTC architecture for target recognition processing. A major factor affecting the processing speed is the size of the uploaded images. Further, variations in target images with respect to rotation and scaling necessitate the storage of a large number of reference images for successful target detection; thus, a large storage capacity is needed. Furthermore, the limited size and pixel resolution of the SLM put more constraints on the useful image size for practical optical processing. These constraints on image size and the growing storage requirements become more pronounced when dealing with high-resolution color images. Consequently, researchers and developers seek techniques to compress images for practical image transmission, automated optical target recognition, and/or encryption applications. In this regard, the JPEG (Joint Photographic Experts Group) compression algorithm has proven to be one of the most efficient techniques for compressing images and is widely used in the transmission and storage of images [23] [24] [25] [26] [27].

In this paper, we propose a compression-based JTC that detects multiple color targets using highly compressed target and/or reference images. We prevent the usual false-alarm correlation peaks at the output plane (a common issue in multiple-target detection) by using the fringe-adjusted filter together with the reference phase-shifted and reference phase-encoded schemes. The proposed system detects multiple targets and references compressed up to a ratio of 94%. Many simulation experiments (with added Gaussian and Salt & Pepper noise as well as occluded reference images) are carried out to demonstrate the robustness, discrimination, and detection capability of the proposed scheme. In Section 2, we provide the theoretical discussion of the various JTC schemes. Section 3 presents the computer simulation experiments on the compressed color images. Section 4 is a short conclusion of this work.

2. The Joint Transform Correlator

The classical JTC is widely used in various imaging systems for accurate reconstruction of the optical field and for Fourier plane filtering. Near real-time optical pattern recognition can be implemented using the JTC since it does not need a matched filter. One possible JTC architecture is shown in Figure 1. In this figure, an input scene $s(y - y_0)$, captured by a CCD camera or stored in a computer, is displayed on the spatial light modulator (SLM) side by side with a reference image $r(y + y_0)$, which is stored in a computer system. Both images can be easily updated in real time. The lens performs the Fourier transform (FT) of the joint image, and its intensity (called the joint power spectrum (JPS)) is captured by a CCD camera and sent to the computer for processing. The processed JPS is loaded again into the SLM, Fourier transformed by the lens, and its intensity is recorded by the CCD camera to produce the correlation output. For color images, both the reference and the scene images are separated into the three basic color components (Red, Green, and Blue). Then, the individual color component images are processed either sequentially or in three separate correlators (three channels). The final correlation output is the combination of the three correlator outputs. In the following discussion, for convenience and simplicity, we adopt a one-dimensional representation and present the mathematical expressions for one channel. The input joint image can be expressed as:

$$f(y) = r(y + y_0) + s(y - y_0) \tag{1a}$$

$$s(y - y_0) = \sum_{i=1}^{n} t_i(y - y_i) + \sum_{i=1}^{n} o_i(y - y_i) + n(y - y_i) \tag{1b}$$

where $\sum_{i=1}^{n} t_i(y - y_i)$ are the multiple target patterns, $\sum_{i=1}^{n} o_i(y - y_i)$ are other identical and nonidentical unwanted patterns, and $n(y - y_i)$ represents added noise.

Figure 1. Classical JTC architecture.

The lens performs the FT of Equation (1):

$$F(v) = |R(v)| e^{j[\varphi_r(v) + v y_0]} + |S(v)| e^{j[\varphi_s(v) - v y_0]} \tag{2a}$$

$$|S(v)| e^{j[\varphi_s(v) - v y_0]} = |N(v)| e^{j[\varphi_n(v) - v y_i]} + \sum_{i=1}^{n} |T_i(v)| e^{j[\varphi_{t_i}(v) - v y_i]} + \sum_{i=1}^{n} |O_i(v)| e^{j[\varphi_{o_i}(v) - v y_i]} \tag{2b}$$

where $|R(v)|$, $|S(v)|$, $|T_i(v)|$, $|N(v)|$, and $|O_i(v)|$ are the amplitudes, and $\varphi_r(v)$, $\varphi_s(v)$, $\varphi_{t_i}(v)$, $\varphi_n(v)$, and $\varphi_{o_i}(v)$ are the phases of the Fourier transforms of $r(y)$, $s(y)$, $t_i(y)$, $n(y)$, and $o_i(y)$, respectively; $v$ is the frequency-domain variable scaled by a factor $2\pi/\lambda f$, where $\lambda$ is the wavelength of the collimating light and $f$ is the focal length of the lens.

At the focal plane of the lens, the CCD camera records the intensity of Equation (2):

$$|F(v)|^2 = |R(v)|^2 + |S(v)|^2 + 2|R(v)||S(v)| \cos[\varphi_r(v) - \varphi_s(v) + 2 v y_0] \tag{3a}$$

$$\begin{aligned}
|S(v)|^2 = {} & |N(v)|^2 + \sum_{i=1}^{n} |T_i(v)|^2 + \sum_{i=1}^{n} |O_i(v)|^2 \\
& + 2 \sum_{i=1}^{n} \sum_{\substack{k=1 \\ k \ne i}}^{n} |T_i(v)||T_k(v)| \cos[\varphi_{t_i}(v) - \varphi_{t_k}(v) - v y_i + v y_k] \\
& + 2 \sum_{i=1}^{n} \sum_{\substack{k=1 \\ k \ne i}}^{n} |O_i(v)||O_k(v)| \cos[\varphi_{o_i}(v) - \varphi_{o_k}(v) - v y_i + v y_k] \\
& + 2 \sum_{i=1}^{n} \sum_{k=1}^{n} |T_i(v)||O_k(v)| \cos[\varphi_{t_i}(v) - \varphi_{o_k}(v) - v y_i + v y_k] \\
& + 2 \sum_{i=1}^{n} |T_i(v)||N(v)| \cos[\varphi_{t_i}(v) - \varphi_n(v) - v y_i] \\
& + 2 \sum_{i=1}^{n} |O_i(v)||N(v)| \cos[\varphi_{o_i}(v) - \varphi_n(v) - v y_i]
\end{aligned} \tag{3b}$$

The first term in Equation (3a) and the first three terms in Equation (3b) represent strong zero-order DC terms. The fourth and fifth terms represent the correlations between the identical targets and between the identical non-targets, while the sixth term is the cross-correlation between the targets and the other objects. The seventh and eighth terms are the cross-correlations of the noise with the targets and with the other objects. All these non-useful correlation terms arise within the input scene, and they greatly degrade the detection capability of the JTC. Taking the inverse FT of Equation (3) and recording its intensity produces the correlation output. A typical correlation output of a classical JTC is shown in Figure 2, where the desired autocorrelation peaks of the targets coexist with many other cross-correlation peaks as well as a wide and strong peak (DC term) at the center of the output, as discussed in Equation (3b). In addition, the classical JTC suffers from large correlation sidelobes, large correlation peak width, and low optical efficiency.
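The classical JTC of Equations (1)-(3) can be sketched numerically. The following is an illustrative 1-D NumPy simulation (signal sizes and positions are our own choices, not from the paper): the reference and one identical target are placed side by side, the JPS is the squared magnitude of the FT, and a second transform of the JPS yields the correlation plane.

```python
import numpy as np

# Illustrative 1-D sketch of the classical JTC of Eqs. (1)-(3).
N = 2048
joint = np.zeros(N)
joint[200:232] = 1.0           # reference r(y + y0): a 32-pixel patch
joint[1200:1232] = 1.0         # identical target in the scene s(y - y0)

jps = np.abs(np.fft.fft(joint)) ** 2        # joint power spectrum, Eq. (3)
corr = np.abs(np.fft.ifft(jps))             # correlation-plane intensity

# A strong zero-order (DC) peak sits at the origin, and the reference-target
# cross-correlation peak appears at their separation (1000 pixels here).
dc_peak, cross_peak = corr[0], corr[1000]
```

Even with clean, noise-free signals, `dc_peak` (the total input energy) dominates `cross_peak`, which is exactly the zero-order problem the modified-JPS schemes are designed to remove.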

There are several techniques to enhance target detectability in the JTC, especially when multiple targets or identical non-target objects are present in an input scene, which cause false alarms. Fourier plane subtraction [8] is one method, which subtracts both the input-scene power spectrum ($|S(v)|^2$) and the reference-image power spectrum ($|R(v)|^2$) from the JPS of Equation (3). The

Figure 2. A typical classical JTC output.

modified JPS is expressed as:

$$P(v) = |F(v)|^2 - |S(v)|^2 - |R(v)|^2 = 2|R(v)||S(v)| \cos[\varphi_r(v) - \varphi_s(v) + 2 v y_0] \tag{4}$$

Therefore, the Fourier plane subtraction technique removes all the terms in Equation (3b) as well as the $|R(v)|^2$ DC term. Now, Equation (4) contains only the correlation terms between the reference and the objects in the input scene, along with the conjugates of these terms. It is worth mentioning that the subtraction scheme is a three-step process. As an alternative, a two-step process called the reference shifted phase-encoding technique provides the same results. Hence, this technique requires fewer processing steps than the subtraction technique, which enhances the system processing speed. The first step is to display the input joint image of Equation (1) on the SLM and capture its JPS, which is the same as Equation (3). The second step is to phase-shift the reference image by 180˚, combine it with the input scene, take the FT, and capture its JPS:

$$f_1(y) = -r(y + y_0) + s(y - y_0) \tag{5}$$

$$F_1(v) = -|R(v)| e^{j[\varphi_r(v) + v y_0]} + |S(v)| e^{j[\varphi_s(v) - v y_0]} \tag{6}$$

$$|F_1(v)|^2 = |R(v)|^2 + |S(v)|^2 - 2|R(v)||S(v)| \cos[\varphi_r(v) - \varphi_s(v) + 2 v y_0] \tag{7}$$

Now, digital subtraction of Equation (7) from Equation (3) yields the modified joint power spectrum:

$$P(v) = |F(v)|^2 - |F_1(v)|^2 = 4|R(v)||S(v)| \cos[\varphi_r(v) - \varphi_s(v) + 2 v y_0] \tag{8}$$
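The two-step scheme of Equations (5)-(8) can be sketched as follows (illustrative 1-D data, our own signal choices). Negating the reference implements the 180˚ phase shift; subtracting the two joint power spectra cancels $|R|^2$, $|S|^2$, and every intra-scene term, leaving only the cross terms of Equation (8).

```python
import numpy as np

# Sketch of the reference shifted phase-encoding (two-step) scheme.
N = 2048
ref = np.zeros(N); ref[200:232] = 1.0             # reference patch
scene = np.zeros(N)
scene[1200:1232] = 1.0                            # identical target
scene[900:932] = 0.7                              # dimmer non-target, same shape

jps = np.abs(np.fft.fft(ref + scene)) ** 2        # Eq. (3)
jps_shifted = np.abs(np.fft.fft(-ref + scene)) ** 2   # Eq. (7), negated reference
P = jps - jps_shifted                             # modified JPS, Eq. (8)

corr = np.abs(np.fft.ifft(P))
# corr[0] is now ~0 (zero-order term removed), the intra-scene target/non-target
# term at lag 300 cancels, and reference-scene peaks remain at lags 1000 and 700.
```

In this sketch the DC term and the target-to-non-target correlation vanish to numerical precision, while the reference still correlates with every object in the scene; discriminating targets from lookalike objects is left to the peak values and the FAF.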

Equation (8) is essentially the same as Equation (4). Therefore, the modified JPS provides a large improvement for target detection. However, targets may still be missed at the correlation output when some objects are brighter than others in the input scene [9]. To alleviate this problem, a real-valued fringe-adjusted filter (FAF) is employed to obtain a better correlation output for both single and multiple target detection. The FAF is expressed as:

$$FAF(v) = \frac{C(v)}{D(v) + |R(v)|^2} \tag{9}$$

where $C(v)$ and $D(v)$ are either constants or functions of $v$. For simplicity, we can set $C(v) = 1$, and to avoid singularities in the filter we select $D(v)$ to be a very small value such that $|R(v)|^2 \gg D(v)$. Now, the FAF multiplies the modified JPS, and the product is displayed at the SLM plane to generate the correlation output:

$$c(y) = FT^{-1}\{P(v) \times FAF(v)\} \approx FT^{-1}\left\{\frac{P(v)}{|R(v)|^2}\right\} \tag{10}$$
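The effect of the FAF of Equations (9)-(10) can be sketched as follows (our illustrative values for $C$, $D$, and the signals). Dividing the modified JPS by approximately $|R(v)|^2$ whitens the reference spectrum, which sharpens the broad correlation lobe into a near-delta peak.

```python
import numpy as np

# Sketch of the fringe-adjusted filter, Eqs. (9)-(10), with C(v) = 1.
N = 2048
ref = np.zeros(N); ref[200:232] = 1.0
scene = np.zeros(N); scene[1200:1232] = 1.0

R = np.fft.fft(ref)
P = (np.abs(np.fft.fft(ref + scene)) ** 2
     - np.abs(np.fft.fft(-ref + scene)) ** 2)     # modified JPS, Eq. (8)

D = 1e-6                                          # small D(v): avoids division by zero
faf = 1.0 / (D + np.abs(R) ** 2)                  # Eq. (9) with C(v) = 1
corr = np.abs(np.fft.ifft(P * faf))               # Eq. (10)

peak = int(np.argmax(corr))                       # sharp peak at the separation
```

Compared with inverse-transforming $P(v)$ directly (which gives a broad triangular lobe for these rectangular patches), the FAF output concentrates the peak energy into essentially one pixel at the reference-target separation (or its mirror position).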

On the other hand, since its introduction more than two decades ago, the reference phase-encoded FAF technique [9] [10] [14] [28] [29] has been used in many works to remove the undesired cross-correlation peaks and false alarms along with the zero-order terms that appear in the classical JTC. Further, this technique removes the unused conjugate correlation peaks from the output. Thus, in practice, the CCD camera can be focused on a smaller area of the output plane to record the correlation peaks. Briefly, the reference phase-encoded technique is described as follows. In the frequency domain, the reference spectrum $R(v)$ is multiplied by a phase mask $\Phi(v)$, whose phase is randomly distributed from −π to π, and then inverse Fourier transformed to produce:

$$r(y) * \phi(y) = FT^{-1}[R(v)\Phi(v)] \tag{11}$$

where “*” denotes the convolution operation. Now, the input joint images are written as:

$$f(y) = r(y + y_0) * \phi(y) + s(y - y_0) \tag{12}$$

$$f_1(y) = -r(y + y_0) * \phi(y) + s(y - y_0) \tag{13}$$

The joint power spectra for these joint images are:

$$|F(v)|^2 = |R(v)\Phi(v) + S(v)|^2 = |R(v)|^2 + |S(v)|^2 + R(v)\Phi(v)S^*(v) + R^*(v)\Phi^*(v)S(v) \tag{14}$$

$$|F_1(v)|^2 = |-R(v)\Phi(v) + S(v)|^2 = |R(v)|^2 + |S(v)|^2 - R(v)\Phi(v)S^*(v) - R^*(v)\Phi^*(v)S(v) \tag{15}$$

The modified JPS is:

$$P(v) = |F(v)|^2 - |F_1(v)|^2 = 2R(v)S^*(v)\Phi(v) + 2R^*(v)S(v)\Phi^*(v) \tag{16}$$

To obtain the correlation output, Equation (16) is first multiplied by the same $\Phi(v)$ and by the FAF, and then inverse Fourier transformed:

$$c(y) = FT^{-1}\{FAF(v) \times P(v) \times \Phi(v)\} = FT^{-1}\left\{\frac{C(v)}{D(v) + |R(v)|^2}\left[2R(v)S^*(v)\Phi^2(v) + 2R^*(v)S(v)\right]\right\} \tag{17}$$
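The mask's effect in Equations (11)-(17) can be sketched as follows (illustrative 1-D data and seed; the FAF multiplication is omitted here for clarity). Because $|\Phi(v)| = 1$, the $2R^*S$ term of Equation (17) survives as a clean peak, while the $2RS^*\Phi^2$ conjugate term stays phase-scrambled and spreads into low-level background, so only one correlation peak remains.

```python
import numpy as np

# Sketch of the reference phase-encoded scheme, Eqs. (11)-(17).
rng = np.random.default_rng(0)
N = 2048
ref = np.zeros(N); ref[200:232] = 1.0
scene = np.zeros(N); scene[1200:1232] = 1.0

R, S = np.fft.fft(ref), np.fft.fft(scene)
phi = np.exp(1j * rng.uniform(-np.pi, np.pi, N))   # random phase mask, |phi| = 1

jps = np.abs(R * phi + S) ** 2                     # Eq. (14)
jps_shifted = np.abs(-R * phi + S) ** 2            # Eq. (15)
P = jps - jps_shifted                              # Eq. (16)
corr = np.abs(np.fft.ifft(P * phi))                # Eq. (17), FAF omitted

peak = int(np.argmax(corr))
# The single surviving peak sits at the reference-target separation (1000);
# the conjugate peak near N - 1000 is suppressed into diffuse background.
```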

3. Compressed Multiple Color Target Detection

In this section, based on the previous discussion, we propose a multiple-target recognition scheme for color images in which the target and/or reference images are greatly compressed and subjected to severe noise and occlusion conditions. Handling compressed images brings the JTC processor closer to near real-time processing and reduces the storage requirements, as discussed in the introduction of this paper.

Color images, especially high-resolution ones, require large storage space (in bytes) and more time to upload into the SLM at the input plane of the JTC processor. Developers seek compression techniques to facilitate transmitting and storing color images. The JPEG compression scheme, which uses the discrete cosine transform (DCT) in its compression algorithm, is more efficient than many other compression schemes [11] [12] [13], since it compresses an image into less storage space while preserving more of its detail. Thus, we adopt this compression scheme to compress the color reference images in our proposed JTC-based automatic target recognition.
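The mechanism that lets JPEG trade fidelity for storage can be sketched in a few lines (this is a JPEG-style illustration, not the JPEG standard itself; the block content and quantization step are our own choices): an 8×8 block is transformed with the 2-D DCT, quantized coarsely so most high-frequency coefficients become zero, and reconstructed.

```python
import numpy as np

# JPEG-style sketch: 8x8 DCT, coarse quantization, reconstruction.
def dct_matrix(n=8):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)                 # orthonormal DCT-II matrix

C = dct_matrix()
block = np.outer(np.linspace(0, 255, 8), np.ones(8))   # hypothetical smooth block

coeffs = C @ block @ C.T                        # forward 2-D DCT
q = 50.0                                        # one coarse quantization step
quant = np.round(coeffs / q)                    # most coefficients round to 0
recon = C.T @ (quant * q) @ C                   # inverse 2-D DCT

nonzero = int(np.count_nonzero(quant))          # only a few coefficients survive
err = float(np.max(np.abs(recon - block)))      # reconstruction error stays modest
```

For a smooth block, only a handful of low-frequency coefficients survive quantization, yet the reconstruction error remains small relative to the 0-255 pixel range; this is why heavily compressed images can still correlate well until the discarded detail starts to matter.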

A three-channel (Red, Green, Blue) correlator is used to deal with color images, where each channel separately processes one fundamental color component. Hence, all the equations in the previous section represent the mathematics of one of the three fundamental color components. The output correlations of the three channels (see Equation (17)) are combined to generate the final correlation output that detects the color targets. The input joint image is divided into two halves: the left half contains the input scene while the right half holds the reference target. An illustrative example of the detection capability of the proposed scheme for color images is presented in Figure 3. Figure 3(a) presents an input scene prepared to contain multiple identical target images, multiple identical non-target images, and other images with various colors. All color images are 32 × 32 pixels in JPEG format. Figures 3(b)-(d) show the correlation peaks when the target is the orange, the red apple, and the yellow pear, respectively. Next, we present simulation results for detecting color targets that are exposed to severe compression and strong noise (Gaussian and Salt & Pepper) conditions. In addition, we tested our proposed detection scheme for references occluded by up to 75%. Finally, we repeated the same experiments for compressed high-resolution color images. Note that Figure 3 demonstrates the detection of multiple targets when both the target and the reference images are uncompressed. Next, Figure 4(a) displays the successful recognition with an uncompressed target image (Figure 3(a)) and a 94% compressed reference image. When the compression of the reference reaches 95%, the recognition fails, as

Figure 3. (a) The input joint image. The correlation output when the target is: (b) the orange, (c) the red apple, and (d) the yellow pear.

shown in Figure 4(b). Now, the correlation results of the cases {compressed target, uncompressed reference} and {compressed target, compressed reference} are shown in Figure 5(a) and Figure 5(b), respectively.
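The three-channel combination described above can be sketched as follows (our construction, in 1-D with hypothetical colors): each RGB component of the scene is correlated against the matching component of the reference in its own channel using the shifted phase-encoded JPS, and the three channel outputs are summed. An object with the target's shape but the wrong color then produces a much weaker combined peak.

```python
import numpy as np

# Sketch of the three-channel (R, G, B) correlator.
N = 2048
target_rgb = (1.0, 0.5, 0.0)                      # hypothetical "orange" patch
wrong_rgb = (0.0, 0.5, 1.0)                       # same shape, different color

ref = np.zeros((3, N)); scene = np.zeros((3, N))
for c in range(3):
    ref[c, 200:232] = target_rgb[c]               # reference
    scene[c, 1200:1232] = target_rgb[c]           # true target
    scene[c, 900:932] = wrong_rgb[c]              # wrong-color object

corr = np.zeros(N)
for c in range(3):                                # one correlator per channel
    P = (np.abs(np.fft.fft(ref[c] + scene[c])) ** 2
         - np.abs(np.fft.fft(-ref[c] + scene[c])) ** 2)
    corr += np.abs(np.fft.ifft(P))                # combine channel outputs

# corr peaks at the reference-target separation (1000); the wrong-color object
# (separation 700) only matches in the green channel and scores much lower.
```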

The next simulation results address the most challenging case: compression of both the target and reference images subjected to severe noise conditions. Figure 6 shows that the proposed JTC scheme successfully recognizes 94% compressed targets corrupted by random Gaussian noise with a

Figure 4. The correlation outputs for (a) 94% and (b) 95% compressed reference (Orange) image with the uncompressed target image of Figure 3(a).

maximum noise density of 0.2. Note that the random noise, as expected, produces unequal correlation peaks for the targets. Further, one of the targets is on the edge of not being recognized, with a low correlation peak value (Figure 6(a)). To tolerate greater noise density, one must decrease the compression ratio of the color images, as illustrated in Figure 6(b) and Figure 6(c), where the compression ratio is decreased to 90% and 80%, respectively.
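The two noise models used in these tests can be sketched as follows (our parameter names and conventions, for images normalized to [0, 1]): additive zero-mean Gaussian noise, and Salt & Pepper noise that forces a given fraction of the pixels to pure black or pure white.

```python
import numpy as np

# Sketch of the noise models used in the simulations.
rng = np.random.default_rng(1)

def add_gaussian(img, sigma):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_pepper(img, density):
    """Set a random fraction `density` of pixels to 0 (pepper) or 1 (salt)."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(float)
    return out

img = np.full((32, 32), 0.5)                      # flat gray test image
noisy_g = add_gaussian(img, 0.2)
noisy_sp = add_salt_pepper(img, 0.3)
```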

Table 1 lists the maximum correlation peaks of the targets for different Gaussian noise densities and different compression ratios. Note that the correlation peak values degrade significantly as the noise density increases. This implies that a carefully chosen threshold value may be needed to pick out the targets' peaks and avoid false alarms. This experiment is repeated for added Salt &

Figure 5. Correlation outputs for 94% compressed target image with (a) uncompressed reference and (b) 94% compressed reference.

Table 1. Compression ratios versus Gaussian and Salt & Pepper noise densities.

Pepper noise. The results are shown in Figure 7 and in Table 1. It is worth mentioning that the results in Table 1 correspond to the reference (ORANGE); the correlation peak values will differ for other references.

Figure 6. Correlation outputs for (a) 94% compressed images with added Gaussian noise density = 0.2, (b) 90% compressed images with added Gaussian noise density = 0.5, (c) 80% compressed images with added Gaussian noise density = 0.55.

Figure 7. Correlation outputs for (a) 94% compressed images with added Salt & Pepper noise density = 0.3, (b) 90% compressed images with added Salt & Pepper noise density = 0.4, (c) 80% compressed images with added Salt & Pepper noise density = 0.5.

In many situations, it is necessary to recognize a region or a part of a target image. Consequently, the proposed JTC scheme is tested with 25%, 50%, and 75% occluded references and noise-free 94% compressed images, as shown in Figure 8. Further, 90% compressed noisy images correlated with a 50% occluded reference image are displayed in Figure 9. As a matter of fact, when images are

Figure 8. Correlation outputs for 94% compressed images with (a) 25%, (b) 50%, and (c) 75% occluded reference.

Figure 9. Correlation outputs for 90% compressed images with 50% occluded reference when adding (a) Gaussian noise density = 0.1 and (b) Salt & Pepper noise density = 0.3.

occluded, the remaining pixels in the image are more easily affected by the noise. This is demonstrated by comparing the correlation output of the 50% occluded reference with Gaussian noise density = 0.1 in Figure 9(a) to the correlation output of Figure 6(b) with Gaussian noise density = 0.5. The same holds for the Salt & Pepper noise of Figure 9(b) and Figure 7(b), where the tolerable noise density decreases from 0.4 to 0.3.
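Reference occlusion as used in these tests can be sketched as follows (our construction): a given fraction of the reference image area is blanked before the reference enters the correlator, so the correlation peak is built from only the surviving pixels.

```python
import numpy as np

# Sketch of reference occlusion: blank a fraction of the reference area.
def occlude(ref, fraction):
    """Blank the top `fraction` of the reference's rows."""
    out = ref.copy()
    rows = int(round(out.shape[0] * fraction))
    out[:rows, :] = 0.0
    return out

ref = np.ones((32, 32))
remaining = [occlude(ref, f).sum() for f in (0.25, 0.50, 0.75)]
# remaining energy scales with the un-occluded area: [768, 512, 256]
```

With 75% occlusion only a quarter of the reference contributes to the correlation peak, which is consistent with the drop in noise tolerance observed above for occluded references.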

It is well known that the JPEG compression algorithm discards information that is unimportant to human visual perception, such as small color variations and/or high-frequency components in color images. In this regard, we tested our proposed color multiple-target recognition scheme on higher-resolution images, such as the 128 × 128 pixel color images shown in Figure 10. The target detection capability is excellent for noise-free images that are significantly compressed, up to a 94% ratio, as illustrated in Figure 10(a). However, this detection capability is

Figure 10. Correlation outputs for 94% compressed images. (a) Successful detection for noise-free images. Failed detection when adding a very small amount of (b) Gaussian noise (density = 0.005) and (c) Salt & Pepper noise (density = 0.003).

Figure 11. Improvement of noisy target detection at lower compression ratios. (a) and (b) 90% compressed images with added (a) Gaussian noise density = 0.1 and (b) Salt & Pepper noise density = 0.125. (c) and (d) 80% compressed images with added (c) Gaussian noise density = 0.4 and (d) Salt & Pepper noise density = 0.4.

Figure 12. Occlusion tests for noise-free 94% compressed images with (a) 25%, (b) 50%, and (c) 75% occluded reference.

greatly degraded (and fails) once a small amount of noise (Gaussian density = 0.005 or Salt & Pepper density = 0.003) is added to the target images, as demonstrated in Figure 10(b) and Figure 10(c).

Figure 13. Occlusion tests for 80% compressed noisy images with: (a) 25% occluded reference and added Gaussian noise density = 0.1, (b) 50% occluded reference and added Gaussian noise density = 0.1, (c) 75% occluded reference and added Gaussian noise density = 0.05, and (d) 50% occluded reference and added Salt & Pepper noise density = 0.2.

In comparison, the low-resolution 32 × 32 pixel images in Figure 6(a) and Figure 7(a) tolerated Gaussian and Salt & Pepper noise densities of 0.2 and 0.3, respectively. The multiple-target recognition can be significantly improved by slightly decreasing the compression ratio of the images. For instance, in Figure 11(a) and Figure 11(b), we used 90% compressed images instead of 94% ones. This increased the tolerable Gaussian noise density from 0.005 to 0.1 (a 20-fold improvement), while the tolerable Salt & Pepper noise density increased from 0.003 to 0.125 (a roughly 40-fold improvement). In addition, Figure 11(c) and Figure 11(d) show further improvement in handling severe noise densities (Gaussian and Salt & Pepper densities = 0.4) when the image compression ratio is decreased to 80%.

Furthermore, the simulations show that the performance of the proposed scheme for occluded high-resolution images is excellent for noise-free color images compressed by up to 94% (see Figure 12). Again, in order to tolerate larger amounts of noise, the compression ratio must be lowered. An illustrative example is shown in Figure 13.

4. Conclusion

In this paper, we have demonstrated compression-based FJTC target detection for color images. First, we demonstrated the detection capability of the proposed scheme for the same target with different colors. Then, we used the reference phase-shifted technique to eliminate the false alarms and zero-order terms due to the multiple desired and undesired cross-correlation peaks that appear at the output correlation plane. Also, we employed the random phase-mask method to avoid displaying the usual second pair of correlation peaks at the output plane of the JTC architecture. The proposed JTC scheme was tested through a large number of simulations on low-resolution as well as high-resolution color images. Both types of images were subjected to severe compression (up to a ratio of 94%) and strong Gaussian and Salt & Pepper noise densities (up to 0.5). Further, noise-free and noisy occluded reference images (occluded by up to 75%) were tested. We demonstrated that low-resolution color images can tolerate high compression ratios and strong noise densities. The proposed scheme successfully detects the multiple compressed targets under all the above conditions.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Elbouz, M., Alfalou, A., Brosseau, C., Ben Haj Yahia, N. and Alam, M.S. (2015) Assessing the Performance of a Motion Tracking System Based on Optical Joint Transform Correlation. Optics Communications, 349, 65-82.
https://doi.org/10.1016/j.optcom.2015.03.020
[2] Vilardy, J.M., Millán, M.S. and Pérez-Cabré, E. (2017) Nonlinear Image Encryption Using a Fully Phase Nonzero-Order Joint Transform Correlator in the Gyrator Domain. Optics and Lasers in Engineering, 89, 88-94.
https://doi.org/10.1016/j.optlaseng.2016.02.013
[3] Alam, M.S. and Karim, M.A. (1993) Fringe-Adjusted Joint Transform Correlation. Applied Optics, 32, 4344-4350.
https://doi.org/10.1364/AO.32.004344
[4] Nomura, T. and Javidi, B. (2000) Optical Encryption Using a Joint Transform Correlator Architecture. Optical Engineering, 39, 2031-2035.
https://doi.org/10.1117/1.1304844
[5] Qian, Y., Li, Y., Shao, J. and Miao, H. (2011) Real-Time Image Stabilization for Arbitrary Motion Blurred Image Based on Opto-Electronic Hybrid Joint Transform Correlator. Optics Express, 19, 10762-10769.
https://doi.org/10.1364/OE.19.010762
[6] Bal, A. and Alam, M.S. (2004) Dynamic Target Tracking with Fringe-Adjusted Joint Transform Correlation and Template Matching. Applied Optics, 43, 4874-4881.
https://doi.org/10.1364/AO.43.004874
[7] Lu, G., Zhang, Z., Wu, S. and Yu, F.T.S. (1997) Implementation of a Non-Zero-Order Joint Transform Correlator by Use of Phase-Shifting Techniques. Applied Optics, 36, 470-483.
https://doi.org/10.1364/AO.36.000470
[8] Schönleber, M., Cedilnik, G. and Tiziani, H.-J. (1995) Joint Transform Correlator Subtracting a Modified Fourier Spectrum. Applied Optics, 34, 7532-7537.
https://doi.org/10.1364/AO.34.007532
[9] Cherri, A.K. and Alam, M.S. (2001) Reference Phase-Encoded Fringe-Adjusted Joint Transform Correlation. Applied Optics, 40, 1216-1225.
https://doi.org/10.1364/AO.40.001216
[10] Haider, M.R., Islam, M.N., Alam, M.S. and Khan, J.F. (2005) Shifted Phase Encoded Fringe-Adjusted Joint Transform Correlation for Multiple Target Detection. Optics Communications, 248, 69-88.
https://doi.org/10.1016/j.optcom.2004.11.102
[11] Widjaja, J. and Suripon, U. (2005) Multiple-Target Detection by Using Joint Transform Correlator with Compressed Reference Images. Optics Communications, 253, 44-55.
https://doi.org/10.1016/j.optcom.2005.04.042
[12] Widjaja, J. (2012) Noisy Face Recognition Using Compression-Based Joint Wavelet-Transform Correlator. Optics Communications, 285, 1029-1034.
https://doi.org/10.1016/j.optcom.2011.10.064
[13] Kaewphaluk, K. and Widjaja, J. (2017) Experimental Demonstrations of Noise-Robustness of Compression-Based Joint Wavelet Transform Correlator in Retinal Recognition. Optik, 142, 168-173.
https://doi.org/10.1016/j.ijleo.2017.05.096
[14] Chen, J., Ge, P., Li, Q., Feng, H. and Xu, Z. (2013) Displacement Measurement for Color Images by A Double Phase-Encoded Joint Fractional Transform Correlator. Optik, 124, 1192-1198.
https://doi.org/10.1016/j.ijleo.2012.03.024
[15] Kamal, H.A. and Cherri, A.K. (2009) Complementary-Reference and Complementary-Scene for Real-Time Fingerprint Verification Using Joint Transform Correlator. Optics and Laser Technology, 41, 643-650.
https://doi.org/10.1016/j.optlastec.2008.09.010
[16] Wang, Q., Guo, Q., Zhou, J. and Lin, Q. (2012) Nonlinear Joint Fractional Fourier Transform Correlation for Target Detection in Hyperspectral Image. Optics & Laser Technology, 44, 1897-1904.
https://doi.org/10.1016/j.optlastec.2012.02.021
[17] Manzur, T., Zeller, J. and Serati, S. (2012) Optical Correlator-Based Target Detection, Recognition, Classification, and Tracking. Applied Optics, 51, 4976-4983.
https://doi.org/10.1364/AO.51.004976
[18] Qian, Y., Hong, X. and Miao, H. (2013) Improved Target Detection and Recognition in Complicated Background with Joint Transform Correlator. Optik, 124, 6282-6285.
https://doi.org/10.1016/j.ijleo.2013.05.008
[19] Vilardy, J.M., Torres, Y., Millán, M.S. and Pérez-Cabré, E. (2014) Generalized Formulation of an Encryption System Based on a Joint Transform Correlator and Fractional Fourier Transform. Journal of Optics, 16, Article ID: 125405.
https://doi.org/10.1088/2040-8978/16/12/125405
[20] Islam, M.N. (2018) Robust Target Detection Employing Log-Polar Transformation and Multiple Phase-Shifted Joint Transform Correlation. Sensors & Transducers, 225, 14-18.
[21] Lugt, A.V. (1964) Signal Detection by Complex Spatial Filtering. IEEE Transactions on Information Theory, 10, 139-145.
https://doi.org/10.1109/TIT.1964.1053650
[22] Weaver, C.S. and Goodman, J.W. (1966) A Technique for Optically Convolving Two Functions. Applied Optics, 5, 1248-1250.
https://doi.org/10.1364/AO.5.001248
[23] Chu, X. and Li, H. (2019) A Survey of Blind Forensics Techniques for JPEG Image Tampering. Journal of Computer and Communications, 7, 1-13.
https://doi.org/10.4236/jcc.2019.710001
[24] Sun, G., Chu, Y., Liu, X. and Wang, Z. (2019) A Plant Image Compression Algorithm Based on Wireless Sensor Network. Journal of Computer and Communications, 7, 53-64.
https://doi.org/10.4236/jcc.2019.74005
[25] Sun, J., Li, B., Jiang, Y. and Wen, C.-Y. (2016) A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes. Sensors, 16, Article 1778.
https://doi.org/10.3390/s16111778
[26] Liu, B., Pun, C.-M. and Yuan, X.-C. (2014) Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies. The Scientific World Journal, 2014, Article ID: 230425.
https://doi.org/10.1155/2014/230425
[27] Hudson, G., Léger, A., Niss, B., Sebestyén, I. and Vaaben, J. (2018) JPEG-1 Standard 25 Years: Past, Present, and Future Reasons for a Success. Journal of Electronic Imaging, 27, Article ID: 40901.
https://doi.org/10.1117/1.JEI.27.4.040901
[28] Islam, M.N., Alam, M.S. and Haider, M.R. (2006) Class-Associative Color Pattern Recognition Using a Shifted Phase-Encoded Joint Transform Correlation. Optical Engineering, 45, Article ID: 75006.
https://doi.org/10.1117/1.2227364
[29] Islam, M.N. and Alam, M.S. (2008) Pattern Recognition in Hyperspectral Imagery Using One-Dimensional Shifted Phase-Encoded Joint Transform Correlation. Optics Communications, 281, 4854-4861.
https://doi.org/10.1016/j.optcom.2008.06.041
