A Plant Image Compression Algorithm Based on Wireless Sensor Network


This paper designs and implements an image transmission algorithm for plant information collection over a wireless sensor network: a low-energy, high-availability image compression algorithm that effectively reduces the volume of transmitted data. The algorithm improves on existing methods in two ways: first, it reduces the number of pixels to be transmitted by replacing interlaced scanning with interlaced-neighbor scanning; second, it modifies the JPEG image compression algorithm [1] by changing the values of its quantization table [2]. After compression, the image data volume is greatly reduced, transmission efficiency is improved, and the problem of excessive data volume during image transmission is effectively solved.


Sun, G. , Chu, Y. , Liu, X. and Wang, Z. (2019) A Plant Image Compression Algorithm Based on Wireless Sensor Network. Journal of Computer and Communications, 7, 53-64. doi: 10.4236/jcc.2019.74005.

1. Introduction

With the rapid development of network information technology around the world, agricultural informatization has gradually entered people's lives and encompasses many modern technologies. If growth is monitored through digital image technology during plant growth, pests and diseases can be accurately controlled on the one hand, and on the other hand the growth cycle of plants can be recorded so that picking takes place at the right point in the cycle. Every step of data collection, transmission, storage, and display is extremely important. In data acquisition and transmission, the degree of data compression has become an important indicator of transmission efficiency: from the perspective of the amount of data transmitted, image compression is a key factor affecting it. Traditional image acquisition devices need a stable power supply, transmit large volumes of image data, impose a heavy system load, and consume much power. Reducing the amount of data transmitted and the system working time can therefore effectively improve transmission efficiency and reduce transmission cost. Based on this research, we have developed a stable wireless sensor network algorithm [1] [2] that greatly reduces the amount of data transmitted, compresses the plant image size, and keeps the image recognizable within a certain range.

1.1. Existing Wireless Data Transmission Technology

Traditional information transmission is carried out over wires. Because the agricultural production environment is complex, wired transmission is limited in many respects. Wireless transmission solves this problem and greatly reduces the impact of those limitations, although current wireless transmission equipment is still expensive and poorly adaptable.

Because devices that capture plant images require good stability and low power consumption, the more mature wireless communication technologies considered are laser communication, ZigBee, Wi-Fi, and Bluetooth [3]. Each has its own technical features and usage scenarios. Bluetooth has a short transmission range and cannot cover a large area; Wi-Fi consumes too much power to run alone for long periods. ZigBee, with its low power consumption, long transmission distance, and low price, is therefore the wireless technology chosen for image transmission.

1.2. Existing Image Compression Technology

Image compression technology has been developed for decades. The most widely used algorithms are fractal image compression, wavelet-based image compression, and DCT-based JPEG compression [4]. The technical standards of these three algorithms are well established and their construction methods are quite mature.

Because the sensor network is deployed in an agricultural environment, the compression algorithm must adapt to the limited resources of the sensor nodes, consume little energy, and support a long service life. The comparison in Table 1 shows that fractal compression achieves good compression, but its coding time is long and it consumes many system resources. The characteristics of the wavelet transform and the DCT transform are similar, but in the wavelet transform the choice of wavelet base has a great influence on the image compression ratio [5] [6], so its compression ratio is less stable than that of the DCT-based method. Moreover, ZigBee's maximum transmission rate is 250 kbit/s, so a compressed image that is still too large could not be transmitted. We therefore chose DCT-based JPEG image compression.

Table 1. Compression technology feature comparison table.

1.3. Image Acquisition and Transmission

Having selected the transmission method and compression algorithm, we improve the algorithm to make it more efficient. Figure 1 is the flow chart of the algorithm. First, the camera captures the image and the system determines whether a high-quality image is needed; this choice is configured in advance. When a high-quality image is not needed, sampling begins; otherwise the image is not sampled. The image from the previous step is then compressed with the improved algorithm and transmitted wirelessly via ZigBee. The receiving end receives the data and determines whether reception is complete; if so, the system restores the received data, and otherwise it continues receiving. When the restoration of an image is complete, the image is stored in the database.

2. Design and Improvement of Sampling Methods

2.1. Traditional Image Sampling Problems

In the process of acquiring images, the acquired image is generally larger than the standard size we process, so the original image must be partially reduced. Figure 2 shows a standard 8 × 8 pixel block: blue pixels are those to be preserved, and white pixels are those to be discarded. The commonly used reduction method is interlaced scanning. Figure 3 shows pixel-block interlaced scan acquisition: the reserved blue pixels are continuous in the horizontal direction but have no direct correlation in the vertical direction. At the receiving end, interpolation is then performed to restore the image [7] [8]. The usual drawback is that deleting an entire row or column destroys the continuity and correlation of the image, and when the pixels are expanded at the receiving end, bilinear and bicubic interpolation demand heavy computation, wasting system resources while giving unsatisfactory accuracy.

Figure 1. Compression algorithm overall flow chart.

Figure 2. Standard 8 × 8 pixel block.

Figure 3. Pixel block interlaced scan acquisition.

2.2. Image Sampling Improvement Method

After the camera outputs the image as a data stream, we use interlaced-neighbor acquisition to increase the correlation between the pixels of the image's rows and columns; because more correlated pixels are available when the pixels are expanded, the restored image is better. Figure 4 shows pixel-block interlaced-neighbor scan acquisition: the reserved blue pixels form an X arrangement. The collected data are stored as a matrix of pixel pairs, so that the pixel arrangement is preserved when the image is saved and transmitted. Figure 5 shows the separation method for interlaced-neighbor scan acquisition: it takes the result of Figure 4, in which the pixels form pairs, and separates the red and blue pixels into pixel sets A and B, each forming an image. The two images are JPEG-compressed and transmitted separately, then synthesized and restored by the receiving end. During restoration, since the pixels of the restored image are X-distributed, each missing pixel can be filled in by averaging the points around it; the amount of calculation is greatly reduced and system resources are saved. Figure 6 shows the method of expanding the pixel values of the image. If the image to be transmitted need not be very sharp, we can transmit only one of the two pictures A and B, which greatly improves sampling efficiency.
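As a minimal sketch, the separation into the two quincunx (X-pattern) pixel sets A and B described above could look as follows; the function names and the even-column assumption are ours, not the paper's:

```python
import numpy as np

def interlaced_neighbor_sample(img):
    """Split an image into two X-pattern (checkerboard) pixel sets A and B.

    Pixels whose (row + col) is even go to set A, the rest to set B.
    Each set is packed into a half-width matrix so it can be
    JPEG-compressed and transmitted as an ordinary image.
    Assumes the image width is even.
    """
    h, w = img.shape
    # boolean checkerboard mask: True where (row + col) is even
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 0
    a = img[mask].reshape(h, w // 2)     # set A (the "blue" pixels)
    b = img[~mask].reshape(h, w // 2)    # set B (the "red" pixels)
    return a, b

def restore(a, b):
    """Re-interleave the two pixel sets into a full-resolution image."""
    h, half_w = a.shape
    img = np.empty((h, 2 * half_w), dtype=a.dtype)
    mask = (np.add.outer(np.arange(h), np.arange(2 * half_w)) % 2) == 0
    img[mask] = a.ravel()
    img[~mask] = b.ravel()
    return img
```

When only one of the two sets is transmitted, the missing checkerboard positions would instead be filled by averaging the available neighbors, as the restoration step above suggests.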

In multiple experiments with the two methods, direct observation of Table 2 shows that the human eye can hardly distinguish the original image from the image restored after sampling. Therefore, the peak signal-to-noise ratio (PSNR) is used as the quality index of the restored image [9]. The comparison in Table 3 shows that the image obtained by the improved sampling method is closer to the original image. Hence, for the same amount of transmitted data, the improved method restores the image better.
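The PSNR metric used in Tables 3 and 5 is standard; a sketch of its computation for 8-bit grayscale images (function name ours):

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two grayscale images."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)             # mean squared error
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```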

Figure 4. Pixel block interlaced neighbor scan acquisition.

Figure 5. Separation of pixel block interlaced neighbor scan acquisition.

Figure 6. Image expansion pixel value method.

Table 2. Simulation comparison of two image sampling methods.

Table 3. PSNR/dB ratio of two image sampling methods.

3. Design and Improvement of JPEG Compression Algorithm

JPEG image compression achieves a high compression ratio because it is a lossy technique: unimportant parts of the original data are removed so that the image can be stored in a smaller volume. Since a color image is about three times the volume of a grayscale image, we use grayscale images for sampling and compression to further reduce the transmitted volume. As shown in Figure 7, the image is first divided into 8 × 8 pixel blocks, each sub-block is DCT-transformed, and all coefficients are then quantized. A ZigZag scan converts the quantized result into a code stream, and finally Huffman coding against the code table further increases the compression rate.

DCT transform:

After the DCT transformation, the total energy in the transform domain remains unchanged, but it is rearranged: the energy of the matrix is concentrated in the DC component at the upper-left corner, while the other values are small, because the image is coherent and does not change abruptly.

F(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\!\left(\frac{(2x+1)u\pi}{16}\right) \cos\!\left(\frac{(2y+1)v\pi}{16}\right), \quad u,v = 0,1,\ldots,7 \quad (1)

\alpha(u) = \begin{cases} \sqrt{1/8}, & u = 0 \\ 1/2, & u \neq 0 \end{cases} \quad (2)
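As a sketch, the 8 × 8 DCT of Equations (1) and (2) can be implemented in separable matrix form (the function name and NumPy formulation are ours):

```python
import numpy as np

def dct2_8x8(block):
    """2-D DCT of an 8x8 block per Equations (1) and (2)."""
    u = np.arange(8).reshape(-1, 1)           # frequency index (rows of C)
    x = np.arange(8).reshape(1, -1)           # spatial index (columns of C)
    c = np.cos((2 * x + 1) * u * np.pi / 16)  # C[u, x] = cos((2x+1)u*pi/16)
    alpha = np.full(8, 0.5)
    alpha[0] = np.sqrt(1 / 8)                 # alpha(0) = sqrt(1/8), else 1/2
    c = alpha[:, None] * c                    # fold alpha(u) into the basis
    # separable form of the double sum: F = C f C^T
    return c @ np.asarray(block, dtype=float) @ c.T
```

For a constant block f(x,y) = 1, all energy lands in the DC coefficient F(0,0), illustrating the energy concentration described above.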


Through the above Formulas (1) and (2), we obtain the transformation matrix of the image sub-block data, that is, the DCT coefficient matrix. Different positions in the DCT coefficient matrix represent components of different frequencies in the image data. In most pictures the high-frequency components are small, so the corresponding DCT coefficients are often close to zero; these components contain only fine details of the image, and the human eye is more sensitive to low-frequency components than to high-frequency ones. Therefore, we quantize the coefficients in the matrix to reduce the proportion of the high-frequency components, which is the most important step in reducing the amount of data in the compressed image. JPEG's own quantization algorithm is shown in Equation (3).

Figure 7. Flow chart of JPEG compression algorithm based on DCT transform.

B_{i,j} = \mathrm{round}\!\left(\frac{G_{i,j}}{Q_{i,j}}\right), \quad i,j = 0,1,2,\ldots,7 \quad (3)

Here G is the DCT coefficient matrix obtained above, Q is the quantization coefficient matrix, and round is the rounding function; the JPEG standard itself provides a default quantization matrix.
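Equation (3) amounts to an element-wise division and rounding; a sketch using the standard JPEG luminance quantization table (from the JPEG standard's informative Annex K; function names are ours):

```python
import numpy as np

# standard JPEG luminance quantization table (JPEG Annex K)
Q_LUM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(g, q=Q_LUM):
    """Equation (3): B[i,j] = round(G[i,j] / Q[i,j])."""
    return np.rint(g / q).astype(int)

def dequantize(b, q=Q_LUM):
    """Approximate reconstruction used at the receiver: G ~ B * Q."""
    return b * q
```

The larger steps toward the lower-right of Q_LUM are what drive the small high-frequency coefficients to zero.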

ZigZag scanning:

Data in memory are stored linearly, but if the last point of one matrix row is followed directly by the first point of the next row, the two points are almost unrelated. We therefore scan in Z-shaped order, transforming the quantized matrix into a one-dimensional array in which successive elements are also adjacent pixels. Figure 8 shows the scan order.
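The ZigZag order walks the anti-diagonals of the block, alternating direction; a sketch (function names ours):

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order of a ZigZag scan over an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  # odd anti-diagonals run top-right -> bottom-left,
                                  # even ones the opposite way
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    """Flatten a quantized 8x8 block into a 1-D sequence along the ZigZag path."""
    return [block[r][c] for r, c in zigzag_order(len(block))]
```

Because the low-frequency coefficients come first and the near-zero high-frequency ones cluster at the end, the resulting sequence compresses well with run-length and entropy coding.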

Huffman coding:

Huffman coding underlies almost all compression algorithms. Its basic principle is to adjust the code length of each element according to its frequency of use in the data, thereby obtaining a higher compression ratio.
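A minimal sketch of that principle, repeatedly merging the two least frequent subtrees so that frequent symbols end up with short codes (this illustrates the idea only; real JPEG uses predefined code tables):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code: frequent symbols get shorter bit strings."""
    freq = Counter(symbols)
    if len(freq) == 1:                  # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # each heap entry: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # prefix the two cheapest subtrees with 0 / 1 and merge them
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]
```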

3.1. Improved Method of JPEG Compression Algorithm

In the JPEG algorithm, besides the coding method, the quantization table is the key factor that makes the data compressible: by discarding some of the high-frequency coefficients in the matrix, it sacrifices the accuracy of that part of the information in exchange for a high compression ratio. We look for a quantization table suited to highly compressed images and substitute it for the original one, so that accuracy is reduced appropriately while the compression ratio improves considerably.

Figure 8. ZigZag scan order.

Human vision is characterized by nonlinearity, multiple channels, and masking. The main indicator describing the spatial characteristics of the human visual system is the contrast sensitivity function CSF(f), which represents the response and resolving power of the human visual system for the different spatial frequency components of an image. To compress the image further while affecting human observation as little as possible, we use the CSF function to generate the quantization table. Considering the amount of calculation involved, we adopt the exponential model proposed by Movshon and Kiorpes [10], given in Equation (4).

\mathrm{CSF}_{lum}(f) = 75 f^{0.2} e^{-0.8 f} \quad (4)
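As an illustrative sketch only (the paper does not specify its mapping), one way to derive a quantization table from Equation (4) is to assign each DCT position a spatial frequency and scale the quantization step inversely to the eye's sensitivity at that frequency. All parameter choices below (`f_max`, `q_min`, `q_max`, the radial mapping) are our own assumptions:

```python
import numpy as np

def csf_lum(f):
    """Contrast sensitivity function as written in Equation (4)."""
    return 75.0 * f ** 0.2 * np.exp(-0.8 * f)

def csf_quant_table(f_max=8.0, q_min=10, q_max=200):
    """Illustrative CSF-driven 8x8 quantization table (assumed mapping)."""
    u = np.arange(8)
    radius = np.hypot(u[:, None], u[None, :])   # distance from the DC corner
    f = radius / radius.max() * f_max           # map radius onto [0, f_max]
    s = csf_lum(np.maximum(f, 1e-3))            # sensitivity (avoid f = 0)
    s = s / s.max()                             # normalize to (0, 1]
    q = q_min + (1.0 - s) * (q_max - q_min)     # low sensitivity -> big step
    return np.rint(q).astype(int)
```

Frequencies to which the eye is insensitive receive large quantization steps and are coarsely quantized, which is exactly how the improved table trades a small PSNR loss for a higher compression ratio.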

According to this model, the improved quantization table is obtained; the value of f is then increased or decreased appropriately according to the viewing angle, viewing distance, and so on [11] [12] [13], and the table is substituted into the image compression process. Table 4 shows the simulation comparison of the two compression algorithms. It can be seen from Tables 5 and 6 that with the doubled quantization table values, the PSNR of Case 2 falls by only 9.5%, while the compression ratio improves significantly: Case 1 reaches 88.9%, Case 2 reaches 86.6%, and Case 3 reaches 100.8%. The reconstructed image achieves a good visual effect within a certain error range, and the human eye can hardly tell the difference. Not only is data transmission efficiency improved, but the pressure on the wireless sensor network is also greatly reduced.

4. Conclusions

The plant image compression algorithm based on wireless sensor networks proposed in this paper builds on the already mature JPEG algorithm, so it has good feasibility. Comparisons of PSNR, compression ratio, and human observation show that the efficiency of the algorithm is improved after the plant image sampling method and the quantization table are improved to meet the needs of the human eye when observing the details of plant images. Compared with the traditional JPEG algorithm, the improved plant image sampling process increases the fidelity of the restored image to the original and obtains a larger compression ratio.

Table 4. Comparison of simulation results of compression algorithm.

Table 5. PSNR/dB ratio of reconstructed images by two compression algorithms.

Table 6. Compression ratio of reconstructed image by two compression algorithms.

The compression algorithm in this paper is mainly applied to the compression of static plant images; in the future we will add extended functions such as image-based recognition of pests and diseases [14]. Image compression has many application areas: home security, road traffic detection, and satellite imagery. In the coming era of big data, image compression is becoming more and more important. For the DCT-based JPEG image compression algorithm, the sampling, the time-domain and frequency-domain transformations, and the coding methods can all be improved further to achieve greater compression and better image restoration.


Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (No. 61771262), by the Major Science and Technology Projects of Tianjin (No. 2018ZXRHNC00140 and No. 2017ZXHLNC00100), and by the Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


[1] Wang, Z.-F., Zhu, L., Zeng, C.-Y., Min, Q.-S. and Xia, D. (2018) Survey on Recompression Detection for Digital Images. Computer Science, 45, 20-29.
[2] Douak, F., Benzid, R. and Benoudjit, N. (2011) Color Image Compression Algorithm Based on the DCT Transform Combined to an Adaptive Block Scanning. AEU-International Journal of Electronics and Communications, 65, 16-26.
[3] Gheorghiu, R. and Iordache, V. (2018) Use of Energy Efficient Sensor Networks to Enhance Dynamic Data Gathering Systems: A Comparative Study between Bluetooth and ZigBee. Sensors, 18, 1801.
[4] Feng, F., Liu, P.-X., Li, X.-Y. and Yan, N.-B. (2016) Research of Discrete Cosine Transform for Image Compression Algorithm. Computer Science, 43, 240-241 + 255.
[5] Yang, P.Z., Wang, L.B., Pei, H.A.D., Qu, X. and Liang, S. (2019) Design of an Image Acquisition and Compression System with High Compression Ratio. Chinese Journal of Electron Devices, 42, 163-167.
[6] Tongming, J.I. and Shengli, B. (2017) Application of JPEG2000 Image Compression Algorithm in Android Platform. Journal of Computer Applications, 37, 203-206.
[7] Timofte, R., De Smet, V. and Van Gool, L. (2014) A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution. Asian Conference on Computer Vision, Springer, Cham, 111-126.
[8] Yang, C.Y. and Yang, M.H. (2013) Fast Direct Super-Resolution by Simple Functions. Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, 1-8 December 2013, 561-568.
[9] Chun, L., Kun, T., et al. (2018) Quality Assessment for Contrast-Distorted Images Based on Convolutional Neural Network. Microelectronics & Computer, 35, 84-88.
[10] Movshon, J.A. and Kiorpes, L. (1988) Analysis of the Development of Spatial Contrast Sensitivity in Monkey and Human Infants. JOSAA, 5, 2166-2172.
[11] Yao, J.-C. (2011) Evaluation of Image Quality Characteristics Based on Human Eye’s Visual Contrast Sensitivity. Chinese Journal of Liquid Crystals and Displays, 26, 390-394.
[12] Wang, J.-H., Wu, J., Zhang, C. and Cao, X.-J. (2019) Laplacian Pyramid Based Image Fusion for Use in HVS. Electronics Optics & Control, 26, 77-80 + 91.
[13] Jia, R.-M. and Zheng, Q. (2018) Fractal Enhancement Algorithm Based on Frequency Characteristics of Human Eyes. Laser & Infrared, 48, 919-924.
[14] Zeng, J.-Y., Xiao, D.-Q. and Lin, T.-Y. (2016) Design and Implementation of a Software System on Compressing Region of Interest of Agricultural Images. Modern Computer.

Copyright © 2023 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.