Semantic Segmentation Based Remote Sensing Data Fusion on Crops Detection


ABSTRACT

Data fusion is an important process in multi-sensor remotely sensed imagery integration, aimed at enriching features that the individual sensors involved in the fusion process lack. The technique has attracted much research interest, especially in the field of agriculture. On the other hand, deep learning (DL) based semantic segmentation achieves high performance in remote sensing classification, but it requires large datasets for supervised learning. In this paper, a method that fuses multi-source remote sensing images with a convolutional neural network (CNN) for semantic segmentation is proposed and applied to identify crops. Venezuelan Remote Sensing Satellite-2 (VRSS-2) imagery and high-resolution Google Earth (GE) imagery were used, and more than 1000 sample sets were collected for the supervised learning process. The experimental results show that crop extraction with an average overall accuracy of more than 93% was obtained, which demonstrates that data fusion combined with DL is highly feasible for crop extraction from satellite images and GE imagery, and that deep learning techniques can serve as an invaluable tool in larger remote sensing data fusion frameworks, specifically for applications in precision farming.
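
To make the overall workflow concrete, the following is a minimal sketch (not the authors' code or architecture) of the kind of pipeline the abstract describes: pixel-level fusion of two co-registered image sources by band stacking, followed by a small encoder-decoder CNN that produces a per-pixel crop map. The patch size, band counts, class count, and network layers below are illustrative assumptions, and the inputs are random placeholders standing in for preprocessed VRSS-2 and GE patches.

    # Minimal sketch of band-stacking fusion + CNN semantic segmentation (PyTorch).
    # Assumes both patches are already co-registered and resampled to the same grid.
    import torch
    import torch.nn as nn

    class SmallSegNet(nn.Module):
        """Toy encoder-decoder CNN producing per-pixel class scores."""
        def __init__(self, in_channels: int, num_classes: int = 2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                         # downsample by 2
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_classes, 1),           # per-pixel class scores
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Fusion by simple band stacking: a 4-band satellite patch plus a 3-band
    # high-resolution RGB patch yields a 7-band fused input (illustrative only).
    vrss2_patch = torch.rand(1, 4, 256, 256)   # placeholder multispectral patch
    ge_patch = torch.rand(1, 3, 256, 256)      # placeholder high-resolution RGB patch
    fused = torch.cat([vrss2_patch, ge_patch], dim=1)

    model = SmallSegNet(in_channels=fused.shape[1])
    logits = model(fused)                      # shape: (1, num_classes, 256, 256)
    crop_mask = logits.argmax(dim=1)           # per-pixel crop/background prediction
    print(crop_mask.shape)

In practice the network would be trained on the labeled sample sets with a per-pixel loss (e.g., cross-entropy); the sketch only shows how the fused input reaches a segmentation output.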

Share and Cite:

Pena, J., Tan, Y. and Boonpook, W. (2019) Semantic Segmentation Based Remote Sensing Data Fusion on Crops Detection. Journal of Computer and Communications, 7, 53-64. doi: 10.4236/jcc.2019.77006.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.