Developments in Land Use and Land Cover Classification Techniques in Remote Sensing: A Review

Studies on land use and land cover change (LULCC) have been of great concern due to their contribution to policy formulation and strategic planning in different areas and at different scales. When intense and on a global scale, LULCC can be catastrophic if not detected and monitored, affecting key aspects of ecosystem functions. For decades, technological developments and tools in geographic information systems (GIS), remote sensing (RS) and machine learning (ML), from data acquisition and processing to the dissemination of results, have been investigated to assess landscape conditions, and hence different land use and land cover classification systems have been developed at different levels. Providing coherent guidelines, based on a literature review, to examine, evaluate and disseminate such conditions can be a rich contribution. Therefore, hundreds of relevant studies available in different databases (Science Direct, Scopus, Google Scholar), demonstrating the advances achieved in local, regional and global land cover classification products at different spatial, spectral and temporal resolutions over the past decades, were selected and investigated. This article aims to present the main tools, data and approaches applied for the analysis, assessment, mapping and monitoring of LULCC, and to investigate some associated challenges and limitations that may influence the performance of future work, through a progressive perspective. Based on this study, despite the advances achieved in recent decades, issues related to multi-source, multi-temporal and multi-level analysis, robustness and quality, and scalability need to be studied further, as they constitute some of the main challenges for remote sensing.


Introduction
The acquisition of information on the terrestrial surface and its resources has accelerated since the launch of the first Earth Observation (EO) satellite (ERTS-1, later renamed Landsat 1) in 1972. This is partly due to the efforts of world space agencies, which in recent years have made publicly available numerous images from remote sensors carried mainly by aircraft and satellites [1] [2] [3], and to technological advances in computing, with powerful and efficient processors to manage large volumes of data, as well as to efforts to develop robust algorithms for processing such data [2] [4].
The demand for remote sensing data for land use and land cover mapping has been growing due to the impact of land use and land cover changes on terrestrial ecosystems. Through these spatial data it is possible to understand and assess the effects of landscape changes on the environment. Images with high spatial, temporal, radiometric and spectral resolution allow the mapping of large areas in a relatively short time [5] [6] [7] [8], which has reinforced the improvement of techniques and algorithms and allowed the automation of mapping, with results that reliably represent reality.
Countries that still adopt traditional approaches to remote sensing data processing, using commercial image processing software on PC-based workstations to demonstrate how remote sensing data can be used and to present GIS packages (due to technical, educational and institutional constraints), leaving aside deeper studies such as GIS-based subsurface modeling [9], show limited performance in their studies with respect to big data management [6] [10] [11]. No matter how powerful the operating systems are, the entire data analysis process, including pre-processing over large areas involving thousands of images, is cumulative, slow and tedious, and it can also be expensive, as it requires many resources.
However, in countries that have chosen to change their approach, these challenges have been overcome. The advance has occurred thanks to the development and application of powerful Machine Learning Algorithms (MLAs) in cloud computing environments, such as Google Earth Engine (GEE), which process images on a planetary scale and at high spatial resolution [6] [12]. According to Mutanga and Kumar [7], this new approach can be undertaken by researchers in less developed countries because it does not require computers with large processing power.
Thus, files with several petabytes of referenced data sets (climatic, land use and land cover, digital elevation models), whether or not derived from Earth observation satellites, have become available, driven mainly by cooperation among space agencies around the world. This cooperation has resulted in great availability of data (free access), instruments (software) and techniques (algorithms) for processing such data [2] [9] [10], providing the remote sensing community with new applications and tools to conduct research. This article aims to highlight the main tools, data and approaches related to Land Use and Land Cover (LULC) mapping, to investigate some challenges that might arise in evaluating and monitoring land use/land cover using remote sensing data, and to provide a critical perspective on LULC achievements in progress.
The article presents the following topics in sequence: 1) the process of acquiring and processing remote sensing data obtained at different scales; 2) the main categories of platforms and software that can be used to process such data; 3) techniques for processing geospatial data for land use and land cover mapping, framed into different approaches (pixel, subpixel, object and hybrid); 4) impacts of the application of machine learning algorithms on land use and land cover assessment with time-series, multi-source and multi-scale data; 5) assessment of the accuracy of the maps; 6) advances achieved, challenges and future perspectives.

Remote Sensing Data Acquisition and Processing
Data are the key element in research, and remote sensing is one of the main means of acquiring spatial data [15]. Data acquisition in remote sensing (RS) involves four essential elements: Electromagnetic Radiation (EMR), a light source, a sensor and a target, together with the interaction of the EMR with the targets.
Generating relevant information involves these fundamental elements in interaction, as shown in Figure 1.
To understand the process, it is fundamental to know the EMR, the sensor (e.g., its resolutions) and the essential characteristics and properties of targets for RS, which are well documented in [16] [17] [18]. Figure 2 shows the radiation intensity of the Sun and the Earth, the atmospheric transmittance zones and the electromagnetic spectrum, highlighting visible light.
According to Zwinkels [16], when light interacts with matter, different phenomena can occur depending on the interaction of the wavelength (frequency) of the light with the physical size (resonant frequencies) of the interfering matter. Figure 3 shows the spectral reflectance of different surface targets.
Considering all the aspects mentioned above and following the steps of remote sensing data processing, it is possible to generate a product for a specific purpose.
The list of sources and providers of free remotely sensed data at coarse, medium and fine scales is extensive (GloVis, NASA Earth Observations, USGS Earth Explorer, ESA's Sentinel data, VITO Vision, IPPMUS Terra, and so on), and can be partially accessed in Table 1. Fine-scale assessment data are, in general, purchased.
Recently, data availability problems have been overcome by free and open data policies. Big data sets, in addition to exceeding the memory, storage and processing capacities of ordinary personal computers, impose substantial limits that lead users to take advantage of only a small part of the data available for scientific research and operational applications [10]. In response to this demand, several platforms, software packages and data processing algorithms have been developed, as addressed in subsequent topics.
Using the data with the correct techniques and qualified professionals is the key to obtaining the maximum benefit from these tools. However, pre-processing and validation present challenges in remote sensing technology [13] [21]. Several remote sensing data products are available for specific research and do not satisfy users' needs for the integrated study of a given phenomenon, as their resolutions vary among themselves [22] [23]. For instance, forest fire assessment needs high spatial and temporal resolution, but a single sensor cannot provide both: Moderate Resolution Imaging Spectroradiometer (MODIS) data provide high (1-day) temporal resolution but low spatial resolution. According to Sajjad and Kumar [22], hyperspectral sensors offer a solution to this impasse, due to their capacity to reduce the processing time for numerous spectral bands. Even so, their spatial resolution must be improved to achieve better results.

Free and Open Source and Proprietary Software
In the scientific field of GIS and remote sensing, two categories of software stand out: free and open-source software (FOSS) and proprietary software [28]. However, the terms differ regarding the restrictions on modification and redistribution. According to Anand et al. [29], the only restriction on free software is that any redistributed version must be distributed with the original free use, modification and distribution terms, known as copyleft.
The definition of free software is not related to its cost, but to the freedom to reuse, modify (or not) and distribute it [29].
The main purpose of developing proprietary software is monetary profit. Such software is developed by individuals or companies that employ engineers to improve it [31]. As a result, users are prevented from making copies of the software and redistributing it, selling the license to others and/or reverse engineering it, which would infringe copyrights and patents [29] [32]. In addition, proprietary software rarely allows end users to purchase or view the source code and may require annual license fees. This limits users' understanding of what the code and/or tools are doing [25] [31] [32].
Commercial software is available in packages (e.g., ERDAS Imagine), and each package has its limitations, requiring users to acquire the complete package to be able to use all its functions (Table 2). Commercial and open-source software providers have distinct perspectives on technical support. The lack of support and documentation for users, and the need for specific technical training, are some disadvantages of FOSS [30]. Commercial software support is a service for licensed users [25].
Although several countries recommend the use of FOSS in public institutions [30] [32], cost should not be the main factor in the choice. Aspects such as security and manageability must also be considered, in light of institutional needs and capacities. Free software requires national programs to support its development and maintenance, and training to adapt it to local needs. Conversely, commercial software requires institutional capacity to provide equipment to run the programs, continuous training of human resources and renewal of licenses.

Mapping Land Use and Land Cover
The terms land use and land cover, although used in association, are better defined separately. Land use refers to the way the biophysical attributes of the land are manipulated and the underlying intention of that manipulation. Land cover refers to the biophysical state of the Earth's surface and the immediate subsoil [46] [47]. Land use causes changes in land cover, and such changes, when intense and on a global scale, affect key aspects of the functioning of terrestrial systems.
According to Briassoulis [47] and Nedd et al. [48], biophysical factors (climate, temperature, topography, soil type, surface water, humidity, vegetation and fauna) and social factors (population, technology, socioeconomic, cultural and institutional organization, and political changes) are responsible for such changes and are seen as interconnected in a space-time perspective. Gómez et al. [46] highlight that distinct types of land cover provide specific habitats and determine the energy and carbon exchange between the terrestrial surface and the atmosphere. Knowledge and mapping of land use and land cover are essential for planning and managing natural resources, modeling environmental variables and understanding the distribution of habitats [48]. Land cover also changes naturally over time, as well as under the influence of anthropogenic activities.
According to Gómez et al. [46], Earth Observation (EO) data enable consistent and robust land use and land cover mapping and monitoring over large areas, and results are made available by different world space agencies at different spatial and temporal scales, matching scientific and political information needs.
Geotechnologies have been relevant in the study of land use and land cover as they have enabled observation, identification, mapping, assessment, and monitoring of land cover in spatial, temporal, and thematic scales [46] [49].
Identifying types of land cover provides basic information for generating other thematic maps and for establishing a baseline for monitoring activities. According to Rogan and Chen [49], an effective approach to identifying changes over a specific period may maximize exploration of the spatial and spectral resolution domains, for example by using additional data such as vegetation indexes. They also distinguish two significant taxa for separating cover changes from use changes: 1) categorical, known as post-classification comparison, in which change is detected between sets of thematic land use and land cover categories (e.g., urban, forest); and 2) continuous, known as pre-classification enhancement, in which changes occur in the quantity or concentration of some attribute of the built or natural landscape that can be measured continuously.
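The categorical (post-classification comparison) taxon can be illustrated with a minimal, hypothetical sketch: given two co-registered classified maps from dates t1 and t2, counting the from-to label transitions yields a change matrix. The class names and tiny map below are illustrative only, not drawn from any cited study.

```python
from collections import Counter

def change_matrix(map_t1, map_t2):
    """Post-classification comparison: count from-to class transitions
    between two co-registered classified maps (flattened label lists)."""
    assert len(map_t1) == len(map_t2), "maps must cover the same pixels"
    return Counter(zip(map_t1, map_t2))

# Two hypothetical 3 x 3 maps with classes forest (F) and urban (U)
t1 = ["F", "F", "F", "F", "F", "U", "U", "U", "U"]
t2 = ["F", "F", "U", "U", "F", "U", "U", "U", "U"]
m = change_matrix(t1, t2)
print(m[("F", "U")])  # 2 pixels converted from forest to urban
```

Each off-diagonal entry of the resulting matrix is a candidate change class; the diagonal entries are persistence.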
Most approaches to monitoring land use and land cover have used traditional image classification algorithms that assume: 1) the image data are normally distributed; 2) objects of interest on the surface are larger than the pixel size (H-resolution); and 3) each pixel is composed of a single type of land cover or land use.
However, some approaches assume that objects of interest on the surface are smaller than the pixel size (L-resolution) and therefore use empirical models to estimate biophysical, demographic and socioeconomic information [49] [50].
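The L-resolution idea can be sketched with a simple linear mixture model: a pixel's spectrum is modeled as a fraction-weighted sum of pure endmember spectra. The two-band endmember reflectances below are illustrative values, not measurements, and the closed-form solve assumes exactly two endmembers with a sum-to-one constraint.

```python
def unmix_two_endmembers(pixel, em_a, em_b):
    """Solve pixel = f*em_a + (1-f)*em_b for the fraction f by least
    squares across bands (linear mixture model, sum-to-one constraint)."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, em_a, em_b))
    den = sum((a - b) ** 2 for a, b in zip(em_a, em_b))
    return max(0.0, min(1.0, num / den))  # clamp fraction to [0, 1]

# Illustrative two-band (red, NIR) endmember spectra
veg = [0.05, 0.50]     # vegetation
soil = [0.30, 0.35]    # bare soil
mixed = [0.175, 0.425] # an exact 50/50 mixture of the two
f_veg = unmix_two_endmembers(mixed, veg, soil)
print(round(f_veg, 6))  # 0.5
```

With more endmembers than two, the same model is solved with constrained least squares per pixel; this sketch only shows the two-endmember special case.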

Remote Sensing Data Processing Techniques for LULC
Over the years, several studies on land cover have been conducted [3] [23] [51] [52] using data from various sensors with different resolutions, techniques and approaches. The sub-pixel-based approach was developed to address shortcomings of pixel-based classification, such as the separation of land uses and land covers within mixed pixels [50] [51] [59]. The approach has proved suitable for medium to low spatial resolution sensors and is widely used in regional, continental and even global mapping [51] [60]. Choosing among classifiers, including statistical algorithms such as the Maximum Likelihood classifier, proves complex and challenging, because each method presents its own strengths and weaknesses, as shown in Table 3.
In this regard, Ackom et al. [61] and Mohammady et al. [62] suggest hybrid approaches to solve these issues; such approaches have become more powerful and diversified due to the development of advanced classifiers. Other strategies can also be incorporated, such as those that make it possible to infer proportions of vegetation cover, commonly known as vegetation indexes. The most used are the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Soil-Adjusted Vegetation Index (SAVI), the Normalized Difference Built-up Index (NDBI) and the Modified Soil-Adjusted Vegetation Index (MSAVI), alongside Spectral Mixture Analysis.
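Several of the indexes above reduce to one-line band ratios. The functions below are a minimal sketch (NDWI in its green/NIR formulation; SAVI with the common default soil-brightness factor L = 0.5), and the reflectance values are illustrative only.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (green/NIR formulation)."""
    return (green - nir) / (green + nir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil-brightness factor."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Illustrative surface reflectances for a vegetated pixel
print(round(ndvi(0.5, 0.1), 3))  # 0.667
print(round(savi(0.5, 0.1), 3))  # 0.545
```

Because they are simple per-pixel arithmetic on calibrated reflectance bands, such indexes are cheap to add as extra features in any of the classification approaches discussed here.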
Nevertheless, the success of this approach depends on several factors, such as the quality of pre-processing, analyst experience and classifier performance. Depending on the complexity of the subject, Gómez et al. [46] point out the following criteria to be considered when choosing a classification algorithm: type of data, statistical distribution of classes, target accuracy, ease of use, speed, scalability and interpretability, in order to achieve acceptable accuracy and rational use of resources (Table 4).

Machine Learning Algorithms
Machine Learning (ML) algorithms are models built to learn from data and act appropriately in future circumstances. They can be grouped into lazy learners (e.g., k-nearest neighbour, case-based reasoning) and eager learners (decision trees, Naive Bayes, artificial neural networks), and are mainly divided into four categories: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning [81] [82].
According to Galván et al. [81], most machine learning algorithms (MLAs), whether based on trees, rules or functions, are eager learning methods, in the sense that generalization is carried out beyond the training data before a new instance is observed. They are a powerful tool for training Artificial Intelligence (AI) models that can help increase automation or optimize the operational efficiency of sophisticated systems such as robotics, autonomous driving, manufacturing and supply chain logistics [82].
Classification methods in ML can be binary, referring to tasks with two class labels such as "true/yes" and "false/no"; multiclass, referring to tasks with more than two class labels; or multi-label, a generalization of multiclass classification in which the classes are hierarchically structured and each example may simultaneously belong to more than one class at each hierarchical level [82] [83]. Several works on land use and land cover mapping using machine learning classifiers have been carried out [3]. According to Shetty [21], while classifiers such as SVM find a subset of the training data as support vectors by fitting a hyperplane that separates two classes in the best possible way, CART builds a simple decision tree from the given training data; ANNs follow a neural network pattern, building multiple layers of nodes through which input observations are passed back and forth during the learning process (Multi-Layer Perceptron) until a termination condition is reached; and RF uses random subsets of the training data to construct multiple decision trees. Figure 4 and Figure 5 illustrate, respectively, some machine learning methods, highlighting supervised learning, and the machine learning workflow. These classifiers register improvements of 10% to 20% in accuracy when confronted with complex data over large areas [46].
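The lazy/eager distinction can be illustrated with a toy lazy learner: a k-nearest-neighbour classifier that builds no model in advance and defers all work to prediction time. The two-band training spectra and class names below are hypothetical.

```python
import math
from collections import Counter

def knn_classify(sample, training, k=3):
    """Lazy (instance-based) learning: no model is fitted in advance;
    each new pixel is labelled by a majority vote of its k nearest
    training spectra in feature space."""
    dists = sorted((math.dist(sample, spec), label) for spec, label in training)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical two-band (red, NIR) training spectra with class labels
training = [
    ([0.05, 0.50], "vegetation"),
    ([0.06, 0.45], "vegetation"),
    ([0.30, 0.35], "soil"),
    ([0.28, 0.33], "soil"),
    ([0.08, 0.04], "water"),
]
print(knn_classify([0.07, 0.48], training))  # vegetation
```

An eager learner (e.g., a decision tree) would instead spend its time in a fit step and make each prediction with a cheap tree traversal; the trade-off matters when classifying millions of pixels.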
The success of these classifiers for this theme is due to their freedom from the assumptions of parametric statistics, which makes them more suitable for: 1) analyzing multimodal, noisy and/or missing data; 2) analyzing combinations of categorical and continuous auxiliary data; 3) reducing the pre-processing steps required in traditional approaches; and 4) performing in cloud computing environments, such as Google Earth Engine (GEE).
GEE is a cloud-based platform holding multiple petabytes of data, which provides parallel computing and data catalog services for geospatial analysis on a planetary scale [5] [85]. Calculations are automatically parallelized, and the data sets are ready for public use. They come from several geospatial data agencies, such as the United States Geological Survey (USGS) and the European Space Agency (ESA), and range from Landsat surface reflectance data sets to Sentinel data sets, various global land cover data, climate data sets, among others. GEE provides several integrated methods that support image pre-processing, in addition to a vast repository of functions, such as masks, logical operators and data sampling, which perform various operations on images and vectors [21].
An example of machine learning applications can be found in Li et al. [86], published in the journal Remote Sensing. That article proposes to generate a land cover map of the whole African continent at 10 m resolution using machine learning algorithms and multi-source data on the GEE platform. The workflow designed for it is shown in Figure 6, which highlights the five steps of the machine learning workflow.
Figure 6. Flowchart of the proposed framework highlighting the machine learning workflow steps. Source: Li et al. [86]. Edited by authors.

L. S. Macarringue et al. Journal of Geographic Information System
Through this study it was possible to generate a land cover map of all of Africa with an accuracy of 81% for five classes, which is relatively superior to the existing 10 m land cover product (e.g., FROM-GLC10) in detecting the urban class in city areas and in identifying the boundaries between trees and low plants in rural areas. Part of the results of this study are shown in Figure 7 (more details can be found in the original article).

Time Series/Multi-Temporal, Multi-Scale/Multi-Source
Time series of medium spatial resolution optical data present significant results compared with a single scene. They have a high capacity to characterize environmental phenomena, describing trends as well as discrete events of change, in the characterization and identification of changes in land use and land cover [51] [87] [88]. Landsat data are considered the standard for classifying land use and land cover changes [55], due to their spatial resolution (30 m), temporal resolution (16/8 days), covered area (185 × 185 km), rigorous calibration and the consistency of the sensors' radiometry (TM/ETM+/OLI).
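A common first step when classifying such time series is a per-pixel composite. The sketch below, with illustrative reflectance values and None marking cloud-masked observations, shows how a median composite suppresses residual cloud contamination across acquisitions.

```python
from statistics import median

def median_composite(scenes):
    """Per-pixel median composite of co-registered scenes (flat lists of
    reflectances; None marks cloud-masked observations). The median is
    robust to residual clouds and shadows in the time series."""
    composite = []
    for values in zip(*scenes):
        valid = [v for v in values if v is not None]
        composite.append(median(valid) if valid else None)
    return composite

# Three hypothetical acquisitions of a 4-pixel strip
scenes = [
    [0.10, 0.12, None, 0.30],  # None = pixel masked as cloud
    [0.11, 0.70, 0.25, 0.31],  # 0.70 = an undetected bright cloud
    [0.09, 0.13, 0.26, None],
]
print(median_composite(scenes)[1])  # 0.13, the cloud value is suppressed
```

The same idea extends band by band; cloud platforms such as GEE apply exactly this kind of per-pixel reducer over full image collections.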
For Chi et al. [20], data fusion has traditionally been carried out at the pixel level, the feature level and the decision level. However, big data in remote sensing usually comprise different scales and/or formats.
According to Huang and Wang [11], Big Spatial Data (BSD) can integrate data from various sources, providing a more comprehensive picture. In doing so, a huge amount of data is pulled from different formats, devices or systems and given a geographic context to facilitate building a complete picture or analysis. It is important, however, to consider how to integrate data from sources whose features differ significantly (e.g., spectral signatures in optical remote sensing data, electromagnetic radiation in microwave data, structural features of texts, unstructured features of images from a digital camera, etc.).
The use of multi-source data also supports land use and land cover mapping and improves classification accuracy [6] [23] [54] [86], by collecting samples with high-resolution sensors and fusing different sensor products (optical/optical or optical/radar), allowing clear target differentiation.
In this regard, Häme et al. [70] used the hierarchical clustering method to detect and identify changes in land cover using paired Sentinel-2/Sentinel-2, Landsat-8/Sentinel-2 and Sentinel-2/ALOS-2 PALSAR data in an area of 12,372 km² in Finland. Joshi et al. [89] reviewed 112 studies on fusing optical and radar data, which offer unique spectral and structural information, for land cover and land use assessments; among the 32 studies in which they assessed the advantages of fusion for land use analysis, a large majority (28 studies) concluded that fusion improved results compared with single data sources. Mateo-García et al. [90] proposed and implemented a cloud-masking methodology using GEE to map a type of biome based on OLI/Landsat-8 data. The algorithms used (FMask and ACCA) showed relevant quantitative performance, improving classification accuracy by 4% to 5% and commission errors by 3% to 10%. Adamo et al. [80] and Samal and Gedam [75] present other applications.

Validation and Accuracy Assessment
Monitoring and managing territory requires accurate information on land cover. The pursuit of accurate land use and land cover maps will always accompany professionals in the area [21]. The validation of land cover products is essential to demonstrate the quality of remote sensing products for decision making.
Evaluating and reporting with appropriate metrics is essential for the user community [46]. Factors such as the size and quality of training samples, thematic accuracy, choice of classifier and the size of the study area affect the accuracy of classified maps [13] [14] [21] [91].
Understanding these factors helps to find the appropriate classification accuracy for a given problem [21]. The selection of samples must comply with statistical criteria, such as the type and method of sampling. Mastella and Vieira [56] and Shetty [21] state that in remote sensing, simple and stratified random sampling are widely used, with most validation indexes based on simple random sampling.
However, some authors employ and recommend systematic sampling methods for studying land use and land cover because of their accurate results, despite the absence of an unbiased estimate of the variance [21].
Accuracy assessment is a key component of mapping with remote sensing data, as it supports evaluating the performance of different classifiers and the effect of sampling [6]. The literature recommends the inclusion of an error (confusion) matrix [13] [14] to assist in identifying confusion between classes, as well as potential sources of error [21] [46] [56]. Furthermore, quantitative metrics derived from the confusion matrix provide significant support, such as overall accuracy, which expresses how close the classified map is to the reference, as well as weighted metrics (producer's accuracy and user's accuracy, the Kappa index, Tau, the Z statistic, among others) by area and confidence intervals.
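These matrix-derived metrics can be sketched in a few lines. The two-class confusion matrix below is a hypothetical example (rows are mapped classes, columns are reference classes); the formulas are the standard ones for overall accuracy, producer's/user's accuracy and Kappa.

```python
def accuracy_metrics(matrix):
    """Overall, producer's and user's accuracy plus the Kappa index from
    a square confusion matrix (rows = mapped class, columns = reference)."""
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    rows = [sum(row) for row in matrix]        # mapped-class totals
    cols = [sum(col) for col in zip(*matrix)]  # reference-class totals
    overall = diag / n
    users = [matrix[i][i] / rows[i] for i in range(len(matrix))]
    producers = [matrix[i][i] / cols[i] for i in range(len(matrix))]
    expected = sum(r * c for r, c in zip(rows, cols)) / n ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, kappa

# Hypothetical two-class matrix: forest vs. non-forest
m = [[45, 5],
     [10, 40]]
overall, producers, users, kappa = accuracy_metrics(m)
print(overall, round(kappa, 2))  # 0.85 0.7
```

User's accuracy (commission) is read along the rows and producer's accuracy (omission) along the columns; Kappa discounts the agreement expected by chance from the row and column totals.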

Future Perspectives and Challenges
The world has experienced a remarkable and rapid advance in the field of remote sensing, acquisition of geospatial information and mapping. Cloud architectures, open-source software, creative image processing developers, and a market eager to integrate Earth observation and location data sets to verify assumptions and predict trends continue to drive the industry.
In 2005, the Global Earth Observation System of Systems (GEOSS) was created, and a new era started for geotechnologies and remote sensing. The new technological advances have been characterized by the flow of information, international cooperation and interconnection between current and future observation systems, with impacts on cost reduction in the generation of remote sensing and geoprocessing products. This has revolutionized the ability to study and manage our planet [92].
The reuse of rockets, the launch of multiple satellites in a single mission, and the use of low orbits in satellite constellations [15] are successful examples that have revolutionized the space sciences.
NASA and ESA developed the Harmonized Landsat and Sentinel-2 (HLS) project, whose objective is to provide a single data product from the two satellite systems with a temporal resolution of 3 to 5 days, depending on latitude.
According to Aubrecht [93], the efforts presented here, associated with the improved combined use of new types of space-based data with data from dynamic networks of in situ sensors and with free data distribution policies, highlight an inevitable path towards dynamic (near) real-time monitoring, especially in application domains involving social and population activities. In addition, he states that the benefit of this new trend is a substantially higher spatio-temporal resolution, capable of monitoring living species. Table 5 summarizes some of the progress achieved in the production and processing of spatial data for distinct purposes.
Despite the scenario seen above, the challenges of remote sensing persist; some of them are listed below: 1) Deploying sensors with very high resolution on a single platform, since LULC characteristics occur at finer spatial scales than the resolution of the primary remote sensing satellites.
2) Improving sub-pixel-based MLAs in order to attenuate the spectral mixtures of LULC-related targets, especially in regions with highly fragmented uses and covers.
3) Defining the number of samples: according to Shetty [21], due to the small number of training samples and the diversity of the spatial and spectral distributions of land covers, existing spectral and spectral-spatial classification methods usually perform better for certain land cover types and relatively worse for others.
4) A shortage of qualified personnel, associated with financial, political and economic constraints, can be considered a challenge in developing countries. According to Cerbaro et al. [94], these factors can limit the capacity of institutions to develop the qualified personnel and infrastructure needed to benefit from the acknowledged gains that EO data and information can bring to their environmental and sustainability management roles.
Table 5. Some progress achieved in the scope of production and processing spatial data.

GEOSS initiative: interconnecting existing and future Earth observation systems; reduces costs, promotes international cooperation and serves the public good; one user may require many data sets, while one data set may serve many users.

EO open-data initiatives, INPE/NASA/ESA (2004-2013): INPE became a pioneer by making CBERS-2 images available free of charge from 2004; in 2008 the USGS adopted a free and open Landsat data policy, which led to a substantial increase in the use of Landsat data; ESA's Sentinel-2 data products became publicly available at no cost through accessible web portals.

Commercial microsatellite constellations: Planet Labs Inc. operates a constellation of more than 100 CubeSats ("Doves") capturing daily high-resolution images (3 - 5 m); the TripleSat/DMC3 constellation, successfully launched in 2015, makes it possible to target anywhere on Earth once per day.

Rocket reuse: launching multiple satellites in one mission, and the use of low orbits in satellite constellations; rocket reuse by SpaceX in 2017, with the possibility of simultaneously launching many satellites.

Big EO data management: functionalities for big EO data management, storage and access; a more complete solution for big EO data management and analysis by integrating different kinds of technologies; Application Programming Interfaces (APIs) and web services.

Landsat/Sentinel-2 harmonization: uses a set of algorithms to obtain seamless products from OLI and MSI: atmospheric correction, cloud and cloud-shadow masking, spatial co-registration and common gridding, illumination and view-angle normalization, and spectral bandpass adjustment.
In general, the possibility of using all data simultaneously, combining all available information about the studied areas regardless of its medium and taking advantage of the complementarity of heterogeneous methods; the opportunity for new kinds of analysis and incremental new methods; the need to strengthen the links between geographers and computer scientists; the use of unsupervised (or guided) approaches and the rethinking of algorithms; and the definition of algorithms and methods able to take into account errors/inaccuracies in data as well as in knowledge all remain remote sensing challenges. More details on these and other challenges can be found, in systematized form, in Nedd et al. [48].

Conclusions
The present study aimed to address the advances achieved in the field of acquisition and processing of remotely sensed data for the purpose of land use and land cover mapping. Advances in data acquisition techniques were presented, such as the reuse of satellites and rocket launch bases, software and algorithms to treat spatial data, as well as approaches for processing such data. Several approaches were developed due to the limitations presented by each one, as well as by the algorithms. Thus, the use of sub-pixel learning algorithms has been proposed to solve problems with mixed pixels found in the pixel-based approach, since they disaggregate the pixel spectrum into its constituent spectra.
Regarding big data, cloud processing using platforms such as GEE is proposed and recommended. However, it is not possible to find a universal approach to processing land use and land cover data, given the variety of classification systems that, for example, incorporate remotely sensed data and land observations as essential inputs for the analysis and assessment of land use and land cover maps. Several studies therefore point to hybrid or improved approaches, because certain classes are poorly highlighted by any single technique.
The world is interconnected, joining synergies to find solutions to the challenges posed, such as the improved combined use of new types of space-based data with data from dynamic in situ sensor networks, making essential updates possible.
Nevertheless, issues related to multi-source, multi-temporal and multi-level analysis, robustness and quality, and scalability remain challenges for remote sensing.