Article Information

Authors:
Alberto J. Perea1
José E. Meroño2
María J. Aguilera1
José L. de la Cruz1

Affiliations:
1Department of Applied Physics, University of Cordoba, Rabanales campus 14071, Spain

2Department of Graphics Engineering and Geomatics, University of Cordoba, Rabanales campus 14071, Spain

Correspondence to:
Alberto Perea

email:
g12pemoa@uco.es

Postal address:
Department of Applied Physics, University of Cordoba, Rabanales campus 14071, Cordoba, Spain

Keywords
digital aerial photography; expert classification algorithm; land-cover classification; object-oriented classification; UltracamD

Dates:
Received: 16 Jan. 2010
Accepted: 16 Mar. 2010
Published: 08 June 2010

How to cite this article:
Perea AJ, Meroño JE, Aguilera MJ, De la Cruz JL. Land-cover classification with an expert classification algorithm using digital aerial photographs. S Afr J Sci. 2010;106(5/6), Art. #237, 6 pages. DOI: 10.4102/sajs.v106i5/6.237

Copyright Notice:
© 2010. The Authors. Licensee: OpenJournals Publishing. This work is licensed under the Creative Commons Attribution License.

ISSN: 0038-2353 (print)
ISSN: 1996-7489 (online)

Land-cover classification with an expert classification algorithm using digital aerial photographs
Abstract

The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers evaluated were the following: (1) bare soil, (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.), (3) high protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.), (4) alfalfa (Medicago sativa L.), (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.), (6) urban soil, (7) olive groves (Olea europaea L.) and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for digital classification, because they contain valuable spectral information in addition to that currently exploited from the geometric point of view. Moreover, the new classification techniques reduce the problems encountered with high-resolution images, while achieving reliabilities better than those of traditional methods.

Introduction

In recent years, the development of remote sensing technologies has increased exponentially. Until recently, high-resolution satellites could only obtain images with a spatial resolution of about 5 m; these technologies have since improved. Data obtained from this type of sensor have generated a large amount of environmental information.1 Extracting useful information from high-resolution satellite imagery remains a major technical problem in remote sensing, however, because in the majority of cases the spectral information contained in pixels is not sufficient to identify vegetation species or types of surface cover.2 Pixels normally include a radiometric mixture from their neighbours and consequently few zones are totally homogeneous.2

Currently, process improvements have enabled digital photogrammetry based on aerial photography to generate geometrically corrected products compatible with conventional mapping detail. These products support decision-making and the analysis of territorial elements and natural resources at a level of detail surpassing that available from satellites. The production of digital orthophotos is an ideal complement to environmental assessment processes and spatial planning that heretofore made use only of satellite imagery.1 Digital orthophotos constitute a basic tool in the task of managing the environment and they are also a basis of reference in spatial plans.3

The advent of photogrammetry with digital cameras has made multispectral information available for large areas of territory. This information is being used solely from the geometric point of view, because there are no algorithms and models to exploit the infrared information captured simultaneously with the colour information. There is currently great interest in the development of new classification algorithms in the area of the digital treatment of images.4 The combination of spectral data with other sources of auxiliary data allows the use of more information to improve classifications.5

In recent years, and probably due to the availability of more powerful software, some researchers have reported that the segmentation techniques used in classifications reduce the local variation caused by textures, shadows and shape.6,7 Object-based classification may be a good alternative to the traditional pixel-based methods. To overcome the H-resolution problem and the salt-and-pepper effect, it is useful to analyse groups of contiguous pixels as objects instead of using the conventional pixel-based classification unit.6

Expert systems use data other than spectral characteristics to improve the results of classification. The use of auxiliary information to increase the accuracy of digital classification involves combining an existing knowledge base with information extracted from images.8 To improve automatic classification procedures, it is necessary to introduce a set of parameters to inform the classification beyond the digital values of the pixels.9 With the use of auxiliary data, the initial results of the procedures can be corrected through knowledge-based rules.5

New techniques for classification

In high-resolution images from satellites or aerial digital cameras (UltracamD, DMC, ADS-40, etc.), each pixel does not correspond to an object, feature or area as a whole, but to a portion of one, which limits the classic techniques of pixel-based classification.10 Similarly, the great detail in digital images obtained from airborne sensors can lead to excessive variability within an area of the same cover, with an associated decrease in the separability of different cover types.

Alternative approaches to classification involve the object-oriented analysis of images, which takes into account, inter alia, the shapes, textures, contextual information and spectral information in the image. Recent studies have demonstrated the superiority of this new concept over traditional classifiers.11,12,13,14 Its basic principle is to make use of important information (shape, texture, contextual information) that is only present in significant image objects and their mutual relations. This type of classification is called ‘object-oriented classification’ and requires a prior segmentation, defined as the search for homogeneous regions in an image, followed by the classification of these regions.15 Software called eCognition® is available that allows segmentation and classification according to this concept. The influence the described parameters have on the segmentation is flexible and can be specified by the user through the manipulation of different factors based on colour and shape (compactness and smoothness).16 The second step is the classification of these regions, based either on examples (nearest-neighbour algorithm) or on membership functions, allowing users to develop an expert knowledge base (based on fuzzy logic) and to assign regions to certain classes.16
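As an illustration of the membership-function approach mentioned above, the following minimal Python sketch shows how a segmented region could be given a degree of membership in a class from one of its features. The class name, the NDVI feature and the thresholds are hypothetical choices for illustration; this is not the eCognition® implementation or a rule from this study.

```python
def trapezoidal_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, rising to 1 at b,
    flat until c, falling back to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)  # c < x < d

# Hypothetical rule: a region belongs to 'vegetation' if its mean NDVI is high.
region_mean_ndvi = 0.62  # illustrative feature value of one segmented region
degree = trapezoidal_membership(region_mean_ndvi, a=0.2, b=0.4, c=0.9, d=1.1)
print(f"Membership of the region in 'vegetation': {degree:.2f}")
```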

Another current trend is to develop algorithms that improve the classifications based solely on the reflectance of the pixels. It should be noted, however, that the neighbouring pixel radiometric mixture prevents the extraction of homogeneous regions of interest.16

Gong and Howarth17 argue that it is important to recognise that conventional classifiers (maximum likelihood classifier, minimum distance classifier) do not recognise spatial patterns in the same way as a human interpreter does. An expert system was therefore developed to incorporate data other than the spectral features to improve the outcome of the purely spectral classification.

This work aims to evaluate the utility of spectral information from these photogrammetric sensors in determining land covers.

Materials and Methods

The area of study was located in the Pedroches Valley of Cordoba Province, Spain (Figure 1) and includes the municipality of Hinojosa del Duque (38°23′ N – 38°33′ N; 5°16′ W – 5°50′ W). This rectangular area of 16 km × 20 km, covering 32 000 ha, is representative of Andalusian dryland crops and has a typical continental Mediterranean climate, characterised by long dry summers and mild winters.

To carry out the study, 64 frames were captured by the Vexcel UltracamD photogrammetric sensor on 23 May 2006, with dimensions of 7500 × 11 500 pixels and encoded in 8 bits. The frames had a spatial resolution of approximately 0.5 m and consisted of infrared, red, green and blue bands. These frames were orthorectified and referred to European Datum 1950 on the International Ellipsoid.

Figure 1: A map showing the study area in Spain

To develop this work, information was used from field visits by the Public Enterprise for Agricultural and Fisheries Development. The land covers evaluated were: (1) bare soil, (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.), (3) high protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.), (4) alfalfa (Medicago sativa L.), (5) woodlands and scrublands, mainly holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.), (6) urban soil, (7) olive groves (Olea europaea L.) and (8) burnt crop stubble.

To perform the supervised classification and expert classification algorithm, the Erdas Imagine 9.0® system (Leica Geosystems Geospatial Imaging, Norcross, United States of America) was used. In the case of object-oriented classification, the eCognition Professional 5.0® software (Definiens, Munich, Germany) was used.

The methodology began with the calculation of the principal components, followed by the normalised difference vegetation index (NDVI). Images with the desired band combinations were then obtained and classified. Finally, the results of the classifications were validated.

Obtaining the principal components

The objective of ‘principal component analysis’ (PCA) is to summarise a wide group of variables in a new and smaller set, without losing a significant part of the original information.18 For the final user of remotely sensed imaging products, the goal of PCA is to construct images with an increased capacity to differentiate types of cover.
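The following minimal Python/NumPy sketch illustrates this step: it computes the principal components of a multiband image by diagonalising the covariance matrix of the bands. It is a generic PCA sketch, not the ERDAS Imagine® implementation used in the study, and the random four-band frame stands in for an UltracamD image.

```python
import numpy as np

def principal_components(image):
    """Principal components of a (rows, cols, bands) image array.

    Returns the transformed image and the proportion of variance
    explained by each component.
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    pixels -= pixels.mean(axis=0)              # centre each band
    cov = np.cov(pixels, rowvar=False)         # bands x bands covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # reorder to descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pc = pixels @ eigvecs                      # project pixels onto components
    explained = eigvals / eigvals.sum()
    return pc.reshape(rows, cols, bands), explained

# Random stand-in for a 4-band (NIR, red, green, blue) frame
frame = np.random.randint(0, 256, size=(100, 100, 4), dtype=np.uint8)
pcs, explained = principal_components(frame)
print("Variance explained per component:", np.round(explained, 3))
```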

Obtaining the NDVI

Vegetation has very characteristic spectral behaviour. It shows a high absorption of red wavelengths, yet exhibits high reflectivity with respect to the near infrared ones.

The NDVI was obtained so as to highlight the different spectral behaviours of each type of ground cover. The index was calculated from the reflectance image, following a study of the influence of the calculation of apparent reflectance as a reference in obtaining the green vegetation index (NDVI) and its cartographic expression, which showed a positive effect.19

This index is based on the difference between the maximum absorption in the red (690 nm), owing to chlorophyll pigments, and the maximum reflection in the near infrared (800 nm), owing to the cellular structure of leaves.20 Using narrow hyperspectral bands, this index is quantified according to the following equation:

NDVI = (R_NIR - R_RED) / (R_NIR + R_RED)    [Eqn 1]

where R_NIR and R_RED are the reflectances in the near infrared band (800 nm) and the red band (690 nm), respectively.
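A minimal sketch of Eqn 1 applied pixel by pixel in Python/NumPy is given below; the random arrays stand in for the near infrared and red reflectance bands of the frames.

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel NDVI = (R_NIR - R_RED) / (R_NIR + R_RED), set to zero
    where both bands are zero to avoid division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Stand-in reflectance bands in place of the UltracamD near infrared and red bands
nir_band = np.random.rand(100, 100)
red_band = np.random.rand(100, 100)
index = ndvi(nir_band, red_band)
print("NDVI range:", float(index.min()), float(index.max()))
```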

Supervised classification

Starting from different combinations of bands (Table 1), a series of images was obtained. A supervised classification was then performed on each of these images.

The Bayesian maximum likelihood classifier was used to classify the images. This algorithm is the most exact of the classifiers in the ERDAS Imagine 9.0® system because it takes into consideration the largest number of analytical parameters and accounts for the variability of the classes through a covariance matrix.
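The sketch below illustrates the principle of a Gaussian maximum likelihood classifier: each class is described by the mean vector and covariance matrix of its training pixels, and each pixel is assigned to the class with the highest log-likelihood. It is a generic illustration, not the ERDAS Imagine® implementation, and the training samples are hypothetical.

```python
import numpy as np

def train_ml(samples):
    """Estimate mean vector and covariance matrix per class.

    samples: dict mapping class name -> (n_pixels, n_bands) array of training pixels.
    """
    stats = {}
    for name, pix in samples.items():
        pix = pix.astype(np.float64)
        stats[name] = (pix.mean(axis=0), np.cov(pix, rowvar=False))
    return stats

def classify_ml(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    names = list(stats)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, cov = stats[name]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        diff = pixels - mean
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared Mahalanobis distances
        scores[:, j] = -0.5 * (logdet + mahal)              # log-likelihood up to a constant
    return np.array(names)[np.argmax(scores, axis=1)]

# Hypothetical training pixels for two classes in a 5-band image (4 bands + NDVI)
train = {'cereal': np.random.rand(200, 5), 'bare soil': np.random.rand(200, 5) + 1.0}
class_stats = train_ml(train)
labels = classify_ml(np.random.rand(10, 5), class_stats)
print(labels)
```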

Table 1: Images used in supervised classification

Object-oriented classification

As noted above, the particularity of this type of analysis is that the classification is based on objects rather than pixels. Since the image is formed by pixels, the first step in object-oriented analysis is to group adjacent pixels through region-growing techniques, and then to classify the objects subsequently extracted. In this way, the number of parameters that can be evaluated greatly increases, allowing criteria such as size, shape, mean colour, maximum and minimum values, proximity to other objects and texture. At the same time, segmentation reduces the number of objects to classify, so the processing time decreases.

The stopping criterion in the process of merging regions is given by the so-called scale parameter, which can be defined by the user in relation to the maximum global heterogeneity of the segments. The larger the scale parameter, the bigger the objects in the image and, since the scale parameter can be changed, different types of segmented images can be obtained. Thus, the objects generated in a coarser segmentation inherit information from smaller objects generated with finer scale parameters. Subsequently, the classifications were trained using the same training plots and validated using the same validation plots used in the previous classifications.
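As a rough illustration of region growing with a heterogeneity-based stopping criterion, the toy sketch below grows single-band regions while their standard deviation stays below a threshold that plays the role of the scale parameter. It is a strong simplification for illustration only, not the multiresolution segmentation algorithm implemented in eCognition®.

```python
import numpy as np
from collections import deque

def region_growing(image, scale):
    """Toy single-band region growing: grow a region from each unlabelled pixel,
    adding 4-connected neighbours while the region's standard deviation stays
    below the scale threshold."""
    rows, cols = image.shape
    labels = np.zeros((rows, cols), dtype=int)
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                continue
            current += 1
            labels[r, c] = current
            values = [float(image[r, c])]
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and not labels[ny, nx]:
                        candidate = values + [float(image[ny, nx])]
                        if np.std(candidate) <= scale:   # heterogeneity stopping criterion
                            labels[ny, nx] = current
                            values = candidate
                            queue.append((ny, nx))
    return labels

segments = region_growing(np.random.randint(0, 256, (40, 40)), scale=15)
print("Number of regions:", int(segments.max()))
```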

The output of the segmentation process depends on specifications and weighting of input data and controlling parameters such as scale (control size parameter), colour (spectral information) and shape (smoothness and compactness information) of the resulting image objects. The option ‘multiresolution segmentation’ was used, which performs automatic extraction of homogeneous objects. The scale parameter is an abstract term that determines the maximum allowed heterogeneity for the resulting image objects. Colour parameter and shape parameter (smoothness and compactness) define the percentage that the spectral values and the shape of objects, respectively, will contribute to the homogeneity criterion. Finally, the values of 211, 0.9, 0.1, 0.5 and 0.5 were defined for scale, colour, shape, smoothness and compactness. For most cases, colour was the most important and had the greatest weight in the definition of objects.

The nearest-neighbour algorithm was used for the classification: some samples were chosen (training area) for each of the classes. The rest of the scene was then classified accordingly. This is a very rapid and simple method, adequate when the classification of an object requires many bands/criteria. It also takes into account different parameters related to the objects (area, longitude, mean colour, brightness, and texture).
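A minimal sketch of the nearest-neighbour rule applied to object features is shown below; the feature vectors (mean NDVI, mean NIR, area) and the training samples are hypothetical and chosen only to illustrate assigning each object the class of its closest training sample.

```python
import numpy as np

def nearest_neighbour_classify(object_features, training_features, training_labels):
    """Assign each image object the class of its nearest training sample
    in feature space (a 1-NN rule over object attributes)."""
    assigned = []
    for feat in object_features:
        distances = np.linalg.norm(training_features - feat, axis=1)
        assigned.append(training_labels[int(np.argmin(distances))])
    return assigned

# Hypothetical object features: [mean NDVI, mean NIR, area in pixels / 1000]
training = np.array([[0.7, 180, 2.1], [0.1, 90, 5.4], [0.4, 140, 0.8]])
labels = ['cereal', 'bare soil', 'olive groves']
objects = np.array([[0.65, 170, 1.5], [0.15, 100, 4.0]])
print(nearest_neighbour_classify(objects, training, labels))
```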

Expert classification algorithm

The expert classification algorithm used in this work consisted of assigning the classes that made up the legend based on the areas of coincidence among different types of previously classified images. To do this, the following information was necessary: an image created based on the field visits and the 2005 map of land cover and vegetal cover for Andalusia, used as the ground truth; the supervised classifications based on the image formed by the principal components and on the image formed by the principal components and the NDVI; and the object-oriented classification. The algorithm was implemented in the ERDAS Imagine 9.0® system.

This algorithm was designed with the following decision rules: (1) when the pixels of each class of the image classified from the principal components and NDVI coincided with the image classified from the principal components alone, they were assigned to that class and (2) the remaining pixels, where there was no coincidence, were assigned the class given by the object-oriented classification. To evaluate the quality of the classifications, a total of 75 000 verification points was taken (approximately 2% of the area), for which both the real cover (ground truth) and the cover obtained by classification were recorded.
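The two decision rules can be expressed compactly as array operations. The sketch below assumes the three input classifications are available as co-registered arrays of integer class codes (here filled with random stand-in values): where the two supervised maps agree, that class is kept; elsewhere, the object-oriented class is used.

```python
import numpy as np

# Pixel-wise class maps from the three input classifications (integer class codes 1-8).
# Random stand-in arrays; in the study these are the PCA, PCA+NDVI and object-oriented maps.
pca_map      = np.random.randint(1, 9, (100, 100))
pca_ndvi_map = np.random.randint(1, 9, (100, 100))
object_map   = np.random.randint(1, 9, (100, 100))

# Rule 1: where the two supervised classifications agree, keep that class.
# Rule 2: elsewhere, take the class from the object-oriented classification.
agreement = pca_map == pca_ndvi_map
expert_map = np.where(agreement, pca_map, object_map)
print("Pixels decided by the agreement rule:", int(agreement.sum()))
```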

The overall accuracy, kappa statistic and the producer’s and user’s accuracy were calculated for each of the classifications. The overall accuracy was calculated as the ratio of correctly classified plots to the total number included in the evaluation process. The kappa statistic is an alternative measure of classification accuracy that subtracts the effect of random agreement; it quantifies how much better a particular classification is in comparison with a random classification. Some authors have suggested the use of a subjective scale where kappa values < 40% are poor, 40% – 55% fair, 55% – 70% good, 70% – 85% very good, and > 85% excellent.21

For individual classes, two accuracies can be calculated, (1) the producer’s accuracy is a measure of omission error and indicates the percentage of pixels of a given land-cover type that are correctly classified and (2) the user’s accuracy is a measure of the commission error and indicates the probability that a pixel classified into a given class actually represents that class on the ground.
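For reference, the sketch below computes the overall accuracy, kappa statistic and per-class producer’s and user’s accuracies from a confusion matrix whose rows are reference classes and columns are mapped classes; the 3 × 3 matrix is a toy example, not data from this study.

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy, kappa, and per-class producer's/user's accuracy
    from a confusion matrix (rows = reference classes, columns = mapped classes)."""
    confusion = confusion.astype(np.float64)
    total = confusion.sum()
    diag = np.diag(confusion)
    overall = diag.sum() / total
    # Chance agreement estimated from the row and column marginals
    chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (overall - chance) / (1 - chance)
    producers = diag / confusion.sum(axis=1)   # 1 - omission error
    users = diag / confusion.sum(axis=0)       # 1 - commission error
    return overall, kappa, producers, users

# Toy 3-class confusion matrix
cm = np.array([[50, 3, 2], [4, 45, 6], [1, 2, 47]])
overall, kappa, prod, user = accuracy_metrics(cm)
print(f"Overall: {overall:.3f}  Kappa: {kappa:.3f}")
print("Producer's:", np.round(prod, 3), " User's:", np.round(user, 3))
```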

Results and Discussion

Results of the object-oriented classification

The result of segmentation is a new image that divides the original image into regions such that the pixels included in each of them are similar. After the process of segmentation, a new image was obtained and divided into 13 243 regions that were later classified (Figure 2).

Figure 2: Example of segmentation of the digital aerial photography at the scale of 211

The accuracy assessment of this classification was measured using randomly selected points for which land cover was determined with an orthophoto mosaic that was geo-referenced to the image. Table 2 shows the accuracy of the object-oriented classification of the digital aerial photography.

The improvement achieved by the introduction of textural and contextual features was significant for all classes with respect to the pixel-based analysis. For some classes, the producer’s and user’s accuracy reached a value of 100% (e.g. for ‘urban soil’ and ‘olive groves’). For others, the producer’s and user’s accuracy increased, but remained low, for example, in the case of ‘woodlands and scrublands’.

The highest producer’s accuracies were for the ‘burnt crop stubble’, ‘urban soil’ and ‘olive groves’ categories, all with the value of 100%. In contrast, the lowest value was for ‘alfalfa’ (78.3%), because of its spectral similarity to ‘high protein crops’. Referring to the user’s accuracy, the best results were achieved for the categories ‘urban soil’ (100%) and ‘olive groves’ (100%) and, as in the case of the producer’s accuracy, the lowest value was for the category ‘alfalfa’ (74.5%), due to misclassification of ‘high protein crops’ during image classification.

The overall accuracy and kappa statistic were excellent, reaching values of 91.7% and 87.5%, respectively. In addition, the object-oriented method significantly narrowed down the variation of class-based accuracies compared with the result of the pixel-based classification method.

A map obtained from the object-oriented classification is presented in Figure 3.

Figure 3: Example of object-oriented classification

Expert classification algorithm

The accuracy of the expert classification algorithm was higher than that of the pixel-based classification. Both the overall accuracy and kappa coefficient were significantly higher, and the producer’s and user’s accuracies also gave better results for the expert classification algorithm.

The results of the expert classification algorithm (Table 2) showed a marked improvement in both producer’s and user’s reliability in most categories when compared with the purely spectral classifications. Moreover, this algorithm achieved an overall accuracy and kappa statistic above 90%. The producer’s accuracy increased in all cases, except those of ‘cereal’ and ‘alfalfa’, but was nevertheless above 87%. The category ‘alfalfa’ was confused with the category ‘high protein crops’ for the reason already mentioned. The user’s accuracy increased in all categories except that of ‘bare soil’ (92.9%). The overall accuracy was 95% and the kappa statistic had a value of 91.1%, indicating strong agreement between the classification map and the ground reference information.

Table 2: Producer’s and user’s accuracy, overall accuracy and kappa statistic for the supervised classifications, object-oriented classification and expert classification algorithm

The accuracy values obtained with the object-oriented classification and with the expert classification algorithm in digital aerial photography were similar to, or higher than, the values obtained by other authors using satellite images. The methodology is therefore adequate for the classification of land covers. A comparison between the supervised classifications and the expert classification algorithm is presented in Figure 4.

Figure 4: Example of comparison between, (a) supervised classification of the image formed by the principal components, (b) supervised classification of the image formed by the principal components and the NDVI index and (c) the classification using the expert algorithm

In the southern Baltic Sea, Janas et al.22 used object-oriented classification methods to classify seagrass landscape, composed of meadows, beds and patches/gaps, obtaining an overall accuracy of 83%.

On the Gulf coast of Texas, Green and Lopez23 classified bivalve reef, sea grass, land, mangroves, emergent marsh, unconsolidated sediments and unknown benthic habitat using object-oriented classification in images from the ADS40 aerial sensor, obtaining an accuracy of 90%, lower than that obtained with the expert classification developed in this work.

In the Three Gorges area of Chongqing in China, Zhang et al.24 performed an expert classification of 17 categories. SPOT5 XS and Pan data were acquired between 2004 and 2006 for cloud-free images, with two scenes of different seasons being selected for each area for vegetation detection, attaining an overall accuracy of 86%, again lower than that obtained in the present work (although it should be noted that a larger number of classes was classified).

Conclusion

The results obtained in the different classifications of digital aerial photographs show that the photographs from digital aerial sensors can be used in tasks that previously were specific to satellite images, offering the ability to discriminate land cover with great precision. Moreover, the new classification techniques represent a breakthrough in agricultural field controls, as the quality of the results of digital aerial photography, together with the development of the new techniques described, allows the control and monitoring of various agricultural areas without making field visits. The band combination that provided the best result in the supervised classification was the image formed by the principal components and the NDVI. Finally, it is noteworthy that the use of object-oriented classification and the expert classification algorithm yielded the best results, greatly reducing the problems associated with the use of high-resolution images, such as the salt-and-pepper effect. The best result was obtained with the expert classification algorithm, which achieved a kappa statistic of 91.1% and an overall accuracy of 95%.

References

1. Moreira JM. [Digital orthophotos of Andalusia, an important environmental value.] J Environ. 2005;49:35–37. Spanish.

2. Wilkinson GG, Kanellopoulos I, Kontoes C, Schoenmakers R. Advances in the automatic processing of satellite images. Paper presented at: Conference on the Application of Remote Sensing to Agricultural Statistics; 1991 Nov 26–27; Villa Carlotta, Belgirate, Lake Maggiore, Italy. Luxembourg: Office for Publications of the European Commission; 1991. p. 125–132.

3. Ayala RM, Menenti M. [Alternatives to problems presented in a classification process based on spectral pattern recognition.] Mapping. 2002;75:72–76. Spanish.

4. Abkar AA, Sharifi MA, Mulder NJ. Likelihood-based image segmentation and classification: A framework for integration of expert knowledge in image classification procedures. Int J Appl Earth Obs Geoinf. 2000;2:104–119.

5. Wicks TE, Smith GM, Curran PJ. Polygon-based aggregation of remotely sensed data for regional ecological analyses. Int J Appl Earth Obs Geoinf. 2002;4:161–173.

6. Yu Q, Gong P, Clinton N, Biging G, Kelly M, Schirokauer D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm Eng Remote Sens. 2006;72(7):799–811.

7. Hay GJ, Blaschke T, Marceau DJ, Bouchard A. A comparison of three image-object methods for the multiscale analysis of landscape structure. ISPRS J Photogramm Remote Sens. 2003;57:327–345.

8. Trotter CM. Remotely-sensed data as an information source for geographical information systems in natural resource management: a review. Int J Geogr Inf Syst. 1991;5:225–239.

9. Heyman O. Automatic extraction of natural objects from 1-m remote sensing images [homepage on the Internet]. c2003 [cited 2009 May 4]. Available from: http://www.cobblestoneconcepts.com/ucgis2summer/heyman/heyman.htm

10. Sánchez N. [Current overview of mixed techniques of image classification using spectral and texture segmentation. Application to high spatial resolution images.] Mapping. 2003;88:32–37. Spanish.

11. Leukert K, Darwish A, Reinhardt W. Urban land-cover classification: An object-based perspective. Paper presented at: URBAN 2003. Proceedings of the 2nd Joint Workshop on Remote Sensing and Data Fusion over Urban Areas. 2003 May 22–23; Berlin, Germany. Berlin: IEEE Geoscience and Remote Sensing Society; 2003. p. 278–282.

12. Tansey K, Chambers I, Anstee A, Denniss A, Lamb A. Object-oriented classification of very high resolution airborne imagery for the extraction of hedgerows and field margin cover in agricultural areas. Appl Geogr. 2008;29:145–157.

13. Geneletti D, Gorte BGH. A method for object-oriented land cover classification combining Landsat TM data and aerial photographs. Int J Remote Sens. 2003;24:1273–1286.

14. Perea AJ, Meroño JE, Aguilera MJ. [Oriented-based classification in aerial digital photography for land-use discrimination.] Interciencia. 2009;34:612–616. Spanish.

15. Mather P. Computer processing of remotely-sensed images: An introduction. Chichester: John Wiley and Sons; 1999.

16. Flanders D, Hall-Beyer M, Pereverzoff J. Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction. Can J Remote Sens. 2003;29:441–452.

17. Gong P, Howarth PJ. An assessment of some factors influencing multispectral land-cover classification. Photogramm Eng Remote Sens. 1990;56:597–603.

18. Chuvieco E. [Fundamentals of satellite remote sensing.] Madrid: Ediciones Rialp; 1990. Spanish.

19. Marini MF. [Influence of calculating the apparent reflectance in obtaining the green index (NDVI) and its cartographic expression.] Paper presented at: XXIII Reunión Científica de la Asociación Argentina de Geofísicos y Geodestas; 2006 August 14–18; Bahía Blanca, Argentina. Spanish.

20. Haboudane D, Miller JR, Pattey E, Zarco-Tejada PJ, Strachan I. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens Environ. 2004;90:337–352.

21. Monserud RA, Leemans R. Comparing global vegetation maps with the Kappa statistic. Ecol Modell. 1992;62:275–293.

22. Janas U, Urbański J, Mazur A. Object-oriented classification of QuickBird data for mapping seagrass spatial structure. Oceanol Hydrobiol Stud. 2009;38:27–43.

23. Green K, Lopez C. Using object-oriented classification of ADS40 to map benthic habitats of the state of Texas. Photogramm Eng Remote Sens. 2007;73:861–865.

24. Zhang L, Yueming Z, Bingfang W. Expert system based on object-oriented approach for land cover mapping. Paper presented at: ISPRS Congress Beijing 2008, Proceedings of Commission VII; 2008 July 3–11; Beijing, China. Enschede: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; 2008; Vol. 37, Part B7. p. 679–684.

