Thursday, April 21, 2016

Lab 10: Advanced Classifiers 3

Goals and Background

The main purpose of this lab is to learn two new algorithms for performing advanced classification. First I will perform an expert system/decision tree classification utilizing provided ancillary data, and then I will use an artificial neural network to perform a complex image classification. In the following blog post I will summarize the methods I used to complete both classifications and display the results.

Methods

Expert System Classification

All of the methods for this section were performed in Erdas Imagine 2015.

Expert system classification is one of a limited number of classifiers capable of producing results that meet strict accuracy requirements. The expert system utilizes ancillary data, such as zoning data or Census (population) data, to assist in classifying a remotely sensed image.

My professor provided me with a classified image of the Eau Claire and Chippewa Falls area in Wisconsin (Fig. 1). The original classification contains a few accuracy errors: certain green vegetation areas, certain agricultural areas, and a few other features were classified incorrectly. In the following steps I use the expert system classification method to improve the accuracy of the original classification.

(Fig. 1) Original classified image of the Eau Claire and Chippewa Falls area, provided by my professor.


The first step is to develop a knowledge base with which to improve the original classified image. To accomplish this I opened the Knowledge Engineer interface from the Raster sub-menu. From the Knowledge Engineer interface I created a hypothesis and rule for each class in the image. The first rule I created was for water (Fig. 2). I then created rules for the remaining classes.

(Fig. 2) Rule Props window creating the rule for the water class.


The second step is to add the ancillary data into the rules to assist in the classification. The first new hypothesis I created was labeled Other urban. In the Rule Props window I inserted the variable provided by my professor, which helps the algorithm select out the "other urban" areas (Fig. 3). A second variable was then inserted to assign those areas the proper label in the image. Next, I created a counter-rule for the original Residential class to exclude residential areas from being reclassified as "other urban" (Fig. 4). I then used the same process with ancillary data to convert green vegetation which had been wrongly classified as agriculture, plus a separate hypothesis for the opposite case. After completing these tasks the decision tree was complete and ready to perform the classification (Fig. 5).

(Fig. 3) Creation of the Other Urban area rule with tree display.

(Fig. 4) Creation of exclusion rule for the original Residential class.
(Fig. 5) Knowledge Engineer window with the decision tree completed.
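Conceptually, each hypothesis in the knowledge base is just a conditional test on raster layers. To make that concrete, here is a minimal Python/numpy sketch of the "other urban" rule and its residential counter-rule. This is only an illustration of the rule logic, not the Knowledge Engineer itself; the file names and class codes are hypothetical.

import numpy as np

# Hypothetical inputs: the original classified image and an ancillary
# zoning raster, both 2-D integer arrays of the same shape.
classified = np.load("classified.npy")
zoning = np.load("zoning.npy")

RESIDENTIAL, OTHER_URBAN = 4, 5   # hypothetical class codes
COMMERCIAL_ZONE = 2               # hypothetical zoning code

# Rule: pixels originally labeled residential that fall in a commercial
# zone are reassigned to "other urban". Counter-rule: residential pixels
# outside commercial zones keep their original class.
improved = classified.copy()
reassign = (classified == RESIDENTIAL) & (zoning == COMMERCIAL_ZONE)
improved[reassign] = OTHER_URBAN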

Opening the Knowledge Classifier from the Knowledge Engineer menu and loading the knowledge file created above brings up the Knowledge Classification window (Fig. 6). Selecting Next opens a new window to declare the output file name and parameters (Fig. 7). The cell size was set to 30 by 30 per my professor's instructions. The last step was to run the classification by selecting OK.

(Fig. 6) Knowledge Classification window with specified classes selected.

(Fig. 7) Knowledge Classification window with output file set.

Neural Network Classification

All of the steps in this section, unless otherwise noted, were performed in ENVI 4.6.1.

I opened the image provided by my professor and displayed it as a 4,3,2 band combination in the viewer. I then imported an ROI (region of interest) file containing training samples previously collected by my professor (Fig. 8).

(Fig. 8) Original image with ROI locations overlaid in the ENVI viewer.

I then selected Neural Net from the Supervised sub-menu of the Classification menu to open the Classification Input File window (Fig. 9). From this window I selected the image and clicked OK to proceed to the Neural Net Parameters window. I set the parameters according to my professor's specifications (Fig. 10). I then ran the classification and opened the results in a new viewer (Fig. 11).

(Fig. 11) Results from the Neural Network classification in ENVI.
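For intuition, the same idea can be sketched outside ENVI with scikit-learn's MLPClassifier: a small feed-forward network is trained on the ROI pixels and then applied to every pixel in the image. This is a rough analogue of the process, not ENVI's implementation; the file names and network parameters are hypothetical stand-ins for the values set in the Neural Net Parameters window.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: one row of band values per ROI pixel,
# with an integer class label for each row.
X_train = np.load("roi_pixels.npy")    # shape (n_samples, n_bands)
y_train = np.load("roi_labels.npy")    # shape (n_samples,)

# One hidden layer; the learning rate and iteration cap stand in for the
# training rate and number of iterations set in the parameters window.
net = MLPClassifier(hidden_layer_sizes=(10,),
                    learning_rate_init=0.2,
                    max_iter=1000)
net.fit(X_train, y_train)

# Classify every pixel by flattening the band stack into rows.
image = np.load("image_bands.npy")     # shape (rows, cols, n_bands)
rows, cols, bands = image.shape
result = net.predict(image.reshape(-1, bands)).reshape(rows, cols)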
An optional portion of the lab was to create my own training samples and classify a QuickBird image of the University of Northern Iowa campus (Fig. 12). I opened the provided image and created my training samples per the lab instructions. I then used the same procedure as above to complete the classification of the image (Fig. 13).


(Fig. 12) QuickBird image with the created training samples and parameter windows displayed.



(Fig. 13) Results of the classification for the campus image in ENVI.

Results




Thursday, April 14, 2016

Lab 9: Object Based Classification

Goals and Background
The main purpose of this lab is to gain knowledge of performing object-based classification within eCognition. eCognition is a software package dedicated to object-based classification of satellite images. Object-based classification is a relatively new form of classification in the world of remote sensing. In this lab I will learn the steps required to perform object-based classification using two different classifiers: the random forest (RF) classifier and the support vector machine (SVM) classifier.

Methods
I performed both forms of classification on the same image of Chippewa and Eau Claire counties in Wisconsin used in previous classification labs. The classification process was performed in eCognition as stated above; any additional programs used are noted.

After creating a New Project, I imported the provided image using Import Image Layer. The resolution was set to 30 m/pixel and the Use geocoding box was checked. The image was initially displayed in an unfamiliar color scheme, so I used Edit Image Layer Mixing to change the display to a familiar 4,3,2 band combination (Fig. 1).

(Fig. 1) Edit Image Layer Mixing window set to display a 4,3,2 band combination image.
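As an aside, the same 4,3,2 false-color composite can be reproduced outside eCognition with a few lines of Python. This is just a minimal display sketch; the file name is hypothetical and the stretch is a simple maximum normalization rather than anything eCognition does.

import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Hypothetical file name for the stacked Landsat image.
with rasterio.open("chippewa_eau_claire.img") as src:
    # Bands 4, 3, 2 (NIR, red, green) stacked as an RGB display array.
    composite = np.dstack([src.read(b) for b in (4, 3, 2)]).astype(float)

composite /= composite.max(axis=(0, 1))  # stretch each band to 0-1
plt.imshow(composite)
plt.title("4,3,2 false-color composite")
plt.show()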


Segmentation and Sample Collection

I created a new process in the Process Tree, which I opened from the Process menu. The first “process” acts like a label/heading from which to build the “tree”. Under the first process I created a New Child, which I labeled Generate objects. This step creates the segmentation within the image. Segmentation groups like pixels into objects based on pixel value, texture, and compactness. In the edit window I set the Algorithm to multiresolution segmentation, along with setting Shape to 0.3 and Compactness to 0.5, per my professor's instructions. The shape and compactness settings were derived through trial and error in my professor's previous work. I selected Execute after ensuring all of my parameters were set correctly. The image was then displayed with the created segmented areas (Fig. 2).

(Fig. 2) Zoomed in image with the segmentation layer over top of the satellite image.
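eCognition's multiresolution segmentation is proprietary, but scikit-image's SLIC superpixel algorithm gives a rough feel for the idea: pixels are grouped into objects, with a compactness knob that trades spectral similarity against shape regularity. The sketch below is only a loose stand-in; the parameters are illustrative and are not equivalent to eCognition's shape/compactness settings.

import numpy as np
from skimage.segmentation import slic

image = np.load("image_bands.npy")   # hypothetical (rows, cols, bands) array

# Group pixels into roughly 2000 objects; higher compactness favors
# compact, regular shapes over tight spectral similarity.
segments = slic(image, n_segments=2000, compactness=0.5,
                channel_axis=-1, start_label=1)
# segments is a (rows, cols) array of object IDs, analogous to the
# segmentation layer draped over the image in Fig. 2.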


Creating training samples is the next step of the classification process. For this lab I will be creating classification for the following classes:
  • Forest
  • Agriculture
  • Urban/Built-up
  • Water
  • Green vegetation/shrub
  • Bare soil

Using Class Hierarchy and Insert Class, I created the classes and labeled each with a corresponding color (Fig. 3). With the classes created, I began collecting samples using Select Samples from the Samples sub-menu of the Classification menu. After selecting Forest from the class list, I zoomed and panned around the image, selecting areas which were homogeneously forest. Once an area was selected, the segmented object changed to the color corresponding to the class (Fig. 4). After collecting the required number of samples (specified by my professor) for the forest areas, I used the same process for all of the remaining classes.

(Fig. 3) Class list created for classification.

(Fig. 4) Image of sample areas selected with appropriate color displayed.




Implementing Object Classification

I created a Scene Variable from the Manage Variable window found under the Process menu. The scene variable is similar to setting environments in ArcMap, as it gives the processing a location to store information.

I appended a new process labeled RF Classification below the segmentation “Level 1” step and then inserted a New Child under it labeled Train RF Classifier. Within the Edit Process window I set a number of parameters to properly execute the training step (Fig. 5). The parameters were again given to me by my instructor based on his experience with RF classification. The training samples were brought into the classifier through the Feature drop-down in the Edit Process window: the Select Features window opens after choosing <select features…> from the drop-down menu, and there I added selected features from my training samples per the advice of my professor (Fig. 6). The final step is to apply the classification. I inserted a child process under RF Classification labeled Apply RF Classifier and set its parameters to apply the trained classifier to the image (Fig. 7). After executing the apply step of the process, I selected the View Classification icon to display the result.

(Fig. 5) Edit Process window with parameters set to train the RF classifier.

(Fig. 6) Select Features menu with training data selected.


(Fig. 7) Edit Process window applying the RF classification.


(Fig. 8) Process Tree after inputting all of the processes to perform the RF classification.
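The Train/Apply pair in the Process Tree maps closely onto the fit/predict pattern in scikit-learn's RandomForestClassifier. Here is a hedged sketch of that analogue; the per-object feature tables (e.g., mean band values per segment) and the tree parameters are hypothetical, not eCognition's exact settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature tables: one row per segment, columns are object
# features such as mean band values (the features chosen in Select Features).
X_train = np.load("sample_object_features.npy")
y_train = np.load("sample_object_labels.npy")

# "Train RF Classifier" step: tree count and depth are illustrative.
rf = RandomForestClassifier(n_estimators=50, max_depth=10)
rf.fit(X_train, y_train)

# "Apply RF Classifier" step: label every segment in the scene.
X_all = np.load("all_object_features.npy")
object_classes = rf.predict(X_all)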


Export to Erdas

Using Export Results from the Export tab, I exported the classification as an Erdas Imagine image. I then opened the image in Erdas Imagine to alter the color scheme of the classification to match previously completed classifications.
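For reference, the same kind of export can be scripted: GDAL's HFA driver writes Erdas Imagine (.img) files. A minimal rasterio sketch follows; the file names, grid origin, and coordinate system are hypothetical placeholders.

import numpy as np
import rasterio
from rasterio.transform import from_origin

classified = np.load("rf_classes.npy").astype("uint8")  # hypothetical raster

# Hypothetical 30 m grid; driver="HFA" is GDAL's Erdas Imagine format.
transform = from_origin(600000, 5000000, 30, 30)
with rasterio.open("rf_classification.img", "w", driver="HFA",
                   height=classified.shape[0], width=classified.shape[1],
                   count=1, dtype="uint8",
                   crs="EPSG:32615", transform=transform) as dst:
    dst.write(classified, 1)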

Support Vector Machines

The support vector machine (SVM) is another machine learning algorithm; it finds the hyperplane that best separates the classes and uses it to classify the segmented areas.
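As with the RF step, a rough scikit-learn analogue makes the idea concrete: SVC fits the widest-margin separating boundary, and a kernel lets that boundary bend for classes that are not linearly separable. The feature tables and parameters below are hypothetical, not the settings from the lab instructions.

import numpy as np
from sklearn.svm import SVC

# Same hypothetical per-object feature tables used in the RF sketch above.
X_train = np.load("sample_object_features.npy")
y_train = np.load("sample_object_labels.npy")

# RBF kernel allows a non-linear class boundary; C and gamma are illustrative.
svm = SVC(kernel="rbf", C=10.0, gamma="scale")
svm.fit(X_train, y_train)
object_classes = svm.predict(np.load("all_object_features.npy"))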

I used the same steps as above to perform the SVM classification on the same segmented image. The only difference is the parameter settings when training the classifier (Fig. 9). The parameter settings were again provided to me by my professor in the lab instructions.




(Fig. 9) SVM classification parameters set in the Edit Process window.


Results



Both classification results contained some errors when comparing the produced maps with Google Earth. The areas with errors could have been corrected using Manual Editing in eCognition. The likely source of the errors is incorrectly selected training samples. From my observations, the SVM classification produced more accurate results than the RF classification.

Sources

Lta.cr.usgs.gov. (2016). Landsat Thematic Mapper (TM) | The Long Term Archive.

Thursday, April 7, 2016

Lab 8: Advanced Classifiers 1

Goals and Background

The purpose of this lab is to gain knowledge of and utilize spectral linear unmixing and fuzzy classification, two advanced classifiers. These powerful algorithms produce more accurate results when classifying remotely sensed images than conventional supervised and unsupervised classification.

Methods

Spectral Linear Unmixing

For this portion of the lab I utilized the Environment for Visualizing Images (ENVI) software. I performed the spectral linear unmixing on an ETM+ image of Eau Claire and Chippewa counties in the state of Wisconsin.

The Available Bands List opens after opening the image file in ENVI 4.6.1. From this window I set the RGB color scheme to a 4,3,2 band combination (Fig. 1). After selecting the Load RGB button, the image is displayed in three windows at varying zoom levels.

(Fig. 1) Available Bands List in ENVI.

First, I had to convert the image to principal components before running the spectral linear unmixing. The principal components transformation separates noise from the original image, which helps improve accuracy during the unmixing process. Using Compute New Statistics and Rotate from the Transform menu, I converted the original image to principal components. After opening the PC image created in this step, the Available Bands List contained six PC band images.
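A minimal scikit-learn sketch of the same rotation, for intuition (the file name is hypothetical; this stands in for ENVI's Compute New Statistics and Rotate, it is not that tool):

import numpy as np
from sklearn.decomposition import PCA

image = np.load("etm_bands.npy")   # hypothetical (rows, cols, 6) ETM+ stack
rows, cols, bands = image.shape

# Rotate the six correlated bands into uncorrelated principal components;
# most of the signal concentrates in the first components, leaving the
# noise in the later ones.
pca = PCA(n_components=bands)
pc_image = pca.fit_transform(image.reshape(-1, bands)).reshape(rows, cols, bands)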

I moved the zoom window around until I found an area with agriculture, bare soil, and water contained in the viewer. Next I opened a scatter plot of PC Band 1 against PC Band 2: selecting 2-D Scatter Plots from the Tools menu opens the Scatter Plot Band Choice window, where I selected PC Band 1 for X and PC Band 2 for Y.

With the scatter plot displayed, I collected my end member samples. An end member sample contains "pure" pixels for a given class; when drawing the polygon/circle for the end member selection, you do not yet know which class (land cover type) you are selecting. I selected Item 1:20 from the Class menu and then selected the green color. I drew a circle around one of the corners of the triangular plot and right-clicked inside the circle to finish my selection (Fig. 2). Once I finished the selection, the sampled areas turned green on the image, depicting the pixels I had selected. In Fig. 2 I had selected the bare soil end member. I completed the same process, changing the color each time, for the other end members (Fig. 3). The other two end members contained agriculture areas and water.

(Fig. 2) Collection of the first end member sample from the scatter plot.
(Fig. 3) The remaining end member samples collected from the scatter plot.
The next objective of the lab was to locate the urban area from the scatter plots. I loaded PC Band 3 and PC Band 4 using the same process as before. This scatter plot was not as triangular as the first. I searched around until I found the area of the scatter plot which contained the urban/built-up areas (Fig. 4).

(Fig. 4) Selection from the scatter plot displaying urban/built up area.
After collecting the end members, I saved the ROIs in preparation for the unmixing. I then opened Linear Spectral Unmixing from the Spectral menu. Using the original image and the ROIs from the previous step, I created fraction images displaying the various land covers (Fig. 5).

(Fig. 5) Fraction images created from the linear spectral unmixing. The brighter (whiter) an area, the higher the fraction of that pixel belonging to the specific class. (Left to right) Bare soil, water, forest, urban/built-up.

The fraction images can be utilized to create a classification image, though for the purposes of this lab I did not create one. In order to create a classified image, I would need to collect a fifth end member to produce the five LULC classes I have been creating in previous labs.
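The model behind these fraction images is simple: each pixel's spectrum is treated as a weighted sum of the end member spectra, and the weights (fractions) are recovered by a constrained least-squares solve. A rough Python sketch follows, with a hypothetical end member matrix and non-negativity enforced via scipy's NNLS (a simplification of ENVI's solver):

import numpy as np
from scipy.optimize import nnls

# Hypothetical end member spectra: one column per end member
# (bare soil, water, agriculture, urban), one row per band.
E = np.load("endmembers.npy")        # shape (n_bands, n_endmembers)
image = np.load("etm_bands.npy")     # shape (rows, cols, n_bands)
rows, cols, bands = image.shape

# Linear mixing model: pixel = E @ fractions. NNLS keeps every fraction
# non-negative; a fraction near 1 shows up bright in that end member's
# fraction image.
fractions = np.array([nnls(E, p)[0] for p in image.reshape(-1, bands)])
fraction_images = fractions.reshape(rows, cols, E.shape[1])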

Fuzzy Classification

The process of fuzzy classification was performed in Erdas Imagine.

Fuzzy classification is like an advanced version of supervised classification. The method can break a single pixel down into memberships in different classes (Fig. 6). Breaking down the pixel in this way allows for more accurate classification of remotely sensed images, especially where pixels contain a mixture of land covers.

(Fig. 6) The ability of fuzzy classification to break a single pixel into individual classes.
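The soft-membership idea can be sketched with per-class Gaussian likelihoods normalized to sum to one per pixel. This is a simplified illustration of the concept, not Erdas Imagine's fuzzy algorithm; the signature files and class list are hypothetical.

import numpy as np
from scipy.stats import multivariate_normal

pixels = np.load("image_pixels.npy")   # hypothetical (n_pixels, n_bands)

# Hypothetical class signatures: a mean vector and covariance matrix
# per class, estimated from merged training samples.
signatures = {
    "water":  (np.load("water_mean.npy"),  np.load("water_cov.npy")),
    "forest": (np.load("forest_mean.npy"), np.load("forest_cov.npy")),
    # ...remaining classes
}

# Score each pixel under each class's Gaussian, then normalize so each
# pixel's scores sum to 1: a soft membership in every class rather than
# a single hard label.
scores = np.column_stack([multivariate_normal(m, c).pdf(pixels)
                          for m, c in signatures.values()])
memberships = scores / scores.sum(axis=1, keepdims=True)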
I collected training samples the same way I did in Lab 5. One difference when collecting training samples for fuzzy classification is that you want to collect an equal number of homogeneous and heterogeneous samples. Collecting both types of samples better represents the real world and results in a more accurate classification. After collecting the signatures, I merged the files as I did in Lab 5.

To perform the fuzzy classification I opened the Supervised Classification window. I input the proper files, including the signature file and distance file created in the previous step. I used Maximum Likelihood as the parametric rule, changed Best Classes Per Pixel to 5, and then ran the fuzzy classification. I then selected Fuzzy Convolve to combine the results into one image. I took the combined image and created a map in ArcMap (Fig. 7).


(Fig. 7) Classified image using fuzzy classification.



Sources

Lta.cr.usgs.gov. (2016). Landsat Thematic Mapper (TM) | The Long Term Archive.