Sunday, November 12, 2017

Module 10: Supervised Classification

This week we focused on Supervised Image Classification. Supervised Classification differs from last week's method, Unsupervised Classification, in that the analyst must create training sites before executing the classification. The classification is based on spectral signatures and the idea that similar features will exhibit similar spectral characteristics.

The bulk of this week's assignment was completed using tools in ERDAS Imagine. We examined two different methods in ERDAS for creating the training sites used to execute the classification. The selection of training sites depends heavily on the knowledge of the analyst, so careful examination of the sites selected and the pixels they encompass is crucial. Confusion in the classified image can occur if a training site includes spectral signatures from different features.

My map for this week is pictured below. We were directed to coordinates for each land class but had to determine on our own how much of the area, and which pixels, to include in each signature/training site we created. At times it was difficult to judge whether I was including too many pixels, or too few, for the feature I was building a signature for. Leaving out too many of the pixels that make up a feature can create confusion, much like including pixels that are not representative of the feature.

After reviewing histograms associated with the image, I chose a band combination of Red-4, Green-5, and Blue-3. Reviewing the histograms for each band shows which bands exhibit the least confusion between signatures: the histograms for the created signatures should display bell-shaped curves that are well separated from one another. Using the bands with the least confusion between signatures gives the best chance of classifying the image correctly.
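This check is done interactively in ERDAS, but the idea can be illustrated outside it. Below is a minimal sketch, assuming the training-site pixels for two signatures have already been extracted into NumPy arrays (the arrays and band values are hypothetical stand-ins, not my actual data).

```python
# Minimal sketch of the histogram separability check described above,
# assuming labeled training-site pixels are available as NumPy arrays.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical digital numbers for one band, one array per signature.
water_band4 = np.random.normal(loc=40, scale=5, size=500)
forest_band4 = np.random.normal(loc=90, scale=8, size=500)

# Overlay the two histograms: well-separated, roughly bell-shaped curves
# suggest this band discriminates the two signatures with little confusion.
plt.hist(water_band4, bins=30, alpha=0.5, label="water")
plt.hist(forest_band4, bins=30, alpha=0.5, label="forest")
plt.xlabel("Band 4 digital number")
plt.ylabel("Pixel count")
plt.legend()
plt.show()
```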

The inset map pictured is a Distance Image of the same classified area. The purpose of the distance image is to highlight areas that may have been classified incorrectly: areas that appear bright are likely to have been misclassified. While there are some bright areas in the distance file I created, the majority of the image is dark and classified correctly. The classification and distance image were produced using the Maximum Likelihood rule, which assigns each pixel to the class it has the highest probability of belonging to.
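To make the Maximum Likelihood idea concrete, here is a rough sketch of the underlying logic, not the ERDAS implementation. It assumes training pixels for each class are stacked as (n_pixels, n_bands) arrays, fits a normal distribution to each signature, and uses the negative log-likelihood of the winning class to play the role of the distance image (bright = poorly fit = possibly misclassified). All arrays and class values here are hypothetical.

```python
# Rough sketch of the maximum likelihood decision rule for illustration only.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class(pixels):
    """Return the mean vector and covariance matrix of one signature."""
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def classify(image_pixels, signatures):
    """Assign each pixel to the class with the highest likelihood and return
    a per-pixel 'distance' (negative log-likelihood of the winning class)."""
    scores = np.column_stack([
        multivariate_normal(mean, cov, allow_singular=True).logpdf(image_pixels)
        for mean, cov in signatures
    ])
    labels = scores.argmax(axis=1)
    distance = -scores.max(axis=1)
    return labels, distance

# Hypothetical 3-band training data for two classes and a tiny "image".
rng = np.random.default_rng(0)
forest = rng.normal([60, 90, 40], 5, size=(200, 3))
water = rng.normal([20, 30, 10], 3, size=(200, 3))
image = rng.normal([58, 88, 42], 6, size=(1000, 3))

labels, distance = classify(image, [fit_class(forest), fit_class(water)])
```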

Monday, November 6, 2017

Module 9: Unsupervised Classification


This week we focused on learning about Unsupervised Classification methods in ArcMap and ERDAS Imagine. Overall, digital image classification uses spectral information in different bands to attempt to identify features in an image. Unsupervised Classification relies on algorithms to classify data without the influence of an analyst. In contrast, Supervised Classification uses training areas set by the analyst to "supervise" the classification process.
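The clustering idea behind unsupervised classification can be sketched outside the GIS software. ERDAS typically uses ISODATA; the sketch below uses plain k-means from scikit-learn instead, purely to illustrate grouping pixels by spectral similarity with no analyst input. The image array is hypothetical.

```python
# Minimal sketch of unsupervised classification on a (rows, cols, bands) array.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(100, 100, 6)          # hypothetical 6-band image
pixels = image.reshape(-1, image.shape[-1])  # one row per pixel

# Cluster pixels into 50 spectral classes with no training data.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(pixels)
classified = kmeans.labels_.reshape(image.shape[:2])
```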

Since we used the Unsupervised method this week, once the process was complete we were required to examine the newly classified image and recode the features present. This entailed closely examining the image and changing the colors of the classes to accurately represent the features they cover (e.g. dark green for trees). We were also required to narrow the image down from fifty spectral classes to just five categories, as shown in the map.
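Conceptually, the recode step is just a lookup from spectral classes to informational categories. A small sketch of that step is below; the class-to-category assignments are placeholders, since in practice each assignment comes from visually inspecting the class against the true color image.

```python
# Sketch of recoding 50 spectral classes into 5 informational categories.
import numpy as np

classified = np.random.randint(0, 50, size=(100, 100))   # hypothetical 50-class result
recode_table = {cls: cls % 5 for cls in range(50)}        # placeholder assignments
lookup = np.array([recode_table[c] for c in range(50)])
recoded = lookup[classified]                              # five informational categories
```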

This task required reviewing the classified image against a true color band combination to ensure that features were being classified correctly. Several tools in ERDAS Imagine, such as slide and blend, make this easier, but I found that toggling the classified layer on and off while it was stacked on top of the true color image worked best for me.


Sunday, October 29, 2017

Module 8: Thermal Imagery

This week we worked on Module 8: Thermal Imagery. Learning points for the week included the thermal properties of terrain, how the time of day an image is taken affects thermal imagery, Wien's Law and the Stefan-Boltzmann Law, and finally some properties of thermal remote sensors.
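As a quick refresher on the two laws, here is a small worked example using standard constants for a surface at roughly ambient Earth temperature; the 300 K value is just an illustrative choice.

```python
# Worked example of Wien's Law and the Stefan-Boltzmann Law at ~300 K.
WIEN_B = 2898.0        # Wien's displacement constant, micrometers * K
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W / (m^2 * K^4)

T = 300.0              # surface temperature in kelvins

peak_wavelength = WIEN_B / T          # ~9.7 micrometers (thermal infrared)
radiant_exitance = SIGMA * T ** 4     # ~459 W per square meter

print(f"Peak emission: {peak_wavelength:.1f} um, exitance: {radiant_exitance:.0f} W/m^2")
```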

For the map this week we were required to create two composite images and then choose one of them to analyze and identify an area of interest. I chose to work with the image of coastal Ecuador created in ArcMap rather than the image of Pensacola, FL created in ERDAS Imagine (we learned how to use both programs to complete the task).

Below is my map and a brief explanation of how I identified and analyzed my chosen area of interest.

I started in ArcMap with the ETMcomposite.img and viewed the image with the Stretched renderer on Band 6 (the thermal band). In this view, there are patches in the mountainous region to the northwest of the image that exhibit higher temperature signatures. Switching to an RGB Composite (Red-1, Green-2, and Blue-3) and taking another look, there appears to be some deforestation in the mountains. I then switched the bands to Red-6, Green-4, and Blue-7 to figure out whether this was from logging or whether there are urban features in the mountains. The areas I noticed show up as red blurs rather than grey or blue, so the heat signatures appear to come from areas of bare land where logging could be occurring, not from urban development.
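The band combinations above are set through the symbology dialog in ArcMap, but the same false color composite can be assembled by hand. The sketch below shows the idea, with hypothetical arrays standing in for the actual Landsat bands.

```python
# Sketch of building the Red-6, Green-4, Blue-7 composite described above.
import numpy as np
import matplotlib.pyplot as plt

band6 = np.random.rand(100, 100)   # thermal (hypothetical stand-in)
band4 = np.random.rand(100, 100)   # near infrared
band7 = np.random.rand(100, 100)   # shortwave infrared

def stretch(band):
    """Simple min-max stretch to the 0-1 range for display."""
    return (band - band.min()) / (band.max() - band.min() + 1e-10)

composite = np.dstack([stretch(band6), stretch(band4), stretch(band7)])
plt.imshow(composite)
plt.show()
```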

Monday, October 23, 2017

Module 7: Spectral Enhancements and Band Indices

This week we continued learning about image preprocessing with a focus on spectral enhancement and band indices. Spectral enhancement uses the spectral properties of an image to improve interpretation or to bring a given feature of interest to the forefront. We mainly focused on spectral enhancements in ERDAS Imagine, but also learned that band combinations can be manipulated in ArcMap.
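A band index is just arithmetic between bands. NDVI is one widely used example (it was not necessarily the one used in this lab); a minimal sketch is below, assuming the red and near-infrared bands have already been read into arrays.

```python
# Minimal sketch of a band index (NDVI) on hypothetical band arrays.
import numpy as np

red = np.random.rand(100, 100)
nir = np.random.rand(100, 100)

# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense vegetation.
ndvi = (nir - red) / (nir + red + 1e-10)   # small epsilon avoids divide-by-zero
```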

After a few tutorials in ERDAS on how to preprocess images, we were given an image file and asked to use the techniques we had just learned to locate and enhance the appearance of certain features. We were given only a description of the spectral properties of the features and then asked to find them. Below are my maps for this week showing the three features we were asked to find. Additional information on the features and how I enhanced them is included on the maps.



Tuesday, October 17, 2017

Module 6: Image Enhancement



The topic of study this week has been image enhancement using ERDAS Imagine and ArcMap. We learned about the different tools available to enhance images, such as spatial enhancements (high pass and low pass filters, Fourier transformations) and spectral enhancements. After some background information and a few exercises, we were on our own to use ERDAS Imagine and ArcMap to produce the best image possible from a provided data file.

Seen in the map above is the image that was provided, taken by the Landsat 7 sensor. The challenge with this image was to reduce the impact of "striping" that has affected Landsat 7 imagery since the sensor's Scan Line Corrector failed in 2003. The first step was to use a Fourier transformation to reduce the effects of the striping, and then to apply filters to enhance the detail of the image.

The Fourier transformation was a bit tricky because the mask over the noise frequencies is drawn by hand in the Fourier editor, so running the process several times can produce different results. I tried it several times before I felt the striping was reduced enough to start working on bringing out the detail in the image. The first filter I applied was a 3x3 high pass filter (which lets high frequency data through to enhance edges), followed by a 3x3 sharpen filter. I continued to apply filters but felt that I was losing detail rather than gaining it, so I stopped at the high pass and sharpen filters.
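For reference, a 3x3 high pass filter is just a convolution with a small kernel. The sketch below shows the general idea on a single-band array; the exact kernels ERDAS uses for its high pass and sharpen filters may differ.

```python
# Sketch of what a 3x3 high pass filter does under the hood.
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(200, 200)   # hypothetical single-band image

# High pass kernel: emphasizes differences between a pixel and its neighbors,
# which sharpens edges and fine detail.
high_pass = np.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]], dtype=float)

sharpened = convolve(image, high_pass, mode="reflect")
```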

I next went to work adjusting the contrast using the Adjust Radiometry tool in ERDAS Imagine. I utilized the options found within this tool until I had produced what I felt was the best image possible. Though it is possible to run a few of these processes in ArcMap, I chose to work solely in ERDAS Imagine in order to become more familiar with the program.

Monday, October 2, 2017

Module 5a: Intro to Electromagnetic Radiation (EMR)

This week we conducted further study of Electromagnetic Radiation (EMR) and were introduced to ERDAS Imagine (EI). We also worked on understanding the calculations behind finding wavelength and frequency using Maxwell's Wave Theory, c = (wavelength)(frequency), with c representing the speed of light. Planck's Relation, Q = hv, where Q is the energy of a photon, h is Planck's constant, and v is frequency, was also studied to understand the amount of energy associated with different parts of the EMR spectrum.
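A small worked example of the two relations, using standard constants and a hypothetical wavelength of 550 nm (green visible light):

```python
# Worked example of c = (wavelength)(frequency) and Q = h * frequency.
C = 3.0e8          # speed of light, m/s
H = 6.626e-34      # Planck's constant, J*s

wavelength = 550e-9                 # meters (green light)
frequency = C / wavelength          # ~5.5e14 Hz
photon_energy = H * frequency       # ~3.6e-19 J

print(f"frequency = {frequency:.3e} Hz, photon energy = {photon_energy:.3e} J")
```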

After working through the equations associated with EMR, we were introduced to EI. The lab for the week entailed a walk-through of basic functions in EI: how to add images, save them, zoom in and out, add columns to the attribute table, etc. Following the walk-through, we began preparing the image used for this week's map.

For the map this week we were required to use both EI and ArcMap to produce a map showing the land classification of an area of forest in the Olympic National Forest, Washington. The image shown below was extracted from a larger image using EI. The area calculations shown for each land classification were also computed in EI prior to being mapped in ArcMap.

Monday, September 25, 2017

Module 4: LU/LC Classification Accuracy Assessment



This week we were tasked with using Ground Truthing techniques, specifically 'ex-situ' techniques, to assess the accuracy of our Land Use/Land Cover maps from last week. To assess our maps we were directed to use Google Maps Street View to try to determine whether our classifications were correct.

Using 30 sample points placed randomly on the map (while ensuring each classification received at least one sample point), I determined the overall accuracy of my map to be 67%. A few of the points surprised me by how far off my assessment was. For example, what I thought might be a small farm in the aerial view turned out to be a large rectangular church next to an empty field on Google Maps. Going back and comparing my work on the aerial photo to Google Maps was enlightening and made me realize what an art aerial photo interpretation really is. Using Google Maps to figure out what an object on the ground actually is remains fairly easy because of the resolution and the ability to see ground-level imagery, but relying solely on an aerial photo for identification is much more difficult.
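For completeness, overall accuracy here is simply the share of ground-truthed points whose Google Maps check agreed with my original classification; with 30 points, 20 correct works out to the roughly 67% reported above.

```python
# Sketch of the overall accuracy calculation from the accuracy assessment.
correct = 20   # sample points where the Google Maps check agreed with my map
total = 30     # total ground-truthed sample points

overall_accuracy = correct / total * 100
print(f"Overall accuracy: {overall_accuracy:.0f}%")   # -> 67%
```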

Considering that it is early in the semester and this has been my first attempt at interpreting an aerial photo, I'm happy/okay with my accuracy percentage for this assignment. It will be interesting to revisit this assignment later in the semester to see how much my skills have improved and maybe identify what features/interpretations led me astray in this assessment.