ENVS 421: GIS IV: Advanced GIS Applications

 

Lab 1 – Map Digitization and Georeferencing

 

Fenske_ENVS421_Lab1-1
For this map, we were required to recreate a historic map of plantations on the island of Madagascar. We did this by georeferencing an old static image of a similar map and creating a new feature dataset of polygon and point features digitized from the original map. We also created domains for our feature classes. We then used advanced cartographic techniques to create an old-style map and were instructed to adhere to the styles and symbology of the original map as much as possible.
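A brief sketch of the geodatabase setup portion of this workflow (feature dataset plus a coded-value domain) is below; the paths, names, crop codes, and spatial reference are all hypothetical, not the values from the lab.

```python
import arcpy

gdb = r"C:\Lab1\Madagascar.gdb"  # hypothetical geodatabase path

# Feature dataset to hold the digitized plantation features
arcpy.management.CreateFeatureDataset(gdb, "Plantations", arcpy.SpatialReference(4326))

# Coded-value domain for a plantation attribute (codes are illustrative)
arcpy.management.CreateDomain(gdb, "CropType", "Plantation crop type", "TEXT", "CODED")
arcpy.management.AddCodedValueToDomain(gdb, "CropType", "COFFEE", "Coffee")
arcpy.management.AddCodedValueToDomain(gdb, "CropType", "VANILLA", "Vanilla")
```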

Click here to view a larger image of this map

Click here to view the original image that this map is digitized from

 

Lab 2 – Remote Sensing in ArcGIS Pro

 

Fenske_Lab2_Map1-1
This lab focused on the use of Landsat 4 and 5 satellite imagery to conduct remote sensing analysis. I first calculated the Normalized Difference Vegetation Index (NDVI) for a region including parts of Whatcom, Skagit, and Island counties. To do this, I used the Raster Calculator tool with the expression: (Float("Near IR Band") - Float("Red Band")) / (Float("Near IR Band") + Float("Red Band"))
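The same calculation can be scripted with arcpy map algebra. This is a minimal sketch; the band file names are placeholders rather than the actual lab data.

```python
import arcpy
from arcpy.sa import Float, Raster

arcpy.CheckOutExtension("Spatial")

nir = Raster("landsat_b4.tif")  # near-infrared band (placeholder file name)
red = Raster("landsat_b3.tif")  # red band (placeholder file name)

# Same expression as in Raster Calculator: (NIR - Red) / (NIR + Red)
ndvi = (Float(nir) - Float(red)) / (Float(nir) + Float(red))
ndvi.save("ndvi_1998.tif")
```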

Click this link to view a larger image of this map

 

Fenske_Lab2_Map2_NDVIChange-1
After calculating NDVI for 1998 and 2008, I subtracted the 1998 NDVI raster from the 2008 NDVI raster to find the change in NDVI over that decade.
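The differencing step is a single map-algebra subtraction; a sketch under the same assumed file names:

```python
import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# Positive values indicate an NDVI increase, negative values a decrease (placeholder file names)
ndvi_change = Raster("ndvi_2008.tif") - Raster("ndvi_1998.tif")
ndvi_change.save("ndvi_change_1998_2008.tif")
```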

Click this link to view a larger image of this map

 

Lab 2 Practical Exam

Fenske_Lab2_Map3-1
I then conducted a supervised classification of Bellingham to determine basic land cover types within the city limits. To do this, I used the Image Classification toolbar in ArcMap to create representative training samples of each cover type based on false color Landsat images. The result is shown in the map above.
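A scripted stand-in for the toolbar workflow might look like the sketch below, assuming a .gsg signature file had been exported from the training samples (the file names are hypothetical).

```python
import arcpy
from arcpy.sa import MLClassify

arcpy.CheckOutExtension("Spatial")

# Maximum likelihood classification from exported training-sample signatures
classified = MLClassify("bellingham_landsat.tif", "training_samples.gsg")
classified.save("bellingham_landcover.tif")
```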

Click here to view a larger image of this map

 

Lab 3 – Watershed Analysis

In this lab, we used elevation data to model where streams should be (based on what we know about the topography) and delineate sub-watersheds and catchments. My task was to produce a Model Builder model that other analysts can easily run using their own data, along with a guide to using the model and interpreting the output files. In the last section of the lab, I modeled connectivity along the streams using a geometric network dataset.

Fenske_Lab3_Map1
This is an aspect map of the South Fork Nooksack Watershed. It gives a general sense of how this mountainous territory is shaped: each color represents the cardinal direction a slope faces, based on the Aspect tool's 360° range.
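A minimal arcpy sketch of the aspect step; the DEM and output raster names here are placeholders.

```python
import arcpy
from arcpy.sa import Aspect

arcpy.CheckOutExtension("Spatial")

# 0-360 degree aspect of each cell; flat cells are assigned -1
aspect = Aspect("SFN_dem")
aspect.save("SFN_aspect")
```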

Click this link to view a larger image of this map

 

Fenske_Lab3_Map2
I used the flow direction grid to compute the accumulated number of cells draining to each cell in the DEM; this is known as flow accumulation. This map was the output of the Model Builder model shown below. Before doing this, I had to use the Fill tool to fill sinks and remove raster inconsistencies in the DEM.
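The fill / flow direction / flow accumulation chain can be sketched in arcpy as follows; the DEM name is assumed, and the output names follow those used later in this lab.

```python
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation

arcpy.CheckOutExtension("Spatial")

filled = Fill("SFN_dem")               # fill sinks / raster inconsistencies
flow_dir = FlowDirection(filled)       # D8 flow direction grid
flow_dir.save("SFN_flow")
flow_acc = FlowAccumulation(flow_dir)  # upstream cells draining to each cell
flow_acc.save("SFN_acc")
```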

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder1

Click this link to view the Model Builder image

 

Fenske_Lab3_Map3
This map was made using the Stream Link tool. The Stream Link tool assigns unique values to sections of a raster linear network between intersections. Links are the sections of a stream channel connecting two successive junctions, a junction and the outlet, or a junction and the drainage divide.
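A hedged sketch of deriving a stream raster and stream links; the flow-accumulation threshold of 1000 cells is an illustrative assumption, not the value used in the lab.

```python
import arcpy
from arcpy.sa import Con, Raster, StreamLink

arcpy.CheckOutExtension("Spatial")

# Treat cells with more than ~1000 upstream cells as stream channel (illustrative threshold)
streams = Con(Raster("SFN_acc") > 1000, 1)
streams.save("SFN_streams")

# Assign a unique value to each stream segment between junctions
links = StreamLink(streams, "SFN_flow")
links.save("StreamLink")
```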

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder2

Click this link to view the Model Builder image

 

Fenske_Lab3_Map4
This is a map showing Strahler stream orders in the South Fork Nooksack Watershed. Unfortunately, at the scale I was working at, and because the streams were raster, it was difficult to differentiate the colors of the stream orders, which is why I inserted a detail map. I used the Stream Order tool, with the Strahler method, to create the stream orders.
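A minimal sketch of the ordering step, assuming the stream raster and flow direction raster from the previous sketches:

```python
import arcpy
from arcpy.sa import StreamOrder

arcpy.CheckOutExtension("Spatial")

# Strahler ordering of the stream raster using the flow direction grid
orders = StreamOrder("SFN_streams", "SFN_flow", "STRAHLER")
orders.save("SFN_strahler")
```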
350px-Flussordnung_(Strahler).svg
Here’s an example of how Strahler stream orders work.

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder3
This is the end of the Model Builder model used to create the above map.

Click this link to view the Model Builder image

 

Fenske_Lab3_Map5
This map of flow length was created using the next step in the Model Builder model. I calculated flow length using the Flow Length tool, with SFN_flow as my flow direction raster and "DOWNSTREAM" as the direction of measurement. This measures the distance along the flow path from each cell to the most downstream cell in the DEM.
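A sketch of this step in arcpy, using the flow direction raster name from the lab text (the output name is assumed):

```python
import arcpy
from arcpy.sa import FlowLength

arcpy.CheckOutExtension("Spatial")

# Downstream distance from each cell to the most downstream cell in the DEM
flow_len = FlowLength("SFN_flow", "DOWNSTREAM")
flow_len.save("SFN_flowlen")
```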

Click this link to view a larger image of this map

 

Fenske_Lab3_Map6
Before I made this map, I went into the attribute table of my SFN_Boundary shapefile and deleted all of the streams and boundaries that extended outside of the South Fork Nooksack Watershed. I then clipped the streams to my new boundary shapefile. After this, I created a point feature class in my geodatabase and placed a point on the stream-line raster downstream of the confluence of any two Order 4 or higher streams. I then used the Snap Pour Point Spatial Analyst tool, which snaps pour points to the cell of highest flow accumulation within a specified distance. I used the outlet points feature class I had made as the input pour point data, SFN_acc as the flow accumulation raster, and 30 m as the snap distance; the output was called "snap_pt." I then opened the Watershed Spatial Analyst tool, used SFN_flow as the flow direction input and "snap_pt" as the input pour raster, and called the output "Subwatershed."
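A condensed arcpy sketch of the snap pour point and watershed steps, using the names given in the text (the "Outlets" feature class name is an assumption):

```python
import arcpy
from arcpy.sa import SnapPourPoint, Watershed

arcpy.CheckOutExtension("Spatial")

# Snap the manually placed outlet points to the highest flow accumulation within 30 m
snap_pt = SnapPourPoint("Outlets", "SFN_acc", 30)
snap_pt.save("snap_pt")

# Delineate the sub-watershed draining to each snapped pour point
subwatersheds = Watershed("SFN_flow", snap_pt)
subwatersheds.save("Subwatershed")
```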

Click this link to view a larger image of this map

 

Fenske_Lab3_Map7
To go from the map above to this one, I opened the Watershed Spatial Analyst tool again, using my flow direction raster SFN_flow and the StreamLink raster as the pour point data. This map shows catchments: a catchment is any area of land where precipitation collects and drains off into a common outlet, such as a river, bay, or other body of water. The drainage basin includes all of the surface water from rain runoff, snowmelt, and nearby streams that run downslope toward the shared outlet, as well as the groundwater beneath the earth's surface.
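Under the same assumptions, the catchment step amounts to running the Watershed tool with the stream-link raster as the pour point data (the output name is assumed):

```python
import arcpy
from arcpy.sa import Watershed

arcpy.CheckOutExtension("Spatial")

# One catchment per stream link, using the link raster as the pour point data
catchments = Watershed("SFN_flow", "StreamLink")
catchments.save("SFN_catchments")
```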

Click this link to view a larger image of this map

 

Lab 4 – Watershed Crossings Assessment

 

Forest roads potentially impact freshwater fish habitat through the rerouting of water, the triggering of shallow rapid landslides, and the delivery of fine sediments to streams. Fine sediments can wash down from road-impacted streams into fish spawning streams and clog the coarser spawning-ground substrates, severely impacting the survival of incubating fish. Additionally, fine sediments can fill in rearing pools, resulting in shallower depths, higher water temperatures, and less habitat area for juvenile rearing in the summer. These impacts make forest roads a potential factor limiting fish production throughout the Pacific Northwest.

Forest road density and fish stream crossing density are two metrics that can serve as proxies for the potential of forest roads to impact streams. More fine sediment is produced as road densities increase, and there is a higher probability of road sediment entering streams as the number of road/stream crossings increases.

In this lab I completed a GIS assessment of potential forest road conditions in the Stillaguamish River watershed, its sub-watersheds, and its catchments. Specifically, I calculated road density as miles of forest road per square mile of area, and road crossing density as the number of road crossings per mile of fish-bearing stream, for the watershed, sub-watersheds, and catchments. Upon completion, I created an ArcGIS Story Map to tell the story of my GIS assessment of potential forest road impacts in the Stillaguamish watershed.
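A rough arcpy sketch of the road-density piece of the assessment; the layer names, field names, and sub-watershed ID field are all assumptions.

```python
import arcpy

# Clip forest roads to each sub-watershed polygon (layer names are assumed)
arcpy.analysis.Intersect(["forest_roads", "subwatersheds"], "roads_by_sws")

# Geodesic road length in miles for each clipped segment
arcpy.management.AddField("roads_by_sws", "road_mi", "DOUBLE")
arcpy.management.CalculateGeometryAttributes(
    "roads_by_sws", [["road_mi", "LENGTH_GEODESIC"]], "MILES_US")

# Sum road miles per sub-watershed; dividing by sub-watershed area (sq mi) gives road density.
# Crossing density works the same way, starting from an Intersect of roads and fish-bearing
# streams with a POINT output type.
arcpy.analysis.Statistics("roads_by_sws", "road_mi_by_sws", [["road_mi", "SUM"]], "SWS_ID")
```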

Click this link to view my ESRI Story Map

 

Lab 5 – Primary Data Collection

 

The objective for this lab was to design both a data collection system and a data storage system for primary spatial data collection. My professor provided each student in the class with 15 survey point locations that were selected randomly across Western Washington University’s campus.  We were to visit each of the survey points and collect data on land cover, moisture, noise, and aesthetics. Much of the data was highly variable and dependent on the individual collecting it. 

We were tasked with creating a field data collection form that allowed us to efficiently collect data on paper/Excel/Collector for each category. The points were given to us in an Excel file, and we had to import them into ArcGIS in order to create a field map. Once we created the field map, we would visit the points, collect the data, and then enter the data into our collection form and into our attribute table.
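A minimal sketch of the Excel-to-points import step, with hypothetical file, sheet, and coordinate field names:

```python
import arcpy

# Hypothetical spreadsheet path, sheet name, and coordinate fields
in_table = r"C:\Lab5\survey_points.xlsx\Points$"
arcpy.management.XYTableToPoint(
    in_table, r"C:\Lab5\Lab5.gdb\SurveyPoints",
    x_field="Longitude", y_field="Latitude",
    coordinate_system=arcpy.SpatialReference(4326))  # WGS 84 assumed
```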

Fenske_Lab5_Map1-1

Click this link to view a larger image of this map

Fenske_Lab5_Map2-1

Click this link to view a larger image of this map

This exercise was the first part of a geostatistical analysis that you will see in my Lab 6 post, where we aggregate the entire class's data to map two variables: campus safety and campus noise.

Lab 6 – Data Management and Information Creation

 

This lab is a continuation of Lab 5, "Primary Data Collection." My objective was to compile all of the point feature classes collected by my class into a single point feature class in a single database and to create interpolated surfaces for noise and safety from the combined points. We were all given 32 geodatabases containing 32 feature classes of 15 survey points each, with matching attributes. The goal was to populate a geodatabase with an empty feature class that would serve as the container for the final combined 480 points. The geodatabase and empty feature class had the same domains and attributes that I used in Lab 5.

The best way I found to get the data out of each of the 32 individual geodatabases was an iterator in Model Builder. I used the "Iterate Feature Classes" tool and loaded the folder the 32 geodatabases were stored in into the iterator. The iterator visits each of the 32 geodatabases inside the folder and looks for feature classes inside each geodatabase. I then added the "Append" tool to my model. If each of my classmates had designed their geodatabases and feature classes exactly as directed in the Geodatabase section of Lab 5, then all of the feature classes would append perfectly, points and attributes, into the target feature class. Any attribute that was not formatted correctly would be Null in the target feature class; the points would still append, but the cells for that attribute would show Null values.
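A Python equivalent of the iterator-plus-Append model might look like the sketch below; the paths and feature class names are placeholders.

```python
import os
import arcpy

source_folder = r"C:\Lab6\class_gdbs"             # folder holding the 32 geodatabases (assumed path)
target_fc = r"C:\Lab6\Combined.gdb\SurveyPoints"  # empty feature class with the Lab 5 schema

# Visit every feature class in every geodatabase under the folder and append it to the target
for dirpath, dirnames, filenames in arcpy.da.Walk(source_folder, datatype="FeatureClass"):
    for name in filenames:
        arcpy.management.Append(os.path.join(dirpath, name), target_fc, "NO_TEST")
```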

After cleaning up some of the data that was incorrectly formatted, all of the attributes were populated in the final feature class of points. I was then tasked with researching different interpolation methods in the Geostatistical Analyst toolbox. After deciding which interpolation method to use, I made two maps: one of the interpolated noise surface and one of the interpolated safety surface, both of which, again, were highly variable and dependent on the individual who collected the data.
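As one illustration of an interpolation step (Spatial Analyst IDW shown here as a stand-in for whichever Geostatistical Analyst method was actually chosen; the field names are assumed):

```python
import arcpy
from arcpy.sa import Idw

arcpy.CheckOutExtension("Spatial")

# Inverse distance weighted surfaces from the combined points (field names assumed)
noise_surface = Idw("SurveyPoints", "Noise")
noise_surface.save("noise_idw")
safety_surface = Idw("SurveyPoints", "Safety")
safety_surface.save("safety_idw")
```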

 

Fenske_Lab6_NoiseMap-1

Click this link to view a larger image of this map

Fenske_Lab6_SafetyMap-1

Click this link to view a larger image of this map

 

 
