Portfolio

ENVS 421 – Advanced GIS Applications

ENVS 421: GIS IV: Advanced GIS Applications

 

Lab 1 – Map Digitization and Georeferencing

 

Fenske_ENVS421_Lab1-1
For this map, we were required to recreate a historic map of plantations on the island of Madagascar. We did this by georeferencing an old static image of a similar map and creating a new feature dataset based on polygon and point features digitized from the original map. We also created domains for our feature classes. We then used advanced cartographic techniques to create an old-style map, and we were instructed to adhere to the styles and symbology of the original map as much as possible.

Click here to view a larger image of this map

Click here to view the original image that this map is digitized from

 

Lab 2 – Remote Sensing in ArcGIS Pro

 

Fenske_Lab2_Map1-1
This lab focused on the use of Landsat 4 and 5 satellite imagery to conduct remote sensing analysis. I first calculated the Normalized Difference Vegetation Index (NDVI) for a region including parts of Whatcom, Skagit, and Island counties. To do this, I used the Raster Calculator tool with the expression: ((Float("Near IR Band") - Float("Red Band")) / (Float("Near IR Band") + Float("Red Band")))
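A minimal arcpy sketch of the same NDVI calculation, assuming the two Landsat bands have been exported as standalone rasters (the file names below are hypothetical):

```python
import arcpy
from arcpy.sa import Raster, Float

arcpy.CheckOutExtension("Spatial")

nir = Raster("nir_band.tif")   # Landsat TM band 4 (near infrared)
red = Raster("red_band.tif")   # Landsat TM band 3 (red)

# NDVI = (NIR - Red) / (NIR + Red); Float() forces floating-point division
ndvi = (Float(nir) - Float(red)) / (Float(nir) + Float(red))
ndvi.save("ndvi_1998.tif")
```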

Click this link to view a larger image of this map

 

Fenske_Lab2_Map2_NDVIChange-1
After calculating NDVI for 1998 and 2008, I subtracted the 1998 NDVI raster from the 2008 NDVI raster to find change in NDVI over that particular decade.
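The change calculation itself is a single map-algebra subtraction; a sketch, assuming both NDVI rasters were saved under the hypothetical names below:

```python
from arcpy.sa import Raster

ndvi_change = Raster("ndvi_2008.tif") - Raster("ndvi_1998.tif")
ndvi_change.save("ndvi_change_1998_2008.tif")
```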

Click this link to view a larger image of this map

 

Lab 2 Practical Exam

Fenske_Lab2_Map3-1
I then conducted a supervised classification of Bellingham to determine basic land cover types within the city limits. To do this, I used the Image Classification toolbar in ArcMap to create representative samples of each cover type based on false-color Landsat images; the resulting classification is shown in the map.
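The classification itself was done interactively with the Image Classification toolbar; a rough scripted equivalent builds a signature file from training polygons and then runs a maximum likelihood classifier (the raster, feature class, and field names below are hypothetical):

```python
import arcpy
from arcpy.sa import CreateSignatures, MLClassify

arcpy.CheckOutExtension("Spatial")

# build spectral signatures from digitized training samples
CreateSignatures("landsat_composite.tif", "training_samples.shp",
                 "bham_landcover.gsg", "COVARIANCE", "Classname")

# assign every cell to the most likely land cover class
classified = MLClassify("landsat_composite.tif", "bham_landcover.gsg")
classified.save("bham_landcover_classified.tif")
```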

Click here to view a larger image of this map

 

Lab 3 – Watershed Analysis

In this lab, we used elevation data to model where streams should be (based on what we know about the topography) and delineate sub-watersheds and catchments. My task was to produce a Model Builder model that other analysts can easily run using their own data, along with a guide to using the model and interpreting the output files. In the last section of the lab, I modeled connectivity along the streams using a geometric network dataset.

Fenske_Lab3_Map1
This is an aspect map of the South Fork Nooksack Watershed. It gives us a general sense of how this mountainous territory is shaped: each color represents the cardinal direction a slope is facing, based on the Aspect tool's 360° range.
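The aspect surface itself comes from a single Spatial Analyst call; a sketch, assuming the DEM is stored as "SFN_dem" (a hypothetical name):

```python
from arcpy.sa import Aspect

aspect = Aspect("SFN_dem")   # 0-360 degree aspect, with -1 for flat cells
aspect.save("SFN_aspect")
```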

Click this link to view a larger image of this map

 

Fenske_Lab3_Map2
I used the flow direction grid to compute the accumulated number of cells draining to any particular cell in the DEM; this is known as flow accumulation. This map was the output of the Model Builder model shown below. Before doing this, I had to use the Fill tool to fill sinks and other raster inconsistencies.
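A scripted sketch of the same sequence (fill, flow direction, flow accumulation), assuming the DEM is "SFN_dem" (hypothetical); "SFN_flow" and "SFN_acc" match the raster names referenced later in this lab:

```python
from arcpy.sa import Fill, FlowDirection, FlowAccumulation

filled = Fill("SFN_dem")                # remove sinks / raster inconsistencies
flow_dir = FlowDirection(filled)        # D8 flow direction grid
flow_dir.save("SFN_flow")

flow_acc = FlowAccumulation(flow_dir)   # number of upstream cells draining to each cell
flow_acc.save("SFN_acc")
```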

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder1

Click this link to view the Model Builder image

 

Fenske_Lab3_Map3
This map was made using the Stream Link tool. The Stream Link tool assigns unique values to sections of a raster linear network between intersections. Links are the sections of a stream channel connecting two successive junctions, a junction and the outlet, or a junction and the drainage divide.
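A sketch of the Stream Link step, assuming a stream raster was first derived from the flow accumulation grid using a hypothetical 1,000-cell threshold:

```python
from arcpy.sa import Con, Raster, StreamLink

streams = Con(Raster("SFN_acc") > 1000, 1)   # keep cells with enough contributing area
streams.save("SFN_streams")

links = StreamLink(streams, "SFN_flow")      # unique value for each reach between junctions
links.save("SFN_streamlink")
```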

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder2

Click this link to view the Model Builder image

 

Fenske_Lab3_Map4
This is a map showing Strahler stream orders in the South Fork Nooksack Watershed. Unfortunately, at the scale I was working at, and because the streams were raster, it was difficult to differentiate the colors of the stream orders, which is why I inserted a detail map. I used the Stream Order tool to assign the stream orders.
350px-Flussordnung_(Strahler).svg
Here’s an example of how Strahler stream orders work.
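A sketch of the stream-order step, reusing the stream and flow direction rasters from the earlier steps (names are the hypothetical ones used above):

```python
from arcpy.sa import StreamOrder

order = StreamOrder("SFN_streams", "SFN_flow", "STRAHLER")
order.save("SFN_strahler")
```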

Click this link to view a larger image of this map

 

Fenske_Lab3_ModelBuilder3
This is the end of the Model Builder model used to create the above map.

Click this link to view the Model Builder image

 

Fenske_Lab3_Map5
This map of flow length was created using the next step in the Model Builder model below. For this map I calculated flow length using the "Flow Length" tool, with SFN_flow as my flow direction raster and "DOWNSTREAM" as the direction of measurement. This measures the distance from each cell to the most downstream cell in the DEM.
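A sketch of that step, using the SFN_flow direction raster named above:

```python
from arcpy.sa import FlowLength

flow_len = FlowLength("SFN_flow", "DOWNSTREAM")   # distance to the most downstream cell
flow_len.save("SFN_flowlength")
```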

Click this link to view a larger image of this map

 

Fenske_Lab3_Map6
Before I made this map, I went into the attribute table of my SFN_Boundary shapefile and deleted all of the streams and boundaries that extended outside of the South Fork Nooksack Watershed, then clipped the streams to my new boundary shapefile. After this I made a point feature class in my geodatabase and placed a point on the stream raster just downstream of the confluence of any two Order 4 or higher streams. I then used the "Snap Pour Point" Spatial Analyst tool, which snaps pour points to the cell of highest flow accumulation within a specified distance. I used the outlet points feature class I had made as the input pour point data, SFN_acc as the input flow accumulation raster, and 30 m as the snap distance, and called the output "snap_pt." I then opened the "Watershed" Spatial Analyst tool, used SFN_flow as my flow direction input and the newly created "snap_pt" as the input pour point raster, and called the output "Subwatershed."
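A sketch of the pour-point snapping and sub-watershed delineation described above, assuming the outlet point feature class is named "Outlet_points" (the other names follow the text):

```python
from arcpy.sa import SnapPourPoint, Watershed

# snap each outlet point to the highest-accumulation cell within 30 m
snap_pt = SnapPourPoint("Outlet_points", "SFN_acc", 30)
snap_pt.save("snap_pt")

# delineate the sub-watershed draining to each snapped pour point
subwatershed = Watershed("SFN_flow", snap_pt)
subwatershed.save("Subwatershed")
```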

Click this link to view a larger image of this map

 

Fenske_Lab3_Map7
To go from the above map to this one, I opened the "Watershed" Spatial Analyst tool again, using my flow direction raster SFN_flow and the StreamLink output as the pour point data. This map shows catchments: a catchment is any area of land where precipitation collects and drains off into a common outlet, such as a river, bay, or other body of water. The drainage basin includes all the surface water from rain runoff, snowmelt, and nearby streams that run downslope toward the shared outlet, as well as the groundwater beneath the earth's surface.
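A sketch of the catchment step: the stream link raster serves as the pour point data, so one catchment is delineated per stream link (names are the hypothetical ones from earlier).

```python
from arcpy.sa import Watershed

catchments = Watershed("SFN_flow", "SFN_streamlink")
catchments.save("SFN_catchments")
```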

Click this link to view a larger image of this map

 

Lab 4 – Watershed Crossings Assessment

 

Forest roads potentially impact freshwater fish habitat through the rerouting of water, the triggering of shallow rapid landslides, and the delivery of fine sediments to streams. Fine sediments can wash down from road-impacted streams into fish spawning streams and clog the coarser spawning ground substrates, severely impacting the survival of incubating fish. Additionally, fine sediments can fill in rearing pools, resulting in shallower depths, higher water temperatures, and less habitat area for juvenile rearing in the summer. These impacts make forest roads a potential factor limiting fish production throughout the Pacific Northwest.

Forest road density and fish stream crossing density are two metrics that can serve as proxies of the potential for forest roads to impact streams. More fine sediment is produced as road densities increase, and there is a higher probability of road sediment entering streams as road/stream crossings increase.

In this lab I completed a GIS assessment of potential forest road conditions in the Stillaguamish River watershed, its sub-watersheds, and its catchments. Specifically, I calculated road density as miles of forest road per square mile of area, and road crossing density as the number of road crossings per mile of fish-bearing stream, for the watershed, sub-watersheds, and catchments. Upon completion I created an ArcGIS Story Map to tell the story of my GIS assessment of potential forest road impacts in the Stillaguamish watershed.
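A rough sketch of how the two density metrics could be computed for a single watershed polygon, assuming clipped road and fish-stream layers plus a crossings point layer from intersecting roads with streams (all names are hypothetical, and lengths are assumed to be in meters from a projected coordinate system):

```python
import arcpy

def miles(feature_class, length_field="Shape_Length"):
    """Sum feature length (meters) and convert to miles."""
    total_m = sum(row[0] for row in arcpy.da.SearchCursor(feature_class, [length_field]))
    return total_m / 1609.34

road_miles = miles("roads_in_watershed")
stream_miles = miles("fish_streams_in_watershed")
crossings = int(arcpy.management.GetCount("crossings_in_watershed")[0])
area_sq_mi = 55.0   # watershed area in square miles (placeholder value)

road_density = road_miles / area_sq_mi          # miles of road per square mile
crossing_density = crossings / stream_miles     # crossings per mile of fish-bearing stream

print(f"Road density: {road_density:.2f} mi/sq mi")
print(f"Crossing density: {crossing_density:.2f} crossings per stream mile")
```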

Click this link to view my ESRI Story Map

 

Lab 5 – Primary Data Collection

 

The objective for this lab was to design both a data collection system and a data storage system for primary spatial data collection. My professor provided each student in the class with 15 survey point locations that were selected randomly across Western Washington University’s campus.  We were to visit each of the survey points and collect data on land cover, moisture, noise, and aesthetics. Much of the data was highly variable and dependent on the individual collecting it. 

We were tasked with creating a field data collection form that allowed us to efficiently collect data on paper/Excel/Collector for each category. The points were given to us in an Excel file and we had to import them into Arc in order to create a field map. Once we created the field map, we would visit the points, collect the data, and then enter the data into our collection form and into our attribute table.

Fenske_Lab5_Map1-1

Click this link to view a larger image of this map

Fenske_Lab5_Map2-1

Click this link to view a larger image of this map

This exercise was the first part of a geostatistical analysis that you will see in my Lab 6 post, where we aggregated the entire class's data to map two variables: campus safety and campus noise.

Lab 6 – Data Management and Information Creation

 

This lab is a continuation of Lab 5, "Primary Data Collection." My objective was to compile all of the point feature classes collected by my class into a single point feature class in a single geodatabase and to create interpolated surfaces from the combined points for noise and safety. We were all given 32 geodatabases containing 32 feature classes, each with 15 survey points and matching attributes. The goal was to populate a geodatabase with an empty feature class that would serve as the container for the final combined 480 points. The geodatabase and empty feature class had the exact same domains and attributes that I used in Lab 5.

The best way I found to get the data out of each of the 32 individual geodatabases was an iterator in Model Builder. I used the "Iterate Feature Classes" tool and loaded the folder containing the 32 geodatabases into the iterator. The iterator visits each geodatabase in the folder and looks for feature classes inside it. I then added the "Append" tool to my model. If each of my classmates had designed their geodatabase and feature class exactly as directed in the geodatabase section of Lab 5, then all of the feature classes would append perfectly, points and attributes alike, into the feature class I was appending to. Any attribute that was not formatted correctly comes through as Null in the target feature class: the points still append, but you see Null values in the cells for that particular attribute.
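A scripted equivalent of that iterator, assuming the 32 student geodatabases sit in one folder and the empty target feature class is "CombinedPoints" in Master.gdb (paths and names are hypothetical):

```python
import glob
import os
import arcpy

folder = r"C:\Lab6\StudentGDBs"
target = r"C:\Lab6\Master.gdb\CombinedPoints"

for gdb in glob.glob(os.path.join(folder, "*.gdb")):    # each student geodatabase
    arcpy.env.workspace = gdb
    for fc in arcpy.ListFeatureClasses("", "Point"):    # survey point feature classes
        # NO_TEST appends even when schemas don't match exactly; attributes that
        # don't line up come through as Null, as described above
        arcpy.management.Append(fc, target, "NO_TEST")
```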

After cleaning up some of the incorrectly formatted data, all of the attributes were populated in the final feature class of points. I was then tasked with researching different interpolation methods in the Geostatistical Analyst toolbox. After deciding which interpolation method to use, I made two maps: one of a noise surface and one of a safety surface, both of which, again, were highly variable and dependent on the individual who collected the data.
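A sketch of one possible interpolation call, using IDW from Spatial Analyst purely as an example method (not necessarily the one I settled on); the feature class and field names are hypothetical:

```python
import arcpy
from arcpy.sa import Idw

arcpy.CheckOutExtension("Spatial")

noise_surface = Idw("CombinedPoints", "Noise")    # inverse distance weighted surface
noise_surface.save("noise_idw")

safety_surface = Idw("CombinedPoints", "Safety")
safety_surface.save("safety_idw")
```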

 

Fenske_Lab6_NoiseMap-1

Click this link to view a larger image of this map

Fenske_Lab6_SafetyMap-1

Click this link to view a larger image of this map

 

 

ENVS 420 – Analysis and Modeling

Environmental Studies 420: GIS III: Analysis and Modeling

 

Lab 1 – Data Management and Model Builder

 

00001
For this assignment, we were required to construct a model using ModelBuilder to create a folder directory for our project. After creating the folder directory, we were given stream and road data for Whatcom County, Washington and were instructed to determine the number of stream and road crossings per Watershed Administrative Unit in the county. This project identifies points where streams and roads intersect within Whatcom County watersheds. Restoration efforts are being made at these points to reduce the amount of automobile runoff entering Whatcom watersheds, and restoration projects will be prioritized in the watersheds with the highest frequency of intersections. The first step of the analysis was to select Whatcom County out of the Washington counties dataset. The Clip tool was then used to identify Washington watersheds falling within Whatcom County. The next step was identifying point locations where roads and streams intersect; both line and polygon intersections were accounted for. The Identity and Frequency geoprocessing tools were used to create an output table containing the number of intersections in each watershed. Shown in the map below are Whatcom County watersheds symbolized by the number of stream-road intersections. The watersheds with the highest number of intersections are: Maple Creek – North Fork Nooksack River, Hedrick Creek – North Fork Nooksack River, Canyon Creek, Dakota Creek – Frontal Drayton Harbor, Lake Whatcom and Skookum Creek.
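A scripted sketch of the crossing count, assuming layers named "wa_counties", "watershed_units", "streams", and "roads", and a "WAU_ID" field on the watershed units (all hypothetical names):

```python
import arcpy

# 1. isolate Whatcom County and clip the watershed administrative units to it
arcpy.analysis.Select("wa_counties", "whatcom", "NAME = 'Whatcom'")
arcpy.analysis.Clip("watershed_units", "whatcom", "whatcom_waus")

# 2. find the points where streams and roads cross
arcpy.analysis.Intersect(["streams", "roads"], "crossings", "ALL", "", "POINT")

# 3. tag each crossing with the WAU it falls in, then count crossings per WAU
arcpy.analysis.Identity("crossings", "whatcom_waus", "crossings_wau")
arcpy.analysis.Frequency("crossings_wau", "crossings_per_wau", ["WAU_ID"])
```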

Click this link to view a larger image of this map

Lab 1 Practical Exam

Lab1_TakeHome

Fenske_ModelBuilder_Annotation
After each in-class lab we are presented with a practical exam in which we apply what we learned in lab that week in a take-home exam. For this practical exam, we received instructions to determine the walkability of neighborhoods in Bellingham, Washington using the number of street intersections per neighborhood as our index. To run this analysis, I created a model in ModelBuilder and used the Clip, Intersect, Identity, and Frequency tools to determine the number of street intersections per neighborhood. I then created a choropleth map symbolized using graduated colors to show the number of street intersections per neighborhood cartographically.

Click this link to view a larger image of the map

Click this link to view the Model Builder image

Lab 2 – Coordinate Systems & Map Projections

 

Lab2_MapB
Focusing specifically on projections, this lab involved exercises in choosing a correct projection for a given task, determining what projection a given dataset was in, changing a dataset's projection using the Project tool, defining a dataset's projection using the Define Projection tool, and creating a custom projection for a custom task. To model ashfall risk from a potential Yellowstone Caldera eruption, we modified the standard parallels and central meridian of a USA Contiguous Equidistant Conic projection in order to maximize the accuracy of distances in the Northwest US region. The United States Geological Survey (USGS) has determined there is potential for 1.5 centimeters of ash fall within 500 kilometers of the Yellowstone Caldera. In the event of a Yellowstone Caldera eruption, ash would cover a much greater area of the United States, though we are interested in the areas at highest risk. Shown below is a map depicting the towns that fall within 500 kilometers of the eruption site. To begin the analysis, it was important to identify issues of projection in the data given. Manually editing the USA Contiguous Equidistant Conic projection to better fit our area of study was the first step: the central meridian and latitude of origin were changed to the coordinates of Yellowstone, with the standard parallels two degrees north and south of Yellowstone. Editing these parameters minimized distortion at the location of interest. To re-project all data layers, the 'Iterate Feature Classes', 'If Coordinate System Is', and 'Project' geoprocessing tools were used. The 'If Coordinate System Is' tool read through each file in our geodatabase to identify which data needed to be re-projected; if a dataset was not correctly projected, it fed into the 'Project' tool, changing it to our custom projection. With the analysis complete, 35 towns were identified within the 500-kilometer buffer around the Yellowstone Caldera.
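A scripted sketch of the conditional re-projection, assuming the custom equidistant conic has been saved out as a .prj file and the layers live in one geodatabase (paths are hypothetical):

```python
import arcpy

custom_sr = arcpy.SpatialReference(r"C:\Lab2\Yellowstone_EquidistantConic.prj")

arcpy.env.workspace = r"C:\Lab2\Lab2.gdb"
for fc in arcpy.ListFeatureClasses():
    sr = arcpy.Describe(fc).spatialReference
    if sr.name != custom_sr.name:                      # the "If Coordinate System Is" check
        arcpy.management.Project(fc, fc + "_proj", custom_sr)
```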

Click this link to view a larger image of this map

 

Lab 2 Practical Exam

Lab2_TakeHome
Our task for this practical exam was to create a map showing the closest National Park or Preserve to any given area in the U.S. We were given all of the data necessary to run this analysis, but there were issues with the projections of the datasets that we had to sort out. This included redefining the projection for a dataset whose projection had been defined incorrectly, figuring out the projection of a dataset that had lost its .prj file, and then reprojecting all the data so that it was in an appropriate projection for our analysis. I then used the Thiessen Polygon tool to find the areas closest to each National Park and Preserve in the United States, clipped the Thiessen polygons to a US shapefile, and created an annotation feature class for all labeling. This was one of the most enjoyable exams for me; I had a lot of fun with the cartography side of this project and with the Thiessen Polygon tool.
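A sketch of the proximity analysis, assuming a "national_parks" point layer and a "us_boundary" polygon layer (hypothetical names), both already in an appropriate projection:

```python
import arcpy

# one Thiessen polygon per park: every location inside a polygon is closer
# to that park than to any other park
arcpy.analysis.CreateThiessenPolygons("national_parks", "park_thiessen", "ALL")

# clip the polygons to the national boundary for display
arcpy.analysis.Clip("park_thiessen", "us_boundary", "park_thiessen_us")
```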

Click here to view a larger image of this map

 

Lab 3 – Attribute Tables & Vector Analysis

 

Lab_3-1
This lab had us present the same data normalized by different methods, to show how we as the cartographer and analyst can present information in ways that are perceived differently. This analysis of King County was done to determine whether the effects of hazardous waste sites are distributed equally throughout the greater Seattle area. A quantitative analysis was done to identify the demographics of residents within a 1.5-mile distance of these sites. The two factors of interest are the percent of the population made up of non-white minorities and residents who fall below the poverty line. We began data collection through the United States Census Bureau, selecting all non-white minorities and residents in poverty. The data was stored in an Excel file that we then had to format appropriately for the GIS software to process. From the same website, we obtained a shapefile containing King County census blocks. Areal weighted interpolation was used to estimate the population in buffers around each hazardous waste site; this was done by determining the proportion of each block group that falls within 1.5 miles of each site. A series of fields and calculations were added to the attribute table of our joined datasets. Darker colors in the map represent a higher population of the factors of interest, while census blocks symbolized with lighter colors have a lower population.
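A simplified sketch of the areal weighted interpolation, assuming buffered site polygons ("site_buffers"), census block groups ("block_groups") with a "POP" field, and a projected coordinate system so Shape_Area is meaningful (all names are hypothetical):

```python
import arcpy

# record each block group's original area before intersecting
arcpy.management.AddField("block_groups", "ORIG_AREA", "DOUBLE")
arcpy.management.CalculateField("block_groups", "ORIG_AREA", "!Shape_Area!", "PYTHON3")

# split block groups by the 1.5-mile buffers around the waste sites
arcpy.analysis.Intersect(["block_groups", "site_buffers"], "bg_in_buffers")

# estimate the population inside the buffers: share of area times population
arcpy.management.AddField("bg_in_buffers", "EST_POP", "DOUBLE")
arcpy.management.CalculateField(
    "bg_in_buffers", "EST_POP",
    "!POP! * (!Shape_Area! / !ORIG_AREA!)", "PYTHON3")
```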

Click this link to view a larger image of this map

 

Lab 3 Practical Exam

 

Fenske_Exam3_Export-1
Our practical exam was similar to the in-class lab except for the extent of the area we were working with. One problem I ran into was getting census data for the "Salish Sea" region, which crosses national borders, so I had to work in Excel merging US tables with Canadian tables. Once that was done, I brought the tables into Arc and began to work with them there. I used Batch Project to get all of my layers into HARN State Plane Washington North. Once all of the layers were in the same projection, I had to recalculate area for my newly joined county and province polygons, because Arc does not automatically update user-generated fields (sq. km). After this I performed areal weighted interpolation to find the projected change in population of this area over a 25-year period.
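A sketch of the area recalculation after re-projection, since user-created area fields do not update automatically; "counties_provinces" and the "SQ_KM" field name are assumed:

```python
import arcpy

# repopulate the square-kilometer field from the re-projected geometry
arcpy.management.CalculateGeometryAttributes(
    "counties_provinces", [["SQ_KM", "AREA"]],
    area_unit="SQUARE_KILOMETERS")
```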

Click this link to view a larger image of the map

 

Lab 4 – Raster Analysis

 

Lab4_Elevation-1
This lab focused on the processing, analysis, and display of raster data. In particular, we utilized the Raster Calculator, Aggregate, Focal Statistics, Con, Reclassify, Polygon to Raster, Raster to Polylines, Slope, Least Cost Path, Aspect, and Raster Clip tools. We used these tools to accomplish two goals: to create a Swiss Hillshade effect by layering a transparent DEM over two specialized Hillshades, and to use a weighted overlay to determine a least cost path across the North Cascades. The bottom two maps display Aspect and Slope of the same extent of the North Cascades.
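A rough sketch of the raster steps behind one common Swiss hillshade recipe, plus the slope and aspect surfaces, assuming the clipped DEM is "cascades_dem" (hypothetical); the transparent layering itself is set in the layer properties rather than in code:

```python
from arcpy.sa import Hillshade, Aggregate, FocalStatistics, NbrRectangle, Slope, Aspect

hs = Hillshade("cascades_dem", 315, 45)                   # standard hillshade
hs_general = Aggregate(hs, 4, "MEDIAN")                   # generalized copy of the hillshade
hs_smooth = FocalStatistics(hs_general, NbrRectangle(3, 3, "CELL"), "MEDIAN")

slope = Slope("cascades_dem", "DEGREE")                   # surfaces for the bottom two maps
aspect = Aspect("cascades_dem")
```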

Click this link to view a larger image of the map

Fenske_Model1-1

Click this link to view the annotated Model Builder used to make this map

 

Lab 4 Practical

 

Lab4_TradeRoutes-1
In order to try and pinpoint likely archeological sites in the North Cascades, we used a least cost analysis based on a weighted overlay to determine the easiest routes through the range where people might have traveled before roads existed in the area. The main parameters for this analysis were steepness of slope (steeper slope = more cost to travel across) and distance to streams (as people need to be near a water source when traveling). We also removed waterbodies from the analysis, as humans are not likely to travel across lakes or large rivers. The map above shows the least cost paths between Winthrop and modern-day coastal cities. Below is the model used to conduct this analysis.
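A sketch of the least-cost analysis, assuming reclassified slope and distance-to-stream rasters already scaled to a common range, a "winthrop" origin point, and a "coastal_cities" destination layer (names and weights are hypothetical):

```python
from arcpy.sa import Raster, CostDistance, CostBackLink, CostPath

# weighted overlay: steeper slopes and greater distance from water cost more
cost = Raster("slope_reclass") * 0.6 + Raster("stream_dist_reclass") * 0.4

cost_dist = CostDistance("winthrop", cost)
back_link = CostBackLink("winthrop", cost)

# one least-cost path from each coastal city back to Winthrop
paths = CostPath("coastal_cities", cost_dist, back_link, "EACH_ZONE")
paths.save("least_cost_paths")
```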

Click this link to view a larger image of the above map

Fenske_Model2-1.PNG

Click this link to view the Model Builder used to make this map

 

Lab 5 – Multi Criteria Evaluation

 

ChangeInSuitability-1
For this lab, we formed groups of 3-4 and were instructed to predict the potential changes in habitat suitability of Sitka spruce (Picea sitchensis) by the year 2080 based on projections of global climate change. We conducted an initial literature review to pinpoint climatic variables that are influential in dictating where the species grows. We then downloaded historical and future climate data for these variables and used the data to determine how and where the species' habitat range would change. The variables my group and I decided would most greatly impact the Sitka spruce's suitability were: Mean Annual Temperature (MAT), Mean Annual Precipitation (MAP), Mean Maximum Temperature (Tmax), Mean Minimum Temperature (Tmin), Degree Days Below 0°C (DD.0), Degree Days Above 5°C (DD.5), the temperature difference between MWMT and MCMT, or continentality (TD, °C), and Winter Precipitation (PPTwt, mm). For our analysis we quantified the climate variables using zonal statistics: the Zonal Statistics as Table tool was run for each climate variable in each time period. We then added and subtracted 1.5 standard deviations from the mean to obtain minimum and maximum values for each dataset, creating the envelope of values that could potentially be suitable for the Sitka spruce in both current and projected climates.
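A sketch of one variable's piece of the envelope, assuming a species-range zone layer ("sitka_range" with a "SPECIES" field) and current and 2080 mean annual temperature rasters (all names hypothetical):

```python
import arcpy
from arcpy.sa import Con, Raster, ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

# summarize the climate variable over the current species range
ZonalStatisticsAsTable("sitka_range", "SPECIES", "MAT_current", "mat_stats", "DATA", "ALL")

# pull the mean and standard deviation back out of the zonal table
with arcpy.da.SearchCursor("mat_stats", ["MEAN", "STD"]) as cursor:
    mean, std = next(cursor)

low, high = mean - 1.5 * std, mean + 1.5 * std   # the suitability envelope for this variable

# 1 where the projected 2080 raster stays inside the envelope, 0 elsewhere
mat_2080 = Raster("MAT_2080")
mat_suitable = Con((mat_2080 >= low) & (mat_2080 <= high), 1, 0)
mat_suitable.save("MAT_suitable_2080")
```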

Click this link to view a larger image of this map

 

PART4SuitabilityModel_SUS-1

Click this link to view the Model Builder used to create the above map

 

Tmin_Change-1
This is a map of just one of the climatic variables instead of the eight used above: the Tmin variable, mean minimum temperature.

Click this link to view a larger image of this map

ENVS 321 – Computer Cartography

Environmental Studies 321: GIS II: Computer Cartography

The purpose of this course was to introduce students to computerized cartography and graphic design techniques and skills. Maps are powerful communication tools for describing geographic distributions and relationships, and this class covered various cartographic methods, as well as some of the limitations of graphic communication, for illustrating reports, papers, and theses. Topics covered include symbology, text (map annotation), layout, and the use of color for cartography.

Lab 1 – Basics

Fenske_Lab1b
This lab consisted of maps related to the 2004 tsunami in the Indian Ocean. It focused on the basics of cartographic design (map elements common to most maps, such as a scale indicator and orientation) and the basics of ArcGIS Pro. The emphasis was on the tools and properties used for cartographic output: setting the page size and parameters, selecting and working with basic symbols, working with the scale indicator, working with layer, map frame, and layout properties, modifying legends, etc.

 

Lab 2 – Symbols

Fenske_Lab2a
This lab focused on the use of symbology to communicate spatial information. We were not allowed to use color to differentiate areas of the map; the purpose was to make us rely on elements such as text and point symbology to make map elements stand out from one another.

Click here to view a larger image of this map

 

Fenske_Lab2b
Our primary focus was San Juan County, but we also included some of the adjoining islands in Whatcom and Skagit Counties. This map had us focus on the difference that the size and color of text can make. Again, no color was allowed.

Click here to view a larger image of this map

 

Fenske_Lab2c

Similar to the second map, the focus here was on using annotation feature classes for labeling, as well as on using color to differentiate map elements.

Click here to view a larger image of this map

 

Lab 3 – Text

 

Fenske_Lab3a
This lab focused on the requirements and possibilities of the written word when used as part of a cartographic product. The two previous labs gave minimal attention to text; lettering, however, is one of the most common symbols found on maps.

Click here to view a larger image of this map

 

Fenske_Lab3b
With this map we were tasked with coming up with our own symbology for the different campsites of Sucia Island State Park. I chose to color-code each campsite, as seen in the legend. Our instructions were to symbolize every single point feature in the file we were given, no matter what.

Click here to view a larger image of this map

 

Lab 4 – Color

 

Fenske_Lab4MapB
This lab gave us our first opportunity to work with DEM data as well as bathymetric data. We were not allowed to use any black or grey to symbolize map elements aside from text. This became increasingly difficult as I went on, as I found myself running out of colors to use, and it gave me an appreciation for the earlier labs where everything was black and white only. It goes to show how visually distracting so much contrasting color in one area can be.

Click here to view a larger image of this map

 

Lab 5 – Final

 

Fenske_Lab5aWA
The purpose of this lab was to have students come up with their own maps of anything they'd like, using the elements we had learned about throughout the quarter. This first map shows all of the major lakes and reservations in Washington State. My focus was mainly on the text aspect of the class, which is something I enjoyed because of how detail-oriented it can be.

Click here to view a larger image of this map

 

Fenske_Lab5aMT
I followed a similar theme with this map of Montana, but focused a bit more on the color aspect, as in Lab 4. I included population density data along with national parks. We were told to include multiple themes within these maps and to relate them to one another in terms of content and cartographic style.

Click here to view a larger image of this map