ENVS 420 – Analysis and Modeling


Environmental Studies 420: GIS III: Analysis and Modeling


Lab 1 – Data Management and Model Builder


For this assignment, we were required to build a model in ModelBuilder that creates a folder directory for our project. We were then given stream and road data for Whatcom County, Washington and instructed to determine the number of stream and road crossings per Watershed Administrative Unit in the county. This project identifies points where streams and roads intersect within Whatcom County watersheds. Restoration efforts are being made at these points to reduce the amount of automobile runoff entering Whatcom watersheds, and restoration projects will be prioritized in the watersheds with the highest frequency of intersections.

The first step of the analysis was to select Whatcom County from the Washington Counties dataset. The Clip tool was then used to identify Washington watersheds falling within Whatcom County. The next step was identifying point locations where roads and streams intersect; both line and polygon intersections were accounted for. The Identity and Frequency geoprocessing tools were used to create an output table containing the number of intersections in each watershed.

Shown in the map below are Whatcom County watersheds symbolized by the number of stream-road intersections. The watersheds with the highest number of intersections are Maple Creek – North Fork Nooksack River, Hedrick Creek – North Fork Nooksack River, Canyon Creek, Dakota Creek – Frontal Drayton Harbor, Lake Whatcom, and Skookum Creek.
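The Identity-then-Frequency step boils down to counting crossing points per watershed. Here is a minimal sketch in plain Python (not arcpy) using hypothetical crossing points already tagged with their watershed, which is what the Identity tool produces:

```python
from collections import Counter

# Hypothetical crossing points, each tagged with the Watershed
# Administrative Unit it falls in (the Identity step's output).
crossings = [
    {"id": 1, "wau": "Canyon Creek"},
    {"id": 2, "wau": "Canyon Creek"},
    {"id": 3, "wau": "Lake Whatcom"},
    {"id": 4, "wau": "Skookum Creek"},
    {"id": 5, "wau": "Canyon Creek"},
]

# The Frequency step reduces to a count of points per watershed.
counts = Counter(pt["wau"] for pt in crossings)
ranked = counts.most_common()  # watersheds ordered for prioritization
print(ranked)  # [('Canyon Creek', 3), ('Lake Whatcom', 1), ('Skookum Creek', 1)]
```

The ranked list is exactly what drives the prioritization: watersheds with the most crossings come first.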

Click this link to view a larger image of this map

Lab 1 Practical Exam


After each in-class lab we are presented with a practical exam in which we apply what we learned in lab that week in a take-home exam. For this practical exam, we were instructed to determine the walkability of neighborhoods in Bellingham, Washington, using the number of street intersections per neighborhood as our index. To run this analysis, I created a model in ModelBuilder and used the Clip, Intersect, Identity, and Frequency tools to count the street intersections per neighborhood. I then created a choropleth map, symbolized using graduated colors, to show the number of street intersections per neighborhood cartographically.
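Graduated-color symbology just bins each neighborhood's count into one of a handful of classes. A minimal sketch of one common classification scheme, equal interval, with hypothetical per-neighborhood counts (ArcGIS also offers natural breaks, quantile, and others):

```python
def equal_interval_breaks(values, n_classes):
    """Upper break values for an equal-interval classification."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * i for i in range(1, n_classes + 1)]

# Hypothetical street-intersection counts per neighborhood.
intersections = [12, 45, 3, 78, 56, 21, 90, 34]
breaks = equal_interval_breaks(intersections, 3)
print(breaks)  # [32.0, 61.0, 90.0]
```

Each neighborhood is then assigned the color of the first class whose upper break is at or above its count.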

Click this link to view a larger image of the map

Click this link to view the Model Builder image

Lab 2 – Coordinate Systems & Map Projections


Focusing specifically on projections, this lab involved exercises in choosing a correct projection for a given task, determining what projection a given dataset was in, changing a dataset's projection using the Project tool, defining a dataset's projection using the Define Projection tool, and creating a custom projection for a custom task.

To model ashfall risk from a potential Yellowstone Caldera eruption, we modified the standard parallels and central meridian of a USA Contiguous Equidistant Conic projection in order to maximize distance accuracy in the Northwest USA region. The United States Geological Survey (USGS) has determined there is potential for 1.5 centimeters of ashfall within 500 kilometers of the Yellowstone Caldera. In the event of a Yellowstone Caldera eruption, ash would cover a much greater area of the United States, though we are interested in the areas at highest risk. Shown below is a map depicting the towns that fall within 500 kilometers of the eruption site.

To begin the analysis, it was important to identify projection issues in the data we were given. The first step was manually editing the USA Contiguous Equidistant Conic projection to better fit our area of study: the central meridian and latitude of origin were changed to the coordinates of Yellowstone, with the standard parallels set two degrees north and south of Yellowstone. Editing these parameters minimized distortion at the location of interest. To re-project all data layers, the 'Iterate Feature Class', 'If Coordinate System Is', and 'Project' geoprocessing tools were used. The 'If Coordinate System Is' tool read through each file in our geodatabase to identify which data needed to be re-projected; any data not correctly projected was fed into the 'Project' tool, which changed it to our custom projection. With the analysis complete, 35 towns were identified within the 500-kilometer buffer around the Yellowstone Caldera.
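The custom projection recipe above is just a handful of parameter edits. A minimal sketch, using approximate Yellowstone Caldera coordinates (hedged values, not the exact ones used in the lab):

```python
# Approximate coordinates of the Yellowstone Caldera (illustrative only).
YELLOWSTONE_LAT, YELLOWSTONE_LON = 44.4, -110.7

# The lab's recipe: center the Equidistant Conic on the caldera and
# place the standard parallels two degrees north and south of it.
custom_conic = {
    "projection": "Equidistant_Conic",
    "central_meridian": YELLOWSTONE_LON,
    "latitude_of_origin": YELLOWSTONE_LAT,
    "standard_parallel_1": YELLOWSTONE_LAT - 2.0,
    "standard_parallel_2": YELLOWSTONE_LAT + 2.0,
}
print(custom_conic["standard_parallel_1"], custom_conic["standard_parallel_2"])
```

In ArcGIS these same five values are what you would type into the custom coordinate system dialog, or into the WKT string passed to the Project tool.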

Click this link to view a larger image of this map


Lab 2 Practical Exam

Our task for this practical exam was to create a map showing the closest National Park or Preserve to any given area in the U.S. We were given all of the data necessary to run this analysis, but there were projection issues in the datasets that we had to sort out. This included redefining the projection of a dataset that was in the wrong projection, figuring out the projection of a dataset that had lost its .prj file, and then reprojecting all of the data into a projection appropriate for our analysis. I then used the Thiessen Polygon tool to find the areas closest to National Parks and Preserves in the United States, clipped the Thiessen polygons to a US shapefile, and created an annotation feature class for all labeling. This was one of the most enjoyable exams for me; I had a lot of fun with the cartography side of this project and with the Thiessen Polygon tool.
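Thiessen (Voronoi) polygons partition space so that every location falls in the polygon of its nearest generating point; for any single location, that is simply a nearest-neighbor query. A minimal sketch with hypothetical park coordinates in a projected (planar) coordinate system:

```python
import math

# Hypothetical park locations (name, x, y) in projected coordinates.
parks = [("North Cascades", 0.0, 0.0),
         ("Olympic", 10.0, 0.0),
         ("Mount Rainier", 5.0, 8.0)]

def nearest_park(x, y):
    """The park whose Thiessen polygon contains (x, y):
    the park at minimum planar distance."""
    return min(parks, key=lambda p: math.hypot(p[1] - x, p[2] - y))[0]

print(nearest_park(2.0, 1.0))   # North Cascades
print(nearest_park(6.0, 7.0))   # Mount Rainier
```

This is also why the projection fixes mattered: planar distance is only meaningful once everything is in a suitable projected coordinate system.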

Click here to view a larger image of this map


Lab 3 – Attribute Tables & Vector Analysis


This lab had us display the same data normalized in different ways, showing how the choices we make as cartographers and analysts shape how information is perceived. This analysis of King County was done to determine whether the effects of hazardous waste sites are distributed equally throughout the greater Seattle area. A quantitative analysis was done to identify the demographics of residents within 1.5 miles of these sites. The two factors of interest are the percent of the population made up of non-white minorities and the percent of residents who fall beneath the poverty line.

We began data collection through the United States Census Bureau, selecting all non-white minorities and residents in poverty. The data was stored in an Excel file that we had to format appropriately for the GIS software to process, and from the same website we obtained a shapefile containing King County census blocks. Areal weighted interpolation was used to estimate the population in buffers around each hazardous waste site. This was done by determining the proportion of each block group that falls within 1.5 miles of each site. A series of fields and calculations were added to the attribute table of our joined datasets. Darker colors in the map represent a higher population of the factors of interest, while census blocks symbolized with lighter colors have a lower population.
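Areal weighted interpolation assumes population is spread evenly across each census block, so a block contributes population in proportion to the share of its area inside the buffer. A minimal sketch of that arithmetic, with hypothetical figures:

```python
# (block population, block area, area of block inside the 1.5-mile buffer)
# All figures are hypothetical; areas are in the same (arbitrary) units.
blocks = [
    (400, 2.0, 1.0),   # half the block in the buffer -> contributes 200
    (900, 3.0, 1.5),   # half in the buffer           -> contributes 450
    (250, 1.0, 0.0),   # entirely outside the buffer  -> contributes 0
]

# Each block contributes population * (area inside / total area).
est = sum(pop * (inside / area) for pop, area, inside in blocks)
print(est)  # 650.0
```

In ArcGIS this corresponds to intersecting blocks with the buffer, recalculating the clipped areas, and computing the weighted field in the attribute table.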

Click this link to view a larger image of this map


Lab 3 Practical Exam


Our practical exam was similar to the in-class lab except for the extent of the area we were working with. One problem I ran into was getting census data for the Salish Sea region, which crosses national borders, so I had to play around in Excel merging US tables with Canadian tables. Once done in Excel, I brought the tables into Arc and began to work with them there. I used Batch Project to get all of my layers into HARN State Plane Washington North. Once I had all of the layers in the same projection, I had to recalculate area (sq. km) for my newly joined polygons of counties and provinces, because Arc does not automatically update user-generated fields. After this, I performed areal weighted interpolation to find the projected change in population of this area over a 25-year period.

Click this link to view a larger image of the map


Lab 4 – Raster Analysis


This lab focused on the processing, analysis, and display of raster data. In particular, we utilized the Raster Calculator, Aggregate, Focal Statistics, Con, Reclassify, Polygon to Raster, Raster to Polylines, Slope, Least Cost Path, Aspect, and Raster Clip tools. We used these tools to accomplish two goals: to create a Swiss Hillshade effect by layering a transparent DEM over two specialized Hillshades, and to use a weighted overlay to determine a least cost path across the North Cascades. The bottom two maps display Aspect and Slope of the same extent of the North Cascades.
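Slope and aspect both come from the elevation gradient at each cell. The Slope tool proper uses Horn's 3x3 neighborhood method; the sketch below uses simpler central differences on a tiny hypothetical DEM just to show where the number comes from:

```python
import math

# Tiny hypothetical DEM (elevations in meters) on a 10 m grid.
dem = [[100, 102, 104],
       [101, 103, 105],
       [102, 104, 106]]
cell = 10.0

# Elevation gradient at the center cell from central differences
# (a simplified stand-in for the Slope tool's Horn algorithm).
dz_dx = (dem[1][2] - dem[1][0]) / (2 * cell)  # east-west rate of change
dz_dy = (dem[2][1] - dem[0][1]) / (2 * cell)  # north-south rate of change

# Slope in degrees; aspect would come from the same two derivatives.
slope_deg = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
print(round(slope_deg, 2))
```

The same dz/dx and dz/dy pair also determines aspect (the compass direction of steepest descent), which is why the two maps share an extent and inputs.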

Click this link to view a larger image of the map


Click this link to view the annotated Model Builder used to make this map


Lab 4 Practical Exam


In order to try to pinpoint likely archeological sites in the North Cascades, we used a least cost analysis based on a weighted overlay to determine the easiest routes through the range where people might have traveled before roads existed in the area. The main parameters for this analysis were steepness of slope (steeper slope = more cost to travel across) and distance to streams (as people need to be near a water source when traveling). We also removed waterbodies from the analysis, as humans are not likely to travel across lakes or large rivers. The map above shows the least cost paths between Winthrop and modern-day coastal cities. Below is the model used to conduct this analysis.
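Under the hood, a least cost path is a shortest-path search over the cost surface. A minimal sketch using Dijkstra's algorithm on a tiny hypothetical 4-connected cost grid (ArcGIS's Cost Distance/Cost Path tools also handle diagonal moves and average adjacent cell costs, which this sketch omits):

```python
import heapq

# Hypothetical cost surface: each value is the cost of entering that cell.
# In the lab this came from a weighted overlay of slope and distance to
# streams, with waterbodies given prohibitively high cost.
cost = [
    [1, 3, 9, 9],
    [1, 5, 9, 1],
    [2, 1, 1, 1],
    [9, 9, 9, 1],
]

def least_cost(grid, start, goal):
    """Dijkstra over 4-connected cells: total cost of the cheapest path."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

print(least_cost(cost, (0, 0), (3, 3)))  # 7
```

The cheapest route skirts the high-cost (steep or waterbody) cells, which is exactly the behavior that makes the output plausible as a pre-road travel corridor.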

Click this link to view a larger image of the above map


Click this link to view the Model Builder used to make this map


Lab 5 – Multi-Criteria Evaluation


For this lab, we paired up in groups of 3–4 and were instructed to predict the potential changes in habitat suitability of Sitka spruce (Picea sitchensis) by the year 2080, based on projections of global climate change. We began with a literature review to pinpoint climatic variables influential in dictating where the species grows. We then downloaded historical and future climate data for these variables and used the data to determine how and where the species' habitat range would change. The variables my group and I decided would most greatly impact Sitka spruce suitability were: Mean Annual Temperature (MAT), Mean Annual Precipitation (MAP), Mean Maximum Temperature (Tmax), Mean Minimum Temperature (Tmin), Degree Days below 0°C (DD.0), Degree Days Above 5°C (DD.5), the temperature difference between MWMT and MCMT, or continentality, in °C (TD), and Winter Precipitation in mm (PPTwt).

For our analysis we quantified the climate variables using zonal statistics: the Zonal Statistics as Table tool was run for each climate variable in each time period. We then calculated 1.5 standard deviations from the mean to create an appropriate range in which the species could exist, adding and subtracting 1.5 standard deviations from the mean to obtain minimum and maximum values for all datasets. This created the envelope of values that could potentially be suitable for Sitka spruce in both current and projected climates.
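The envelope construction is a small statistical calculation repeated per variable. A minimal sketch for one variable, with hypothetical zonal-statistics values:

```python
import statistics

# Hypothetical zonal-statistics output: mean annual temperature (deg C)
# sampled across the species' current range.
mat_samples = [7.1, 8.0, 6.5, 7.8, 8.4, 7.2, 6.9, 7.6]

mean = statistics.mean(mat_samples)
sd = statistics.stdev(mat_samples)

# The lab's suitability envelope: mean +/- 1.5 standard deviations.
lower, upper = mean - 1.5 * sd, mean + 1.5 * sd

def suitable(value):
    """True if a projected value falls inside the envelope."""
    return lower <= value <= upper

print(round(lower, 2), round(upper, 2))
```

Repeating this for all eight variables, a location is predicted suitable in 2080 only if its projected value for every variable falls inside that variable's envelope.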

Click this link to view a larger image of this map



Click this link to view the Model Builder used to create the above map


This is a map of just one of the climatic variables instead of the eight used above: the Tmin variable, minimum temperature.

Click this link to view a larger image of this map
