Tuesday, November 25, 2025

Lab 5: Supervised Land Use Classification of Germantown, Maryland


 


This week’s lab focused on applying supervised image classification techniques using ERDAS Imagine to produce a current land-use map of Germantown, Maryland. The workflow required building a complete spectral signature set using AOIs at specific coordinates, refining signatures through histogram and mean plot evaluation, and running a maximum likelihood classification. I also created additional signatures for water and road surfaces to ensure the major land-cover types were accurately represented.
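Under the hood, a maximum likelihood classifier scores every pixel against a multivariate Gaussian fitted to each training signature (its mean vector and covariance matrix) and assigns the best-scoring class. The snippet below is only a minimal NumPy sketch of that idea, not ERDAS Imagine's actual implementation; the class names and DN values used in the example are invented for illustration:

```python
import numpy as np

def max_likelihood_classify(pixels, signatures):
    """Assign each pixel to the class whose multivariate Gaussian
    (built from a training signature's mean and covariance) gives
    the highest log-likelihood.

    pixels: (n_pixels, n_bands) array of DN values
    signatures: dict of class_name -> (n_samples, n_bands) training array
    """
    names = list(signatures)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        train = signatures[name]
        mean = train.mean(axis=0)
        cov = np.cov(train, rowvar=False)
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mean
        # Mahalanobis distance d^T Sigma^-1 d for every pixel at once
        maha = np.einsum("ij,jk,ik->i", d, inv, d)
        # log-likelihood up to a constant: -0.5 * (log|Sigma| + maha)
        scores[:, j] = -0.5 * (logdet + maha)
    return [names[i] for i in scores.argmax(axis=1)]
```

The refined AOI signatures from the lab play the role of the training arrays here: the cleaner the signature, the tighter the Gaussian, which is why signature evaluation mattered so much before running the classifier.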

After generating the initial supervised classification, I recoded the output into eight final land-use classes—urban, road, grass, deciduous forest, mixed forest, fallow, agriculture, and water—and calculated the area for each category using the thematic attribute table. The final map includes the recoded classified image, an inset distance file showing areas of higher spectral uncertainty, and a legend with class names, colors, and area values.
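The area values in the attribute table come straight from pixel counts: each pixel covers the cell size squared. A quick sketch of that arithmetic, assuming 30 m Landsat-style cells (the class counts below are made up for illustration, not the lab's actual numbers):

```python
CELL_SIZE_M = 30  # assumed Landsat-style cell size, metres per pixel side

def class_areas_hectares(pixel_counts, cell_size_m=CELL_SIZE_M):
    """Convert per-class pixel counts to area in hectares.
    One pixel covers cell_size_m**2 square metres; 1 ha = 10,000 m^2."""
    factor = cell_size_m ** 2 / 10_000
    return {name: count * factor for name, count in pixel_counts.items()}

# Hypothetical counts: 12,000 pixels * 900 m^2 / 10,000 = 1,080 ha
areas = class_areas_hectares({"urban": 12_000, "water": 3_500})
```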

Overall, this lab reinforced the importance of high-quality training signatures and careful class merging, and it demonstrated how supervised classification can support land-use monitoring and planning.


Tuesday, November 18, 2025

Lab 4: Spatial Enhancement & Multispectral Analysis – GIS 4035

 


This week’s lab introduced several important remote sensing skills that helped me better understand how satellite imagery is structured and how different filters, band combinations, and indices can bring out features that aren’t obvious at first glance. Module 4 focused on working in both ArcGIS Pro and ERDAS IMAGINE, and I definitely noticed that each program has its strengths depending on the task.


Getting Started: Image Enhancements

We began by learning about spatial enhancement techniques, which modify pixel values to highlight patterns, edges, or generalize areas of uniform texture. I applied both low pass and high pass filters to see how they changed the appearance of land cover. The low pass filter softened the image, while the high pass filter amplified edges—especially along roads, buildings, and ridgelines. These enhancements turned out to be important for understanding how different kernels and spatial frequencies affect interpretation.
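Both filters boil down to a 3×3 kernel convolved over the image: the low pass kernel averages each neighborhood (softening), while the high pass kernel subtracts the neighborhood from the center pixel (amplifying edges). A rough NumPy sketch of that idea, not the ERDAS kernel dialog itself:

```python
import numpy as np

def apply_kernel(image, kernel):
    """Naive 3x3 convolution (edge pixels left unchanged) to mimic
    a single spatial-enhancement filter pass."""
    out = image.astype("float64").copy()
    h, w = image.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = (image[r-1:r+2, c-1:c+2] * kernel).sum()
    return out

LOW_PASS = np.full((3, 3), 1 / 9)        # mean filter: smooths texture
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])     # accentuates edges
```

On a perfectly uniform area the high pass output is zero, which is exactly why roads and ridgelines (where values change abruptly) light up while flat fields stay dark.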

Working With Multispectral Data

A major part of the lab involved exploring the individual spectral bands and learning what each wavelength reveals. Opening each layer in ERDAS IMAGINE helped me identify which bands were most useful for vegetation, soil, water, and built features.

I used grayscale views and the Inquire Cursor to evaluate DN values, which became very important once we reached the feature identification portion of the exercise.

Creating NDVI

Next, I created an NDVI raster using the built-in tool in ERDAS. NDVI’s formula (Band 4 – Band 3) / (Band 4 + Band 3) made it easy to separate healthy vegetation from clearcuts, soil, water, and other features.

Using the swipe tool to compare NDVI with the original image showed me how vegetation completely dominated the scene, which explained why so much of the NDVI surface looked “washed out.”
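The same calculation is easy to reproduce outside ERDAS. A minimal NumPy version of the (Band 4 − Band 3) / (Band 4 + Band 3) formula, with a guard for pixels where the denominator is zero:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); zero-sum pixels become NaN."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    out = np.full(nir.shape, np.nan)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

Healthy vegetation (high NIR, low red) pushes the ratio toward +1, while water (low NIR) falls near or below zero, which is the separation the swipe comparison made visible.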

Exercise 7: Feature Identification and Subsetting

The final exercise brought together everything from earlier in the lab. I used the image histogram, grayscale views, multispectral combinations, and the Inquire Cursor to identify three distinct features in the scene. Once each feature was located, I created subsets and mapped them in ArcGIS Pro using different RGB band combinations.

Feature 1: River (Band Combo 4-3-2)

The river was identified by its very low NIR reflectance, visible as a spike between DN 12–18 in Layer 4. Water absorbs NIR strongly, so it appeared extremely dark in the grayscale display. The 4-3-2 composite (false color) made the river stand out sharply against bright red vegetation.


Feature 2: Barren Land (Band Combo 5-4-3)

This area showed a unique spectral signature—bright in visible bands (DN ~200) but very dark in SWIR (DN 9–11). This is typical of exposed soil or dry ground. Using the 5-4-3 combination highlighted these moisture differences and made barren areas easy to distinguish from vegetated surroundings.


Feature 3: Brackish or Shallow Water (Band Combo 3-2-1)

The last feature was a patch of water with unusually high reflectance in the visible bands but no change in SWIR. This indicated shallow or sediment-filled water. Using the 3-2-1 true-color display made this tonal variation in the water much more visible.




Reflection

This lab required a lot of trial and error, especially when navigating ERDAS IMAGINE’s tools and interpreting histograms. However, by the end I felt much more comfortable reading DN values, selecting effective band combinations, and understanding how different wavelengths help reveal different types of features. This was also my first time creating multiple image subsets for mapping, and it really helped reinforce how multispectral analysis supports real-world feature identification.

Tuesday, November 11, 2025

Exploring ERDAS Imagine: Navigating Lab 3a & 3b Challenges and Discoveries

 

Lab 3A – Intro to ERDAS IMAGINE and Digital Data

Overview

This week’s lab was my first deep dive into ERDAS IMAGINE, learning how to calculate properties of electromagnetic radiation (EMR) and how to view, manipulate, and subset raster imagery. I explored different datasets, including AVHRR and Landsat TM, and practiced preparing imagery for mapping in ArcGIS Pro.

I’ll admit, I struggled a bit at first — some of the tools in ERDAS IMAGINE have been renamed or rearranged since the lab instructions were written. It took some exploring to find the modern equivalents of certain commands, but once I got the hang of it, the workflow made a lot more sense.

Key Takeaways

  • Understanding EMR: I reviewed how wavelength, frequency, and energy relate through Maxwell’s and Planck’s formulas. It’s amazing how these invisible properties directly affect what we see in satellite images.

  • Viewer Basics: I learned to open and display raster layers, use the Viewer to switch between True Color and Pseudo Color, and even set up multiple 2D views for side-by-side comparison.

  • Navigation & Preferences: I customized the Viewer for easier zooming and set defaults like Fit to Frame and Background Transparent — which made a big difference when layering multiple images.

  • AVHRR vs. Landsat: Comparing coarse-resolution AVHRR data with high-resolution Landsat TM imagery really highlighted how spatial resolution impacts what you can actually see.

  • Subsetting Data: Using the Inquire Box and Subset Image tools, I clipped out a smaller area (tm_subset.img) and calculated the area for each land-cover class before exporting it for mapping in ArcGIS Pro.
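The EMR relationships from the first takeaway are easy to verify with a few lines of code. A small sketch of c = λν and E = hc/λ, using a red wavelength of 650 nm as an arbitrary example:

```python
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def frequency_from_wavelength(wavelength_m):
    """c = lambda * nu  =>  nu = c / lambda."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """E = h * nu = h * c / lambda (joules per photon)."""
    return H * C / wavelength_m

# Red light near 650 nm: frequency ~4.6e14 Hz, energy ~3.1e-19 J
nu = frequency_from_wavelength(650e-9)
E = photon_energy(650e-9)
```

Shorter wavelengths mean higher frequency and higher photon energy, which is why the different bands of a sensor respond to such different surface properties.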

Deliverable

I created a map of the TM Subset image in ArcGIS Pro with color-coded land-cover classes and area values in hectares.


Lab 3B – Intro to ERDAS IMAGINE and Digital Data Part 2

Overview

Lab 3B built on everything from 3A and introduced me to the analytical side of ERDAS IMAGINE. I learned to interpret image metadata, understand the different types of resolution, and analyze thematic data. Again, a few menu names didn’t match the lab sheet, so I spent some time experimenting to find where certain options had moved — but figuring it out gave me a better sense of how flexible the software really is.

Key Takeaways

  • Layer Info & Metadata: I examined image details like file type, projection, pixel size, and brightness statistics for subset_tm_00.img — useful context when deciding how to process or symbolize data.

  • Spatial Resolution: I compared a series of Pensacola images (sra – srd) with pixel sizes from 2 m to 16 m. It was eye-opening to see how smaller pixels capture more ground detail (down to visible cars!) while larger ones blur those features together.

  • Radiometric Resolution: Comparing 1-bit, 4-bit, and 8-bit images helped me understand how more bits allow finer distinctions in brightness — even if our eyes can’t always tell the difference.

  • Spectral & Temporal Resolution: I reviewed how multiple bands improve spectral detail and how satellite revisit frequency defines temporal resolution.

  • Thematic Raster Analysis: I opened soils_95.img and hydro_00.shp to calculate area and percent coverage of soil types. Then, I built a query to highlight soils that were fine-textured and humus-rich — indicators of higher erosion potential.
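The radiometric-resolution comparison boils down to 2^bits brightness levels. This toy requantizer (my own illustration, not an ERDAS tool) shows how an 8-bit DN collapses at coarser bit depths:

```python
def gray_levels(bits):
    """Number of distinct brightness values an n-bit image can store."""
    return 2 ** bits

def quantize(dn_8bit, target_bits):
    """Requantize an 8-bit DN (0-255) to a coarser bit depth, keeping
    the result on the 0-255 scale so the lost detail is visible."""
    step = 256 // gray_levels(target_bits)
    return (dn_8bit // step) * step

# At 1 bit there are only 2 levels: DNs 0-127 map to 0, 128-255 to 128,
# which is why the 1-bit Pensacola image looked like a binary mask
```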




🧭 Reflection

Together, Labs 3A and 3B gave me a solid foundation in ERDAS IMAGINE — from basic image viewing to raster analysis. While it was a little tricky adapting to tool names that didn’t quite match the instructions, the process really reinforced my understanding of image resolution and thematic mapping. These labs made me appreciate how much precision and preparation go into transforming satellite data into meaningful maps.

Sunday, October 26, 2025

Visual Interpretation of Aerial Photography

 


This lab introduced key visual interpretation skills used in remote sensing. Each map explores different visual elements—tone, texture, shape, size, shadow, pattern, association, and color—to identify geographic features using aerial imagery.


Figure 1. Identification of Tone and Texture in Aerial Photography



This map illustrates how variations in tone (brightness) and texture (surface roughness) reveal distinct land cover types such as vegetation, water, and urban areas. Differences in tone and texture help interpret the landscape even without attribute data or scale.


Figure 2. Identifying Geographic Features Using Shape, Size, Shadow, Pattern, and Association



This layout demonstrates how visual cues—such as shape, shadow length, repeating patterns, and spatial associations—assist in recognizing man-made and natural features. Examples include identifying buildings by shape, trees by shadow, and neighborhoods by pattern.


Figure 3. Feature Identification Using True Color Imagery



This true color image shows features as they appear to the human eye, making it easier to distinguish forests, water bodies, and developed land. It emphasizes how true color composites provide realistic visual context for aerial interpretation.



Through these three exercises, I learned how visual interpretation techniques transform raw aerial imagery into meaningful geographic information. Recognizing features through tone, texture, and spatial relationships is an essential foundation for remote sensing analysis.

Monday, October 20, 2025

Starting GIS 4045: Photo Interpretation & Remote Sensing



This week marks the start of a new adventure — GIS 4045: Photo Interpretation and Remote Sensing! I’m really excited to explore how satellite imagery and aerial photos can reveal patterns that aren’t visible from the ground. Coming from an emergency management background, I’m especially interested in how remote sensing supports disaster response, from mapping hurricane damage to monitoring flood zones.

As someone who loves traveling and scuba diving, I think it’ll be fascinating to see how these same tools are used to study coastlines, coral reefs, and environmental change here in Florida and around the world. Looking forward to sharpening my “spatial eye” and seeing the planet in a whole new way! 

Saturday, October 18, 2025

🌎 Bobwhite–Manatee Transmission Line GIS Project

 


Overview

This project focused on analyzing the environmental and community impacts of the Florida Power & Light (FPL) Bobwhite–Manatee Transmission Line. Using GIS tools, I evaluated conservation lands, wetlands, uplands, and parcel data within the study corridor to understand potential environmental sensitivities. The project built on skills from earlier modules and combined spatial analysis with cartographic design to communicate complex spatial relationships through clear visual maps.

📂 Project Files:
ARCGIS map 

https://drive.google.com/file/d/1ge_rg2Bsvc27qIdms6xDdW2dLJwcog-b/view?usp=drive_link

MP4 of Presentation 

https://drive.google.com/file/d/1t3-IEiTKalGSNogWwbT_DbV5nQsTRkUi/view?usp=drive_link

Power Point Presentation

https://docs.google.com/presentation/d/1nIGaeSvLzEY3msQOoG9ZRcaCaHvxSgIs/edit?usp=drive_link&ouid=108553125521979567446&rtpof=true&sd=true

Transcript of Presentation



🔧 Tools and Techniques

Throughout this project, I used several ArcGIS Pro tools and workflows:

  • Clip, Select by Location, and Intersect to define environmental impacts.

  • Calculate Geometry and Summary Tables to quantify affected lands.

  • Layout View for creating final maps with unified color schemes, legends, and scale bars.

  • Attribute queries and labeling to differentiate between wetlands and uplands.

I also incorporated data from multiple authoritative sources, including:

  • Florida Department of Education (School Locations)

  • U.S. Census Bureau TIGER/Line Shapefiles (Parcels, Roads, Boundaries)

  • FGDL Conservation Lands Dataset

  • NWI Wetland and Upland Polygons


💡 Learnings

This project reinforced the importance of data organization, projection management, and clear cartographic communication. I learned how essential it is to track coordinate systems and attribute structures before beginning analysis. Working through multiple data layers helped strengthen my understanding of overlay analysis and how different GIS operations can produce unique insights depending on the sequence of tools applied.


😣 Frustrations and Fixes

Not everything went smoothly! Some of the main challenges included:

  • Difficulty adding neatlines to maps—this option was unexpectedly unavailable in the Insert menu.

  • Trouble summarizing multiple environmental datasets together in one statistical table.

  • Confusion over source citations, since not all datasets included clear attribution metadata.

Despite these obstacles, persistence and experimentation paid off. By testing alternate symbology, refining selections, and verifying projection settings, I was able to complete a cohesive analysis and presentation.


🎤 Reflection

This project demonstrated how GIS supports environmental planning and infrastructure assessment, particularly when balancing community needs with ecological sensitivity. The ability to visualize overlaps between human development and conservation priorities makes GIS a powerful decision-making tool in both academic and professional contexts.

Friday, October 10, 2025

Ten Hours, One Misplaced Campus, and a Lesson in Georeferencing


I’ll be honest — I spent ten hours trying to georeference the UWF SP1 image before realizing I was aligning it to the completely wrong part of campus. Ten. Whole. Hours. I zoomed, stretched, and cursed at Null Island (that mysterious spot off the coast of Africa where all unreferenced rasters go to die), wondering why the roads never quite matched. Only after an embarrassingly long stare at the campus map did it hit me: I’d been forcing the image to fit a section it didn’t belong to. Lesson learned — sometimes it’s not your control points that are off, it’s your entire frame of reference.

Once I finally lined up the right buildings, everything clicked. Using the Georeferencing tools in ArcGIS Pro, I matched the unknown rasters (uwf_n.jpg and uwf_s1.jpg) to known vector data — roads, buildings, and the Eagles Nest feature. Through this, I learned the delicate dance between Root Mean Square Error (RMSE) and actual visual accuracy. A low RMSE feels satisfying, but if the image doesn’t look right, it isn’t right. Precision means nothing if your buildings are swimming in the bay.
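For reference, RMSE over control points is just the square root of the mean squared residual distance between where each point lands after the transformation and where it should be. A minimal sketch (the point-pair format here is my own assumption, not the ArcGIS Pro control-point table layout):

```python
import math

def rmse(control_points):
    """Root Mean Square Error over georeferencing control points.
    Each entry is ((transformed_x, transformed_y), (target_x, target_y))."""
    sq = [(tx - mx) ** 2 + (ty - my) ** 2
          for (tx, ty), (mx, my) in control_points]
    return math.sqrt(sum(sq) / len(sq))
```

One badly placed point inflates the average quadratically, and conversely a handful of tightly clustered points can produce a deceptively low RMSE while the rest of the image drifts, which is exactly the "low RMSE but wrong-looking map" trap described above.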

After georeferencing, we moved into editing and digitizing. Creating new polygons for campus buildings and tracing new road segments taught me how essential snapping and attribute management are for clean, logical data. It’s oddly satisfying to see your newly drawn Gym building sitting perfectly atop the raster you just anchored to reality. (And yes, saving edits — manually — is a must. Auto-save doesn’t exist here to save you from yourself.)

The geoprocessing tools came next, with the Multiple Ring Buffer (MRB) tool taking center stage. By buffering 330 and 660 feet around the Eagles Nest, we mapped the FWC’s conservation zones — a reminder that GIS isn’t just about pixels and points, but protecting habitats through spatial awareness.
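Conceptually, a multiple ring buffer just bins features by distance from the target. A toy point-in-ring check using the lab's 330 ft and 660 ft radii (planar distances assumed; the real MRB tool outputs polygon rings rather than classifying points):

```python
import math

def ring_zone(point, nest, radii_ft=(330, 660)):
    """Return the index of the buffer ring a point falls in:
    0 inside the first radius, 1 between the radii, None outside both."""
    d = math.dist(point, nest)
    for i, r in enumerate(radii_ft):
        if d <= r:
            return i
    return None
```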

Finally, the lab ventured into 3D mapping, hyperlinking data (like the eagle nest photo stored on Google Drive), and visualizing layers in a scene that felt almost tangible. Seeing those buffers rise in a 3D environment made the entire process — the frustration, the misalignment, the rediscovery — feel worth it.

In the end, this lab wasn’t just about georeferencing or buffers. It was about patience, perspective, and realizing that accuracy in GIS depends as much on critical thinking as it does on technical skill. And next time, before spending another ten hours georeferencing, I’ll double-check that I’m even on the right part of campus.
