Date of Award
7-2023
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Ocean Engineering and Marine Sciences
First Advisor
Steven Lazarus
Second Advisor
Eraldo Ribeiro
Third Advisor
Pallav Ray
Fourth Advisor
Michael Splitt
Abstract
Image feature detection is a powerful tool with many applications, such as identifying fog or assessing roadway conditions. As part of the recent surge in machine learning applications, cloud detection has become an increasingly active area of research. Identifying low clouds is especially useful for aviation, particularly in regions of complex topography prone to visibility-related hazards such as haze or fog. To address this issue, a semi-automated, threshold-based algorithm was developed and tested to determine whether an image is obscured by fog or haze. Images were obtained from the ALERTWildfire network, a ground-based camera network in Southern California comprising more than 1000 cameras. The algorithm was trained and tested on more than 8000 images taken over a six-month period (June to December 2021).

The algorithm uses various color parameters: red, green, and blue (from RGB); saturation and value (from HSV); and lightness (L* from CIE L*a*b*). For each parameter, the relevant pixel values are extracted from an image and used to construct a histogram. Results indicate that histogram width is related to visibility, whereby narrower distributions (i.e., reduced image variability) are associated with obscured conditions. Given this relationship, a threshold-based approach was applied to individual images, classifying each as either obscured or non-obscured. The algorithm's classifications were compared to the 'truth' (i.e., human-sorted images), and the results were used to produce contingency tables composed of hits, misses, false alarms, and correct negatives. The contingency tables were then used to calculate various skill and bias scores.

The algorithm was trained over a range of thresholds to determine where, in the parameter space, the skill scores were highest. Results indicate that the skill scores peaked at thresholds of 190 and 220 for RGB, 72.5 and 77.5 for value, 0.375 and 0.9 for saturation, and 72.5, 77.5, and 87.5 for lightness. The peak thresholds were then applied to an independent set of images in order to validate the algorithm. Although somewhat reduced relative to the training results, some of the validation skill scores remained high, indicating that the algorithm has the potential to classify images correctly.

To determine whether the validation skill scores differed statistically from the optimal values obtained from the training data, both the training and validation data were bootstrapped, each sampled 10,000 times with replacement. A z-test was then applied to the skill and bias scores of each parameter. The tests yielded large z-scores and near-zero p-values for every score tested, indicating that the training and validation image sets are significantly different. This may, in part, be related to the even/odd-day sorting of the data: although that sorting should have yielded a random split, inspection of the human-sorted (i.e., verification) images indicates that the ratio of obscured to non-obscured imagery, and of misses to false alarms, differs between the two datasets. Future work should use a more balanced ratio of images, and statistically significant results would need to be verified against local ASOS (Automated Surface Observing System) observations to be of benefit to pilots and researchers.
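To make the classification step concrete, below is a minimal sketch in Python, assuming OpenCV and NumPy. The abstract does not specify how histogram width is measured or exactly how the color parameters are extracted, so the 5th-95th percentile spread, the channel scalings, and the function names are illustrative assumptions, not the thesis's implementation.

    # Illustrative sketch only: the width metric (percentile spread) and the
    # channel handling are assumptions; the thesis's exact method is not given.
    import cv2
    import numpy as np

    def histogram_width(image_bgr, parameter="value"):
        """Proxy for histogram width: spread of one color parameter's pixel values."""
        if parameter in ("blue", "green", "red"):
            values = image_bgr[:, :, ("blue", "green", "red").index(parameter)]
        elif parameter in ("saturation", "value"):
            hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
            # OpenCV stores 8-bit S, V, and L* on 0-255; rescaling may be needed
            # to match the threshold units quoted in the abstract.
            values = hsv[:, :, 1 if parameter == "saturation" else 2]
        else:  # lightness (L* from CIE L*a*b*)
            values = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)[:, :, 0]
        lo, hi = np.percentile(values.ravel(), [5, 95])
        return hi - lo  # narrow spread = low image variability

    def classify(image_bgr, threshold, parameter="value"):
        """Label an image obscured when its histogram width falls below the threshold."""
        return "obscured" if histogram_width(image_bgr, parameter) < threshold else "non-obscured"

The verification step can be sketched in the same hedged spirit. The abstract does not name the skill and bias scores, so the critical success index (CSI) and frequency bias, two common contingency-table measures, stand in for them here, and a standard two-sample z statistic compares the bootstrapped score distributions.

    # Hedged sketch of the verification step. CSI and frequency bias are
    # stand-ins for the unnamed skill and bias scores; pred and truth are
    # arrays of 1 (obscured) and 0 (non-obscured) labels.
    import numpy as np

    def contingency(pred, truth):
        hits = np.sum((pred == 1) & (truth == 1))
        misses = np.sum((pred == 0) & (truth == 1))
        false_alarms = np.sum((pred == 1) & (truth == 0))
        correct_negatives = np.sum((pred == 0) & (truth == 0))
        return hits, misses, false_alarms, correct_negatives

    def csi(pred, truth):
        hits, misses, false_alarms, _ = contingency(pred, truth)
        return hits / (hits + misses + false_alarms)

    def frequency_bias(pred, truth):
        hits, misses, false_alarms, _ = contingency(pred, truth)
        return (hits + false_alarms) / (hits + misses)

    def bootstrap(score_fn, pred, truth, n_boot=10_000, seed=0):
        """Score n_boot resamples (with replacement) of the image labels."""
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(truth), size=(n_boot, len(truth)))
        return np.array([score_fn(pred[i], truth[i]) for i in idx])

    def z_statistic(train_scores, valid_scores):
        """Two-sample z statistic for the difference in mean bootstrapped scores."""
        diff = train_scores.mean() - valid_scores.mean()
        se = np.sqrt(train_scores.var(ddof=1) / train_scores.size
                     + valid_scores.var(ddof=1) / valid_scores.size)
        return diff / se

Given predicted and human-sorted label arrays for the two datasets, z_statistic(bootstrap(csi, pred_train, truth_train), bootstrap(csi, pred_valid, truth_valid)) reproduces the shape of the comparison described above; a p-value then follows from the standard normal distribution.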
Recommended Citation
Roelant, Patrick James, "Obscuration Analysis of Camera Imagery for Aviation Applications" (2023). Theses and Dissertations. 1362.
https://repository.fit.edu/etd/1362
Comments
Copyright held by author