Date of Award

5-2018

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Engineering and Sciences

First Advisor

Eraldo Ribeiro

Second Advisor

Samuel Kozaitis

Third Advisor

Veton Kepuska

Fourth Advisor

Carlos Otero

Abstract

The classification of pollen grains is a key task in many disciplines, including ecology, forensic science, allergy control, and oil exploration. In these application areas, visual inspection is the common method for classifying pollen. However, recognition by visual inspection is time-consuming and labor-intensive. Even though some level of automation exists, the automatic recognition of pollen grains remains an open problem. Current pollen-classification solutions rely on measurements of visual cues such as contour or texture. However, these features do not capture the complexity of the visual information needed to discriminate among pollen species, especially when both large-scale datasets and accurate classification are required. In this dissertation, we propose methods that go beyond this type of feature extraction.

First, we propose a novel method for recognizing pollen grains in images from light microscopy and scanning electron microscopy. This method is based on two classification stages, namely a texture classifier and a multi-layer cue decomposition. The texture-classifier stage categorizes pollen grains into a few broad subgroups, while the multi-layer decomposition constructs multiple layers of visual cues for each pollen-grain image.

Second, we use deep-learning techniques to obtain optimal features from the training data to describe the visual appearance of the pollen grains. To this end, we train convolutional neural networks. Data augmentation and dropout are used to reduce over-fitting and to enhance the network's classification ability. Furthermore, we improve classification performance using transfer learning, which leverages knowledge from networks pre-trained on large image datasets.

Third, we present a novel technique for pollen identification from sets of multifocal image sequences obtained from optical microscopy. We propose an algorithm that captures the variation across the entire volume of multifocal images using low-rank and sparse decomposition. We then build an appearance model of the z-stack volume by decomposing each slice of the sparse volume into multiple concentric regions, obtained by clustering gray-level intensities together with their polar coordinates. Pollen identification is then performed by matching sequences of multifocal appearance descriptors using the Longest Common Sub-Sequence (LCSS) algorithm.

Continuing with multifocal image sequences, we combine convolutional and recurrent neural networks (CNNs and RNNs) to learn optimal features and recognize a pollen grain represented by a stack of multifocal images. First, we transform the stacked images of each pollen sequence into single 2D images to train a CNN, which learns discriminative features. We then extract and aggregate the learned features from the trained CNN to create a sequence of features describing the stack of multifocal images. Finally, we use these feature sequences to train an RNN that classifies the multifocal pollen images as sequential data.
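
To illustrate the second contribution, the sketch below shows a typical transfer-learning setup in PyTorch: a network pre-trained on ImageNet is fine-tuned with data augmentation and a dropout layer. The backbone, class count, and hyper-parameters are assumptions for illustration, not the dissertation's actual configuration (a recent torchvision is assumed).

```python
# Illustrative sketch only (not the dissertation's code): fine-tuning a CNN
# pre-trained on ImageNet for pollen-grain classification, with data
# augmentation and a dropout layer to reduce over-fitting.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 30  # hypothetical number of pollen species

# Data augmentation: random crops, flips, and rotations enlarge the
# effective training set and reduce over-fitting.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(30),
    transforms.ToTensor(),
])

# Transfer learning: reuse ImageNet weights and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                               # dropout layer
    nn.Linear(model.fc.in_features, NUM_CLASSES),    # new pollen classifier
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```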
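
For the third contribution, the following sketch shows a standard low-rank and sparse decomposition (robust PCA via inexact ALM) applied to a matrix whose columns are vectorized focal slices of a z-stack; the solver and its default parameters are a common formulation, not necessarily the one used in the dissertation.

```python
# Illustrative sketch: split a data matrix D (columns = vectorized focal
# slices) into a low-rank part L and a sparse part S that captures the
# variation across the multifocal volume. Defaults are common heuristics.
import numpy as np

def shrink(X, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value thresholding: shrink the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def low_rank_sparse(D, n_iter=200, tol=1e-7):
    """Return (L, S) with D ~= L + S, L low rank and S sparse."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard sparsity weight
    mu = (m * n) / (4.0 * np.abs(D).sum() + 1e-12) # common step-size heuristic
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                           # Lagrange multipliers
    norm_d = np.linalg.norm(D, 'fro')
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual, 'fro') <= tol * norm_d:
            break
    return L, S
```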
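
The matching step of the same contribution can be sketched with the classic LCSS dynamic program over sequences of per-slice appearance descriptors; the distance threshold eps below is a hypothetical parameter, not a value from the dissertation.

```python
# Illustrative sketch of Longest Common Sub-Sequence (LCSS) matching between
# two z-stacks represented as sequences of appearance descriptors
# (one feature vector per focal slice).
import numpy as np

def lcss_score(seq_a, seq_b, eps=0.5):
    """Normalized LCSS similarity; higher means more similar stacks."""
    n, m = len(seq_a), len(seq_b)
    table = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Two slice descriptors "match" if they are closer than eps.
            a = np.asarray(seq_a[i - 1])
            b = np.asarray(seq_b[j - 1])
            if np.linalg.norm(a - b) < eps:
                table[i, j] = table[i - 1, j - 1] + 1
            else:
                table[i, j] = max(table[i - 1, j], table[i, j - 1])
    # Normalize by the shorter sequence so scores are comparable across stacks.
    return table[n, m] / min(n, m)

# Identification: assign a query stack to the reference species whose
# descriptor sequence yields the highest LCSS score.
```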
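
Finally, a minimal sketch of the CNN-plus-RNN idea for multifocal stacks, written here as a single end-to-end PyTorch module: the dissertation trains the CNN and the RNN in separate stages, so this module and its layer sizes are simplifying assumptions.

```python
# Illustrative sketch: a CNN encodes each focal slice and an RNN (here an
# LSTM) classifies the resulting feature sequence.
import torch
import torch.nn as nn
from torchvision import models

class MultifocalClassifier(nn.Module):
    def __init__(self, num_classes=30, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                      # keep 512-d slice features
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, stack):
        # stack: (batch, slices, channels, height, width)
        b, t, c, h, w = stack.shape
        feats = self.cnn(stack.reshape(b * t, c, h, w))  # per-slice CNN features
        feats = feats.reshape(b, t, -1)                  # sequence of slice features
        _, (h_n, _) = self.rnn(feats)                    # last hidden state
        return self.head(h_n[-1])                        # class scores

# Example: scores = MultifocalClassifier()(torch.randn(2, 10, 3, 224, 224))
```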

Comments

Copyright held by author
