Date of Award

12-2022

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Mathematical Sciences

First Advisor

Ryan T. White

Second Advisor

Joo Young Park

Third Advisor

Xianqi Li

Fourth Advisor

Gnana Bhaskar Tenali

Abstract

Neurological and neurodegenerative disorders such as Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and stroke can cause speech and orofacial motor impairments with devastating effects on quality of life. Analysis of orofacial movement provides vital information for early diagnosis and for tracking disease progression, but current clinical practice relies on perceptual assessments performed by clinicians, which are unreliable and insensitive to early symptoms. Recent advances in machine learning have enabled automatic, objective assessment of orofacial kinematics from color and depth videos. To this end, we introduce MEADepthCamera, a mobile application for RGB-D video and audio recording and automatic estimation of 3D facial movement. Utilizing the infrared depth cameras on the latest iPad and iPhone models and GPU-accelerated video processing, we create an accessible, multimodal system for orofacial assessment suitable for remote use, e.g., telemedicine and at-home disease monitoring. We also develop a protocol to evaluate the accuracy of the 3D orofacial kinematics extracted with MEADepthCamera. Finally, we discuss the use of MEADepthCamera in real-world clinical research, including a validation study on the clinical use of widely available depth cameras for estimating 3D orofacial kinematics. Our successful demonstration of MEADepthCamera as an automatic tool for integrated collection and analysis of 3D facial videos highlights the potential of video-based assessment of orofacial motor function delivered via mobile platforms.
