Authors

Robert W. Nickl

Document Type

Report

Abstract

Many virtual reality tasks incorporate auditory and haptic (touch) feedback in addition to visual displays of spatial information. Non-visual feedback modalities such as these increase realism, but they may also have the ancillary benefit of improving human motor performance and learning. One task for which this has been demonstrated is paddle juggling: vertically bouncing a ball off of a rigid elastic planar surface. For the objective of hitting the ball to a target height, visual feedback is sufficient. Yet previous studies have shown that non-visual feedback, specifically haptic feedback, is both sufficient in the absence of vision to maintain stable ball accuracy [1] and capable of complementing vision to lengthen the duration over which human jugglers can sustain such spatial accuracy [2].

The purpose of my research, supported by the Link Fellowship in Modeling, Simulation, and Training, was to better understand the roles of visual and non-visual feedback (defined here as haptic or audio cues): in particular, how each contributes to the human nervous system's processing of sensory information from the task and to its subsequent action-selection mechanisms for juggling. To perform this research, I developed a hard-real-time virtual reality paddle-juggling simulator in which participants juggled a ball in virtual reality by moving the handle of a haptic paddle [3]. The goal was explicitly spatial; namely, to bounce the ball to a given target height. However, audio and haptic cues were presented at ball-paddle collision events via a simultaneous buzzer beep and a force impulse delivered to the hand through the haptic paddle handle.

To achieve the target performance in this juggling task, the human operator must coordinate his or her arm muscles so that the paddle strikes the ball at an appropriate velocity and acceleration. Given the spatial goal and the improvement in accuracy that has been noted to accompany the introduction of non-visual feedback, the original hypothesis was that the brain interprets non-visual cues such as touch and sound as an ancillary measure of ball position. In other words, because non-visual cues occur at collision events, they signify the timing of these collisions, which can be mapped inversely to the preceding ball height by a simple model of ball kinematics that the brain knows both through basic knowledge of physics and through practice on this particular system [4-6].

To test this hypothesis, I exploited the hard-real-time nature of the simulator to perturb both visual and non-visual feedback and to measure the resulting response of the participant's arm movements. While ball flight and ball-paddle impacts were simulated using an accurate physics engine, feedback was restricted to the ball peaks (a visual ball flash) and to the ball-paddle collision events (a haptic and audio pulse). That is, information about performance was restricted to a small window about the ball apex position and to the ball-paddle contact point, in keeping with the information that is sufficient for skilled ball juggling. Visual perturbations were applied by displaying the ball higher or lower than the position dictated by the physics engine. Because audio and haptic feedback was not spatial, these non-visual cues were perturbed by advancing or delaying their times of incidence.
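
As a rough illustration of the kinematic mapping mentioned above (the full report details the actual model), consider a ball that leaves the paddle and returns to it at approximately the same height under gravitational acceleration g. Its flight time T between successive collisions then determines its peak height h above the paddle:

    h ≈ (1/2) g (T/2)^2 = g T^2 / 8

In principle, then, sensing only the timing of collisions is enough to infer the height the ball just reached.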
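
As a complement to the description of the perturbations above, the following minimal Python sketch shows one way the spatial (visual) and temporal (haptic/audio) perturbations could be parameterized; the function and variable names are illustrative assumptions and are not taken from the actual hard-real-time simulator.

    # Illustrative sketch only; not the simulator's hard-real-time code.
    # Visual feedback is perturbed spatially; non-visual feedback temporally.

    def displayed_flash_height(true_peak_height_m, visual_offset_m):
        """Show the ball-apex flash higher or lower than the physics engine dictates."""
        return true_peak_height_m + visual_offset_m

    def perturbed_cue_time(collision_time_s, timing_shift_s):
        """Advance (negative shift) or delay (positive shift) the haptic/audio pulse."""
        return collision_time_s + timing_shift_s

    # Example: display the apex 2 cm too high, and delay the collision cue by 30 ms.
    print(displayed_flash_height(0.50, 0.02))  # 0.52
    print(perturbed_cue_time(1.200, 0.030))    # 1.23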

Publication Date

9-30-2018

Comments

Link Foundation Fellowship for the years 2017-2018.

FORM Final Report Robert Nickl.pdf (93 kB)
Standard cover form for report
