Recognition of Dynamic Facial Expression in Point Light Displays

Yukari Takarae
Kent State University
and
Michael K. McBeath
Arizona State University


Abstract (120 words)
We investigated the roles of global and local motion in the recognition of dynamic facial expressions and measured recognition thresholds using point-light displays. We videotaped actors producing each of the six basic emotions while wearing black make-up with varying numbers of white dots to manipulate the amount of information available. For most expressions, recognition accuracy increased with the number of point lights from a threshold of about 14 lights, supporting reliance on global motion patterns produced by facial muscles. However, recognition accuracy for happy expressions was notably higher and unaffected by the number of point lights, and appeared to rely on characteristic local motion. These findings suggest that facial expression recognition is not a unitary process and that each expression may be conveyed by different perceptual information.