Signing Avatars & Immersive Learning (SAIL)

SAIL is an NSF-funded project housed within the Action & Brain Lab and Motion Light Labs at Gallaudet University. This project involves the development and testing of a new ASL learning environment. Signing avatars are created using motion capture recordings of deaf signers signing in ASL. 


Project Report, 2018-2021:

Improved resources for learning American Sign Language (ASL) are in high demand, yet there has been limited progress in using cutting-edge technologies to harness the visual-spatial nature of ASL for improved learning outcomes. Meanwhile, interactive speaking avatars have become valuable learning tools for spoken language instruction, but the potential educational uses of signing avatars have not been adequately explored. Our project, Signing Avatars & Immersive Learning (SAIL), investigated the feasibility of a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) teach users ASL in an immersive virtual environment. The project focused on developing and testing this entirely novel ASL learning tool while fostering the inclusion of underrepresented minorities in STEM.


This project leveraged the theory of embodied learning to design the SAIL system. We developed a way for new ASL users to enter an immersive 3D environment and interact with a “Teacher” avatar to learn introductory ASL. We used novel motion capture technology to record a native deaf signer and processed the recordings to produce high-fidelity representations of the signer’s hands, fingers, and facial expressions. The Teacher was placed in a virtual reality environment accessed via a head-mounted display (e.g., Oculus Rift S). Users entered the virtual reality environment, and their own movements were displayed via a gesture-tracking system. We intentionally included a three-dimensional view of the signed content so that users could better see and understand the handshapes and movements that are integral to signed communication.

Following the development of SAIL, we conducted a large-scale online rating study to gather information about how signing avatars are perceived by ASL users with various language backgrounds. We found that, overall, ASL users preferred our motion-capture avatar over a computer-synthesized avatar that moved in a more robotic way. We also identified several significant relationships between an ASL user’s own fluency and their acceptance of signing avatars. Overall, the SAIL team developed a proof-of-concept version of a novel ASL learning system and conducted a large-scale assessment of how potential users react to signing avatars. This research is critical for informing the future design of signing avatars. The project team pioneered the integration of multiple technologies (avatars, motion capture systems, virtual reality, and gesture tracking) with the goal of making progress toward an improved tool for sign language learning.



Meet the Team!

PI 

Dr. Lorna Quandt 

Director, Action & Brain Lab

Science Director, Motion Light Lab


Co-PI

Melissa Malzkuhn

Creative Director, Motion Light Lab

Grad Student 

Athena Willis 

Graphic Design & Art 

Yiqao Wang

Motion Capture Technician

Jason Lamberton 

3D Artist + Animator

Jianye Wang

The Motion Light Lab is a space where creative literature meets advanced digital technology to create new knowledge and learning. The SAIL project is a collaboration between the Motion Light Lab and the Action & Brain Lab.

Above, the SAIL team after motion capture filming was completed.

Above, Summer 2019 updates from the SAIL project, including design of avatar characters, and demonstrations of prototypes in virtual reality. Pictured are (L to R): Jason Lamberton, Dr. Laura-Ann Petitto, and Dr. Ben Bahan. 

Acknowledgement 

This material is based upon work supported by the National Science Foundation under Grant #1839379.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.