Researchers at the University of Texas, Arlington, are developing technology that will allow virtual reality (VR) headset users to see the real facial expressions of the people they interact with.
“This is one of the first devices that allow us to track human facial activity in detail, and it has a variety of potential applications ranging from [VR] gaming to healthcare,” said Professor VP Nguyen, a Texas computer scientist.
“The project bridges the gap between anatomical and muscular knowledge of the human face, and electrical and computational modeling techniques to develop analytical models, hardware and software libraries for sensing facial-based physiological signals.”
Nguyen is director of the university’s wireless and sensor systems lab, which focuses on building connected systems to monitor and improve human health and the environment. He has received a nearly $250,000 grant from the National Science Foundation for the VR project, which is part of a larger grant with University of Tennessee-Knoxville staff.
Today’s VR headsets remain bulky – despite significant R&D efforts by companies including Facebook to reduce their size, while maintaining immersion and a sharp image – and block most of the user’s face. Nguyen’s team has created a device that it describes as lightweight, unobtrusive, and privacy-friendly.
“Monitoring the activities of the human facial muscle in fine detail and reconstructing 3D facial expressions are challenging tasks,” said Professor Hong Jiang, chair of the university’s department of computer science and engineering. “This project will advance state-of-the-art on-ear detection techniques and transfer the learning across multiple detection modalities.
“The findings of this research could enable a wide range of applications ranging from virtual and augmented reality to emotion recognition in healthcare.”
The scientists hope that the device could have applications in speech therapy in particular. For example, doctors and therapists could use it to work with remote patients who have had a stroke or who have other speech impairments, determining exactly which muscles are affected and tailoring treatment accordingly. The device could also be paired with phones in noisy environments such as subways or bars, where speech recognition technology performs poorly.
There is hope among healthcare professionals that extended-reality technologies such as VR and AR could have significant benefits for patients with mental health, neurological and developmental disorders. For example, a mobile VR game may provide warning signs of dementia, while a controlled, immersive environment called “The Blue Room” has helped children with autism overcome some of the phobias exacerbated by the condition.