Researchers at Columbia Engineering have used AI to train robots to respond appropriately to human facial expressions, an ability that can build trust between humans and their robotic counterparts, according to the project website.

While facial expressions play a huge role in building trust, most robots still wear a blank, static look. With robots increasingly deployed in settings where they must work closely with humans, from nursing homes to warehouses and factories, the need for more responsive, lifelike robots is becoming more pressing.

Researchers at the Creative Machines Lab at Columbia Engineering have worked for five years to create EVA, a new autonomous robot with a soft and expressive face that responds to match the expressions of nearby people.

“The idea for EVA took shape a few years ago when my students and I began to notice that robots in our lab were looking at us through plastic eyes,” recalls Hod Lipson, professor of innovation.

Lipson noticed a similar trend at the grocery store, where he came across shelf-restocking robots wearing name badges and, in one case, a cozy hand-knitted cap. “People seemed to humanize their fellow robots by giving them eyes, a personality or a name,” says the scientist. “It got us thinking: if eyes and clothes work, why not create a robot with a super-expressive and responsive human face?”

The first phase of the project began in Lipson’s lab a few years ago, when undergraduate student Zanwar Faraj led a team to create the robot’s physical mechanism. They designed EVA as a disembodied bust, much like the silent but animated Blue Man Group performers. EVA can express the six basic emotions (anger, disgust, fear, joy, sadness and surprise) as well as many subtler ones, using artificial “muscles” that pull on specific points of the face, mimicking the movements of the more than 42 tiny muscles attached at various points to the skin and bones of the human face.
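The idea of composing expressions from a small set of facial “muscles” can be sketched, purely illustratively, as a table of per-actuator activation levels for each basic emotion, with blends producing the subtler expressions. The actuator names, counts and weights below are invented for the sketch; the article does not specify EVA’s actual actuator layout.

```python
import numpy as np

# Illustrative only: a toy mapping from the six basic emotions to
# activation levels (0..1) of a few named facial actuators. The names
# and numbers here are invented; EVA's real mechanism pulls on
# specific points of a soft face.
ACTUATORS = ["brow_raise", "brow_furrow", "lip_corner_up",
             "lip_corner_down", "jaw_drop", "nose_wrinkle"]

BASIC_EXPRESSIONS = {
    "joy":      np.array([0.3, 0.0, 1.0, 0.0, 0.2, 0.0]),
    "sadness":  np.array([0.6, 0.2, 0.0, 0.9, 0.0, 0.0]),
    "anger":    np.array([0.0, 1.0, 0.0, 0.5, 0.1, 0.3]),
    "fear":     np.array([1.0, 0.3, 0.0, 0.4, 0.7, 0.0]),
    "surprise": np.array([1.0, 0.0, 0.2, 0.0, 0.9, 0.0]),
    "disgust":  np.array([0.1, 0.6, 0.0, 0.6, 0.0, 1.0]),
}

def blend(weights):
    """Mix basic expressions into a subtler one, e.g. mostly joy with
    a touch of surprise. Activations are clipped to the valid range."""
    out = sum(w * BASIC_EXPRESSIONS[name] for name, w in weights.items())
    return np.clip(out, 0.0, 1.0)

# A "pleasant surprise" as a weighted mix of two basic expressions.
pleasant_surprise = blend({"joy": 0.7, "surprise": 0.3})
```

Blending a handful of basic patterns is one simple way a small actuator set can cover a much larger space of subtle expressions.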

“The biggest challenge with EVA was designing a system that was compact enough to fit within a human skull, yet functional enough to reproduce a wide range of facial expressions,” Faraj said.

Once the team was satisfied with EVA’s mechanics, they embarked on the second main phase of the project: programming the artificial intelligence that would control EVA’s facial movements. While lifelike animatronic robots have been used in theme parks and movie studios for years, Lipson’s team made two technological advances. EVA uses deep learning to “read” and then mirror the expressions on nearby human faces, and it learns to mimic a wide variety of expressions through trial and error, by watching videos of itself.
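The “learning by watching itself” stage can be illustrated with a minimal self-supervised sketch: the robot issues random motor commands, observes the resulting facial landmarks in its own video feed, and fits an inverse model (landmarks to motor commands) that it can then apply to an expression observed on a human face. Everything here is an assumption for illustration, including the linear face model, the motor and landmark counts, and the function names; EVA’s actual networks and training setup are not described in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MOTORS = 12        # soft-face actuators (illustrative count)
N_LANDMARKS = 20     # tracked facial landmark coordinates (illustrative)

# Unknown face mechanics: motor commands -> landmark positions.
# This stands in for the real robot plus camera; the learner never
# inspects TRUE_MIX directly, only its observations.
TRUE_MIX = rng.normal(size=(N_LANDMARKS, N_MOTORS))

def observe_self(motors):
    """Simulated self-observation: landmarks produced by motor commands."""
    return TRUE_MIX @ motors

# "Trial and error": babble random motor commands and record what the
# face does in the robot's own video feed.
motor_trials = rng.normal(size=(500, N_MOTORS))
landmark_obs = np.array([observe_self(m) for m in motor_trials])

# Fit the inverse model by least squares: landmarks -> motor commands.
inverse_model, *_ = np.linalg.lstsq(landmark_obs, motor_trials, rcond=None)

def mimic(target_landmarks):
    """Map an observed expression (as landmarks) to motor commands."""
    return target_landmarks @ inverse_model

# Check: commands chosen for a new target expression reproduce its
# landmarks on the simulated face.
target = observe_self(rng.normal(size=N_MOTORS))
commands = mimic(target)
print(np.allclose(observe_self(commands), target, atol=1e-6))
```

The design point the sketch captures is that no human labels the training data: the robot’s own video of itself provides both the input (landmarks) and the answer (the motor commands that produced them).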