
Scientists are one step closer to giving an android child human-like facial expressions

Date: November 15, 2018
Source: Osaka University
Summary:
Android faces must express greater emotion if robots are to interact with humans more effectively. Researchers tackled this challenge as they upgraded their android child head, named Affetto. They precisely examined Affetto’s facial surface points and the balancing of the different forces necessary to achieve more human-like motion. Through mechanical measurements and mathematical modeling, they used their findings to greatly enhance Affetto’s range of emotional expression.

Japan’s affection for robots is no secret. But is the feeling mutual in the country’s amazing androids? We may now be a step closer to giving androids a greater range of facial expressions with which to communicate.

While robots have featured in advances in healthcare, industry, and other settings in Japan, capturing human-like expression in a robotic face remains an elusive challenge. Although androids’ system properties have been generally addressed, their facial expressions have not been examined in detail. This is owing to factors such as the huge range and asymmetry of natural human facial movements, the restrictions of the materials used in android skin, and of course the intricate engineering and mathematics that drive robots’ movements.

A trio of researchers at Osaka University has now found a method for identifying and quantitatively evaluating facial movements on their android robot child head, named Affetto. The android’s first-generation model was reported in a 2011 publication. The researchers have now developed a system that makes the second-generation Affetto more expressive. Their findings offer a path for androids to express greater ranges of emotion, and ultimately to interact more deeply with humans.

The researchers reported their findings in the journal Frontiers in Robotics and AI.

“Surface deformations are a key issue in controlling android faces,” study co-author Minoru Asada explains. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it.”

The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called deformation units, each comprising a set of mechanisms that create a distinctive facial contortion, such as lowering or raising part of a lip or eyelid. Measurements from these units were then fed into a mathematical model to quantify their surface motion patterns.
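To make the general idea concrete, here is a minimal Python sketch, not the authors’ actual method or code, of how measured facial-point displacements might be fit against deformation-unit commands. The number of units, the synthetic trial data, and the assumption of a simple linear command-to-displacement relationship are all illustrative stand-ins.

```python
# Minimal sketch (not the authors' code): identifying a linear map from
# deformation-unit commands to measured 3-D displacements of facial points.
# All names, array shapes, and the linearity assumption are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 116   # facial measurement points, as in the study
N_UNITS = 5      # hypothetical number of deformation units

# Synthetic stand-in data: each row is one actuation trial.
# U holds per-trial deformation-unit commands (n_trials x N_UNITS);
# X holds the measured 3-D displacement of every facial point,
# flattened to (n_trials x 3*N_POINTS).
n_trials = 40
U = rng.uniform(0.0, 1.0, size=(n_trials, N_UNITS))
true_map = rng.normal(size=(N_UNITS, 3 * N_POINTS))
X = U @ true_map + 0.01 * rng.normal(size=(n_trials, 3 * N_POINTS))

# Least-squares identification of the command-to-displacement map.
W, residuals, rank, _ = np.linalg.lstsq(U, X, rcond=None)

# Predict the surface motion pattern for a new command vector.
u_new = np.array([0.2, 0.8, 0.0, 0.5, 0.1])
predicted = (u_new @ W).reshape(N_POINTS, 3)
print(predicted.shape)  # (116, 3): x, y, z displacement per facial point
```

In reality, a soft synthetic skin responds nonlinearly and depends on applied force, which is part of why the researchers needed careful mechanical measurements; the linear fit above only illustrates the general notion of mapping actuation commands to surface motion.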

While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to employ their system to adjust the deformation units for precise control of Affetto’s facial surface motions.

“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” study first author Hisashi Ishihara says. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”

Journal Reference:

  1. Hisashi Ishihara, Binyi Wu, Minoru Asada. Identification and Evaluation of the Face System of a Child Android Robot Affetto for Surface Motion Design. Frontiers in Robotics and AI, 2018; 5. DOI: 10.3389/frobt.2018.00119