From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning

Adrià Ruiz, Joost Van de Weijer and Xavier Binefa



Limited annotated training data is a challenging problem in Action Unit (AU) recognition. In this paper, we investigate how large databases labelled according to the six universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning (HTL). HTL aims to learn a set of Hidden-Tasks (Action Units) for which no training samples are available but which are related to a set of Visible-Tasks (facial expressions) for which training data is easier to obtain. To that end, HTL exploits prior knowledge about the relation between Hidden and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), which assumes that Action Unit training samples are also provided. In exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. We also show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which address the problem of limited training data by using unlabelled test samples during training.
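The core idea of learning hidden-task (AU) classifiers from visible-task (expression) labels through a known prior can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual model: the prior matrix `P`, the synthetic data, the sigmoid/softmax parameterization, and the plain gradient-descent loop are all hypothetical stand-ins for the framework described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative sizes (assumptions, not the paper's setup) ---
n, d = 300, 5        # samples, feature dimension
K, E = 4, 3          # hidden tasks (AUs), visible tasks (expressions)

# Assumed prior P[e, k]: how strongly AU k co-occurs with expression e.
# In the paper this knowledge comes from psychological studies.
P = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Synthetic data: expressions are generated from latent AU activations.
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, K))
h_true = sigmoid(X @ W_true)                 # ground-truth AU activations
y = np.argmax(h_true @ P.T, axis=1)          # observed expression labels
onehot = np.eye(E)[y]

def forward(W):
    h = sigmoid(X @ W)                       # hidden-task (AU) probabilities
    z = h @ P.T                              # combine with prior -> expression scores
    z = z - z.max(axis=1, keepdims=True)     # stable softmax
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return h, p

# Train AU classifiers W using ONLY expression labels y and the prior P:
# minimize softmax cross-entropy on the visible task by gradient descent.
W = np.zeros((d, K))
for _ in range(500):
    h, p = forward(W)
    dz = (p - onehot) / n                    # gradient w.r.t. expression scores
    ds = (dz @ P) * h * (1.0 - h)            # back through the prior and sigmoid
    W -= 2.0 * (X.T @ ds)                    # gradient step on AU classifiers

_, p = forward(W)
acc = np.mean(np.argmax(p, axis=1) == y)
```

Note that no AU labels are ever used: the AU classifiers are supervised only indirectly, through the expression labels and the fixed prior, which is the essence of the Hidden-Task setting; SHTL would additionally add a supervised loss on whatever AU-labelled samples are available.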



[paper] [supp. material]


Ruiz, Adrià, Joost Van de Weijer, and Xavier Binefa. "From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning." International Conference on Computer Vision (ICCV), 2015.