
 

European Science Foundation Workshop on Emotions and New Media, Hull 2002

From emoticons to avatars: the simulation of facial expression

Karl Grammer*, Angelika Tessarek* and Gregor Hofer**

*Ludwig-Boltzmann-Institute for Urbanethology Vienna/Austria

**Rochester Institute of Technology, Rochester, N.Y.

Grammer, K., Oberzaucher, E., & Schmehl, S. (2011). Embodiment and expressive communication on the internet. In: Kappas, A. & Kraemer, N.C. (Eds.) Face-to-Face Communication over the Internet (pp. 237-279). Cambridge: Cambridge University Press.

 

Why emotions in computers?

In their book "The Media Equation" (1996), Reeves and Nass present research results which suggest that people treat computers as if they were real people. This, in turn, could also mean that people would appreciate being treated by computers in ways that are basically social.

The use of emoticons such as :-) in text-based communication on the internet underlines this process. Yet the use of emoticons is limited, because their expressiveness cannot completely cover the subtlety of human facial expression and they do not allow for intermediate stages. Moreover, they can become quite complex and thus sometimes difficult to interpret.

In this article we will present an overview of different systems that simulate facial expression and outline the development of a new system. In addition, we will delimit the specific research questions that would form the basis of such a system and its implementation in simulations.

The history of facial expression simulation

The task of implementing such a system faces many difficulties, especially the high diversity of emotion and appraisal theories. Basically, two tasks can be identified: the implementation of a control architecture, i.e. how emotions and the corresponding facial expressions are generated on the avatar, and the facial animation itself.

Essentially all current face models produce rendered images based on polygonal surfaces. Some of the models make use of surface texture mapping to increase realism. The facial surfaces are controlled and manipulated using one of three basic techniques: 3D surface interpolation, ad hoc surface shape parameterisation, and physically based modelling with pseudo-muscles. In this part we will compare different implementation techniques of facial expression on avatars and discuss the pros and cons of each method. The conclusion is that the basic deficiencies of these early models have not been solved to date. First, no model so far includes the complete set of muscle actions; second, on the simulation side there is no coherent theory of facial expression and its relation to emotion that would allow simple playback of the expressions.

Facial Animation: Anatomical implementation of our system

Physically based models attempt to model the shape changes of the face by modelling the properties of facial tissue and muscle actions. Most of these models are based on spring meshes or spring lattices, with muscle actions approximated by various force functions. These models often use subsets of the FACS system to specify the muscle actions (see the upper face, the middle face and the lower face).
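
As an illustration of this class of models, the following minimal sketch implements a spring mesh in which a muscle action is approximated by an external force vector per vertex. All constants and names are our own assumptions for illustration; they do not reproduce any particular published model.

    import numpy as np

    REST_LENGTH = 1.0   # assumed rest length of every spring
    STIFFNESS = 0.8     # assumed spring constant k
    DAMPING = 0.9       # assumed velocity damping per step
    DT = 0.05           # integration time step

    def spring_force(p_a, p_b, k=STIFFNESS, rest=REST_LENGTH):
        """Hooke's-law force exerted on vertex a by the spring a-b."""
        delta = p_b - p_a
        dist = np.linalg.norm(delta)
        if dist == 0.0:
            return np.zeros_like(delta)
        return k * (dist - rest) * (delta / dist)

    def step(positions, velocities, springs, muscle_forces):
        """One explicit-Euler step: springs pull the mesh back towards its
        rest shape while muscle force vectors (one per vertex) deform it,
        in the spirit of the force-function approximations described above."""
        forces = np.zeros_like(positions)
        for a, b in springs:
            f = spring_force(positions[a], positions[b])
            forces[a] += f
            forces[b] -= f
        forces += muscle_forces
        velocities = DAMPING * (velocities + DT * forces)
        return positions + DT * velocities, velocities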


Even the best current physically based facial models use relatively crude approximations to the true anatomy of the face. In this part we will demonstrate how to implement a complete muscle set on the basis of the Facial Action Coding System by P. Ekman and W. Friesen and compare this system to other existing implementations. The complete set of Action Units and Action Descriptors was implemented as appearance changes of the surface with 44 morph targets, using a base mesh provided by DigitalArtZone. We will discuss the implementation of the system, its limitations and the problems that arise from interactions between morph targets.
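
The morph-target approach itself reduces to the standard linear blend-shape formula: every Action Unit stores per-vertex offsets from the base mesh, and its contraction level acts as a blending weight. The sketch below shows this general scheme; the AU offsets and the toy three-vertex mesh are purely illustrative and do not reproduce our implementation.

    import numpy as np

    def blend_morph_targets(base_mesh, morph_deltas, weights):
        """Linear blend shapes: add each morph target's per-vertex offsets,
        scaled by the Action Unit's contraction level (0..1)."""
        result = base_mesh.astype(float).copy()
        for au, delta in morph_deltas.items():
            result += weights.get(au, 0.0) * delta
        return result

    # Hypothetical usage with a three-vertex toy mesh and two AUs.
    base = np.zeros((3, 3))
    deltas = {
        "AU12": np.array([[0.10, 0.00, 0.0]] * 3),   # lip corner puller (made-up offsets)
        "AU4":  np.array([[0.00, -0.05, 0.0]] * 3),  # brow lowerer (made-up offsets)
    }
    face = blend_morph_targets(base, deltas, {"AU12": 0.7, "AU4": 0.2})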

Control Architecture: Emotion theories, appraisal and emotion simulation

An implementation of a facial expression system on an avatar makes it possible to tackle new research questions. In this part we will discuss different emotion theories and their relation to facial expression. This chapter starts with a review of emotion theories and the function of emotions and then proceeds to attempts to simulate emotions on computers.


The scope of the review reaches from categorical, discrete approaches, which use basic emotions, to componential approaches, which describe single muscle actions. The conclusion is that most emotion theories have an appraisal part and a facial expression part. This conclusion provides the basis for general emotion simulation. On the one hand, we could construct expert systems which relate discrete emotions, like fear, happiness, anger and disgust, in various intensities and combinations to external events. On the other hand, we could construct a system where componential, general internal states like arousal and pleasure control the contraction of each Action Unit. The second approach, although theoretically probably no more correct than the others, has the advantage of simple algorithmic implementation. We will show that a facial expression system can be implemented as a Dual Dynamic System with only a few internal state variables, which in turn are triggered by an appraisal system. The appraisal system uses a checklist for external events and creates arousal and pleasure scores. These scores can then be used to drive facial expressions.
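
The following sketch illustrates this two-stage control idea: a checklist appraisal of external events updates two internal state variables (pleasure and arousal), which in turn drive Action Unit contractions. The checklist entries, decay constant and AU loadings are invented for illustration and are not the values used in our system.

    # event -> (pleasure delta, arousal delta); entries are illustrative.
    APPRAISAL_CHECKLIST = {
        "goal_achieved":  (+0.6, +0.3),
        "goal_blocked":   (-0.5, +0.4),
        "novel_stimulus": ( 0.0, +0.5),
    }

    # AU -> (pleasure weight, arousal weight); loadings are illustrative.
    AU_LOADINGS = {
        "AU12": (+0.8, +0.2),   # lip corner puller loading on pleasure
        "AU4":  (-0.6, +0.3),   # brow lowerer loading on negative pleasure
    }

    DECAY = 0.95                # internal states relax towards neutral

    class DualDynamicSystem:
        def __init__(self):
            self.pleasure = 0.0
            self.arousal = 0.0

        def appraise(self, event):
            """Checklist appraisal: an external event shifts the two
            internal state variables, clamped to [-1, 1]."""
            dp, da = APPRAISAL_CHECKLIST.get(event, (0.0, 0.0))
            self.pleasure = max(-1.0, min(1.0, self.pleasure + dp))
            self.arousal = max(-1.0, min(1.0, self.arousal + da))

        def tick(self):
            """Per-frame decay of both internal states."""
            self.pleasure *= DECAY
            self.arousal *= DECAY

        def au_contractions(self):
            """Internal states drive each Action Unit via its loadings."""
            return {au: max(0.0, wp * self.pleasure + wa * self.arousal)
                    for au, (wp, wa) in AU_LOADINGS.items()}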

Facial expression simulation: the componential approach

This part describes an experiment we conducted with the facial expression system described above. In this experiment, 200 subjects rated 4500 different faces with random muscle contractions on pleasure, arousal and dominance scales. With principal component analysis we are then able to describe the relation of each muscle or Action Unit to pleasure and arousal. We will then show that this system is able to produce facial expressions which users can interpret in a coherent way.
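
The following is a sketch of the analysis step, assuming a matrix of random contraction levels (faces x AUs) and one mean pleasure and arousal rating per face. The data below are synthetic stand-ins for the study's ratings; only the procedure (standardise, correlate, extract principal components) mirrors the analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    n_faces, n_aus = 4500, 44
    contractions = rng.random((n_faces, n_aus))   # random muscle settings shown to raters
    ratings = rng.standard_normal((n_faces, 2))   # stand-in columns: pleasure, arousal

    # Correlate each AU's contraction level with each rating dimension.
    X = (contractions - contractions.mean(0)) / contractions.std(0)
    Y = (ratings - ratings.mean(0)) / ratings.std(0)
    loadings = X.T @ Y / n_faces                  # (44 x 2): each AU's weight on P and A

    # Principal components of the contraction matrix give the main axes of
    # variation spanned by the rated faces.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    components = vt[:2]                           # first two components over the 44 AUs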

Furthermore, we will discuss the shortcomings of the system. Although the system seems to be accurate and of communicative value, it does not produce all of the described basic emotions. We will therefore suggest an extension to the componential theory of facial expression, which consists essentially in the existence of activation algorithms for muscle units. If such an algorithm is introduced, the system is also able to produce basic emotions. This finding has considerable impact on theoretical approaches to facial expression analysis and interpretation.
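
One way to read such an activation algorithm, under our own assumptions, is as a thresholded non-linearity between internal drive and muscle contraction: an AU stays near zero until its pleasure/arousal drive passes a threshold and then contracts steeply, which lets discrete, basic-emotion-like configurations emerge from continuous states. The threshold and gain values below are illustrative only.

    import math

    def activate(drive, threshold=0.3, gain=8.0):
        """Sigmoid activation for one muscle unit: near zero below the
        threshold, saturating towards full contraction above it.
        Threshold and gain are assumed, not fitted, values."""
        return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))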


This picture shows the results of the study: the ratings of single muscle contractions in an arousal and pleasure space. We depict those AUs which show a positive (+) or negative (-) significant correlation with pleasure (P) and arousal (A).

The two following pictures show the pleasure and arousal spaces for two Action Units, derived from principal component analysis of the data from the study mentioned above.


The last picture shows the complete circumplex pleasure-arousal space (A is arousal, P is pleasure). This system can be used to create real-time appraisal systems on a very simple basis. Compare the original circumplex model by James A. Russell.
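
Reading the circumplex geometrically suggests a very simple driver: a point on the circle, given by an angle from the positive pleasure axis and an intensity, yields pleasure and arousal scores that can feed the AU loadings above. The sketch below is our own minimal interpretation, not Russell's formalisation.

    import math

    def circumplex_to_state(angle_deg, intensity):
        """Map a circumplex point (angle from the positive pleasure axis,
        intensity in 0..1) to pleasure and arousal scores."""
        rad = math.radians(angle_deg)
        return intensity * math.cos(rad), intensity * math.sin(rad)

    # e.g. 45 degrees at high intensity: high pleasure and high arousal.
    p, a = circumplex_to_state(45, 0.8)
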
AppleMark
Features under development

The final part will deal with possible extensions of the system. It will give the rationale, drawn from non-verbal behaviour research, for including head tilts and turns, eye movements, and breathing style as carriers of emotional information. We will also show how these features can be linked to the simulation above.

These results solve the problem of mapping emotions to facial expressions, a mapping which is usually taken for granted and rarely tackled with empirical data in the fields of emotion simulation and the construction of embodied systems.

Below is a screenshot of our animation program, which is based on the methods outlined above and uses Poser 4 by Curious Labs. The sliders can be used to create any combination of pleasure and arousal, or to introduce and mix basic emotions. A real-time application is in preparation. Emotions can also be animated by adding any combination at any point in time. Here, the duration and form of onset, apex and offset can be specified. The program then calculates a non-linear motion curve between the frames.
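
The exact shape of the program's motion curve is not reproduced here; as a stand-in, the sketch below keyframes one Action Unit with distinct onset, apex and offset phases using a smoothstep easing curve, which is one common choice of non-linear interpolation.

    def smoothstep(t):
        """Non-linear easing between 0 and 1 (zero slope at both ends)."""
        return t * t * (3.0 - 2.0 * t)

    def au_intensity(t, onset, apex, offset, peak=1.0):
        """Intensity of an AU at time t for an onset-apex-offset envelope."""
        if t < 0.0:
            return 0.0
        if t < onset:                            # rising phase
            return peak * smoothstep(t / onset)
        if t < onset + apex:                     # hold at apex
            return peak
        if t < onset + apex + offset:            # falling phase
            return peak * (1.0 - smoothstep((t - onset - apex) / offset))
        return 0.0

    # Hypothetical frame loop at 25 fps for a two-second smile (AU12).
    frames = [au_intensity(f / 25.0, onset=0.4, apex=0.8, offset=0.8)
              for f in range(50)]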

Examples

The program allows any mixture of emotions and muscle movements at different intensities - even some that do not appear in reality.

Categorical Sadness and Anger

Categorical Fear and Surprise

Categorical Happiness and Anger

Categorical Happiness and Disgust

UNIVERSITY OF VIENNA

all rights reserved karl.grammer@univie.ac.at