BreMM19 | Rothenhöfer

Andreas Rothenhöfer
Bremen University | Bremen, Germany


Co-verbal facial gestures and automated facial expression recognition – theoretical models, technical solutions and challenges in multimodal interaction analysis

Automated biometric research tools make it possible to align verbal and nonverbal datasets without the enormous person-hours otherwise consumed by preparatory chores such as manual facial coding. With the generous support of the University of Bremen’s Central Research Development Fund, I have obtained a personal licence for the iMotions Biometric Research Platform and its integrated Affdex facial expression analysis engine. My paper discusses my postdoc project outline: its general aims of analysis, considerations of corpus design, theory-based assumptions and hypotheses, as well as data-driven mixed-method insights into co-verbal facial gesture types and their contextualised and abstract functions. My current research draws primarily on data from multimodal interactions in moderated, multi-addressed media discourse (e.g. election debates and talk-show discussions); a later stage of research may include reviewing the corpus data through further biometric respondent studies.
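The alignment of verbal and nonverbal data streams mentioned above can be sketched in a few lines. The following is a minimal, hypothetical Python example, assuming the facial-analysis engine exports frame-level (timestamp, score) pairs and the transcript supplies word-level time intervals; the data layout and names are illustrative only and do not reflect the actual iMotions or Affdex export format.

```python
# Hypothetical sketch: averaging frame-level facial-expression scores
# over word-level transcript intervals. All data formats here are
# invented for illustration, not the real iMotions/Affdex export.

def align(transcript, frames):
    """For each (start, end, word) interval, average the scores of all
    frames whose timestamp falls inside that interval."""
    aligned = []
    for start, end, word in transcript:
        scores = [score for t, score in frames if start <= t < end]
        mean = sum(scores) / len(scores) if scores else None
        aligned.append((word, mean))
    return aligned

# Invented example data: two transcribed words, four analysed video frames.
transcript = [(0.0, 0.5, "well"), (0.5, 1.2, "actually")]
frames = [(0.0, 0.1), (0.2, 0.3), (0.6, 0.9), (1.0, 0.7)]
print(align(transcript, frames))
```

For larger corpora, a time-indexed join (e.g. pandas' `merge_asof`) would replace this naive interval scan, but the principle is the same.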

I will attempt to cover conceptual distinctions, methodological considerations and practical experiences concerning the level of (dis-)agreement between theory-based, hermeneutic and technological findings. Can theoretical distinctions made in (facial) gesture research and facial expression models be verified through corpus-oriented research and biometric facial analysis? Is ‘natural’ human perception and understanding of facial behaviour consistent with biometric data? Are significant verbal patterns of behaviour related to significant facial behaviour, or are they rather complementary? Particular areas of research interest include:

  • the semiotic relation and interaction between verbal and nonverbal modes of communicative code (repetition, modification, specification, intensification, contradiction);
  • aspects and parameters of multimodal code variation in particular constellations of actors;
  • interactional and receptive functions or effects of facial displays in discourse, including the interactional and influencing repertoire of the silent bystander in a debate on his or her respective co-panellists and audiences;
  • the distinction and integration of emotive and communicative facial displays and their discourse-related functions.

On the more technical side, my paper will address data preparation, technical requirements (resolution, camera angle, multiple object selection) and the limitations of the technological solutions applied. Demonstrations and discussions of practical experiences with available annotation and documentation tools, as well as perceived desiderata for a more streamlined integration of multimodal data sources and their analysis, will provide a brief outlook.
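One way to quantify the level of (dis-)agreement between hermeneutic (human) coding and automated classification discussed above is a chance-corrected agreement measure. The following is a minimal sketch of Cohen's kappa over frame-by-frame categorical labels; the label inventory and the data are purely invented for illustration, and a real study would use an established implementation (e.g. scikit-learn's `cohen_kappa_score`).

```python
# Hypothetical sketch: percent agreement corrected for chance (Cohen's
# kappa) between a human coder's frame labels and an automated
# classifier's labels. Labels and data are invented for illustration.

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of category labels."""
    assert len(a) == len(b) and a
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement: proportion of frames where both coders concur.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from the two coders' label distributions.
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

human = ["smile", "neutral", "smile", "frown", "neutral", "smile"]
auto  = ["smile", "neutral", "neutral", "frown", "neutral", "smile"]
print(round(cohens_kappa(human, auto), 3))  # → 0.739
```

A kappa near 1 indicates that the automated engine reproduces the human coding well beyond chance; values near 0 signal that apparent agreement is what the label distributions alone would predict.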

References

Bateman, John/Wildfeuer, Janina/Hiippala, Tuomo (Eds.) (2017): Multimodality. Foundations, Research and Analysis. A Problem-Oriented Introduction. Berlin/Boston: De Gruyter. 

Bavelas, J. B., & Chovil, N. Some pragmatic functions of conversational facial gestures. Accepted for Gesture. 

Bavelas, J. B., Gerwing, J., & Healing, S. (2017). Doing mutual understanding. Calibrating with micro-sequences in face-to-face dialogue. Journal of Pragmatics, 121, 91-112. 

Bavelas, J. B., Gerwing, J., Healing, S., & Tomori, C. (2016). Microanalysis of Face-to-face Dialogue. An Inductive Approach. In C. A. VanLear & D. J. Canary (Eds.), Researching communication interaction behavior: A sourcebook of methods and measures (pp. 129-157). Thousand Oaks, CA: Sage. 

Stukenbrock, Anja (2015): Deixis in der Face-to-Face-Interaktion. Berlin/München/Boston: de Gruyter. 

Ekman, Paul/Friesen, Wallace V. (2003): Unmasking the Face. A Guide to Recognizing Emotions from Facial Clues. Cambridge, MA. 

Fricke, Ellen (2012): Grammatik Multimodal. Wie Wörter und Gesten zusammenwirken. Berlin/Boston: de Gruyter. 

Müller, Cornelia et al. (Eds.) (2013-2014): Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. 2 volumes. Berlin/Boston: De Gruyter Mouton. 

Ricci Bitti, Pio E. (2014): Facial expression and social interaction. In: Müller, Cornelia et al. (Eds.): Body – Language – Communication. (HSK 38/2). Berlin/New York: de Gruyter, 1342-1349. 

Rothenhöfer, Andreas (2018): Diskurslinguistik und Emotionskodierung. In: Warnke, Ingo (Hg.): Handbuch Diskurs. Reihe: Handbücher Sprachwissen (HSW) 6. Berlin, New York: De Gruyter Mouton, 488-520. 

Biography

  • Since Oct 2010: Lecturer in German Linguistics, Faculty of Languages and Literatures / Department of German Linguistics, University of Bremen; head of the German Study Commission  
  • PhD in German linguistics on the linguistic construction of the end of WWII in German public discourse 
  • University of Heidelberg: Lecturer in linguistics for Collective Memory Studies  
  • University Assistant (replacement), Chair of Historical German Linguistics, Department of German, University of Heidelberg 
  • Institut für Deutsche Sprache, Mannheim: Research associate in a project examining German protest language, 1967-1968 
  • International Office, University of Heidelberg 
  • MA and PGCE in German and English languages and literatures  
  • Foreign Language Assistant in Tunbridge Wells, Kent  
  • Research interests include: discourse linguistics, lexicology and lexicography; semantics, contrastive linguistics, language and emotion research, multimodal interaction analysis, cognitive linguistics, construction grammar, corpus linguistics, pragmatics  