HUMAVIPS (http://humavips.inrialpes.fr/)

Best Paper Award at IEEE MMSP'13
Radu Horaud (http://perception.inrialpes.fr/~Horaud), 2013-10-04

The article "Alignment of Binocular-Binaural Data Using a Moving Audio-Visual Target" received the Best Paper Award at the IEEE International Workshop on Multimedia Signal Processing (MMSP'13), Pula, Italy, September-October 2013. The paper is authored by Vasil Khalidov (IDIAP), Radu Horaud (INRIA), and Florence Forbes (INRIA).

The paper addresses the problem of aligning visual and auditory data using a sensor composed of a camera pair and a microphone pair. Its original contribution is a method for aligning audio-visual data by estimating the 3D positions of the microphones in the visually centred coordinate frame defined by the stereo camera pair.
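
The core geometric idea can be sketched briefly: a moving audio-visual target provides, at each time step, a 3D position from the stereo pair and a time difference of arrival (TDOA) between the microphones, so the microphone positions can be recovered by nonlinear least squares, making predicted TDOAs match observed ones. The code below is a minimal illustration on synthetic data, not the authors' implementation; the function names, initialization, and noise model are all hypothetical.

```python
# Minimal sketch of the alignment idea (synthetic data, hypothetical setup).
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in m/s, assumed constant


def residuals(params, targets, tdoas):
    """Predicted minus observed TDOAs for candidate microphone positions."""
    m1, m2 = params[:3], params[3:]
    d1 = np.linalg.norm(targets - m1, axis=1)  # target-to-microphone-1 distances
    d2 = np.linalg.norm(targets - m2, axis=1)  # target-to-microphone-2 distances
    return (d1 - d2) / C - tdoas


def estimate_microphones(targets, tdoas, init):
    """targets: (T, 3) target positions in the visual frame (from stereo);
    tdoas: (T,) observed TDOAs in seconds; init: (6,) initial guess."""
    sol = least_squares(residuals, init, args=(targets, tdoas))
    return sol.x[:3], sol.x[3:]


# Synthetic example: true microphones 20 cm apart, target moving on a curve.
m1_true = np.array([-0.1, 0.0, 0.0])
m2_true = np.array([0.1, 0.0, 0.0])
t = np.linspace(0.0, 2.0 * np.pi, 50)
targets = np.stack([np.cos(t), np.sin(t), 1.5 + 0.5 * np.sin(2.0 * t)], axis=1)
tdoas = (np.linalg.norm(targets - m1_true, axis=1)
         - np.linalg.norm(targets - m2_true, axis=1)) / C
tdoas += np.random.default_rng(0).normal(scale=1e-6, size=tdoas.shape)

init = np.array([-0.2, 0.1, 0.1, 0.2, -0.1, 0.1])
m1_est, m2_est = estimate_microphones(targets, tdoas, init)
print(m1_est, m2_est)  # should be close to m1_true and m2_true
```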

The paper can be downloaded here: Alignment of Binocular-Binaural Data Using a Moving Audio-Visual Target.

Adapting Robot Behavior to Group Composition & Group Engagement
jwienke, 2013-04-04

To achieve a smooth interaction, a robot must adapt as well as possible to the current group of interacting people. This includes adapting to the composition of the visitor group, to the current level of visitor fluctuation, and to changes in the visitors' engagement in the ongoing interaction. Realizing this requires several perceptual and behavioral abilities, including the ability to detect group characteristics such as the group size or the age range of its members, and the ability to detect indicators of the group's engagement, such as the interest its members show towards the robot.

To collect and aggregate the relevant cues, and to build and maintain hypotheses about the composition and state of the group, Bielefeld University developed a new component in the HUMAVIPS project called the GroupManager. This component receives results from several perception components (such as face detection/tracking, visual focus estimation, or face classification) as input cues, aggregates them (e.g. by keeping sliding windows of historical information or by combining several cues), and computes derived measures from the aggregated data. It thus provides the stabilization and abstraction layer needed by higher-level components that decide how the robot should adapt its behavior.
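
To make the aggregation idea concrete, here is a minimal sketch of such a component; the class, cue names, and derived measures below are hypothetical simplifications, not the actual HUMAVIPS implementation.

```python
# Minimal sketch of a GroupManager-style aggregator (hypothetical names;
# the actual HUMAVIPS component handles many more cues and measures).
from collections import deque
from dataclasses import dataclass


@dataclass
class FrameCues:
    """Per-frame results from the perception components."""
    num_faces: int                # from face detection/tracking
    faces_looking_at_robot: int   # from visual focus estimation


class GroupManager:
    def __init__(self, window_size=50):
        # Sliding window of recent per-frame cues.
        self.window = deque(maxlen=window_size)

    def update(self, cues):
        self.window.append(cues)

    def group_size(self):
        """Median face count over the window, smoothing detector noise."""
        counts = sorted(c.num_faces for c in self.window)
        return counts[len(counts) // 2] if counts else 0

    def engagement(self):
        """Fraction of observed faces directed at the robot, in [0, 1]."""
        faces = sum(c.num_faces for c in self.window)
        looking = sum(c.faces_looking_at_robot for c in self.window)
        return looking / faces if faces else 0.0


# Usage: feed one FrameCues object per perception update; higher-level
# behavior components query the stabilized measures.
gm = GroupManager()
gm.update(FrameCues(num_faces=3, faces_looking_at_robot=2))
print(gm.group_size(), gm.engagement())
```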

This video shows some highlights from an interactive demonstration incorporating the results from the GroupManager component.

[Video: Adapting Robot Behavior to Group Composition & Group Engagement]

The video is available on our YouTube channel.

Outstanding Paper Award at ICMI'12
Radu Horaud (http://perception.inrialpes.fr/~Horaud), 2013-03-22

The article "Linking Speaking and Looking Behavior Patterns with Group Composition, Perception, and Performance" received one of the Outstanding Paper Awards (best papers) at the 14th ACM International Conference on Multimodal Interaction (ICMI'12), Santa Monica, CA, USA, October 2012. The paper is authored by Dineshbabu Jayagopi, Dairazalia Sanchez-Cortes, Kazuhiro Otsuka, Junji Yamato, and Daniel Gatica-Perez (IDIAP).

This paper addresses the task of mining typical behavioral patterns from small-group face-to-face interactions and linking them to social-psychological group variables. The paper can be downloaded here: Linking Speaking and Looking Behavior Patterns with Group Composition, Perception, and Performance.

Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition
odobez, 2013-03-14

D. Sanchez-Cortes, O. Aran, D. Jayagopi, M. Schmid Mast, and D. Gatica-Perez
Journal on Multimodal User Interfaces, Special Issue on Multimodal Corpora, published online August 2012

Robot-to-Group Interaction in a Vernissage: Architecture and Dataset for Multi-Party Dialog
odobez, 2013-03-14

D. Klotz, J. Wienke, B. Wrede, S. Wrede, S. Sheikhi, D. Jayagopi, V. Khalidov, and J.-M. Odobez
Proceedings of the CogSys Conference, 2012

Recognizing the Visual Focus of Attention for Human Robot Interaction
odobez, 2013-03-14

S. Sheikhi, V. Khalidov, and J.-M. Odobez
IROS Workshop on Human Behavior Understanding, Vilamoura, 2012

Linking speaking and looking behavior patterns with group composition, perception, and performance
odobez, 2013-03-14

D. Jayagopi, D. Sanchez-Cortes, K. Otsuka, J. Yamato, and D. Gatica-Perez
Outstanding Paper Award, Proceedings of the 14th ACM International Conference on Multimodal Interaction, Santa Monica, USA, 2012

A Track Creation and Deletion Framework for Long-Term Online Multi-Face Tracking
odobez, 2013-03-14

S. Duffner and J.-M. Odobez
IEEE Transactions on Image Processing, March 2013

Gaze estimation from multimodal Kinect data
odobez, 2013-03-14

K. Funes and J.-M. Odobez
CVPR Workshop on Face and Gesture and Kinect Demonstration Competition, Providence, USA, 2012

Given that, Should I Respond? Contextual Addressee Estimation in Multi-Party Human-Robot Interactions
odobez, 2013-03-14

D. Jayagopi and J.-M. Odobez
Human Robot Interaction (HRI) Conference, Tokyo, 2013