Welcome

The core expertise of the iBUG group is the machine analysis of human behaviour in space and time, including face analysis, body gesture analysis, visual, audio, and multimodal analysis of human behaviour, and biometrics. Application areas in which the group is working include face analysis, body gesture analysis, audio-visual human behaviour analysis, biometrics and behaviometrics, and human-computer interaction (HCI).

Latest News

Tensor and Matrix Factorisation Workshop at ICCV 2017

posted 10 Jul 2017, 02:11

We are organising a workshop on Tensor and Matrix Factorisation in conjunction with ICCV 2017. For more details, please visit the workshop's webpage.

We are organising the first 3D face landmark tracking "in-the-wild" competition at ICCV 2017

posted 15 Jun 2017, 23:43

We are organising the first 3D face tracking "in-the-wild" competition. For more details, please visit the workshop's webpage. The training and test data are now available.

Today we were featured on the front page of the Imperial website

posted 02 Jun 2017, 20:51

Imperial is covering our recent research on 3D face reconstruction. Press here to read the article.

Lip-reading Workshop at BMVC

posted 23 May 2017, 18:28

We are organising a workshop on lip-reading using deep learning methods at BMVC 2017. Authors can submit either regular papers or extended abstracts. For more details, visit the workshop page.

We are in the Science Museum collecting 4D faces. Come and visit us!

posted 20 May 2017, 20:00

Stefanos and his team are in the Science Museum until 2 July (Live Science, "Who am I?" gallery). We are in the museum every day except Tuesdays and Wednesdays. We are carrying out a large-scale collection of 4D sequences of faces displaying various expressions. The data collection is kindly sponsored by 3DMD. For more details, visit our page.

Latest Publications

Automatic analysis of facial actions: A survey

B. Martinez, M. F. Valstar, B. Jiang, M. Pantic. IEEE Transactions on Affective Computing. 2017.

Bibtex reference:
@article{martinez2017,
    author = {B. Martinez and M. F. Valstar and B. Jiang and M. Pantic},
    journal = {IEEE Transactions on Affective Computing},
    publisher = {IEEE},
    title = {Automatic analysis of facial actions: A survey},
    year = {2017},
}

AFEW-VA database for valence and arousal estimation in-the-wild

J. Kossaifi, G. Tzimiropoulos, S. Todorovic, M. Pantic. Image and Vision Computing. 2017.

Bibtex reference:
@article{kossaifi_afewva,
    author = {J. Kossaifi and G. Tzimiropoulos and S. Todorovic and M. Pantic},
    journal = {Image and Vision Computing},
    title = {AFEW-VA database for valence and arousal estimation in-the-wild},
    year = {2017},
}

A survey of multimodal sentiment analysis

M. Soleymani, D. Garcia, B. Jou, B. Schuller, S. Chang, M. Pantic. Image and Vision Computing. 2017.

Bibtex reference:
@article{SOLEYMANI2017,
    author = {M. Soleymani and D. Garcia and B. Jou and B. Schuller and S. Chang and M. Pantic},
    journal = {Image and Vision Computing},
    title = {A survey of multimodal sentiment analysis},
    url = {http://www.sciencedirect.com/science/article/pii/S0262885617301191},
    year = {2017},
}