Project Leader: Mohammad Soleymani
This project:
- Studies and identifies the most robust and significant markers and modalities for the recognition of motivation, action preparation and inhibition, and performance. This will be achieved by performing statistical analysis and machine learning on existing databases and on a pilot database recorded in the first year. The results of the initial analysis and pilot recordings will be used to finalize the protocol for the main experiment.
- Advances the science of multimodal representation learning for human behavior understanding by building models that can learn joint representation spaces from weakly labeled or unlabeled data (see the first sketch after this list).
- Advances the state of the art in domain generalization techniques for reducing between-person variation in human emotion and behavior recognition (second sketch below).
- Studies and develops a novel multimodal sensing framework that can support human behavior tracking in VR/AR environments. This involves working on performant, compact neural networks that can be deployed for real-time analysis with limited computational resources (third sketch below).
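To make the joint-representation objective concrete, here is a minimal sketch of one standard way to learn a shared embedding space from unlabeled paired multimodal data: two modality-specific encoders trained with a symmetric InfoNCE contrastive loss. The encoder architectures, feature dimensions, and temperature are illustrative assumptions, not the project's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality's features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss: paired samples (same batch row) are
    positives; all other rows in the batch serve as negatives."""
    logits = z_a @ z_b.t() / temperature              # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Illustrative usage with synthetic paired batches (e.g., EEG-like and
# facial-feature-like inputs); no labels are required.
eeg_enc, face_enc = ModalityEncoder(in_dim=64), ModalityEncoder(in_dim=136)
eeg, face = torch.randn(32, 64), torch.randn(32, 136)
loss = info_nce(eeg_enc(eeg), face_enc(face))
loss.backward()
```

Because the only supervision is the pairing between modalities, this kind of objective fits the weakly labeled or unlabeled regime the project targets.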
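For the domain-generalization objective, a common technique is domain-adversarial training with a gradient-reversal layer (Ganin et al.), shown below with each subject treated as a domain so the adversary discourages features from encoding subject identity. The network sizes and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SubjectInvariantModel(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, n_subjects: int):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.emotion_head = nn.Linear(128, n_classes)   # task classifier
        self.subject_head = nn.Linear(128, n_subjects)  # adversary

    def forward(self, x, lambd: float = 1.0):
        h = self.features(x)
        # The adversary tries to predict the subject; the reversed gradient
        # pushes the feature extractor toward subject-invariant features.
        return self.emotion_head(h), self.subject_head(GradReverse.apply(h, lambd))

# One illustrative training step on synthetic data.
model = SubjectInvariantModel(in_dim=64, n_classes=4, n_subjects=20)
x = torch.randn(32, 64)
y_emotion = torch.randint(0, 4, (32,))
y_subject = torch.randint(0, 20, (32,))
emo_logits, subj_logits = model(x)
loss = (nn.functional.cross_entropy(emo_logits, y_emotion) +
        nn.functional.cross_entropy(subj_logits, y_subject))
loss.backward()
```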
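Finally, for real-time analysis under tight compute budgets, two standard compactness patterns are depthwise-separable convolutions and post-training quantization. The sketch below combines both; the layer sizes and input shape are illustrative assumptions, not the project's deployed architecture.

```python
import torch
import torch.nn as nn

class SeparableConv1d(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters and FLOPs
    than a dense Conv1d with the same receptive field."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

model = nn.Sequential(
    SeparableConv1d(8, 32, 5), nn.ReLU(),
    SeparableConv1d(32, 32, 5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 4),
)

# Dynamic quantization converts Linear weights to int8, shrinking the
# model and speeding up CPU inference without retraining.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)
out = quantized(torch.randn(1, 8, 128))  # (batch, channels, time)
```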
Recent Publications
- L. Tavabi, A. Poon, A. S. Rizzo, and M. Soleymani, "Computer-Based PTSD Assessment in VR Exposure Therapy," in HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality (HCII 2020), C. Stephanidis, J. Y. C. Chen, and G. Fragomeni, Eds., Lecture Notes in Computer Science, vol. 12428, Springer, Cham, 2020. https://doi.org/10.1007/978-3-030-59990-4_32
- S. Rayatdoost, D. Rudrauf, and M. Soleymani, "Expression-Guided EEG Representation Learning for Emotion Recognition," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 3222–3226. doi: 10.1109/ICASSP40776.2020.9053004
- L. Lu, L. Tavabi, and M. Soleymani, "Self-Supervised Learning for Facial Action Unit Recognition through Temporal Consistency," in Proc. British Machine Vision Conference (BMVC), 2020.