Project Leader: Mohammad Soleymani

The project studies and identifies the most robust and significant markers and modalities for recognizing motivation, action preparation and inhibition, and performance. This will be achieved by performing statistical analysis and machine learning on existing databases and on a pilot database recorded in the first year. The results of the initial analysis and the pilot recordings will be used to finalize the protocol for the main experiment.

The project also advances the science of multimodal representation learning for human behavior understanding by building models that learn joint representation spaces from weakly labeled or unlabeled data, and advances the state of the art in domain generalization techniques for reducing between-person variation in human emotion and behavior recognition.

Finally, the project studies and develops a novel multimodal sensing framework that can support human behavior tracking in VR/AR environments. This involves developing performant, compact neural networks that can be deployed for real-time analysis with limited computational resources.
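As a rough illustration of the joint-embedding objective mentioned above, the sketch below computes a symmetric contrastive (InfoNCE-style) loss that pulls paired embeddings from two modalities together without requiring class labels. This is a minimal NumPy sketch under assumed conventions, not the project's actual implementation; the function names and the temperature value are illustrative.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Symmetric contrastive loss over paired embeddings from two modalities.

    z_a, z_b: (N, D) arrays where row i of each is the same sample seen
    through a different modality (e.g. video vs. physiology). The loss is
    low when matching rows are more similar than mismatched ones.
    """
    z_a = l2_normalize(z_a)
    z_b = l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature  # (N, N) cosine similarities
    targets = np.arange(len(z_a))      # row i should match column i

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # Average the two retrieval directions (a -> b and b -> a).
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With well-aligned pairs the loss is near zero; shuffling one modality's rows breaks the pairing and drives the loss up, which is the signal a joint-embedding model trains against.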

Recent Publications