What does the human face tell the human brain?

Award Number
2373024
Award Type
Studentship
Status / Stage
Active
Dates
27 September 2019 – 27 December 2023
Duration (calculated)
4 years 3 months
Funder(s)
EPSRC (UKRI)
Funding Amount
£0.00
Funder/Grant study page
EPSRC
Contracted Centre
University College London
Principal Investigator
Uzair Hakim
PI Contact
uzair.hakim.17@ucl.ac.uk
WHO Categories
Models of Disease
Understanding Underlying Disease
Disease Type
Dementia (Unspecified)

CPEC Review Info
Reference ID: 771
Researcher: Reside Team
Published: 24/07/2023


Abstract

1) Brief description of the context of the research including potential impact

Experiments to understand how the brain functions have largely been limited to single-person studies, confined to restrictive scanners by the limitations of the equipment. The rise of functional near-infrared spectroscopy (fNIRS) gives us the opportunity to deepen our understanding of the brain in more ecologically valid scenarios.
This is especially important in social scenarios. Facial expressions form the foundation of all social interaction, so observing how neural pathways respond to different facial stimuli offers a deeper understanding of how the typical brain works. Building on that understanding, fNIRS can provide real-time insight into how neurological disorders progress, whether in stroke patients, people with dementia, or those with a multitude of other neurological disorders.

2) Aims and objectives

The aims of the project are therefore to develop multi-modal integrative approaches, combining EEG, fNIRS, eye-tracking, eye-contact, subject reports, and facial classifications into models of high-level facial dynamics, human communication and behaviour. Studies will be performed to map facial responses onto brain functional responses and cortical locations, in experiments where participants both emote expressions and observe expressions emoted by others. Machine learning methodologies will be explored to build a computational framework for classifying neural signatures in relation to facial expressions. Signal processing and statistical techniques will also be developed to relate the multi-modal data streams to one another and to support their analysis.
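A first practical step in relating multi-modal streams to one another is putting them on a shared timeline, since EEG, fNIRS, and eye-tracking are recorded at very different sampling rates. The sketch below is a minimal, hypothetical illustration of that step (the function name and parameters are assumptions, not part of the project's actual pipeline), using linear interpolation onto a common grid:

```python
import numpy as np

def align_streams(t_eeg, eeg, t_fnirs, fnirs, fs_target=10.0):
    """Resample two differently sampled streams onto a shared timeline.

    t_eeg, t_fnirs : 1-D arrays of sample timestamps (seconds)
    eeg, fnirs     : 1-D signal arrays matching those timestamps
    fs_target      : common output sampling rate (Hz) -- illustrative choice
    """
    # Use only the time window covered by both recordings
    t0 = max(t_eeg[0], t_fnirs[0])
    t1 = min(t_eeg[-1], t_fnirs[-1])
    t_common = np.arange(t0, t1, 1.0 / fs_target)

    # Linear interpolation onto the shared grid
    eeg_rs = np.interp(t_common, t_eeg, eeg)
    fnirs_rs = np.interp(t_common, t_fnirs, fnirs)
    return t_common, eeg_rs, fnirs_rs
```

In practice a real pipeline would low-pass filter the faster stream before downsampling to avoid aliasing; the interpolation here just shows the alignment idea.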

3) Novelty of the research methodology

The novelty of the project lies in the experimental methodology and in the computational methods used to analyse the data. A significant benefit of fNIRS over other neuroimaging modalities is the opportunity to examine the neural correlates of social interaction in two people simultaneously; the cortical processes engaged by a two-person interaction are known to differ significantly from those engaged when one person observes a video or picture. Despite this, methodologies for investigating two-person neurological responses remain underdeveloped. As such, novel machine learning approaches will be investigated, developed, and applied with the aim of isolating neural responses to facial expressions. Additionally, since we will be developing a multi-modal suite to investigate multiple physiological parameters, computational tools will be developed to integrate and analyse the resulting data.
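A common starting point in two-person (hyperscanning) analysis is quantifying how strongly the two participants' signals co-vary over time. The sketch below, a minimal illustration rather than the project's actual method, computes a windowed Pearson correlation between one channel from each participant:

```python
import numpy as np

def interbrain_correlation(sig_a, sig_b, fs, win_s=30.0):
    """Windowed Pearson correlation between two participants' signals.

    sig_a, sig_b : equal-length 1-D arrays (e.g. one fNIRS channel each)
    fs           : sampling rate in Hz
    win_s        : window length in seconds -- illustrative choice
    Returns an array of per-window correlation coefficients.
    """
    n = int(win_s * fs)
    corrs = []
    for start in range(0, len(sig_a) - n + 1, n):
        a = sig_a[start:start + n]
        b = sig_b[start:start + n]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)
```

Published hyperscanning work often uses wavelet transform coherence instead, which localises coupling in both time and frequency; the simple correlation above only conveys the basic idea of measuring inter-brain coupling.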

4) Alignment to EPSRC’s strategies and research areas

This project contributes to the Optimising Treatment grand challenge of the EPSRC strategies. In the long term, it relates to the development of novel, low-cost diagnostic devices with high sensitivity, specificity and reliability, enabling timely and accurate diagnosis, improving the choice and reducing the cost of interventions, and increasing the likelihood of successful health outcomes. It also supports data-analytic methods that identify disease phenotypes, and their associated responses to treatment, from population data, allowing evidence-based selection of treatment options with lower costs, lower morbidity, and improved health outcomes. In the short term, the project focuses on new methodologies for making sense of complex healthcare data, as well as the development of next-generation imaging technologies for diagnostic, monitoring and therapeutic applications, with improved accuracy and affordability and incorporating new modalities.

5) Any companies or collaborators involved

This project is in collaboration with Shimadzu Corporation and the Brain Function Laboratory, Yale School of Medicine, New Haven, CT.
