First- and Second-Person Neuroscience: A Fusion, Analysis, Display and Feedback Platform for Affective, Cognitive and Perceptual States Integrating the Outputs of Multiple Sensors
This project addresses a principal challenge of neurophysiologically informed interventions and self-regulation: the design of a platform that integrates the outputs of multiple physiological state sensors (heart rate, heart rate variability, bodily movement, galvanic skin response, spatio-temporal patterns of EEG signals, eye-tracking data, thermal imaging data, facial action data) into a single system. The platform will allow physiological and neurophysiological data to be analysed, interpreted, displayed and mapped onto variables relating to internal psychological states ('moods', 'feelings', mental imagery), behavioural states (propensities to make certain kinds of choices under controlled conditions) and task-performance metrics (accuracy, reliability, speed). It will enable researchers and developers to produce virtual-reality-based stimulus patterns that adapt to the user's physiological responses (e.g., heart-rate-mediated changes in visual patterns), thus enabling real-time first- and second-person neuroscience experiences and interventions.

The platform will be useful to researchers wishing to expand the horizon of predictive models and variable sets in laboratory and field settings, to developers of learning technologies and learning experiences who wish to incorporate perceptual, affective-visceral and neuro-cognitive variables into their designs, and to developers of new AR, VR and wearable technologies and devices who are interested in user-level physiological reactions to new environments and stimuli.
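The closed loop described above (sensor reading drives a change in the displayed stimulus) can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the project's actual design: the 1.0 Hz baseline pulsation, the 70 bpm reference heart rate, and the `read_hr`/`render` callables are all hypothetical stand-ins for real sensor and VR-engine interfaces.

```python
def hr_to_pulse_rate(hr_bpm: float, baseline: float = 70.0) -> float:
    """Map a measured heart rate to a visual pulsation rate in Hz.

    Stimuli pulse at 1.0 Hz at the (assumed) baseline and slow down as
    heart rate rises -- one plausible down-regulation mapping.
    """
    # Clamp the deviation so sensor spikes cannot destabilise the display.
    deviation = max(-30.0, min(30.0, hr_bpm - baseline))
    return 1.0 - 0.01 * deviation


def run_feedback_step(read_hr, render):
    """One iteration of the biofeedback loop: sense -> map -> render.

    `read_hr` and `render` are placeholders for device and VR-engine calls.
    """
    rate = hr_to_pulse_rate(read_hr())
    render(pulse_rate_hz=rate)
    return rate
```

In a real deployment this step would run inside the VR engine's frame loop, with the mapping function swapped out per intervention protocol.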
A Multi-Modal Fusion, Analysis and Display Platform for Neurophysiological Signals
We are building a multi-modal, multi-laboratory platform for the analysis of neurophysiological signals (fMRI, EEG, and ANS-modulated signals such as GSR, HR, HRV, and thermal maps of the face and body) that will enable researchers and developers to pool data sets into a common analysis platform able to produce customized reports for individual users and user classes. The platform will use machine-learning algorithms, including deep learning, to extract patterns of neural and physiological activity triggered by specific stimuli or environmental conditions, and to classify them by user type, stimulus pattern, effective and functional connectivity pattern, and context.
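A minimal sketch of the fusion-and-classification idea, under assumptions of our own: features from each modality are concatenated into one vector (early, feature-level fusion), and user states are classified with a nearest-centroid rule over labelled examples. The feature names, toy values and the nearest-centroid choice are illustrative only; the platform itself would use richer learned models.

```python
import math


def fuse(hr_feats, gsr_feats, eeg_feats):
    """Early fusion: concatenate per-modality feature vectors into one.

    In practice each modality would be normalised first, since units
    differ (bpm, microsiemens, band power); omitted here for brevity.
    """
    return list(hr_feats) + list(gsr_feats) + list(eeg_feats)


def nearest_centroid(train, sample):
    """Classify a fused sample by the closest class-mean vector.

    `train` maps a label to a list of fused feature vectors; the label
    whose centroid is nearest (Euclidean) to `sample` is returned.
    """
    def centroid(vecs):
        return [sum(col) / len(col) for col in zip(*vecs)]

    return min(train, key=lambda label: math.dist(centroid(train[label]), sample))
```

For example, with hypothetical [heart-rate, GSR] training vectors for "calm" and "stressed" users, a new fused reading is assigned to whichever class mean it sits closest to; the same scheme extends to classification by stimulus pattern or context by relabelling the training set.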