Gaze-independent c-VEP BCI

Context
Typically, a brain-computer interface (BCI) requires its user to move the eyes towards a target stimulus (e.g., a character) on a screen (e.g., in a standard matrix speller); that is, it is gaze-dependent. The ability to move the eyes, however, is gradually lost in certain patient populations, for instance people with amyotrophic lateral sclerosis (ALS). This project studies a gaze-independent BCI that uses covert spatial attention together with novel machine learning methods. Gaze-independent control has already been studied using the P300 response, but not yet using the much faster code-modulated visual evoked potentials (c-VEP). Ultimately, the need for any voluntary muscle control is eliminated, enabling a fast, reliable, fully brain-based assistive device.
Image credit: Treder, M. S., & Blankertz, B. (2010). (C)overt attention and visual speller design in an ERP-based brain-computer interface. Behavioral and Brain Functions, 6(1), 1-13.
Research question
This project investigates whether c-VEPs measured with EEG provide a reliable (i.e., better than chance) control signal for a gaze-independent BCI. Additionally, because attention is covert, changes in the locus of the c-VEP response can be expected (e.g., lateralization), which can be incorporated into the machine learning methods to improve decoding performance. Finally, visual alpha oscillations may provide decodable information as well, potentially calling for a hybrid decoding approach.
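To illustrate the basic decoding idea, a common baseline for c-VEP BCIs is template matching: the recorded EEG epoch is correlated with the pseudo-random stimulus code of each candidate target, and the best-matching code is selected. The sketch below is a minimal, hypothetical illustration on synthetic data — the codes, noise model, and single-channel setup are assumptions for demonstration, not the project's actual stimuli or pipeline.

```python
import numpy as np

def make_code(length, seed):
    # Hypothetical stand-in for a binary pseudo-random stimulus code
    # (a real c-VEP BCI would use e.g. an m-sequence or Gold code).
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)

def decode_epoch(epoch, templates):
    """Template matching: return the index of the code that correlates
    best (Pearson correlation) with the recorded epoch."""
    scores = [np.corrcoef(epoch, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

# Four candidate targets, each flickering with its own code.
n_classes, n_samples = 4, 504
templates = np.stack([make_code(n_samples, seed) for seed in range(n_classes)])

# Simulate an epoch: the attended target's code buried in Gaussian noise.
rng = np.random.default_rng(42)
true_class = 2
epoch = templates[true_class] + 0.5 * rng.normal(size=n_samples)

print(decode_epoch(epoch, templates))
```

In practice the templates would be learned from training data (e.g., via canonical correlation analysis over multi-channel EEG) rather than being the raw codes, and spatial features such as lateralization would enter through the spatial filters — but the argmax-over-correlations structure stays the same.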
Skills / background required
- Very proficient in Python
- Proficient in machine learning