Abstract: Neural decoding plays a vital role in the interaction between the brain and the outside world. In this paper, our task is to decode finger movement trajectories directly from neural data. Existing neural decoding solutions typically apply some preprocessing to the neural data before feeding it into off-the-shelf models (such as LSTMs) for decoding. However, these solutions are either prone to overfitting or unable to fully exploit the spatial and temporal information. In our earlier observations, we found a symmetry between the unsupervised decoded trajectory and the ground-truth trajectory within the activity space. This motivates us to propose (or rather, derive) a robust weakly supervised framework (or model structure) for neural decoding, called ViF-SD2E. It consists of a space-division (SD) module and an exploration–exploitation (2E) strategy that together exploit both the spatial information of the outside world and the temporal information of neural activity, where the SD2E output is treated as an analogue of a weak 0/1 vision-feedback (ViF) label for training. Extensive experiments demonstrate the effectiveness of our method, which can at times be comparable to supervised counterparts. We therefore turn our attention to the information, hidden in the data, that ViF-SD2E reveals: we believe its advantage lies in the fact that its processing steps are objectively determined by an inherent attribute (i.e., symmetry) of the neural data, so that the model structure is fixed.
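To make the notion of a weak 0/1 vision-feedback-style label concrete, the following is a minimal illustrative sketch, not the paper's exact construction: it assumes the space-division step partitions the 2D activity space into a grid, and that the weak label simply marks whether a decoded point falls in the same cell as the corresponding ground-truth point. The function names, grid size, and labeling rule are hypothetical choices for illustration only.

```python
import numpy as np

def space_division(points, n_bins=4, lo=-1.0, hi=1.0):
    """Map 2D points to integer grid-cell indices (stand-in for an SD-style module)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    ix = np.clip(np.digitize(points[:, 0], edges) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(points[:, 1], edges) - 1, 0, n_bins - 1)
    return ix * n_bins + iy

def weak_vif_labels(decoded, target, n_bins=4):
    """Return 1 where decoded and target trajectories share a cell at a time step, else 0."""
    return (space_division(decoded, n_bins) == space_division(target, n_bins)).astype(int)

# Toy usage: random stand-ins for a decoded finger track and the ground truth.
decoded = np.random.uniform(-1, 1, size=(100, 2))
target = np.random.uniform(-1, 1, size=(100, 2))
labels = weak_vif_labels(decoded, target)  # weak 0/1 supervision signal
```

Such a binary signal carries far less information than the exact trajectory, which is what makes the supervision "weak" in this setting.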