L. Mancera, R. Jimenez-Galan, J.M. Cortes. Visual Information Processing through a Simulated Retina: Linear vs Bayesian Reconstruction of Natural Movies. Society for Neuroscience Annual Meeting 2009
In visual neuroscience, much of our knowledge about information processing comes from experiments with highly reduced stimuli: oriented bars, drifting gratings, dots, and occasionally combinations of them. Very often, the implicit linearity of such stimuli explains linear input-output relations in the visual system quite well. However, it is not clear how or why these input-output relations can be extrapolated to natural conditions, where multiple non-linear stimuli carry long-range correlations in both time and space. We argue that a more sensible way to approach visual responses to natural movies is to study information processing through a simulated retina. To this end, we chose the Virtual Retina model of Wohrer and Kornprobst (2009), which reproduces several electrophysiological properties of retinal ganglion cells. The model architecture comprises three layers: 1) the outer plexiform layer (photoreceptors, horizontal cells and bipolar cells); 2) a contrast-gain-control layer (a feedback loop at the bipolar cells); and 3) the ganglion-cell layer. The model implements retinal processing as a cascade of consecutive temporal and spatial filters, with several non-linearities incorporated along the way (spike-frequency adaptation at different time scales, contrast gain control, and the spike generator at the level of the ganglion cells). The aim of this work is to reconstruct the original sequence of natural scenes from the neural responses at each layer. We apply Bayesian reconstruction (Stanley et al. 1999; Babacan et al. 2008; Mancera et al. 2009) and compare it to linear reconstruction, which, based on minimization of the mean-square error, has successfully decoded ganglion-cell responses to natural stimuli (Pillow et al. 2008).
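The cascade of spatial and temporal filters described above can be illustrated with a minimal sketch: a difference-of-Gaussians (center-surround) spatial filter followed by a recursive exponential temporal low-pass, applied frame by frame to a short movie. This is only loosely inspired by the Virtual Retina architecture; the kernel sizes, standard deviations, and time constant below are illustrative assumptions, not parameters from the model.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def dog_filter(frame, sigma_c=1.0, sigma_s=3.0, size=9):
    """Center-surround (difference-of-Gaussians) spatial filtering.

    A narrow excitatory center minus a broader inhibitory surround,
    computed by direct patch-wise convolution with edge padding.
    """
    kernel = gaussian_kernel(size, sigma_c) - gaussian_kernel(size, sigma_s)
    pad = size // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame)
    H, W = frame.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out

def temporal_lowpass(frames, tau=3.0):
    """Recursive exponential low-pass over frames:
    y[t] = a * y[t-1] + (1 - a) * x[t], with a = exp(-1/tau)."""
    a = np.exp(-1.0 / tau)
    y = np.zeros_like(frames)
    y[0] = frames[0]
    for t in range(1, len(frames)):
        y[t] = a * y[t - 1] + (1 - a) * frames[t]
    return y

# A short random "movie" standing in for natural input.
rng = np.random.default_rng(1)
movie = rng.random((10, 32, 32))
filtered = temporal_lowpass(np.stack([dog_filter(f) for f in movie]))
print(filtered.shape)  # (10, 32, 32)
```

In the full model these linear stages are interleaved with the non-linearities mentioned above (adaptation, gain control, spiking), which this sketch deliberately omits.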
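The two decoding approaches being compared can be sketched on toy data. A minimal example, assuming a purely linear-Gaussian setting (which is where the Bayesian MAP estimate has a closed form): the linear decoder minimizes the mean-square error by ordinary least squares, while the Bayesian decoder adds a Gaussian stimulus prior, yielding a ridge-regularized solution. The encoding model, sizes, and the noise/prior ratio `lam` are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): frames, flattened stimulus pixels, neurons.
n_frames, n_pixels, n_neurons = 200, 16, 40
S = rng.standard_normal((n_frames, n_pixels))        # "true" stimulus

# Hypothetical encoding: each neuron applies a random linear filter
# to the stimulus, plus noise (a stand-in for the simulated retina).
F = rng.standard_normal((n_pixels, n_neurons))
R = S @ F + 0.5 * rng.standard_normal((n_frames, n_neurons))

# 1) Linear reconstruction: decoder W minimizing ||S - R W||^2,
#    fit by ordinary least squares.
W_lin, *_ = np.linalg.lstsq(R, S, rcond=None)
S_lin = R @ W_lin

# 2) Bayesian (MAP) reconstruction under a zero-mean Gaussian prior on
#    the stimulus and Gaussian noise: a ridge-regularized decoder,
#    (R^T R + lam I)^-1 R^T S, with lam the noise/prior variance ratio.
lam = 0.5  # assumed regularization strength
W_map = np.linalg.solve(R.T @ R + lam * np.eye(n_neurons), R.T @ S)
S_map = R @ W_map

mse = lambda a, b: np.mean((a - b) ** 2)
print(f"linear MSE: {mse(S, S_lin):.4f}")
print(f"MAP    MSE: {mse(S, S_map):.4f}")
```

In this toy linear-Gaussian setting the two decoders behave similarly; the motivation for the Bayesian approach in the abstract is precisely the non-linear, correlated structure of natural movies, where richer priors can be exploited.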