

Cosyne 2009 Workshops


March 3, 2009

Snowbird, Utah


Workshop Title

Dimensionality reduction for multi-channel neural recordings

Organizer(s)

Byron Yu, Stanford University & Gatsby Computational Neuroscience Unit, UCL

John Cunningham, Stanford University

Abstract

Advances in neural recording technology have enabled simultaneous recordings from tens to hundreds of neurons. In principle, this should allow for unprecedented views of the time-evolution of neural population activity on single trials. Such views should, in turn, advance our understanding of the dynamics of neural circuitry and provide new insights into neural processing. While we currently lack the statistical tools for extracting such views from the raw data, recent work has shown that this is becoming a very real possibility. The raw data are difficult to visualize and interpret directly because they are 1) high-dimensional, and 2) corrupted by spiking noise. In many fields, dimensionality reduction techniques have been fruitfully applied to denoise and to provide a parsimonious description of noisy, high-dimensional data. However, it is unclear which conventional dimensionality reduction techniques, if any, are suitable for the point-process nature of neural data. The goal of this workshop is to bring together experts in experimental neuroscience, dimensionality reduction, and dynamical systems to discuss different approaches to reducing the dimensionality of multi-channel neural data as applied to the study of neural processing.

The suitability of a dimensionality reduction technique to neural data depends on three broad factors. The first concerns the statistical properties of the noise processes assumed by the technique, which directly affect its ability to separate signal from noise. Neurons are known to exhibit Poisson-like behavior, whereby higher spike rates are associated with higher spiking variability. Because different neurons can change their spike rates in different ways, spiking noise can vary both across neurons and across time, which makes the tasks of denoising and dimensionality reduction particularly challenging. Second, dimensionality reduction in neural data should identify scientifically important dimensions, not just directions of greatest variability. In particular, the dimensions in which the neural responses are most tightly controlled may be those that have the greatest impact on stimulus representation or motor output, and may therefore be critical to addressing scientific questions. Third, if the underlying neural circuitry is believed to evolve smoothly over time, some form of temporal smoothing may be desired. However, it is currently unclear how to properly select the degree or form of smoothing used.
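
To make the first factor concrete, the sketch below (a minimal illustration with made-up rates, not part of the workshop material) simulates Poisson spike counts and shows that their variance tracks the mean firing rate, which is why a variance-stabilizing transform such as a square root is often applied before a standard technique like PCA:

# Minimal sketch (assumed rates are ours): for Poisson spiking, the count
# variance approximately equals the mean, so noise differs across neurons
# and across time whenever firing rates differ.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([2.0, 10.0, 50.0])                 # mean spike counts per bin, low to high
counts = rng.poisson(lam=rates, size=(10000, 3))    # simulated counts for 3 neurons

print("empirical variance:", counts.var(axis=0))              # roughly equal to the rates
print("after sqrt transform:", np.sqrt(counts).var(axis=0))   # roughly constant across neurons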

In this workshop, we will consider various dimensionality reduction techniques that have been developed and applied to multi-channel neural data, along with the scientific questions being asked. In each case, we will consider questions such as the following. What do the identified directions mean in terms of the presented stimuli, behaviors elicited, and internal states? Can the presented stimuli or behaviors elicited be better predicted by the neural activity after dimensionality reduction? Are static dimensionality reduction techniques sufficient, or should dynamical systems techniques be used?

Speakers

Morning


8:00-8:05 John Cunningham (Stanford)

Welcome / introduction


8:05-8:35 Kevin Briggman (MPI, Heidelberg)


8:35-8:45 Discussion


8:45-9:15 Barani Raman (NIH)

Analysis of spatio-temporal odor codes in the locust olfactory system

Odorants are represented as spatio-temporal patterns of spiking in the antennal lobe (AL, insects) and the olfactory bulb (OB, mammals). We combined electrophysiological recordings in the locust with well-constrained computational models to examine how these neural codes for odors are generated. Extracellular recordings from the olfactory receptor neurons (ORNs) that provide input to the AL showed that the ORNs themselves can respond to odorants with reliable spiking patterns that vary both in strength (firing rate) and time course. A single ORN could respond with diverse firing patterns to different odors, and a single odorant could evoke differently structured responses in multiple ORNs. Further, odors could elicit responses in some ORNs that greatly outlasted the stimulus duration, and some ORNs showed enduring inhibitory responses that fell well below baseline activity levels, or reliable sequences of inhibition and excitation. Thus, output from ORNs contains temporal structures that vary with the odor. The heterogeneous firing patterns of sensory neurons may, to a greater extent than presently understood, contribute to the production of complex temporal odor coding structures in the AL.

Our computational model of the first two stages of the olfactory system revealed that several well-described properties of odor codes previously believed to originate within the circuitry of the AL (odor-elicited spatio-temporal patterning of projection neuron (PN) activity, decoupling of odor identity from intensity, formation of fixed-point attractors for long odor pulses) appear to arise within the ORNs. To evaluate the contributions of the AL circuits, we examined subsequent processing of the ORN responses with a model of the AL network. The AL circuitry enabled the transient oscillatory synchronization of groups of PNs. Further, we found that the AL transformed information contained in the temporal dynamics of the ORN response into patterns that were more broadly distributed across groups of PNs, and more temporally complex because of GABAergic inhibition from local neurons. Moreover, because of this inhibition, and unlike odor responses in groups of ORNs, responses in groups of PNs decorrelated over time, allowing better use of the AL coding space. Thus, the principal role of the AL appears to be transforming spatio-temporal patterns in the ORNs into a new coding format, possibly to decouple otherwise conflicting odor classification and identification tasks.


9:15-9:25 Discussion


9:25-9:40 Break


9:40-10:10 Byron Yu (Stanford / Gatsby Unit, UCL)

Low-dimensional single-trial analysis of neural population activity

Neural responses are typically studied by averaging spiking activity across multiple repeated trials. However, particularly in cognitive tasks (such as decision making and motor planning), the timecourse of neural responses may differ on nominally identical trials. In such settings, it is critical that the neural data not be averaged across trials, but instead be analyzed on a trial-by-trial basis. With the ability to record simultaneously from a neural population (tens to hundreds of neurons), we consider techniques for extracting a smooth low-dimensional "neural trajectory" summarizing the recorded activity on a single trial. Beyond the benefit of visualizing the high-dimensional noisy spiking activity in a compact denoised form, such trajectories can offer insight into the dynamics of the underlying neural circuitry. We applied these techniques to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices. We found that i) the dimensionality of the linear subspace within which the neural activity evolved during the planning and execution of arm reaches to a single target ranged from 8 to 12, and ii) the neural trajectories converged during motor planning, an effect suggestive of attractor dynamics which previously could only be inferred indirectly.
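
As a rough, illustrative stand-in for the kind of trajectory extraction described above (not the speakers' exact method; the data shapes, smoothing kernel, and target dimensionality here are assumptions), one can smooth binned spike counts in time and then apply factor analysis to obtain a low-dimensional single-trial trajectory:

# Rough sketch with placeholder data: smooth each neuron's binned counts in
# time, stabilize the variance, and summarize the population with a small
# number of latent dimensions.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_bins, n_neurons = 100, 61                                  # e.g. 100 time bins, 61 units
counts = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)   # placeholder spike counts

smoothed = gaussian_filter1d(counts, sigma=3, axis=0)        # temporal smoothing along time bins
fa = FactorAnalysis(n_components=10)                         # assumed target dimensionality
trajectory = fa.fit_transform(np.sqrt(smoothed))             # (n_bins, 10) latent neural trajectory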


10:10-10:20 Discussion


10:20-10:50 Surya Ganguli (UCSF)

One Dimensional Dynamics of Attention and Decision Making in LIP

Where we allocate our visual spatial attention depends upon a continual competition between internally generated goals and external distractions. Recently it was shown that single neurons in the macaque lateral intraparietal area (LIP) can predict how long a distractor shifts the locus of spatial attention away from a goal. We propose that this remarkable dynamical correspondence between single neurons and attention can be explained by a network model in which generically high-dimensional firing-rate vectors rapidly decay to a single mode. We find direct experimental evidence for this model, not only in the original attentional task, but also in a very different task involving perceptual decision making. These results confirm a theoretical prediction that slowly varying activity patterns are proportional to spontaneous activity, pose constraints on models of persistent activity, and suggest how rapid dimensionality reduction in neuronal dynamics can yield a mechanism for the emergence of robust behavioral timing from heterogeneous neuronal populations.
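
A toy illustration of this collapse onto a single mode (our own construction, not the speakers' network model) is a linear rate network whose dynamics matrix has one slow eigenmode; generic high-dimensional initial states rapidly align with that mode:

# Toy sketch: all directions decay quickly except one designated slow mode,
# so arbitrary initial activity collapses onto a one-dimensional subspace.
import numpy as np

rng = np.random.default_rng(2)
n = 50
slow = rng.standard_normal(n)
slow /= np.linalg.norm(slow)                          # unit vector defining the slow mode
A = 0.5 * np.eye(n) + 0.48 * np.outer(slow, slow)     # eigenvalue 0.98 along slow, 0.5 elsewhere

r = rng.standard_normal(n)                            # generic high-dimensional initial state
for t in range(30):
    r = A @ r
alignment = abs(slow @ r) / np.linalg.norm(r)         # fraction of activity along the slow mode
print("alignment with slow mode after 30 steps:", alignment)   # close to 1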


10:50-11:00 Discussion


Afternoon


16:30-17:00 Christian Machens (ENS Paris)

A low-dimensional network model for recordings from the prefrontal cortex

Persistent neural activity is often correlated with the value or identity of an item held in short-term memory. Surprisingly, such activity often also depends on elapsed time during the memory period, even in cases where knowledge of elapsed time is behaviorally unimportant. To study this interaction of time and memory value, we examined the activity of neurons recorded from the prefrontal cortex of monkeys performing a short-term memory task. Although time and memory value are jointly represented across the same set of neurons, and therefore share a common anatomical substrate, we found a reduced-dimensionality representation of the population activity in which time and memory value are entirely distinct. Based on a simple linear generative model of the population activity, we show that these separate representations are likely maintained by different mechanisms. Our methods may be applicable to other tasks in which neural responses are highly heterogeneous, and dependent on more than one variable.
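
One simple way to realize the kind of separation described above (a sketch of our own with hypothetical array sizes, not the authors' exact method) is to split the population activity into a part that varies only with time and a part that varies only with memory value, and then find low-dimensional axes for each part:

# Simplified demixing sketch with placeholder data: average over one variable
# to isolate the dependence on the other, then find axes for each part.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_values, n_times, n_neurons = 6, 50, 100                    # hypothetical task/recording sizes
X = rng.standard_normal((n_values, n_times, n_neurons))      # placeholder trial-averaged rates

time_part = X.mean(axis=0)                    # (n_times, n_neurons): varies only with time
value_part = (X - time_part).mean(axis=1)     # (n_values, n_neurons): varies only with memory value

time_axes = PCA(n_components=2).fit(time_part).components_   # low-dimensional "time" directions
value_axes = PCA(n_components=2).fit(value_part).components_ # low-dimensional "value" directions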


17:00-17:10 Discussion


17:10-17:40 Jonathon Shlens (UC Berkeley / Salk)

Exploring the network structure of populations of neurons using maximum entropy techniques

All visual signals from the eye to the brain originate in the electrical activity of retinal ganglion cells (RGCs). Standard models implicitly assume that RGCs signal information independent of one another. However, several studies have demonstrated that significant correlated activity exists amongst RGCs and may fundamentally alter visual signaling. We recorded the electrical activity of several hundred RGCs in peripheral monkey retina under various forms of visual stimulation. Pairs of RGCs fired nearly simultaneously (i.e. synchronously) several-fold more often than expected by chance, indicating significant network interactions. By exploiting maximum entropy techniques, we asked whether the complete network activity can be simply explained by interactions between pairs of adjacent cells. We found that such maximum entropy models explain the data with high precision and provide a parsimonious summary of network activity. We conclude by discussing the general application of such dimensionality reduction techniques as well as future directions for exploring network activity in large populations of neurons.
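
As a toy illustration of the pairwise maximum entropy approach (a small-N sketch of our own with placeholder data, feasible only when all 2^n spike words can be enumerated), one can fit the biases and pairwise couplings by gradient ascent so that the model matches the data's firing rates and pairwise correlations:

# Toy pairwise maximum entropy fit for a small population, by exact
# enumeration of all binary spike words and plain gradient ascent.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 5                                                   # small population; exact enumeration
data = (rng.random((5000, n)) < 0.2).astype(float)      # placeholder binarized spike words

states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)  # all 2^n words

def model_moments(h, J):
    # Exact means <s_i> and pairwise moments <s_i s_j> under the maxent model.
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, np.einsum('k,ki,kj->ij', p, states, states)

mean_data = data.mean(axis=0)
corr_data = data.T @ data / len(data)

h = np.zeros(n)
J = np.zeros((n, n))
for step in range(2000):                                # gradient ascent on the log-likelihood
    mean_model, corr_model = model_moments(h, J)
    h += 0.1 * (mean_data - mean_model)
    J += 0.1 * (corr_data - corr_model)
    np.fill_diagonal(J, 0.0)                            # diagonal terms are absorbed by h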


17:40-17:50 Discussion


17:50-18:05 Break


18:05-18:35 Jonathan Pillow (UT Austin)

Encoding and decoding of neural population spiking activity using a generalized linear model

How does the joint spiking activity of a neural population encode a visual scene? Although the response properties of individual neurons in the visual pathway have been well-studied, much less is known about the dependencies between neurons and their role in the processing of visual stimuli. In this talk, I will discuss a model-based approach to understanding the neural code in populations of spiking neurons. A multivariate point-process model captures both the stimulus-dependence and the spatio-temporal correlation structure of responses from a complete population of retinal ganglion cells, and can be used to assess the importance of correlations via Bayesian decoding of population spike responses. I will discuss the implications of this framework for studying the role of correlated activity in the encoding and decoding of sensory signals.
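
A minimal sketch of a population GLM in this spirit (placeholder data and shapes; a single coupling lag stands in for the full set of stimulus, spike-history, and coupling filters of the actual model) fits a Poisson regression for one neuron on the stimulus and the other neurons' recent spiking:

# Minimal Poisson GLM sketch: predict one neuron's spike counts from the
# stimulus and the other neurons' spikes in the previous time bin.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)
n_bins, n_neurons, stim_dim, lag = 5000, 10, 20, 1
stimulus = rng.standard_normal((n_bins, stim_dim))      # placeholder stimulus features
spikes = rng.poisson(0.5, size=(n_bins, n_neurons))     # placeholder population spike counts

target = 0                                              # neuron whose response we model
others = np.delete(spikes, target, axis=1)
X = np.hstack([stimulus[lag:], others[:-lag]])          # current stimulus + others' past spikes
y = spikes[lag:, target]

glm = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)
rate = glm.predict(X)                                   # predicted spike rate per time bin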


18:35-18:45 Discussion


18:45-19:30 Panel discussion with all speakers
