

“Deep learning” and the brain: Promises and limitations of using deep neural networks as a tool for neuroscience – Day 1

Monday, 27 February 2017

Wasatch


Organizers: Benjamin Grewe, Blake Richards, Alex Kell

Co-Organizers: Daniel Yamins, Timothy Lillicrap, Amelia Christensen


Deep learning has revolutionized the field of artificial intelligence, providing unprecedented performance on many real-world tasks such as image or speech recognition. Meanwhile, in systems and cognitive neuroscience, deep artificial neural networks (ANNs) are rapidly advancing the analysis of large-scale neural datasets. Convolutional networks, for example, predict stimulus-evoked visual and auditory cortical responses substantially better than previous methods. Given that deep neural networks were inspired by biological neural networks, it has been speculated that (1) deep networks can provide greater insight into experimental data than existing models, and (2) deep learning algorithms may approximate how learning occurs in the real brain.

However, the principles of computation in deep ANNs remain poorly understood, making their application to data analysis complicated. Moreover, many facets of deep learning algorithms in ANNs are biologically implausible, such as the large amount of labeled data required for training and the non-local synaptic weight updates. Unlike learning in deep ANNs, learning in the real brain emerges from a complex interplay of diverse feedback processes acting at multiple scales, ranging from single synapses to entire networks. Furthermore, deep networks in artificial intelligence applications are crafted for very specific tasks, whereas the mammalian brain is a highly general-purpose learning machine. A large gap therefore still exists between our understanding of learning in deep ANNs and the operations of the brain.

The overall goal of this workshop is to bring together scientists from the fields of machine learning and computational/systems neuroscience to discuss the principles and applications of deep learning in neuroscience. It will provide a forum for discussion among neuroscientists using ANNs for neural data analysis, machine learning researchers using ANNs for artificial intelligence, and any researcher interested in linking deep learning in ANNs and biological neural networks. This will help to lay the groundwork for experimental and modeling research in the next decade that seeks to bridge the current gap between deep learning and neuroscience.


Day 1: Understanding neural representations with deep neural networks – progress & limitations

Recent advances in ANNs have yielded substantial improvements in computer vision, automated speech recognition, and related fields. As a result, for the first time in the history of neuroscience, we have stimulus-computable models that achieve human-level performance on certain perceptual tasks (e.g., image classification and word recognition). In addition to behaving similarly to humans, convolutional neural networks predict visual and auditory cortical responses to natural stimuli better than any existing alternatives, and recurrent networks recapitulate aspects of the dynamics of primate motor and prefrontal cortex. Ongoing advances in recurrent neural networks and deep reinforcement learning may yield promising models of other brain systems.

Despite this success and promise, there are reasons to be circumspect. These networks are generally trained on classification tasks, yet perception is far richer than classification. They typically require massive amounts of labeled training data, whereas humans likely require far less. Many of these networks are fooled by so-called “adversarial images”: subtly perturbed inputs that human observers find nearly indistinguishable from the originals. Furthermore, while these networks can predict cortical responses quite well, gaining intuition about the computations implemented by the model units has proven difficult.

In this session we will explore the promise and limitations of using deep neural networks as a tool to understand large-scale neural data, and discuss a number of open questions. What are good tests to falsify a given deep network model of a neural system? What additional constraints might allow these models to overcome such falsifications? Given that we have the full connectome and the responses of every unit in these models, what does it mean to “understand” a neural network (either biological or artificial)? Day 1 of the workshop will provide the opportunity to explore these and other questions.


Morning session

8.00–8.10a Introduction by Alex Kell and Dan Yamins

8.10–8.40a Nikolaus Kriegeskorte, Testing complex brain-computational models to understand how the brain works

8.40–8.55a Michael Oliver, Convolutional models of the ventral stream

8.55–9.25a Marcel van Gerven, ANN-based prediction of neural and behavioural responses in humans

9.25–9.45a Coffee break

9.45–10.00a Alex Kell, Hierarchical computation in human auditory cortex revealed by deep neural networks

10.00–10.30a Matthias Bethge, What neuroscience can learn from computer vision

10.30–10.45a Niru Maheswaranathan and Lane McIntosh, Deep learning models of the retinal response to natural scenes

Afternoon session

4.30–5.00p Konrad Körding, Problems with the non-deep-learning based approach to neuroscience and how to fix them

5.00–5.15p Olivier Henaff, Geodesics of artificial and biological representations

5.15–5.45p Daniel Yamins, Some new, less heavily supervised, loss functions for training computational models of the visual system

5.45–6.05p Coffee break

6.05–6.20p SueYeon Chung, Manifold representations in deep networks

6.20–6.50p Friedemann Zenke, Learning in spiking neural networks

6.50–7.05p Ingmar Kanitscheider, Hippocampal coding arises from probabilistic self-localization across many ambiguous environments


