Sparse codes for natural sounds

Video Link: https://www.youtube.com/watch?v=uvrIfb9_qzQ



Duration: 53:50


Google Tech Talks
May 20, 2008

ABSTRACT

The auditory neural code must serve a wide range of tasks that
require great sensitivity in time and frequency and be effective over the
diverse array of sounds present in natural acoustic environments. It has
been suggested (Barlow, 1961; Atick, 1992; Simoncelli & Olshausen, 2001;
Laughlin & Sejnowski, 2003) that sensory systems might have evolved highly
efficient coding strategies to maximize the information conveyed to the
brain while minimizing the required energy and neural resources. In this
talk, I will show that, for natural sounds, the complete acoustic waveform
can be represented efficiently with a nonlinear model based on a population
spike code. In this model, idealized spikes encode the precise temporal
positions and magnitudes of underlying acoustic features. We find that when
the features are optimized for coding either natural sounds or speech, they
show striking similarities to time-domain cochlear filter estimates, have a
frequency-bandwidth dependence similar to that of auditory nerve fibers, and
yield significantly greater coding efficiency than conventional signal
representations. These results indicate that the auditory code might
approach an information theoretic optimum and that the acoustic structure of
speech might be adapted to the coding capacity of the mammalian auditory
system.
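
As a rough sketch of the model class described above (notation assumed here, not quoted from the talk), the waveform x(t) is decomposed into a small set of acoustic features, or kernels, \phi_m, each placed at a spike time \tau_i^m with a spike magnitude s_i^m:

    x(t) = \sum_{m=1}^{M} \sum_{i=1}^{n_m} s_i^m \, \phi_m(t - \tau_i^m) + \epsilon(t)

Each idealized spike therefore carries which kernel fired, when, and how strongly, while \epsilon(t) absorbs the residual; efficiency comes from needing far fewer spikes than waveform samples.

Below is a minimal Python sketch of how such a spike code might be computed by greedy matching pursuit over a fixed kernel dictionary; the function name, stopping rule, and kernel choice are illustrative assumptions rather than the exact algorithm from the talk, in which the kernel shapes themselves are optimized for natural sounds or speech.

    import numpy as np

    def encode_spikes(x, kernels, n_spikes=100):
        # Greedy matching pursuit over a list of unit-norm 1-D kernels.
        # Returns idealized "spikes" as (kernel index, time offset, amplitude)
        # tuples plus the remaining residual waveform.
        residual = np.asarray(x, dtype=float).copy()
        spikes = []
        for _ in range(n_spikes):
            best = None
            for m, phi in enumerate(kernels):
                # Cross-correlate the residual with each kernel and keep the
                # single best-matching (kernel, offset) pair this iteration.
                corr = np.correlate(residual, phi, mode="valid")
                tau = int(np.argmax(np.abs(corr)))
                if best is None or abs(corr[tau]) > abs(best[2]):
                    best = (m, tau, float(corr[tau]))
            m, tau, amp = best
            # Subtract the chosen feature and record one spike.
            residual[tau:tau + len(kernels[m])] -= amp * kernels[m]
            spikes.append((m, tau, amp))
        return spikes, residual

A natural toy dictionary would be a bank of gammatone-like kernels spanning different center frequencies; the talk's point is that when the kernels are learned rather than fixed, they come to resemble time-domain cochlear filter estimates.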

Speaker: Vivienne Ming
Vivienne Ming was born in 1971 in Pasadena, CA. She received
her B.S. (2000) in Cognitive Neuroscience from UC San Diego, developing face
and expression recognition systems in the Machine Perception Lab. She earned
her M.A. (2003) and Ph.D. (2006) in Psychology from Carnegie Mellon
University along with a doctoral training degree in computational
neuroscience from the Center for the Neural Basis of Cognition. Her
dissertation, *Efficient auditory coding*, combined computational and
behavioral approaches to study the perception of natural sounds, including
speech. Since 2006, she has worked jointly as a junior fellow and
post-doctoral researcher at the Redwood Center for Theoretical Neuroscience
at UC Berkeley and MBC (Mind, Brain & Cognition) at Stanford University.







Tags: google, techtalks, techtalk, engedu, talk, talks, googletechtalks, education