Optimization for Machine Learning

Video Link: https://www.youtube.com/watch?v=_U2Sn67Yrf0



Duration: 55:45


Google Tech Talks
March 25, 2008

ABSTRACT

S.V.N. Vishwanathan - Research Scientist

Regularized risk minimization is at the heart of many machine learning algorithms. The underlying objective function to be minimized is convex, and often non-smooth; classical optimization algorithms cannot handle it efficiently. In this talk we present two algorithms for dealing with convex non-smooth objective functions. First, we extend the well-known BFGS quasi-Newton algorithm to handle non-smooth functions. Second, we show how bundle methods can be applied in a machine learning context. We present both theoretical and experimental justification for our algorithms.
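To make the setting concrete: a minimal sketch (not one of the talk's algorithms) of plain subgradient descent on an L2-regularized hinge-loss objective — exactly the kind of convex, non-smooth regularized risk the abstract refers to. The toy data, regularization constant, and 1/(lambda*t) step-size schedule are illustrative assumptions.

```python
import numpy as np

def regularized_risk(w, X, y, lam):
    """J(w) = lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w))) — convex, non-smooth."""
    margins = 1.0 - y * (X @ w)
    return 0.5 * lam * (w @ w) + np.mean(np.maximum(0.0, margins))

def subgradient(w, X, y, lam):
    """One valid subgradient of J at w; the hinge is non-differentiable at margin 1."""
    margins = 1.0 - y * (X @ w)
    active = (margins > 0).astype(float)          # points with positive hinge loss
    return lam * w - (X.T @ (active * y)) / len(y)

def subgradient_descent(X, y, lam=0.1, steps=200):
    """Subgradient descent with a decaying 1/(lam * t) step size (an illustrative choice)."""
    w = np.zeros(X.shape[1])
    for t in range(1, steps + 1):
        w -= (1.0 / (lam * t)) * subgradient(w, X, y, lam)
    return w

# Tiny made-up linearly separable problem for demonstration.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = subgradient_descent(X, y)
```

Unlike gradient descent on a smooth objective, the iterates here need not decrease the objective monotonically, which is part of why the talk's more sophisticated quasi-Newton and bundle approaches are of interest.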

Speaker: S.V.N. Vishwanathan - Research Scientist - Zurich
S.V.N. Vishwanathan is a principal researcher in the Statistical Machine Learning program at National ICT Australia, with an adjunct appointment at the College of Engineering and Computer Science (CECS), Australian National University. He received his Ph.D. in 2002 from the Department of Computer Science and Automation (CSA) at the Indian Institute of Science.


Tags:
google
techtalks
techtalk
engedu
talk
talks
googletechtalks
education