Generalization Bounds and Consistency for Latent-Structural Probit and Ramp Loss

Subscribers: 344,000
Published on: 2016-07-27
Video Link: https://www.youtube.com/watch?v=_kMwKHfipZM
Duration: 47:41
Views: 182
Likes: 4

Linear predictors are scale-insensitive: the prediction does not change when the weight vector defining the predictor is scaled up or down. This implies that direct regularization of the performance of a linear predictor with a scale-sensitive regularizer (such as a norm of the weight vector) is meaningless. Linear predictors are typically learned by introducing a scale-sensitive surrogate loss function, such as the hinge loss of an SVM. However, no convex surrogate loss function can be consistent in general; in particular, in finite dimensions SVMs are not consistent. Here we generalize probit loss and ramp loss to the latent-structural setting and show that both of these loss functions are consistent in arbitrary dimension for an arbitrary bounded task loss. Empirical experience with probit loss and ramp loss will be briefly discussed.
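For intuition about the scale behavior described above, here is a minimal NumPy sketch of the binary-classification analogues of the losses mentioned in the abstract: hinge loss, ramp loss (hinge truncated at 1), and a Monte Carlo estimate of probit loss (expected 0-1 loss under Gaussian perturbation of the weights). These simplified definitions, function names, and example values are illustrative assumptions, not the latent-structural formulations analyzed in the talk.

    import numpy as np

    def hinge_loss(w, x, y):
        # Convex SVM surrogate: unbounded and sensitive to rescaling of w.
        return max(0.0, 1.0 - y * float(np.dot(w, x)))

    def ramp_loss(w, x, y):
        # Hinge loss truncated at 1: bounded in [0, 1] but non-convex.
        return min(1.0, hinge_loss(w, x, y))

    def probit_loss(w, x, y, sigma=1.0, n_samples=100_000, seed=0):
        # Monte Carlo estimate of the expected 0-1 loss when the weight
        # vector is perturbed by isotropic Gaussian noise of scale sigma.
        rng = np.random.default_rng(seed)
        noise = rng.normal(scale=sigma, size=(n_samples, len(w)))
        margins = y * ((w + noise) @ x)
        return float(np.mean(margins <= 0.0))

    # Rescaling w leaves the prediction sign(w . x) unchanged,
    # but the surrogate losses respond very differently.
    w = np.array([2.0, -1.0])
    x = np.array([0.5, 0.3])
    y = +1
    for scale in (1.0, 10.0):
        print(scale,
              hinge_loss(scale * w, x, y),
              ramp_loss(scale * w, x, y),
              probit_loss(scale * w, x, y))

In this sketch, scaling w up never changes the prediction, yet it can drive the hinge loss to zero (or make it arbitrarily large on misclassified points); the ramp loss stays bounded in [0, 1], and the probit estimate tends toward the underlying 0-1 task loss as the margin grows relative to the noise. This is the scale behavior, under these assumed definitions, that the abstract's contrast between convex and non-convex surrogates is pointing at.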




Other Videos By Microsoft Research


2016-07-27  Gaussian Processes for Inference with Implicit Likelihoods
2016-07-27  3D Object Tracking for Augmented Reality: Handling Multiple Objects, Motion-Blur, & Lack of Texture
2016-07-27  Robust Shallow Semantic Parsing of Text
2016-07-27  Enabling Trustworthy Users
2016-07-27  Video-based In Situ Tagging for Mobile Augmented Reality
2016-07-27  Metric Learning and Manifolds: Preserving the Intrinsic Geometry
2016-07-27  Making money with "free" apps
2016-07-27  Collecting a Heap of Shapes
2016-07-27  Automatically Assessing Personality from Speech
2016-07-27  Ten User Experience Best Practices for Windows Phone Application Development
2016-07-27  Generalization Bounds and Consistency for Latent-Structural Probit and Ramp Loss
2016-07-27  Structured Prediction in NLP: Dual Decomposition and Structured Sparsity
2016-07-27  High Availability for Database Systems in Cloud Computing Environments
2016-07-27  Batches: Unified and Efficient Access to RPC, WS, and SQL Services
2016-07-27  Reliable Multithreading through Schedule Memoization
2016-07-27  Generalized Oblivious Transfer (GOT)
2016-07-27  From Contextual Search to Automatic Content Generation: Scaling Human Editorial Judgment
2016-07-27  Bound Analysis of Imperative Programs with the Size-change Abstraction
2016-07-27  A mobile context monitoring platform for dynamic mobile computing environments
2016-07-27  Privacy Amplification and Non-Malleable Extractors Via Character Sums
2016-07-27  Visualization Clusters: from Tiled Displays to Remote Visualization



Tags:
microsoft research