Unified Dimensionality Reduction: Formulation, Solution and Beyond

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=ydZFQ6QHoho



Duration: 1:04:05
106 views


In this talk, I will address the feature dimensionality reduction problem within a unified framework from three aspects.

1) Graph Embedding and Extensions: a unified framework for general dimensionality reduction. Over the past decades, a large family of algorithms (supervised or unsupervised, stemming from statistics or from geometry) has been designed to provide different solutions to the dimensionality reduction problem. Despite their different motivations, I present a general formulation known as graph embedding that unifies them in a common framework. Under graph embedding, each algorithm can be viewed as the direct graph embedding, or its linear/kernel/tensor extension, of a specific intrinsic graph that characterizes a desired statistical or geometric property of the data set. Furthermore, the graph embedding framework can serve as a general platform for developing new dimensionality reduction algorithms, which is validated with an example algorithm called Marginal Fisher Analysis (MFA).

2) Trace Ratio: a unified solution for general dimensionality reduction. A large family of dimensionality reduction algorithms ends with solving a Trace Ratio problem of the form $\arg\max_{W} Tr(W^T S_p W) / Tr(W^T S_l W)$, which is usually transformed into the corresponding Ratio Trace form $\arg\max_{W} Tr[(W^T S_l W)^{-1}(W^T S_p W)]$ to obtain a closed-form but inexact solution. I propose an efficient iterative procedure to solve the Trace Ratio problem directly. In each step, a Trace Difference problem $\arg\max_{W} Tr[W^T (S_p - \lambda S_l) W]$ is solved, with $\lambda$ being the trace ratio value computed in the previous step (a sketch of this iteration is given below). Convergence of the projection matrix W, as well as global optimality of the trace ratio value $\lambda$, is proved based on point-to-set map theory.

3) Element Rearrangement for Promoting Tensor Subspace Learning. I will introduce an algorithm that promotes tensor-based subspace learning by rearranging the element positions within a tensor. Monotonic convergence of the algorithm is proved using an auxiliary function analogous to the one used to prove convergence of the Expectation-Maximization algorithm.
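The trace-difference iteration in part 2 is described concretely enough to sketch. The following NumPy snippet is not code from the talk; it is a minimal illustration assuming symmetric scatter matrices S_p and S_l and the usual orthonormality constraint W^T W = I, under which each trace-difference step $\arg\max_{W} Tr[W^T (S_p - \lambda S_l) W]$ is solved by the d leading eigenvectors of $S_p - \lambda S_l$. The function name, initialization, and stopping rule are illustrative choices.

import numpy as np

def trace_ratio(S_p, S_l, d, n_iter=100, tol=1e-10):
    # Iteratively maximize Tr(W^T S_p W) / Tr(W^T S_l W) over W with W^T W = I.
    # Each step solves the trace-difference problem argmax_W Tr[W^T (S_p - lam*S_l) W];
    # under the orthonormality assumption its solution is the d leading
    # eigenvectors of the symmetric matrix S_p - lam * S_l.
    n = S_p.shape[0]
    W = np.linalg.qr(np.random.randn(n, d))[0]        # arbitrary orthonormal start
    lam = np.trace(W.T @ S_p @ W) / np.trace(W.T @ S_l @ W)
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(S_p - lam * S_l)  # symmetric eigendecomposition
        W = vecs[:, np.argsort(vals)[::-1][:d]]       # top-d eigenvectors
        lam_new = np.trace(W.T @ S_p @ W) / np.trace(W.T @ S_l @ W)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return W, lam

Unlike the Ratio Trace relaxation, which has a closed-form solution via a generalized eigenvalue decomposition but only approximates the objective, this iteration targets the trace ratio itself; per the abstract, the talk proves convergence of W and global optimality of the resulting $\lambda$.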




Other Videos By Microsoft Research


2016-09-06Provably Optimal Solutions to Geometric Vision Problems
2016-09-06Interaction Design Projects for Health and Wellness
2016-09-06Congestion Games: Optimization in Competition
2016-09-06Bayesian topic models
2016-09-06A Passion for Calendars -- From the Maya to Mars
2016-09-06Persuasive Games: The Expressive Power of Videogames           
2016-09-06In-Network, Physical Adaptation of Sensor Networks
2016-09-06Secure Virtual Architecture: A Novel Foundation for Operating System Security
2016-09-06Engineering Performance Using Control Theory: A One Day How-To: Theory Part 2
2016-09-06Effective Scientific Data Management through Provenance Collection
2016-09-06Unified Dimensionality Reduction: Formulation, Solution and Beyond
2016-09-06Engineering Performance Using Control Theory: A How-To: Control Analysis & Real world applications
2016-09-06A Real-World Test-bed for Mobile Adhoc Networks: Methodology, Experimentations, Simulation & Results
2016-09-06Fusion of Optical and Radio Frequency Techniques: Cameras, Projectors and Wireless Tags
2016-09-06Hierarchical Phrase-Based Translation with Suffix Arrays.
2016-09-06Multi-stack automata reachability: A New Tractable Subclass
2016-09-06Seduced by Success: How the Best Companies Survive the 9 Traps of Winning          
2016-09-06Everything is Miscellaneous: The Power of the New Digital Disorder
2016-09-06Accelerating High Performance Computing Applications with Reconfigurable Logic
2016-09-06Cooperative Data and Computation Partitioning for Distributed Architectures
2016-09-06Rate Control Protocol (RCP): Congestion Control to Make Flows Complete Quickly



Tags:
microsoft research