Learning over sets, subgraphs, and streams: How to accurately incorporate graph context

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=DnIo0mu0zLU
Category: Guide
Duration: 57:36
Views: 2,218
Likes: 82

MSR AI Distinguished Talk Series: Learning over sets, subgraphs, and streams: How to accurately incorporate graph context in network models

Although deep learning methods have been successfully applied in structured domains such as images and natural language, it is still difficult to apply them directly to graph and network domains due to issues of heterogeneity and long-range dependence. In this talk, I will discuss some of our recent work developing neural network methods for complex network domains, including node classification, motif prediction, and knowledge graph inference. The key insights include incorporating dependencies from graph context into both the input features and the model architectures, employing randomization to learn permutation-invariant functions over sets, and using graph-aware attention mechanisms to offset noise when incorporating higher-order patterns. Experiments on real-world social network data show that our methods produce significant gains compared to other state-of-the-art methods, but only when we think carefully about how to integrate relational inductive biases into the process.
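One of the insights mentioned above is learning permutation-invariant functions over sets. A minimal sketch of the standard sum-pooling construction (in the style of DeepSets, not code from the talk; the weights and dimensions here are illustrative assumptions) shows why pooling makes the output independent of element order:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random weights for a per-element encoder (phi)
# and a readout (rho); dimensions are arbitrary choices.
W_phi = rng.normal(size=(4, 8))   # maps each element (dim 4) to dim 8
W_rho = rng.normal(size=(8,))     # maps the pooled embedding to a scalar

def set_function(X):
    """Permutation-invariant score for a set of row vectors X (n x 4).

    phi is applied to every element independently; sum pooling over
    elements discards their order; rho reads out a scalar.
    """
    phi = np.tanh(X @ W_phi)      # (n, 8) per-element embeddings
    pooled = phi.sum(axis=0)      # (8,)  order-independent pooling
    return float(pooled @ W_rho)  # scalar readout

X = rng.normal(size=(5, 4))       # a toy "set" of 5 node feature vectors
perm = rng.permutation(5)
# Shuffling the set's elements does not change the output.
assert np.isclose(set_function(X), set_function(X[perm]))
```

Because the sum commutes with any reordering of the rows, any function of the pooled embedding is invariant to permutations of the input set; max or mean pooling would work the same way.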

See more at https://www.microsoft.com/en-us/research/video/learning-over-sets-subgraphs-and-streams-how-to-accurately-incorporate-graph-context/




Other Videos By Microsoft Research


2020-05-26  Kristin Lauter's TED Talk on Private AI at Congreso Futuro during Panel 11 / SOLVE
2020-05-19  How an AI agent can balance a pole using a simulation
2020-05-19  How to build Intelligent control systems using new tools from Microsoft and simulations by Mathworks
2020-05-13  Diving into Deep InfoMax with Dr. Devon Hjelm | Podcast
2020-05-08  An Introduction to Graph Neural Networks: Models and Applications
2020-05-07  MSR Cambridge Lecture Series: Photonic-chip-based soliton microcombs
2020-05-07  Multi-level Optimization Approaches to Computer Vision
2020-05-05  How good is your classifier? Revisiting the role of evaluation metrics in machine learning
2020-05-05  Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes
2020-05-05  Hypergradient descent and Universal Probabilistic Programming
2020-05-04  Learning over sets, subgraphs, and streams: How to accurately incorporate graph context
2020-05-04  An Ethical Crisis in Computing?
2020-04-21  Presentation on “Beyond the Prototype” by Rushil Khurana
2020-04-20  Understanding and Improving Database-backed Applications
2020-04-20  Efficient Learning from Diverse Sources of Information
2020-04-08  Project Orleans and the distributed database future with Dr. Philip Bernstein | Podcast
2020-04-07  Reprogramming the American Dream: A conversation with Kevin Scott and J.D. Vance, with Greg Shaw
2020-04-01  An interview with Microsoft President Brad Smith | Podcast
2020-03-30  Microsoft Rocketbox Avatar library
2020-03-27  Virtual reality without vision: A haptic and auditory white cane to navigate complex virtual worlds
2020-03-26  Statistical Frameworks for Mapping 3D Shape Variation onto Genotypic and Phenotypic Variation



Tags:
deep learning methods
structured domains
heterogeneity
node classification
motif prediction
knowledge graph inference
Jennifer Neville
Paul Bennett
Debadeepta Dey
Sean Andrist