Learning to Represent Programs with Graphs | TDLS

Video link: https://www.youtube.com/watch?v=Q66iNWiHt_U

Duration: 1:15:49


Toronto Deep Learning Series, 25 June 2018

For slides and more information, visit https://tdls.a-i.science/events/2018-06-25/

Paper Review: https://arxiv.org/abs/1711.00740

Speaker: https://www.linkedin.com/in/amirfz/
Organizer: https://www.linkedin.com/in/amirfz/

Host: http://www.rbc.com/futuremakers/

Paper abstract:
"Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects."
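The abstract's core mechanism — typed edges over program elements, propagated with a Gated Graph Neural Network — can be sketched in a few lines. Everything below is illustrative, not the paper's implementation: the toy 3-node graph, the edge types ("Child", "NextToken", named after the paper's edge labels), the state dimension, and all weights are invented for the example; a GGNN layer is one round of per-edge-type linear messages followed by a GRU update of each node state.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # node-state dimension (illustrative)

# Toy program graph: 3 nodes (e.g. AST nodes) with typed, directed edges.
# Edge types echo the paper's labels, but this particular graph is invented.
edges = {
    "Child":     [(0, 1), (0, 2)],
    "NextToken": [(1, 2)],
}

# One message weight matrix per edge type, as in GGNNs.
W = {t: rng.normal(scale=0.1, size=(D, D)) for t in edges}

# GRU parameters (update gate z, reset gate r, candidate state h_tilde).
Wz, Uz = rng.normal(scale=0.1, size=(D, D)), rng.normal(scale=0.1, size=(D, D))
Wr, Ur = rng.normal(scale=0.1, size=(D, D)), rng.normal(scale=0.1, size=(D, D))
Wh, Uh = rng.normal(scale=0.1, size=(D, D)), rng.normal(scale=0.1, size=(D, D))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h):
    """One round of typed message passing followed by a GRU state update."""
    m = np.zeros_like(h)
    for etype, pairs in edges.items():
        for src, dst in pairs:
            m[dst] += W[etype] @ h[src]   # typed message along each edge
    z = sigmoid(m @ Wz.T + h @ Uz.T)      # update gate
    r = sigmoid(m @ Wr.T + h @ Ur.T)      # reset gate
    h_tilde = np.tanh(m @ Wh.T + (r * h) @ Uh.T)
    return (1 - z) * h + z * h_tilde

h = rng.normal(size=(3, D))               # initial node states
for _ in range(4):                        # a few propagation rounds
    h = ggnn_step(h)
print(h.shape)  # (3, 4)
```

After several rounds, each node's state mixes in information from its typed neighborhood; the paper's VarMisuse and VarNaming heads are then read out from these final node states.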

Tags:
deep learning
graph neural net
machine learning on source code
graph neural network
neural code parsing
learning representation
graph convolutional networks