Learning to Construct and Reason with a Large Knowledge Base of Extracted Information

Video Link: https://www.youtube.com/watch?v=KFwLcbttFR8



Duration: 1:08:28


Carnegie Mellon University's "Never Ending Language Learner" (NELL) has been running for over three years and has automatically extracted millions of facts from the web, covering hundreds of thousands of entities and thousands of concepts. NELL works by coupling together many interrelated large-scale semi-supervised learning problems. In this talk I will discuss some of the technical problems we encountered in building NELL, and some of the issues involved in reasoning with this sort of large, diverse, and imperfect knowledge base. This is joint work with Tom Mitchell, Ni Lao, William Wang, and many other colleagues.
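The "coupling" of semi-supervised learners mentioned in the abstract can be illustrated with a toy sketch. This is not NELL's actual code; all category names, candidates, and scores below are invented for illustration. The idea shown is that a candidate is promoted to a category only when its evidence there clearly beats the evidence for mutually exclusive categories, so the coupled learners keep each other from drifting.

```python
# Toy sketch (illustrative only, not NELL's implementation):
# coupled semi-supervised bootstrapping with a mutual-exclusion
# constraint between two hypothetical categories.

# Hypothetical co-occurrence evidence: candidate -> {category: score}.
evidence = {
    "pittsburgh": {"city": 3, "company": 0},
    "microsoft":  {"city": 0, "company": 4},
    "amazon":     {"city": 2, "company": 3},  # ambiguous candidate
}

seeds = {"city": {"london"}, "company": {"google"}}

def promote(evidence, seeds, margin=2):
    """Promote a candidate to its best-scoring category only if that
    score exceeds every mutually exclusive category's score by at
    least `margin`. This coupling blocks ambiguous promotions."""
    kb = {cat: set(insts) for cat, insts in seeds.items()}
    for cand, scores in evidence.items():
        best = max(scores, key=scores.get)
        others = [v for c, v in scores.items() if c != best]
        if scores[best] - max(others) >= margin:
            kb[best].add(cand)
    return kb

kb = promote(evidence, seeds)
# "pittsburgh" and "microsoft" are promoted; "amazon" is withheld
# because its evidence does not clearly favor either category.
```

In a real system like NELL, many such learners (for categories and relations) run at web scale and share constraints, but the withholding of ambiguous candidates shown here is the essence of the coupling.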

Tags:
microsoft research