Explainable Neural Networks based on Additive Index Models | TDLS

Video Link: https://www.youtube.com/watch?v=J75cQmL7amA



Duration: 1:34:47


Toronto Deep Learning Series, 23 July 2018

For slides and more information, visit https://tdls.a-i.science/events/2018-07-23/

Paper Review: https://arxiv.org/abs/1806.01933

Speaker: https://www.linkedin.com/in/hassan-omidi-firouzi-4b423337/
Organizer: https://www.linkedin.com/in/amirfz/

Host: http://rbc.com/futuremakers/

Paper abstract:
"Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature-engineering property on simulated examples."
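The architecture the abstract describes is an additive index model: the output is a sum of ridge functions, each applied to a one-dimensional linear projection of the inputs, so every learned feature can be read off and plotted. The sketch below is a minimal numpy forward pass of this structure with random (untrained) weights; the function names, shapes, and the tiny tanh subnetworks are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_subnetwork(z, W1, b1, w2, b2):
    """One ridge function h_k: a tiny 1-in/1-out MLP with a tanh hidden layer.
    (Hypothetical subnetwork shape; the paper uses small feedforward nets.)"""
    hidden = np.tanh(z[:, None] * W1 + b1)   # (n, hidden_units)
    return hidden @ w2 + b2                  # (n,)

def xnn_forward(X, betas, subnets, gammas, mu=0.0):
    """Additive index model: f(x) = mu + sum_k gamma_k * h_k(beta_k . x)."""
    out = np.full(X.shape[0], mu)
    for beta, params, gamma in zip(betas, subnets, gammas):
        z = X @ beta                  # 1-D projection (the "index")
        out += gamma * ridge_subnetwork(z, *params)
    return out

# Toy instantiation: 5 input features, 3 ridge functions, 4 hidden units each.
d, K, H, n = 5, 3, 4, 8
betas   = [rng.standard_normal(d) for _ in range(K)]
subnets = [(rng.standard_normal(H), rng.standard_normal(H),
            rng.standard_normal(H), 0.0) for _ in range(K)]
gammas  = rng.standard_normal(K)

X = rng.standard_normal((n, d))
y = xnn_forward(X, betas, subnets, gammas)
print(y.shape)  # one prediction per input row
```

Because each `h_k` depends on the inputs only through the scalar index `beta_k . x`, plotting `h_k` against that index gives a direct visualization of each learned feature, which is the interpretability property the abstract emphasizes.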




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2018-10-02 Prediction of Cardiac arrest from physiological signals in the pediatric ICU | TDLS Author Speaking
2018-09-24 Junction Tree Variational Autoencoder for Molecular Graph Generation | TDLS
2018-09-19 Reconstructing quantum states with generative models | TDLS Author Speaking
2018-09-13 All-optical machine learning using diffractive deep neural networks | TDLS
2018-09-05 Recurrent Models of Visual Attention | TDLS
2018-08-28 Eve: A Gradient Based Optimization Method with Locally and Globally Adaptive Learning Rates | TDLS
2018-08-20 TDLS: Large-Scale Unsupervised Deep Representation Learning for Brain Structure
2018-08-14 Principles of Riemannian Geometry in Neural Networks | TDLS
2018-08-07 Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond | TDLS
2018-07-30 Program Language Translation Using a Grammar-Driven Tree-to-Tree Model | TDLS
2018-07-23 Explainable Neural Networks based on Additive Index Models | TDLS
2018-07-18 TMLS2018 - Machine Learning in Production, Panel Discussion
2018-07-16 Flexible Neural Representation for Physics Prediction | AISC Trending Paper
2018-07-10 Connectionist Temporal Classification, Labelling Unsegmented Sequence Data with RNN | TDLS
2018-06-25 Learning to Represent Programs with Graphs | TDLS
2018-06-19 Quantum generative adversarial networks | TDLS Author Speaking
2018-06-12 [SAGAN] Self-Attention Generative Adversarial Networks | TDLS
2018-06-05 [ELMo] Deep Contextualized Word Representations | AISC
2018-05-23 Few-Shot Learning Through an Information Retrieval Lens | TDLS
2018-05-14 Improving Supervised Bilingual Mapping of Word Embeddings | TDLS
2018-05-01 [Word2Bits] Quantized Word Vectors | AISC



Tags:
deep learning
machine learning
explainability
interpretability
neural networks