How goodness metrics lead to undesired recommendations

Published on 2019-04-23 ● Video Link: https://www.youtube.com/watch?v=kqcJztcYw3k



Duration: 5:13
611 views


5-min ML Paper Challenge
Presenter: https://www.linkedin.com/in/serenamcdonnell/

Folding: Why Good Models Sometimes Make Spurious Recommendations
https://dl.acm.org/citation.cfm?id=3109911

In recommender systems based on low-rank factorization of a partially observed user-item matrix, a common phenomenon that plagues many otherwise effective models is the interleaving of good and spurious recommendations in the top-K results. A single spurious recommendation can dramatically impact the perceived quality of a recommender system. Spurious recommendations do not result in serendipitous discoveries but rather cognitive dissonance. In this work, we investigate folding, a major contributing factor to spurious recommendations. Folding refers to the unintentional overlap of disparate groups of users and items in the low-rank embedding vector space, induced by improper handling of missing data. We formally define a metric that quantifies the severity of folding in a trained system, to assist in diagnosing its potential to make inappropriate recommendations. The folding metric complements existing information retrieval metrics that focus on the number of good recommendations and their ranks but ignore the impact of undesired recommendations. We motivate the folding metric definition on synthetic data and evaluate its effectiveness on both synthetic and real-world datasets. In studying the relationship between the folding metric and other characteristics of recommender systems, we observe that optimizing for goodness metrics can lead to high folding and thus more spurious recommendations.
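
To make the folding phenomenon concrete, here is a minimal NumPy sketch (not the authors' code; the toy ratings matrix, the rank-1 model, and the gradient-descent loop are illustrative assumptions). Two disjoint user/item groups are factorized with a squared-error loss that simply ignores missing entries; nothing in that objective keeps the groups apart in the latent space, so a group-A user ends up with a high predicted score for a group-B item it has never interacted with.

import numpy as np

# Toy illustration of folding (not the paper's code): two disjoint user/item
# groups, a rank-1 factorization, and a loss that ignores unobserved entries.
n_users, n_items, rank = 4, 4, 1

R = np.full((n_users, n_items), np.nan)
R[:2, :2] = 5.0   # group A: users 0-1 rate items 0-1 highly
R[2:, 2:] = 4.0   # group B: users 2-3 rate items 2-3 highly
observed = ~np.isnan(R)

# Small positive initialization keeps the example deterministic.
U = np.full((n_users, rank), 0.1)
V = np.full((n_items, rank), 0.1)

lr = 0.05
for _ in range(2000):
    # Squared error only on observed entries; missing entries contribute nothing.
    err = np.where(observed, R - U @ V.T, 0.0)
    dU, dV = err @ V, err.T @ U
    U += lr * dU
    V += lr * dV

pred = U @ V.T
print("observed, group A    :", round(float(pred[0, 0]), 2))  # ~5.0
print("observed, group B    :", round(float(pred[2, 2]), 2))  # ~4.0
print("never observed (A->B):", round(float(pred[0, 2]), 2))  # ~4.5 -> folding

The spuriously high cross-group score pred[0, 2] is the kind of recommendation the paper's folding metric is designed to flag, and which standard goodness metrics (precision, recall, ranking of the observed positives) never see; per the abstract, handling missing data properly rather than ignoring it is what prevents the two groups from overlapping in the embedding space.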




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2019-05-02 Classification of sentiment reviews using n-gram machine learning approach
2019-05-02 Introduction to the Conditional GAN - A General Framework for Pixel2Pixel Translation
2019-05-02 A Style-Based Generator Architecture for Generative Adversarial Networks
2019-05-02 A Framework for Developing Deep Learning Classification Models
2019-05-02 Revolutionizing Diet and Health with CNN's and the Microbiome
2019-05-02 Efficient implementation of a neural network on hardware using compression techniques
2019-05-02 Supercharging AI with high performance distributed computing
2019-05-02 Combining Satellite Imagery and machine learning to predict poverty
2019-05-02 Revolutionary Deep Learning Method to Denoise EEG Brainwaves
2019-04-25 [LISA] Linguistically-Informed Self-Attention for Semantic Role Labeling | AISC
2019-04-23 How goodness metrics lead to undesired recommendations
2019-04-22 Deep Neural Networks for YouTube Recommendation | AISC Foundational
2019-04-18 [Phoenics] A Bayesian Optimizer for Chemistry | AISC Author Speaking
2019-04-18 Why do large batch sized trainings perform poorly in SGD? - Generalization Gap Explained | AISC
2019-04-16 Structured Neural Summarization | AISC Lunch & Learn
2019-04-11 Deep InfoMax: Learning deep representations by mutual information estimation and maximization | AISC
2019-04-08 ACT: Adaptive Computation Time for Recurrent Neural Networks | AISC
2019-04-04 [FFJORD] Free-form Continuous Dynamics for Scalable Reversible Generative Models (Part 1) | AISC
2019-04-01 [DOM-Q-NET] Grounded RL on Structured Language | AISC Author Speaking
2019-03-31 5-min [machine learning] paper challenge | AISC
2019-03-28 [Variational Autoencoder] Auto-Encoding Variational Bayes | AISC Foundational



Tags:
deep learning
machine learning
recommender system
recommendation engine
recommender systems
matrix factorization
alternating least squares
collaborative filtering
folding