Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models

Subscribers: 343,000
Video Link: https://www.youtube.com/watch?v=GArXoRECXFU
Duration: 39:19
Views: 401

Research Talk
Yongfeng Zhang, Rutgers University

As the bridge between humans and AI, recommender systems sit at the frontier of human-centered AI research. However, inappropriate use or development of recommendation techniques can harm users and society at large: users may distrust a recommendation mechanism that is not transparent, recommendation algorithms may be unfair, users may have little control over the system, and the extensive use of private data for personalization creates privacy risks. In this talk, we discuss how to build trustworthy recommender systems as recommendation algorithms advance from shallow models to deep models to large models. Topics include, but are not limited to, the unique role of recommender system research in the AI community as a representative Subjective AI task; the relationship between Subjective AI and trustworthy computing; and representative recommendation methods for different aspects of trustworthy computing, such as causal and counterfactual reasoning, neural-symbolic modeling, natural language explanations, federated learning, user-controllable recommendation, echo chamber mitigation, personalized prompt learning, and beyond.
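The talk itself is not transcribed here, but as background for the "shallow models" the title refers to, classic matrix factorization is the standard example. The sketch below is a toy illustration under that assumption (it is not code from the talk): it learns low-rank user and item factors from observed ratings with SGD, so that missing entries can be predicted.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.01, reg=0.1, epochs=2000, seed=0):
    """Fit R ~= U @ V.T on observed entries (mask) via SGD with L2 regularization."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
    V = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
    users, items = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - U[u] @ V[i]          # residual on one observed rating
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy 4-user x 3-item rating matrix; 0 marks an unobserved entry.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5],
              [0, 1, 4]], dtype=float)
mask = R > 0
U, V = factorize(R, mask)
pred = U @ V.T  # dense predictions, including the unobserved entries
```

Deep and large models replace the inner product `U[u] @ V[i]` with neural scoring functions or language-model prompts, which is part of what motivates the trustworthiness questions (transparency, fairness, control) raised in the abstract.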

Learn more about the Responsible AI Workshop: https://www.microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop/

This workshop was part of the Microsoft Research Summit 2022: https://www.microsoft.com/en-us/research/event/microsoft-research-summit-2022/




Other Videos By Microsoft Research


2022-12-12  Combinatorial Pure Exploration with Limited Observation and Beyond
2022-12-12  Oblivious Online Contention Resolution Schemes
2022-12-12  Optimization from Structured Samples for Coverage and Influence Functions
2022-12-12  End-to-end Reinforcement Learning for the Large-scale Traveling Salesman Problem
2022-12-12  Deep Reinforcement Learning in Supply Chain Optimizations
2022-12-12  Inverse Game Theory for Stackelberg Games: The Blessing of Bounded Rationality
2022-12-06  Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models?
2022-12-06  Responsible AI: An Interdisciplinary Approach | Panel Discussion
2022-12-06  Personalizing Responsibility within AI Systems: A Case for Designing Diversity
2022-12-06  Evidence-based Evaluation for Responsible AI
2022-12-06  Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models
2022-12-06  Development of a Game-Based Assessment to Measure Creativity
2022-12-06  Interpretability, Responsibility and Controllability of Human Behaviors
2022-12-06  On the Adversarial Robustness of Deep Learning
2022-12-06  The Long March Towards AI Fairness
2022-12-06  Towards Human Value Based Natural Language Processing (NLP)
2022-12-06  Responsible AI Research at Microsoft Research Asia
2022-12-06  Responsible AI Workshop | Opening Remarks
2022-12-06  Low-latency, Real-time Insights from Space
2022-12-06  Next Generation Networking and its Platform Workshop | Panel Discussion
2022-12-06  OpenNetLab: An Open Platform for RL-based Congestion Control for Real-Time Communication