Online, Opt-in Surveys: Fast, Cheap, and Mostly Accurate

Subscribers: 344,000
Published on: 2016-07-07
Video Link: https://www.youtube.com/watch?v=lCsgFGLRtqo



Duration: 29:29
386 views


IMS-Microsoft Research Workshop: Foundations of Data Science - Online, Opt-in Surveys: Fast, Cheap, and Mostly Accurate
Session Chair Intro: David Dunson, Duke University
Session: Computational Social Science
Speaker: David Rothschild, Microsoft Research
Talk: Online, Opt-in Surveys: Fast, Cheap, and Mostly Accurate

Abstract: We explore varying methods of survey data collection and of transforming raw survey data into answers. We reject the standard construct that survey data is either "probability" or "non-probability" and, consequently, accurate or inaccurate; all survey data collection falls on a continuum that runs from the theoretically perfect probability sample (or a complete census) to an extremely biased opt-in sample, and all raw survey data is transformed into a set of answers with methodology that runs from simple traditional weighting to modern statistically derived methods. The closer survey data collection is to a perfect probability sample, the more expensive it is to collect (in time and/or money), but the less data is necessary to reach an equivalent level of accuracy. We compare and contrast the results of four different surveys that ask a series of overlapping questions and whose data collection falls at different points on the continuum from extremely rigorous traditional probability sampling to fully opt-in sample design. We show that, on a series of general interest and public policy questions, the differences between properly transformed opt-in survey data and the traditional surveys are similar to the differences the traditional surveys have between themselves. Fast and cheap data collection does not produce ground truth, but it does produce data that we can transform into a mostly accurate set of answers.
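The "transformation" the abstract refers to ranges from simple traditional weighting to modern model-based methods. As a concrete illustration of the simplest end of that range, the sketch below shows raking (iterative proportional fitting), which adjusts an opt-in sample's weights so its weighted margins match known population margins. This is not the talk's actual pipeline; the column names, margins, and data are hypothetical.

```python
# Minimal raking sketch (illustrative, not the talk's method): weight an
# opt-in sample to assumed population margins, then compare raw vs. weighted answers.
import numpy as np
import pandas as pd

def rake(df, margins, weight_col="weight", max_iter=50, tol=1e-6):
    """Iterative proportional fitting: rescale weights until the weighted
    share of each level of each variable matches its population margin."""
    w = np.ones(len(df))
    for _ in range(max_iter):
        max_change = 0.0
        for var, target in margins.items():
            for level, share in target.items():
                mask = (df[var] == level).to_numpy()
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = share / current
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    out = df.copy()
    out[weight_col] = w * len(df) / w.sum()  # normalize to mean weight of 1
    return out

# Hypothetical opt-in sample, skewed young and male relative to the population.
sample = pd.DataFrame({
    "age":    ["18-34"] * 6 + ["35+"] * 4,
    "gender": ["M", "M", "M", "F", "M", "F", "M", "F", "F", "M"],
    "answer": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})

# Assumed population margins (e.g., from a census).
margins = {
    "age":    {"18-34": 0.30, "35+": 0.70},
    "gender": {"M": 0.49, "F": 0.51},
}

weighted = rake(sample, margins)
raw_estimate = sample["answer"].mean()
weighted_estimate = np.average(weighted["answer"], weights=weighted["weight"])
print(f"raw: {raw_estimate:.3f}  weighted: {weighted_estimate:.3f}")
```

The farther a sample sits from a probability sample, the more work the weights have to do and the more variance they introduce, which matches the talk's framing: cheap opt-in data can be transformed into mostly, not perfectly, accurate answers.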




Other Videos By Microsoft Research


2016-07-07 Deep Mathematical Properties of Submodularity with Applications to Machine Learning
2016-07-07 Social Computing Symposium 2016: Harassment, Threats, and Trolling Online Katherine Lo
2016-07-07 Social Computing Symposium 2016: Transgender Experiences with Online Harassment
2016-07-07 Measuring Sample Quality with Stein's Method
2016-07-07 Oral Session 2
2016-07-07 Edge Detection on a Computational Budget: A Sublinear Approach
2016-07-07 3rd Pacific Northwest Regional NLP Workshop: Afternoon Talks 2 & Closing Remarks
2016-07-07 Top Characteristics of Successful People in Technology and How Asians Can Rise Even Higher
2016-07-07 The gender pay gap in Computing and Engineering and Solutions in Moving the Needle
2016-07-07 Tutorial Session B - Learning to Interact
2016-07-07 Online, Opt-in Surveys: Fast, Cheap, and Mostly Accurate
2016-07-07 Predicting Travel Time Reliability using Mobile Phone GPS Data
2016-07-07 Do Neighborhoods Matter for Disadvantaged Families?
2016-07-07 Automatic Differentiation - A Revisionist History and the State of the Art - AD meets SDG and PLT
2016-07-07 Your looks are laughable, unphotographable, yet you're my favorite work of art ...
2016-07-07 Mechanisms Underlying Visual Object Recognition: Humans vs. Neurons vs. Machines
2016-07-07 Bounding Quantum Gate Error Rate Based on Reported Average Fidelity
2016-07-07 Oral Session 7
2016-07-07 Dense and Sparse Signal Detection in Genetic and Genomic Studies
2016-07-07 History of Women in Computing and Women Leaders in Computing
2016-07-07 Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits



Tags:
microsoft research