Robert Miles AI Safety

Views: 6,409,164
Subscribers: 148,000
Videos: 49
Total Duration: 7:34:43

Robert Miles AI Safety is a YouTube content creator with roughly 148 thousand subscribers. The channel's 49 published videos have accumulated at least 6.41 million views in total.

Channel Link: https://www.youtube.com/@RobertMilesAI

All Videos by Robert Miles AI Safety


Published | Video Title | Duration | Views | Game
2023-04-28 | Apply to Study AI Safety Now! #shorts | 1:00 | 42,816
2023-02-21 | My 3-Month fellowship to write about AI Safety! #shorts | 1:00 | 3,530
2022-12-09 | Why Does AI Lie, and What Can We Do About It? | 9:24 | 246,832
2022-11-11 | Apply Now for a Paid Residency on Interpretability #short | 0:45 | 16,484
2022-10-14 | $100,000 for Tasks Where Bigger AIs Do Worse Than Smaller Ones #short | 1:00 | 29,261
2022-05-24 | Free ML Bootcamp for Alignment #shorts | 0:52 | 19,005
2022-02-08 | Win $50k for Solving a Single AI Problem? #Shorts | 1:00 | 485,088
2021-11-19 | Apply to AI Safety Camp! #shorts | 1:00 | 25,893
2021-10-10 | We Were Right! Real Inner Misalignment | 11:47 | 241,694
2021-06-24 | Intro to AI Safety, Remastered | 18:05 | 146,843
2021-05-23 | Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... | 10:20 | 81,874
2021-02-16 | The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment | 23:24 | 219,015
2020-12-13 | Quantilizers: AI That Doesn't Try Too Hard | 9:54 | 83,001
2020-07-06 | Sharing the Benefits of AI: The Windfall Clause | 11:44 | 78,057
2020-06-04 | 10 Reasons to Ignore AI Safety | 16:29 | 334,821
2020-04-29 | 9 Examples of Specification Gaming | 9:40 | 302,986
2019-12-13 | Training AI Without Writing A Reward Function, with Reward Modelling | 17:52 | 227,750
2019-08-23 | AI That Doesn't Try Too Hard - Maximizers and Satisficers | 10:22 | 198,262
2019-05-16 | Is AI Safety a Pascal's Mugging? | 13:41 | 363,001
2019-03-31 | A Response to Steven Pinker on AI | 15:38 | 200,722
2019-03-11 | How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification | 11:32 | 163,019
2018-12-23 | Why Not Just: Think of AGI Like a Corporation? | 15:27 | 151,099
2018-09-21 | Safe Exploration: Concrete Problems in AI Safety Part 6 | 13:46 | 92,976
2018-06-24 | Friend or Foe? AI Safety Gridworlds extra bit | 3:47 | 40,234
2018-05-25 | AI Safety Gridworlds | 7:23 | 89,218
2018-03-31 | Experts' Predictions about the Future of AI | 6:47 | 78,581
2018-03-24 | Why Would AI Want to do Bad Things? Instrumental Convergence | 10:36 | 236,594
2018-02-13 | Superintelligence Mod for Civilization V | 1:04:40 | 68,677 | Civilization V
2018-01-11 | Intelligence and Stupidity: The Orthogonality Thesis | 13:03 | 647,198
2017-11-29 | Scalable Supervision: Concrete Problems in AI Safety Part 5 | 5:03 | 49,415
2017-11-16 | AI Safety at EAGlobal2017 Conference | 5:30 | 18,756
2017-10-29 | AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 | 5:20 | 47,392
2017-10-17 | What can AGI do? I/O and Speed | 10:41 | 115,086
2017-09-24 | What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4 | 9:38 | 109,736
2017-08-29 | Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5 | 7:32 | 88,235
2017-08-22 | The other "Killer Robot Arms Race" Elon Musk should worry about | 5:51 | 98,147
2017-08-12 | Reward Hacking: Concrete Problems in AI Safety Part 3 | 6:56 | 98,639
2017-07-22 | Why Not Just: Raise AI Like Kids? | 5:51 | 166,522
2017-07-09 | Empowerment: Concrete Problems in AI Safety part 2 | 6:33 | 65,473
2017-06-25 | Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5 | 3:23 | 50,724
2017-06-18 | Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 | 9:33 | 150,511
2017-06-17 | Robert Miles Live Stream | 0:00 | 0
2017-06-10 | Are AI Risks like Nuclear Risks? | 10:13 | 95,434
2017-05-27 | Respectability | 5:04 | 76,724
2017-05-18 | Predicting AI: RIP Prof. Hubert Dreyfus | 8:17 | 59,957
2017-04-27 | What's the Use of Utility Functions? | 7:04 | 63,796
2017-03-31 | Where do we go now? | 7:45 | 70,431
2017-03-18 | Status Report | 1:26 | 17,325
2017-02-28 | Channel Introduction | 1:05 | 52,330