Robert Miles AI Safety

Views: 6,614,248
Subscribers: 154,000
Videos: 50
Total duration: 8:20:41

Robert Miles AI Safety is a YouTube content creator with over 154 thousand subscribers. He has published 50 videos, which together have accumulated approximately 6.61 million views.

Created on:
Channel Link: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

Most Controversial Videos by Robert Miles AI Safety


Rank | Video Title | Rating | Category | Game
1 | Robert Miles Live Stream | 1 | |
2 | My 3-Month fellowship to write about AI Safety! #shorts | 336 | |
3 | Apply Now for a Paid Residency on Interpretability #short | 928 | |
4 | Status Report | 1,109 | |
5 | Free ML Bootcamp for Alignment #shorts | 1,266 | |
6 | AI Safety at EAGlobal2017 Conference | 1,204 | |
7 | Channel Introduction | 1,510 | |
8 | Apply to AI Safety Camp! #shorts | 1,821 | |
9 | Superintelligence Mod for Civilization V | 1,819 | | Civilization V
10 | $100,000 for Tasks Where Bigger AIs Do Worse Than Smaller Ones #short | 2,016 | |
11 | Friend or Foe? AI Safety Gridworlds extra bit | 2,050 | |
12 | Scalable Supervision: Concrete Problems in AI Safety Part 5 | 2,461 | |
13 | Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5 | 2,674 | |
14 | Apply to Study AI Safety Now! #shorts | 2,865 | |
15 | AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 | 2,991 | |
16 | Where do we go now? | 2,992 | |
17 | Empowerment: Concrete Problems in AI Safety part 2 | 3,267 | |
18 | What's the Use of Utility Functions? | 3,483 | |
19 | Predicting AI: RIP Prof. Hubert Dreyfus | 3,640 | |
20 | AI Safety Gridworlds | 3,864 | |
21 | Experts' Predictions about the Future of AI | 4,042 | |
22 | Respectability | 4,133 | |
23 | Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5 | 4,133 | |
24 | Safe Exploration: Concrete Problems in AI Safety Part 6 | 4,253 | |
25 | Reward Hacking: Concrete Problems in AI Safety Part 3 | 4,442 | |
26 | Are AI Risks like Nuclear Risks? | 4,642 | |
27 | What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4 | 4,699 | |
28 | Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 | 5,127 | |
29 | The other "Killer Robot Arms Race" Elon Musk should worry about | 5,640 | |
30 | Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... | 6,055 | |
31 | What can AGI do? I/O and Speed | 6,047 | |
32 | Quantilizers: AI That Doesn't Try Too Hard | 6,173 | |
33 | Sharing the Benefits of AI: The Windfall Clause | 7,002 | |
34 | Intro to AI Safety, Remastered | 7,647 | |
35 | Why Not Just: Think of AGI Like a Corporation? | 8,313 | |
36 | How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification | 9,274 | |
37 | Why Not Just: Raise AI Like Kids? | 9,808 | |
38 | AI That Doesn't Try Too Hard - Maximizers and Satisficers | 10,135 | |
39 | A Response to Steven Pinker on AI | 11,393 | |
40 | The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment | 13,443 | |
41 | Why Would AI Want to do Bad Things? Instrumental Convergence | 13,408 | |
42 | Training AI Without Writing A Reward Function, with Reward Modelling | 13,906 | |
43 | We Were Right! Real Inner Misalignment | 14,931 | |
44 | Why Does AI Lie, and What Can We Do About It? | 16,223 | |
45 | AI Ruined My Year | 16,552 | |
46 | 10 Reasons to Ignore AI Safety | 17,293 | |
47 | Win $50k for Solving a Single AI Problem? #Shorts | 19,101 | |
48 | 9 Examples of Specification Gaming | 21,464 | |
49 | Is AI Safety a Pascal's Mugging? | 21,005 | |
50 | Intelligence and Stupidity: The Orthogonality Thesis | 35,796 | |