Displaying episodes 1 - 30 of 39 in total
John Schulman, OpenAI cofounder and researcher and inventor of PPO/TRPO, talks RL from human feedback, tuning GPT-3 to follow instructions (InstructGPT) and answer long-fo...
Sven Mika of Anyscale on RLlib present and future, Ray and Ray Summit 2022, applied RL in Games / Finance / RecSys, and more!
Karol Hausman and Fei Xia of Google Research on newly updated (PaLM-)SayCan, Inner Monologue, robot learning, combining robotics with language models, and more!
Sai Krishna Gottipati of AI Redefined on RL for synthesizable drug discovery, Multi-Teacher Self-Play, Cogment framework for realtime multi-actor RL, AI + Chess, and m...
Aravind Srinivas, Research Scientist at OpenAI, returns to talk Decision Transformer, VideoGPT, choosing problems, and explore vs exploit in research careers
DeepMind Research Scientist Dr. Rohin Shah on Value Alignment, Learning from Human feedback, Assistance paradigm, the BASALT MineRL competition, his Alignment Newslett...
Jordan Terry on maintaining Gym and PettingZoo, hardware accelerated environments and the future of RL, environment models for multi-agent RL, and more!
Robert Lange on learning vs hard-coding, meta-RL, Lottery Tickets and Minimal Task Representations, Action Grammars and more!
Dr. Thomas Gilbert and Dr. Mark Nitzberg on the upcoming PERLS Workshop @ NeurIPS 2021
Amy Zhang shares her work on Invariant Causal Prediction for Block MDPs, Multi-Task Reinforcement Learning with Context-based Representations, MBRL-Lib, shares insight...
Xianyuan Zhan on DeepThermal for controlling thermal power plants, the MORE algorithm for Model-based Offline RL, comparing AI in China and the US, and more!
Eugene Vinitsky of UC Berkeley on social norms and sanctions, traffic simulation, mixed-autonomy traffic, and more!
Jess Whittlestone on societal implications of deep reinforcement learning, AI policy, warning signs of transformative progress in AI, and more!
Aleksandra Faust of Google Brain Research on AutoRL, meta-RL, learning to learn & learning to teach, curriculum learning, collaborations between senior and junior rese...
Sam Ritter of DeepMind on Neuroscience and RL, Episodic Memory, Meta-RL, Synthetic Returns, the MERLIN agent, decoding brain activation, and more!
Thomas Krendl Gilbert on the Political Economy of Reinforcement Learning Systems & Autonomous Vehicles, Sociotechnical Commitments, AI Development for the Public Inter...
Marc G. Bellemare shares insight on his work including Deep Q-Networks, Distributional RL, Project Loon and RL in the Stratosphere, the origins of the Arcade Learning ...
Dr. Robert Osazuwa Ness on Causal Inference, Probabilistic and Generative Models, Causality and RL, AltDeep School of AI, Pyro, and more!
Marlos C. Machado on Arcade Learning Environment Evaluation, Generalization and Exploration in RL, Eigenoptions, Autonomous navigation of stratospheric balloons with R...
Nathan Lambert on Model-based RL, Trajectory-based models, Quadrotor control, Hyperparameter Optimization for MBRL, RL vs PID control, and more!
Kai Arulkumaran on AlphaStar and Evolutionary Computation, Domain Randomisation, Upside-Down Reinforcement Learning, Araya, NNAISENSE, and more!
Michael Dennis on Human-Compatible AI, Game Theory, PAIRED, ARCTIC, EPIC, and lots more!
Roman Ring discusses the Research Engineer role at DeepMind, StarCraft II, AlphaStar, his bachelor's thesis, JAX, Julia, IMPALA and more!
Shimon Whiteson on his WhiRL lab, his work at Waymo UK, variBAD, QMIX, co-operative multi-agent RL, StarCraft Multi-Agent Challenge, advice to grad students, and much ...
Aravind Srinivas on his work including CPC v2, RAD, CURL, and SUNRISE, unsupervised learning, teaching a Berkeley course, and more!
Taylor Killian on the latest in RL for Health, including Hidden Parameter MDPs, Mimic III and Sepsis, Counterfactually Guided Policy Transfer and lots more!
Nan Jiang takes us deep into Model-based vs Model-free RL, Sim vs Real, Evaluation & Overfitting, RL Theory vs Practice and much more!
Danijar Hafner takes us on an odyssey through deep learning & neuroscience, PlaNet, Dreamer, world models, latent dynamics, curious agents, and more!
Csaba Szepesvari of DeepMind shares his views on Bandits, Adversaries, PUCT in AlphaGo / AlphaZero / MuZero, AGI and RL, what is timeless, and more!
Ben Eysenbach schools us on human supervision, SORB, DIAYN, techniques for exploration, teaching RL, virtual conferences, and much more!