The Applied Reinforcement Learning (ARL) Seminar is an online seminar that presents the latest advances in reinforcement learning applications and theory, organized by Drs. Rui Song, Hongtu Zhu, Tony Qin, Jieping Ye and Michael R. Kosorok.
We are excited to welcome Prof. Keith Ross from NYU Shanghai to talk about "Recent Advances in Sample Efficient DRL".
Prof. Keith Ross has been the Dean of Engineering and Computer Science at NYU Shanghai since 2013. Previously he was a professor at NYU Tandon/Poly (10 years), University of Pennsylvania (13 years), and Eurecom Institute in France (5 years). He received a Ph.D. in Computer and Control Engineering from The University of Michigan. He is an ACM Fellow and an IEEE Fellow. His current research interests are in deep and tabular reinforcement learning. He has also worked in Internet privacy, peer-to-peer networking, Internet measurement, stochastic modeling of computer networks, queuing theory, and Markov decision processes. He is the co-author of the most popular textbook on computer networking. At NYU Shanghai he has been teaching Machine Learning, Reinforcement Learning, and Introduction to Computer Programming.
The seminar will be on *Thursday, April 29th, 2021 6:00 PM PT / 8:00 PM CT / 9:00 PM ET / Friday, April 30th, 9:00 AM Beijing*. Details about the talk can be found on our website <https://arlseminar.com>.
You can access the seminar via YouTube Live in the USA or Bilibili Live in China.
Zoom: https://uncsph.zoom.us/j/94124596213 Password: ARLSeminar
YouTube Live Channel: https://www.youtube.com/channel/UCYtw_0jwqtNW0-6NFsPY9BA/live
Bilibili Live: https://live.bilibili.com/22533038
If you are interested in receiving updates about future seminar events, you can register on our website <https://www.arlseminar.com/registration-form/>. We will notify you of each new event.
We look forward to seeing you.
*Title:* Recent Advances in Sample Efficient DRL
*Abstract:* The performance of a DRL algorithm can be measured along many dimensions, including asymptotic performance, sample efficiency, computational efficiency, and simplicity and elegance. In this talk we will discuss two recent research projects in DRL algorithm design. The first is a new algorithm for on-policy DRL with safety constraints (spotlight paper at NeurIPS 2020); the second is a highly sample-efficient off-policy DRL algorithm for environments with continuous action spaces (conference paper at ICLR 2021).