In this 2-hour tutorial, you will learn how to apply cutting-edge reinforcement learning (RL) techniques in production with Ray RLlib. The tutorial begins with a brief introduction to core RL concepts. It then covers how to use Ray RLlib to train and tune contextual bandits as well as the “SlateQ” algorithm, train on offline data using state-of-the-art offline RL algorithms, and deploy RL models into a live service.

RLlib offers high scalability, a wide selection of algorithms to choose from (offline, model-based, model-free, etc.), support for TensorFlow and PyTorch, and a unified API for a variety of applications and customizations.

This tutorial is for you if you are:
An industry ML engineer (not necessarily with a background in RL).
An industry software developer who would like to use RL to solve problems within their domain of expertise, but is not an RL expert.
An industry RL engineer who would like to learn how to use RLlib for the specific use cases discussed here.
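To give a flavor of the unified API mentioned above, below is a minimal sketch of what training with RLlib can look like. This is illustrative only and not part of the tutorial materials: it assumes a Ray 2.x-style config builder (exact API details vary across Ray versions), and "CartPole-v1" is just a stand-in environment for the example.

    # Minimal RLlib training sketch (assumes Ray 2.x; API details vary by version).
    from ray.rllib.algorithms.ppo import PPOConfig

    config = (
        PPOConfig()
        .environment("CartPole-v1")       # stand-in environment for illustration
        .framework("torch")               # RLlib also supports TensorFlow
        .rollouts(num_rollout_workers=2)  # parallel sample collection
    )

    algo = config.build()
    for i in range(5):
        result = algo.train()
        print(f"iter {i}: episode_reward_mean={result['episode_reward_mean']}")

The same config-and-train pattern applies across RLlib's algorithm families, which is what the tutorial means by a unified API.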
Sven has been working as a machine learning engineer at Anyscale Inc. since early 2020. He is the lead developer of "RLlib", Ray's industry-grade, scalable reinforcement learning (RL) library. His team is currently focused on better supporting the most promising industry use cases, such as massive multi-agent algorithms for league-based self-play, recommender systems and slate recommendation algorithms (including contextual bandits), and integration with Ray's new datasets library for a better offline RL experience. A continuing effort of his is maintaining high levels of stability and test coverage to support RLlib's rapid adoption in industry and research, and helping to grow its community and contributor base. Before joining Anyscale, he was a lead developer of other successful open-source RL libraries, such as "RLgraph" and "TensorForce".