
Webinar

An introduction to Ray for scaling machine learning (ML) workloads

Wednesday, August 18, 4:00PM UTC

Modern machine learning (ML) workloads, such as deep learning and large-scale model training, are compute-intensive and require distributed execution. Ray was created at the UC Berkeley RISELab to make it easy for any engineer to scale their applications without requiring distributed systems expertise.

Join Robert Nishihara, co-creator of Ray, and Bill Chambers, product lead for Ray, for an introduction to scaling your ML workloads with Ray. Learn how Ray libraries such as Ray Tune and Ray Serve help you easily scale every step of your ML pipeline, from model training and hyperparameter search to production serving.
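For a flavor of what "scaling without distributed systems expertise" looks like, here is a minimal, illustrative sketch of Ray's core task API (the function name and workload are placeholders; the webinar itself covers the higher-level libraries such as Ray Tune and Ray Serve):

```python
import ray

ray.init()  # start Ray locally; connects to a cluster if one is configured

# Any Python function becomes a distributed task with @ray.remote.
@ray.remote
def train_shard(shard_id):
    # placeholder for per-shard work such as preprocessing or training
    return shard_id ** 2

# Launch tasks in parallel, then gather the results.
futures = [train_shard.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```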

View Slides >>

Highlights include: 

  • Ray overview & core concepts

  • Library ecosystem and use cases

  • Demo: Ray for scaling ML workflows

  • Getting started resources

Speakers

Robert Nishihara

Co-founder and CEO, Anyscale

Robert Nishihara is one of the creators of Ray, a distributed system for scaling Python and machine learning applications. He is a co-founder and CEO of Anyscale, the company behind Ray. He did his PhD in machine learning and distributed systems in the computer science department at UC Berkeley. Before that, he majored in math at Harvard.

Bill Chambers

Lead Product Manager, Anyscale

Bill Chambers is the lead product manager at Anyscale. He is the lead author of Spark: The Definitive Guide, coauthored with Matei Zaharia. Bill holds a master's degree in Information Management and Systems from UC Berkeley's School of Information. While at Berkeley, he also created the Data Analysis in Python with pandas course for Udemy and was the co-creator and first instructor of Python for Data Science, part of UC Berkeley's Master of Information and Data Science program.