May 19, 2026
|
Convene, One Boston Place

Ray Workshop: Boston

Scale AI with confidence. Learn directly from Ray's creators and builders at this free, hands-on technical workshop designed to help you move your AI workloads from experiment to production.

What to expect

Direct Expert Access

Engage directly with Ray’s creators and builders to get answers, insights, and best practices for productionizing AI.

Hands-on Workshops

Accelerate your path to production with an instructor-led Ray workshop covering Ray’s AI libraries.  

Peer Networking

Connect with experienced engineers and AI teams, and learn from their shared experiences.

Agenda at a glance

Two parallel tracks:
VLA Track: Fine-tune VLA models for physical AI
Ray Track: Distributed training with Ray and PyTorch

12:00 - 12:30 PM
Registration + Networking (both tracks)

12:30 - 12:45 PM
Training kickoff and environment setup (both tracks)

12:45 - 1:30 PM
VLA Track | Module 1: Ray, the Foundation for Distributed Physical AI
An overview of Ray's core concepts: how Ray provides simple, unified APIs for cluster computing, the Ray libraries, and setup with basic execution examples.
Ray Track | Module 1: Scaling Python for AI Workloads with Ray
Learn Ray's core concepts, including tasks, actors, and clusters. Understand how Ray scales Python and ML workloads, manages resources, and runs distributed programs reliably across nodes.

1:30 - 2:30 PM
VLA Track | Module 2: Large-scale VLA fine-tuning
An overview of the data preprocessing and distributed training libraries, Ray Data and Ray Train, with an emphasis on vision-language-action models.
Ray Track | Module 2: Building scalable multimodal data pipelines
Ingest, transform, and preprocess large multimodal datasets, then create streaming pipelines to chunk data and generate embeddings at scale.

2:30 - 2:45 PM
Break

2:45 - 3:45 PM
VLA Track | Module 3: Robotics simulation
Use Ray Core to parallelize computationally intensive simulations (e.g., MuJoCo, Isaac Sim).
Ray Track | Module 3: Distributed training at scale
Scale model training with data and model parallelism (e.g., FSDP). Run distributed jobs, manage checkpoints, and integrate frameworks like PyTorch with reliability and performance.

3:45 - 4:00 PM
Q&A and closing remarks

4:00 - 5:30 PM
Happy Hour + Networking

Featured Speakers

Ian Jordan
Member of Technical Staff, Distributed AI Training

Alicia Chua
Member of Technical Staff, Field Engineer

Suman Debnath
Technical Lead, ML

Pawarit Laosunthara
Field Engineering Manager

Convene, One Boston Place

201 Washington St, Boston, MA 02108

FAQ