The Simplest Path to Scaling Python

Develop on your laptop and then scale the same Python code elastically across hundreds of nodes or GPUs on any cloud—with no changes.

Ray Core Tutorial | On Demand

New to Ray? Jump-start your learning with our "Getting Started" guide.

Read now

Ray Summit 2023 | On Demand

Watch all the keynotes and sessions from Ray Summit 2023 on demand.

Watch now

Parallelize Python with minimal code changes

Simple, flexible Python primitives

Ray translates existing Python concepts to the distributed setting, allowing any serial application to be easily parallelized with minimal code changes.
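As a rough sketch of what this looks like (the function and values here are illustrative, not taken from Ray's documentation), one decorator turns an ordinary function into a distributed task:

import ray

ray.init()  # start Ray locally, or connect to an existing cluster

# An ordinary Python function becomes a distributed task with one decorator.
@ray.remote
def square(x):
    return x * x

# .remote() schedules each call as a task; ray.get() gathers the results.
futures = [square.remote(i) for i in range(100)]
print(ray.get(futures))  # [0, 1, 4, ..., 9801]

The same script runs unchanged whether ray.init() starts a local instance on a laptop or connects to a multi-node cluster.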

Distributed libraries

Easily scale compute-heavy machine learning workloads like deep learning, model serving, and hyperparameter tuning with a strong ecosystem of distributed libraries.
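As one illustrative sketch on the model serving side, Ray Serve turns a Python class into a replicated HTTP service; the deployment name, replica count, and echo logic below are placeholders:

from starlette.requests import Request
from ray import serve

# A toy deployment; Serve replicates it and load-balances incoming HTTP requests.
@serve.deployment(num_replicas=2)
class Echo:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        return {"echo": payload.get("text", "")}

serve.run(Echo.bind())  # serves on http://localhost:8000/ by default

A client can then POST JSON to the endpoint, for example with requests.post("http://localhost:8000/", json={"text": "hi"}).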

Integrations

Scale existing workloads (e.g., PyTorch) on Ray with minimal effort by tapping into its ecosystem of integrations.
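As a hedged example of the PyTorch integration, Ray Train's TorchTrainer can run an existing training loop across multiple workers; the tiny linear model and synthetic data below are placeholders:

import torch
import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# A placeholder per-worker loop; Ray sets up torch.distributed for each worker.
def train_loop_per_worker():
    model = ray.train.torch.prepare_model(torch.nn.Linear(8, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(10):
        x, y = torch.randn(32, 8), torch.randn(32, 1)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# The same loop scales from a couple of local workers to a large cluster
# by changing only the ScalingConfig.
trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),
)
trainer.fit()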


Any AI/ML Workload

Ray is the AI compute engine for every AI workload and use case.

Parallel Processing

Ray is Python-native. Effortlessly scale from your computer to the cloud with one Python decorator. Plus, leverage and parallelize CPUs and GPUs in the same pipeline to increase utilization and decrease costs.
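A rough sketch of mixing CPU and GPU stages in one pipeline (the task bodies are stand-ins for real preprocessing and inference, and it assumes the cluster has at least one GPU):

import ray

ray.init()

# CPU-bound preprocessing runs as an ordinary task (one CPU by default).
@ray.remote
def preprocess(batch):
    return [x * 2 for x in batch]

# Requesting num_gpus=1 makes Ray place this task only where a GPU is free.
@ray.remote(num_gpus=1)
def infer(batch):
    return sum(batch)  # stand-in for a real model forward pass

# Passing object refs between tasks chains the stages without pulling data
# back to the driver in between.
results = ray.get([infer.remote(preprocess.remote(b)) for b in [[1, 2], [3, 4]]])
print(results)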


Building on top of Ray has allowed us to deliver a state-of-the-art low-code deep learning platform that lets our users focus on obtaining best-in-class machine learning models for their data, not distributed systems and infrastructure.

Travis Addair
CTO, Predibase and Maintainer, Horovod / Ludwig AI

Scalable machine learning libraries

Native Ray libraries — such as Ray Tune and Ray Serve — lower the effort to scale the most compute-intensive machine learning workloads, such as hyperparameter tuning, training deep learning models, and reinforcement learning. For example, get started with distributed hyperparameter tuning in just 10 lines of code.
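Here is one sketch of that ten-line idea with Ray Tune; the objective function and search space are illustrative:

from ray import tune

# A toy objective; each hyperparameter configuration runs as its own trial.
def objective(config):
    return {"score": config["a"] ** 2 + config["b"]}

tuner = tune.Tuner(
    objective,
    param_space={
        "a": tune.grid_search([0.001, 0.01, 0.1, 1.0]),
        "b": tune.uniform(0, 1),
    },
)
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)

Trials run in parallel across whatever CPUs and GPUs the cluster exposes, whether that is a laptop or hundreds of nodes.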


Build and run distributed apps

Flawless distributed operations

Ray handles all aspects of distributed execution from scheduling and sequencing to scaling and fault tolerance.

Autoscaling

Ray dynamically provisions new nodes (or removes them) to handle variable workload needs.

Fault-tolerant

Ray gracefully handles machine failures to deliver uninterrupted execution.


My team at Afresh trains large time-series forecasters with a massive hyperparameter space. We googled Pytorch hyperparameter tuning, and found Ray Lightning. It took me 20 minutes to integrate into my code, and it worked beautifully. I was honestly shocked.

Philip Cerles
Senior Machine Learning Engineer, Afresh

Scale on any cloud or infrastructure

Public clouds, private data centers, bare metal, Kubernetes clusters: Ray runs anywhere. Or choose Anyscale and leave the infrastructure to us.


LLMs and GenAI

Large language models (LLMs) and Generative AI are rapidly transforming industries and driving compute demand at an astonishing pace. Ray provides a distributed compute framework for scaling these models, allowing developers to train and deploy them faster and more efficiently. With specialized libraries for data streaming, training, fine-tuning, hyperparameter tuning, and serving, Ray simplifies the process of developing and deploying large-scale AI models.


Trusted by leading AI and machine learning teams

From detection of geospatial anomalies to real-time recommendation, explore the stories of teams scaling machine learning on Ray.

  • Uber
  • Amazon
  • Ant Group
  • Nvidia
  • Dow
  • Intel

Get started with Ray

From dedicated Slack channels to in-person meetups, we have the resources you need to get started and be successful with Ray.

Documentation

Reference guides, tutorials, and examples to help you get started on and advance your Ray journey.

Read the docs

Discussion Forum

Join the forum to get technical help and share best practices and tips with the Ray community.

Join the forum

Slack

Connect with other users and project maintainers on the Ray Slack channel.

Join the Slack

Sign up for product updates

By signing up, you agree to receive occasional emails from Anyscale. Your personal data will be processed in accordance with Anyscale’s Privacy Policy.