The Ray 2.6 release focuses on a number of enhancements and improvements across the Ray ecosystem. In this blog, we expand on a few key highlights, including:
Streaming responses in Ray Serve for real-time capabilities
Ray Data streaming integration with Ray Train
Distributed Ray Training and Tuning sync with cloud storage persistence
Public alpha release of the new Multi-GPU Learner API
Let's examine each in detail.
This release introduces two new features to Ray Serve: streaming responses (including WebSockets) and batch requests. Both enhance and extend Ray Serve's capabilities for batch inference and online real-time serving, including LLM workloads.
Streaming responses: Certain applications, like text generation in large language models (LLMs), need to return results incrementally to the user. For example, large neural networks may take multiple seconds for a full forward pass, so providing incremental results enhances the user experience. Ray Serve supports StreamingResponse: using FastAPI for basic HTTP ingress deployments, you can return a StreamingResponse from your HTTP handler by wrapping a Python generator.
For example, the partial code snippet below shows how; for the complete code, check our full chatbot tutorial example.
import asyncio
import logging
from queue import Empty

from fastapi import FastAPI
from starlette.responses import StreamingResponse
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

from ray import serve

logger = logging.getLogger("ray.serve")

fastapi_app = FastAPI()


@serve.deployment
@serve.ingress(fastapi_app)
class Textbot:
    def __init__(self, model_id: str):
        self.loop = asyncio.get_running_loop()

        self.model_id = model_id
        self.model = AutoModelForCausalLM.from_pretrained(self.model_id)
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_id)

    @fastapi_app.post("/")
    def handle_request(self, prompt: str) -> StreamingResponse:
        logger.info(f'Got prompt: "{prompt}"')
        streamer = TextIteratorStreamer(
            self.tokenizer, timeout=0, skip_prompt=True, skip_special_tokens=True
        )
        # Run text generation in a background thread so tokens can be
        # streamed back to the client as they are produced.
        self.loop.run_in_executor(None, self.generate_text, prompt, streamer)
        return StreamingResponse(
            self.consume_streamer(streamer), media_type="text/plain"
        )

    def generate_text(self, prompt: str, streamer: TextIteratorStreamer):
        input_ids = self.tokenizer([prompt], return_tensors="pt").input_ids
        self.model.generate(input_ids, streamer=streamer, max_length=10000)

    async def consume_streamer(self, streamer: TextIteratorStreamer):
        while True:
            try:
                for token in streamer:
                    logger.info(f'Yielding token: "{token}"')
                    yield token
                break
            except Empty:
                # The streamer raises an Empty exception if the next token
                # hasn't been generated yet. `await` here to yield control
                # back to the event loop so other coroutines can run.
                await asyncio.sleep(0.001)
In the above example, we focused on streaming responses. WebSockets, however, allow bi-directional input and output streaming in applications. With this release, Ray Serve also supports WebSockets for bi-directional streaming. For more details and a complete WebSocket example tutorial, please refer to our Serve documentation.
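To give a flavor of what this looks like, here is a minimal sketch of a bi-directional WebSocket echo deployment using a FastAPI ingress; the EchoServer class and its endpoint path are illustrative, not taken from the tutorial:

import asyncio

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

from ray import serve

fastapi_app = FastAPI()


@serve.deployment
@serve.ingress(fastapi_app)
class EchoServer:
    # A WebSocket endpoint declared through the standard FastAPI API.
    @fastapi_app.websocket("/")
    async def echo(self, ws: WebSocket):
        await ws.accept()
        try:
            # Stream messages in both directions until the client disconnects.
            while True:
                text = await ws.receive_text()
                await ws.send_text(text)
        except WebSocketDisconnect:
            print("Client disconnected.")


serve.run(EchoServer.bind())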
Batch requests: Batching is a common pattern for many machine learning operations, including streaming requests, as it gives the receiver a batch of data to process in bulk. When done in parallel across many Serve replicas, it improves overall performance.
Batching becomes essential when your model is computationally expensive, and you want to utilize all your hardware resources. ML frameworks such as PyTorch and TensorFlow support evaluating multiple samples simultaneously. Ray Serve enables you to employ this capability through dynamic request batching.
You can enable batching by using the ray.serve.batch decorator. Let's take a look at a simple example where the Model class accepts a batch. Note that for brevity the code snippets are partial; the full example is in our dynamic batch requests documentation.
import numpy as np
from typing import List

import ray
from ray import serve


@serve.deployment
class Model:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def __call__(self, multiple_samples: List[int]) -> List[int]:
        # Use numpy's vectorized computation to efficiently process a batch.
        return np.array(multiple_samples) * 2


handle = serve.run(Model.bind())
assert ray.get([handle.remote(i) for i in range(8)]) == [i * 2 for i in range(8)]
While the above example illustrates batch processing, you can also stream batched responses with the ray.serve.batch decorator. For example:
import asyncio
from typing import List, AsyncGenerator, Union

from starlette.requests import Request
from starlette.responses import StreamingResponse

from ray import serve


@serve.deployment
class StreamingResponder:
    @serve.batch(max_batch_size=5, batch_wait_timeout_s=0.1)
    async def generate_numbers(
        self, max_list: List[int]
    ) -> AsyncGenerator[List[Union[str, StopIteration]], None]:
        # Yield one batch of results at a time. Requests that have already
        # reached their requested maximum receive a StopIteration sentinel.
        for i in range(max(max_list)):
            next_numbers = []
            for requested_max in max_list:
                if requested_max > i:
                    next_numbers.append(str(i))
                else:
                    next_numbers.append(StopIteration)
            yield next_numbers
            await asyncio.sleep(0.1)

    async def __call__(self, request: Request) -> StreamingResponse:
        max_num = int(request.query_params.get("max", "25"))
        gen = self.generate_numbers(max_num)
        return StreamingResponse(gen, status_code=200, media_type="text/plain")


serve.run(StreamingResponder.bind())
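To consume the streamed output on the client side, any HTTP client that supports streaming will do; here is a minimal sketch using the requests library, assuming the deployment above is running locally on the default Serve port:

import requests

# Stream the response chunk by chunk instead of waiting for the full body.
response = requests.get("http://localhost:8000/?max=10", stream=True)
for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
    print(chunk, end="")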
Following the change in 2.5 that made Ray Data lazily executed by default, we are introducing a new streaming integration between Ray Data and Ray Train. This enables streaming data ingestion for model training, reducing the amount of cluster memory needed to train on big data. Further, data preprocessing is rerun on each epoch, enabling per-epoch preprocessing.
From the user API, preprocessing is now specified as an explicit step outside of the train loop, and the dataset_config argument now takes a DataConfig object that configures how data processing is executed during training.
Here is an example of using the new API:
import ray
from ray.air import session
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker():
    # Get an iterator to the "a" dataset shard we passed in below.
    it = session.get_dataset_shard("a")

    # Train for 10 epochs over the data. We'll use a shuffle buffer size
    # of 10k elements, and prefetch up to 10 batches of size 128 each.
    for _ in range(10):
        for batch in it.iter_batches(
            local_shuffle_buffer_size=10000, batch_size=128, prefetch_batches=10
        ):
            print("Do some training on batch", batch)


dataset_a = ray.data.read_text(
    "s3://anonymous@ray-example-data/sms_spam_collection_subset.txt"
)
dataset_b = ray.data.read_csv("s3://anonymous@ray-example-data/dow_jones.csv")

my_trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),
    datasets={"a": dataset_a, "b": dataset_b},
    dataset_config=ray.train.DataConfig(
        # Only dataset "a" is split across workers; "b" is replicated.
        datasets_to_split=["a"],
    ),
)
The reason for these changes is twofold:
Separate preprocessors from Trainers. That is, users can now explicitly perform data preprocessing on the Datasets before passing them into the Trainer (see the sketch below).
Simplify the API configuration during training.
For more background, refer to this Ray Enhancement Proposal; more details about configuration can be found in the documentation.
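As a rough illustration of the first point, here is a minimal sketch of preprocessing a Dataset explicitly with map_batches before handing it to the Trainer; the preprocess_batch function and its lowercase transformation are hypothetical, and train_loop_per_worker is the same training loop as in the example above:

import numpy as np

import ray
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def preprocess_batch(batch):
    # Hypothetical preprocessing step, applied outside of the train loop.
    batch["text"] = np.array([text.lower() for text in batch["text"]])
    return batch


# Preprocess the Dataset explicitly before passing it to the Trainer.
train_ds = ray.data.read_text(
    "s3://anonymous@ray-example-data/sms_spam_collection_subset.txt"
).map_batches(preprocess_batch)

my_trainer = TorchTrainer(
    train_loop_per_worker,  # same training loop as in the example above
    scaling_config=ScalingConfig(num_workers=2),
    datasets={"a": train_ds},
)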
To ensure reliable persistence of Ray training and tuning artifacts, users can now specify a cloud storage or NFS path in their configuration for distributed training or tuning. Local paths are not supported.
This change allows different worker machines to sync artifacts directly to cloud storage instead of syncing with the head node, significantly reducing overall network bandwidth and memory usage. For detailed information, check PR #37177.
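As a rough sketch of what this configuration looks like, you can point a run at cloud storage via the storage_path field of RunConfig; the objective function and bucket name below are hypothetical:

from ray import air, tune


def objective(config):
    # Hypothetical trainable that reports a single score.
    return {"score": config["x"] ** 2}


tuner = tune.Tuner(
    objective,
    param_space={"x": tune.grid_search([1, 2, 3])},
    run_config=air.RunConfig(
        name="my_experiment",
        # Artifacts from all workers are persisted directly to this
        # (hypothetical) bucket instead of being synced to the head node.
        storage_path="s3://my-bucket/ray-results",
    ),
)
tuner.fit()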
In our previous blog post, we discussed the cost savings of using the newly introduced multi-GPU training stack in RLlib. With this release, we are introducing an alpha version of a new multi-GPU Learner API that is less complex and more powerful than what we showed in that blog. This API is enabled by default for the PPO algorithm.
import ray
from ray import air, tune
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()

config = (
    PPOConfig()
    .framework("torch")  # or "tf2"
    .environment("CartPole-v1")
    # Use two Learner workers, each with one GPU.
    .resources(num_learner_workers=2, num_gpus_per_learner_worker=1)
)

tuner = tune.Tuner(
    "PPO",
    param_space=config.to_dict(),
    run_config=air.RunConfig(
        stop={"training_iteration": 1},
        failure_config=air.FailureConfig(fail_fast="raise"),
    ),
)
tuner.fit()
With each release of Ray, we strive toward ease of use, performance, and stability. This release, like previous ones, marches toward that end by:
Extending Ray Serve to support both streaming requests and responses for important workloads such as LLMs, along with bi-directional streaming using WebSockets
Enhancing Ray Train to capitalize on Ray Data's lazy, streaming execution for efficient training and optimized memory usage
Increasing reliability by supporting external cloud storage as the mechanism for Ray Train to persist its training and experiment artifacts instead of syncing with the head node, reducing overall network bandwidth usage
Introducing a simple and efficient multi-GPU Learner API, used in the PPO algorithm
We want to thank all contributors for their valuable contributions to this new release of Ray 2.6. Your enduring support continues to foster wider adoption of Ray.
Have a go at the latest release with pip install "ray[default]" and let us know your feedback. We're always delighted to share new Ray releases with you and equally eager to hear from you – feel free to reach out to us on GitHub or Discuss.
Join our Ray Community and the Ray #LLM slack channel.
Finally, Ray Summit 2023 registration is open. Check our lineup of keynote speakers, full-day training sessions, and a dedicated LLM track. Secure your spot, save some money, and savor the community camaraderie at the summit.
Stay tuned for additional Ray blogs. Meanwhile, take a peek at the following material for your Ray edification:
Deep dive on Ray Data user guides
Walk through the Ray Serve tutorials, including batching and streaming examples
Peruse the new Ray example gallery
Access Anyscale today to see how companies using Anyscale and Ray benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.