Environments, autoscaling, dependency management, log viewing: a lot goes into monitoring and tracking your development workflows. The new and improved Anyscale Workspaces is your out-of-the-box solution, providing advanced observability features so your developers spend less time on busywork and more time building AI applications.
Ray is the AI Compute Engine at the heart of AI platforms for companies all across the world. It provides incredible computing power while abstracting away the complexity of distributed computing—without hiding the knobs, levers, and pedals that enable you to optimize underlying infrastructure for performance and cost.
But while we’ve heard from customers that Ray offers unmatched flexibility and power, we’ve also consistently heard that it can be hard to get started. New users to Ray often feel overwhelmed by the need to understand multiple complex concepts from day one.
So how do you get started with Ray quickly?
Enter: Anyscale.
At Anyscale, we’ve developed tools to make RayTurbo, the optimized Ray engine on Anyscale, easier to get started with, easier to use, and faster to deliver real value and ROI. You get the power of Ray’s unmatched compute engine with the ease of use of a best-in-class developer platform.
In this blog post, we're excited to announce key improvements to our new user experience, including an updated Log Viewer, streamlined dependency management, and the introduction of serverless mode.
Anyscale Workspaces is a developer’s one-stop-shop for building, testing, and shipping custom Ray applications. Developers can run any workload on Workspaces, from data processing to training to serving and beyond.
As a fully-managed developer environment, Anyscale Workspaces enables data scientists and machine learning engineers to build distributed apps on large scale clusters. Workspaces gives your developers time to focus on app logic—rather than on the complexities of a distributed system. Workspaces comes fully equipped with compute, dependencies, and storage to boost developer productivity:
Build on a cluster that can autoscale from a single node to multiple GPUs—without code modifications
Develop with familiar tooling like Git, Jupyter Notebook, and VSCode
Seamlessly ship to production using the exact same setup as in development
Easily add collaborators by simply sharing a Workspace link
Duplicate Workspaces to debug and optimize AI workloads
Anyscale Workspaces is an end-to-end solution for the developer experience. Let’s take a look at some common developer pain points and how Workspaces addresses them:
Developers typically write, test, and debug their code on a single node, which is often a local machine or laptop. This is great for rapid iteration and getting the application logic right.
However, to move this app to production, developers then need to rewrite the code to run on a cluster. Not only does this require complex adjustments to accommodate a distributed setting, it often means relying on a cloud infrastructure expert to optimize cost and performance.
It can also be a pain to debug. Once deployed, applications may run into issues that can’t be easily reproduced on a single node cluster.
Anyscale Workspaces Solution: Scalable Development Cluster With Seamless Path to Production
To address this problem, Anyscale Workspaces includes built-in compute autoscaling.
Easily scale compute with an Anyscale-managed Ray cluster. Developers can take the same code they’ve developed and tested and ship it directly to production, with no additional work.
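To make this concrete, here is a minimal Ray sketch (the preprocess function and the workload are illustrative placeholders, not from Anyscale’s documentation). The same script runs unchanged on a single-node Workspace or on a cluster that has autoscaled out to many workers; Ray schedules the tasks wherever resources are available.

```python
import ray

ray.init()  # In an Anyscale Workspace, this attaches to the managed Ray cluster.

@ray.remote
def preprocess(shard_id: int) -> int:
    # Stand-in for real work, e.g. featurizing one shard of a dataset.
    return shard_id * 2

# The same call pattern works on one node or across autoscaled workers.
results = ray.get([preprocess.remote(i) for i in range(100)])
print(sum(results))
```

Because the parallelism lives in the Ray API rather than in infrastructure code, moving from a development Workspace to a production job is a deployment step, not a rewrite.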
Developers who start on their laptops and then deploy to the cloud also need to coordinate dependencies across these systems. Dependency management is already difficult in cloud computing; keeping dependencies consistent across environments and identical in dev and prod can dramatically slow down progress and lead to errors.
Anyscale Workspaces Solution: [New] Streamlined Automated Dependency Management
Anyscale Workspaces simplifies dependency management so developers can focus on what they do best instead of wrangling tedious, error-prone dependencies. Workspaces offers:
Familiar Docker-like Environment: Our new container images feature a Dockerfile-like syntax, making it easier for users familiar with Docker to manage their environments (see the sketch after this list).
Optimized Base Images: Slim images significantly reduce file size from 18GB to around 2GB, speeding up workspace and job initialization.
Automated Dependency Tracking: Installed packages are tracked and can be saved into a new optimized image for reuse.
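As an illustration, a custom image definition in this Dockerfile-like syntax might look like the sketch below. The base image tag and the packages are placeholders chosen for the example, not exact Anyscale image names.

```dockerfile
# Start from a slim, Anyscale-provided Ray base image (the tag here is a placeholder).
FROM anyscale/ray:2.x-slim-py310

# Add the Python packages your application needs. Installed packages are tracked,
# and the resulting environment can be saved as a new optimized image for reuse.
RUN pip install --no-cache-dir pandas scikit-learn
```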
No one wants bugs. Clear bug visibility and log management are critical for debugging test code and monitoring launched applications. But open source Ray, while a powerful AI compute engine in its own right, doesn’t come with advanced observability features out of the box. Open source users have frequently struggled to identify where and why their code might be breaking down.
Anyscale Workspaces Solution: [New] Log Viewing
Anyscale’s new Log Viewer gives developers detailed information about their code, making it easy to debug and monitor. The Workspaces Log Viewer includes several out-of-the-box, advanced observability features, including:
Persistent Log Storage: Logs are now persistent, meaning you can view them long after a job has terminated. Whether it’s three days or ten days later, you can still access and search through your logs.
Enhanced Search and Filter Capabilities: Improved search and filter functions make it easier to find specific log entries, speeding up the debugging process.
Cloud Export Notification: Users will be notified about changes in what’s being exported from their cloud environments, ensuring transparency and control over their data.
You can read more about the new unified log viewing in our blog.
Choosing hardware is critical for AI workloads. The right accelerator or instance can optimize price and performance and enable entirely new workloads.
Traditionally, testing different accelerators can be really troublesome:
Developers may not know which instance types they need for those accelerators
Developers might need to pause their current work, apply hardware configuration changes, and restart their cluster in order to see changes
Developers may need to work with an infra team to get access
These interruptions slow down development work.
Anyscale Workspaces Solution: [New] Serverless Mode
To reduce friction, we’re introducing automatic worker node selection. With this new feature, you can now configure your Ray cluster to scale based on the resources defined in your Ray application code, instead of in a separate config. Simply specify the resources you need in your code, and Anyscale will automatically match each request to a cost-effective instance that can satisfy it.
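As a rough sketch of what this looks like in practice (the fine_tune function is our own example, not part of the Anyscale API): you declare the resources on the Ray task or actor itself, and Anyscale selects a worker instance type that can satisfy the request.

```python
import ray

ray.init()

# Declare what the work needs; with automatic worker node selection,
# Anyscale matches this request to a cost-effective instance type.
@ray.remote(num_gpus=1, num_cpus=8)
def fine_tune(prompts):
    # Placeholder for GPU-bound work such as fine-tuning or batch inference.
    return len(prompts)

print(ray.get(fine_tune.remote(["hello", "world"])))
```

There is no separate compute config enumerating node types; the resource request in the decorator is the source of truth.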
The end-to-end developer workflow is complicated, as the many potential roadblocks above make clear.
Anyscale Workspaces Solution: One Unified Platform
With Anyscale Workspaces, all of your relevant development information—from metrics and dashboards to logs and dependencies to cluster management—is accessible in a single view. It’s as simple as that.
Our new user experience is designed to make Anyscale more accessible and easier to use, while still providing the powerful capabilities our users rely on. We believe these updates will significantly improve your workflow, making it faster and simpler to develop and deploy applications. Try out the new features today and let us know what you think!
Want to get started with Anyscale? Book a demo today!
Access Anyscale today to see how companies using Anyscale and Ray benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.