Build a chat assistant fast using Canopy from Pinecone and Anyscale Endpoints

Update June 2024: Anyscale Endpoints (Anyscale's LLM API offering) and Private Endpoints (self-hosted LLMs) are now available as part of the Anyscale Platform. Click here to get started on the Anyscale Platform.

Explore the challenges of building a chat assistant and how Canopy and Anyscale Endpoints provide a fast, easy way to build your RAG-based applications for free. We will walk through the architecture, a live example, and a guide to building your own chat assistant.

Canopy is a flexible framework built on top of the Pinecone vector database that provides libraries and a simple API for chunking, embedding, chat history management, query optimization, and context retrieval.
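
To make that flow concrete, here is a minimal sketch using the canopy-sdk package. It assumes PINECONE_API_KEY and OPENAI_API_KEY are set in the environment (Canopy's default record encoder and LLM use OpenAI, and both can be configured to use Anyscale Endpoints instead); the index name and document text are illustrative, and exact APIs may vary across canopy-sdk versions.

```python
# Minimal Canopy sketch (assumes `pip install canopy-sdk`, plus PINECONE_API_KEY
# and OPENAI_API_KEY in the environment; index name and document text are
# illustrative).
from canopy.tokenizer import Tokenizer
from canopy.knowledge_base import KnowledgeBase
from canopy.context_engine import ContextEngine
from canopy.chat_engine import ChatEngine
from canopy.models.data_models import Document, UserMessage

Tokenizer.initialize()  # global tokenizer Canopy uses for chunking and token counting

# Create (first run) or connect to a Pinecone-backed knowledge base, then
# upsert a document; Canopy handles chunking and embedding internally.
kb = KnowledgeBase(index_name="demo-index")
kb.create_canopy_index()  # on later runs, call kb.connect() instead
kb.upsert([
    Document(
        id="doc1",
        text="Canopy is a RAG framework built on the Pinecone vector database.",
        source="https://github.com/pinecone-io/canopy",
    ),
])

# ContextEngine retrieves relevant chunks and packs them into the prompt;
# ChatEngine layers chat-history management and the LLM call on top.
chat_engine = ChatEngine(ContextEngine(kb))
response = chat_engine.chat(messages=[UserMessage(content="What is Canopy?")])
print(response.choices[0].message.content)
```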

Anyscale Endpoints is a fast, performant LLM API for building your AI-based applications. It provides a serverless service for serving and fine-tuning open LLMs such as Llama-2 and Mistral, and now offers an embedding endpoint as well as fine-tuning for the largest Llama-2 model (70B), giving you flexibility with open LLMs through a single API.
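
Because Anyscale Endpoints exposes an OpenAI-compatible API, you can call it with the standard OpenAI Python client by swapping the base URL. The sketch below assumes an ANYSCALE_API_KEY in the environment; the base URL and model names reflect what the service offered at the time of writing and should be treated as assumptions.

```python
# Minimal sketch of calling Anyscale Endpoints via its OpenAI-compatible API
# (assumes `pip install openai` and ANYSCALE_API_KEY in the environment; the
# base URL and model names are assumptions based on the service at the time).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",
    api_key=os.environ["ANYSCALE_API_KEY"],
)

# Chat completion against an open model served by Anyscale Endpoints.
chat = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(chat.choices[0].message.content)

# The embedding endpoint works through the same client.
embeddings = client.embeddings.create(
    model="thenlper/gte-large",
    input="Canopy retrieves context from Pinecone.",
)
print(len(embeddings.data[0].embedding))
```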
