XGBoost is an optimized, distributed gradient boosting library that implements machine learning algorithms under the gradient boosting framework. It is designed to be highly efficient and flexible, using parallel tree boosting to provide fast solutions to many data science and machine learning problems. In a previous blog post, we explored three ways to speed up XGBoost model training.
XGBoost has quickly become the state-of-the-art machine learning algorithm for solving tasks with structured data. This is mainly due to its high speed and exceptional performance. It is faster than other ensemble classifiers and its core algorithm is parallelizable, meaning that it can run on multi-core and GPU computers.
Options for serving machine learning and XGBoost models include cloud-hosted platforms such as Amazon SageMaker, Google Cloud AI Platform, and Microsoft’s Azure ML SDK, as well as toolkits like Kubeflow. These are powerful serving tools provided by some of the largest tech companies, but they can be very expensive to use. In addition, they largely tie you to their own ecosystems.
Manually taking machine learning models from concept to production is typically complex and time-consuming, so there are several frameworks for deploying XGBoost in production.
In this article, we’ll cover how to deploy XGBoost with two frameworks: Flask and Ray Serve. We’ll also highlight the advantages of Ray Serve over other serving solutions when comparing models in production.
Flask is the most common Python-based microframework used for deploying XGBoost, as it has no dependencies on external libraries. Flask is considered an exceptional deployment framework for XGBoost because it is easy to set up and exposes models efficiently through REST endpoints. Plus, unlike XGBoost Server, Flask is framework-agnostic and handles HTTP requests directly. Flask is also free, unlike SageMaker and other cloud-hosted solutions. These are only a handful of the features that make Flask an optimal solution for deploying XGBoost into production.
In this section, we’ll train, test, and deploy an XGBoost model with Flask. The model will be trained to predict the onset of diabetes using the pima-indians-diabetes dataset from the UCI Machine Learning Repository. This small dataset contains eight numerical medical features related to diabetes, plus one target variable, Outcome. So, we’ll use XGBoost to model and solve this simple prediction problem.
First, we load some dependencies along with the data, then split it into training and test sets:
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
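If you want to sanity-check what was loaded, here is a quick way to peek at the data with pandas. Note the column names are supplied manually, following the dataset's documentation, since this copy of the CSV has no header row:

import pandas as pd

# the CSV ships without a header row, so we supply the column names ourselves
columns = [
    "Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
    "Insulin", "BMI", "DiabetesPedigree", "Age", "Outcome",
]
df = pd.read_csv("pima-indians-diabetes.csv", header=None, names=columns)
print(df.head())
print(df["Outcome"].value_counts())  # class balance of the target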
The second step is to create the XGBoost model and fit it to the numerical data we have:
model = XGBClassifier()
model.fit(X_train, y_train)
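We use XGBClassifier's defaults here, which are good enough for a demo. If you want to experiment, the most commonly tuned hyperparameters can be passed directly to the constructor; the values below are illustrative, not tuned for this dataset:

model = XGBClassifier(
    n_estimators=200,   # number of boosting rounds (trees)
    max_depth=4,        # maximum depth of each tree
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
    subsample=0.8,      # fraction of rows sampled for each tree
)
model.fit(X_train, y_train)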
Once the model is trained, we test it using our testing set, and then calculate some metrics for evaluation purposes:
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
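Accuracy alone can be misleading on a somewhat imbalanced dataset like this one. For a fuller picture, scikit-learn offers other metrics that work on the same predictions; a quick sketch:

from sklearn.metrics import confusion_matrix, roc_auc_score

# rows are actual classes, columns are predicted classes
print(confusion_matrix(y_test, predictions))
# ROC AUC scores the predicted probability of the positive class
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))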
Once the model is trained and tested, we can save it for future inferences using the pickle serialization module.
import pickle

# saving the model
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# loading the model
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
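Note that pickle ties the saved file to the Python and XGBoost versions that produced it. If portability matters, XGBoost's native save_model and load_model methods are an alternative; a minimal sketch:

# save in XGBoost's native, version-portable format
model.save_model("model.json")

# load it back into a fresh classifier
from xgboost import XGBClassifier
loaded_model = XGBClassifier()
loaded_model.load_model("model.json")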
To deploy our XGBoost model, we'll use Flask. To create a Flask web app that can predict the onset of diabetes, we need a prediction route that makes inferences with our XGBoost model.
import pickle
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

# load the trained model saved earlier
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json(force=True)
    # build a single-row feature array from the request payload
    prediction = model.predict([np.array(list(data.values()))])
    # convert the numpy type to a plain int so it can be serialized to JSON
    output = int(prediction[0])
    return jsonify(output)

if __name__ == "__main__":
    app.run(debug=True)
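One caveat with the route above: it feeds the JSON values to the model in whatever order they arrive, so the client must send the fields in exactly the order the model was trained on. A slightly more defensive variant of the predict function builds the vector explicitly by name (the FEATURES list is our own addition, matching the dataset's column order):

FEATURES = [
    "Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
    "Insulin", "BMI", "DiabetesPedigree", "Age",
]

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json(force=True)
    # look up each feature by name so the payload's key order doesn't matter
    input_vector = np.array([[data[name] for name in FEATURES]])
    return jsonify(int(model.predict(input_vector)[0]))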
Next, we create the request.py file, which prints the predicted value by calling the API defined in app.py.
import requests

url = "http://localhost:5000/predict"
# send the eight feature values; the target (Outcome) is not a model input
r = requests.post(
    url,
    json={
        "Pregnancies": 6,
        "Glucose": 148,
        "BloodPressure": 72,
        "SkinThickness": 35,
        "Insulin": 0,
        "BMI": 33.6,
        "DiabetesPedigree": 0.625,
        "Age": 50,
    },
)
print(r.json())
Despite the usefulness of Flask in deploying machine learning models, it still has some drawbacks. For instance, it is unsuitable for large applications and lacks built-in login and authentication capabilities.
Beyond these, Flask’s main drawback for machine learning model serving is the challenge of scaling. With Flask, scaling every component requires you to run many parallel instances, and you must decide how to do it. Will you use virtual machines, physical machines, or perhaps a Kubernetes cluster? Whichever method you choose, you will be on the hook for spinning up instances of your app and load balancing. Deploying an XGBoost model with Ray Serve is one solution to this problem, since it provides you with a simple web server that handles the complex routing, scaling, and testing logic necessary for production deployments.
With Ray Serve, it’s easier to scale out your model on a multi-node Ray cluster, as you can take full advantage of its serving ability to dynamically update running deployments. In addition, Ray Serve is framework-agnostic, so it can serve different machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn. Altogether, it allows for high-efficiency, high-performance production.
Now, let’s use the XGBoost model that we created before and deploy it with Ray Serve.
First, let’s install Ray Serve:
pip install "ray[serve]"
Then, we start a Ray cluster, which Ray Serve runs on top of:
ray start --head
Next, we run the following Python script to import Ray Serve, start it up, and connect to the local running Ray cluster:
import ray
from ray import serve

ray.init(address='auto', namespace="serve")  # Connect to the local running Ray cluster.
serve.start(detached=True)  # Start the Ray Serve processes within the Ray cluster.
Note that the serve.start method starts up a few Ray actors, which Ray Serve uses to route HTTP requests to the appropriate models.
Also note that we’re only running Ray locally to test our code. This already gives us an advantage over Flask because, by default, Ray uses all available CPU cores on our machine, while the Flask app we created previously only uses a single core. And this is only a small taste of the benefits Ray can provide: we can just as easily deploy our model to a Ray cluster with dozens or even hundreds of nodes and serve our XGBoost model at scale without changing the code at all.
Now that Ray Serve is ready, it is time to create the model and deploy it. Since our XGBoost model is already created and trained, we just need to load and read it as a class, and then start the deployment process using Ray Serve.
Let’s jump right into it:
import pickle
import ray
from ray import serve

@serve.deployment(num_replicas=2, route_prefix="/regressor")
class XGB:
    def __init__(self):
        # load the trained model saved earlier
        with open("model.pkl", "rb") as f:
            self.model = pickle.load(f)

    async def __call__(self, starlette_request):
        payload = await starlette_request.json()
        print("Worker: received starlette request with data", payload)

        # the keys must match those sent in the request payload
        input_vector = [
            payload["Pregnancies"],
            payload["Glucose"],
            payload["BloodPressure"],
            payload["SkinThickness"],
            payload["Insulin"],
            payload["BMI"],
            payload["DiabetesPedigree"],
            payload["Age"],
        ]
        # convert the numpy type to a plain int so the result is JSON-serializable
        prediction = int(self.model.predict([input_vector])[0])
        return {"result": prediction}
Now, here’s where the magic happens. The following few lines of code will deploy our XGBoost model to our running Ray Serve instance. We do this simply by calling the Ray Serve API from Python.
# Initialize / connect to the running Ray Serve instance.
serve.start(detached=True)
# Deploy the model.
XGB.deploy()
And there we go! Our XGBoost model is now deployed on a Ray Serve application simply by calling deploy on the class we defined. In fact, there are two copies of the model running at the same time and handling requests. It is even easier to scale out by changing the num_replicas parameter, as shown below.
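For example, with the Ray Serve API used in this post, scaling from two to four replicas is a one-line change; options lets us override a deployment's settings without touching the class itself:

# scale the deployment out to four replicas and redeploy
XGB.options(num_replicas=4).deploy()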
We can now query the endpoint of our deployed model by sending a request to it. Note that the HTTP server runs at localhost:8000 by default.
import requests

sample_request_input = {
    "Pregnancies": 6,
    "Glucose": 148,
    "BloodPressure": 72,
    "SkinThickness": 35,
    "Insulin": 0,
    "BMI": 33.6,
    "DiabetesPedigree": 0.625,
    "Age": 50,
}
response = requests.get("http://localhost:8000/regressor", json=sample_request_input)
print(response.text)
# Response:
# {"result": 1}
As seen above, the final result is 1 when the onset of diabetes is predicted, and 0 when it is not.
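If a hard 0/1 label is too coarse for your use case, the underlying classifier also exposes predict_proba, which returns the probability of each class. A quick local sketch with the pickled model:

import pickle

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# one sample in the dataset's feature order
sample = [[6, 148, 72, 35, 0, 33.6, 0.625, 50]]
# probability of the positive class (diabetes onset) rather than a 0/1 label
print(model.predict_proba(sample)[0][1])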
And that's all there is to it! You now know how to serve an XGBoost model using Flask and scale it easily using Ray Serve.
To learn more about Ray Serve, the official documentation is a great place to start. You can begin by learning the basics, and then dive into the details on serving up machine learning models. Or, register for our upcoming meetup, where we'll discuss productionizing ML at scale with Ray Serve.
Access Anyscale today to see how companies using Anyscale and Ray benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.