Deploy and Test Machine Learning Models
In this module, we will learn how to deploy our Machine Learning models. By doing so, we make them available for use in production so that applications and business processes can derive insights from them. There are several types of deployments available, of which we will explore two:
- Online Deployments - Allow you to run the model on data in real time, as data is received by a web service.
- Batch Deployments - Allow you to run the model against data as a batch process.
This module is broken up into several sections that explore the different model deployment options as well as the different ways to invoke or consume them. The first section of this lab builds an online deployment and tests the model endpoint using both the built-in testing tool and external testing tools. The remaining sections are optional: they build and test the batch deployment, followed by using the model from a Python application.
- Create Online model deployment
- Test the deployed model using the web UI
- (Optional) Test model using cURL
Note: The lab instructions below assume you have completed the pre-work section already. If not, be sure to complete the pre-work first to create a project and a deployment space.
After a model has been created, saved, and promoted to our deployment space, we can proceed to deploying the model. For this section, we will create an online deployment. This type of deployment makes an instance of the model available to make predictions in real time via an API. Although we will use the Cloud Pak for Data UI to deploy the model, the same can be done programmatically.
Navigate to the left-hand (☰) hamburger menu and choose Analyze -> Analytics deployments:
Choose the deployment space you set up previously by clicking on the name of your space.
From your deployment space overview, in the table, click on the model name that you previously promoted. Next, click on Create Deployment.
On the Create a deployment screen, choose Online for the Deployment Type, give the deployment a name and optionally a description, and click the Create button.
The deployment will show as In progress and then switch to Deployed when done.
Cloud Pak for Data offers tools to quickly test out Watson Machine Learning models. We begin with the built-in tooling.
From the model deployment page, once the deployment status shows as Deployed, click on the name of your deployment. The deployment API reference tab shows how to use the model using cURL, Java, JavaScript, Python, and Scala.
To get to the built-in test tool, click on the Test tab and then click on the Provide input data as JSON icon.
Copy and paste the following data objects into the Body panel. (Note: Make sure the input below is the only content in the field. Do not append it to the default content `{ "input_data": [] }` that may already be in the field. Instead, remove the existing content and replace it with the following data.)
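If you no longer have the payload handy, the sketch below follows the Watson Machine Learning `input_data` scoring format. The field names and values here are illustrative, drawn from the German credit risk dataset this lab's "Risk"/"No Risk" model is based on; adjust them to match the features your model was actually trained on.

```json
{
  "input_data": [
    {
      "fields": ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose",
                 "LoanAmount", "ExistingSavings", "EmploymentDuration", "InstallmentPercent",
                 "Sex", "OthersOnLoan", "CurrentResidenceDuration", "OwnsProperty",
                 "Age", "InstallmentPlans", "Housing", "ExistingCreditsCount",
                 "Job", "Dependents", "Telephone", "ForeignWorker"],
      "values": [["no_checking", 13, "credits_paid_to_date", "car_new",
                  1343, "100_to_500", "1_to_4", 2,
                  "female", "none", 3, "savings_insurance",
                  46, "none", "own", 2,
                  "skilled", 1, "none", "yes"]]
    }
  ]
}
```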
Click the Predict button. The model will be called with the input data and the results will display in the Result window. Scroll down to the bottom of the result to see the prediction (i.e., "Risk" or "No Risk"):
Now that the model is deployed, we can also test it from external applications. One way to invoke the model API is using the cURL command.
In order to get an access token, you need an API Key, which you can get from your IBM Cloud account. You can create one by running the following command. Remember to save your API key, as it can't be retrieved after it is created.
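A sketch using the IBM Cloud CLI, assuming you have installed it and logged in with `ibmcloud login`; the key name and output file name below are just examples:

```bash
# Create an API key and save it to a local file (the key is shown only once).
ibmcloud iam api-key-create my-wml-key \
  -d "API key for the ML deployment lab" \
  --file my-wml-key.json
```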
Next, in a terminal window, run the following command to get a token to access the API. Replace `<API Key>` with the API key that you got from running the command above.
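A sketch of the token request against the IBM Cloud IAM endpoint:

```bash
# Exchange the API key for a bearer token via IBM Cloud IAM.
curl -X POST \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --header "Accept: application/json" \
  --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
  --data-urlencode "apikey=<API Key>" \
  "https://iam.cloud.ibm.com/identity/token"
```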
A JSON string will be returned with a value for access_token.
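A sketch of the response shape (the values here are placeholders, and a real token is a much longer string):

```json
{
  "access_token": "eyJraWQiOiIyMDIx...<truncated>",
  "refresh_token": "...",
  "token_type": "Bearer",
  "expires_in": 3600,
  "expiration": 1601055170
}
```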
You will save the value that follows access_token in a temporary environment variable in your terminal. Copy the access token value (without the quotes) and then use the following export command to save it to a variable called WML_AUTH_TOKEN.
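Substitute the copied value in place of the placeholder:

```bash
export WML_AUTH_TOKEN=<access_token value>
```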
Back on the model deployment page, gather the URL to invoke the deployed model from the API reference tab by copying the Endpoint.
Now save that endpoint to a variable named URL in your terminal by exporting it. The URL also requires a version query parameter. Example of a URL:
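```bash
# Hypothetical host and deployment ID; copy the real Endpoint from your
# deployment's API reference tab and append the version query parameter.
export URL="https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<deployment-id>/predictions?version=2021-06-01"
```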
Now run this cURL command from the terminal to invoke the model with the same payload we used previously:
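A sketch of that call, assuming the scoring payload from the web UI test has been saved to a local file named payload.json (a name chosen for this example):

```bash
# Score the online deployment using the saved token and endpoint.
curl -X POST \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "Authorization: Bearer $WML_AUTH_TOKEN" \
  --data @payload.json \
  "$URL"
```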
A JSON string will be returned with the response, including a prediction from the model ("Risk" or "No Risk" at the end, indicating whether this loan represents a risk).
You can also access the online model deployment directly through the REST API. This allows you to use your model for inference in any of your apps. For this workshop we'll be using a Python Flask application to collect information, score it against the model, and show the results.
IMPORTANT: This sample application only runs on Python 3.6 and above, so the instructions here are for Python 3.6+ only. You will need to have Python 3.6 or later already installed on your machine. Note: The instructions below assume you have completed the pre-work module and thus have the Git repository already on your machine (cloned or downloaded).
In order to follow along with this section, use one of the following links to get a copy of the front-end application that is used in this lab. Use the [Download] link to get the file. If the link isn't working for you, try clicking the [Mirror] link to get it from our backup servers.
TIP: To terminate the virtual environment, use the deactivate command.
To install the Python requirements, from a terminal (or command prompt) navigate to where you cloned/downloaded the Git repository. Run the following commands:
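A sketch of those commands, assuming the repository root contains a requirements.txt for this app (the path placeholder is yours to fill in):

```bash
# Move into the repository and install the app's Python dependencies.
cd <path-to-cloned-repo>
pip install -r requirements.txt
```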
It's best practice to store configurable information as environment variables instead of hard-coding any important information. To reference our model and supply an API key, we'll pass these values in via a file that is read at startup; the key-value pairs in this file are loaded as environment variables.
Copy the env.sample file to .env.
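For example, on macOS or Linux:

```bash
cp env.sample .env
```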
Edit .env and fill in the MODEL_URL as well as the TOKEN_REQUEST_URL and API_TOKEN.
- `MODEL_URL` is your web service URL for scoring, which you got from the section above.
- `TOKEN_REQUEST_URL` is the URL for the identity management service where you can request your access tokens. This should not need to be changed.
- `API_TOKEN` is the API key that we generated earlier to authenticate with IBM Cloud.
Note: Alternatively, you can fill in the AUTH_TOKEN instead of AUTH_URL, AUTH_USERNAME, and AUTH_PASSWORD. You will have generated this token in the section above. However, since tokens expire after a few hours and you would need to restart your app to update the token, this option is not suggested. Instead, if you use the username/password option, the app can generate a new token every time, so it will always have a non-expired one.
Here's an example of the completed lines of the .env file.
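A hypothetical completed .env; the endpoint and key below are placeholders to be replaced with your own values:

```bash
MODEL_URL=https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<deployment-id>/predictions?version=2021-06-01
TOKEN_REQUEST_URL=https://iam.cloud.ibm.com/identity/token
API_TOKEN=<your IBM Cloud API key>
```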
Start the Flask server by running the following command:
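A minimal sketch, assuming the application's entry point is app.py (check the repository for the actual file name if it differs):

```bash
python app.py
```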
TIP: Use Ctrl+C to stop the Flask server when you are done.
Either use the default values pre-filled in the input form, or modify the values, and then click the Submit button. The Python application will invoke the predictive model, and a risk prediction and probability are returned:
We have seen how to deploy our machine learning models so they can be called by external consumers. Specifically, we have seen how to create an online deployment, which makes an instance of the model available to make predictions in real time via an API. Over time, these models will eventually need to be updated for any number of reasons, and deployments can be updated without disrupting consumers of the API.
In this section we covered the following:

- Creating and testing online deployments for models.
- (Optional) Creating and testing batch deployments for models.
- (Optional) Integrating the model deployment in an external application.
- Taking a predictive model and infusing AI into applications.
Note: There may be more than one model listed in the Models section. This can happen if you have run the Jupyter notebook more than once, or if you have run through both the Jupyter notebook and AutoAI modules to create models. Although you could select any of the models you see listed on the page, the recommendation is to start with whichever available model uses a spark-mllib_2.3 runtime (this is denoted in the Software Specification column).
Note: For some deployed models (for example, AutoAI-based models), you can provide the request payload using a generated form by clicking on the Provide input using form icon and providing values for the input fields of the form. If the form is not available for the model you deployed, the icon will not be present or will remain grayed out.
Note for Windows users: This section uses commands available in Unix-based systems (macOS, Linux, ...). If you want to run the commands locally, it is recommended to use a Unix-like shell or enable the Windows Subsystem for Linux (WSL) if you use Windows 10. If neither option works for you, you will need to adapt the commands for your system to follow this section (for instance, change export commands to set commands, and find an alternative for cURL).
The first step is to install the IBM Cloud CLI. You can follow the instructions in its documentation to do so.
Python Application
The general recommendation for Python development is to use a virtual environment. To install and initialize a virtual environment, use the venv module on Python 3. Run the following commands in a terminal (or command prompt):
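A sketch for macOS/Linux; the environment name "venv" is just a convention:

```bash
# Create and activate a virtual environment using Python 3's built-in venv module.
python3 -m venv venv             # creates the environment in a folder named "venv"
source venv/bin/activate         # activates it (on Windows: venv\Scripts\activate)
```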
Use your browser to go to the application (Flask serves on http://localhost:5000 by default) and try it out.
See the documentation for details.