Deploy and Test Machine Learning Models
In this module, we will learn how to deploy our Machine Learning models. By doing so, we make them available for use in production so that applications and business processes can derive insights from them. There are several types of deployments available, of which we will explore:
Online Deployments - Allows you to run the model on data in real-time, as data is received by a web service.
Batch Deployments - Allows you to run the model against data as a batch process.
This module is broken up into several sections that explore the different model deployment options as well as the different ways to invoke or consume them. The first section of this lab will build an online deployment and test the model endpoint using both the built-in testing tool and external testing tools. The remaining sections are optional: they build and test the batch deployment, then use the model from a Python application.
Create Online model deployment
Test the deployed model web UI
(Optional) Test model using cURL
Create Batch Deployment
Create and Schedule a Job
Note: The lab instructions below assume you have already completed the pre-work section. If not, be sure to complete the pre-work first to create a project and a deployment space.
After a model has been created, saved and promoted to our deployment space, we can proceed to deploying the model. For this section, we will be creating an online deployment. This type of deployment will make an instance of the model available to make predictions in real time via an API. Although we will use the Cloud Pak for Data UI to deploy the model, the same can be done programmatically.
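For reference, here is a minimal sketch of the programmatic route using the ibm-watson-machine-learning Python client. The endpoint URL, API key, space ID, and model ID below are placeholders you would substitute from your own account; treat this as an illustration rather than part of the lab steps.

```python
# Sketch: create an online deployment with the Watson Machine Learning
# Python client (pip install ibm-watson-machine-learning).
# All credentials and IDs below are placeholders.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # your region's WML endpoint
    "apikey": "<your IBM Cloud API key>"
}
client = APIClient(wml_credentials)
client.set.default_space("<deployment space id>")

meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "credit-risk-online",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}
deployment = client.deployments.create("<model id>", meta_props=meta_props)
print(client.deployments.get_uid(deployment))  # deployment ID used for scoring
```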
Navigate to the left-hand (☰) hamburger menu and choose Deployment spaces -> View all spaces.
Choose the deployment space you setup previously by clicking on the name of your space.
From your deployment space overview, in the table, click on the model name that you previously promoted. Next, click the Create Deployment button.
Note: There may be more than one model listed in the 'Models' section. This can happen if you have run the Jupyter notebook more than once or if you have run through both the Jupyter notebook and AutoAI modules to create models. Although you could select any of the models listed on the page, the recommendation is to start with whichever model is available that uses a spark-mllib_2.4 runtime (this is denoted in the Software Specification column).
On the Create a deployment screen, choose Online for the Deployment Type, give the deployment a name and optionally a description, and click the Create button.
The deployment will show as In progress and then switch to Deployed when done.
Cloud Pak for Data offers tools to quickly test out Watson Machine Learning models. We begin with the built-in tooling.
From the model deployment page, once the deployment status shows as Deployed, click on the name of your deployment. The API reference tab shows how to use the model using cURL, Java, JavaScript, Python, and Scala.
To get to the built-in test tool, click on the Test tab and then click on the Provide input data as JSON icon.
Copy and paste the following data objects into the Body panel. (Note: Make sure the input below is the only content in the field. Do not append it to the default content { "input_data": [] } that may already be in the field. Instead, remove the existing content and replace it with the following data.)
Click the Predict button. The model will be called with the input data and the results will display in the Result window. Scroll down to the bottom of the result to see the prediction (i.e., "Risk" or "No Risk").
Now that the model is deployed, we can also test it from external applications. One way to invoke the model API is using the cURL command.
In order to get an access token, you need an API key, which you can get from your IBM Cloud account. You can create one by running the following command. Remember to save your API key, as it can't be retrieved after it is created.
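For example, with the IBM Cloud CLI (the key name and output file here are arbitrary choices):

```bash
ibmcloud login
ibmcloud iam api-key-create deployment-lab-key \
  -d "API key for the ML deployment lab" \
  --file deployment-lab-key.json
```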
Next, in a terminal window, run the following command to get a token to access the API. Replace <API Key> with the API key that you got from running the above command.
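The token request goes against the IBM Cloud IAM endpoint and should look similar to this:

```bash
curl -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=<API Key>"
```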
A JSON string will be returned with a value for access_token that will look similar to this:
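For example (values shortened for readability):

```json
{
  "access_token": "eyJraWQiOiIyMDIxMD...",
  "refresh_token": "...",
  "token_type": "Bearer",
  "expires_in": 3600,
  "expiration": 1614945246
}
```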
Save the value of the access_token field in a temporary environment variable in your terminal. Copy the access token value (without the quotes) and then use the following export command to save it to a variable called WML_AUTH_TOKEN.
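```bash
export WML_AUTH_TOKEN=<access_token value>
```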
Back on the model deployment page, gather the URL to invoke the deployed model from the API reference by copying the Endpoint.
Now save that endpoint to a variable named URL in your terminal by exporting it. The URL also requires a version query parameter. Example of a URL:
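The region and deployment ID will differ for your account, and the version date below is just a valid example value:

```bash
export URL="https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<deployment id>/predictions?version=2021-06-01"
```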
Now run this cURL command from the terminal to invoke the model with the same payload we used previously:
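For instance, assuming you saved the scoring payload from the previous test into a file named payload.json (a filename chosen here for convenience):

```bash
curl -X POST \
  -H "Authorization: Bearer $WML_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d @payload.json \
  "$URL"
```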
A JSON string will be returned with the response, including a prediction from the model (i.e., "Risk" or "No Risk" at the end, indicating whether this loan represents a risk).
Another approach to expose the model to be consumed by other users/applications is to create a batch deployment. This type of deployment will make an instance of the model available to make predictions against data assets or groups of records. The model prediction requests are scheduled as jobs, which are executed asynchronously. For the lab, we will break this into two steps:
Creating the batch deployment
Creating and scheduling the batch job
Let's start by creating the deployment:
Navigate to the left-hand (☰) hamburger menu and choose Deployment Spaces -> View all spaces.
Choose the deployment space you created previously by clicking on the name of the space.
From your deployment space overview, in the table, find the model name for the model you previously built and now want to create a deployment against. Use your mouse to hover over the right side of that table row and click the Deploy rocket icon (the icons are not visible by default until you hover over them).
Note: There may be more than one model listed in the 'Models' section. This can happen if you have run the Jupyter notebook more than once or if you have run through both the Jupyter notebook and AutoAI modules to create models. Although you could select any of the models listed on the page, the recommendation is to start with whichever model is available that uses a spark-mllib_2.3 runtime.
On the 'Create a deployment' screen, choose Batch for the Deployment Type, and give the deployment a name and an optional description. From the 'Hardware definition' drop-down, select the smallest option (1 standard CPU, 4GB RAM in this case, though for large or frequent batch jobs you might choose to scale the hardware up). Click the Create button.
Once the status shows as Deployed, you will be able to start submitting jobs to the deployment.
Next, we can schedule a job to run against our batch deployment. We could create a job, with specific input data (or a data asset) and a schedule, either programmatically or through the UI. For this lab, we are going to do this programmatically using the Python client SDK. For this part of the exercise, we will use a Jupyter notebook to create and submit a batch job to our model deployment.
Note: The batch job input is impacted by the machine learning framework used to build the model. Currently, SparkML-based model batch jobs require an inline payload. For other frameworks, we can use data assets (i.e., CSV files) as the input payload.
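To give a sense of what the notebook does, here is a compact sketch of submitting an inline-payload batch job with the ibm-watson-machine-learning client. It assumes client is an authenticated APIClient with your deployment space already set (as in the earlier online deployment sketch); the deployment ID, fields, and values are placeholders.

```python
import time

# Inline payload: required for SparkML-based models, as noted above.
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["CheckingStatus", "LoanDuration", "..."],  # your model's input fields
        "values": [["no_checking", 13, "..."]]                # one row per prediction
    }]
}

# Submit the job against the batch deployment; it runs asynchronously.
job = client.deployments.create_job("<batch deployment id>", meta_props=job_payload)
job_id = client.deployments.get_job_uid(job)

# Poll until the job reaches a terminal state, then inspect the results.
while client.deployments.get_job_status(job_id).get("state") not in ("completed", "failed"):
    time.sleep(5)
print(client.deployments.get_job_details(job_id))
```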
The Jupyter notebook is already included as an asset in the project you imported earlier.
Go to the (☰) navigation menu, click on the Projects link, and then click on your analytics project.
From the project overview page, click on the Assets tab to open the assets page where your project assets are stored and organized.
Scroll down to the Notebooks section of the page and click on the pencil icon at the right of the machinelearning-creditrisk-batchscoring notebook.
When the Jupyter notebook is loaded and the kernel is ready, we will be ready to start executing it in the next section.
Notebook sections
With the notebook open, spend a minute looking through the sections of the notebook to get an overview. A notebook is composed of text (markdown or heading) cells and code cells. The markdown cells provide comments on what the code is designed to do. You will run cells individually by highlighting each cell, then either clicking the Run button at the top of the notebook or hitting the keyboard shortcut to run the cell (Shift + Enter, but this can vary based on platform). While a cell is running, an asterisk ([*]) will show up to the left of the cell. When the cell has finished executing, a sequential number will show up (e.g., [17]).
Note: Please note that some of the comments in the notebook are directions for you to modify specific sections of the code. These are written in red. Perform any changes necessary, as indicated in the cells, before executing them.
In order to conserve resources, make sure that you stop the environment used by your notebook(s) when you are done.
Navigate back to your project information page by clicking on your project name from the navigation drill down on the top left of the page.
Click on the 'Environments' tab near the top of the page. Then, in the 'Active environment runtimes' section, you will see the environment used by your notebook (i.e., the Tool value is Notebook). Click on the three vertical dots at the right of that row and select the Stop option from the menu.
Click the Stop button on the subsequent pop-up window.
You can also access the online model deployment directly through the REST API. This allows you to use your model for inference in any of your apps. For this workshop we'll be using a Python Flask application to collect information, score it against the model, and show the results.
There are many ways of running Python applications. We will cover two of them: first, running as a Python application on your machine, and next, as a deployed application on IBM Cloud.
Regardless of which option we choose for deployment, we need to configure our Python application so it knows how to connect to our specific model. To do that, follow these steps.
It's best practice to store secrets and configurations as environment variables instead of hard-coding them in the code. Following this convention, we will store our API key and model URL in a .env file. The key-value pairs in this file are treated as environment variables when the code runs.
Copy the env.sample file to .env.
Edit .env to fill in the MODEL_URL and API_TOKEN variables.
Here is an example of the completed lines of the .env file. Your API_TOKEN and MODEL_URL will differ.
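A sketch of what the finished file might look like (both values are placeholders):

```bash
# .env - environment variables read by the application at startup
API_TOKEN=<your IBM Cloud API key>
MODEL_URL=https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<deployment id>/predictions?version=2021-06-01
```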
And we're done! Now you can proceed to your favorite option below.
Choose this option if you want to run the Python Flask application locally on your machine. Note that this application will still access your deployed model in Cloud Pak for Data as a Service over the internet.
Important pre-requisites
Completion of pre-work and Online Deployment section
Completion of the common steps section above.
A working installation of Python 3.6 or above
TIP: To terminate the virtual environment, use the deactivate command.
Now we are ready to start our Python application.
Start the Flask server by running the following command:
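Assuming the application's entry point is named app.py (check the unzipped app for the actual filename):

```bash
python app.py
```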
TIP: Use Ctrl+C to stop the Flask server when you are done.
Either use the default values pre-filled in the input form, or modify the values, and then click the Submit button. The Python application will invoke the predictive model, and a risk prediction and probability are returned:
As an alternative, you can deploy this application to your IBM Cloud account as a Cloud Foundry application.
Important pre-requisites
Completion of the common steps section above.
Configuring our application:
First, make sure you have completed the common steps section above.
[Optional]: You can inspect or change the deployment definitions for this application in the file named manifest.yml. As defined, your application will have a random URL. You could change that by setting random-route: false and picking a unique name in the name section.
Authenticating with the IBM Cloud:
Make sure you have the IBM Cloud CLI installed
Then use the ibmcloud login command to authenticate (you can also use ibmcloud login --sso if your organization uses single sign-on).
Target the desired Cloud Foundry endpoint by running ibmcloud target --cf.
Publishing our Application
We are now ready to publish our application. Use ibmcloud cf push to push your application to the cloud.
Once it is complete you will see the URL for your application on the IBM Cloud.
Testing our application
For the final step, navigate to the URL that you got after publishing your application. You can then change the fields and click Submit. The application makes a call to your deployed model on the backend and visualizes the predictions for you. All on the cloud!
Summary of the commands
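As a recap, the Cloud Foundry deployment boils down to the following commands (sketched here from the steps above):

```bash
ibmcloud login          # or: ibmcloud login --sso
ibmcloud target --cf    # target a Cloud Foundry org/space
ibmcloud cf push        # deploy the app as described in manifest.yml
```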
And we are all done. We configured our application, logged in using the IBM Cloud CLI, and published our application to the cloud.
You can clean up the deployments created for your models. To remove the deployment:
Navigate to the left-hand (☰) hamburger menu and choose Deployment spaces -> View all spaces.
Choose the deployment space you setup previously by clicking on the name of your space.
From your deployment space overview, in the table, click on the model name that you previously promoted and created deployments against.
Under 'Deployment Types', click on Online to view the online deployments you have created for this model.
In the table on the main panel, click on the three vertical dots at the right of the row for the online deployment you created. Select the Delete option from the menu.
Note: The vertical dots are hidden until you hover over them with your mouse.
In the subsequent pop-up window, click on the Delete button to confirm you want to delete this deployment.
You can follow the same process to delete other deployments as needed.
In this section we covered the following:
Creating and Testing Online Deployments for models.
(Optional) Creating and Testing Batch Deployments for models.
(Optional) Integrating the model deployment in an external application.
Taking a predictive model and infusing AI into applications.
Note: For some deployed models (for example, AutoAI-based models), you can provide the request payload using a generated form by clicking on the Provide input using form icon and providing values for the input fields of the form. If the form is not available for the model you deployed, the icon will not be present or will remain grayed out.
Important: If you have completed this section and do not plan on completing the other optional deployment approaches below, please go ahead and clean up your deployment. Follow the steps in the cleanup section of this module.
Note for Windows users: This section uses commands available on Unix-based systems (macOS, Linux, ...). Windows users can use a cloud-based shell, or, to run the commands locally, install a Unix-like shell or enable the Windows Subsystem for Linux if you use Windows 10. If neither option works for you, you will need to adapt the commands for your system to follow this section (for instance, you need to change export commands to set commands, and find an alternative for cURL).
The first step is to install the IBM Cloud CLI. You can follow the official installation instructions to do so.
Important: If you have completed this section and do not plan on completing the other optional deployment approaches below, please go ahead and clean up your deployment. Follow the steps in the cleanup section of this module.
Unzip the python app zip file that you downloaded in the pre-work section. Depending on your operating system, the command to do this will differ, so an online search might be in order if you don't know how already!
API_TOKEN is your API key, which we created in the 'Test model using cURL' section above. If you don't have your API key, see that section for how to create one.
MODEL_URL is your online deployment's Endpoint. We covered how to find this URL (the MODEL_URL) in the section above. For convenience, we're adding a reminder below.
To find your MODEL_URL:
1. Go to the (☰) hamburger menu > Deployments > View all spaces.
2. Select your deployment space's name from the list.
3. Select your model from the 'Models' section.
4. In the Online tab, select your deployment's name.
5. Finally, you can find the Endpoint in the API reference section.
You could run this Python application in your default Python environment; however, the general recommendation for Python development is to use a virtual environment. To install and initialize a virtual environment, use the venv module on Python 3. Run the following commands in a terminal (or command prompt):
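For example, on macOS or Linux (the venv directory name is arbitrary):

```bash
python3 -m venv venv          # create a virtual environment in ./venv
source venv/bin/activate      # activate it (on Windows: venv\Scripts\activate)
```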
Next, to install the Python requirements, from a terminal (or command prompt) navigate to where you downloaded the python app zip file during the pre-work. Unzip the downloaded python application and run the following commands:
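For example (assuming the unzipped app ships a requirements.txt, as Flask samples typically do):

```bash
cd <path to the unzipped application>
pip install -r requirements.txt
```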
Finally, use your browser to go to the address where Flask is serving the app (http://localhost:5000 by default) and try it out.
Important: Please go ahead and clean up your deployments. Follow the steps in the cleanup section of this module.
Completion of the pre-work and Online Deployment sections
A working installation of the IBM Cloud CLI
Important: Please go ahead and clean up your deployments. Follow the steps in the cleanup section of this module.