
OpenScale Manual Config Part 5


Last updated 3 years ago


For a deployed machine learning model, OpenScale will record all of the requests for scoring and the results in the datamart using feedback logging. In this submodule, we'll emulate a production system that has been used for a week to score many requests, allowing the various configured monitors to present some interesting data. Note that this Historic Data submodule can be run at any time.
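The notebook automates this emulation by back-dating scoring records. As a rough illustration of the idea, a week of synthetic records with spread-out timestamps could be generated like the sketch below (the field names and value ranges here are hypothetical, not the notebook's actual schema):

```python
import random
from datetime import datetime, timedelta, timezone

def generate_historic_records(days=7, records_per_day=100):
    """Build scoring records with timestamps spread over the past `days` days,
    so monitors have a week of data to chart. Fields are illustrative only."""
    now = datetime.now(timezone.utc)
    records = []
    for day in range(days):
        for _ in range(records_per_day):
            # Back-date each record to a random minute within this day.
            scoring_time = now - timedelta(days=day, minutes=random.randint(0, 1439))
            records.append({
                "scoring_timestamp": scoring_time.isoformat(),
                "request": {
                    "fields": ["TENURE", "TOTALCHARGES"],
                    "values": [[random.randint(0, 72),
                                round(random.uniform(20.0, 8000.0), 2)]],
                },
                "response": {
                    "prediction": random.choice(["T", "F"]),
                    "probability": round(random.uniform(0.5, 1.0), 3),
                },
            })
    return records

records = generate_historic_records()
print(len(records))  # prints 700 (7 days x 100 records)
```

The actual notebook stores records like these in the OpenScale datamart, which is what makes the monitors show a week's worth of activity.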

Steps for basic OpenScale setup

The submodule contains the following steps:

1. Open the notebook

If you created the project using the Customer-Churn-Project.zip file in the Pre-work, your notebook will be present in that project, under the Assets tab:

Import the notebook (If you are not using the Project Import pre-work steps)

NOTE: You probably do not need this step; perform it only if instructed to.

At the project overview click the New Asset button, and choose Add notebook.

On the next panel select the From URL tab, give your notebook a name, provide the following URL, and choose the Python 3.6 environment:

https://raw.githubusercontent.com/IBM/telco-churn-workshop-cpd/master/notebooks/openscale-historic-data.ipynb

When the Jupyter notebook is loaded and the kernel is ready, we can start executing cells.
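If you would rather fetch the notebook file first and use the From file import option, a minimal download helper can do it. This is our own convenience sketch (the function name `download_notebook` is not part of the workshop):

```python
import urllib.request

# URL of the workshop notebook (from the step above).
NOTEBOOK_URL = ("https://raw.githubusercontent.com/IBM/telco-churn-workshop-cpd/"
                "master/notebooks/openscale-historic-data.ipynb")

def download_notebook(url=NOTEBOOK_URL, dest=None):
    """Save the notebook locally; dest defaults to the filename in the URL."""
    dest = dest or url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, dest)
    return dest
```

Calling `download_notebook()` saves `openscale-historic-data.ipynb` to the current directory, ready for the From file import tab.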

2. Update credentials

WOS_CREDENTIALS

  • In the notebook section 2.0 you will add your Cloud Pak for Data platform credentials for the WOS_CREDENTIALS.

  • For the url field, change https://w.x.y.z to the URL of your Cloud Pak for Data cluster, i.e. something like: "url": "https://zen-cpd-zen.cp4d-v5-2bef101da9502000c44fc2b2-0001.us-south.containers.appdomain.cloud".

  • For the username, use your Cloud Pak for Data login username.

  • For the password, use your Cloud Pak for Data login password.
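Taken together, the completed cell typically looks like the sketch below. The hostname is the example URL from above, and the username and password are placeholders you must replace with your own login details:

```python
# Illustrative shape of the WOS_CREDENTIALS dictionary edited in section 2.0.
WOS_CREDENTIALS = {
    "url": "https://zen-cpd-zen.cp4d-v5-2bef101da9502000c44fc2b2-0001"
           ".us-south.containers.appdomain.cloud",
    "username": "my-cp4d-username",   # your Cloud Pak for Data login username
    "password": "my-cp4d-password",   # your Cloud Pak for Data login password
}
```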

3. Run the notebook

Important: Make sure that you stop the kernel of your notebook(s) when you are done, in order to prevent leaking of memory resources!

Spend a minute looking through the sections of the notebook to get an overview. Run the cells individually by highlighting each cell and clicking the Run button at the top of the notebook. While a cell is running, an asterisk ([*]) will show up to the left of the cell. When that cell has finished executing, a sequential number will show up (e.g. [17]).

4. Explore the Watson OpenScale UI

Now that we've simulated a Machine Learning deployment in production, we can look at the associated monitors again and see more detail. Re-visit the Fairness monitor and Explainability, Quality monitor and Feedback logging, and Drift monitor submodules, and look again at the graphs, charts and explanations after the addition of the historical data.

Recap

With the addition of historical data, we can now use the OpenScale tools in a simulated production environment. We can look at Fairness, Explainability, Quality, and Drift, and see how all transactions are logged. This workshop contains API code, configuration tools, and details around using the UI tool to enable a user to monitor production machine learning environments.

You may now proceed to the next step.

If, for some reason, you are not using the step in the Pre-work to import Customer-Churn-Project.zip, then you will need to import the notebook file by itself. Use the steps in the Import the notebook section above for that.

The notebook is hosted in the same repo as the workshop.

Notebook: openscale-historic-data.ipynb

Notebook with output: openscale-historic-data-with-output.ipynb
