Adding a New Project¶
Projects let you group multiple experiments in a single location, for example by user or by experiment type. Adding a project is easy.
Click the Add project button.
Specify a name for the project.
Click Add. In this example, we're creating a project called Credit card test. Upon completion, the new project appears in the list of projects in MLOps.
Open the Projects page in Driverless AI and refresh it. The new project will now be visible there.
Exporting Experiments from Driverless AI into MLOps¶
This section assumes that you have experiments in a Driverless AI project to export into MLOps. Refer to the Driverless AI User Guide for information on how to run an experiment.
In Driverless AI, navigate to the Projects page. By default, this page shows all available projects, including the project just created.
Select the new project and link your experiment to it. If necessary, create an experiment first, and then link it. (Refer to Linking Experiments.)
Return to MLOps and verify that the default-payment-experiment experiment was exported into the Credit card test Project.
Importing Models from H2O-3 into MLOps¶
Regression and classification models from H2O-3 can be imported into MLOps.
Select a project and click Add Model.
Specify an H2O-3 MOJO model to import. Drag and drop a file into the window or select one from your machine's file system.
Enter a name for your model and click OK to confirm and upload the model.
Note: H2O-3 binary models cannot be imported into MLOps.
Deploying Models¶
In MLOps, models can be deployed to Dev and/or Production environments.
Select the project that includes the model you want to deploy. In this example, we will deploy the default-payment-experiment model from the Credit card test Project.
Click the Actions dropdown in the Models table and specify where you want to deploy this model. In this example, we will deploy to Dev.
A confirmation page displays. Click Confirm to continue the deployment or Cancel to return to the Project without deploying.
MLOps will begin the deployment, and the State column will show that the deployment is Launching. The state changes to Healthy upon completion.
After a model has been deployed, you will be able to view details about the deployment, monitor the deployment, view a sample cURL request, and copy the endpoint URL.
Note: You can undeploy by clicking the Deploy dropdown, de-selecting the current deployment, then confirming the request.
Creating a Champion/Challenger Deployment¶
MLOps allows you to continuously compare your chosen best model (Champion) to a Challenger model with Champion/Challenger deployments. To set a Challenger model, navigate to the Models section and click Actions > Challenger for … next to the model you want to use as a Challenger. In the pop-up window that appears, select a Champion model and an environment type, then click Deploy.
Creating an A/B Test Deployment¶
A/B testing in MLOps allows you to compare the performance of two or more models. When requests are sent to an A/B deployment, they are directed to the selected models at a specified ratio known as the traffic percentage. For example, if the deployment consists of two models and the traffic percentage is set to 50% / 50%, each model will receive half of the total incoming requests.
To create an A/B deployment, navigate to the Models section and select two or more models, then click A/B Test. In the pop-up window that appears, the models listed in the upper half are the models that were selected, while the lower half allows you to select additional models for the A/B deployment if there are any still available. You can also remove models from the selection by clicking them. Once you have selected the models you want to use, specify the traffic percentage for each model, then choose an environment type and click Deploy.
Note: The traffic percentage of the last model in the deployment autocompletes so that the overall percentage adds up to 100.
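Conceptually, the traffic percentage works as weighted random routing: each incoming request is assigned to one of the deployed models in proportion to its weight. The sketch below is illustrative only, not how MLOps routes traffic internally; the `choose_model` helper and model names are hypothetical:

```python
import random

def choose_model(weights, rng=random):
    """Pick a model name according to its traffic percentage.

    `weights` maps model name -> traffic percentage; percentages
    are assumed to sum to 100, as in an A/B deployment.
    """
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Simulate 10,000 requests against a 50% / 50% split.
weights = {"model_a": 50, "model_b": 50}
counts = {name: 0 for name in weights}
rng = random.Random(42)  # seeded for reproducibility
for _ in range(10_000):
    counts[choose_model(weights, rng)] += 1
```

Over many requests, each model's share of `counts` converges to its configured traffic percentage.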
In the Deployments section, click Actions > More Details to view summary information about the deployment along with any alerts. From this page, you can also click the Model Endpoint link to open the scoring service, and you can view or copy a sample cURL request for scoring.
Monitoring Models¶
Driverless AI + MLOps can monitor models for drift, anomalies, and residuals, providing more traceability and governance of models. The alerts provided on the dashboard can help you determine whether to re-tune or re-train models.
By default, the Monitoring page shows:
Scoring latency (Populated only after the scoring service is started. See Copy Endpoint URL for an example.)
Scatterplots, histograms, and heatmaps for input columns and output columns
Note: You can also navigate to this page by clicking on an alert.
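Drift monitoring of this kind typically compares the distribution of incoming scoring data against a training-time baseline. As a rough illustration (this is not the MLOps implementation), the Population Stability Index is one common drift statistic; the helper below is a hypothetical sketch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the expected (baseline) sample's range;
    a small smoothing term avoids log(0) for empty bins.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    e_frac, a_frac = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

baseline = list(range(100))
drifted = [v + 50 for v in baseline]
stable_score = psi(baseline, baseline)  # near 0: no drift
drift_score = psi(baseline, drifted)    # large: distribution has shifted
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift, which is the kind of signal that would prompt re-tuning or re-training.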
Custom Business Metrics¶
In addition to the default panels, you can create custom business metrics for this page. Click the Add Panel button in the upper-right menu to add a new panel, then choose whether to create a new query or a new visualization. (Refer to the Grafana documentation for more information.)
Show Sample Request¶
In the Deployments section, click Actions > Show sample request to view the sample cURL request.
Copy the sample cURL request.
Open a Terminal window and paste the sample cURL request to view the model scoring information.
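If you prefer a script over cURL, the same scoring call can be built programmatically. The sketch below assumes a JSON payload with `fields` and `rows` keys, a shape commonly used by H2O scoring services; the endpoint URL and column names are placeholders, so check your deployment's sample request for the exact schema:

```python
import json
from urllib import request

def build_score_request(endpoint_url, fields, rows):
    """Build an HTTP POST for a scoring endpoint.

    The payload shape ({"fields": [...], "rows": [[...]]}) is an
    assumption; your deployment's sample cURL request shows the
    authoritative schema.
    """
    payload = json.dumps({"fields": fields, "rows": rows}).encode("utf-8")
    return request.Request(
        endpoint_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_score_request(
    "https://example.com/model/score",  # placeholder; use your endpoint URL
    ["LIMIT_BAL", "AGE"],               # hypothetical column names
    [["20000", "24"]],
)
```

To actually score, pass the request to `request.urlopen(req)` and read the JSON response.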
Copy Endpoint URL¶
In the Deployments section, click Actions > Copy endpoint URL to retrieve the model endpoint.
Paste the URL into a browser to launch the scoring service.
Upon completion, the Scoring Latency graph on the Actions > Monitoring page begins to populate with data.