Deploy ML models built in Amazon SageMaker Canvas to Amazon SageMaker real-time endpoints
Amazon SageMaker Canvas now supports deploying machine learning (ML) models to real-time inferencing endpoints, allowing you to take your ML models to production and drive action based on ML-powered insights. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions for their business needs.
Overview of solution
For our use case, let’s assume we are a business user in the marketing department of a mobile phone operator. We have successfully created an ML model in SageMaker Canvas to identify customers at risk of churning. Now, we want to move this model from our development environment to production. With SageMaker Canvas, we can directly deploy ML models as endpoints for real-time inferencing, eliminating the need for manual export, configuration, testing, and deployment. This saves time, reduces complexity, and makes operationalizing ML models more accessible without writing code.
The workflow steps are as follows:
- Upload a new dataset with the current customer population into SageMaker Canvas.
- Build ML models and analyze their performance metrics.
- Deploy the approved model version as an endpoint for real-time inferencing.
You can perform these steps in SageMaker Canvas without writing a single line of code.
Prerequisites
Before proceeding, make sure the following prerequisites are met:
- The SageMaker Canvas admin must grant the SageMaker Canvas user the permissions required to deploy model versions to SageMaker endpoints.
- Complete the prerequisites described in “Predict customer churn with no-code machine learning using Amazon SageMaker Canvas.”
You should now have three model versions trained on historical churn prediction data in Canvas:
- V1, trained with all 21 features using the quick build configuration, with a model score of 96.903%
- V2, trained with 19 features (the phone and state features removed) using the quick build configuration, with an improved model score of 97.403%
- V3, trained with the standard build configuration, with a model score of 97.103%
Use the objective metrics associated with each model version to select the best-performing model for deployment. In our example, we select version 2.
Configure the model deployment settings, such as the deployment name, instance type, and instance count. Canvas recommends an instance type and number of instances, but you can customize both to suit your workload.
You can test the deployed SageMaker inference endpoint directly from within SageMaker Canvas by changing input values using the user interface.
To check out the deployed endpoint in Amazon SageMaker Studio, open a notebook and run code to invoke the deployed model endpoint.
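As a minimal sketch of what such a notebook cell might look like, the following uses the boto3 SageMaker runtime client to send one CSV record to the endpoint. The endpoint name, region, and feature values here are placeholders, not the names Canvas generates; substitute your actual deployment name and a record whose columns match the 19 features the model was trained on.

```python
def build_csv_payload(features):
    """Serialize one record's feature values into the CSV body the endpoint expects."""
    return ",".join(str(v) for v in features)


def predict_churn(endpoint_name, features, region_name=None):
    """Invoke a SageMaker real-time inference endpoint with a single CSV record."""
    import boto3  # imported here so the payload helper is usable without the AWS SDK

    runtime = boto3.client("sagemaker-runtime", region_name=region_name)
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=build_csv_payload(features),
    )
    return response["Body"].read().decode("utf-8")


# Example usage (requires AWS credentials and a deployed endpoint):
# row = [128, 25, 265.1]  # placeholder; supply all 19 feature values in training-column order
# prediction = predict_churn("canvas-churn-endpoint", row)
```

The response body format depends on the model; for a binary churn classifier it is typically the predicted label, optionally with a probability.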
If you expect an increase in end-users inferencing your model endpoint and want to provision more compute capacity, you can update the configuration directly from within SageMaker Canvas.
Clean up
To avoid future charges, delete the resources created during this process, including the deployed SageMaker endpoint. Remember to log out of SageMaker Canvas when not in use to avoid unnecessary billing.
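The endpoint can be deleted from the SageMaker console or programmatically. The following is a sketch using boto3; it assumes, hypothetically, that the endpoint configuration shares the endpoint's name, so verify the actual resource names in the SageMaker console before running it. The `dry_run` flag lets you preview the deletions without calling AWS.

```python
def cleanup_canvas_deployment(endpoint_name, dry_run=False):
    """Delete a real-time endpoint and its endpoint configuration.

    With dry_run=True, return the planned deletions without calling AWS.
    """
    # Assumption: the endpoint config uses the same name as the endpoint;
    # check the SageMaker console for the names Canvas actually created.
    actions = [
        ("delete_endpoint", {"EndpointName": endpoint_name}),
        ("delete_endpoint_config", {"EndpointConfigName": endpoint_name}),
    ]
    if dry_run:
        return actions

    import boto3  # imported lazily so a dry run needs no AWS SDK

    sm = boto3.client("sagemaker")
    for method, kwargs in actions:
        getattr(sm, method)(**kwargs)
    return actions


# Example usage (requires AWS credentials):
# cleanup_canvas_deployment("canvas-churn-endpoint")
```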
Conclusion
In this post, we discussed how SageMaker Canvas can deploy ML models to real-time inferencing endpoints, enabling you to take your ML models to production and drive action based on ML-powered insights. We demonstrated how an analyst can quickly build a highly accurate predictive ML model without writing any code, deploy it on SageMaker as an endpoint, and test the model from both SageMaker Canvas and SageMaker Studio. To start your low-code/no-code ML journey, refer to Amazon SageMaker Canvas.