In Part 2 of this blog post, let's continue to deploy, serve, and run inference on the sentiment model created in Part 1. For better understanding and continuity, I strongly recommend reading Part 1 before proceeding with Part 2.

Prerequisites



  1. Access to SAP Data Intelligence Instance

  2. Amazon S3 Account for holding raw training data


In the course of this tutorial, we will create two pipelines:

Model Deployment & Serving Pipeline


This pipeline deploys the exported sentiment model and starts serving real-time prediction requests.



Inference Pipeline


This pipeline creates HTTP prediction requests to get the sentiment of a given text input.







1. Data Extraction & Pre-Processing Pipeline


Refer to Part 1 of the blog post



2. Model Training Pipeline


Refer to Part 1 of the blog post



3. Model Deployment & Serving Pipeline


In this section, we will deploy the previously created model to serve real-time prediction requests.

The step-by-step approach below will help you complete this section. As we are already familiar with the "Scenario Manager", let's go straight to creating the Deployment & Serving pipeline.


  • Create Deployment & Serving Pipeline



    • Start creating a new pipeline by clicking the "+" icon

    • In the "Create Pipeline" pop-up, provide name of the pipeline as "sentiment-deploy-serve" and select the "Template" as "TensorFlow Serving Pipeline" and click the"Create" button


    • You will be redirected to the modeler window for pipeline design. As we chose a predefined template, the pipeline is generated automatically as shown below

    • Take some time to explore the configuration of each operator and try to understand their parameter values. Here is some brief information about each operator:

      • OpenAPI Servlow: Operator that serves the HTTP requests

      • Model Serving Operator: Operator that deploys the sentiment model and starts serving it



    • Save the graph, go back to the “Sentiment Scenario” in the “Scenario Manager”, and click the “Create Version” button to create a new version

    • Once the version has been created, select the "sentiment-deploy-serve" pipeline and click the “Deploy” icon to run the pipeline

    • This takes you to the pipeline configuration screen, where you can select the trained model in Step 4 (Pipeline Parameters). In our case the model is called "sentiment"; select it and click "Save"

    • This triggers the deployment and takes you to the Deployment screen, where the status of the Deployment URL is initially Pending and eventually changes to Running once the server is up in the back end

    • The model server is now created and a service URL is published through which we can make predictions. Take note of the Deployment URL, which we will use in the next section for inference; a quick way to verify the endpoint is sketched below
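If you want to sanity-check the endpoint before building the inference pipeline, a minimal sketch like the following can help. This is not part of the blog's pipelines: the payload shape is an assumption and depends on how the serving pipeline expects its input, and the URL and credentials are the same placeholders used later in this post.

      # Minimal smoke test of the Deployment URL (placeholders are hypothetical).
      import base64
      import requests

      url = "<DEPLOYMENT URL>"  # copied from the Deployment screen
      # The service expects HTTP Basic auth with "tenantid/userid:password", Base64-encoded
      token = base64.b64encode(b"<tenantid/userid:password>").decode()

      resp = requests.post(
          url,
          json={"text": "The movie was surprisingly good!"},  # payload shape is an assumption
          headers={"Authorization": "Basic " + token},
      )
      print(resp.status_code, resp.text)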




In this section, we managed to successfully deploy the trained sentiment model. Let's continue with the prediction request in the next section.

4. Inference Pipeline


In this section, we will create an inference pipeline that reads a few input texts from the S3 repository and runs predictions against the model deployed in the previous section.

The step-by-step approach below will help you complete this section.


  • S3 Prediction Data Repository



    • Bucket Name: sapdi

    • Prediction Data: path -> predict/ - all the reviews that we want to predict are kept here as .txt files

    • The prediction dataset can be downloaded from predict.zip

    • Unzip the file and set up the S3 data store as shown in the screenshot above; a sketch for uploading the files from the command line follows below
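If you prefer to upload the unzipped review files to S3 programmatically rather than through the AWS console, a minimal sketch with boto3 could look like the one below. The local folder name "predict" is an assumption (wherever you extracted predict.zip), and AWS credentials are assumed to be configured in the usual way.

      # Upload the unzipped review files to the S3 bucket used in this tutorial.
      import os
      import boto3

      s3 = boto3.client("s3")  # credentials resolved from the usual AWS config/env vars
      bucket = "sapdi"         # bucket name used in this tutorial

      for name in os.listdir("predict"):  # local folder extracted from predict.zip (assumed)
          if name.endswith(".txt"):
              s3.upload_file(os.path.join("predict", name), bucket, "predict/" + name)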




  • Create Inference Pipeline



    • Start creating a new pipeline by clicking the “+” icon

    • In the “Create Pipeline” pop-up, enter “sentiment-inference” as the pipeline name, leave the rest at their defaults, and click the “Create” button

    • You will be redirected to the modeler window for pipeline design

    • Before modeling the pipeline, we need to make the required Python libraries available. As we are going to use "requests", "pandas", "keras", and "tensorflow" as dependencies at inference time, we need to provide them through a custom Docker image and tags so the pipeline can run successfully


    • From the modeler window, in the navigation pane, choose the "Repository" tab


    • Right-click the "dockerfiles" section and choose "Create Folder"


    • Name the folder "inference" and click "Create"


    • Right-click the newly created folder (inference) and choose "Create Docker File"

    • Copy the script below and paste it into the Dockerfile editor
      # Inherit from the SAP-provided openSUSE base image with Python 3.6
      FROM $com.sap.opensuse.python36:2.7.9

      # Install the inference dependencies with pinned versions
      RUN python3.6 -m pip --no-cache-dir install 'requests==2.18.4' 'keras==2.2.4' 'tensorflow==1.11.0' 'pandas==0.25.1'

    • Click the Docker configuration icon at the right-hand corner of the window and add tags as shown below

    • Finally, your folder structure, editor content, and configuration tags should look as shown below


    • In the editor toolbar, click "Save" to save your changes.


    • In the editor toolbar, click "Build" to build a Docker image for the Dockerfile. Wait until the "Build Status" turns green as shown below

    • Now switch back to "Graph" tab

    • Enable "JSON" mode by clicking the JSON button located at the right hand side corner of the page

    • Delete the existing content, copy the content of inference-graph.json, paste it into the editor, and click the save icon in the graph toolbar

    • Switch back to diagram mode and make sure your pipeline looks like the one below

    • Take some time to explore the configuration of each operator and try to understand the parameter values. Here is some brief information about each operator:

      • Read File: Operator that connects to the S3 data source and reads all the files in the given path. Take a look at the bucket, path, and pattern set in the operator configuration

      • Python 3 Operator: Operator that receives the file content from the Read File operator and makes the prediction request against the service endpoint we got from the previous section

      • Wiretap: Operator to view the prediction logs



    • In the "Diagram" mode, Click the Python 3 Operator's and then click "script" icon which will open up the script editor

    • In the editor window, replace "<DEPLOYMENT URL>" with the actual URL from the previous section and replace "<tenantid/userid:password>" with the Base64-encoded value in the format shown. The final line should look something like the one below; a fuller sketch of the whole operator script follows after this list


      response = requests.post("https://vystsem.ingress-hana.ondemand.com/app/pipeline-modeler/openapi/service/sdf3sf-asdf3asdfasd-asdf/", data=req, headers={"Content-Type": "application/json", "Authorization": "Basic asdfasdfasdfasdfasdfasdfasdfsaebsdfgvzxcv="})



    • Once modified, save the graph, go back to the “Sentiment Scenario” in the “Scenario Manager”, and click the “Create Version” button to create a new version

    • Once the version has been created, select the “sentiment-inference" pipeline and click the “Execute” icon to run the pipeline

    • From the modeler window, once the pipeline is running, click open the “Wiretap Operator UI” in the graph to view the runtime prediction logs of the pipeline. It will look something like the output below

    • Finally, we have made a real-time prediction request against our trained sentiment model
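For reference, here is a minimal sketch of what the Python 3 Operator script might look like. It is not the exact script shipped in inference-graph.json: the port names "input" and "output", the message-body handling, and the placeholders are assumptions, and the `api` object is the one the Python 3 Operator runtime injects into the script.

      # Sketch of a Python 3 Operator script for the inference pipeline
      # (assumed port names "input"/"output"; the shipped script may differ).
      import base64
      import requests

      URL = "<DEPLOYMENT URL>"  # Deployment URL from the previous section
      # Base64-encode "tenantid/userid:password" for HTTP Basic auth, e.g.:
      # base64.b64encode(b"default/myuser:mypassword").decode()
      TOKEN = base64.b64encode(b"<tenantid/userid:password>").decode()

      def on_input(msg):
          # The Read File operator delivers the file content in the message body
          req = msg.body.decode("utf-8") if isinstance(msg.body, (bytes, bytearray)) else str(msg.body)
          response = requests.post(
              URL,
              data=req,
              headers={"Content-Type": "application/json",
                       "Authorization": "Basic " + TOKEN},
          )
          # Forward the prediction result to the Wiretap operator
          api.send("output", response.text)

      api.set_port_callback("input", on_input)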




In this section, we managed to successfully predict the sentiment of the input texts we read from the S3 repository.

 

Conclusion: With the Part 1 and Part 2 blog posts, we managed to build a complete end-to-end machine learning scenario. Along the way, we experienced the interfaces and core feature capabilities of SAP Data Intelligence. Hope you enjoyed it!