# ML Visual Inspection for cookies in SAP DMC

## Step 1: Machine Learning Model

Set up a machine learning model for the classification of images of cookies. There are three classes, effectively turning this into a multiclass image classification problem:

* ANOMALY - There is a deviation in the quality of the cookie, e.g. the sugar on top was not placed correctly, the cookie is broken, or there are other problems.
* REVERSED - The cookie has been placed upside down.
* OK - There are no inconsistencies in the quality of the cookie.

NOTE: SAP DMC allows for the following types of ML models

![](https://hackmd.io/_uploads/SJhebIoLn.png)

It is also possible to treat this use case as a binary classification problem by merging the two nonconformance classes; both approaches work similarly in DMC. Additionally, the SAP DMC AI/ML scenario only accepts TensorFlow.js (TensorFlow JavaScript) ML models. The following sections discuss how to build ML models that can be integrated with SAP DMC, though the possibilities are not limited to these options.

<!-- TODO: also consider multilabel (REVERSED & ANOMALY possible at the same time) and object detection (detect the piece of sugar on the cookie and e.g. check whether it is a circle), as well as anomaly detection -->

### Option 1: Binary/Multiclass Image Classification with Teachable Machine

Train the ML model using [Google Teachable Machine](https://teachablemachine.withgoogle.com/train) by starting an image project with a standard image model and specifying the three classes mentioned above.

![](https://hackmd.io/_uploads/Bkwsh-oLn.png)

The model can then be downloaded as TensorFlow.js, which gives us two important files: __weights.bin__ and __model.json__.

<img src="https://hackmd.io/_uploads/SkY1I-jUh.png" width="70%">
<br>
<br>

Note that (for now) Teachable Machine is limited to binary/multiclass image classification.

### Option 2: Binary/Multiclass Image Classification with Python and Tensorflow

In Python, using TensorFlow (Keras), we can build any model we need and save it as a TensorFlow.js model. The following code snippet shows how to achieve this; a fuller sketch of the elided steps is given at the end of this step. This is also relevant for platforms/tools that enable building ML models with Python, such as Databricks.

```python
import tensorflowjs as tfjs
import tensorflow as tf

# Set the image dimensions and number of classes
img_width, img_height = 224, 224
num_classes = 3

# Set the batch size and validation split
batch_size = 32
validation_split = 0.2

# Specify the classes you want to include
included_classes = ['OK', 'REVERSED', 'ANOMALY']

# Normalize pixel values to [-1, 1]
# (a plain rescale of 1/127.5 would give [0, 2], so shift by -1 as well)
train_data_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=lambda x: x / 127.5 - 1.0,
    validation_split=validation_split)

...

# Compile the model
model.compile(...)

# Train the model
model.fit(...)

# Save the model as a Tensorflow.js model
tfjs.converters.save_keras_model(model, 'tfjs-model')
```

If the __weights.bin__ file is too big, it will automatically be split into multiple files (__group1-shard1of3.bin__ and so on).

### Option 3: Binary/Multiclass Image Classification with Azure AI

### Other options

Other options are also possible, as long as they result in a TensorFlow.js ML model. For example, you could [convert a PyTorch model to a TensorFlow model](https://medium.com/mlearning-ai/switching-between-tensorflow-and-pytorch-with-onnx-86f0b1b4cff9) via ONNX, among other approaches.
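To make Option 2 more concrete, the sketch below fills in the elided model-definition and training steps with a simple transfer-learning setup. It is only a minimal, illustrative sketch: the base network (MobileNetV2), the `data/` directory layout (one subfolder per class), and the hyperparameters are assumptions, not part of the original walkthrough.

```python
import tensorflow as tf
import tensorflowjs as tfjs

img_width, img_height = 224, 224   # SAP-recommended input size
num_classes = 3                    # OK, REVERSED, ANOMALY
batch_size = 32
validation_split = 0.2

# Scale pixel values to [-1, 1], matching the SAP recommendation
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=lambda x: x / 127.5 - 1.0,
    validation_split=validation_split)

# Assumes a folder "data/" with one subfolder per class (ANOMALY, OK, REVERSED)
train_gen = datagen.flow_from_directory(
    'data', target_size=(img_height, img_width),
    batch_size=batch_size, class_mode='categorical', subset='training')
val_gen = datagen.flow_from_directory(
    'data', target_size=(img_height, img_width),
    batch_size=batch_size, class_mode='categorical', subset='validation')

# Transfer learning: frozen MobileNetV2 backbone with a small classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(img_height, img_width, 3), include_top=False, weights='imagenet')
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])

# Compile and train the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_gen, validation_data=val_gen, epochs=10)

# Export model.json plus weight shards for upload to the AI/ML scenario
tfjs.converters.save_keras_model(model, 'tfjs-model')
```

Keep in mind that `flow_from_directory` assigns class indices in alphabetical order of the folder names, so the class titles configured in the AI/ML scenario (Step 3) should follow the same order as the model's outputs.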
<!-- TODO: test this, because even with a 98% accurate model it does not seem to work -->
<!---#### Option 3: Multilabel Image Classification with Python and Tensorflow ...-->

## Step 2: Configure nonconformances

<img src="https://hackmd.io/_uploads/Sk7AX-sLn.png" width="40%">

Go to "manage nonconformance codes" and add the NC codes **REVERSED** and **ANOMALY** as follows:

![](https://hackmd.io/_uploads/rkX07ZjI3.png)
![](https://hackmd.io/_uploads/BkQA7-sIn.png)

Next, go to "manage nonconformance groups" and create an NC group that will contain the previously created NC codes. In this case, the group is called **DEFECT**.

![](https://hackmd.io/_uploads/rkTGGfoLh.png)

## Step 3: AI/ML Scenario

Documentation can be found [here](https://help.sap.com/docs/sap-digital-manufacturing/ai-ml-scenarios-management/manage-ai-ml-scenarios). Go to "manage AI/ML scenarios" to set up the machine learning scenario that will allow us to use the ML model and the related NC codes (the NC codes should match the names of the classes within the ML model).

<img src="https://hackmd.io/_uploads/HJ7Ambs8n.png" width="40%">

Create a new scenario by selecting the "Predictive Quality: Visual Inspection" scenario.

<img src="https://hackmd.io/_uploads/rkXrmMsUn.png" width="50%">

Enter a name and description.

![](https://hackmd.io/_uploads/S1cgiGj82.png)

Define the Scenario Available Combination by clicking "define".

<img src="https://hackmd.io/_uploads/SyqeifiUh.png" width="30%">

After clicking on "step 2", we configure the scenario by adding the ML model files and specifying the inspection type and mode. You can only upload one JSON file (the model) and multiple BIN files (the weights), with a maximum size of 10 MB per file. Image width, height and scaling depend on the model, although SAP recommends a height and width of 224 pixels and scaling in [-1, 1].

![](https://hackmd.io/_uploads/r1tsaGjLh.png)

Go to "step 3" and add the conformance and nonconformance classes by clicking "add". We also need to specify the class titles. NOTE: it does not seem possible to add a nonconformance group - possibly a bug.

![](https://hackmd.io/_uploads/Hk8RyXjL2.png)

In "step 4", the scenario can be tested (advisable). Afterwards, the scenario can be saved and/or activated.

![](https://hackmd.io/_uploads/rkPr-mjL3.png)
![](https://hackmd.io/_uploads/rkDr-Xi83.png)

## Step 4: Test the scenario in a POD

First, we need to create a POD and add the visual inspection plugin (if this hasn't happened yet). In the POD Designer (here we use *THBA_CUSTOM_WORKCENTER_POD*), add a new page for visual inspection.

![](https://hackmd.io/_uploads/ByCyQ4yD2.png)
![](https://hackmd.io/_uploads/ryR1Q4yP2.png)

In the visual inspection tab, first add a "Plugin Container" from the controls list by dragging it onto the new page (1). Then add the Visual Inspector plugin to this container, also by dragging it.

![](https://hackmd.io/_uploads/rJDxBEJwh.png)

Then click save (and preview).

![](https://hackmd.io/_uploads/SyELBVkwn.png)

To actually use and test the visual inspection page, we need to get images into the POD.

#### Option 1: Webcam

To test the visual inspection, pictures can be taken with the camera of your workstation (e.g. a laptop webcam).

![](https://hackmd.io/_uploads/BJV7o8gDh.png)

#### Option 2: Postman

To test with predetermined pictures, they should first be converted to Base64, e.g. using an [online converter](https://codebeautify.org/image-to-base64-converter) or a short script (see the snippet below). Next, using Postman we can send a POST request to the POD containing the converted image file.
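As an alternative to an online converter, a few lines of Python produce the same Base64 string; the file name here is only a placeholder.

```python
import base64

# Read an image file and encode it as a Base64 string for the "fileContent" field
with open("cookie.png", "rb") as image_file:  # placeholder file name
    file_content = base64.b64encode(image_file.read()).decode("utf-8")

print(file_content[:80], "...")  # preview of the encoded string
```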
1. Go to the relevant subaccount in the SAP BTP Cockpit, in this case *DLWR_DMC_Discovery_2023*, and go to "Instances and Subscriptions".
![](https://hackmd.io/_uploads/HyK2MQCIn.png)
2. Here, you should find previously created service instances. If not, one should be created, or your account might not have the necessary permissions.
![](https://hackmd.io/_uploads/rkYETgkD2.png)
3. Click on the instance and find the "Service Keys" tab. This is where you will find (or still need to create) a service key. If the service keys are empty (e.g. they only contain "{}"), your account does not have the right permissions either.
![](https://hackmd.io/_uploads/SkxslkWJwn.png)
4. The service key contains a *client id*, *client secret*, *authentication URL* and the URL of the *public API endpoint*, which we will need for authentication and for sending the POST request in Postman.
![](https://hackmd.io/_uploads/ryy1z-Jvn.png)
5. Download [the Postman Collection](https://github.com/SAP-samples/digital-manufacturing-extension-samples/blob/main/dm-ml-extensions/DMC_VisualInspection.postman_collection.json) and import it into Postman. It should contain POST and GET requests.
![](https://hackmd.io/_uploads/Byb8LW1vn.png)
6. First, we need to set up the authorization. Fill in the values as shown in the following screenshot (take the parameters from your service key). Note that '/oauth/token' needs to be appended to the authentication URL to form the access token URL.
![](https://hackmd.io/_uploads/SkcKt-JP3.png)
7. Scroll down and click "Get New Access Token"; this automatically fills in the current token with header prefix "Bearer". The token is valid for a couple of hours.
![](https://hackmd.io/_uploads/Hky8qZJwn.png)
8. Next, we need to fill in the body of our POST request.
![](https://hackmd.io/_uploads/HyTr2Zkw3.png)

    For our cookies use case, this is the correct body. The material is the same as the one selected for the AI/ML scenario. The operation parameter can be found in the operation activity; it is the value shown at "Work Center". The SFC can be found in (for example) the work list you want to open a POD for (see step 10).

    ```json
    {
      "context": {
        "plant": "1010",
        "sfc": "101028",
        "inspectionViewName": "default",
        "material": "DMC_COOKIE",
        "operation": "DMC_MIX",
        "source": "DME"
      },
      "fileContent": <add base64 image string>,
      "fileContentType": "image/png",
      "scenarioID": "<>",
      "scenarioVersion": 1
    }
    ```

9. If everything was set up successfully, you can now send the POST request, which sends an image to the relevant POD. The response status should be "200 OK".
10. Go to the POD (SAP DMC > POD Designer > THBA_CUSTOM_WORKCENTER_POD (or your POD) > preview) and open the relevant SFC.
![](https://hackmd.io/_uploads/Hk0jCWJPn.png)
11. Go to the Visual Inspector tab; you should now see the picture you sent and can start logging nonconformances. You may need to leave the current POD (go back to the work lists) to see new images.
![](https://hackmd.io/_uploads/rJntGM1P3.png)
![](https://hackmd.io/_uploads/Bk3KGzJvn.png)

#### Option 3: Python

If this is an important workflow that will be used many times, we can set up a Python script that automatically handles the authentication and sends images from a specified folder to the POD.
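Authentication works the same way as in the Postman steps above: a client-credentials grant against the authentication URL with '/oauth/token' appended. A minimal sketch of how the script could obtain the token is shown below; the URL and credentials are placeholders to be taken from your service key, and the `requests` library is assumed.

```python
import requests

# Placeholder values - take these from your service key
AUTH_URL = "https://<your-subdomain>.authentication.<region>.hana.ondemand.com"
CLIENT_ID = "<clientid>"
CLIENT_SECRET = "<clientsecret>"

# Client-credentials grant against the token endpoint ('/oauth/token' appended, as in Postman)
response = requests.post(
    f"{AUTH_URL}/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
)
response.raise_for_status()
access_token = response.json()["access_token"]
```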
In the same directory as the Python script, create a *config.json* file that contains the following fields:

```json
{
  "base_url": <api-public-endpoint>,
  "endpoint": "/aiml/v1/inspectionLog",
  "images_directory": <directory containing the images to send>,
  "context": {
    "plant": "1010",
    "sfc": "101028",
    "inspectionViewName": "default",
    "material": "DMC_COOKIE",
    "operation": "DMC_MIX",
    "source": "DME"
  },
  "authentication": {
    "clientid": <clientid>,
    "clientsecret": <clientsecret>,
    "url": <url>/oauth/token
  },
  "auth_token": "" // leave empty
}
```

The script will then automatically set up authentication (and re-authenticate if the token expires) and send images from the configured directory to the POD; a stripped-down sketch of this sending loop is given at the end of this section. Run the script:

```cli
python pod_simulation.py
```

In the current simulation flow, you press Enter to send the next picture (this should be done before handling the current image in the POD). You will see output like the following.

```
Sending Data/Test\2020-03-11_14-40-18.jpg to DMC...
{'context': {'plant': '1010', 'sfc': '101028', 'material': 'DMC_COOKIE', 'operation': '1000389-0-0010', 'resource': 'DMC_MIX', 'routing': '1000389', 'source': 'DME', 'inspectionViewName': 'default'}, 'fileId': '101028_20230608_140131.png'}
Press Enter for next image:
Sending Data/Test\2020-03-11_14-58-50.jpg to DMC...
{'context': {'plant': '1010', 'sfc': '101028', 'material': 'DMC_COOKIE', 'operation': '1000389-0-0010', 'resource': 'DMC_MIX', 'routing': '1000389', 'source': 'DME', 'inspectionViewName': 'default'}, 'fileId': '101028_20230608_140139.png'}
Press Enter for next image:
Sending Data/Test\2020-03-11_16-07-43.jpg to DMC...
{'context': {'plant': '1010', 'sfc': '101028', 'material': 'DMC_COOKIE', 'operation': '1000389-0-0010', 'resource': 'DMC_MIX', 'routing': '1000389', 'source': 'DME', 'inspectionViewName': 'default'}, 'fileId': '101028_20230608_140144.png'}
Press Enter for next image:
...
```

NOTE: The "image queue" only holds two images: the current one (on screen) and the next one. The next one is always the last image you sent (with the POST request). You can also send a GET request; this asks for the status of the __last__ image that was sent, not the current (on-screen) image.
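The actual *pod_simulation.py* is not reproduced in this guide, but a stripped-down sketch of what its sending loop could look like is shown below. It reads the *config.json* described above, obtains a token via the client-credentials grant (as sketched earlier), and POSTs each image to the inspectionLog endpoint. Token refresh on expiry, error handling, and the GET status request are omitted, and the scenarioID value is a placeholder.

```python
import base64
import json
import os

import requests

# Load the configuration described above
with open("config.json") as f:
    config = json.load(f)

# Obtain an OAuth token via the client-credentials grant
auth = config["authentication"]
token_response = requests.post(
    auth["url"],
    data={"grant_type": "client_credentials"},
    auth=(auth["clientid"], auth["clientsecret"]),
)
token_response.raise_for_status()
headers = {"Authorization": "Bearer " + token_response.json()["access_token"]}

# Send every image in the configured directory to the inspectionLog endpoint
for filename in sorted(os.listdir(config["images_directory"])):
    path = os.path.join(config["images_directory"], filename)
    with open(path, "rb") as image_file:
        file_content = base64.b64encode(image_file.read()).decode("utf-8")

    payload = {
        "context": config["context"],
        "fileContent": file_content,
        "fileContentType": "image/png",
        "scenarioID": "<your scenario id>",  # placeholder, as in the Postman body
        "scenarioVersion": 1,
    }

    print(f"Sending {path} to DMC...")
    response = requests.post(
        config["base_url"] + config["endpoint"], json=payload, headers=headers)
    print(response.json())

    input("Press Enter for next image: ")
```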