Creating a basic single ROI Segmentation and NODE RED logic
  • 30 Jul 2024

Link for the recipe

Step 1 :

To start a new segmenter model, click the "new recipe" button. You'll be prompted to name the model and choose "segmentation" as the recipe type. Click "ok" to create the recipe and start working on your new model. Naming the model is an important step, as it will help you identify and organize your models later on, and choosing the "segmentation" recipe type ensures that the model is trained specifically for image segmentation tasks.

Step 2 :

If you had an old recipe, you need to activate the new recipe by clicking the activate button; otherwise, the newly created recipe will be active automatically.

Step 3 : Once the recipe is active, you can open the editor to start making changes.

Step 4 :

You need to set your camera at the location you see fit and start configuring it.

Step 5 :

When setting up the camera, it's essential to take the time to configure all of the camera settings correctly. This includes focusing the camera on the region of interest, which is the specific area in the image that contains the object or feature you want to analyze. You can adjust the focus using the slider or enter the value manually.

Another critical camera setting to get right is the exposure, which controls how much light enters the camera. You can adjust the exposure using the slider or enter the value manually.

Optimizing lighting conditions is also crucial for obtaining accurate and reliable results. Make sure the lighting is appropriate for the type of analysis you want to perform; for example, when analyzing a reflective surface, you may need to adjust the lighting to avoid glare or reflections. The lighting can be selected under the LED light pattern setting. In addition to these camera settings, you can configure the in-house designed lights for the camera to obtain various patterns that help identify defects visible only under certain reflective conditions.

Getting the gamma just right is also important. Gamma is a measure of the contrast between the light and dark areas of an image. Adjusting the gamma correctly can help you see more detail in the image and make it easier to identify defects or features of interest.
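As a minimal sketch of what a gamma adjustment does to pixel intensities, the standard power-law transform is shown below; the camera's actual gamma control may use a different scale or curve, so treat the values here as illustrative only.

// Standard power-law (gamma) transform for an 8-bit pixel value (illustrative only)
const gamma = 2.2; // Example gamma value; the camera UI may expose a different range

function applyGamma(pixel) {
    return Math.round(255 * Math.pow(pixel / 255, 1 / gamma)); // Map a 0-255 input to a 0-255 output
}

// Example: applyGamma(64) returns about 136 with gamma 2.2, so darker tones are lifted
// and more detail becomes visible in the darker areas of the image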

Once all of these settings are configured, simply hit "save" to apply them and start using the camera for your analysis.

Step 6 :

Clicking on the name of the recipe would take you back to this screen where you can then navigate to the alignment block to set up the alignment of the camera.

Step 7 :

Once you are on the alignment page, you can capture the latest image and align it to your desired condition. This step isn't required for this example, so you can skip it. Once you have made any necessary adjustments, simply click "save" to apply the changes and move on to the next step.

Step 8: For this particular case, the inspection will be focused on the sheet. However, you can select a different inspection type for your specific use case and adjust the region accordingly.

Once you have selected the appropriate inspection type, you can adjust the region of interest to ensure that the camera is focused on the correct area. This can be done by dragging the corners of the region of interest box to adjust its size and position. It's crucial to ensure that the region of interest is correctly aligned with the object you want to analyze to obtain the most accurate results.

Once you have adjusted the region of interest, simply hit "save" to apply the changes and continue with the inspection process.

Step 9 :

Once you have set everything up, you can go to the segmentation block and use the brush tool to paint the pencil mark. Make sure to paint only the pencil marks and nothing else. You need to do this for a minimum of 10 images to train your model accurately.

Step 10 : Once you have at least 10 images, click on "train segmentation" and enter the appropriate number of epochs. Keep in mind that the more epochs you train for, the better the model's accuracy can be, but training will also take longer. So, it's important to balance the need for accuracy with the amount of time you have available for training the model. Once you've selected the appropriate settings, hit the "start training" button to begin the training process. You can monitor its progress and make any necessary adjustments as needed.

Step 11:

By clicking on the live preview, you can see the pencil marks being highlighted.

Step 12:
Congratulations on training your first segmentation model!

Set up pass/fail rules for a segmenter recipe using the node-red editor:

  1. In the recipe editor, click Configure IO.

  2. Build the following node-red flow by dragging in the nodes and connecting them. All three of these nodes can be found on the left sidebar.

  3. Double-click the function block, copy the desired example code from the section below into it, and click done.

  4. Hit deploy in the upper right corner of the node-red editor before leaving the page.

  5. Return to the HMI and test your logic.
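The function examples below all read the detection results from msg.payload.segmentation.blobs and write a boolean back to msg.payload, where true is intended as a pass and false as a fail. The exact payload format can vary between software versions, but based on the fields these snippets use, the relevant part of the incoming message is assumed to look roughly like this (hypothetical values, for illustration only):

// Assumed shape of the incoming message (illustrative only)
// Each detected region appears as one blob entry with at least a pixel_count field
msg.payload = {
    segmentation: {
        blobs: [
            { pixel_count: 342 },
            { pixel_count: 1208 }
        ]
    }
};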

ONLY PASS IF THERE AREN’T ANY PIXELS DETECTED AT ALL

Logic :

const allBlobs = msg.payload.segmentation.blobs; // Extract the blobs from the payload's segmentation data
const result = allBlobs.length < 1; // Check whether no blobs were detected (true or false)
msg.payload = result; // Set the payload to the result of the check
return msg; // Return the modified message object

ONLY PASS IF ALL DETECTED BLOBS ARE SMALLER THAN THE THRESHOLD PIXEL COUNT

Logic :

const threshold = 500; // Define the threshold value for pixel count
const allBlobs = msg.payload.segmentation.blobs; // Extract the blobs from the payload's segmentation data
const allUnderThreshold = allBlobs.every(blob => blob.pixel_count < threshold); // Check if all blobs have a pixel count less than the threshold
msg.payload = allUnderThreshold; // Set the payload to the result of the check
return msg; // Return the modified message object

PASS IF THE TOTAL NUMBER OF DETECTED PIXELS IS LESS THAN A DEFINED THRESHOLD

Logic :

const threshold = 5000; // Define the threshold value for the total pixel count
const allBlobs = msg.payload.segmentation.blobs; // Extract the blobs from the payload's segmentation data
const totalArea = allBlobs.reduce((sum, blob) => sum + blob.pixel_count, 0); // Calculate the total pixel count of all blobs
msg.payload = totalArea < threshold; // Set the payload to true if the total area is less than the threshold, otherwise false
return msg; // Return the modified message object
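If your use case needs more than one criterion, the checks above can be combined in a single function node. The following is a minimal sketch using hypothetical threshold values; it passes only when every individual blob is below a per-blob pixel limit and the combined pixel count of all blobs stays below an overall limit:

const maxBlobSize = 500; // Hypothetical per-blob pixel-count limit
const maxTotalArea = 5000; // Hypothetical limit on the combined pixel count of all blobs
const allBlobs = msg.payload.segmentation.blobs; // Extract the blobs from the payload's segmentation data
const allUnderLimit = allBlobs.every(blob => blob.pixel_count < maxBlobSize); // Every blob must be under the per-blob limit
const totalArea = allBlobs.reduce((sum, blob) => sum + blob.pixel_count, 0); // Sum the pixel counts of all blobs
msg.payload = allUnderLimit && totalArea < maxTotalArea; // Pass only if both conditions hold
return msg; // Return the modified message object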

