Script Control & Machine Vision

Hello,

I am currently writing my master's thesis on the remanufacturing of electronic waste. For the thesis, I am developing a collaborative disassembly workstation for disassembling different remote controls. To that end, I want to equip the UR5e with a camera. The camera should scan the remote control to be disassembled, and machine vision software should identify the type of remote control. If the remote control is known, the cobot places it at the human workplace (Program 1); if it is unknown, it gets discarded (Program 2).
After disassembly, the camera inspects the remote's PCB via anomaly detection. If the PCB is broken, it gets discarded (Program 3). If it is okay, the PCB is stored in the Type A, Type B, or Type C basket, depending on which remote control it came from.

My question is how I could implement this. Here are my ideas:

  1. Connect the camera/machine vision software to a computer. Link the computer to the UR5e via ROS(?) and run it with scripts:
    1.1 Can I program the robot with the teach pendant, so that the program on the computer just tells the UR to run Program 1, Program 2, etc.? What software would I need? Is ROS able to do this?
  2. Connect the camera/machine vision software to a computer. Link the computer to the robot via relays that send digital inputs:
    2.1 Since the number of digital inputs is limited, the number of possible programs is limited. Can I use a combination of different digital inputs to tell the UR5e how to proceed?
  3. Use an existing URCap that provides machine vision / anomaly and feature detection within the URCap itself:
    3.1 Do you know of a URCap that provides this? So far I have found Aruna Vision, but I do not know how good its anomaly and feature detection is. Furthermore, my supervisor prefers open-source software.
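To illustrate what I have in mind for idea 2.1: a set of digital inputs can be read as a binary number, so n inputs would distinguish 2^n programs (three inputs would already cover my three programs plus the basket choices). A rough Python sketch of how I imagine the PC side encoding this (the input count is just an example):

```python
def program_to_bits(program: int, n_inputs: int = 3) -> list[int]:
    """Encode a program number as levels for n digital inputs (LSB first).
    With 3 inputs, programs 0..7 can be distinguished."""
    if not 0 <= program < 2 ** n_inputs:
        raise ValueError(f"program must fit in {n_inputs} inputs")
    return [(program >> i) & 1 for i in range(n_inputs)]

def bits_to_program(bits: list[int]) -> int:
    """Decode the digital-input levels back into a program number (LSB first)."""
    return sum(bit << i for i, bit in enumerate(bits))

# Example: program 5 -> inputs DI0=1, DI1=0, DI2=1
print(program_to_bits(5))  # [1, 0, 1]
```

The PC would drive the relays according to these bit levels, and the robot program would read its digital inputs and reassemble the number the same way.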

How would you proceed, and which solution is the easiest to develop? I am an industrial engineer and can program in Python a bit, but I am not a computer science professional.

A more in-depth description of the workplace can be found here:

Thank you for your input and help,
Felix

Hi Felix,

I'm assuming you already have some experience getting the vision system to identify a remote and such, but can't figure out how to get it to cooperate with your cobot.

Firstly, I would recommend installing URSim to practice in. It simulates PolyScope and your robot, allowing you to check that new code works as expected without having to load it onto your own cobot. Save your cobot's current installation and load it into URSim to have a replica available for debugging code.

  1. If you are looking to use a gripper on the cobot to pick and place an item, you could write a URScript program that opens the gripper, moves to the position of the remote given by the camera, moves down until contact, closes the gripper, moves up, and then moves to the discard pile/basket. Have the machine vision software decide which program/script to run, and call it all within PolyScope.
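On your question 1.1: you don't necessarily need ROS for the "computer tells the robot which program to run" part. The robot's Dashboard Server listens on TCP port 29999 and accepts simple plain-text commands such as `load <program>` and `play`, so a small Python script on the vision PC can trigger a program you saved via the pendant. A rough sketch (the IP address and program name are placeholders for your setup):

```python
import socket

DASHBOARD_PORT = 29999  # UR Dashboard Server (plain-text command interface)

def dashboard_command(cmd: str) -> bytes:
    """Format a Dashboard Server command: one ASCII line, newline-terminated."""
    return cmd.strip().encode("ascii") + b"\n"

def run_program(robot_ip: str, program: str) -> None:
    """Load and start a .urp program saved on the robot, via the Dashboard Server."""
    with socket.create_connection((robot_ip, DASHBOARD_PORT), timeout=5) as s:
        s.recv(1024)  # discard the server's welcome banner
        for cmd in (f"load {program}", "play"):
            s.sendall(dashboard_command(cmd))
            print(s.recv(1024).decode().strip())  # robot's reply to each command

# Example call (hypothetical IP and program name):
# run_program("192.168.0.10", "program1.urp")
```

You can try the same commands by hand with a telnet/netcat session against URSim first, which makes debugging much easier.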

If the operations you want the cobot to do are moving a camera above a remote and then picking up and moving the remote/PCB, I think you will find things simpler if you keep the programming in URScript and have the machine vision software communicate a yes/no to the robot, along with the remote/PCB position and orientation relative to the camera, which then prompts the robot to run Program 1/2/3/…
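For that yes/no-plus-pose idea, one common pattern is to run a small TCP server on the vision PC and have the robot program connect to it with `socket_open()` and read the values with `socket_read_ascii_float()` in URScript, which parses a parenthesised, comma-separated ASCII message. A sketch of the PC side; the port number and the `(ok, x, y, angle)` message layout are just my assumptions for illustration:

```python
import socket

def vision_message(ok: bool, x: float, y: float, angle: float) -> bytes:
    """Pack the vision result as '(ok,x,y,angle)' -- a parenthesised ASCII
    format that URScript's socket_read_ascii_float() can parse."""
    return f"({int(ok)},{x:.4f},{y:.4f},{angle:.2f})".encode("ascii")

def serve_one_result(port: int, msg: bytes) -> None:
    """Minimal one-shot server: the robot connects, we send one result, done."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()  # robot side: socket_open("<pc-ip>", port)
        with conn:
            conn.sendall(msg)

# Example: remote recognised at (0.25 m, 0.10 m), rotated 90 degrees
# serve_one_result(50002, vision_message(True, 0.25, 0.10, 90.0))
```

In a real run you would keep the server loop alive and send a fresh message each cycle, but this shows the handshake.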

  2. You may be interested in using an Ethernet cable and Modbus TCP to send signals without being limited by the digital/analogue input and output slots on the machine.
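If you go the Modbus TCP route, the frames themselves are simple enough to build by hand. Below is a sketch that constructs a "Write Single Register" request (Modbus function code 6); UR controllers run a Modbus TCP server on port 502, and I believe the general-purpose 16-bit registers start at address 128, but do check your controller's manual before relying on that:

```python
import struct

def modbus_write_register(transaction_id: int, address: int, value: int) -> bytes:
    """Build a Modbus TCP 'Write Single Register' (function code 6) frame:
    MBAP header (transaction id, protocol id 0, length, unit id) + PDU."""
    pdu = struct.pack(">BHH", 6, address, value)  # function code, address, value
    # length field counts the unit-id byte plus the PDU; unit id 0 here
    # (the UR server does not appear to care about the unit id -- assumption)
    mbap = struct.pack(">HHHB", transaction_id, 0, 1 + len(pdu), 0)
    return mbap + pdu

# Example: write value 3 (e.g. "run Program 3") into register 128
frame = modbus_write_register(1, 128, 3)
print(frame.hex())  # 000100000006000600800003
```

On the robot side you can then expose that register as a Modbus client signal under Installation > Fieldbus in PolyScope and branch on its value in your program. For production code a library like pymodbus would be more robust than hand-built frames.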

  3. Personally, I am also looking to implement some vision systems. If you find a good one, please do share!

I may have the wrong end of the stick about what type of help you're looking for, but what you are describing sounds feasible. Good luck!