Hello everyone,
In my project, I’m using a UR5 to pick up the tools that a camera classifies as good using AI. I’m running out of ideas on how to combine the UR5 and the AI, and I need advice on where to start.
My question is: if the camera classifies a tool as good, how can I send a signal to the robot so it knows to move the tool to the correct output position? Can this be done simply by connecting the camera as an input to the robot’s controller and getting an output signal that tells the robot to move?
Most of the projects like this that I have seen are research-related. As far as I’m aware, the most common approach in these projects is ROS: you connect a separate Linux computer to the UR, which hosts the AI and ROS, and you install the ROS driver URCap on the UR so it can be controlled through a ROS node.
If you have never used ROS, I would recommend going through a small crash course first, so you learn how to use nodes, services and additional packages such as MoveIt!, which is very commonly used for moving the robot from point A to point B. Knowledge of C++ and Python is essentially a prerequisite.
Here is a master’s thesis that uses AI for human-robot interaction and moves two UR5 arms behaving as a single physical robot.
Here is an online ROS crash course that uses simulation to go from a basic hello world to a path-planning exercise.
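To give a feel for what such a node can look like, here is a minimal Python sketch using rospy and the MoveIt Python API. Everything here is an assumption for illustration: ROS 1 with the ur_robot_driver already running, a MoveIt configuration whose planning group is called "manipulator", two named targets ("good_bin"/"bad_bin") defined beforehand, and a /tool_classification topic standing in for whatever your AI actually publishes.

```python
import sys
import rospy
import moveit_commander
from std_msgs.msg import String

def on_classification(msg):
    # Pick a pre-taught target depending on the AI verdict ("good" or "bad").
    target = "good_bin" if msg.data == "good" else "bad_bin"
    group.set_named_target(target)  # named poses come from the MoveIt config
    group.go(wait=True)             # plan and execute the motion
    group.stop()                    # make sure there is no residual movement

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("tool_sorter")
group = moveit_commander.MoveGroupCommander("manipulator")
rospy.Subscriber("/tool_classification", String, on_classification)
rospy.spin()
```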
@cgs
Thank you very much for your answer,
Do you think there are other ways to do this besides ROS? I’m also not sure whether it is easy to find enough learning material for ROS.
I think there are alternatives to ROS, since combining AI and robotics is common nowadays. However, I don’t work in the AI field professionally, so I can’t provide more information.
As for learning ROS: it is open source, and a lot of research-related AI projects are also published under public licenses, so it is definitely easier to find tutorials, documentation and similar projects; there is a bigger community, and its code can be reused.
If you’re just looking to have your code talk to the robot, you probably don’t need to go the ROS route. The robot can talk to your code through a bunch of protocols, including TCP sockets (most languages have socket servers), XML-RPC (super easy in Python; there’s a small sketch after the outline below), plus most industrial protocols (I’m assuming you’re probably not dealing with these).
Does this sound like what you need?
1. Robot polls a camera/AI server asking for the okay to do something.
2. Camera takes a picture and classifies the object.
3. On success, the AI server allows the robot to continue doing whatever you need it to.
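If you go the XML-RPC route, the camera/AI side can be a tiny Python server that the robot polls. This is only a sketch: the port, the function name and the stubbed capture/classify functions are all placeholders for your own setup.

```python
from xmlrpc.server import SimpleXMLRPCServer

def grab_frame():
    # Placeholder: replace with your actual camera capture.
    return None

def classify(image):
    # Placeholder: replace with your AI model's inference; True = good tool.
    return True

def is_tool_good():
    # Called by the robot: take a picture, classify it, return the verdict.
    return bool(classify(grab_frame()))

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(is_tool_good, "is_tool_good")
server.serve_forever()
```

On the robot side, URScript can reach a server like this with `rpc = rpc_factory("xmlrpc", "http://<pc-ip>:8000")` and then call `rpc.is_tool_good()` in your program; the IP and function name are again just placeholders.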
If you’re running this stuff on a device with digital outputs, you could use a digital output of your device to trigger a relay that connects the robot’s 24 V supply to one of its digital inputs. In that case the robot just sits there waiting for a high signal on that input.
As terryc also mentions, most vision systems have digital outputs. The easiest option is definitely to connect two relays between the camera system and the robot, controlled by the camera system. Wire them to two digital inputs on the robot, which you can then monitor in your robot program to tell good tools from bad ones.
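Just to illustrate the relay variant: if the classifier happens to run on something like a Raspberry Pi (an assumption; any device with digital outputs works the same way), driving the two relays could look roughly like this. The pin numbers, hold time and trigger are placeholders.

```python
import time
import RPi.GPIO as GPIO

GOOD_PIN, BAD_PIN = 17, 27  # BCM pins wired to the "good"/"bad" relays

GPIO.setmode(GPIO.BCM)
GPIO.setup([GOOD_PIN, BAD_PIN], GPIO.OUT, initial=GPIO.LOW)

def signal_result(is_good):
    # Pulse the matching relay; it feeds the robot's 24 V into a digital input.
    pin = GOOD_PIN if is_good else BAD_PIN
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(0.5)  # hold long enough for the robot program to catch it
    GPIO.output(pin, GPIO.LOW)

signal_result(True)  # e.g. after your classifier returned "good"
GPIO.cleanup()
```

The robot program then only needs a Wait on the matching digital input before branching to the good or bad output position.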
Approach 1:
Create a ROS node to control the AI system. Does your AI system output a point cloud that you need to analyse? If so, process it, extract the position, and command the robot to that position.
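As a rough sketch of that extraction step (plain NumPy; the cloud, the camera-to-base transform and what you do with the resulting position are all placeholders):

```python
import numpy as np

def extract_pick_position(points, T_base_cam):
    """Return the cloud centroid expressed in the robot base frame.

    points:     Nx3 array of XYZ points in the camera frame
    T_base_cam: 4x4 homogeneous transform from camera to robot base
    """
    centroid_cam = np.append(points.mean(axis=0), 1.0)  # homogeneous coords
    return (T_base_cam @ centroid_cam)[:3]

# Dummy data: a small synthetic cloud and an identity transform.
cloud = np.random.rand(100, 3)
print(extract_pick_position(cloud, np.eye(4)))
```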
Approach 2:
Use the UR teach pendant and URScript programming with the DI/DO interface to interact with the AI system, and use UR move sequences.