-
Hello @naa1824, that would depend on the action space of your RL agent.
This project uses the second approach in order to have a robot-agnostic action space that does not depend on the number of joints. For some tasks, this also simplifies the problem, because it reduces the state space that the agent needs to explore before finding rewarding states. On the other hand, it also limits the potential capabilities of the agent, since it does not have full control over its kinematics.
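To make the contrast concrete, here is a minimal sketch (illustrative only, not drl_grasping's actual API) of the two action-space choices. The function names and the `[dx, dy, dz, d_yaw, gripper]` layout are assumptions for illustration:

```python
import numpy as np

def joint_space_shape(num_joints: int) -> tuple:
    # First approach: one normalised command per joint.
    # The dimension changes with every robot (e.g. 6 for a UR5, 7 for a Panda).
    return (num_joints,)

def end_effector_space_shape() -> tuple:
    # Second approach (the one this project uses): a fixed-size Cartesian
    # action, e.g. [dx, dy, dz, d_yaw, gripper]. A lower-level controller maps
    # it to joint motion, so the same policy interface works on any arm.
    return (5,)

# The end-effector action space is identical across robots,
# while the joint-space one is robot-specific:
assert end_effector_space_shape() == (5,)
assert joint_space_shape(6) != joint_space_shape(7)
```

The trade-off mentioned above follows directly: the fixed 5-dimensional space is smaller and transfers across robots, but the agent can no longer exploit redundant joints or choose specific arm configurations.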
-
What I understand is that, in manipulation applications, MoveIt can perform path planning and motion planning to generate the robot's motion.
Another approach is to use machine learning, specifically reinforcement learning, in place of MoveIt, so that the robot generates its motion from a trained policy.
How does this repo merge the two?
https://github.com/AndrejOrsula/drl_grasping
Can you explain the role of each one in this project?
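One common way these two pieces fit together can be sketched as follows. This is an illustrative outline only, with hypothetical function names rather than the repo's actual API: the trained RL policy decides *where* the end effector should move next, while MoveIt (inverse kinematics plus motion planning) decides *how* the joints get it there:

```python
import numpy as np

def trained_policy(observation: np.ndarray) -> np.ndarray:
    """Placeholder for the learned policy: maps an observation to a
    normalised Cartesian action [dx, dy, dz, d_yaw, gripper] in [-1, 1]."""
    rng = np.random.default_rng(seed=0)
    return np.clip(rng.standard_normal(5), -1.0, 1.0)

def execute_with_moveit(delta_pose: np.ndarray) -> bool:
    """Placeholder for MoveIt: plan a collision-free joint trajectory that
    realises the requested end-effector displacement, then execute it."""
    return bool(np.all(np.abs(delta_pose) <= 1.0))

# One control step of the combined loop:
observation = np.zeros(10, dtype=np.float32)
action = trained_policy(observation)       # RL: choose the motion goal
success = execute_with_moveit(action[:4])  # MoveIt: realise it in joint space
```

Under this split, the RL side owns the high-level decision making and the MoveIt side owns the robot-specific kinematics, which is what makes the policy robot-agnostic.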