Open-X Embodiment
Open-X Embodiment is a collection of robot learning datasets pooled from many institutions.
Links
- https://robotics-transformer-x.github.io/
- https://arxiv.org/pdf/2310.08864
- Dataset overview: https://docs.google.com/spreadsheets/d/1rPBD77tk60AEIGZrGSODwyyzs5FgCU9Uz3h-3_t2A9g/edit?gid=0#gid=0
You can also look at OpenVLA for how they specifically downloaded the Open-X dataset.
RT-1-X is trained on the Open-X dataset.
“The robot action is a 7-dimensional vector consisting of x, y, z, roll, pitch, yaw, and gripper opening or the rates of these quantities. For data sets where some of these dimensions are not exercised by the robot, during training, we set the value of the corresponding dimensions to zero.”
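The zero-padding convention described above can be sketched in a few lines. This is a hypothetical helper, not code from the paper: `pad_action_to_7d` and its `dims_present` argument are assumed names for illustration; the paper only describes the convention of filling unused dimensions with zeros.

```python
import numpy as np

def pad_action_to_7d(action, dims_present):
    """Zero-pad a dataset's native action into the shared 7-D space:
    [x, y, z, roll, pitch, yaw, gripper].

    `dims_present` maps dimension names to indices in `action`.
    Dimensions the robot does not exercise stay at zero, matching the
    convention quoted from the paper. (Hypothetical helper.)
    """
    names = ["x", "y", "z", "roll", "pitch", "yaw", "gripper"]
    out = np.zeros(7, dtype=np.float32)
    for i, name in enumerate(names):
        if name in dims_present:
            out[i] = action[dims_present[name]]
    return out

# Example: a dataset whose robot only commands x-translation and the gripper.
a = pad_action_to_7d(np.array([0.1, 1.0]), {"x": 0, "gripper": 1})
```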
Really interesting thing (also quoted from the paper):
- “Similarly, for the action space, we do not align the coordinate frames across datasets in which the end-effector is controlled, and allow action values to represent either absolute or relative positions or velocities, as per the original control scheme chosen for each robot. Thus, the same action vector may induce very different motions for different robots.”
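To make the consequence of that design choice concrete, here is a small illustrative sketch. The `ActionSpec` class and the example mode/frame strings are assumptions for illustration, not structures from the Open-X codebase; the point is only that the same 7-D vector carries no shared semantics across datasets.

```python
from dataclasses import dataclass

@dataclass
class ActionSpec:
    """Per-dataset interpretation of the shared 7-D action vector (assumed)."""
    mode: str   # e.g. "delta_pose", "absolute_pose", "velocity"
    frame: str  # robot-specific coordinate frame; not aligned across datasets

# One and the same 7-D action vector...
action = [0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]

# ...read under two hypothetical dataset conventions:
spec_a = ActionSpec(mode="delta_pose", frame="end_effector")
spec_b = ActionSpec(mode="absolute_pose", frame="base")

# Under spec_a this commands a 5 cm relative end-effector move;
# under spec_b it commands an absolute pose near the base origin.
# Same vector, very different motions - exactly the situation the paper notes.
```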