
Learning from Symmetry: Meta-Reinforcement Learning with Symmetrical Behaviors and Language Instructions
Xiangtong Yao, Zhenshan Bing, Genghang Zhuang, Kejia Chen, Hongkuan Zhou, Kai Huang, and Alois Knoll

Video of simulation and real-world experiments

Real-world experiments

scene.jpg

Real-world experiment scenario. We use Intel RealSense D435i cameras to locate the red object.

Push-right and Push-left

Push-right tasks

Task goal position: [0.1, 0.7, 0]

Task goal position: [0.05, 0.65, 0]

Push-left tasks

Task goal position: [-0.1, 0.7, 0]

Task goal position: [-0.05, 0.65, 0]
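The push-left goals above are exactly the push-right goals reflected across the x = 0 plane of the workspace. A minimal sketch of that mirroring, assuming goals are [x, y, z] positions in the robot base frame (the helper name `mirror_goal` is ours, for illustration only):

```python
import numpy as np

MIRROR = np.array([-1.0, 1.0, 1.0])  # negate x, keep y and z

def mirror_goal(goal):
    """Reflect a task goal position across the x = 0 plane."""
    return np.asarray(goal, dtype=float) * MIRROR

# e.g. the push-right goal [0.1, 0.7, 0] maps to the push-left goal [-0.1, 0.7, 0]
left_goal = mirror_goal([0.1, 0.7, 0])
```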

We adapt the Panda robot to the Meta-world environment. For more information, please visit


Supplementary materials

AIRL.jpg

The recovered reward functions of other task families

airl_door-close.gif

Door-close

airl_drawer-open.gif

Drawer-open

airl_faucet_open.gif

Faucet-open

airl_window-close.gif

Window-close

The visualisation of trained AIRL policies, each of which is trained on symmetrical trajectories generated by the Symmetric Data Generator.
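As a rough illustration of what generating a symmetrical trajectory could look like, the sketch below reflects the x-components of states and actions across the x = 0 plane. The state/action layout (state begins with an [x, y, z] position; action begins with a [dx, dy, dz] displacement) and the name `mirror_trajectory` are assumptions for illustration, not the paper's exact data format:

```python
import numpy as np

def mirror_trajectory(states, actions):
    """Produce the mirrored counterpart of a demonstration.

    Assumes column 0 of both states and actions is the x-component;
    negating it reflects the whole behavior across the x = 0 plane
    (e.g. a door-open demonstration becomes a door-close-style motion
    only in geometry, not semantics).
    """
    states = np.array(states, dtype=float)
    actions = np.array(actions, dtype=float)
    states[:, 0] *= -1.0   # mirror x-position of each state
    actions[:, 0] *= -1.0  # mirror x-component of each action
    return states, actions
```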

meta-training-task.jpg
symmetry task.jpg

Language instructions of meta-training tasks and symmetry tasks

door-open.gif

Door-open

drawer-close.gif

Drawer-close

window-open.gif

Window-open

faucet-close.gif

Faucet-close

door-close.gif

Door-close

drawer-open.gif

Drawer-open

window-close.gif

Window-close

faucet-open.gif

Faucet-open

The visualisation of meta-training tasks and meta-test tasks. The settings of the above tasks are the same as those of the Meta-world benchmark.
