
TienKung-Lab: Direct IsaacLab Workflow for TienKung



The TienKung humanoid robot won the championship in the first Humanoid Robot Half Marathon.

Overview

Demo clips (Walk and Run gaits): reference Motion, AMP Animation, Sensors, RL + AMP policy, and Sim2Sim rollout.

This framework is an RL-based locomotion control system designed for the full-sized humanoid robot TienKung. It integrates AMP-style rewards with periodic gait rewards, facilitating natural, stable, and efficient walking and running behaviors.
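
As a rough illustration of how such a reward structure can be assembled, the sketch below combines a least-squares AMP-style discriminator reward with a simple contact-schedule gait reward. The function names, tensor shapes, and weights are illustrative assumptions, not the project's actual implementation.

import torch

def amp_style_reward(discriminator, amp_obs):
    # Least-squares AMP-style reward: r = max(0, 1 - 0.25 * (D(s, s') - 1)^2).
    with torch.no_grad():
        d = discriminator(amp_obs)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0).squeeze(-1)

def periodic_gait_reward(foot_contact, gait_phase, stance_ratio=0.6):
    # Reward feet that touch the ground during their stance window and are
    # airborne otherwise. foot_contact and gait_phase: (num_envs, num_feet).
    expected_stance = (gait_phase % 1.0) < stance_ratio
    return (foot_contact == expected_stance).float().mean(dim=-1)

def total_reward(task_rew, style_rew, gait_rew, w_task=1.0, w_style=1.0, w_gait=0.5):
    # Illustrative weighting only; the real coefficients live in the task configs.
    return w_task * task_rew + w_style * style_rew + w_gait * gait_rew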

The codebase is built on IsaacLab, supports Sim2Sim transfer to MuJoCo, and features a modular architecture for seamless customization and extension. Additionally, it incorporates ray-casting-based sensors for enhanced perception, enabling precise environmental interaction and obstacle avoidance.
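
For a sense of what a ray-casting sensor can look like in an IsaacLab-based environment, here is a minimal height-scanner configuration sketch using IsaacLab's RayCasterCfg. The prim paths, offset, and scan pattern are placeholders and may differ from TienKung-Lab's actual sensor setup.

from isaaclab.sensors import RayCasterCfg, patterns

# Hypothetical height scanner attached to the robot base; prim paths, offset,
# and grid size are placeholders rather than the project's real configuration.
height_scanner = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    attach_yaw_only=True,
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    debug_vis=False,
    mesh_prim_paths=["/World/ground"],
)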

TODO List

  • Add motion dataset for TienKung
  • Motion retargeting support
  • Add more sensors
  • Add Perceptive Control

Installation

TienKung-Lab is built with IsaacSim 4.5.0 and IsaacLab 2.1.0.

  • Install Isaac Lab by following the installation guide. We recommend using the conda installation as it simplifies calling Python scripts from the terminal.

  • Clone this repository separately from the Isaac Lab installation (i.e. outside the IsaacLab directory)

  • Using a Python interpreter that has Isaac Lab installed, install the library:

cd TienKung-Lab
pip install -e .
  • Install the rsl-rl library
cd TienKung-Lab/rsl_rl
pip install -e .
  • Verify that the extension is correctly installed by running the following command:
python legged_lab/scripts/train.py --task=walk --logger=tensorboard --headless --num_envs=64

Usage

Visualize motion

Visualize the motion by updating the simulation with data from tienkung/datasets/motion_visualization.

python legged_lab/scripts/play_amp_animation.py --task=walk --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run --num_envs=1

Visualize motion with sensors

Visualize the motion with sensors by updating the simulation with data from tienkung/datasets/motion_visualization.

python legged_lab/scripts/play_amp_animation.py --task=walk_with_sensor --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run_with_sensor --num_envs=1

Train

Train the policy using AMP expert data from tienkung/datasets/motion_amp_expert.

python legged_lab/scripts/train.py --task=walk --headless --logger=tensorboard --num_envs=4096
python legged_lab/scripts/train.py --task=run --headless --logger=tensorboard --num_envs=4096

Play

Run the trained policy.

python legged_lab/scripts/play.py --task=walk --num_envs=1
python legged_lab/scripts/play.py --task=run --num_envs=1

Sim2Sim (MuJoCo)

Evaluate the trained policy in MuJoCo to perform cross-simulation validation.

Exported_policy/ contains pretrained policies provided by the project. When the play script is run, the trained policy is automatically exported and saved to a path like logs/run/[timestamp]/exported/policy.pt.

python legged_lab/scripts/sim2sim.py --task walk --policy Exported_policy/walk.pt --duration 10
python legged_lab/scripts/sim2sim.py --task run --policy Exported_policy/run.pt --duration 10
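
Assuming the exported policy.pt is a TorchScript module (as is typical for rsl_rl-based exports), it can be loaded outside the training stack roughly as follows. The observation size and layout below are placeholders and must match the task's observation config in practice.

import torch

# Load a policy exported by the play script (path is an example).
policy = torch.jit.load("Exported_policy/walk.pt")
policy.eval()

# Placeholder observation; the real dimension and ordering come from the task config.
obs = torch.zeros(1, 48)
with torch.no_grad():
    actions = policy(obs)
print(actions.shape)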

TensorBoard

tensorboard --logdir=logs/walk
tensorboard --logdir=logs/run

Code formatting

We have a pre-commit template to automatically format your code. To install pre-commit:

pip install pre-commit

Then you can run pre-commit with:

pre-commit run --all-files

Troubleshooting

Pylance Missing Indexing of Extensions

In some VS Code versions, Pylance fails to index some of the extensions. In this case, add the path to the extension in .vscode/settings.json under the key "python.analysis.extraPaths".

{
    "python.analysis.extraPaths": [
        "${workspaceFolder}/legged_lab",
        "<path-to-IsaacLab>/source/isaaclab_tasks",
        "<path-to-IsaacLab>/source/isaaclab_mimic",
        "<path-to-IsaacLab>/source/extensions",
        "<path-to-IsaacLab>/source/isaaclab_assets",
        "<path-to-IsaacLab>/source/isaaclab_rl",
        "<path-to-IsaacLab>/source/isaaclab",
    ]
}

Acknowledgement

  • Legged Lab: a direct IsaacLab workflow for legged robots.
  • Humanoid-Gym: a reinforcement learning (RL) framework based on NVIDIA Isaac Gym, with Sim2Sim support.
  • RSL RL: a fast and simple implementation of RL algorithms.
  • AMP_for_hardware: codebase for learning skills from short reference motions using Adversarial Motion Priors.
  • Omni-Perception: a perception library for legged robots, which provides a set of sensors and perception algorithms.
  • Warp: a Python framework for writing high-performance simulation and graphics code.

Discussions

If you're interested in TienKung-Lab, you are welcome to join our WeChat group for discussions.
