Jinzhou Li

I am a first-year PhD student in Robotics at Duke, advised by Prof. Xianyi Cheng.

Prior to this, I worked with Prof. Hao Dong at Peking University and spent time as a research intern at AgiBot. I was also fortunate to work with Prof. Maha Haji at Cornell and Prof. Daniel Hastings at MIT. I obtained my master's degree from Cornell University and my bachelor's degree from the University of Vermont.

Scholar  /  X (Twitter)  /  Github  /  CV  /  LinkedIn  /  WeChat


North Building, Room 265

News

  • [2025/08] Started my PhD journey at Duke University!
  • [2025/08] One paper accepted to CoRL 2025 and selected as an Oral, see you in Seoul!
  • [2025/06] Two papers accepted to IROS 2025 as oral presentations, see you in Hangzhou 🎉


Research Interests

My research interests span multi-sensory learning, representation learning, reinforcement learning, and vision-language-action models, with the goal of enabling robots to act with human-like dexterity.

Publications

* Equal Contribution

TwinAligner: Visual and Physical Real2Sim2Real All-in-one for Robotic Manipulation

Hongwei Fan*, Hang Dai*, Jiyao Zhang*, Jinzhou Li, Qiyang Yan, Yujie Zhao, Yuxuan Lai, Hao Tang, Hao Dong
Paper Coming Soon

A novel Real2Sim2Real system addressing both visual and physics gaps.



ClutterDexGrasp: A Sim-to-Real System for General Dexterous Target Grasping in Cluttered Scenes

Zeyuan Chen*, Qiyang Yan*, Yuanpei Chen*, Tianhao Wu, Jiyao Zhang, Zihan Ding, Jinzhou Li, Yaodong Yang, Hao Dong
The Conference on Robot Learning (CoRL 2025) (Oral, ~5%)
[paper] [website] [code]

We propose the first closed-loop sim-to-real system for general dexterous grasping in cluttered scenes.


AdapTac-Dex: Adaptive Visuo-Tactile Fusion with Predictive Force Attention for Dexterous Manipulation

Jinzhou Li*, Tianhao Wu*, Jiyao Zhang**, Zeyuan Chen**, Haotian Jin, Mingdong Wu, Yujun Shen, Yaodong Yang, Hao Dong
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025)
[paper] [website] [code]

A future-force-guided attention fusion module that adaptively adjusts the weights of visual and tactile features.


SimLauncher: Launching Sample-Efficient Real-world Robotic Reinforcement Learning via Simulation Pre-training

Mingdong Wu*, Lehong Wu*, Yizhuo Wu*, Weiyao Huang, Hongwei Fan, Zheyuan Hu, Haoran Geng, Jinzhou Li, Jiahe Ying, Long Yang, Yuanpei Chen, Hao Dong
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025)
[paper] [video]

We combine the strengths of real-world RL and real-to-sim-to-real approaches to accelerate policy learning.


Canonical Representation and Force-Based Pretraining of 3D Tactile for Dexterous Visuo-Tactile Policy Learning

Tianhao Wu, Jinzhou Li*, Jiyao Zhang*, Mingdong Wu, Hao Dong
IEEE International Conference on Robotics and Automation (ICRA 2025)
[paper] [website] [code]

A novel 3D tactile data representation and force-based pretraining to enhance dexterous manipulation learning.



Invited Talks

  • [2025/08] 3D Vision Workshop: Visuo-Tactile Fusion with Future-Force-Guided Manipulation Policy

  • [2025/04] Peking University: AdapTac: Adaptive Visuo-Tactile Fusion with Predictive Force Attention for Dexterous Manipulation

Professional Activities

  • Conference Reviewer: ICRA (2024, 2025), RSS (2025)


Last updated: August 2025.

Design and source code from Jon Barron's website.