Spatial-Temporal Aware Visuomotor Diffusion Policy Learning

¹Fudan University, ²Shanghai Innovation Institute, ³Nanyang Technological University, ⁴NeuHelium Co., Ltd

Spatial-Temporal Aware Visuomotor Diffusion Policy Learning (4D Diffusion Policy)

  • Spatiotemporal-centric visual imitation learning: Unlike existing methods that overlook 3D perception and dynamic interactions, DP4 explicitly models 3D spatial structures and 4D temporal dynamics, improving generalization in dynamic environments.
  • 4D Diffusion Policy with structured awareness: We introduce a diffusion-based visual imitation learning framework that generates trajectories based on learned spatial-temporal world representations.
  • Dynamic Gaussian World Model for structured supervision: The world model learns from interactive environments, embedding 3D spatial and 4D spatiotemporal reasoning into policy learning.
  • State-of-the-art performance across diverse tasks: DP4 achieves the highest success rates on 17 simulated tasks and 3 real-world robotic tasks, setting a new benchmark for imitation learning in dynamic environments.
Teaser Image

Spatial-temporal awareness in the 4D Diffusion Policy (DP4). Previous methods train perception and decision-making with trajectory supervision, but trajectory cloning fails to capture 3D spatial structures and 4D spatiotemporal relationships. In contrast, DP4 constructs the current 3D scene with 3D spatial supervision from a single RGB-D view and predicts future 3D scene candidates with 4D spatiotemporal supervision, optimizing trajectory generation by explicitly capturing both 3D structures and 4D dependencies.
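Read as objectives, the caption amounts to two rendering-consistency terms alongside the standard diffusion loss. A hedged formulation (the L1 distance and the loss weights λ are our assumptions, not values taken from the paper) is

\mathcal{L}_{DP4} = \mathcal{L}_{diff} + \lambda_{3D} \, \| \mathcal{R}(G_t) - O_t \|_1 + \lambda_{4D} \, \| \mathcal{R}(\hat{G}_{t+k}) - O_{t+k} \|_1 ,

where O_t is the current RGB-D observation, G_t is the 3DGS regressed from it, \hat{G}_{t+k} is the future 3DGS predicted from the generated trajectory, and \mathcal{R}(·) is the Gaussian-splatting RGB-D renderer.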

Abstract

Visual imitation learning enables robots to learn versatile tasks. However, many existing methods rely on behavior cloning supervised by historical trajectories, which limits their 3D spatial and 4D spatiotemporal awareness. Consequently, these methods struggle to capture the 3D structures and 4D spatiotemporal relationships necessary for real-world deployment. In this work, we propose the 4D Diffusion Policy (DP4), a novel visual imitation learning method that incorporates spatiotemporal awareness into diffusion-based policies. Unlike traditional approaches that rely on trajectory cloning, DP4 leverages a dynamic Gaussian world model to guide the learning of 3D spatial and 4D spatiotemporal perception from interactive environments. Our method constructs the current 3D scene from a single-view RGB-D observation and predicts the future 3D scene, optimizing trajectory generation by explicitly modeling both spatial and temporal dependencies. Extensive experiments across 17 simulation tasks with 173 variants and 3 real-world robotic tasks demonstrate that DP4 outperforms baseline methods, improving the average simulation task success rate by 16.4% (Adroit), 14% (DexArt), and 6.45% (RLBench), and the average real-world task success rate by 8.6%.

The framework of our 4D Diffusion Policy (DP4)

Introduction Image

From a single-view RGB-D observation, we construct 3D point clouds and extract global and local features to provide both holistic and focused perception. These multi-level representations condition the diffusion policy model to generate trajectories based on the current robot state. We introduce a Gaussian world model in DP4 to capture 3D structures and 4D spatiotemporal relationships. The current observation's 3D Gaussian Splatting (3DGS) representation is derived from the point clouds and multi-level features via a generalizable Gaussian regressor. By enforcing consistency between ground-truth and rendered RGB-D images from this 3DGS, we enhance 3D spatial awareness. Additionally, the future 3DGS is predicted from the current state using policy-generated trajectories, with rendered RGB-D consistency fostering 4D spatiotemporal awareness. This improved spatiotemporal representation significantly benefits complex tasks such as object grasping and dexterous manipulation.
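The pipeline above can be condensed into a short training-step sketch. The following is a minimal PyTorch-style illustration, not the authors' implementation: every module name (encoder, policy, regressor, dynamics, renderer), the interfaces they expose, and the loss weights are hypothetical placeholders introduced for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DP4TrainStep(nn.Module):
    """One hedged training step: diffusion loss plus 3D spatial and 4D spatiotemporal
    rendering-consistency losses (all module interfaces here are assumptions)."""

    def __init__(self, encoder, policy, regressor, dynamics, renderer, w_3d=1.0, w_4d=1.0):
        super().__init__()
        self.encoder = encoder      # point cloud -> (global feature, local feature)
        self.policy = policy        # conditional diffusion noise predictor
        self.regressor = regressor  # generalizable Gaussian regressor -> current 3DGS
        self.dynamics = dynamics    # (current 3DGS, trajectory) -> future 3DGS
        self.renderer = renderer    # (3DGS, camera) -> rendered RGB-D image
        self.w_3d, self.w_4d = w_3d, w_4d

    def forward(self, points, robot_state, expert_traj, rgbd_now, rgbd_future, camera):
        # 1) Multi-level perception from the single-view point cloud.
        global_feat, local_feat = self.encoder(points)
        cond = torch.cat([global_feat, local_feat, robot_state], dim=-1)

        # 2) Diffusion objective: predict the noise added to the expert trajectory.
        noise = torch.randn_like(expert_traj)
        t = torch.randint(0, self.policy.num_steps, (expert_traj.shape[0],), device=points.device)
        noisy_traj = self.policy.add_noise(expert_traj, noise, t)
        loss_diff = F.mse_loss(self.policy(noisy_traj, t, cond), noise)

        # 3) 3D spatial supervision: render the regressed current 3DGS and match the input RGB-D.
        gaussians_now = self.regressor(points, global_feat, local_feat)
        loss_3d = F.l1_loss(self.renderer(gaussians_now, camera), rgbd_now)

        # 4) 4D spatiotemporal supervision: roll the Gaussians forward along the trajectory
        #    and match the future RGB-D observation.
        gaussians_future = self.dynamics(gaussians_now, expert_traj)
        loss_4d = F.l1_loss(self.renderer(gaussians_future, camera), rgbd_future)

        return loss_diff + self.w_3d * loss_3d + self.w_4d * loss_4d

Under this reading, the Gaussian world model presumably serves as training-time supervision only, with the encoder and diffusion policy sufficing to generate trajectories at inference.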

Results

Result Image 1
Result Image 2
  • The red mark indicates a pose that significantly deviates from the expert demonstration, while the green mark denotes a pose that aligns with the expert trajectory. The 4D Diffusion Policy (DP4) integrates 3D spatial and 4D spatiotemporal awareness with diffusion policies, successfully completing the tasks.
  • Visualization of DP4 performance on three real-world robotic tasks. DP4 performs strongly in real-world settings and effectively handles a variety of common tasks from a single camera view.