Visual imitation learning is an effective way for robots to learn versatile tasks. However, many existing methods rely on behavior cloning supervised only by recorded trajectories, which provides little 3D spatial or 4D spatiotemporal awareness; consequently, they struggle to capture the 3D structures and 4D spatiotemporal relationships necessary for real-world deployment. In this work, we propose 4D Diffusion Policy (DP4), a novel visual imitation learning method that incorporates spatiotemporal awareness into diffusion-based policies. Unlike traditional approaches that rely on trajectory cloning alone, DP4 leverages a dynamic Gaussian world model to guide the learning of 3D spatial and 4D spatiotemporal representations from interactive environments. Our method constructs the current 3D scene from a single-view RGB-D observation and predicts the future 3D scene, optimizing trajectory generation by explicitly modeling both spatial and temporal dependencies. Extensive experiments across 17 simulation tasks with 173 variants and 3 real-world robotic tasks demonstrate that DP4 outperforms baseline methods, improving the average simulation task success rate by 16.4\% (Adroit), 14\% (DexArt), and 6.45\% (RLBench), and the average real-world robotic task success rate by 8.6\%.
From a single-view RGB-D observation, we construct a 3D point cloud and extract global and local features to enrich both holistic and focused perception. These multi-level representations condition the diffusion policy, which generates trajectories from the current robot state. To capture 3D structures and 4D spatiotemporal relationships, DP4 introduces a Gaussian world model: a generalizable Gaussian regressor derives the current observation's 3D Gaussian Splatting (3DGS) from the point cloud and multi-level features. Enforcing consistency between ground-truth RGB-D images and those rendered from this 3DGS enhances 3D spatial awareness. Additionally, the future 3DGS is predicted from the current one using policy-generated trajectories, and the same rendered-RGB-D consistency on future observations fosters 4D spatiotemporal awareness. This improved spatiotemporal representation significantly benefits complex tasks such as object grasping and dexterous manipulation.
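To make the composition of the three training signals concrete, the following is a minimal sketch, assuming PyTorch, of how such an objective could be assembled: a diffusion denoising loss on expert actions, a 3D consistency loss on RGB-D renders of the current scene, and a 4D consistency loss on renders of the predicted future scene. All names here (PointEncoder, NoisePredictor, GaussianRegressor, dp4_training_step), the simplified noise schedule, the MLP decoders standing in for Gaussian regression and rasterization, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a DP4-style objective; every module is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Maps an (N, 6) XYZ+RGB point cloud to a global feature (holistic
    perception) and per-point local features (focused perception)."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts):                        # pts: (B, N, 6)
        local = self.mlp(pts)                      # (B, N, D)
        return local.max(dim=1).values, local      # global: (B, D)

class NoisePredictor(nn.Module):
    """Stand-in for the conditional diffusion policy head: predicts the noise
    added to an action, given the denoising step t and the conditioning."""
    def __init__(self, act_dim, cond_dim, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim + cond_dim + 1, dim), nn.ReLU(), nn.Linear(dim, act_dim))

    def forward(self, noisy_action, t, cond):      # t: (B, 1) in [0, 1)
        return self.net(torch.cat([noisy_action, cond, t], dim=-1))

class GaussianRegressor(nn.Module):
    """Collapses 'regress 3D Gaussians, then rasterize' into one MLP that
    decodes conditioning features to a 4-channel RGB-D image, so the
    example stays self-contained and runnable."""
    def __init__(self, cond_dim, hw=32):
        super().__init__()
        self.hw = hw
        self.dec = nn.Sequential(nn.Linear(cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 4 * hw * hw))

    def forward(self, cond):                       # -> (B, 4, H, W)
        return self.dec(cond).view(-1, 4, self.hw, self.hw)

def dp4_training_step(enc, policy, world, future_world, batch, lam3=0.1, lam4=0.1):
    glob, _local = enc(batch["points"])            # local features omitted for brevity
    cond = torch.cat([glob, batch["state"]], dim=-1)

    # (1) Diffusion denoising loss on the expert action (simplified schedule).
    noise = torch.randn_like(batch["action"])
    t = torch.rand(batch["action"].shape[0], 1)
    noisy = torch.sqrt(1.0 - t) * batch["action"] + torch.sqrt(t) * noise
    l_diffusion = F.mse_loss(policy(noisy, t, cond), noise)

    # (2) 3D spatial term: render the current 3DGS, match the observed RGB-D.
    l_3d = F.mse_loss(world(cond), batch["rgbd"])

    # (3) 4D spatiotemporal term: predict the future 3DGS from the current
    #     features plus the policy's action, match the next-step RGB-D.
    fut_cond = torch.cat([cond, batch["action"]], dim=-1)
    l_4d = F.mse_loss(future_world(fut_cond), batch["rgbd_next"])

    return l_diffusion + lam3 * l_3d + lam4 * l_4d

# Usage with illustrative shapes: 7-DoF action, 9-D state, 32x32 renders.
enc = PointEncoder()
policy = NoisePredictor(act_dim=7, cond_dim=128 + 9)
world = GaussianRegressor(cond_dim=128 + 9)
future_world = GaussianRegressor(cond_dim=128 + 9 + 7)
batch = {
    "points": torch.randn(4, 1024, 6),
    "state": torch.randn(4, 9),
    "action": torch.randn(4, 7),
    "rgbd": torch.randn(4, 4, 32, 32),
    "rgbd_next": torch.randn(4, 4, 32, 32),
}
dp4_training_step(enc, policy, world, future_world, batch).backward()
```

The structural point of the sketch is that both render-consistency terms backpropagate through the same conditioning features that drive the policy, so the 3D and 4D supervision shapes the representation the trajectory generator consumes, rather than training a separate perception module.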