Overview of the Adaptive Diffusion Constrained Sampling (ADCS) training architecture. The model consists of three components: a constraint feature encoder, an energy network (MLP), and the Compositional Weighting Transformer (CWT). During training, noisy poses in \(\mathrm{SE}(3)^n\) are used to learn constraint-aware energy functions. At inference, sampling is performed in joint space using Langevin dynamics guided by the learned energy landscape.
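To make the three-component layout concrete, the following PyTorch sketch wires a shared constraint feature encoder into per-constraint energy MLPs whose scalar outputs are combined by transformer-predicted weights. All module names, dimensions, and the use of a generic `nn.TransformerEncoder` as a stand-in for the CWT are our own illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class EnergyMLP(nn.Module):
    """Per-constraint energy head: maps an encoded constraint feature to a scalar energy."""

    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)  # (batch,)


class ADCSArchitectureSketch(nn.Module):
    """Illustrative composition of the three components; all sizes are assumptions."""

    def __init__(self, in_dim: int, feat_dim: int = 128, num_constraints: int = 4):
        super().__init__()
        # Constraint feature encoder (shared across constraint tokens).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.SiLU(), nn.Linear(feat_dim, feat_dim)
        )
        # One energy MLP per constraint type.
        self.heads = nn.ModuleList(EnergyMLP(feat_dim) for _ in range(num_constraints))
        # Transformer attending over constraint tokens; emits per-constraint weights
        # (a stand-in for the Compositional Weighting Transformer).
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.cwt = nn.TransformerEncoder(layer, num_layers=2)
        self.weight_head = nn.Linear(feat_dim, 1)

    def forward(self, constraint_feats: torch.Tensor) -> torch.Tensor:
        # constraint_feats: (batch, K, in_dim), one token per active constraint.
        z = self.encoder(constraint_feats)  # (batch, K, feat_dim)
        energies = torch.stack([h(z[:, k]) for k, h in enumerate(self.heads)], dim=-1)
        weights = torch.softmax(self.weight_head(self.cwt(z)).squeeze(-1), dim=-1)
        return (weights * energies).sum(dim=-1)  # composed energy, shape (batch,)
```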
Coordinated multi-arm manipulation requires satisfying multiple simultaneous geometric constraints across high-dimensional configuration spaces, which poses a significant challenge for traditional planning and control methods. In this work, we propose Adaptive Diffusion Constrained Sampling (ADCS), a generative framework that flexibly integrates both equality constraints (e.g., relative and absolute pose constraints) and structured inequality constraints (e.g., proximity to object surfaces) into an energy-based diffusion model. Equality constraints are modeled using dedicated energy networks trained on pose differences in Lie algebra space, while inequality constraints are represented via Signed Distance Functions (SDFs) and encoded into learned constraint embeddings, allowing the model to reason about complex spatial regions. A key innovation of our method is a Transformer-based architecture that learns to weight constraint-specific energy functions at inference time, enabling flexible and context-aware constraint integration. Moreover, we adopt a two-phase sampling strategy that combines Langevin dynamics with resampling and density-aware re-weighting to improve both precision and sample diversity. Experimental results on dual-arm manipulation tasks show that ADCS significantly improves sample diversity and generalization in settings that demand precise coordination and adaptive constraint handling.
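The two-phase strategy can be sketched as follows: phase one runs Langevin dynamics on the learned energy, and phase two re-weights each sample by its Boltzmann weight relative to an estimated sample density before resampling, which down-weights over-represented modes. The step size, the Gaussian KDE density estimate, and all function names below are assumptions chosen for illustration, not the paper's exact procedure.

```python
import torch


def langevin_phase(energy_fn, q0: torch.Tensor, steps: int = 200,
                   step_size: float = 1e-2) -> torch.Tensor:
    """Phase 1: unadjusted Langevin dynamics on the learned energy,
    q <- q - (eta / 2) * grad E(q) + sqrt(eta) * N(0, I)."""
    q = q0.clone()
    for _ in range(steps):
        q = q.detach().requires_grad_(True)
        grad, = torch.autograd.grad(energy_fn(q).sum(), q)
        q = q - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(q)
    return q.detach()


def density_aware_resample(energy_fn, samples: torch.Tensor,
                           bandwidth: float = 0.1) -> torch.Tensor:
    """Phase 2: re-weight by Boltzmann weight divided by an estimated sample
    density (a Gaussian KDE here, an assumption), then resample with replacement.
    Penalizing dense regions counteracts mode collapse and preserves diversity."""
    with torch.no_grad():
        log_target = -energy_fn(samples)                  # unnormalized log p(q)
        sq_dists = torch.cdist(samples, samples).pow(2)   # (N, N) pairwise distances
        log_density = torch.logsumexp(-sq_dists / (2 * bandwidth ** 2), dim=-1)
        log_w = log_target - log_density                  # density-aware importance weight
        idx = torch.multinomial(torch.softmax(log_w, dim=0),
                                samples.shape[0], replacement=True)
    return samples[idx]


# Usage with a toy quadratic energy standing in for the learned model:
energy_fn = lambda q: 0.5 * (q ** 2).sum(dim=-1)
samples = langevin_phase(energy_fn, torch.randn(256, 14))  # e.g., two 7-DoF arms
samples = density_aware_resample(energy_fn, samples)
```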
We design eight tasks that integrate ADCS with motion planning. In each, two Franka robots collaboratively grasp a pen and perform a different stippling operation under a distinct set of specified constraints. In addition, we test on the TIAGo robot, which carries objects with both hands.