DORSal: Diffusion for Object-centric Representations of Scenes et al.

Allan Jabri‡*, Sjoerd van Steenkiste*, Emiel Hoogeboom, Mehdi S. M. Sajjadi, Thomas Kipf

Google Research

‡Work done while interning at Google.
*Equal contribution.
Correspondence to: svansteenkiste@google.com, tkipf@google.com

Figure: Model overview. (a) OSRT is trained to predict novel views with an Encoder-Decoder architecture that represents the scene as a latent set of Object Slots. Because the model is trained with an L2 loss and the task is highly ambiguous, its predictions are commonly blurry. (b) In DORSal, we combine the Object Slots from OSRT with the target Poses to form the conditioning. Our Multiview U-Net is trained with a diffusion objective to denoise novel views while cross-attending into these conditioning features. This yields sharp renders at test time that can still be decomposed into the objects in the scene to support edits.
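To make the conditioning step in (b) concrete, the sketch below shows single-head cross-attention from flattened U-Net feature tokens into the concatenation of Object Slots and a target-pose embedding. This is a minimal illustration under assumed names and shapes (`cross_attend`, the token counts), not the authors' implementation; a full model would additionally use learned query/key/value projections, multiple heads, and attention at several U-Net resolutions.

```python
# Minimal sketch (assumed names/shapes, not the authors' code): U-Net feature
# tokens cross-attend into conditioning built from Object Slots + pose embedding.
import jax
import jax.numpy as jnp

def cross_attend(features, slots, pose_emb):
    """Single-head cross-attention without learned projections.

    features: [N, D] flattened U-Net activations (queries)
    slots:    [K, D] object-slot representations from the scene encoder
    pose_emb: [P, D] embedding of the target camera pose
    """
    cond = jnp.concatenate([slots, pose_emb], axis=0)       # [K+P, D]
    d = features.shape[-1]
    attn = jax.nn.softmax(features @ cond.T / jnp.sqrt(d))  # [N, K+P]
    return features + attn @ cond                           # residual update

# Toy usage: 64 feature tokens, 8 slots, 1 pose token, width 128.
key = jax.random.PRNGKey(0)
f = jax.random.normal(key, (64, 128))
s = jax.random.normal(jax.random.fold_in(key, 1), (8, 128))
p = jax.random.normal(jax.random.fold_in(key, 2), (1, 128))
print(cross_attend(f, s, p).shape)  # (64, 128)
```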

Abstract


Recent progress in 3D scene understanding enables scalable learning of representations across large datasets of diverse scenes. As a consequence, generalization to unseen scenes and objects, rendering novel views from just a single or a handful of input images, and controllable scene generation that supports editing are now possible. However, training jointly on a large number of scenes typically compromises rendering quality when compared to single-scene optimized models such as NeRFs. In this paper, we leverage recent progress in diffusion models to equip 3D scene representation learning models with the ability to render high-fidelity novel views, while retaining benefits such as object-level scene editing to a large degree. In particular, we propose DORSal, which adapts a video diffusion architecture for 3D scene generation conditioned on object-centric slot-based representations of scenes. On both complex synthetic multi-object scenes and the real-world large-scale Street View dataset, we show that DORSal enables scalable neural rendering of 3D scenes with object-level editing and improves upon existing approaches.
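As a rough illustration of what "trained in a diffusion process to denoise novel views" looks like, the following is a standard epsilon-prediction denoising loss with a cosine noise schedule. All names here (`diffusion_loss`, `toy_unet`) and the schedule itself are assumptions for illustration; the paper's exact training recipe may differ.

```python
# Sketch of a standard DDPM-style epsilon-prediction loss (an assumed,
# generic recipe, not the paper's exact one).
import jax
import jax.numpy as jnp

def diffusion_loss(params, unet, x0, cond, key):
    """Corrupt clean target views x0 and train the U-Net to predict the noise.

    x0:   [V, H, W, C] clean target views of one scene
    cond: conditioning features (Object Slots + target Poses)
    """
    k_t, k_eps = jax.random.split(key)
    t = jax.random.uniform(k_t, ())          # random timestep in [0, 1]
    alpha = jnp.cos(0.5 * jnp.pi * t)        # cosine signal schedule
    sigma = jnp.sin(0.5 * jnp.pi * t)        # alpha**2 + sigma**2 == 1
    eps = jax.random.normal(k_eps, x0.shape)
    x_t = alpha * x0 + sigma * eps           # noised views
    eps_hat = unet(params, x_t, t, cond)     # model predicts the noise
    return jnp.mean((eps_hat - eps) ** 2)    # L2 on the noise estimate

# Toy stand-in for the Multiview U-Net (a real one conditions via
# cross-attention, as in the model-overview figure).
def toy_unet(params, x_t, t, cond):
    return params["scale"] * x_t

loss = diffusion_loss({"scale": jnp.float32(0.1)}, toy_unet,
                      jnp.zeros((2, 8, 8, 3)), None, jax.random.PRNGKey(0))
print(loss)
```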

Qualitative Results


Figure: Novel View Synthesis. Comparison of DORSal with the following baselines: 3DiM, SRT, and OSRT on the MultiShapeNet (only 2/5 views shown) and Street View datasets.

Videos


Figure: Novel-view synthesis along a camera path of DORSal on MultiShapeNet.

Figure: Novel-view synthesis along a camera path of DORSal on Street View.

Reference


@article{jabri2023dorsal,
  author  = {Jabri, Allan and van Steenkiste, Sjoerd and Hoogeboom, Emiel and Sajjadi, Mehdi S. M. and Kipf, Thomas},
  title   = {{DORSal: Diffusion for Object-centric Representations of Scenes et al.}},
  journal = {arXiv},
  year    = {2023}
}