Object Scene Representation Transformer

Mehdi S. M. Sajjadi‡,  Daniel Duckworth*,  Aravindh Mahendran*,  Sjoerd van Steenkiste*,
Filip Pavetić,  Mario Lučić,  Leonidas J. Guibas,  Klaus Greff,  Thomas Kipf*

Google Research
NeurIPS 2022

‡correspondence to: osrt@msajjadi.com
*equal technical contribution

Figure: OSRT is a 3D scene representation learning method that decomposes scenes into individual objects without supervision.

A compositional understanding of the world in terms of objects and their geometry in 3D space is considered a cornerstone of human cognition. Facilitating the learning of such a representation in neural networks holds promise for substantially improving labeled data efficiency. As a key step in this direction, we make progress on the problem of learning 3D-consistent decompositions of complex scenes into individual objects in an unsupervised fashion. We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis. OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods. At the same time, it is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder. We believe this work will not only accelerate future architecture exploration and scaling efforts, but it will also serve as a useful tool for both object-centric as well as neural scene representation learning communities.

Qualitative Results


OSRT is able to learn object-decomposed 3D scene representations without supervision on complex multi-object scenes with a large variety of objects and textured backgrounds. Below, we show example qualitative results on 3D scenes not seen during model training.

Figure: Qualitative scene decomposition and novel view synthesis results on unseen 3D scenes.

Code


An independent implementation of the improved SRT model and OSRT is available at github.com/stelzner/osrt.

Dataset


We use the new MSN-Hard dataset, which closely follows SRT's MultiShapeNet dataset but additionally contains the instance labels required for evaluating OSRT. The path to our version is the following: builder = sunds.builder('kubric:kubric/multi_shapenet_conditional'). Please see the SRT website (Dataset section) for detailed instructions on how to load the dataset.
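As a minimal sketch, loading the dataset with the path above might look as follows. Only the `sunds.builder(...)` call is taken from this page; the `as_dataset` call and the `'train'` split name are assumptions based on the TensorFlow Datasets convention that sunds follows, so please consult the SRT website for the exact options.

```python
import sunds

# Build the MSN-Hard variant of the MultiShapeNet dataset (path from above).
builder = sunds.builder('kubric:kubric/multi_shapenet_conditional')

# Assumption: sunds builders expose a tfds-style `as_dataset`; the split name
# 'train' is a guess -- check the SRT website for the actual split names.
ds = builder.as_dataset(split='train')

# Inspect one example to see the available fields (e.g. cameras, RGB frames,
# and the instance labels used to evaluate OSRT's decompositions).
for example in ds.take(1):
    print(example.keys())
```

Note that this downloads data on first use, so it should be run in an environment with access to the dataset storage.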

Reference


@article{sajjadi2022osrt,
  author  = {Sajjadi, Mehdi S. M. and Duckworth, Daniel and Mahendran, Aravindh and
             van Steenkiste, Sjoerd and Paveti{\'c}, Filip and Lu{\v{c}}i{\'c}, Mario and
             Guibas, Leonidas J. and Greff, Klaus and Kipf, Thomas},
  title   = {{Object Scene Representation Transformer}},
  journal = {NeurIPS},
  year    = {2022}
}