In this talk, Professor Andreas Geiger will present several recent results from his group on learning neural implicit 3D representations, departing from the traditional paradigm of representing 3D shapes explicitly with voxels, point clouds, or meshes. Implicit representations have a small memory footprint and can model arbitrary 3D topologies at (theoretically) arbitrary resolution in continuous function space. The speaker will discuss the capabilities and limitations of these approaches in the context of reconstructing 3D geometry, texture, and motion. Professor Geiger will further demonstrate a technique for learning implicit 3D models from 2D supervision alone, via implicit differentiation of the level set constraint. Finally, he will show how implicit models scale to large reconstructions, and will introduce GRAF and GIRAFFE, generative models for neural radiance fields that produce 3D-consistent, photo-realistic renderings from unstructured and unposed image collections.
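The core object behind these works is an implicit shape representation: a continuous function mapping a 3D coordinate to a value such as occupancy or signed distance, with the surface defined as a level set. The following minimal sketch (not the speaker's method) uses an analytic sphere signed distance function as a stand-in for a learned network; in occupancy- or SDF-style neural models, this function would instead be a trained MLP. The point illustrated is that a continuous representation can be queried at any resolution without storing a voxel grid:

```python
import math

def sdf_sphere(p, radius=1.0):
    """Signed distance from a 3D point to a sphere at the origin.
    Negative inside, positive outside; the surface is the zero level set."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def occupancy(p, radius=1.0):
    """Binary occupancy derived from the signed distance: 1 inside, 0 outside."""
    return 1.0 if sdf_sphere(p, radius) < 0.0 else 0.0

def occupied_fraction(n):
    """Sample the implicit function on an n^3 grid over [-1.5, 1.5]^3.
    The representation itself is resolution-free; only the query grid changes."""
    xs = [-1.5 + 3.0 * i / (n - 1) for i in range(n)]
    total = sum(occupancy((x, y, z)) for x in xs for y in xs for z in xs)
    return total / n**3

# Finer query grids approximate the sphere's true volume fraction
# (4/3 * pi / 27 ~ 0.155) ever more closely, with no change to the model.
for n in (8, 32, 64):
    print(n, occupied_fraction(n))
```

A learned model replaces `sdf_sphere` with a neural network conditioned on shape latents or images; the querying pattern, and the memory advantage over explicit voxel grids, stays the same.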