Occupancy Network
This is the learned upgrade to the Occupancy Grid.
An occupancy network is a continuous representation of 3D space that uses a neural network to model the implicit surface of objects in the environment. Occupancy is represented as a continuous function over space rather than a fixed grid of cells.
Note that these networks are not doing any object detection, only mapping which regions of space are occupied.
Original paper: https://arxiv.org/abs/1812.03828
Resources
Cool stuff
Why occupancy networks?
Because classical object detection methods are limited: if an object class is not in the training dataset, the object is not detected, so there are always edge cases. An occupancy network sidesteps this by predicting whether space is occupied, regardless of what kind of object occupies it.
Walkthrough (CS231n 2025 Lec 15)
As a “deep implicit function”
Instead of voxelizing or meshing a shape, train an MLP that answers "is this point inside the object?" for any query point x in R^3, outputting an occupancy value f(x) in [0, 1]. The surface is the level set {x : f(x) = 0.5}. Memory cost is just the network weights, independent of resolution: you extract geometry at inference time via marching cubes, as finely as you want.
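A minimal sketch of this resolution-independence. The trained MLP is replaced here by a hypothetical analytic occupancy function for a sphere (so the example runs without training); the point is that the representation stores only function parameters, and you can query a grid at any resolution afterwards:

```python
import numpy as np

# Hypothetical stand-in for a trained MLP f_theta: a soft occupancy
# function for a sphere of radius 0.5. A real occupancy network would
# be an MLP mapping (x, y, z) [+ a shape latent] -> P(inside).
def occupancy(points, radius=0.5):
    """points: (N, 3) query locations -> (N,) occupancy values in [0, 1]."""
    d = np.linalg.norm(points, axis=1)
    return 1.0 / (1.0 + np.exp(20.0 * (d - radius)))  # soft inside/outside

# The surface is the level set {x : occupancy(x) = 0.5}. Geometry can be
# extracted at ANY resolution at inference time; the representation
# itself stores no voxels, only the function's parameters.
def extract_grid(resolution):
    axis = np.linspace(-1, 1, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    occ = occupancy(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    return occ > 0.5  # boolean "inside" mask; feed to marching cubes for a mesh

coarse = extract_grid(16)
fine = extract_grid(64)  # 64x the voxels, same underlying representation
```

The fraction of "inside" cells in `fine` approaches the sphere's volume fraction as resolution grows, while the representation's memory stays fixed.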
Related implicit approaches
- Occupancy Networks (Mescheder et al. CVPR 2019; Chen & Zhang CVPR 2019): regress occupancy o(x) in [0, 1].
- DeepSDF (Park CVPR 2019) — regress signed distance instead. Smoother supervision (dense gradients everywhere, not just near the surface) generally yields better geometry than binary occupancy.
- LDIF — Local Deep Implicit Functions (Genova CVPR 2020) — decompose the shape into a structured set of local implicit elements (colored ellipsoids + per-element latents). Reduces the burden on one monolithic MLP and encourages part-aware structure.
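A tiny numeric illustration of why SDF supervision (DeepSDF) is denser than binary occupancy. For a sphere of radius 0.5 the analytic SDF is ||x|| - 0.5 and binary occupancy is its sign; the sphere and the two query points below are assumptions chosen just for the demo:

```python
import numpy as np

# Analytic signed distance to a sphere of radius 0.5 (negative inside).
def sdf(points, radius=0.5):
    return np.linalg.norm(points, axis=1) - radius

# Binary occupancy is just the sign of the SDF.
def occ(points, radius=0.5):
    return (sdf(points, radius) < 0).astype(float)

# Two query points, both well outside the surface:
a = np.array([[0.8, 0.0, 0.0]])
b = np.array([[0.9, 0.0, 0.0]])

occ_diff = occ(b) - occ(a)  # binary labels identical: no training signal here
sdf_diff = sdf(b) - sdf(a)  # distance target still varies with position
```

Away from the surface the occupancy target is constant (zero gradient signal), while the SDF target changes everywhere, which is the "dense supervision" argument in the note above.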
These are the implicit, non-parametric, learned corner of the 3D Representation taxonomy.
Source
CS231n 2025 Lec 15 slides ~80–88 (Occupancy Networks, DeepSDF, LDIF — implicit-function family for learned 3D shape).