SceneVerse++: Lifting Unlabeled Internet Videos into 3D Scene Understanding Training Data
Introduction

3D scene understanding is the task of enabling machines to perceive, reason about, and interact with three-dimensional environments. The central paradox of the field is that while the internet provides an effectively unlimited supply of video depicting real-world indoor scenes, existing annotated datasets remain bottlenecked at a scale of thousands of scenes collected through expensive, instrumented capture pipelines. ScanNet, the de facto benchmark for 3D perception, has stagnated at roughly 1,500 scenes since 2017. ARKitScenes, despite leveraging consumer-grade depth sensors, covers only single-room apartments captured under constrained protocols. This data scarcity fundamentally limits progress: models trained on small datasets overfit to domain-specific biases, fail to generalize across scene types, and cannot exploit the scale advantages that have driven breakthroughs in 2D vision and NLP. ...