Meta’s “Goal” is to Deliver Automated Room Scanning for Full Mixed Reality
Meta states that its "goal" is to deliver "automated room scanning for mixed reality."
Months ahead of the release of the high-powered Meta Quest Pro headset, Zuckerberg had revealed that the headset would have an IR projector to provide active depth sensing.
Devices equipped with depth sensors, such as the iPhone Pro, iPad Pro, and HoloLens 2, can automatically scan a room and generate a mesh of its walls and furniture. This enables virtual objects to appear behind real objects and even collide with them.
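To illustrate how a depth map makes this possible, here is a minimal sketch (not any vendor's actual renderer): if the headset knows the real-world depth at each pixel, it can show a virtual pixel only where the virtual object is closer to the viewer than the real surface.

```python
import numpy as np

def composite(virtual_rgb, virtual_depth, real_rgb, real_depth):
    """Show a virtual pixel only where it sits in front of the real surface."""
    virtual_in_front = virtual_depth < real_depth
    return np.where(virtual_in_front[..., None], virtual_rgb, real_rgb)

# Toy 2x2 scene: a real wall at 3.0 m everywhere; a virtual cube at 2.0 m
# in the left column and at 4.0 m (behind the wall) in the right column.
real_depth = np.full((2, 2), 3.0)
virtual_depth = np.array([[2.0, 4.0],
                          [2.0, 4.0]])
real_rgb = np.zeros((2, 2, 3))    # black stands in for the passthrough feed
virtual_rgb = np.ones((2, 2, 3))  # white stands in for the virtual cube

out = composite(virtual_rgb, virtual_depth, real_rgb, real_depth)
# Left column shows the cube; in the right column it is hidden by the wall.
```

Without a per-pixel depth map of the room, the renderer has no `real_depth` to compare against, which is exactly what either a hardware sensor or a software scene scan must supply.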
Facebook first showcased its research on semantic room scanning at the Oculus Connect 5 conference that took place in 2018. The device shown at the event seemed to have a depth sensor.
However, Meta dropped the depth sensor from the Quest Pro headset in the run-up to its launch. On all current Quest headsets, users must manually mark out the walls and furniture in their play space before virtual objects can collide with physical ones. This manual setup is cumbersome and still doesn't guarantee good results. The added friction has prompted many VR app developers to avoid supporting full-fledged mixed reality experiences.
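The manually marked geometry is typically coarse, on the order of simple boxes for a couch or a desk. As a rough sketch (using illustrative names, not Meta's actual Scene API), an app can then test virtual objects against those boxes with a standard axis-aligned bounding-box overlap check:

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned bounds in meters: (min_x, min_y, min_z) / (max_x, max_y, max_z).
    lo: tuple
    hi: tuple

def intersects(a: Box, b: Box) -> bool:
    """Standard AABB overlap test: the boxes overlap on every axis."""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

# A user-marked couch and a virtual ball that has rolled into it.
couch = Box(lo=(0.0, 0.0, 0.0), hi=(2.0, 0.8, 0.9))
ball = Box(lo=(1.8, 0.2, 0.5), hi=(2.1, 0.5, 0.8))
hit = intersects(couch, ball)  # True: the app can make the ball bounce off
```

Automated scene capture would produce this kind of geometry without the user walking the room and tracing each object by hand.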
In a blog post published yesterday, Meta branded its color passthrough tech stack as "Meta Reality" and stated that, in the future, "our goal is to deliver an automated version of Scene Capture" so that people aren't required to "manually capture their surroundings."
Research papers and some smartphone apps have already demonstrated that state-of-the-art computer vision and machine learning techniques can achieve this without dedicated depth-sensing hardware.
However, the systems shown in these papers run on powerful desktop GPUs, while the smartphone apps consume nearly all of the mobile chip's capacity, leaving little to nothing for rendering and other tasks. Shipping a room scanning system that is both reliable and performant on the current generation of standalone headset chips will be considerably more challenging.