According to analyst Ming-Chi Kuo, whose recent research note has been covered by both MacRumors and 9to5Mac, Apple’s long-rumoured mixed reality headset will feature 3D sensors for improved hand tracking. The headset is said to include four sets of 3D sensors, compared to the iPhone’s single unit, giving it better accuracy than the TrueDepth camera array currently utilised for Face ID.
The structured light sensors, according to Kuo, can detect objects as well as “dynamic detail change” in the hands, much as Face ID can read facial expressions to generate Animoji. “Capturing the details of hand movement can provide a more intuitive and vivid human-machine UI,” he writes, citing the example of a virtual balloon in your palm floating away once the sensors detect that your fist is no longer clenched. The sensors, Kuo says, would be able to detect objects up to 200 times farther away than the iPhone’s Face ID.
Hand tracking is already possible on Meta’s Quest headsets, but it isn’t a central element of that platform, which relies on monochrome cameras. Kuo’s note suggests Apple’s headset will support physical controllers in addition to hand tracking. Bloomberg reported in January that Apple was testing hand tracking for the headset.
This week, Kuo also shared some insights about what might come after Apple’s first headset. While the first model is expected to weigh roughly 300-400 grammes (0.66-0.88 pounds), a “significantly lighter” second-generation model with an improved battery system and a faster processor is expected in 2024, he says. Kuo expects the first model to launch next year, with Apple projecting sales of three million units in 2023, which suggests the initial product may be pricey and aimed at early adopters.