Codec Avatars 2.0: Meta strives for realism in the metaverse

Meta engineers are approaching a new level of realism for the metaverse with Codec Avatars 2.0, a prototype of photorealistic VR avatars built with advanced machine learning techniques.

Three years ago, Meta (then Facebook) unveiled its work on Codec Avatars. Powered by multiple neural networks, the avatars are generated using a specialized capture rig with 171 cameras. Once created, they can be driven in real time by a prototype VR headset with five cameras: two internal cameras for the eyes and three external cameras for the lower face. Since then, the teams behind the project have introduced several improvements to the system, including more realistic eyes and a version that requires only eye tracking and microphone input.

Last April, at the MIT seminar "Virtual Creatures and Being Virtual", Yaser Sheikh presented a new version of Codec Avatars. Despite significant progress, there is still a long way to go: "I would say that one of the big challenges of the next decade is to see if we can provide remote interaction that is indistinguishable from face-to-face communication." During the workshop, he noted that it is impossible to predict when Codec Avatars will actually be released. However, he said that when the project started, there were ten miracles to perform, and now he thinks there are five.

Realistic avatars with Codec Avatars 2.0

However, Codec Avatars 2.0 may end up as a standalone solution rather than a direct replacement for the current cartoon-style avatars. In an interview with Lex Fridman, CEO Mark Zuckerberg described a future in which users could use a cartoon-style avatar for casual games and a realistic avatar for business meetings.
