
Meta unveils new research: avatars, AR and brain-computer interface

Written by adrina


Image: Meta


At Meta Connect 2022, Meta presented new research results in the field of virtual and augmented reality. An overview with video examples.

Meta’s research looks ten years or more into the future, pushing the boundaries of what is possible today in technologies like virtual reality, augmented reality, and artificial intelligence. At Meta Connect 2022, the company gave an overview of research in many areas, from Meta’s AR headset to neural interfaces, 3D scanning, and photorealistic codec avatars.

Augmented reality

Meta intends to launch an elegant, visually appealing, yet powerful AR headset in the years to come. Since the technical challenges around miniaturization, performance, battery capacity, and waste heat are significant, Meta is pursuing a dual strategy in development.

“Glasses have to be relatively small to look good and feel good. So we approach the construction of augmented reality glasses from two different perspectives. The first is to build on top of all the technology we need for full AR glasses and then work to integrate it into the best possible glasses form factor. The second approach starts with the ideal form factor and works to fit more and more technology into it over time,” said Mark Zuckerberg in the keynote.

The former is codenamed Project Nazare, while the latter is a joint project between Meta and EssilorLuxottica, the world’s largest eyewear manufacturer. One product has already emerged from this partnership: the Ray-Ban Stories, which offer several smart features but have no built-in display.

At Meta Connect 2022, Meta and EssilorLuxottica gave an update on their smart glasses project and the partnership:

  • The Ray-Ban Stories will soon be able to call contacts hands-free or send a text message via a software update.
  • Also new is a feature called Spotify Tap. “You just tap and hold the side of your glasses to play Spotify, and when you want to hear something else, tap and hold again and Spotify recommends something new,” Meta writes.
  • Rocco Basilico, EssilorLuxottica’s wearables boss, announced during the keynote that his company and Meta are working on a new headset that will open a “portal into the metaverse”. Will the next generation of Ray-Ban Stories come with a display? Zuckerberg and Basilico left this open.

What about Project Nazare?

At Meta Connect 2021, Meta simulated what a view through Project Nazare might look like. This year, Zuckerberg provided another teaser of the AR headset without showing it.

Meta’s CEO walks down a hallway wearing the device, controlling it with an EMG wristband. The footage apparently shows a view through Project Nazare.

Zuckerberg texts Meta’s research director, Michael Abrash, and records a video, both via microgestures. This is made possible by the EMG wristband, which picks up motor nerve signals at the wrist and converts them into computer commands with the help of AI. Meta sees this type of interface, alongside voice control and hand tracking, as the most important AR operating concept of the future.

Zuckerberg didn’t say when Project Nazare might appear. According to a report, Meta plans to unveil it in 2024 and commercialize it in 2026.

Neural Interface

Another block in Meta’s research update concerns the aforementioned EMG wristband. For the AR interface of the future, Meta is betting on a combination of this technology and personalized AI support that recognizes the context of a situation and proactively assists the wearer in everyday life. This is intended to enable an intuitive, almost frictionless interface between humans and computers.

“By combining machine learning and neuroscience, this future interface will work for different people while accommodating their differences in physiology, size, and more through a process known as ‘co-adaptive learning,'” writes Meta.

A video illustrates this. It shows two Meta employees playing a simple arcade game using an EMG wristband and finger gestures. Note that they use slightly different gestures: the artificial intelligence learns from the signals and movements and generates an individual model.

“Every time one of them makes the gesture, the algorithm adapts to interpret that person’s signals, so that each person’s natural gesture is recognized quickly and with high reliability. In other words, the system understands them better and better over time,” Meta writes.

The better the algorithm is trained, the less the hands and fingers have to move. The system recognizes the person’s intended actions by decoding the signals at the wrist and converting them into computer commands.
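
Meta has not published how its co-adaptive learning works, so purely as an illustration: below is a minimal Python sketch of a per-user gesture model that nudges itself toward each confirmed example, so each person’s natural variant of a gesture is recognized better over time. The class name, the RMS features, and the update rule are all assumptions, not Meta’s actual algorithm.

```python
# Minimal sketch of "co-adaptive learning" for EMG gestures.
# Assumption: a nearest-centroid model with per-user online updates
# stands in for Meta's unpublished approach.
import numpy as np

class CoAdaptiveGestureModel:
    """Per-user gesture recognizer that adapts with every confirmed gesture."""

    def __init__(self, n_channels: int, alpha: float = 0.2):
        self.alpha = alpha       # adaptation rate toward new examples
        self.centroids = {}      # gesture name -> mean feature vector
        self.n_channels = n_channels

    def _features(self, emg_window: np.ndarray) -> np.ndarray:
        # Root-mean-square energy per electrode channel, a common EMG feature.
        return np.sqrt((emg_window ** 2).mean(axis=0))

    def predict(self, emg_window: np.ndarray) -> str | None:
        if not self.centroids:
            return None
        f = self._features(emg_window)
        return min(self.centroids, key=lambda g: np.linalg.norm(f - self.centroids[g]))

    def adapt(self, emg_window: np.ndarray, gesture: str) -> None:
        # Move this user's centroid toward the latest confirmed example,
        # so their personal variant of the gesture is recognized more reliably.
        f = self._features(emg_window)
        if gesture not in self.centroids:
            self.centroids[gesture] = f
        else:
            c = self.centroids[gesture]
            self.centroids[gesture] = (1 - self.alpha) * c + self.alpha * f

# Two users would each hold their own model, so slightly different "pinch"
# gestures converge to each person's own signal pattern.
model = CoAdaptiveGestureModel(n_channels=8)
window = np.random.randn(200, 8)     # 200 samples x 8 electrodes (fake data)
model.adapt(window, "pinch")
print(model.predict(window))         # -> "pinch"
```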

AR navigation for the visually impaired

Meta is collaborating with Carnegie Mellon University (CMU) on a research project to help the visually impaired navigate complex interiors.

The university’s researchers used Meta’s Project Aria sensor glasses to scan Pittsburgh International Airport in 3D. They used this 3D map of the environment to train AI localization models. NavCog, a smartphone app developed by CMU, can then guide users more safely through the airport by giving them audio instructions. The video below explains the technique.
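
NavCog’s internals are not detailed here, so purely as an illustration: once the AI models have localized the user on the 3D map, the remaining step is turning a pose and the next route waypoint into a spoken cue. A hedged sketch, with all function names and thresholds hypothetical:

```python
# Hypothetical sketch of the instruction side of an app like NavCog:
# a localized position plus the next waypoint of a precomputed route
# becomes an audio cue. Names and thresholds are illustrative only.
import math

def next_instruction(position, heading_deg, waypoint, arrive_radius_m=2.0):
    """Turn an estimated pose and the next route waypoint into a spoken cue.

    heading_deg uses math convention: counterclockwise from the +x axis.
    """
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    distance = math.hypot(dx, dy)
    if distance < arrive_radius_m:
        return "You have reached the next waypoint."
    bearing = math.degrees(math.atan2(dy, dx))
    turn = (bearing - heading_deg + 180) % 360 - 180  # signed angle in (-180, 180]
    if abs(turn) < 20:
        return f"Continue straight for {distance:.0f} meters."
    side = "left" if turn > 0 else "right"
    return f"Turn {side}, then continue for {distance:.0f} meters."

# Example: user at (0, 0) facing along +x, next waypoint 12 m to their left.
print(next_instruction((0.0, 0.0), 0.0, (0.0, 12.0)))  # -> "Turn left, ..."
```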

Easy 3D scanning

Mixed reality headsets like Meta Quest Pro display the physical environment inside the headset. However, they cannot yet scan objects and save them as 3D models. If this were possible, real objects could be brought into virtual environments.

“Creating 3D objects from scratch is difficult, and using physical objects as templates could be easier and faster. But there is no seamless way to do that today, so we are exploring two different technologies to solve this problem,” writes Meta.

The first approach uses a machine learning technique called Neural Radiance Fields, or NeRFs for short, to create a highly detailed 3D object from just a few photos.
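
Meta gave no implementation details, but the published NeRF technique works roughly like this: a neural network maps 3D points to color and density, and pixels are formed by volume rendering along camera rays. Below is a minimal numpy sketch of that rendering step, with a hand-made toy field standing in for the trained network:

```python
# Sketch of the core NeRF idea: a field maps a 3D point to color and
# density, and an image is formed by compositing samples along each ray.
import numpy as np

def field(points):
    """Toy radiance field: a soft sphere at the origin (stand-in for the MLP)."""
    r = np.linalg.norm(points, axis=-1)
    density = 8.0 * np.exp(-((r - 0.5) ** 2) / 0.02)  # dense near radius 0.5
    color = np.stack([np.clip(1 - r, 0, 1),           # reddish toward center
                      0.3 * np.ones_like(r),
                      np.clip(r, 0, 1)], axis=-1)
    return color, density

def render_ray(origin, direction, near=0.0, far=3.0, n_samples=128):
    """Volume rendering: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    points = origin + t[:, None] * direction
    color, sigma = field(points)
    alpha = 1.0 - np.exp(-sigma * delta)              # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # T_i
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)     # composited RGB

# One ray marching through the toy scene toward the origin:
rgb = render_ray(origin=np.array([0.0, 0.0, -2.0]),
                 direction=np.array([0.0, 0.0, 1.0]))
print(rgb)  # approximate color seen along this ray
```

In the real technique, training adjusts the network behind `field` until rays rendered this way reproduce the input photos; the toy field above only illustrates the rendering math.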

The second technology is called inverse rendering. Objects digitized with this method respond dynamically to lighting and physics in VR environments.
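
Meta’s pipeline is not public, but the reason inverse-rendered objects can react to light is that the scan is decomposed into material and geometry rather than baked pixels. A minimal sketch under a simple Lambertian assumption (all values illustrative):

```python
# Once a scan is decomposed into albedo (material color) and surface
# normals, it can be relit under any VR light source. This uses the
# standard diffuse model: reflected = albedo * light_color * max(0, n . l).
import numpy as np

def relight(albedo, normal, light_dir, light_color):
    """Diffuse (Lambertian) shading of one recovered surface point."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_color * max(0.0, float(n @ l))

# The same scanned point under two different VR lights:
albedo = np.array([0.8, 0.4, 0.2])   # recovered material color
normal = np.array([0.0, 1.0, 0.0])   # recovered surface normal
print(relight(albedo, normal, np.array([0, 1, 0]), np.array([1.0, 1.0, 1.0])))
print(relight(albedo, normal, np.array([1, 1, 0]), np.array([0.5, 0.5, 1.0])))
```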

A disadvantage of both technologies is that they do not yet work in real time. However, Meta sees them as important steps towards easy 3D scanning of physical objects.

Codec Avatars

Photorealistic digital encounters are, for Mark Zuckerberg, the killer app of virtual and augmented reality.

To this end, Meta has been working for many years on so-called codec avatars: digital alter egos that are visually almost indistinguishable from the real person.

At Meta Connect 2021, Meta showed the second generation of codec avatars and demonstrated full-body avatars. This year brought another update on the technology.

Codec Avatars 2.0 can now switch between virtual outfits and are even more expressive. To demonstrate the improved expressiveness, Mark Zuckerberg had a codec avatar made of himself. The video below shows what the technology can do now.

One of the biggest obstacles to commercializing and adopting codec avatars is their time-consuming creation: users would have to be scanned in a special 3D studio.

To make it easier to generate a personal codec avatar, Meta is working on Instant Codec Avatars, for which a two-minute facial scan with a smartphone is sufficient. The following video demonstrates the recording process.

The downside of this process is that the finished avatar doesn’t look quite as realistic as Zuckerberg’s, and it still takes hours to generate the avatar and make it ready for use. However, Meta is working on speeding up the process.

Meta Connect 2022: Watch the research update on YouTube

Meta emphasizes that these projects are research and that the technologies will not necessarily find their way into products. “Nevertheless, it’s a taste of where the technology will go over the next five to 10 years,” writes Meta. Below is the video excerpt presenting the research covered in this article.

