Facebook’s vision of the future of human-computer interaction (HCI) is slowly taking shape. In a series of long blog posts, Facebook unpacks what it describes as a “10-year vision of a contextually-aware, AI-powered interface for augmented reality (AR) glasses” that will use the information that a user chooses to share to infer what that particular user wants to do when they want to do it.

Facebook Vision for Future Human-Computer Interfaces

According to the blog post, Facebook’s vision of a future AR device will rely on a trio of AR glasses, neural armbands, and haptic gloves.

Facebook kicks off the series of blog posts by providing insight into the company's XR research and the evolution of future human-computer interaction. Facebook describes an interface that leverages artificial intelligence (AI) to understand situations and contexts and proactively support the user in their everyday life.

According to Facebook, such an interface should emerge in the next 10 years and will form the basis for its advanced AR glasses that will completely replace smartphones.

The company says that next week it will share details of its nearer-term research, which focuses on wrist-based input working in conjunction with a usable but limited contextualized AI that can dynamically adapt to the user and their environment. Later in the year, Facebook will also reveal details of its groundbreaking work on soft robotics for comfortable, all-day wearables, along with an update on its haptic glove research.

Facebook envisions a future in which stylish, lightweight smart glasses eventually replace the need for smartphones or computers. Wearers will be able to feel physically present with friends and family anywhere in the world, while a contextually aware AI helps them navigate their surroundings and serves up rich 3D virtual information on the fly. Facebook describes it as a device that lets the wearer look up and stay present in the world around them, one that won't force a choice between the real and the digital.

A New Computing Paradigm

In its first blog post in the series, Facebook describes a device that would usher in a new paradigm of computer interaction. If augmented reality glasses are to gain general acceptance and real traction, they will need a new, tailor-made interface. Input devices such as a mouse, a keyboard, or even a touch display are unsuitable for hardware that sits on the wearer's face. Instead, the device will need a combination of newer, more natural forms of interaction such as gaze, gestures, and voice control.

The basics of today's interfaces were laid down in the 1960s, when Doug Engelbart invented the mouse; together with graphical 2D user interfaces, it has shaped human-computer interaction for close to six decades now. For augmented reality, the interface has to be reinvented.

According to Facebook, the Augmented Reality glasses of the future should work in similar ways to the human mind. For these interfaces to be helpful, they will need to understand everyday situations and be able to respond to them dynamically and proactively.

Michael Abrash, Chief Scientist at Facebook Reality Labs Research, says that augmented reality requires smoothly functioning, always-available technology that can be used so intuitively that it becomes an extension of the wearer's own body. According to Abrash, a completely new interface has to be invented, one that puts people at the center of computer interactions. He describes AR interaction as "one of the hardest and most interesting multi-disciplinary problems around," as it upends decades of human-computer interaction paradigms.

Facebook says an all-day wearable will need a new paradigm, as it will have to function in every situation a user encounters over the course of a day. It will need to work the way our minds do: doing what the user wants, telling them what they want to know at the exact moment they want to know it, seamlessly sharing information and taking action when asked, and otherwise staying out of the wearer's way.

Abrash adds that for augmented reality to be truly ubiquitous, the technology must be low-friction and "always available," so intuitive that it feels almost like an extension of the wearer's body, something far more advanced than the HCIs in existence today. Facebook therefore has to invent a completely new interface that puts users at the center of the experience.

Facebook says that such a human-computer interface will need to be proactive rather than reactive. It should seamlessly turn intention into action and create a greater sense of presence in our lives by helping us stay connected with the people around us.

The interface will also need to be socially acceptable in every respect: secure, private, unobtrusive, easy to learn, easy to use, reliable, effortless, and comfortable enough to wear all day.

A Bracelet as a Neural Interface

The Facebook team developing the new interface consists of several hundred interdisciplinary specialists and is headed by Sean Keller.

Elaborating on Abrash's idea of a user-centered AR interface, Keller explains that the company's interface will turn the conventional view of input and output on its head: AR interaction entails a wearable computer that can perceive, learn, and act in concert with the user all day long.

The Facebook blog defines two important foundations for the augmented reality interfaces of the future. The first is that interactions ought to be as simple as possible. Facebook is currently researching different kinds of input and is working on gaze, gesture, and voice control.

Facebook says an even more intentional form of input is offered by neural interfaces, which use artificial intelligence to analyze muscle activity and translate it into computer commands. One of the most promising variants of such a neural interface is a wrist-worn bracelet: the signals at the wrist are so well defined that even the finest finger movements are recognizable. Input can thus be made effortlessly while remaining practically invisible to bystanders.
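Neither Facebook's models nor CTRL-Labs' decoding pipeline are public, but the basic idea of mapping wrist muscle activity to commands can be sketched with a toy classifier. Everything below (the channel count, the gesture prototypes, the RMS features) is a hypothetical illustration, not the actual system:

```python
import numpy as np

def rms_features(emg_window):
    """Root-mean-square energy per electrode channel, a common
    first feature for surface-EMG gesture decoding."""
    return np.sqrt(np.mean(np.square(emg_window), axis=1))

def classify(features, centroids):
    """Nearest-centroid classifier: map a feature vector to the
    gesture whose stored prototype it most closely resembles."""
    names = list(centroids)
    dists = [np.linalg.norm(features - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy prototypes for two hypothetical micro-gestures (4 channels each).
centroids = {
    "pinch": np.array([0.8, 0.1, 0.1, 0.1]),
    "rest":  np.array([0.05, 0.05, 0.05, 0.05]),
}

# A simulated 4-channel, 50-sample window dominated by the first electrode.
window = np.vstack([
    0.8 * np.ones(50),
    0.1 * np.ones(50),
    0.1 * np.ones(50),
    0.1 * np.ones(50),
])
print(classify(rms_features(window), centroids))  # pinch
```

A real EMG decoder would use trained neural networks over far richer features, but the pipeline shape is the same: window the signal, extract features, classify into a command.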

To advance the development of wrist-based neural interfaces, Facebook acquired the startup CTRL-Labs in 2019, which was already working on such a bracelet. A report states that the technology could appear in a Facebook smartwatch as early as 2022.

A Context-Aware AI

Facebook names artificial intelligence, context, and personalization as the second foundation of the AR interface of the future. To be genuinely useful, augmented reality glasses have to adapt to their users and be able to grasp context. For example, the glasses could recognize when a wearer leaves the house and display the outside temperature, or show the calorie content of a product they pick up in a supermarket.
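The blog post doesn't say how such context-driven behavior would be implemented, but the leave-the-house and supermarket examples can be sketched as a simple rule lookup over an inferred context. The context fields, rules, and data below are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    """A snapshot of what the glasses could infer about the moment."""
    location: str              # e.g. "home", "outdoors", "supermarket"
    just_left_home: bool
    held_product: Optional[str] = None

def suggest_overlay(ctx, weather, nutrition):
    """Pick which information card to surface, if any, given context.
    Purely illustrative rules, not Facebook's actual system."""
    if ctx.just_left_home:
        return f"Outside temperature: {weather['temp_c']}°C"
    if ctx.location == "supermarket" and ctx.held_product:
        kcal = nutrition.get(ctx.held_product)
        if kcal is not None:
            return f"{ctx.held_product}: {kcal} kcal"
    return None  # stay quiet: don't get in the wearer's way

weather = {"temp_c": 14}
nutrition = {"granola bar": 190}

print(suggest_overlay(Context("outdoors", True), weather, nutrition))
print(suggest_overlay(Context("supermarket", False, "granola bar"), weather, nutrition))
```

The real challenge Facebook describes is not the rules but inferring the context itself, which is where the AI and the Project Aria sensor data come in.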

Artificial intelligence is needed for such a system to interpret environmental data. Facebook's Project Aria sent more than 100 selected employees and contractors onto the streets wearing augmented reality research glasses, gathering data on how the glasses perceive the world and on the privacy safeguards such glasses may need to make the people around them feel comfortable.

Back then, it was reported that these research glasses would help Facebook develop AR glasses that overlay 3D graphics and information onto the wearer’s view of the real world. The analysis of the data gathered from that research should flow into Facebook’s AI-powered glasses capable of dynamically reacting to everyday situations.

The CTRL-Labs Bracelets Can Also Provide Haptics

Advanced augmented reality glasses also need haptic feedback, which allows users to interact with computers more intuitively than is possible with current input devices such as the mouse, keyboard, and touch displays.

As a result, Keller’s team is working on haptic electronics that have a soft texture and which can be worn all day such as in the form of a bracelet.

Facebook's haptic technology will likely go into the same bracelet that measures muscle activity and translates it into computer commands. Even before Facebook's acquisition of CTRL-Labs, the company was already working on a bracelet known as Tasbi that allows for fairly sophisticated haptic effects.

In its second blog post, Facebook will showcase concrete results of its nearer-term research into wrist-based input, covering Facebook's armband, its neural interface, and the artificial intelligence that reacts to the user and the environment.

The third blog post to be released later this year will feature comfortable and all-day wearable devices along with an update on research in the field of haptic gloves.

Facebook’s AR Interface in 2030

Facebook wants to bring all these elements together and to realize a concrete AR HCI within the next decade. To enable users to visualize this abstract technology in use, Facebook provided an example of a visit to a coffee shop in 2030 with its AR wearable.

A user wearing the AR glasses and the comfortable bracelet visits a coffee shop to get some work done. As they walk out the door, the device asks whether they wish to listen to the latest episode of their favorite podcast; with a small movement of their hand, they click 'Play'.

When they enter the café, their high-tech AR assistant asks: “Should I order an Americano?” If the user isn’t in the mood, they can slide their fingers and click “No”.

The user then walks to a table and, instead of pulling out a laptop, takes out a pair of soft, lightweight haptic gloves. Once the gloves are on, a virtual screen and keyboard appear in front of them, and the user begins to edit a document. Because haptic technology will by then have reached a high level of realism and immersion, typing feels as if it were happening on an actual physical keyboard.

The café, however, is noisy, and the user can't concentrate, so the Assistant engages special in-ear monitors (IEMs) with active noise cancellation to soften the background noise. When a server passing the table asks whether they want a refill, the glasses know to let the server's voice through the noise cancellation even though the ambient noise stays muted, proactively enhancing the voice with beamforming. The user can hold a normal conversation with the server during the refill despite the noisy environment, and all of this happens automatically.
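As a rough illustration of the audio behavior in this scenario, here is a toy mixer that decides, per sound source, how much to attenuate or enhance it. Real beamforming and active noise cancellation operate on raw microphone-array signals; this sketch only captures the decision logic, and all source names and gain values are made up:

```python
def mix_audio(sources, speaking_to_user, enhancement_db=6.0):
    """Assign a gain (in dB) to each sound source around the wearer.
    Background noise is attenuated; a voice addressing the user is
    passed through and gently boosted, mimicking the effect of
    beamforming. Purely illustrative."""
    gains = {}
    for name, kind in sources.items():
        if kind == "speech" and name in speaking_to_user:
            gains[name] = enhancement_db   # the server's voice: boost it
        elif kind == "speech":
            gains[name] = -20.0            # other chatter: muted
        else:
            gains[name] = -40.0            # machine noise: strongly cut
    return gains

sources = {
    "server": "speech",
    "table_nearby": "speech",
    "espresso_machine": "noise",
}
print(mix_audio(sources, speaking_to_user={"server"}))
# {'server': 6.0, 'table_nearby': -20.0, 'espresso_machine': -40.0}
```

The hard part in practice is the first step this sketch takes as given: detecting which sound source is speech directed at the wearer.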

While the user is still speaking with the server during the refill, a friend calls, and the Assistant automatically sends the call to voicemail so as not to interrupt the conversation. When it is time to leave the café and pick up the kids for a calendared event, the Assistant sends a gentle reminder, factoring in current traffic conditions so the user won't be late.
