Insights From The Blog

How Artificial Intelligence Intersects with Extended Reality

It seems that the world is filled with talk of Artificial Intelligence (AI), with all manner of chatbots and content-creation apps readily available. We have deepfake videos purportedly showing people, including celebrities, doing and saying things that they actually never have, overly animated singing babies, and engaging art that has never seen human hands. AI isn't perfect, and it is still relatively easy to spot all of these things, including AI-generated writing, but AI is getting stronger, and in doing so it could have a major impact on XR. Is that impact likely to be positive or negative in nature?

While AI as a whole is complex and difficult to fully define, six distinct areas are commonly used to explain it, its forms and its impacts:

  • Machine learning. This is the approach that allows computers to learn from data without being explicitly programmed; it is actively employed in applications of daily life, even if the user is unaware of it. It is fundamentally the science of getting machines to interpret, process, and analyse data in order to solve real-world problems.
  • Neural Networks. A subfield of AI that draws inspiration from neuroscience, neural networks combine ideas from cognitive science with the power of machines to complete tasks. The human brain contains billions of interconnected neurons, and an artificial neural network mimics that structure in software, using layers of simple connected units that learn from examples.
  • Fuzzy Logic. Fuzzy logic, in its simplest form, is a method for representing and reasoning about information that is imprecise, because it uses degrees of truth rather than the strict true/false of classical logic. Using fuzzy logic, we can reason about ideas that are inherently hazy. For the purposes of machine learning and convincingly simulating human reasoning, fuzzy logic is both practical and adaptable.
  • AI Robotics. Common applications for AI-driven robots include carrying out tasks that would be too taxing for humans to perform consistently. The automotive industry uses robotic production lines, and NASA uses robotic systems to move heavy equipment around in orbit. Researchers in artificial intelligence are also working to program social interactions into robots built with machine learning.
  • Expert Systems. An expert system is a type of AI technology that mimics the judgments and predictions a human expert would make. It does this by applying logical rules, derived from expert insight, to the information in its knowledge base in order to answer the user's queries.
  • Natural Language Processing. Often seen as the centre-piece of AI, natural language processing (NLP) is the subfield of AI that enables machines to understand and respond to human language. It allows a machine to read and comprehend text in a manner similar to how a human would, generating information from textual input through search and analysis. Programmers use NLP libraries to train computers to parse text for useful information, and spam filtering is a popular application: using NLP, a computer can determine whether an email is spam by analysing its text and subject line.
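The "degree of truth" idea behind fuzzy logic can be made concrete with a toy membership function. This is a minimal sketch, with invented thresholds, of how a fuzzy system might grade a temperature as "warm" on a sliding scale instead of a hard yes/no:

```python
def warmth(temp_c):
    """Toy fuzzy membership: the degree (0.0 to 1.0) to which a temperature is 'warm'."""
    if temp_c <= 15:
        return 0.0  # definitely not warm
    if temp_c >= 25:
        return 1.0  # fully warm
    return (temp_c - 15) / 10  # linear ramp between the two thresholds

# Unlike classical logic's strict true/false, fuzzy logic allows partial truth.
print(warmth(10))  # 0.0
print(warmth(20))  # 0.5
print(warmth(30))  # 1.0
```

A real fuzzy controller would combine many such membership functions with fuzzy rules, but the core idea, replacing a binary predicate with a graded one, is exactly this.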
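The spam-filtering example above can likewise be sketched in a few lines. Real filters use trained statistical models over large corpora; this hypothetical keyword-scoring version, with an invented word list and threshold, only illustrates the idea of scoring an email's subject and body text:

```python
# Drastically simplified stand-in for a statistical spam filter.
# The word list and threshold are invented for illustration.
SPAM_WORDS = {"winner", "free", "prize", "urgent", "claim"}

def spam_score(subject, body):
    """Fraction of words in the email that look 'spammy'."""
    words = (subject + " " + body).lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return hits / max(len(words), 1)

def is_spam(subject, body, threshold=0.2):
    return spam_score(subject, body) >= threshold

print(is_spam("URGENT prize", "claim your free prize now"))        # True
print(is_spam("Meeting notes", "see attached agenda for tomorrow"))  # False
```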

So AI is clearly a multi-faceted field that is prevalent in many different areas of our lives, and it is only natural that it would start to become a factor in XR. Because of the more visual, constructive nature of XR and its variants, AI contributions are usually confined to three well-defined areas:

  • Computer Vision. Applied to augmented reality, computer vision enables apps to recognise and interpret physical-world features. This allows virtual elements to be incorporated into the environment and, potentially, to interact with it. Increasingly, computer vision is also used to speed up the creation of digital replicas of real-world objects and spaces.
  • Large Language Models. The purpose of a large language model (LLM) is to receive language input and generate language output. Ideally, people won't even notice they're dealing with AI, because the exchange happens behind the scenes. The future of non-player character interactions, and of "non-human agents" that guide us through massive virtual environments, may lie in the use of large language models.
  • Generative AI. Generative AI can take a specific prompt and produce an image, a short film, or a 3D object. In XR experiences, generative AI is commonly used to create "skyboxes", the panoramic backdrops that surround the virtual space in which players conduct their actual interactions, for a seamless perspective. The potential for cutting-edge XR experiences is rapidly expanding, however, as AI is increasingly employed to build the virtual assets and settings themselves.
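The "language in, language out" loop of a language model can be illustrated with a toy word-level Markov chain. Real LLMs are neural networks with billions of parameters, not lookup tables, but the basic cycle is the same: condition on the preceding text, emit the next word, repeat. The corpus here is invented for illustration:

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram model: for each word, the words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a successor of the last word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # no known successor: stop
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the headset renders the scene and the agent guides the user"
model = train(corpus)
print(generate(model, "the"))
```

Swapping the lookup table for a trained neural network, and words for sub-word tokens, is conceptually how an LLM-driven "non-human agent" would hold a conversation inside a virtual world.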

It would be foolish to think that these many facets of AI could not impact and enhance XR in some way, and this is exactly what we are now seeing. Conventional, non-AI methods can only take AR so far, but they can be considerably enhanced by adding deep learning and ontologies, two components of artificial intelligence, into augmented reality. AI techniques can be implemented in AR to provide better in-person experiences: AI algorithms can combine the readings of the various sensors an AR device relies on, such as gyroscopes and GPS, to gather more accurate information than any single sensor could provide. Moreover, AI can be combined with AR to deliver more exciting and dynamic mobile app experiences, letting users interact with their surroundings and control virtual objects. AI solutions can also help communicate robot motion intent and facilitate intuitive control, so the relationship between humans and robots can be considerably enhanced by incorporating AI and AR into robotics. Speech recognition, image tracking, and object detection are all areas where smart devices benefit from the addition of AI features. Overall, AI is critical for developers seeking to create more engaging applications for AR devices.
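The sensor-combination idea above is a classic fusion problem. One simple, well-known non-learned approach is a complementary filter, which blends a fast-but-drifting gyroscope heading with a slow-but-absolute compass/GPS heading; an AI system would learn the weighting instead of fixing it. This sketch, with invented readings, shows the principle:

```python
# Complementary filter: fuse a gyroscope (responsive, but drifts over time)
# with a compass/GPS heading (absolute, but noisy and slow). The blend
# factor alpha and all readings below are invented for illustration.
def fuse_heading(prev_heading, gyro_rate, dt, compass_heading, alpha=0.98):
    gyro_estimate = prev_heading + gyro_rate * dt  # integrate angular rate
    # Trust the gyro for short-term changes, the compass for long-term truth.
    return alpha * gyro_estimate + (1 - alpha) * compass_heading

heading = 90.0  # degrees
for _ in range(50):
    # gyro reports a 2 deg/s turn; compass keeps reading roughly 100 degrees
    heading = fuse_heading(heading, gyro_rate=2.0, dt=0.1, compass_heading=100.0)
print(round(heading, 1))  # drifts smoothly toward the compass-corrected value
```

The fused estimate follows the gyro's quick changes while the compass term steadily pulls out the accumulated drift, which is why a headset's orientation tracking feels both responsive and stable.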

As AI becomes increasingly powerful, it will undoubtedly have a greater impact on XR systems and enhance the way that we interact with them. While AI and XR have been treated as separate developments in the past, it is now obvious that they are more closely related than they first seemed, and each technology can help the other grow. Machine learning and AI are among the most important enablers of extended reality's widespread availability: using intelligent systems, software can more easily capture gestures and eye movements, which could improve the realism of augmented and mixed reality experiences. AI-enabled holographic imagery, for example, may usher in a new era of teamwork.

Many augmented and virtual reality solutions include speech and image recognition, as well as computer vision for tracking visual data. AI can aid workers in any setting by collecting data and determining which content should be displayed under which conditions. A remote expert guiding a professional wearing an AR headset is one of the most discussed use cases for smart glasses; however, the advisor may not need to be a human being.
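Deciding "which content should be displayed under what conditions" is exactly the kind of problem the expert systems described earlier were built for. This is a hypothetical sketch, with rules and names invented for illustration, of how a rule-driven AR advisor might pick an overlay:

```python
# Hypothetical expert-system-style rules for an AR headset: given the
# wearer's context, decide which overlay to display. Every rule, key,
# and overlay name here is invented for illustration.
RULES = [
    (lambda ctx: ctx["hazard_nearby"],
     "Show hazard warning overlay"),
    (lambda ctx: ctx["task"] == "repair" and ctx["step"] == "open_panel",
     "Show exploded diagram of panel fasteners"),
    (lambda ctx: ctx["task"] == "inspection",
     "Show checklist overlay"),
]

def select_overlay(ctx):
    for condition, overlay in RULES:
        if condition(ctx):
            return overlay  # first matching rule wins
    return "Show default heads-up display"

ctx = {"task": "repair", "step": "open_panel", "hazard_nearby": False}
print(select_overlay(ctx))  # Show exploded diagram of panel fasteners
```

Ordering the rules by priority (safety first) is the simplest conflict-resolution strategy; a production expert system would use a proper inference engine over a much larger knowledge base.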

Because AI has now reached a point where we have difficulty distinguishing between what is human and what is not, it seems obvious that AI should be a fundamental part of the XR world, enhancing it enormously. With AI integrated into XR systems, they will become increasingly powerful, and that in turn will fuel whole new uses for virtual systems. Who knows where this is going to end, but it can't be stopped now (nor should it be), and we are entering the next level of XR.