Insights From The Blog
AI as the Backbone of Immersive Worlds: Revolutionizing XR Development and Experience
Up until now, computing, in all its forms, has lacked coherence: we still have a range of different hardware in terms of chip design and system architecture, and a whole host of operating systems (OS) like Windows, iOS, Android and Linux that don’t play well together. Hardware and software concepts are still very much stuck in the 1990s, and that is of no benefit to the people all these companies ultimately serve – the user.
There also hasn’t been a central concept that has connected the many diverse areas in which computers find use – such as accounting, writing and content creation, research, marketing, design, game development, coding – and that’s made each of them a little poorer because of it. Many of us have to create designs with a project breakdown included and add financials to justify the work, but that takes time to do well.
But now, with reliable and powerful Artificial Intelligence (AI), all of the different fields of computing can be made to work together, making the experience richer and more productive. Humans can do the exciting front-end work while AI works reliably in the background. We are still only in the early days of what AI can do, but even early indications show that this is going to be the great leveller: AI has the potential to connect everything everywhere, all at once!
But what does that mean for the creative industry, and how will AI positively impact the development of XR worlds and environments? Let’s dive in and have a look.
Immersive Environments: New Worlds for the 21st Century
Immersive environments aren’t actually that new. Some would point to the storytelling Great East Window of York Minster, where an enormous stained-glass picture window, together with religious ceremonies, music, and choral singing, served as an immersive storytelling and worship experience for audiences of the time. People were drawn into the whole experience and felt connected with the subject matter – something that is essential for real immersion. In terms of computer-generated worlds, we have had increasingly complex models, from the original Multi-User Dungeons (MUDs) of 1978, through the Doom franchise, and on to truly immersive worlds like Wallace & Gromit in The Grand Getaway.
Technology was always the weak link in early computer-generated immersive environments: until VR headsets became commercially available, they were all 2-dimensional and not very immersive. Proper 3D worlds required the right technology to really showcase what could be achieved, and mass-market VR headsets were just that. Now we can construct immersive experiences that are really believable. Vibrant and engaging, the virtual worlds that we can now access are as close to reality as we always wanted, allowing us to become something else – something truly exceptional – within a jaw-dropping immersive environment.
But that is as far as it goes, and while today’s virtual experiences are wonderful, they simply mimic real life and can still be a little, well, flat. A real virtual experience should be mind-blowing and something that leaves you open-mouthed with its wonder, but that’s not what we are getting. Sure, roaming the universe in your own USS Enterprise is fun, but it hardly takes your breath away.
So, why is that? We have the means to construct exciting worlds with software that can be used to make anything, but it all hits a ceiling at the designer’s imagination. It doesn’t matter how good a designer or graphic artist is at creating virtual worlds if all they can imagine are variations on what they have seen or read in books as drivers for the artistic process. We are the sum of our experiences, and that becomes the limiting factor. To truly create jaw-dropping experiences, we need something more, and that is where AI comes in.
Imagining More
As a human designer, imagine a truly alien world and create a virtual experience around it. You will be limited by your own experiences because it is very difficult for humans to step outside what they know. Sure, human-designed aliens might have an array of tentacles, purple blotchy skin and weird facial features, but those kinds of differences aren’t too unusual. Humans tend to have a conceited mindset that we have just the right balance of features – two arms for reaching, two legs for ambulation, and a couple of eyes at the top for security and hunting – so why would aliens need anything different? We aren’t very good at creating anything beyond our own frame of reference – that’s why so many aliens throughout cinematic and gaming history have looked distinctly humanoid.
AI, on the other hand, doesn’t have the constraints that we seem to saddle ourselves with and is able to imagine worlds and scenarios that often elude human designers. Ask AI what an alien would look like and it is likely to create something that we don’t understand; it could be completely alien to our understanding of living creatures. But that is a good thing, and there are other advantages too.
With its superior speed in analysing massive datasets, real-time personalisation, complex virtual content generation, and ability to construct more realistic and interactive settings than humans, AI is a game-changer when it comes to XR design. AI accomplishes this by analysing user actions to personalise stories, creating 3D objects from text commands, enhancing spatial audio for a more immersive experience, and developing natural user interfaces through gesture and speech recognition.
With that said, and acknowledging AI’s strengths in data processing and generation, XR design still relies heavily on human designers’ intuition, ethical judgement, and strategic forethought. We are not completely out of the loop (yet) but as AI develops further, the potential for it to become fully autonomous in its many fields of application – including game design and the development of immersive worlds – grows.
How AI Can Help Create Immersive Experiences
AI is already immensely powerful and is growing more capable by the day. It wasn’t that long ago that AI was being used as just another tool by XR designers, but it has now overtaken them to become a developer itself. But how exactly can AI enhance immersive experiences?
AI has the potential to improve virtual world design in a number of ways, including making worlds more dynamic and responsive, generating realistic graphics and content, tailoring experiences to individual users, and allowing Non-Playable Characters (NPCs) to be suitably intelligent and adaptive within an experience. Using real-time data processing and user interactions, AI can quickly learn and personalise worlds to the player’s style, including bespoke challenges and narratives, to create more immersive, engaging, and lifelike experiences in XR worlds. With AI, the same game could be a different experience for each player.
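To make the personalisation idea concrete, here is a minimal sketch of one common approach: adjusting challenge difficulty from a rolling window of recent player performance. All names and thresholds here are illustrative assumptions, not any particular engine’s API.

```python
# Sketch of AI-driven personalisation: nudge difficulty toward a "flow"
# target based on the player's recent success rate. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    recent_results: list = field(default_factory=list)  # True = success

    def record(self, success: bool) -> None:
        self.recent_results.append(success)
        self.recent_results = self.recent_results[-10:]  # rolling window

    @property
    def success_rate(self) -> float:
        if not self.recent_results:
            return 0.5  # neutral default before any data exists
        return sum(self.recent_results) / len(self.recent_results)

def next_challenge_difficulty(profile: PlayerProfile, current: float) -> float:
    """Raise difficulty when the player cruises, ease off when they struggle."""
    target, step = 0.7, 0.1  # assumed ~70% success "flow" target
    if profile.success_rate > target:
        current += step
    elif profile.success_rate < target - 0.2:
        current -= step
    return max(0.1, min(1.0, current))  # clamp to a sane range

profile = PlayerProfile()
for outcome in [True, True, True, False, True]:
    profile.record(outcome)
difficulty = next_challenge_difficulty(profile, current=0.5)
```

A production system would feed in far richer signals (movement patterns, dwell time, dialogue choices) and could drive narrative branching the same way, but the loop – observe, score, adapt – is the same.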
The Rise of Vibe Coding
But those central features are really only the tip of the AI iceberg. Earlier this year, AI researcher Andrej Karpathy coined a term for a new way of working with LLMs: ‘vibe coding’. This is a growing software development practice that uses artificial intelligence systems to generate functional code from natural language prompts, accelerating development and making app building more accessible. This is especially so for designers with only limited programming experience, who will see huge benefits from using this kind of ‘learning’ development tool. Vibe coding is relatively easy to use, and has two distinct levels of operation:
Code-level. The developer uses a conversational loop to create and perfect a specific piece of code. The specific steps for this are:
- Describe the goal: The user starts with a high-level prompt in plain language such as “Create a playable character”.
- The LLM generates the necessary code.
- The user runs the code and checks the result.
- The user provides feedback and refines the original code.
- The user keeps checking and refining until the LLM develops what is required from the prompt.
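The code-level loop above can be sketched in a few lines. The LLM call is stubbed out here – in practice it would be a request to a real model API – and the acceptance check is a toy placeholder standing in for the user running and inspecting the result.

```python
# Sketch of the code-level vibe-coding loop: describe -> generate ->
# run/check -> refine. call_llm is a hypothetical stand-in, not a real API.
def call_llm(prompt: str, previous_code: str = "") -> str:
    # Stub: pretend the model returns code derived from the prompt.
    return f"# code generated for: {prompt}\n" + previous_code

def vibe_code(goal: str, check, max_rounds: int = 5) -> str:
    """Iterate until the user's check accepts the generated code."""
    prompt, code = goal, ""
    for _ in range(max_rounds):
        code = call_llm(prompt, code)        # LLM generates the code
        ok, feedback = check(code)           # user runs it and inspects
        if ok:
            return code                      # result matches the goal
        prompt = f"{goal}. Fix: {feedback}"  # refine with user feedback
    return code

# Toy acceptance check: keep refining until the code mentions a jump action.
result = vibe_code(
    "Create a playable character",
    check=lambda c: ("jump" in c, "add a jump action"),
)
```

The point of the sketch is the shape of the interaction, not the stub: the human supplies intent and judgement, while the model supplies each candidate implementation.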
Application level. This broader process takes a high-level idea from concept to a fully deployed application. It also follows a number of iterative steps:
- Create the idea. The user describes the entire application they want in a single, high-level prompt in the appropriate tools.
- AI generation. The LLM generates the initial version of the full application, including the UI and file structure.
- Refinement. The user tests the code and uses follow-up prompts to add new features or change existing ones.
- Validation. The code is checked by a human for both completeness and functionality.
- Deployment. The code is judged to be a valuable asset and is released.
The emphasis in traditional programming is on the implementation details, such as writing out each language’s unique set of instructions, keywords, and punctuation. By using simple language like “create a playable character”, vibe coding frees the user to concentrate on the end result, while the AI takes care of the underlying code.
Using these highly structured processes, AI can generate reliably coded functions with only limited interaction with human developers. Of course, given time – which could be just a few months at the present rate of learning – AI could complete the entire task itself, removing the need for humans at all. That is a scary scenario, as the human element is lost, but it could be argued that we don’t need to hold the reins if the end product fits our needs.
Low Code Instead?
Many designers might feel uneasy about relinquishing all control to AI, but there is always the low-code option, where the developer still gets to make some of the choices and AI does the heavy lifting. Low-code platforms offer a library of straightforward pre-built modules and components that can be assembled a little like Lego, reducing the amount of time required for development. Drag-and-drop capabilities within user-friendly visual interfaces let developers design programs more quickly, which in turn speeds up the development cycle.
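The Lego analogy can be made concrete with a toy sketch: instead of hand-writing each feature, the developer picks pre-built components and supplies a little configuration. The component names and the rendering format below are entirely invented for illustration.

```python
# Toy sketch of the low-code idea: assemble an app from a library of
# pre-built components plus per-component configuration. Names are invented.
COMPONENTS = {
    "login_form":  lambda cfg: f"<LoginForm provider={cfg.get('provider', 'email')}>",
    "gallery":     lambda cfg: f"<Gallery columns={cfg.get('columns', 3)}>",
    "chat_widget": lambda cfg: "<ChatWidget>",
}

def build_app(layout):
    """Render an app from (component_name, config) pairs, Lego-style."""
    rendered = []
    for name, cfg in layout:
        if name not in COMPONENTS:
            raise ValueError(f"unknown component: {name}")
        rendered.append(COMPONENTS[name](cfg))
    return "\n".join(rendered)

# The developer still makes the choices; the platform supplies the parts.
app = build_app([
    ("login_form", {"provider": "google"}),
    ("gallery", {"columns": 4}),
])
```

Real low-code platforms hide even this much code behind a visual editor, but the division of labour is the same: humans choose and configure components, the platform generates the working implementation.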
Some argue that pick-and-place software is diluting the programming process, reducing the role of trained programmers to little more than overseers for AI. That may be the case, but if it leads to better apps and software, then what is the problem?
AI is a powerful tool, and its intelligence has the potential to make immersive worlds and XR experiences much deeper and more impactful. All we need to do is embrace it and let it take the lead.
Unity Developers are dedicated to the advancement of AI, but we also have the human touch, with a fantastic team of developers and artists. If you have an idea for an app, game, or XR development that you are struggling with, come and chat and see how we can help you.