Insights From The Blog
Protecting the Young: Addressing Children’s Privacy in the Immersive Metaverse
Online safety has been an issue almost since the inception of the Internet, but as the power of this unique medium grows to incorporate both AI and XR, the problem has become even greater. With mobile phones now seen as essential by almost anyone over eight years old, and powerful XR headsets relatively inexpensive, getting online has never been easier, but is that a good thing? To paraphrase Spider-Man, with great power comes great responsibility, but are we as a society responsible enough to ensure that the vulnerable among us are protected?
In this article we look at the risks that the Internet, XR, and AI in the Metaverse pose for vulnerable people, ask whether the controls we have in place are sufficient, and consider what more can be done.
Safeguarding the Internet
The main problem with safeguarding the Internet is that we didn’t really know what we were getting in the first place, and when it became available, it grew at a staggering rate and with no real regulation. It was a completely unknown quantity and no one really knew how to deal with it, let alone police it – if that was even possible.
The Internet has grown phenomenally, from a niche research tool to a worldwide infrastructure that touches almost every area of modern life. Initially, growth was gradual, but it skyrocketed in the 1990s with the introduction of the World Wide Web. Today, billions of people use the Internet every day, and its impact is growing.
The Internet began as ARPANET, a US Department of Defense experiment in the late 1960s. For the first few decades, it was mostly used by a small community of researchers and academics, connecting a modest number of computers. Then Tim Berners-Lee created what we now call the World Wide Web, which introduced a user-friendly interface with hyperlinks, making the Internet accessible to a far larger audience.
The 1990s saw a surge in Internet usage, fuelled by user-friendly browsers such as Netscape, the rise of e-commerce, and the introduction of search engines such as Yahoo!. Cheaper micro-electronics meant that personal computers grew increasingly prevalent, and broadband Internet connections became widespread, accelerating growth.
Within a few years, we went from almost nothing to massed access to any information – including images – that we could imagine. And it was almost completely unregulated; anyone with the hardware could place whatever they wanted on the Internet, and anyone could download it. With humans being humans, that meant there was some pretty unsavoury stuff to be had.
However, since large portals and search providers like MSN, Google, and Yahoo began channelling content through their controlled servers, some of the more ribald material has been filtered out, leaving it unavailable to anyone without the knowledge to reach the so-called Dark Web. But that doesn’t mean that the connected world is all knowledge, music downloads, and pictures of funny cats.
The latest estimates are that there are around 25 million websites that deal with ‘adult’ themes, representing around 2.5% of overall web content. However, these attract a significant portion of Internet activity from users. It is estimated that, in the United States alone, every day:
- 508 feature-length pornographic videos are created.
- 2.5 billion emails containing porn are sent or received.
- 68 million search queries related to pornography, around 25% of total searches, are generated.
- 116,000 queries related to child pornography are received.
Those numbers are worrying enough, but they are not static figures and are rising every year. In response, legislation such as COPPA in the US and the GDPR in Europe and the UK has been introduced, but none of it has been seen as totally effective.
But protecting young people isn’t just about preventing access to mainstream pornography alone. Recent research indicates a direct correlation between self-harm among adolescents and their Internet usage. Research conducted by Cardiff University has revealed that many young people not only utilise social media to disseminate photos of self-harm, but some also incorporate these images into their own self-harming practices. The UK possesses one of the highest self-harm rates in Europe, with an estimated one in every 130 individuals having intentionally inflicted harm upon themselves at some stage.
Dr Nina Jacob, who carried out the qualitative research, says: “We found that the picture or image of self-harm suddenly becomes part of their ritual almost, so they seek out that picture to almost get a high or a trigger and then they will go off and self-harm.”
Preventing exposure to these kinds of harmful images is just as important as preventing children from accessing pornography, and any measures must be holistic if they are to be effective.
Ease of Access to Unsuitable Materials
According to the UK Parliament, nine in 10 children own a mobile phone by the time they reach the age of 11, and three-quarters of social media users aged between eight and 17 have their own account or profile on at least one of the large platforms. Outside of business, children are the largest single group of Internet users, and with few controls in place, they have the means to routinely access hard pornography.
Plainly, children need to be protected, but who does that, and how, is a major concern. The answer to the “who” might seem obvious: it is a parental responsibility to prevent children from accessing damaging online material. But this falls down on two fronts: first, many parents are unable to properly control their children’s device use; and secondly, it is a mammoth task to exert that control when the child is away from the adult, such as at school or out with friends.
There are significant cultural divides when it comes to parenting: Eastern families typically exercise stricter control over their children than Western ones. Social research suggests that many Western parents try to be friends with their children rather than acting as the setter of standards, while Eastern families tend to place greater weight on respect. This difference may be one reason why Western children have less fear of accessing unsuitable materials, even when told not to.
Stemming the Flow
So, what’s to be done? As XR devices grow in popularity and become cheaper to own, they become another way for people to access unsuitable and pornographic materials. In the past several years, the market for virtual reality adult content has undergone significant expansion, reaching a value of around $716 million in 2020. Forecasts suggest the industry could reach a massive $19 billion by the end of 2026, with the growth likely driven by the increased availability of XR devices that can access the Metaverse.
To satisfy the ever-increasing demand, numerous adult entertainment firms have invested in virtual content development, resulting in more VR adult content than ever before. This is driven by demand: Virtual Reality pornography consumers spend more time engrossed in VR encounters than in conventional, two-dimensional forms such as video or plain old images.
As XR becomes the dominant form of virtual experience, developers will undoubtedly seek out increasingly realistic environments, and these will be far more damaging to young minds than simple 2D pictures or videos. If you thought it was bad now, it’s only going to get worse.
But technology can be used to help protect the vulnerable from harmful experiences, and the market to do that is also growing. Many see the means to doing this as a two-pronged approach, with age checks and blockers being brought into force. On 25th July 2025, the UK Government required registered adult sites to verify users’ ages, for example through credit card ownership (something you must be 18 or older to have). Market analysis showed that visits to adult content sites from the UK dropped by as much as 47% overnight; however, there is no suggestion that this was all under-18s, as the drop would also include adults who don’t own a credit card, along with credit card owners who don’t want mention of such sites appearing on their monthly statements.
In addition, Ofcom, the communications watchdog in the United Kingdom, has supported a variety of age assurance measures. These include AI-driven facial age estimation, which determines a person’s likely age from a live photo or video; checks through a person’s credit card provider, bank, or mobile network operator; and photo ID matching. With photo ID matching, an individual who wants access to adult content can have a passport or similar identification compared against a selfie, potentially eliminating the need for the credit card check.
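To make the idea concrete, here is a minimal sketch of how a service might combine several age-assurance signals of the kind described above. Everything here is an assumption for illustration: the signal names, the 18+ threshold logic, and the safety margin applied to facial age estimates (which carry an error of several years either way) are not taken from any real Ofcom-approved implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    """Hypothetical bundle of age-assurance evidence for one user."""
    facial_age_estimate: Optional[float] = None  # years, from an AI estimator
    has_verified_credit_card: bool = False       # UK credit cards require 18+
    photo_id_age: Optional[int] = None           # age derived from a passport/ID match

def is_adult(signals: AgeSignals, facial_margin: float = 5.0) -> bool:
    """Grant access only when at least one signal confidently indicates 18+.

    A margin is added to the facial estimate so that borderline estimates
    (e.g. 20) fail the check rather than risk admitting a minor.
    """
    if signals.has_verified_credit_card:
        return True
    if signals.photo_id_age is not None and signals.photo_id_age >= 18:
        return True
    if (signals.facial_age_estimate is not None
            and signals.facial_age_estimate >= 18 + facial_margin):
        return True
    return False
```

Note the asymmetry in the design: hard evidence (credit card, photo ID) passes at exactly 18, while the statistical facial estimate must clear 18 plus a margin, so `is_adult(AgeSignals(facial_age_estimate=20.0))` is rejected even though the estimate is nominally over 18.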
In a world first, smartphone manufacturer HMD has developed a handset that uses AI to identify and block harmful images. The AI can also identify degrading or adult-content images that the child themselves may take, and block those from being sent on, too. Unlike traditional parental controls, which block access to entire websites and can be evaded, the SafeToNet technology, developed by UK AI specialists, cannot be turned off since it is embedded in the operating system. It allows a child to continue using social media or other platforms while blocking hazardous content. The AI also applies to the phone’s camera and video, stopping users from taking explicit “selfies” or sharing photographs with a predator or pals.
The phone’s AI technology, known as HarmBlock, has been trained to prevent children from viewing adult content, and it will be expanded to cover “gore”, severe violence, self-harm, and even suicide content. The system operates across the camera and has been ethically trained on 22 million explicit and harmful images, making it, its developers claim, the first child-safety solution that cannot be bypassed.
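The key design idea behind an on-device filter like this is that the check sits inside the image pipeline itself, so the camera, gallery, and sharing paths all pass through it and there is no user-facing switch to disable it. The sketch below illustrates only that decision logic; the classifier is a stand-in stub, and the threshold value is an assumption. It is not how HarmBlock itself is implemented.

```python
from typing import Callable

# Assumed threshold for illustration; a real system would tune this carefully.
BLOCK_THRESHOLD = 0.8

def filter_image(image_bytes: bytes,
                 classify: Callable[[bytes], float]) -> bool:
    """Return True if the image may pass, False if it must be blocked.

    `classify` returns a harm probability in [0, 1]. Because every path
    that handles an image (display, save, send) calls this function,
    blocking cannot be sidestepped at the app level.
    """
    return classify(image_bytes) < BLOCK_THRESHOLD

def stub_classifier(image_bytes: bytes) -> float:
    """Stand-in for a trained model: flags a marker string as 'harmful'."""
    return 0.95 if b"harmful" in image_bytes else 0.05
```

In a real deployment the stub would be replaced by a trained vision model running in the OS layer, but the gating logic, score the content, then refuse to pass anything above the threshold onward, stays the same.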
AI to the Rescue?
XR and the Metaverse linked to AI are incredibly powerful tools, but as they become more mainstream, the potential for them to be used in the creation of unsavoury materials that can be accessed by children grows. Age verification via official documentation is one method of protecting the young, but it isn’t perfect, and could even be circumvented by adults who deliberately allow children access to adult content. AI tools that automatically block harmful images are much more promising, and have the potential to prevent any salacious sidestep by unscrupulous adults. Check back to these pages regularly to see how that kind of technology is progressing.
If you have an idea for an XR experience but don’t know how to take it forward, why not chat to us at Unity Developers and see how we can help you make your virtual dreams become reality.