Digital Gatekeepers: Platforms, Privacy, and the Price of Protection

New regulations aimed at protecting minors are encouraging platforms to tighten content controls, raising concerns about free speech and digital freedom. 

Recently, YouTube rolled out an AI system that estimates users’ ages and restricts access for anyone it determines to be under 18. The move is a response to the long-standing push to make the internet safer for minors. YouTube maintains this is the right approach to protecting children, though many users beg to differ.

The specifics remain vague: the only detail YouTube has provided about how the AI will decide who is and isn’t a minor is that it will rely on a “variety of signals,” including a user’s YouTube activity and how long the account has existed. Commenters have singled out this vagueness for criticism.

“You're being so insanely vague about the ‘variety of signals’. What type of ‘activity’? What does ‘longevity of the account’ even mean?” one comment from a user named Willow Mermaid said.

Another major issue raised by the policy is how age verification will work once an account is flagged by the AI. According to the announcement, flagged users will be asked to prove they are 18 or older by submitting a facial scan, a government ID, or a credit card. Many users criticize this as well, citing Google’s history of data breaches and the potential for identity theft if that information is leaked.

“Requiring adults to show proof of their age by revealing sensitive information that threatens their anonymity is beyond the pale and runs counter to my--and many others'--principles,” a comment made by a user named Adult YouTube User said.

This is part of a broader censorship problem online, and YouTube is not the only company tightening controls. TikTok and Spotify have recently explored similar ways to restrict content on their platforms for minors. While some companies frame these moves as child protection, payment processors have treated them as an opening to impose their own policies on other platforms: Visa, Mastercard, and PayPal have threatened to withdraw their payment options from Itch.io and Steam unless those storefronts remove certain games with controversial content such as nudity, violence, and drug use. That pressure has already cost many game developers their source of income.

“Itch.io hasn’t paid me in 30+ days — I lost my housing because of it,” user IndependentClub1117 said on Reddit.

This raises concerns about the future of the internet’s accessibility and freedom. At the current rate, the web may be entering a new age of censorship, with far less access to content that companies and governments deem unfit to share, and with creators forced to sanitize their work to comply with new policies.

“Let us be clear: censorship is cowardice. ... It masks corruption. It is a school of torture: it teaches and accustoms one to the use of force against an idea, to submit thought to an alien ‘other.’ But worst still, censorship destroys criticism, which is the essential ingredient of culture,” Nicaraguan writer Pablo Antonio Cuadra said, as quoted on A-Z Quotes.

With all these new rules and regulations enforced by AI, it’s not far-fetched to expect other companies and sites to mimic these changes in the future. The big question is whether these measures are truly necessary for the safety of children, or whether they are a disguised attempt to control and censor internet users and their ability to express themselves freely.
