Meta and Teenage Privacy Updates
Meta rolled out a series of updates aimed at protecting teens on its Instagram and Facebook platforms. The updates reiterate Meta’s ongoing commitment to younger users and address concerns about online safety and privacy that have come to the forefront.
One major update concerns content control and moderation. Meta implemented controls on what it has defined as high-risk content for teens. Its moderation tools use a combination of A.I. and human reviewers to identify this type of content and limit teenage users’ exposure to it, including the proactive detection and removal of content that violates community standards.
The updates also included changes to privacy settings for teen accounts, meaning any user between the ages of 13 and 17. Meta automatically applies the most restrictive content controls to accounts in this age group. This isn’t just for new accounts: Meta rolled the change out to all teen accounts, so profiles are set to private by default.
The aim is to limit exposure to online risks and predators and to ensure a safer social media experience.
Additionally, improved age verification methods are now in place to more accurately determine users’ ages. These methods are powered by A.I. and will help enforce content moderation and restrictions.
New parental control features give parents and guardians more oversight of teen social media usage. The tools monitor activity, including searches, comments, and the type of content a teen is viewing. They also allow parents to set usage limits, review friends lists and friend requests, and view messaging.