To protect underage users from harmful content, Instagram is introducing new restrictions for teen accounts. By default, users under 18 will only see content that adheres to the PG-13 movie rating, avoiding themes such as extreme violence, sexual nudity, and graphic drug use.
Teens under 18 will not be able to change this setting without a parent or guardian’s explicit approval.
Instagram is also introducing a stricter content filter setting, called “restricted content,” which, when turned on, will prevent teens from viewing or commenting on posts.
The company announced that starting next year, it will further restrict the types of chats that teens with the restricted content filter turned on can have with AI bots. It has already applied the new PG-13 content settings to AI conversations.

The move comes as chatbot makers such as OpenAI and Character.AI face lawsuits for allegedly harming users. OpenAI last month introduced new restrictions for ChatGPT users under 18 and said it was training its chatbots to refrain from “flirty conversations.” Earlier this year, Character.AI also added new restrictions and parental controls.
Instagram, which has built teen-safety tools across accounts, DMs, search, and content, is expanding its controls and restrictions on underage users in several areas. The social media service does not allow teenagers to follow accounts that share age-inappropriate content. If a teen already follows such an account, they will no longer be able to view or interact with that account’s content, and that account will not be able to view or interact with theirs. The company also removes such accounts from recommendations, making them harder to find.

The company also prevents teens from viewing age-inappropriate content linked in DMs.
Meta already restricts teen accounts from finding content related to eating disorders and self-harm. The company said it now also blocks search terms such as “alcohol” and “gore,” including common misspellings, so teens cannot work around the filters to find content in these categories.

The company said it is testing a new way for parents to use its monitoring tools to flag content that should not be recommended to young people. Flagged posts are sent to the company’s review team.
Instagram is rolling out these changes in the US, UK, Australia, and Canada starting today, and globally next year.