Instagram, a platform at the heart of youth culture, is rolling out a significant wave of updates to protect its youngest users. Announced on July 23, 2025, the new safeguards respond directly to growing concerns about online safety for children and teenagers, particularly around “kidfluencers” and family vloggers. Meta’s official statement says the updates are designed to “protect young users and give parents more control,” marking a proactive shift toward a safer digital environment.
Protecting Child-Focused Accounts with Stricter DM Settings
For accounts run by adults that prominently feature children, such as those managed by parents or talent managers, Instagram is now enforcing its most stringent direct message (DM) settings, a crucial step toward keeping unwanted and potentially harmful messages from reaching these accounts. The platform’s “Hidden Words” feature, which automatically filters offensive comments, is now also enabled by default. This pre-emptive measure aims to create a more positive interaction space and directly addresses the criticisms and risks associated with sharing children’s lives online.
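To make the idea concrete, here is a minimal Python sketch of how a default-on comment filter and a strictest-tier DM setting could work in principle. Everything here, from the primarily_features_children flag to the no_unconnected_senders tier and the word list, is a hypothetical stand-in; Meta has not published its actual implementation.

```python
# Minimal sketch only: all names and the word list below are hypothetical,
# not Meta's implementation.

OFFENSIVE_TERMS = {"badword1", "badword2"}  # placeholder filter list

def is_child_focused(account: dict) -> bool:
    """Hypothetical platform flag for adult-run accounts that
    primarily feature children."""
    return account.get("primarily_features_children", False)

def apply_default_protections(account: dict) -> dict:
    """Enforce the strictest DM tier and turn on comment filtering
    by default for child-focused accounts."""
    if is_child_focused(account):
        account["dm_setting"] = "no_unconnected_senders"  # assumed name for strictest tier
        account["hidden_words_enabled"] = True
    return account

def should_hide_comment(account: dict, comment: str) -> bool:
    """Return True when a filtered term appears in the comment."""
    if not account.get("hidden_words_enabled"):
        return False
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & OFFENSIVE_TERMS)

acct = apply_default_protections({"primarily_features_children": True})
print(should_hide_comment(acct, "wow badword1!"))  # True: the comment is hidden
```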
New Safety Tools for Teen Accounts
In parallel with these protections, Instagram is also arming teen users with a suite of new safety tools. These features are designed to empower teens to make safer choices in their online interactions (a brief illustrative sketch follows the list):
- Safety Prompts: Teens will now see prompts encouraging them to verify new profiles and to be cautious about sharing personal information.
- Combined Block-and-Report: A new, streamlined option allows teens to block and report suspicious accounts in a single, swift action.
- Account Transparency: Chats will now display the account creation date, providing helpful context for spotting potential scammers or fake profiles.
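Here is the sketch referenced above: a simplified model of the one-tap block-and-report action and the chat header showing account age. The class and field names are invented for illustration and are not Instagram’s API.

```python
from datetime import date

class SuspiciousAccountActions:
    """Illustrative only: tracks the combined block-and-report action."""

    def __init__(self) -> None:
        self.blocked: set[str] = set()
        self.reported: set[str] = set()

    def block_and_report(self, account_id: str) -> None:
        # One tap performs both steps, so neither is skipped.
        self.blocked.add(account_id)
        self.reported.add(account_id)

def chat_header(username: str, created: date) -> str:
    """Surface when the other account joined, as context against fresh fakes."""
    return f"{username} · joined {created:%B %Y}"

actions = SuspiciousAccountActions()
actions.block_and_report("new_account_123")
print(chat_header("new_account_123", date(2025, 6, 1)))  # new_account_123 · joined June 2025
```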
These updates build on existing features like the nudity protection filter, which 99% of users, including teens, have kept enabled, reflecting a widespread desire for a safer online space.
Meta’s Proactive Crackdown on Harmful Accounts
Meta is reinforcing its child safety policies with a firm and aggressive stance. In 2025 alone, the company has removed nearly 135,000 Instagram accounts for sexualizing content featuring children, along with roughly 500,000 associated accounts across Instagram and Facebook. Meta’s commitment is clear: “We’re using advanced technology to identify and restrict harmful accounts before they can cause harm.” That technology also pre-emptively restricts accounts that teens have previously blocked or reported, cutting off dangerous interactions before they start.
“While these accounts are overwhelmingly used in benign ways, unfortunately, there are people who may try to abuse them,” Meta stated. “Today we’re announcing steps to help prevent this abuse.”
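Conceptually, the restriction described above could be modeled as a simple counter over teen blocks and reports, as in the sketch below. The TEEN_REPORT_THRESHOLD cutoff and all function names are assumptions, not Meta’s published logic.

```python
from collections import Counter

TEEN_REPORT_THRESHOLD = 3  # hypothetical cutoff, not a published number

teen_flags: Counter[str] = Counter()  # account_id -> times blocked/reported by teens

def record_teen_block_or_report(account_id: str) -> None:
    teen_flags[account_id] += 1

def can_message_teens(sender_id: str) -> bool:
    """Deny DMs to teen accounts from senders teens have repeatedly flagged."""
    return teen_flags[sender_id] < TEEN_REPORT_THRESHOLD

for _ in range(3):
    record_teen_block_or_report("repeat_offender")
print(can_message_teens("repeat_offender"))   # False: restricted pre-emptively
print(can_message_teens("first_time_user"))   # True
```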
Empowering Parents with Greater Supervision
The new updates also provide parents with enhanced oversight tools, fostering a collaborative approach to online safety. While these tools respect teen privacy by not letting parents read messages, they offer meaningful insights (a simplified sketch follows the list):
- Time Management: Parents can now set daily time limits for app usage and block access during specific times, such as at night.
- Activity Oversight: The tools allow parents to see who their teens have recently messaged and what topics they are exploring, enabling informed conversations about online behavior.
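As noted above, here is a simplified sketch of those two supervision rules, a daily time cap plus a nightly blackout window. The SupervisionSettings class and its example values are hypothetical.

```python
from datetime import time, timedelta

class SupervisionSettings:
    """Illustrative only: a daily cap plus a nightly blackout window."""

    def __init__(self, daily_limit: timedelta, block_start: time, block_end: time) -> None:
        self.daily_limit = daily_limit
        self.block_start = block_start  # e.g. 22:00
        self.block_end = block_end      # e.g. 07:00 the next morning

    def is_blocked(self, now: time, used_today: timedelta) -> bool:
        if used_today >= self.daily_limit:
            return True  # daily time limit reached
        # The window wraps past midnight (22:00 through 07:00).
        return now >= self.block_start or now < self.block_end

settings = SupervisionSettings(timedelta(hours=2), time(22, 0), time(7, 0))
print(settings.is_blocked(time(23, 30), timedelta(hours=1)))            # True: nighttime block
print(settings.is_blocked(time(15, 0), timedelta(hours=2, minutes=5)))  # True: over the cap
print(settings.is_blocked(time(15, 0), timedelta(minutes=45)))          # False
```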
These features are a natural extension of Instagram’s commitment to Teen Accounts, which already have restrictive settings in place to limit sensitive content and interactions by default.
Balancing Safety and Freedom: The Evolving Digital Landscape
These changes are not happening in a vacuum. They come amid mounting legal and public pressure, including lawsuits from several U.S. states accusing Meta of prioritizing engagement over user safety. While critics argue that more comprehensive regulations are still needed, these proactive measures, from age verification technology to stricter account settings, demonstrate Instagram’s evolving response to the complex challenges of digital safety. The updates aim to strike a better balance between empowering users and ensuring a secure environment for its younger demographic.
Conclusion
Instagram’s new safety features for accounts featuring children and teens mark a significant step toward a safer online experience. By implementing stricter messaging controls, enhancing parental supervision, and cracking down on harmful accounts, Meta is addressing critical safety concerns. Stay informed about the latest tech developments by following our blog for more updates on social media safety and innovation.