Meta unveiled a suite of safety enhancements for teen accounts on Instagram this week, highlighting recent data showing the impact of its initiatives to shield younger users from harmful content. The company disclosed that over 635,000 Instagram accounts were removed earlier this year as part of broader efforts to curb risks for adolescents. Newly introduced tools allow teens to access safety advice, block or report accounts with a single click, and view when other users joined the platform — features Meta claims are tailored to minimize unwanted interactions and foster age-appropriate engagement.
In a statement, Meta emphasized its “sophisticated technology” for identifying exploitative content and its ongoing focus on preventing “direct and indirect harm” to young people. Recent statistics revealed that in June alone, teens blocked accounts one million times and reported them another one million times after encountering safety prompts. A nudity-blurring tool, launched in 2023, has been kept enabled by 99% of users, and over 40% of blurred images went unviewed in June. Warnings against forwarding suspected explicit content, introduced this year, dissuaded 45% of recipients from sharing such material in May.
Additional protections now extend to adult-managed accounts featuring minors. These include safeguards against recommendations to suspicious adult profiles and automated filtering of sexualized comments via the Hidden Words feature. Meta reported removing 135,000 accounts that sexualized children, plus 500,000 linked accounts, since refining these measures.
Critics, however, argue the steps are insufficient. Common Sense Media CEO James P. Steyer dismissed the updates as “too little, too late,” accusing Meta of prioritizing profit over youth safety for years. His remarks followed reports that Meta lobbied to delay the Kids Online Safety Act (KOSA), which resurfaced in Congress this year despite what Politico described as Meta’s “concerted campaign” against it. The company contends the bill threatens free speech, though opponents allege financial motives underpin its resistance.
The announcement coincides with Meta’s recent removal of 10 million fake Facebook profiles impersonating creators, part of a wider crackdown on inauthentic activity. While the latest policies signal intensified efforts to protect young users, they arrive amid ongoing scrutiny over the platform’s accountability and the effectiveness of self-regulation in a rapidly evolving digital landscape.