
Despite Flurry of New Safety Features, Social Media Platforms Still Not Doing Enough to Protect Children, CDD Report Finds

Jeff Chester

Government Needs to Step Up Its Efforts to Provide Meaningful and Effective Regulation.

Under intensifying pressure from Congress and the public, top social media platforms popular with young people – Instagram, Snapchat, TikTok, Twitch, and YouTube – have launched dozens of new safety features for children and teens in the last year, according to a report from the Center for Digital Democracy (CDD). Researchers at CDD conducted an analysis of tech industry strategies to head off regulation in the wake of the 2021 Facebook whistleblower revelations and the rising tide of public criticism, Congressional hearings, and pressures from abroad. These companies have introduced a spate of new tools, default navigation systems, and AI software aimed at increasing safeguards against child sexual abuse material, problematic content, and disinformation, the report found.

But tech platforms have been careful not to allow any new safety systems to interfere significantly with the advertising practices and business models that target the lucrative youth demographic. As a consequence, while industry spokespersons tout their concern for children, “their efforts to establish safeguards are, at best, fragmented and conflicted,” the report concludes. “Most of the operations inside these social media companies remain hidden from public view, leaving many questions about how the various safety protocols and teen-friendly policies actually function.”

The report suggests that more attention should also be paid to advertisers, which have become a much more powerful and influential force in the tech industry in recent years. Researchers offer a detailed description of the industry’s “brand safety” system – an “expanding infrastructure of specialized companies, technological tools, software systems, and global consortia that now operate at the heart of the digital economy, creating a highly sophisticated surveillance system that can determine instantaneously which content can be monetized and which cannot.” This system, which was set up to protect advertisers from having their ads associated with problematic content, could do much more to ensure better protections for children.

“The most effective way to ensure greater accountability and more meaningful transparency by the tech industry,” the authors argue, “is through stronger public policies.” Pointing out that the protection of children online remains a strong bipartisan issue, researchers identify a number of current legislative vehicles and regulatory proceedings – including bills that are likely to be reintroduced in the next Congress – that could provide more comprehensive protections for young people and rein in some of the immense power of the tech industry. “Tech policies in the U.S. have traditionally followed a narrow, piecemeal approach to addressing children’s needs in the online environment,” the authors note, “providing limited safeguards for only the youngest children, and failing to take into account the holistic nature of young people’s engagement with the digital media environment.” What is needed is a more integrated approach that protects privacy for both children and teens, along with safeguards that cover advertising, commercial surveillance, and child safety.

Finally, the report calls for a strategic campaign that brings together the diverse constituencies working on behalf of youth in the online media environment. “Because the impacts of digital technologies on children are so widespread, efforts should also be made to broaden the coalition of organizations that have traditionally fought for children’s interests in the digital media to include groups representing the environment, civil rights, health, education, and other key stakeholder communities.”