CDD

Newsroom

  • To watch the full FTC Dark Patterns Workshop online, visit the FTC website.
  • Contact: Jeff Chester, CDD, jeff@democraticmedia.org, 202-494-7100; David Monahan, CCFC, david@commercialfreechildhood.org

Advocates say Google Play continues to disregard children’s privacy law and urge FTC to act

BOSTON, MA and WASHINGTON, DC — March 31, 2021 — Today, advocacy groups Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) called on the Federal Trade Commission (FTC) to investigate Google’s promotion of apps which violate the Children’s Online Privacy Protection Act (COPPA). In December 2018, CCFC and CDD led a coalition of 22 consumer and health advocacy groups in asking the FTC to investigate these same practices. Since then, Google has made changes to the Play Store, but the advocates say these changes fail to address the core problem: Google is certifying as safe and appropriate for children apps that violate COPPA and put children at risk. Recent studies found that a significant number of apps in Google Play violate COPPA by collecting and sharing children’s personal information without getting parental consent. For instance, a JAMA Pediatrics study found that 67% of apps used by children aged 5 and under were transmitting personal identifiers to third parties.

Comment of Angela Campbell, Chair of the Board of Directors, Campaign for a Commercial-Free Childhood; Professor Emeritus, Communications & Technology Law Clinic, Georgetown University Law Center:

“Parents reasonably expect that Google Play Store apps designated as ‘Teacher approved’ or appropriate for children under age 13 comply with the law protecting children’s privacy. But far too often, that is not the case. The FTC failed to act when this problem was brought to its attention over two years ago. Because children today are spending even more time using mobile apps, the FTC must hold Google accountable for violating children’s privacy.”

Comment of Jeff Chester, Executive Director of the Center for Digital Democracy:

“The Federal Trade Commission must swiftly act to stop Google’s ongoing disregard of the privacy and well-being of children. For too long, the Commission has allowed Google’s app store, and the data marketing practices that are its foundation, to operate without enforcing the federal law that is designed to protect young people under 13. With children using apps more than ever as a consequence of the pandemic, the FTC should enforce the law and ensure Google engages with kids and families in a responsible manner.”

###
  • Contextual Advertising—Now Driven by AI and Machine Learning—Requires Regulatory Review for Privacy and Marketing Fairness

What’s known as contextual advertising is receiving a big boost from marketers and some policymakers, who claim that it provides a more privacy-friendly alternative to the dominant global surveillance-based “behavioral” marketing model. Google’s plans to eliminate cookies and other third-party trackers used for much of online ad delivery are also spurring greater interest in contextual marketing, which is being touted especially as safe for children.

Until several years ago, contextual ads meant that you would see an ad based on the content of the page you were on—so there might be ads for restaurants on web pages about food, or cars would be pitched if you were reading about road trips. The ad tech involved was basic: keywords found on the page would help trigger an ad.

Today’s version—what’s called “contextual intelligence,” “Contextual 2.0,” or Google’s “Advanced Contextual”—is distinct. Contextual marketing now uses artificial intelligence (AI) and machine learning technologies, including computer vision and natural language processing, to provide “targeting precision.” AI-based techniques, the industry explains, allow marketers to read “between the lines” of online content. Contextual advertising is now capable of comprehending “the holistic and subtle meaning of all text and imagery,” enabling predictions and decisions on ad design and placement by “leveraging deep neural networks” and “proprietary data sets.” AI is used to decipher the meaning of visuals “on a massive scale, enabling advertisers to create much more sophisticated links between the content and the advertising.” Computer vision technologies identify every visual element, and “natural language processing” minutely classifies all the concepts found on each page. Millions of “rules” are applied in an instant, using software that helps advertisers take advantage of the “multiple meanings” that may be found on a page.

For example, one leading contextual marketing company, GumGum, explains that its “Verity” algorithmic and AI-based service “combines natural language processing with computer vision technology to execute a multi-layered reading process. First, it finds the meat of the article on the page, which means differentiating it from any sidebar and header ads. Next, it parses the body text, headlines, image captions with natural language processing; at the same time, it uses computer vision to parse the main visuals.… [and then] blends its textual and visual analysis into one cohesive report, which it then sends off to an adserver,” which determines whether “Verity’s report on a given page matches its advertisers campaign criteria.”

Machine learning also enables contextual intelligence services to make predictions about the best ways to structure and place marketing content, taking advantage of real-time events and the ways consumers interact with content. It enables segmentation of audience targets to be fine-tuned. It also incorporates a number of traditional behavioral marketing concepts, gathering a range of data “signals” that ensure more effective targeting. A toy sketch of this kind of page-analysis pipeline follows.
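The sketch below is a minimal, purely illustrative rendering of the multi-layered page analysis described above: hypothetical NLP and computer-vision concept scores are blended into one report and matched against an advertiser’s campaign criteria. All class and function names are invented for illustration; this is not GumGum’s actual Verity API.

```python
# Illustrative sketch of a "Contextual 2.0" page-analysis pipeline.
# Names and thresholds are hypothetical, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class PageAnalysis:
    """Unified report a contextual engine might send to an ad server."""
    url: str
    text_concepts: dict[str, float]    # concept -> confidence from NLP
    visual_concepts: dict[str, float]  # concept -> confidence from computer vision

    def merged(self) -> dict[str, float]:
        # Blend textual and visual signals into one concept profile.
        merged = dict(self.text_concepts)
        for concept, score in self.visual_concepts.items():
            merged[concept] = max(merged.get(concept, 0.0), score)
        return merged

def matches_campaign(report: PageAnalysis,
                     wanted: set[str],
                     blocked: set[str],
                     threshold: float = 0.6) -> bool:
    """Ad-server side: does this page fit the advertiser's criteria?"""
    concepts = {c for c, s in report.merged().items() if s >= threshold}
    return bool(concepts & wanted) and not (concepts & blocked)

# Example: a page about road trips, with car imagery detected.
report = PageAnalysis(
    url="https://example.com/road-trips",
    text_concepts={"travel": 0.9, "automotive": 0.7},
    visual_concepts={"automotive": 0.8, "scenery": 0.6},
)
print(matches_campaign(report, wanted={"automotive"}, blocked={"accidents"}))  # True
```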
There are also advanced measurement technologies and custom methods to influence what marketers term our “customer journey,” structuring ad-buying in ways similar to behavioral, data-driven approaches, as “bids” are made to target—and retarget—the most desirable people. And, of course, once the contextual ad “works” and people interact with it, additional personal and other information is then gathered.

Contextual advertising, estimated to generate $412 billion in spending by 2025, requires a thorough review by the FTC and data regulators. Regulators, privacy advocates and others must carefully examine how the AI and machine-learning marketing systems operate, including for Contextual 2.0. We should not accept marketers’ claims that it is innocuous and privacy-appropriate. We need to pull back the digital curtain and carefully examine the data and impact of contextual systems.
    Jeff Chester
  • The Whole World Will Still Be Watching You: Google & Digital Marketing Industry “Death-of-the-Cookie” Privacy Initiatives Require Scrutiny from Public Policymakers

Jeff Chester

One would think, in listening to the language used by Google, Facebook, and other ad and data companies to discuss the construction and future of privacy protection, that they are playing some kind of word game. We hear terms such as “TURTLEDOVE,” “FLEDGE,” “SPARROW” and “FLoC.”(1) These initiatives should be viewed with skepticism, however. Although some reports make it appear that Google and its online marketing compatriots propose to reduce data gathering and tracking, we believe that their primary goal is still to perfect the vast surveillance system they’ve well established.

A major data marketing industry effort is now underway to eliminate—or diminish—the role of the tracking software known as “third-party” cookies. Cookies were developed in the very earliest days of the commercial World Wide Web, and have served as the foundational digital tether connecting us to a sprawling and sophisticated data-mining complex. Through cookies—and later mobile device IDs and other “persistent” identifiers—Google, Facebook, Amazon, Coca-Cola and practically everyone else have been able to surveil and target us—and our communities. Tracking cookies have literally helped engineer a “sweet spot” for online marketers, enabling them to embed spies into our web browsers, which help them understand our digital behaviors and activities and then take action based on that knowledge.

Some of these trackers—placed and used by a myriad of data marketing companies on various websites—are referred to as “third-party” cookies, to distinguish them from what online marketers claim, with a straight face, are more acceptable forms of tracking software—known as “first-party” cookies. According to the tortured online advertiser explanation, “first-party” trackers are placed by websites on which you have affirmatively given permission to be tracked while you are on that site. These “we-have-your-permission-to-use” first-party cookies would increasingly become the foundation for advances in digital tracking and targeting. Please raise your hand if you believe you have informed Google or Amazon, to cite the two most egregious examples, that they can surveil what you do via these first-party cookies, including engaging in an analysis of your actions, background, interests and more.

What the online ad business has developed behind its digital curtain—such as various ways to trigger your response, measure your emotions, knit together information on device use, and employ machine learning to predict your behaviors (just to name a few of the methods currently in use)—has played a fundamental role in personal data gathering. Yet these and other practices—which have an enormous impact on privacy, autonomy, fairness, and so many other aspects of our lives—will not be affected by the “death-of-the-cookie” transition currently underway. On the contrary, we believe there is a case to be made that the opposite is true. Rather than strengthening data safeguards, we are seeing unaccountable platforms such as Google actually becoming more dominant, as so-called “privacy preserving” systems actually enable enhanced data profiling.
In a moment, we will briefly discuss some of the leading online marketing industry work underway to redefine privacy. But the motivation for this post is to sound the alarm that we should not—once again—allow powerful commercial interests to determine the evolving structure of our online lives. The digital data industry has no serious track record of protecting the public. Indeed, it was the failure of regulators to rein in this industry over the years that led to the current crisis. In the process, the growth of hate speech, the explosion of disinformation, and the highly concentrated control over online communications and commerce—to name only a few—now pose serious challenges to the fate of democracies worldwide. Google, Facebook and the others should never be relied on to defer their principal pursuit of monetization out of respect to any democratic ideal—let alone consumer protection and privacy.

One clue to the likely end result of the current industry effort is to see how they frame it. It isn’t about democracy, the end of commercial surveillance, or strengthening human rights. It’s about how best to preserve what they call the “Open Internet.” Some leading data marketers believe we have all consented to a trade-off: that in exchange for “free” content we’ve agreed to a pact enabling them to eavesdrop on everything we do—and then make all that information available to anyone who can pay for it—primarily advertisers. Despite its rhetoric about curbing tracking cookies, the online marketing business intends to continue to colonize our devices and monitor our online experiences. This debate, then, is really about who can decide—and under what terms—the fate of the Internet’s architecture, including how it operationalizes privacy—at least in the U.S. It illustrates questions that deserve a better answer than the “industry-knows-best” approach we have allowed for far too long. That’s why we call on the Biden Administration, the Federal Trade Commission (FTC) and the Congress to investigate these proposed new approaches for data use, and ensure that the result is truly privacy protective, supporting democratic governance and incorporating mechanisms of oversight and accountability.

Here’s a brief review of some of the key developments, which illustrate the digital “tug-of-war” ensuing over the several industry proposals involving cookies and tracking. In 2019, Google announced that it would end the role of what’s known as “third-party cookies.” Google has created a “privacy sandbox” where it has researched various methods it claims will protect privacy, especially for people who rely on its Chrome browser. It is exploring “ways in which a browser can group together people with similar browsing habits, so that ad tech companies can observe the habits of large groups instead of the activity of individuals. Ad targeting could then be partly based on what group the person falls into.” This is its “Federated Learning of Cohorts” (FLoC) approach, where people are placed into “clusters” based on the use of “machine learning algorithms” that analyze the data generated from the sites a person visited and their content. Google says these clusters would “each represent thousands of people,” and that the “input features” used to generate the targeting algorithm, such as our “web history,” would be stored on our browsers. A toy sketch of this cohort idea follows.
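Below is a minimal, illustrative sketch of that cohort idea: a locality-sensitive hash condenses a browsing history into a short cohort ID shared by many users, so ad tech observes the cohort rather than the individual. Chrome’s FLoC origin trial reportedly used a SimHash of visited domains; the parameters and names here are hypothetical, and Google’s actual implementation differs.

```python
# Toy SimHash-style cohort assignment: similar browsing histories tend to
# land in the same small cohort ID. Illustrative only, not Chrome's code.
import hashlib

COHORT_BITS = 8  # fewer bits -> fewer, larger, more "anonymous" cohorts

def simhash_cohort(domains: list[str], bits: int = COHORT_BITS) -> int:
    """Locality-sensitive hash of a domain-visit history."""
    counts = [0] * bits
    for domain in domains:
        digest = hashlib.sha256(domain.encode()).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            counts[i] += 1 if bit else -1
    cohort = 0
    for i, c in enumerate(counts):
        if c > 0:
            cohort |= 1 << i
    return cohort

# Two users with largely overlapping histories often share a cohort,
# which is exactly what makes the cohort itself a targeting signal.
print(simhash_cohort(["news.example", "cars.example", "travel.example"]))
print(simhash_cohort(["news.example", "cars.example", "recipes.example"]))
```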
There would be other techniques deployed to add “noise” to the data sets and engage in various “anonymization methods” so that the exposure of a person’s individual information is limited. Google’s TURTLEDOVE initiative is designed to enable more personalized targeting, where web browsers will be used to help ensure our data is available for the real-time auctions that sell us to advertisers. The theory is that by allowing the data to remain within our devices, as well as by using clusters of people for targeting, our privacy is protected. But the underlying goal—to have sufficient data and effective digital marketing techniques—is still at the heart of the process. Google recently reported that “FLoC can provide an effective replacement signal for third-party cookies. Our tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising.”

Google’s 2019 announcement caused an uproar in the digital marketing business. It was also perceived (correctly, in my view) as a Google power grab. Google operates basically as a “Walled Garden” and has so much data that it doesn’t really need third-party cookies to home in on its targets. The potential “death of the cookie” ignited a number of initiatives from the Interactive Advertising Bureau, as well as from competitors and major advertisers, who feared that Google’s plan would undermine their lucrative business model. They include such groups as the Partnership for Responsible Addressable Media (PRAM), whose 400 members include Mastercard, Comcast/NBCU, P&G, the Association of National Advertisers, IAB and other ad and data companies. PRAM issued a request to review proposals that would ensure the data marketing industry continues to thrive, but could be less reliant on third-party cookies. Leading online marketing company The Trade Desk is playing a key role here. It submitted its “Unified ID 2.0” plan to PRAM, saying that it “represents an alternative to third party cookies that improves consumer transparency, privacy and control, while preserving the value exchange of relevant advertising across channels and devices.” There are also a number of other approaches now being offered that claim both to protect privacy and to take advantage of our identity, such as various collaborative data-sharing efforts.

The Internet standards group the World Wide Web Consortium (W3C) has created a sort of neutral meeting ground where the industry can discuss proposals and potentially seek some sort of unified approach. The rationale for its Improving Web Advertising Business Group is [get ready for this statement] “to provide monetization opportunities that support the open web while balancing the needs of publishers and the advertisers that fund them, even when their interests do not align, with improvements to protect people from the individual and societal impacts of tracking content consumption over time.” Its participants are another “Who’s Who” in data-driven marketing, including Google, AT&T, Verizon, NYT, IAB, Apple, Group M, Axel Springer, Facebook, Amazon, Washington Post, and Criteo. DuckDuckGo is also a member (and both Google and Facebook have multiple representatives in this group).
The sole NGO listed as a member is the Center for Democracy and Technology. W3C’s ad business group has a number of documents about the digital marketing business that illustrate why the issue of the future of privacy and data collection and targeting should be a public—and not just data industry—concern. In an explainer on digital advertising, they make the paradigm so many are working to defend very clear: “Marketing’s goal can be boiled down to the ‘5 Rights’: Right Message to the Right Person at the Right Time in the Right Channel and for the Right Reason. Achieving this goal in the context of traditional marketing (print, live television, billboards, et al) is impossible. In [the] digital realm, however, not only can marketers achieve this goal, they can prove it happened. This proof is what enables marketing activities to continue, and is important for modern marketers to justify their advertising dollars, which ultimately finance the publishers sponsoring the underlying content being monetized.” Nothing I’ve read says it better.

Through a quarter century of work to perfect harvesting our identity for profit, the digital ad industry has created a formidable complex of data clouds, real-time ad auctions, cross-device tracking tools and advertising techniques that further commodify our lives, shred our privacy, and transform the Internet into a hall of mirrors that can amplify our fears and splinter democratic norms. It’s people, of course, who decide how the Internet operates—especially those from companies such as Google, Facebook, Amazon, and those working for trade groups such as the IAB. We must not let them decide how cookies may or may not be used, or what new data standard should be adopted by the most powerful corporate interests on the planet to profit from our “identity.” It’s time for action by the FTC and Congress. (Part 1)

(1) For the uninitiated, TURTLEDOVE stands for “Two Uncorrelated Requests, Then Locally-Executed Decision On Victory”; FLEDGE is short for “First Locally-Executed Decision over Groups Experiment”; SPARROW is “Secure Private Advertising Remotely Run On Webserver”; and FLoC is “Federated Learning of Cohorts.”

(2) In January 2021, the UK’s Competition and Markets Authority (CMA) opened an investigation into Google’s Privacy Sandbox and cookie plans.
    Jeff Chester
  • Press Statement, Center for Digital Democracy (CDD) and Campaign for a Commercial-Free Childhood (CCFC), 12-14-20

Today, the Federal Trade Commission announced it will use its 6(b) authority to launch a major new study into the data collection practices of nine major tech platforms and companies: ByteDance (TikTok), Amazon, Discord, Facebook, Reddit, Snap, Twitter, WhatsApp and YouTube. The Commission’s study includes a section on children and teens. In December 2019, the Campaign for a Commercial-Free Childhood (CCFC), the Center for Digital Democracy (CDD) and their attorneys at Georgetown Law’s Institute for Public Representation urged the Commission to use its 6(b) authority to better understand how tech companies collect and use data from children. Twenty-seven consumer and child advocacy organizations joined that request. Below are statements from CDD and CCFC on today’s announcement.

Josh Golin, Executive Director, CCFC: “We are extremely pleased that the FTC will be taking a hard look at how platforms like TikTok, Snap, and YouTube collect and use young people’s data. These 6(b) studies will provide a much-needed window into the opaque data practices that have a profound impact on young people’s wellbeing. This much-needed study will not only provide critical public education, but lay the groundwork for evidence-based policies that protect young people’s privacy and vulnerabilities when they use online services to connect, learn, and play.”

Jeff Chester, Executive Director, CDD: “The FTC is finally holding the social media and online video giants accountable, by requiring leading companies to reveal how they stealthily gather and use information that impacts our privacy and autonomy. It is especially important that the commission is concerned about also protecting teens—who are the targets of a sophisticated and pervasive marketing system designed to influence their behaviors for monetization purposes.”

For questions, please contact: jeff@democraticmedia.org

See also: https://www.markey.senate.gov/news/press-releases/senator-markey-stateme...
  • General Comment submission: Children’s rights in relation to the digital environment

    • Professor Amandine Garde, Law & Non-Communicable Diseases Research Unit, School of Law and Social Justice, University of Liverpool
    • Dr Mimi Tatlow-Golden, Senior Lecturer, Developmental Psychology and Childhood, The Open University
    • Dr Emma Boyland, Senior Lecturer, Psychology, University of Liverpool
    • Professor Emerita Kathryn C. Montgomery, School of Communication, American University; Senior Strategist, Center for Digital Democracy
    • Jeff Chester, Center for Digital Democracy
    • Josh Golin, Campaign for a Commercial-Free Childhood
    • Kaja Lund-Iversen and Ailo Krogh Ravna, Norwegian Consumer Council
    • Pedro Hartung and Marina Reina, Alana Institute
    • Dr Marine Friant-Perrot, University of Nantes
    • Professor Emerita Wenche Barth Eide, University of Oslo; Coordinator, FoHRC
    • Professor Liv Elin Torheim, Oslo Metropolitan University
    • Professor Alberto Alemanno, HEC Paris Business School and The Good Lobby
    • Marianne Hammer, Norwegian Cancer Society
    • Nikolai Pushkarev, European Public Health Alliance

13 November 2020

Dear Members of the Committee on the Rights of the Child,

We very much welcome the Committee’s Draft General Comment No. 25 on children’s rights in relation to the digital environment (the Draft) and are grateful for the opportunity to comment. We are a group of leading scholars and NGO experts on youth, digital media, child rights and public health who work to raise awareness of, and promote regulation of, marketing (particularly of harmful goods, services and brands) to which children are exposed. We argue that such marketing infringes many of the rights enshrined in the UN Convention on the Rights of the Child (CRC) and other international instruments and should be strictly regulated. Based on our collective expertise, we call on the Committee to recognise more explicitly the fundamentally transformed nature of marketing in new digital environments, the harms stemming therefrom, and the corresponding need to protect children from targeting and exposure. Without such recognition, children will not be able to fully enjoy the many opportunities for learning, civic participation, creativity and communication that the digital environment offers for their development and the fulfilment of their rights. Facilitating children’s participation in this environment should not come at the price of violations of any children’s rights.

Before making specific comments, we wish to highlight our support for much of this Draft. In particular, we strongly support the provisions in the following paragraphs of the General Comment: 11, 13, 14, 52, 54, 62, 63, 64, 67, 72, 74, 75, 88, 112, and 119. We also note concerns regarding provisions that would require mandatory age verification (e.g., paragraphs 56, 70, 120, 122). We call on the Committee to include provisions ensuring that age verification is applied proportionately, as it will certainly have the effect of increasing the processing of children’s personal data, which should not happen to the detriment of the best interests of the child.

The rest of this contribution, following the structure of the Draft, proposes specific additions / modifications (underlined, in italics), with brief explanations (in boxes). Numbers refer to original paragraphs in the Draft; XX indicates a new proposed paragraph. Hoping these comments are useful to finalise the General Comment, we remain at your disposal for further information.
Yours faithfully,

Amandine Garde and Mimi Tatlow-Golden
On behalf of those listed above

[See full comments in attached document]
  • CONSUMER AND CITIZEN GROUPS CONTINUE TO HAVE SERIOUS CONCERNS ABOUT GOOGLE FITBIT TAKEOVER: Joint Statement on Possible Remedies
  • October 9, 2020

Susan Wojcicki
CEO, YouTube
901 Cherry Avenue
San Bruno, CA 94066

Dear Ms. Wojcicki:

We commend Google/YouTube’s plan to create a $100 million investment fund for children’s content, announced in 2019 following the FTC settlement to address YouTube’s violations of COPPA. This fund has the potential to shape children’s online content for years to come. We ask that YouTube adopt policies to ensure this fund will operate in the best interests of children worldwide. The programming supported by the fund should:

- Reflect the perspectives and interests of children from different countries and cultures
- Underwrite content makers who are diverse and independent, with at least 50% of funding dedicated to historically underrepresented communities
- Promote educational content and content which reflects the highest values of civil society, including diversity
- Not support content which promotes commercialism
- Facilitate union representation of creators of scripted and nonfiction content for YouTube
- Be advised by a team of leading independent experts who can ensure programming is commissioned that truly serves the educational, civic, and developmental needs of young people.

As the leading global online destination for many millions of children, as well as the most powerful digital marketing entity, Google should be at the forefront of providing financial resources for quality content that is innovative, takes creative risks, and supports emerging program makers from many different backgrounds. For example, programming supported by the fund should reflect a major commitment to diversity by commissioning producers from around the world who represent diverse cultures and perspectives. The fund is also an opportunity for Google to make a significant contribution to the development of a distinct programming vision for young people that is primarily driven to foster their wellbeing. We urge Google to fund only programming free of commercial content, including influencer marketing, product and brand integration, and licensed characters or products. In addition, each program or series should have a robust release window that provides access to all children without their being required to view digital advertising and other forms of commercial marketing. The expert commissioning board we advise you to adopt will help ensure that the fund operates fairly, and help eliminate potential conflicts of interest. Operating the fund using these principles will allow YouTube to cement its place as a leader in children’s programming and, more importantly, make a world of difference—ensuring that time spent watching YouTube will enrich children.

We stand ready to confer with you on these suggestions and your development of the fund, and would welcome the opportunity to meet with you in the near future to discuss these items.

Sincerely,

Jeffrey Chester, Executive Director, Center for Digital Democracy
Jessica J. González, Co-CEO, Free Press
Josh Golin, Executive Director, Campaign for a Commercial-Free Childhood
Justin Ruben, Co-Director, ParentsTogether
Lowell Peterson, Executive Director, Writers Guild of America, East, AFL-CIO
  • The Campaign for a Commercial-Free Childhood (CCFC) and CDD filed comments with the UN’s Special Rapporteur on privacy, as part of a consultation designed to propose global safeguards for young people online. Both CCFC and CDD, along with allies in the U.S. and throughout the world, are working to advance stronger international protections for young people, especially related to their privacy and the impacts that digital marketing has on their development.
    Jeff Chester
  • For Immediate Release: September 24, 2020
Contact: Jeff Chester (202-494-7100), jeff@democraticmedia.org

A Step Backwards for Consumer Privacy: Why Californians Should Vote No on Proposition 24

Ventura, CA, and Washington, DC: The Center for Digital Democracy (CDD) announced today its opposition to the California Privacy Rights Act (CPRA), also known as Proposition 24, which will appear on the November 2020 California general election ballot. Prop 24 does not sufficiently strengthen Californians’ privacy and may, in fact, set a new, lower, and thus more dangerous standard for privacy protection in the U.S., according to CDD’s analysis.

“We need strong and bold privacy legislation, not weaker standards and tinkering at the margins,” declared CDD Policy Director Katharina Kopp. “Prop 24 fails to significantly limit data uses that undermine our privacy, increase corporate manipulation and exploitation, and exacerbate racial and economic inequality. This initiative allows the much more powerful companies to set unfair terms by default. It also condones pay-for-privacy schemes, where corporations would be allowed to charge a premium (or eliminate a discount) in exchange for privacy. These schemes tend to hurt the already disadvantaged the most,” she explained.

CDD intends to work with allies from the consumer and privacy communities to inform voters about Prop 24 and how best to protect their privacy. The Center for Digital Democracy is a leading nonprofit organization focused on empowering and protecting the rights of the public in the digital era.
  • The Center for Digital Democracy (CDD) announced today its opposition to the California Privacy Rights Act (CPRA), also known as Proposition 24, which will appear on the November 2020 California general election ballot. CDD has concluded that Prop 24 does not sufficiently strengthen Californians’ privacy and may, in fact, set a new, low, and thus dangerous standard for privacy protection in the U.S.

We need strong and bold privacy legislation, not weaker standards and tinkering at the margins. We need digital privacy safeguards that address the fundamental drivers of our eroding privacy and autonomy, and that redress the growing levels of racial and social inequity. We need rules that go to the heart of the data-driven business model and curtail the market incentives that have created the deplorable state of affairs we currently face. What we need are protections that significantly limit data uses that undermine our privacy, increase corporate manipulation and exploitation, and exacerbate racial and economic inequality. We need default privacy settings that limit the sharing and selling of personal information, and the use of data for targeted advertising, personalized content, and other manipulative practices. We need to ensure privacy for all and limit any pay-for-privacy schemes that entice the most vulnerable to give up their privacy. In other words, we need to limit harmful data-use practices by default, and place the interests of consumers above market imperatives by allowing only those data practices that are not harmful to individuals, groups, and society at large.

Prop 24 does none of that. Specifically, Prop 24 continues on the path of a failed notice-and-choice regime, allowing the much more powerful companies to set unfair terms. Instead, privacy legislation should focus on strong default settings, specifying the data-use practices that are allowable (“permissible uses”) and prohibiting all others. These safeguards should be in place by default, rather than forcing consumers to opt out of invasive advertising. Prop 24, in contrast, does not provide effective data-use limitations; instead it continues to limit data sharing and selling via an opt-out, rather than declaring them to be impermissible uses, or at minimum requiring an opt-in for such practices. Even “sensitive data” under Prop 24 is protected only via a consumer-initiated opt-out, rather than prohibiting the use of sensitive personal data altogether.

Equally concerning, Prop 24 would expand rather than limit pay-for-privacy schemes. Under the terms of Prop 24, corporations are still allowed to charge a premium (or eliminate a discount) in exchange for privacy. Consumers shouldn’t be charged higher prices or be discriminated against simply for exercising their privacy rights. This provision of Prop 24 is particularly objectionable, as it tends to harm vulnerable populations, people of color, and the elderly by creating privacy “haves” and “have-nots,” further entrenching other, existing inequities, as companies would be able to use personal data to profile, segment, and discriminate in a variety of areas.
There are many other reasons that CDD objects to Prop 24, chief among them that this flawed measure:

- employs an outdated concept of “sensitive data” instead of focusing on sensitive data uses;
- fails to rein in the growing power of data brokers that collect and analyze personal data from a variety of sources, including public data sets, for sale to marketers;
- does not employ strong enough data minimization provisions to limit data collection, use and disclosure only to what is necessary to provide the service requested by the consumer;
- undermines consumer efforts to seek enforcement of privacy rights by neglecting to provide full private right-of-action provisions; and
- unnecessarily delays its protection of employee privacy.
  • Reports

    Data Governance for Young People in the Commercialized Digital Environment

    A report for UNICEF's Global Governance of Children's Data Project

    TikTok (also known by its Chinese name, Dǒuyīn) has quickly captured the interest of children, adolescents, and young adults in 150 countries around the world. The mobile app enables users to create short video clips, customize them with a panoply of user-friendly special effects tools, and then share them widely through the platform’s vast social network. A recent industry survey of children’s app usage in the United States, the UK, and Spain reported that young people between the ages of 4 and 15 now spend almost as much time per day (80 minutes) on TikTok as they do on the highly popular YouTube (85 minutes). TikTok is also credited with helping to drive growth in children’s social app use by 100 percent in 2019 and 200 percent in 2020. Among the keys to its success is a sophisticated artificial intelligence (AI) system that offers a constant stream of highly tailored content and fosters continuous interaction with the platform. Using computer vision technology to reveal insights based on images, objects, and texts, along with natural-language processing, the app “learns” about an individual’s preferences, interests and online behaviors so it can offer “high-quality and personalized” content and recommendations. TikTok also provides advertisers with a full spectrum of marketing and brand-promotion applications that tap into a vast store of user information, including not only age, gender, location, and interests, but also granular data sets based on constant tracking of behaviors and activities... TikTok is just one of many tech companies deploying these techniques… [full article attached and also here; more from series here]
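To make the dynamic the report describes concrete, here is a minimal, hypothetical sketch of an engagement-driven feedback loop: the system reinforces the tags of videos a user watches longest and re-ranks the next batch accordingly. All names and weights are invented; TikTok’s actual recommendation system is proprietary and far more elaborate.

```python
# Illustrative engagement loop: watch behavior updates an interest profile,
# which then re-ranks the feed. Hypothetical, not TikTok's actual system.

def update_profile(profile: dict, video_tags: list[str], watch_fraction: float):
    """Reinforce the tags of videos the user watched longer."""
    for tag in video_tags:
        profile[tag] = profile.get(tag, 0.0) + watch_fraction

def rank_feed(profile: dict, candidates: list[dict]) -> list[dict]:
    """Score each candidate video by overlap with the learned profile."""
    def score(video):
        return sum(profile.get(tag, 0.0) for tag in video["tags"])
    return sorted(candidates, key=score, reverse=True)

profile: dict[str, float] = {}
update_profile(profile, ["dance", "music"], watch_fraction=0.95)  # watched nearly all
update_profile(profile, ["news"], watch_fraction=0.10)            # skipped quickly
feed = rank_feed(profile, [
    {"id": 1, "tags": ["news", "politics"]},
    {"id": 2, "tags": ["dance", "comedy"]},
])
print([v["id"] for v in feed])  # [2, 1] -- dance content rises to the top
```

The loop is what makes the personalization “continuous”: every additional minute of watching feeds back into the next ranking, which is why the report emphasizes constant tracking of behaviors and activities.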
  • Press Release

    Advocates Call on TikTok Suitors to Clean Up Kids’ Privacy Practices

    Groups had filed complaint at FTC documenting how TikTok flouts children’s privacy law, tracks millions of kids without parental consent.

    Contact: Katharina Kopp, CDD (kkopp@democraticmedia.org; 202-836-4621); David Monahan, CCFC (david@commercialfreechildhood.org)

WASHINGTON, DC and BOSTON, MA—September 3, 2020—The nation’s leading children’s privacy advocates are calling on potential buyers of TikTok “to take immediate steps to comprehensively improve its privacy and data marketing practices for young people” should they purchase the platform. In separate letters to Microsoft, Walmart, and Oracle, Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) detail TikTok’s extensive history of violating the Children’s Online Privacy Protection Act (COPPA), including a recent news report that TikTok internally classified more than one-third of its 49 million US users as fourteen or under. Given the likelihood that millions of these users are also under thirteen, the advocates urged Microsoft, Walmart, and Oracle to pledge to immediately stop collecting and processing data from any account flagged as or believed to be under thirteen if they acquire TikTok’s US operations, and to restore only those accounts that can be affirmatively verified as belonging to users who are thirteen or older. COPPA requires apps and websites to obtain verifiable parental consent before collecting the personal information of anyone under 13, but TikTok has not done so for its millions of accounts held by children.

“Whoever purchases TikTok will have access to a treasure trove of ill-gotten, sensitive children’s data,” said Josh Golin, Executive Director of CCFC. “Any new owner must demonstrate their commitment to protecting young people’s privacy by immediately deleting any data that was illegally obtained from children under thirteen. With the keys to one of the most popular platforms for young people on the planet must come a commitment to protect children’s privacy and wellbeing.”

In February 2019, TikTok was fined $5.7 million by the Federal Trade Commission (FTC) for COPPA violations, and it agreed to delete children’s data and properly request parental consent before allowing children under 13 on the site and collecting more data from them. This May, CCFC, CDD, and a coalition of 20 advocacy groups filed an FTC complaint against TikTok for ignoring its promises to delete kids’ data and comply with the law. To this day, the groups say, TikTok plays by its own rules, luring millions of kids under the age of 13, illegally collecting their data, and using it to manipulatively target them with marketing. In addition, they wrote to the companies today that, “By ignoring the presence of millions of younger children on its app, TikTok is putting them at risk for sexual predation; news reports and law enforcement agencies have documented many cases of inappropriate adult-to-child contact on the app.” In August, the groups’ allegations that TikTok had actual knowledge that millions of its users were under thirteen were confirmed by the New York Times.
According to internal documents obtained by the Times, TikTok assigns an age range to each user utilizing a variety of methods, including “facial recognition algorithms that scrutinize profile pictures and videos,” “comparing their activity and social connections in the app against those of users whose ages have already been estimated,” and drawing “upon information about users that is bought from other sources.” Using these methods, more than one-third of TikTok’s 49 million users in the US were estimated to be under fourteen. Among daily users, the proportion that TikTok has designated as under fourteen rises to 47%.

“The new owners of TikTok in the U.S. must demonstrate they take protecting the privacy and well-being of young people seriously,” said Katharina Kopp, policy director of the Center for Digital Democracy. “The federal law protecting kids’ privacy must be complied with and fully enforced. In addition, the company should implement a series of safeguards that prohibit manipulative, discriminatory and harmful data and marketing practices that target children and teens. Regulators should reject any proposed sale without ensuring a robust set of safeguards for youth is in place,” she noted.

###
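For illustration only, the sketch below shows how multiple weak signals of the sort the Times reporting describes (a profile-photo model, the ages of social connections, purchased third-party data) might be blended into a single age estimate. Every function name and weight here is hypothetical; this is not TikTok’s code, merely a sketch of the general multi-signal approach.

```python
# Hypothetical multi-signal age estimation: each available signal yields a
# guess, and the guesses are combined by weight. Illustrative only.

def estimate_age(face_guess: float | None,
                 peer_ages: list[float],
                 purchased_age: float | None) -> float:
    """Weighted blend of whatever signals are available for a user."""
    signals: list[tuple[float, float]] = []  # (age estimate, weight)
    if face_guess is not None:
        signals.append((face_guess, 0.5))    # profile-photo model
    if peer_ages:
        avg_peer = sum(peer_ages) / len(peer_ages)
        signals.append((avg_peer, 0.3))      # social-graph comparison
    if purchased_age is not None:
        signals.append((purchased_age, 0.2)) # third-party data broker
    total = sum(w for _, w in signals)
    return sum(a * w for a, w in signals) / total

# A user whose photo model says ~12 and whose contacts average 13:
print(round(estimate_age(12.0, [13, 12, 14], None)))  # ~12 -> flagged as under 14
```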