Newsroom
Program Areas
-
Reports
Data Governance for Young People in the Commercialized Digital Environment
A report for UNICEF's Global Governance of Children's Data Project
TikTok (also known by its Chinese name, Dǒuyīn) has quickly captured the interest of children, adolescents, and young adults in 150 countries around the world. The mobile app enables users to create short video clips, customize them with a panoply of user-friendly special effects tools, and then share them widely through the platform’s vast social network. A recent industry survey of children’s app usage in the United States, the UK, and Spain reported that young people between the ages of 4 and 15 now spend almost as much time per day (80 minutes) on TikTok as they do on the highly popular YouTube (85 minutes). TikTok is also credited with helping to drive growth in children’s social app use by 100 percent in 2019 and 200 percent in 2020. Among the keys to its success is a sophisticated artificial intelligence (AI) system that offers a constant stream of highly tailored content, and fosters continuous interaction with the platform. Using computer vision technology to reveal insights based on images, objects, texts, and natural-language processing, the app “learns” about an individual’s preferences, interests and online behaviors so it can offer “high-quality and personalized” content and recommendations. TikTok also provides advertisers with a full spectrum of marketing and brand-promotion applications that tap into a vast store of user information, including not only age, gender, location, and interests, but also granular data sets based on constant tracking of behaviors and activities...TikTok is just one of many tech companies deploying these techniques… [full article attached and also here (link is external); more from series here (link is external)] -
Press Release
Advocates Call on TikTok Suitors to Clean Up Kids’ Privacy Practices
Groups had filed complaint at FTC documenting how TikTok flouts children’s privacy law, tracks millions of kids without parental consent.
Contact: Katharina Kopp, CDD (kkopp@democraticmedia.org (link sends e-mail); 202-836-4621) David Monahan, CCFC (david@commercialfreechildhood.org (link sends e-mail)) Advocates Call on TikTok Suitors to Clean Up Kids’ Privacy Practices Groups had filed complaint at FTC documenting how TikTok flouts children’s privacy law, tracks millions of kids without parental consent. WASHINGTON, DC and BOSTON, MA—September 3, 2020—The nation’s leading children’s privacy advocates are calling on potential buyers of TikTok “to take immediate steps to comprehensively improve its privacy and data marketing practices for young people” should they purchase the platform. In separate letters to Microsoft, Walmart, and Oracle, Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) detail TikTok’s extensive history of violating the Children’s Online Privacy Protection Act (COPPA), including a recent news report that TikTok internally classified more than one-third of its 49 million US users as fourteen or under. Given the likelihood that millions of these users are also under thirteen, the advocates urged Microsoft, Walmart, and Oracle to pledge to immediately stop collecting and processing data from any account flagged as or believed to be under thirteen if they acquire TikTok’s US operations, and only restore accounts that can be affirmatively verified as belonging to users that are thirteen or older. COPPA requires apps and websites to obtain verifiable parental consent before collecting the personal information of anyone under 13, but TikTok has not done so for its millions of accounts held by children. “Whoever purchases TikTok will have access to a treasure trove of ill-gotten, sensitive children’s data,” said Josh Golin, Executive Director of CCFC. “Any new owner must demonstrate their commitment to protecting young people’s privacy by immediately deleting any data that was illegally obtained from children under thirteen. With the keys to one of the most popular platforms for young people on the planet must come a commitment to protect children’s privacy and wellbeing.” In February 2019, TikTok was fined $5.7 million by the Federal Trade Commission (FTC) for COPPA violations and agreed to delete children’s data and properly request parental consent before allowing children under 13 on the site and collecting more data from them. This May, CCFC, CDD, and a coalition of 20 advocacy groups filed an FTC complaint against TikTok for ignoring their promises to delete kids’ data and comply with the law. To this day, the groups say, TikTok plays by its own rules, luring millions of kids under the age of 13, illegally collecting their data, and using it to manipulatively target them with marketing. In addition, they wrote to the companies today that, “By ignoring the presence of millions of younger children on its app, TikTok is putting them at risk for sexual predation; news reports and law enforcement agencies have documented many cases of inappropriate adult-to-child contact on the app.” In August, the groups’ allegations that TikTok had actual knowledge that millions of its users were under thirteen were confirmed by the New York Times. 
According to internal documents obtained by the Times, TikTok assigns an age range to each user utilizing a variety of methods including “facial recognition algorithms that scrutinize profile pictures and videos,” “comparing their activity and social connections in the app against those of users whose ages have already been estimated,” and drawing “upon information about users that is bought from other sources.” Using these methods, more than one third of TikTok’s 49 million users in the US were estimated to be under fourteen. Among daily users, the proportion that TikTok has designated as under fourteen rises to 47%. “The new owners of TikTok in the U.S. must demonstrate they take protecting the privacy and well-being of young people seriously,” said Katharina Kopp, policy director of the Center for Digital Democracy. “The federal law protecting kids’ privacy must be complied with and fully enforced. In addition, the company should implement a series of safeguards that prohibits manipulative, discriminatory and harmful data and marketing practices that target children and teens. Regulators should reject any proposed sale without ensuring a set of robust set of safeguards for youth are in place,” she noted. ### -
Reports
Does Buying Groceries Online Put SNAP Participants At Risk?
How to Protect Health, Privacy, and Equity
-
Press Release
USDA Online Buying Program for SNAP Participants Threatens Their Privacy and Can Exacerbate Racial and Health Inequities, Says New Report
Digital Rights, Civil Rights and Public Health Groups Call for Reforms from USDA, Amazon, Walmart, Safeway/Albertson’s and Other Grocery Retailers - Need for Safeguards Urgent During Covid-19 Crisis
Contact: Jeff Chester jeff@democraticmedia.org (link sends e-mail) 202-494-7100 Katharina Kopp kkopp@democraticmedia.org (link sends e-mail) https://www.democraticmedia.org/ USDA Online Buying Program for SNAP Participants Threatens Their Privacy and Can Exacerbate Racial and Health Inequities, Says New Report Digital Rights, Civil Rights and Public Health Groups Call for Reforms from USDA, Amazon, Walmart, Safeway/Albertson’s and Other Grocery Retailers Need for Safeguards Urgent During Covid-19 Crisis Washington, DC, July 16, 2020—A pilot program designed to enable the tens of millions of Americans who participate in the USDA’s Supplemental Nutrition Assistance Program (SNAP) to buy groceries online is exposing them to a loss of their privacy through “increased data collection and surveillance,” as well as risks involving “intrusive and manipulative online marketing techniques,” according to a report from the Center for Digital Democracy (CDD). The report reveals how online grocers and retailers use an orchestrated array of digital techniques—including granular data profiling, predictive analytics, geolocation tracking, personalized online coupons, AI and machine learning —to promote unhealthy products, trigger impulsive purchases, and increase overall spending at check-out. While these practices affect all consumers engaged in online shopping, the report explains, “they pose greater threats to individuals and families already facing hardship.” E-commerce data practices “are likely to have a disproportionate impact on SNAP participants, which include low-income communities, communities of color, the disabled, and families living in rural areas. The increased reliance on these services for daily food and other household purchases could expose these consumers to extensive data collection, as well as unfair and predatory techniques, exacerbating existing disparities in racial and health equity.” The report was funded by the Robert Wood Johnson Foundation, as part of a collaboration among four civil rights, digital rights, and health organizations: Color of Change, UnidosUS, Center for Digital Democracy, and Berkeley Media Studies Group. The groups issued a letter today to Secretary of Agriculture Sonny Perdue, urging the USDA to take immediate action to strengthen online protections for SNAP participants. USDA launched (link is external) its e-commerce pilot last year in a handful of states, with an initial set of eight retailers approved for participation: Amazon, Dash’s Market, FreshDirect, Hy-Vee, Safeway, ShopRite, Walmart and Wright’s Market. The program has rapidly expanded (link is external) to a majority of states, in part as a result of the current Covid-19 health crisis, in order to enable SNAP participants to shop more safely from home by following “shelter-in-place” rules. Through an analysis of the digital marketing and grocery ecommerce practices of the eight companies, as well as an assessment of their privacy policies, CDD found that SNAP participants and other online shoppers confront an often manipulative and nontransparent online grocery marketplace, which is structured to leverage the tremendous amounts of data gathered on consumers via their mobile devices, loyalty cards, and shopping transactions. 
E-commerce grocers deliberately foreground the brands and products that partner with them (which include some of the most heavily advertised, processed foods and beverages), making them highly visible on store home pages and on “digital shelves,” as well as through online coupons and well-placed reminders at the point of sale. Grocers working with the SNAP pilot have developed an arsenal of “adtech” (advertising technology) techniques, including those that use machine learning and behavioral science to foster “frictionless shopping” and impulsive purchasing of specific foods and beverages. The AI and Big Data operations documented in the report may also lead to unfair and discriminatory data practices, such as targeting low-income communities and people of color with aggressive promotions for unhealthy food. Data collected and profiles created during online shopping may be applied in other contexts as well, leading to increased exposure to additional forms of predatory marketing, or to denial of opportunities in housing, education, employment, and financial services. “The SNAP program is one of our nation’s greatest success stories because it puts food on the table of hungry families and money in the communities where they live,” explained Dr. Lori Dorfman, Director of the Berkeley Media Studies Group. “Shopping for groceries should not put these families in danger of being hounded by marketers intent on selling products that harm health. Especially in the time of coronavirus when everyone has to stay home to keep themselves and their communities safe, the USDA should put digital safeguards in place so SNAP recipients can grocery shop without being manipulated by unfair marketing practices.” CDD’s research also found that the USDA relied on the flawed and misleading privacy policies of the participating companies, which fail to provide sufficient data protections. According to the pilot’s requirement for participating retailers, privacy policies should clearly explain how a consumer’s data is gathered and used, and provide “optimal” protections. A review of these long, densely worded documents, however, reveals the failure of the companies to identify the extent and impact of their actual data operations, or the risks to consumers. The pilot’s requirements also do not adequately limit the use of SNAP participant’s data for marketing. In addition, CDD tested the companies’ data practices for tracking customers’ behavior online, and compared them to the USDA’s requirements. The research found widespread use of so-called “third party” tracking software (such as “cookies”), which can expose an individual’s personal data to others. “In the absence of strong baseline privacy and ecommerce regulations in the US, the USDA’s weak safeguards are placing SNAP recipients at substantial risk,” explained Dr. Katharina Kopp, one of the report’s authors. “The kinds of e-commerce and Big Data practices we have identified through our research could pose even greater threats to communities of color, including increased commercial surveillance and further discrimination.” “Being on SNAP, or any other assistance program, should not give corporations free rein to use intrusive and manipulative online marketing techniques on Black communities,” said Jade Magnus Ogunnaike, Senior Campaign Director at Color of Change. 
“Especially in the era of COVID, where online grocery shopping is a necessity, Black people should not be further exposed to a corporate surveillance system with unfair and predatory practices that exacerbate disparities in racial and health equity just because they use SNAP. The USDA should act aggressively to protect SNAP users from unfair, predatory, and discriminatory data practices.” “The SNAP program helps millions of Latinos keep food on the table when times are tough and our nation’s public health and economic crises have highlighted that critical role,” said Steven Lopez, Director of Health Policy at UnidosUS. “Providing enhanced access to healthy and nutritious foods at the expense of the privacy and health of communities of color is too high of a price. Predatory marketing practices have been linked to increased health disparities for communities of color. The USDA must not ignore that fact and should take strong and meaningful steps to treat all participants fairly, without discriminatory practices based on the color of their skin.” The report calls on the USDA to “take an aggressive role in developing meaningful and effective safeguards” before moving the SNAP online purchasing system beyond its initial trial. The agency needs to ensure that contemporary e-commerce, retail and digital marketing applications treat SNAP participants fairly, with strong privacy protections and safeguards against manipulative and discriminatory practices. The USDA should work with SNAP participants, civil rights, consumer and privacy groups, as well as retailers like Amazon and Walmart, to restructure its program to ensure the safety and well-being of the millions of people enrolled in the program. ### -
Contact: Jeff Chester, CDD (jeff@democraticmedia.org (link sends e-mail); 202-494-7100) David Monahan, CCFC (david@commercialfreechildhood.org (link sends e-mail);) Statement from Campaign for a Commercial-Free Childhood and Center for Digital Democracy on Comments filed with FTC regarding Endorsement Guides WASHINGTON, DC and BOSTON, MA—June 23, 2020—Advocacy groups Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) filed comments on Monday in response to the FTC’s request for public comment (link is external) on its Endorsement Guides. Jeff Chester, executive director, Center for Digital Democracy: "Influencer marketing should be declared an unfair and deceptive practice when it comes to children. The FTC is enabling so-called ‘kidfluencers,’ ‘brand ambassadors,’ and other ‘celebrity’ marketers to stealthily pitch kids junk food, toys and other products, despite the known risks to their privacy, personal health and security. Kids and teens are being targeted by a ‘wild west’ influencer marketing industry wherever they go online, including when they watch videos, play games, or use social media. It's time for the FTC to place the interests of America's youth before the manipulative commercial activities of influencers." Josh Golin, Executive Director, Campaign for a Commercial-Free Childhood: “The FTC’s failure to act has helped create an entire ecosystem of unfair and deceptive influencer marketing aimed at children. It’s past time for the Commission to send a strong message to everyone – platforms, brands, ad agencies and the influencers themselves – that children should not be targets for this insidious and manipulative marketing.” Angela J. Campbell, Director Emeritus of the Institute for Public Representation’s Communications and Technology Clinic at Georgetown Law, currently chair of CCFC’s Board, and counsel to CCFC and CDD: "Influencer videos full of hidden promotions and sometimes blatant marketing have largely displaced actual programs for children. The FTC must act now to stop these deceptive and unfair practices." ###
-
Supporting the Call for Racial JusticeThe Center for Digital Democracy supports the call for racial justice and the fight against police violence, against the systemic injustices that exist in all parts of our society – inferior educational opportunities; lack of affordable equitable health care; an unjust justice system; housing and employment discrimination; and discriminatory marketing practices.We grieve for the lives lost and the opportunities denied! We grieve for the everyday injustices people of color have to endure and had to endure for centuries.We grieve for an America that could be so much more!Our grieving is not enough! CDD will continue its fight for data justice in support of racial and social justiceJune 5, 2020
-
Press Release
Groups Tell FTC to Investigate TikTok’s Failure to Protect Children’s Privacy
TikTok gathers data from children despite promise made to commission
Contact: Jeff Chester, CDD (jeff@democraticmedia.org (link sends e-mail); 202-494-7100) David Monahan, CCFC (david@commercialfreechildhood.org (link sends e-mail);) Advocates Say TikTok In Contempt of Court Order More kids than ever use the site due to COVID19 quarantine, but TikTok flouts settlement agreement with the FTC WASHINGTON, DC and BOSTON, MA—May 14, 2020—Today, a coalition of leading U.S. child advocacy, consumer, and privacy groups filed a complaint (link is external) urging the Federal Trade Commission (FTC) to investigate and sanction TikTok for putting kids at risk by continuing to violate the Children’s Online Privacy Protection Act (COPPA). In February 2019, TikTok paid a $5.7 million fine for violating COPPA, including illegally collecting personal information from children. But more than a year later, with quarantined kids and families flocking to the site in record numbers, TikTok has failed to delete personal information previously collected from children and is still collecting kids’ personal information without notice to and consent of parents. Campaign for a Commercial-Free Childhood (CCFC), the Center for Digital Democracy (CDD), and a total of 20 organizations demonstrated in their FTC filing that TikTok continues to violate COPPA by: failing to delete personal information related to children under 13 it obtained prior to the 2019 settlement order; failing to give direct notice to parents and to obtain parents’ consent before collecting kids’ personal information; and failing to give parents the right to review or delete their children’s personal information collected by TikTok. TikTok makes it easy for children to avoid obtaining parental consent. When a child under 13 tries to register using their actual birthdate, they will be signed up for a “younger users account” with limited functions, and no ability to share their videos. If a child is frustrated by this limited functionality, they can immediately register again with a fake birthdate from the same device for an account with full privileges, thereby putting them at risk for both TikTok’s commercial data uses and inappropriate contact from adults. In either case, TikTok makes no attempt to notify parents or obtain their consent. And TikTok doesn’t even comply with the law for those children who stick with limited “younger users accounts.” For these accounts, TikTok collects detailed information about how the child uses the app and uses artificial intelligence to determine what to show next, to keep the child engaged online as long as possible. The advocates, represented by the Communications & Technology Law Clinic in the Institute for Public Representation at Georgetown Law, asked the FTC to identify and hold responsible those individuals who made or ratified decisions to violate the settlement agreement. They also asked the FTC to prevent TikTok from registering any new accounts for persons in the US until it adopts a reliable method of determining the ages of its users and comes into full compliance with the children’s privacy rules. In light of TikTok’s vast financial resources, the number and severity of the violations, and the large number of US children that use TikTok, they asked the FTC to seek the maximum monetary penalties allowed by law. Josh Golin, Executive Director of Campaign for a Commercial-Free Childhood, said “For years, TikTok has ignored COPPA, thereby ensnaring perhaps millions of underage children in its marketing apparatus, and putting children at risk of sexual predation. 
Now, even after being caught red-handed by the FTC, TikTok continues to flout the law. We urge the Commission to take swift action and sanction TikTok again – this time with a fine and injunctive relief commensurate with the seriousness of TikTok’s serial violations.” Jeff Chester, Executive Director of the Center for Digital Democracy, said “Congress empowered the FTC to ensure that kids have online protections, yet here is another case of a digital giant deliberately violating the law. The failure of the FTC to ensure that TikTok protects the privacy of millions of children, including through its use of predictive AI applications, is another reason why there are questions whether the agency can be trusted to effectively oversee the kids’ data law.” Michael Rosenbloom, Staff Attorney and Teaching Fellow at the Institute for Public Representation, Georgetown Law, said “The FTC ordered TikTok to delete all personal information of children under 13 years old from its servers, but TikTok has clearly failed to do so. We easily found that many accounts featuring children were still present on TikTok. Many of these accounts have tens of thousands to millions of followers, and have been around since before the order. We urge the FTC to hold TikTok to account for continuing to violate both COPPA and its consent decree.” Katie McInnis, Policy Counsel at Consumer Reports, said "During the pandemic, families and children are turning to digital tools like TikTok to share videos with loved ones. Now more than ever, effective protection of children's personal information requires robust enforcement in order to incentivize companies, including TikTok, to comply with COPPA and any relevant consent decrees. We urge the FTC to investigate the matters raised in this complaint" Groups signing on to the complaint to the FTC are: Campaign for a Commercial-Free Childhood, the Center for Digital Democracy, Badass Teachers Association, Berkeley Media Studies Group, Children and Screens: Institute of Digital Media and Child Development, Consumer Action, Consumer Federation of America, Consumer Reports, Defending the Early Years, Electronic Privacy Information Center, Media Education Foundation, Obligation, Inc., Parent Coalition for Student Privacy, Parents Across America, ParentsTogether Foundation, Privacy Rights Clearinghouse, Public Citizen, The Story of Stuff, United Church of Christ, and USPIRG. ### -
Press Release
Groups Say White House Must Show Efficacy, Protect Privacy, and Ensure Equity When Deploying Technology to Fight Virus
Fifteen leading consumer, privacy, civil and digital rights organizations called on the federal government to set guidelines to protect individuals’ privacy, ensure equity in the treatment of individuals and communities, and communicate clearly about public health objectives in responding to the COVID-19 pandemic. There must be consensus among all relevant stakeholders on the most efficacious solution before relying on a technological fix to respond to the pandemic.
FOR IMMEDIATE RELEASE Contacts: Susan Grant (link sends e-mail), CFA, 202-939-1003 May 5, 2020 Katharina Kopp (link sends e-mail), CDD, 202-836 4621 White House Must Act To protect privacy and ensure equity in responding to COVID-19 pandemic Groups Tell Pence to Set Standards to Guide Government and Public-Private Partnership Data Practices and Technology Use Washington, D.C. – Today, 15 leading consumer, privacy, civil and digital rights organizations called on the federal government (link is external) to set guidelines to protect individuals’ privacy, ensure equity in the treatment of individuals and communities, and communicate clearly about public health objectives in responding to the COVID-19 pandemic. In a letter to Vice President Michael R. Pence, who leads the Coronavirus Task Force, the groups said that the proper use of technology and data have the potential to provide important public health benefits, but must incorporate privacy and security, as well as safeguards against discrimination and violations of civil and other rights. Developing a process to assess how effective technology and other tools will be to achieve the desired public health objectives is also vitally important, the groups said. The letter (link is external) was signed by the Campaign for a Commercial Free Childhood, Center for Democracy & Technology, Center for Digital Democracy, Constitutional Alliance, Consumer Action, Consumer Federation of America, Electronic Privacy Information Center (EPIC), Media Alliance, MediaJustice, Oakland Privacy, Parent Coalition for Student Privacy, Privacy Rights Clearinghouse, Public Citizen, Public Knowledge, and Rights x Tech. “A headlong rush into technological solutions without carefully considering how well they work and whether they could undermine fundamental American values such as privacy, equity, and fairness would be a mistake,” said Susan Grant, Director of Consumer Protection and Privacy at the Consumer Federation of America. “Fostering public trust and confidence in the programs that are implemented to combat COVID-19 is crucial to their overall success.” “Measures to contain the deadly spread of COVID-19 must be effective and protect those most exposed. History has taught us that the deployment of technologies is often driven by forces that tend to risk privacy, undermine fairness and equity, and place our civil rights in peril. The White House Task Force must work with privacy, consumer and civil rights groups, and other experts, to ensure that the efforts to limit the spread of the virus truly protect our interests,” said Katharina Kopp, Director of Policy, Center for Digital Democracy. In addition to concerns about government plans that are being developed to address the pandemic, such as using technology for contact tracing, the groups noted the need to ensure that private-sector partnerships incorporate comprehensive privacy and security standards. The letter outlines 11 principles that should form the basis for standards that government agencies and the private sector can follow: Set science-based, public health objectives to address the pandemic. Then design the programs and consider what tools, including technology, might be most efficacious and helpful to meet those objectives. Assess how technology and other tools meet key criteria. This should be done before deployment when possible and consistent with public health demands, and on an ongoing basis. Questions should include: Can they be shown to be effective for their intended purposes? 
Can they be used without infringing on privacy? Can they be used without unfairly disadvantaging individuals or communities? Are there other alternatives that would help meet the objectives well without potentially negative consequences? Use of technologies and tools that are ineffective or raise privacy or other societal concerns should be discontinued promptly. Protect against bias and address inequities in technology access. In many cases, communities already disproportionately impacted by COVID-19 may lack access to technology, or not be fairly represented in data sets. Any use of digital tools must ensure that nobody is left behind. Set clear guidelines for how technology and other tools will be used. These should be aimed at ensuring that they will serve the public health objective while safeguarding privacy and other societal values. Public and private partners should be required to adhere to those guidelines, and the guidelines should be readily available to the public. Ensure that programs such as technology-assisted contact tracing are voluntary. Individual participation should be based on informed, affirmative consent, not coercion. Only collect individuals’ personal information needed for the public health objective. No other personal information should be collected in testing, contact tracing, and public information portals. Do not use or share individuals’ personal information for any other purposes. It is important to avoid “mission creep” and to prevent use for purposes unrelated to the pandemic such as for advertising, law enforcement, or for reputation management in non-public health settings. Secure individuals’ personal information from unauthorized access and use. Information collected from testing, contact tracing and information portals may be very revealing, even if it is not “health” information, and security breaches would severely damage public trust. Retain individuals’ personal information only for as long as it is needed. When it is no longer required for the public health objective, the information should be safely disposed of. Be transparent about data collection and use. Before their personal information is collected, individuals should be informed about what data is needed, the specific purposes for which the data will be used, and what rights they have over what’s been collected about them. Provide accountability. There must be systems in place to ensure that these principles are followed and to hold responsible parties accountable. In addition, individuals should have clear means to ask questions, make complaints, and seek recourse in connection with the handling of their personal information. The groups asked Vice President Pence for a meeting to discuss their concerns and suggested that the Coronavirus Task Force immediately create an interdisciplinary advisory committee comprised of experts from public health, data security, privacy, social science, and civil society to help develop effective standards. The Consumer Federation of America (link is external) is a nonprofit association of more than 250 consumer groups that was founded in 1968 to advance the consumer interest through research, advocacy, and education. The Center for Digital Democracy (CDD) is recognized as one of the leading NGOs organizations promoting privacy and consumer protection, fairness and data justice in the digital age. 
Since its founding in 2001 (and prior to that through its predecessor organization, the Center for Media Education), CDD has been at the forefront of research, public education, and advocacy. -
Blog
Joint civil society statement: States use of digital surveillance technologies to fight pandemic must respect human rights
The COVID-19 pandemic is a global public health emergency that requires a coordinated and large-scale response by governments worldwide. However, States’ efforts to contain the virus must not be used as a cover to usher in a new era of greatly expanded systems of invasive digital surveillance.We, the undersigned organizations, urge governments to show leadership in tackling the pandemic in a way that ensures that the use of digital technologies to track and monitor individuals and populations is carried out strictly in line with human rights.Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response. Such measures also pose a risk of discrimination and may disproportionately harm already marginalized communities.These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis. On the contrary, protecting human rights also promotes public health. Now more than ever, governments must rigorously ensure that any restrictions to these rights is in line with long-established human rights safeguards.This crisis offers an opportunity to demonstrate our shared humanity. We can make extraordinary efforts to fight this pandemic that are consistent with human rights standards and the rule of law. The decisions that governments make now to confront the pandemic will shape what the world looks like in the future.We call on all governments not to respond to the COVID-19 pandemic with increased digital surveillance unless the following conditions are met:Surveillance measures adopted to address the pandemic must be lawful, necessary and proportionate. They must be provided for by law and must be justified by legitimate public health objectives, as determined by the appropriate public health authorities, and be proportionate to those needs. Governments must be transparent about the measures they are taking so that they can be scrutinized and if appropriate later modified, retracted, or overturned. We cannot allow the COVID-19 pandemic to serve as an excuse for indiscriminate mass surveillance.If governments expand monitoring and surveillance powers then such powers must be time-bound, and only continue for as long as necessary to address the current pandemic. We cannot allow the COVID-19 pandemic to serve as an excuse for indefinite surveillance.States must ensure that increased collection, retention, and aggregation of personal data, including health data, is only used for the purposes of responding to the COVID-19 pandemic. Data collected, retained, and aggregated to respond to the pandemic must be limited in scope, time-bound in relation to the pandemic and must not be used for commercial or any other purposes. 
We cannot allow the COVID-19 pandemic to serve as an excuse to gut individual’s right to privacy.Governments must take every effort to protect people’s data, including ensuring sufficient security of any personal data collected and of any devices, applications, networks, or services involved in collection, transmission, processing, and storage. Any claims that data is anonymous must be based on evidence and supported with sufficient information regarding how it has been anonymized. We cannot allow attempts to respond to this pandemic to be used as justification for compromising people’s digital safety.Any use of digital surveillance technologies in responding to COVID-19, including big data and artificial intelligence systems, must address the risk that these tools will facilitate discrimination and other rights abuses against racial minorities, people living in poverty, and other marginalized populations, whose needs and lived realities may be obscured or misrepresented in large datasets. We cannot allow the COVID-19 pandemic to further increase the gap in the enjoyment of human rights between different groups in society.If governments enter into data sharing agreements with other public or private sector entities, they must be based on law, and the existence of these agreements and information necessary to assess their impact on privacy and human rights must be publicly disclosed – in writing, with sunset clauses, public oversight and other safeguards by default. Businesses involved in efforts by governments to tackle COVID-19 must undertake due diligence to ensure they respect human rights, and ensure any intervention is firewalled from other business and commercial interests. We cannot allow the COVID-19 pandemic to serve as an excuse for keeping people in the dark about what information their governments are gathering and sharing with third parties.Any response must incorporate accountability protections and safeguards against abuse. Increased surveillance efforts related to COVID-19 should not fall under the domain of security or intelligence agencies and must be subject to effective oversight by appropriate independent bodies. Further, individuals must be given the opportunity to know about and challenge any COVID-19 related measures to collect, aggregate, and retain, and use data. 
Individuals who have been subjected to surveillance must have access to effective remedies.COVID-19 related responses that include data collection efforts should include means for free, active, and meaningful participation of relevant stakeholders, in particular experts in the public health sector and the most marginalized population groups.Signatories:7amleh – Arab Center for Social Media AdvancementAccess NowAfrican Declaration on Internet Rights and Freedoms CoalitionAI NowAlgorithm WatchAlternatif BilisimAmnesty InternationalApTIARTICLE 19Asociación para una Ciudadanía Participativa, ACI ParticipaAssociation for Progressive Communications (APC)ASUTIC, SenegalAthan - Freedom of Expression Activist OrganizationAustralian Privacy FoundationBarracón DigitalBig Brother WatchBits of FreedomCenter for Advancement of Rights and Democracy (CARD)Center for Digital DemocracyCenter for Economic JusticeCentro De Estudios Constitucionales y de Derechos Humanos de RosarioChaos Computer Club - CCCCitizen D / Državljan DCIVICUSCivil Liberties Union for EuropeCódigoSurCoding RightsColetivo Brasil de Comunicação SocialCollaboration on International ICT Policy for East and Southern Africa (CIPESA)Comité por la Libre Expresión (C-Libre)Committee to Protect JournalistsConsumer ActionConsumer Federation of AmericaCooperativa Tierra ComúnCreative Commons UruguayD3 - Defesa dos Direitos DigitaisData Privacy BrasilDemocratic Transition and Human Rights Support Center "DAAM"Derechos DigitalesDigital Rights Lawyers Initiative (DRLI)Digital Rights WatchDigital Security Lab UkraineDigitalcourageEPICepicenter.worksEuropean Digital Rights - EDRiFitugFoundation for Information Policy ResearchFoundation for Media AlternativesFundación Acceso (Centroamérica)Fundación Ciudadanía y Desarrollo, EcuadorFundación Datos ProtegidosFundación Internet BoliviaFundación Taigüey, República DominicanaFundación Vía LibreHermes CenterHiperderechoHomo DigitalisHuman Rights WatchHungarian Civil Liberties UnionImpACT International for Human Rights PoliciesIndex on CensorshipInitiative für NetzfreiheitInnovation for Change - Middle East and North AfricaInternational Commission of JuristsInternational Service for Human Rights (ISHR)Intervozes - Coletivo Brasil de Comunicação SocialIpandetecIPPFIrish Council for Civil Liberties (ICCL)IT-Political Association of DenmarkIuridicum Remedium z.s. (IURE)KarismaLa Quadrature du NetLiberia Information Technology Student UnionLibertyLuchadorasMajal.orgMasaar "Community for Technology and Law"Media Rights Agenda (Nigeria)MENA Rights GroupMetamorphosis FoundationNew America's Open Technology InstituteObservacomOpen Data InstituteOpen Rights GroupOpenMediaOutRight Action InternationalPangeaPanoptykon FoundationParadigm Initiative (PIN)PEN InternationalPrivacy InternationalPublic CitizenPublic KnowledgeR3D: Red en Defensa de los Derechos DigitalesRedesAyudaSHARE FoundationSkyline International for Human RightsSursiendoSwedish Consumers’ AssociationTahrir Institute for Middle East Policy (TIMEP)Tech InquiryTechHerNGTEDICThe Bachchao ProjectUnwanted Witness, UgandaUsuarios DigitalesWITNESSWorld Wide Web Foundation -
Blog
Platforms, Privacy, Pandemic and Data Profiteering: The COVID-19 crisis further fuels unaccountable growth from the digital tech and media industries
By Jeffrey Chester The COVID-19 pandemic is a profound global public health crisis that requires our upmost attention: to stem its deadly tide and rebuild the global health system so we do not experience such a dire situation in the future. It also demands that we ensure the U.S. has a digital media system that is democratic, accountable, and one that both provides public services and protects privacy. The virus is profoundly accelerating our reliance on digital media worldwide, ushering (link is external) in “a new landscape in terms of how shoppers are buying and how they are behaving online and offline.” Leading platforms—Amazon, Facebook and Google—as well as many major ecommerce and social media sites, video streaming services, gaming apps, and the like—are witnessing a flood of people attempting to research health concerns, order groceries and supplies, view entertainment and engage in communication with friends and family. According to a marketing industry report (link is external), “nearly 90% of consumers have changed their behavior because of COVID-19.” More data (link is external) about our health concerns, kids, financial status, products we buy and more are flowing into the databases of the leading digital media companies. The pandemic will further strengthen their power as they leverage all the additional personal information they are currently capturing as a consequence of the pandemic. This also poses a further threat to the privacy of Americans who are especially dependent on online services if they are to survive. The pandemic is accelerating societal changes (link is external) in our relationship to the Internet. For example, marketers predict that we are witnessing the emergence of an experience they call the “fortress home”—as “consumer psychology shifts into an extreme form of cocooning.” The move to online buying via ecommerce—versus going to a physical store—will become an even more dominant consumer behavior. So, too, will in-home media consumption increase, especially the reliance on streaming (“OTT”) video. Marketers are closely examining all these pandemic-related developments using a global lens—since the digital behaviors of all consumers—from China to the U.S.—have so many commonalities. For example, Nielsen has identified six (link is external) “consumer behavior thresholds” that reveal virus-influenced consumer buying behaviors, such as “quarantined living preparation” and “restricted living.” A host of sites are now regularly reporting how the pandemic impacts the public, and what it means for marketing and major brands. See, for example, Ipsos (link is external), Comscore (link is external), Nielsen (link is external), Kantar (link is external), and the Advertising Research Foundation (ARF (link is external)). In addition to the expanded market power of the giants, there are also growing threats to our privacy from surveillance by both government (link is external) and the commercial sector. Marketers are touting how all the real-time geolocation data that is continuously mined from our mobile devices, wearables (link is external) and “apps” can help public health experts better respond to the virus and similar threats. 
At a recent (link is external) Advertising Research Foundation townhall on the virus it was noted that “the location-based data that brand stewards have found useful in recent years to deliver right-time/right-place messages has ‘gone from being useful that helps businesses sell a little bit more’ to truly being a community and public-health tool.” Marketers will claim that they have to track all our moves because it’s in the national interest in order to sanction the rapid expansion of geo-surveillance (link is external) in all areas of our lives. They are positioning themselves to be politically rewarded for their work on the pandemic, hoping it will immunize them from the growing criticism about their monopolistic and anti-consumer privacy behaviors. Amazon, Facebook, Google, Snapchat and various “Big Data” digital marketing companies announced (link is external), for example, a COVID-19 initiative with the White House and CDC. Brokered by the Ad Council, it will unleash various data-profiling technologies, influencer marketing, and powerful consumer targeting engines to ensure Americans receive information about the virus. (At the same time, brands are worried about having their content appear alongside information about the coronavirus, adopting new (link is external) “brand safety” tools that can “blacklist” news and other online sites. This means that the funding for journalism and public safety information becomes threatened (link is external) because advertisers wish to place their own interests first.) But the tactics (link is external) now sanctioned by the White House are the exact same ones that must be addressed in any legislation that effectively protects our privacy online. We believe that the leading online companies should not be permitted to excessively enrich themselves during this moment by gathering even more information on the public. They will mine this information for insights that enable them to better understand our private health needs and financial status. They will know more about the online behaviors of our children, grandparents and many others. Congress should enact protections that ensure that the data gathered during this unprecedented public health emergency are limited in how they can be used. It should also examine how the pandemic is furthering the market power of a handful of platforms and ecommerce companies, to ensure there is a fair marketplace accessible to the public. It’s also evident there must be free or inexpensively priced broadband for all. How well we address the role of the large online companies during this period will help determine our ability to respond to future crises, as well as the impact of these companies on our democracy. -
Press Release
Children’s privacy advocates call on FTC to require Google, Disney, AT&T and other leading companies to disclose how they gather and use data to target kids and families
Threats to young people from digital marketing and data collection are heightened by home schooling and increased video and mobile streaming in response to COVID-19
Contact: Jeffrey Chester, CDD, jeff@democraticmedia.org (link sends e-mail), 202-494-7100 Josh Golin, CCFC, josh@commercialfreechildhood.org (link sends e-mail), 339-970-4240 Children’s privacy advocates call on FTC to require Google, Disney, other leading companies to disclose how they gather and use data to target kids and families Threats to young people from digital marketing and data collection are heightened by home schooling and increased video and mobile streaming in response to COVID-19 WASHINGTON, DC and BOSTON, MA – March 26, 2020 – With children and families even more dependent on digital media during the COVID-19 crisis, the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) called on the Federal Trade Commission (FTC) to require leading digital media companies to turn over information on how they target kids, including the data they collect. In a letter to the FTC, the advocates proposed a series of questions to shed light on the array of opaque data collection and digital marketing practices which the tech companies employ to target kids. The letter includes a proposed list of numerous digital media and marketing companies and edtech companies that should be the targets of the FTC’s investigation—among them are Google, Zoom, Disney, Comcast, AT&T, Viacom, and edtech companies Edmodo and Prodigy. The letter—sent by the Institute for Public Representation at Georgetown Law, attorneys for the advocates—is in response to the FTC’s early review of the rules protecting children under the Children’s Online Privacy Protection Act (COPPA). The groups said “children’s privacy is under siege more than ever,” and urged the FTC “not to take steps that could undermine strong protections for children’s privacy without full information about a complex data collection ecosystem.” The groups ask the Commission to request vital information from two key sectors that greatly impact the privacy of children: the edtech industry, which provides information and technology applications in the K-12 school setting, and the commercial digital data and marketing industry that provides the majority of online content and communications for children, including apps, video streaming, and gaming. The letter suggests numerous questions for the FTC to get to the core of how digital companies conduct business today, including contemporary Big Data practices that capture, analyze, track, and target children across platforms. “With schools closed across the country, American families are more dependent than ever on digital media to educate and occupy their children,” said CCFC’s Executive Director, Josh Golin. “It’s now urgent that the FTC use its full authority to shed light on the business models of the edtech and children’s digital media industries so we can understand what Big Tech knows about our children and what they are doing with that information. The stakes have never been higher.” “Although children’s privacy is supposed to be protected by federal law and the FTC, young people remain at the epicenter of a powerful data-gathering and commercial online advertising system," said Dr. Katharina Kopp, Deputy Director of the Center for Digital Democracy. “We call on the FTC to investigate how companies use data about children, how these data practices work against children’s interests, and also how they impact low-income families and families of color. 
Before it proposes any changes to the COPPA rules, the FTC needs to obtain detailed insights into how contemporary digital data practices pose challenges to protecting children. Given the outsize intrusion of commercial surveillance into children’s and families’ lives via digital services for education, entertainment, and communication, the FTC must demonstrate it is placing the welfare of kids as its highest priority.” In December, CCFC and CDD led a coalition of 31 groups—including the American Academy of Pediatrics, Center for Science in the Public Interest, Common Sense Media, Consumer Reports, Electronic Privacy Information Center, and Public Citizen—in calling on the FTC to use its subpoena authority. The groups said the Commission must better assess the impacts on children from today’s digital data-driven advertising system, and features such as cross-device tracking, artificial intelligence, machine learning, virtual reality, and real-time measurement. “Childhood is more digital than ever before, and the various ways that children's data is collected, analyzed, and used have never been more complex or opaque,” said Lindsey Barrett, Staff Attorney and Teaching Fellow at IPR’s Communications and Technology Law Clinic at Georgetown Law. “The Federal Trade Commission should shed light on how children's privacy is being invaded at home, at school, and throughout their lives by investigating the companies that profit from collecting their data, and cannot undertake an informed and fact-based revision of the COPPA rules without doing so.” "Children today, more than ever, have an incredible opportunity to learn, play, and socialize online,” said Celia Calano, student attorney at the Institute for Public Representation. “But these modern playgrounds and classrooms come with new safety concerns, including highly technical and obscure industry practices. The first step to improving the COPPA Rule and protecting children online is understanding the current landscape—something the FTC can achieve with a 6(b) investigation." ### -
Google’s (i.e., Alphabet, Inc.) proposed acquisition of Fitbit, a leading health wearable device company, is just one more piece illustrating how the company is actively engaged in shaping the future of public health. It has assembled a sweeping array of assets in the health field, positioning its advertising system to better take advantage of health information, and is playing a proactive role lobbying to promote significant public policy changes for medical data at the federal level that will have major implications (link is external)for Americans and their health.Google understands that there are tremendous revenues to be made gathering data—from patients, hospitals, medical professionals and consumers interested in “wellness”—through the various services that the company offers. It sees a lucrative future as a powerful presence in our health system able to bill Medicare and other government programs. In reviewing the proposed takeover, regulators should recognize that given today’s “connected” economy, and with Google’s capability and intention to generate monetizeable insights from individuals across product categories (health, shopping, financial services, etc.), the deal should not be examined solely within a narrow framework. While the acquisition directly bolsters Google’s growing clout in what is called the “connected-health” marketplace, the company understands that the move is also designed to maintain its dominance in search, video and other digital marketing applications. It’s also a deal that raises privacy concerns, questions about the future direction of the U.S. health system, and what kinds of safeguards—if any at all—will be in place to protect health consumers and patients. As health venture capital fund Rock Health explained in a recent report, “Google acquired Fitbit in a deal that gives the tech giant access to troves of personal health data and healthcare partnerships, in addition to health tracking software.” Fitbit reports that “28 million active users” worldwide use its wearable device products. For Google, Fitbit brings (link is external) a rich layer of personal data, expertise in fitness (link is external) tracking software, heart-rate sensors, as well as relationships with health-service and employee-benefit providers. Wearable devices can provide a stream (link is external)of ongoing data on our activities, physical condition, geolocation and more. In a presentation to investors made in 2018, Fitbit claimed to be the “number one health and fitness” app in the U.S. for both the Android and Apple app store, and considered itself the “number one “wearable brand globally,” available in 47,000 stores, and had “direct applications for health and wellness categories such as diabetes, heart health, and sleep apnea.” “Driving behavior change” is cited as one of the company’s fundamental capabilities, such as its “use of data…to provide insights and guidance.” Fitbit developed a “platform for innovative data collection” for clinical researchers, designed to help advance (link is external) “the use of wearable devices in research and clinical applications. 
Fitbit also has relationships with pharmacies, including those that serves people with “complex health conditions.” Fitbit has also “made a number of moves to expand its Health Services division,” such as its 2018 acquisition of Twine Health, a “chronic disease management platform.” In 2018, it also unveiled a “connected health platform that enables payers and health systems to deliver personalized coaching” to individuals. The company’s Fitbit Health Solutions division is working with more than 100 insurance companies in the U.S., and “both government sponsored and private plans” work with the company. Fitbit Premium was launched last year, which “mines consumer data to provide personalized health insights” for health care delivery. According to Business Insider Intelligence, “Fitbit plans to use the Premium service to get into the management of costly chronic conditions like diabetes, sleep apnea, and hypertension.” The company has dozens of leading “enterprises” and “Fortune 500” companies as customers. It also works with thousands of app developers and other third parties (think Google’s dominance in the app marketplace, such as its Play store). Fitbit has conducted research to understand “the relationship between activity and mood” of people, which offers an array of insights that has applications for health and numerous other “vertical” markets. Even prior to the formal takeover of Fitbit by Google, it had developed strong ties to the digital data marketing giant. It has been a Google Cloud client since 2018, using its machine learning prowess to insert Fitbit data into a person’s electronic health record (EHR). In 2018, Fitbit said that it was going to transfer its “data infrastructure” to the Google Cloud platform. It planned to “leverage Google’s healthcare API” to generate “more meaningful insights” on consumers, and “collaborate on the future of wearables.” Fitbit’s data might also assist Google in forging additional “ties with researchers who want to unlock the constant stream of data” its devices collect. When considering how regulators and others should view this—yet again—significant expansion by Google in the digital marketplace—the following issues must be addressed: Google Cloud and its use of artificial intelligence and machine learning in a new data pipeline for health services, including marketing Google’s Cloud service offers “solutions” (link is external) for the healthcare and life sciences industry, by helping to “personalize patient experiences,” “drive data interoperability,” and improve commercialization and operations”—including for “pharma insights and analytics.” Google Cloud (link is external) has developed a specific “API” (application programming interface) that enables health-related companies to process and analyze their data, by using machine learning technologies, for example. The Health Care Cloud API (link is external)also provides a range of other data functionalities (link is external) for clinical and other uses. Google is now working to help create a “new data infrastructure layer via 3 key efforts,” according to a recent report on the market. It is creating “new data pipes for health giants,” pushing the Google Cloud and building “Google’s own healthcare datasets for third parties.” (See, for example, “G Suite (link is external) for Healthcare Businesses” products as well as its “Apigee API Platform,” which works with the Cleveland Clinic, Walgreens, and others). 
Illustrating the direct connection between the Google Cloud and Google’s digital marketing apparatus is their case study (link is external) of the leading global ad conglomerate, WPP. “Our strong partnership with Google Cloud is key,” said WPP’s CEO, who explained that “their vast experience in advertising and marketing combined with their strength in analytics and AI helps us to deliver powerful and innovative solutions for our clients” (which include (link is external) “369 of the Fortune Global 500, all 30 of the Dow Jones 30 and 71 of the NASDAQ 100”). WPP links the insights and other resources it generates from the Google Cloud to Google’s “Marketing Platform” (link is external) so its clients can “deliver better experiences for their audiences across media and marketing.” Google has made a significant push (link is external) to incorporate the role that machine learning plays with marketing across product categories, including search and YouTube. It is using machine learning to “anticipate needs” of individuals to further its advertising (link is external) business. Fitbit will bring in a significant amount of additional data for Google to leverage in its Cloud services, which impact a number of consumer and commercial markets beyond (link is external) health care. The Fitbit deal also involves Google’s ambitions to become an important force providing healthcare providers access to patient, diagnostic and other information. Currently the market is dominated by others, but Google has plans for this market. For example, it has developed a “potential EHR tool that would empower doctors with the same kind of intuitive and snappy search functionality they've come to expect from Google.” According to Business Insider Intelligence, Google could bundle such applications along with Google Cloud and data analytics support that would help hospitals more easily navigate the move to data heavy (link is external), value-based care (VBC) reimbursement models (link is external).” Google Health already incorporates a wide range of health-related services and investments “Google is already a health company,” according (link is external) to Dr. David Feinberg, the company’s vice president at Google Health. Feinberg explains that they are making strides in organizing and making health data more useful thanks to work being done by Cloud (link is external) and AI (link is external) teams. And looking across the rest of Google’s portfolio of helpful products, we’re already addressing aspects of people’s health. Search helps people answer everyday health questions (link is external), Maps helps get people to the nearest hospital, and other tools and products are addressing issues tangential to health—for instance, literacy (link is external), safer driving (link is external), and air pollution (link is external)…. and in response, Google and Alphabet have invested in efforts that complement their strengths and put users, patients, and care providers first. Look no further than the promising AI research and mobile applications coming from Google and DeepMind Health (link is external), or Verily’s Project Baseline (link is external) that is pushing the boundaries of what we think we know about human health. 
Among Google Health’s initiatives are “studying the use of artificial intelligence to assist in diagnosing (link is external) cancer, predicting (link is external) patient outcomes, preventing (link is external) blindness…, exploring ways to improve patient care, including tools that are already being used by clinicians…, [and] partnering with doctors, nurses, and other healthcare professionals to help improve the care patients receive.” Through its AI work, Google is developing “deep learning” applications for electronic health records. Google Health is expanding its team, including specifically to take advantage of the wearables market (and has also hired a former FDA commissioner to “lead health strategy”). Google is the leading source of search information on health issues, and health-related ad applications are integrated into its core marketing apparatus A billion health-related questions are asked every day on Google’s search engine, some 70,000 every minute (“around 7 percent of Google’s daily searches”). “Dr. Google,” as the company has been called, is asked about conditions, medication, symptoms, insurance questions and more, say company leaders. Google’s ad teams in the U.S. promote how health marketers can effectively use its ad products, including YouTube, as well as understand how to take advantage of what Google has called “the path to purchase.” In a presentation on “The Role of Digital Marketing in the Healthcare Industry,” Google representatives reported that After conducting various studies and surveys, Google has concluded that consumers consult 12.4 resources prior to a hospital visit. When consumers are battling a specific disease or condition, they want to know everything about it: whether it is contagious, how it started, the side-effects, experiences of others who have had the same condition, etc. When doing this research, they will consult YouTube videos, read patient reviews of specific doctors, read blog articles on healthcare websites, read reviews, side-effects, and uses of particular medicines. They want to know everything! When consuming this information, they will choose the business that has established their online presence, has positive reviews, and provides a great customer experience, both online and offline. Among the data shared with marketers was information that “88% of patients use search to find a treatment center,” “60% of patients use a mobile device,” “60% of patients like to compare and validate information from doctors with their own online research,” “56% of patients search for health-related concerns on YouTube,” “5+ videos are watched when researching hospitals or treatment centers,” and that “2 billion health-related videos are on YouTube.” The “Internet is a Patient/Caregiver’s #1 confidant,” they noted. 
They also discussed how mobile technologies have triggered “non-linear paths to purchase,” and that mobile devices are “now the main device used for health searches.” “Search and video are vital to the patient journey,” and “healthcare videos represent one of the largest, fastest growing content segments on YouTube today.” Their presentation demonstrated how health marketers can take advantage of Google’s ability to know a person’s location, as well as how other information related to their behaviors and interests can help them “target the right users in the right context.” To understand the impact of all of Google’s marketing capabilities, one also should review the company’s restructured (and ever-evolving) “Marketing Platform.” Google’s Map Product will be able to leverage Fitbit data Google is using data related to health that are gathered by Google Maps, such as when we do searches for needed care services (think ERs, hospitals, pharmacies, etc.). “The most popular mapping app in the U.S…. presents a massive opportunity to connect its huge user base with healthcare services,” explain Business Insider Intelligence. Google has laid the groundwork with its project addressing the country’s opioid epidemic, linking “Google Maps users with recovery treatment centers,” as well as identifying where Naloxone (the reversal drug for opioid overdoes) is available. Last year, Google Maps launched a partnership with CVS “to help consumers more easily find places to drop off expired drugs.” Through its Waze subsidiary, which provides navigation information for drivers, Google sells ads to urgent care centers, which find new patients as a result of map-based, locally tailored advertisements. Google’s impact on the wearable marketplace, including health, wellness and other apps The acquisition of Fitbit will bolster Google’s position in the wearables market, as well as its direct and indirect role providing access to its own and third-party apps. Google Fit, which “enables Android users to pair health-tracking devices with their phone to monitor activity,” already has partnerships with a number of wearable device companies, such as Nike, Adidas and Noom. Business Intelligencer noted in January 2020 that Google Fit was “created to ensure Android devices have a platform to house user-generated health data (making it more competitive with Apple products). In 2019, Google acquired the smartwatch technology from Fossil. Fitbit will play a role in Google’s plans for its Fit service, such as providing additional data that can be accessed via third parties and made available to medical providers through patients’ electronic health records. The transaction, said one analyst, “is partly a data play,” and also one intended to keep customers from migrating from its Android platform to Apple’s. It is designed, they suggest, to ensure that Google can benefit from the sales of health-related services during the peak earning years of consumers. The Google Play app store offers access to an array of health and wellness apps that will be impacted by this deal. Antitrust authorities in the EU have already sanctioned Google for the way it has leveraged its Android platform for anti-competitive behavior. Google’s health related investments, including its use of artificial intelligence, and the role of Fitbit data Verily is “where Alphabet is doing the bulk of its healthcare work,” according to a recent report on the role AI plays in Google’s plans to “reinvent the $3 Trillion U.S. 
healthcare industry.” Verily is “focused on using data to improve healthcare via analytics tools, interventions, research” and other activities, partnering with “existing healthcare institutions to find areas to apply AI.” One of these projects is the “Study Watch, a wearable device that captures biometric data.” Verily has also made significant investments globally as it seeks to expand. DeepMind works on AI research, including how it is applicable to healthcare. Notably, DeepMind is working with the UK’s National Health Service. Another subsidiary, Calico, uses AI as part of its focus to address aging and age-related illnesses. Additionally, “GV” (Google Ventures) makes health-related investments. According to the CB Insights report, “Google’s strategy involves an end-to-end approach to healthcare, including: Data generation — This includes digitizing and ingesting data produced by wearables, imaging, and MRIs among other methods. This data stream is critical to AI-driven anomaly detection; Disease detection — Using AI to detect anomalies in a given dataset that might signal the presence of some disease; and Disease/lifestyle management — These tools help people who have been diagnosed with a disease or are at risk of developing one go about their day-to-day lives and/or make positive lifestyle modifications. Google has also acquired companies that directly further its health business capabilities, such as Apigee, Senosis Health and others. Google’s continuous quest to gather more health data, such as “Project Nightingale,” has already raised concerns. There are now also investigations of Google by the Department of Justice and State Attorney’s-General. The Department of Justice, which is currently reviewing the Google/Fitbit deal, should not approve it without first conducting a thorough review of the company’s health-related business operations, including the impact (including for privacy) that Fitbit data will have on the marketplace. This should be made a part of the current ongoing antitrust investigation into Google by both federal and state regulators. Congress should also call on the DoJ, as well as the FTC, to review this proposed acquisition in light of the changes that digital applications are bringing to health services in the U.S. This deal accompanies lobbying from Google and others that is poised to open the floodgates of health data that can be accessed by patients and an array of commercial and other entities. The Department of Health and Human Services has proposed a rule on data “interoperability” that, while ostensibly designed to help empower health services users to have access to their own data, is also a “Trojan Horse” designed to enable app developers and other commercial entities to harvest that data as an important new profit center. “The Trump Administration has made the unfettered sharing of health data a health IT priority,” explained one recent news report. Are regulators really ready to stop further digital consolidation? The diagnosis is still out! For a complete annotated version, please see attached pdf
-
In March 2018, The New York Times and The Guardian/Observer broke an explosive story that Cambridge Analytica, a British data firm, had harvested more than 50 million Facebook profiles and used them to engage in psychometric targeting during the 2016 US presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal erupted amid ongoing concerns over Russian use of social media to interfere in the electoral process. The new revelations triggered a spate of congressional hearings and cast a spotlight on the role of digital marketing and “big data” in elections and campaigns. The controversy also generated greater scrutiny of some of the most problematic tech industry practices — including the role of algorithms on social media platforms in spreading false, hateful, and divisive content, and the use of digital micro-targeting techniques for “voter suppression” efforts (Green & Issenberg; 2016; Howard, Woolley, & Calo, 2018). In the wake of these cascading events, policymakers, journalists, and civil society groups have called for new laws and regulations to ensure transparency and accountability in online political advertising.Twitter and Google, driven by growing concern that they will be regulated for their political advertising practices, fearful of being found in violation of the General Data Protection Regulation (GDPR) in the European Union, and cognisant of their own culpability in recent electoral controversies, have each made significant changes in their political advertising policies (Dorsey, 2019; Spencer, 2019). Despite a great deal of public hand wringing, on the other hand, US federal policymakers have failed to institute any effective remedies even though several states have enacted legislation designed to ensure greater transparency for digital political ads (California Clean Money Campaign, 2019; Garrahan, 2018). These recent legislative and regulatory initiatives in the US are narrow in scope and focused primarily on policy approaches to political advertising in more traditional media, failing to hold the tech giants accountable for their deleterious big data practices.On the eve of the next presidential election in 2020, the pace of innovation in digital marketing continues unabated, along with its further expansion into US electoral politics. These trends were clearly evident in the 2018 election, which, according to Kantar Media, were “the most lucrative midterms in history”, with $5.25 billion USD spent for ads on local broadcast cable TV, and digital — outspending even the 2016 presidential election. Digital ad spending “quadrupled from 2014” to $950 million USD for ads that primarily ran on Facebook and Google (Axios, 2018; Lynch, 2018). In the upcoming 2020 election, experts are forecasting overall spending on political ads will be $6 billion USD, with an “expected $1.6 billion to be devoted to digital video… more than double 2018 digital video spending” (Perrin, 2019). Kantar (2019), meanwhile, estimates the portion spent for digital media will be $1.2 billion USD in the 2019-2020 election cycle.In two earlier papers, we documented a number of digital practices deployed during the 2016 elections, which were emblematic of how big data systems, strategies and techniques were shaping contemporary political practice (Chester & Montgomery, 2017, 2018). Our work is part of a growing body of interdisciplinary scholarship on the role of data and digital technologies in politics and elections. 
Various terms have been used to describe and explain these practices — from computational politics to political micro-targeting to data-driven elections (Bodó, Helberger, & de Vreese, 2017; Bennett, 2016; Karpf, 2016; Kreiss, 2016; Tufekci, 2014). All of these labels highlight the increasing importance of data analytics in the operations of political parties, candidate campaigns, and issue advocacy efforts. But in our view, none adequately captures the full scope of recent changes that have taken place in contemporary politics. The same commercial digital media and marketing ecosystem that has dramatically altered how corporations engage with consumers is now transforming the ways in which campaigns engage with citizens (Chester & Montgomery, 2017).We have been closely tracking the growth of this marketplace for more than 25 years, in the US and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007, 2015; Montgomery & Chester, 2009; Montgomery, Chester, Grier, & Dorfman, 2012; Montgomery, Chester, & Kopp, 2018). CDD has worked closely with leading EU civil society and data protection NGOs to address digital marketplace issues. Our work has included providing analysis to EU-based groups to help them respond critically to Google’s acquisition of DoubleClick in 2007 as well as Facebook’s purchase of WhatsApp in 2014. Our research has also been informed by a growing body of scholarship on the role that commercial and big data forces are playing in contemporary society. For example, advocates, legal experts, and scholars have written extensively about the data and privacy concerns raised by this commercial big data digital marketing system (Agre & Rotenberg, 1997; Bennett, 2008; Nissenbaum, 2009; Schwartz & Solove, 2011). More recent research has focused increasingly on other, and in many ways more troubling, aspects of this system. This work has included, for example, research on the use of persuasive design (including “mass personalisation” and “dark patterns”) to manage and direct human behaviours; discriminatory impacts of algorithms; and a range of manipulative practices (Calo, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Susser, Roessler, & Nissenbaum, 2019; Zarsky, 2019; Zuboff, 2019). As digital marketing has migrated into electoral politics, a growing number of scholars have begun to examine the implications of these problematic practices on the democratic process (Gorton, 2016; Kim et al., 2018; Kreiss & Howard, 2010; Rubinstein, 2014; Bashyakarla et al., 2019; Tufekci, 2014).The purpose of this paper is to serve as an “early warning system” — for policymakers, journalists, scholars, and the public — by identifying what we see as the most important industry trends and practices likely to play a role in the next major US election, and flagging some of the problems and issues raised. Our intent is not to provide a comprehensive analysis of all the tools and techniques in what is frequently called the “politech” marketplace. The recent Tactical Tech (Bashyakarla et al, 2019) publication, Personal Data: Political Persuasion, provides a highly useful compendium on this topic. 
Rather, we want to show how further growth and expansion of the big data digital marketplace is reshaping electoral politics in the US, introducing both candidate and issue campaigns to a system of sophisticated software applications and data-targeting tools that are rooted in the goals, values, and strategies for influencing consumer behaviours.1 (link is external) Although some of these new digitally enabled capabilities are extensions of longstanding political practices that pre-date the internet, others are a significant departure from established norms and procedures. Taken together, they are contributing to a major shift in how political campaigns conduct their operations, raising a host of troubling issues concerning privacy, security, manipulation, and discrimination. All of these developments are taking place, moreover, within a regulatory structure that is weak and largely ineffectual, posing daunting challenges to policymakers.In the following pages, we: 1) briefly highlight five key developments in the digital marketing industry since the 2016 election that are influencing the operations of political campaigns and will likely affect the next election cycle; 2) discuss the implications of these trends and techniques for the ongoing practice of contemporary politics, with a special focus on their potential for manipulation and discrimination; 3) assess both the technology industry responses and recent policy initiatives designed to address political advertising in the US; and 4) offer our own set of recommendations for regulating political ad and data practices.The growing big data commercial and political marketing systemIn the upcoming 2020 elections, the US is likely to witness an extremely hard-fought, under-the-radar, innovative, and in many ways disturbing set of races, not only for the White House but also for down-ballot candidates and issue groups. Political campaigns will be able to avail themselves of the current state-of-the-art big data systems that were used in the past two elections, along with a host of recent advances developed by commercial marketers. Several interrelated trends in the digital media and marketing industry are likely to play a particularly influential role in shaping the use of digital tools and strategies in the 2020 election. We discuss them briefly below:Recent mergers and partnerships in the media and data industries are creating new synergies that will extend the reach and enhance the capabilities of contemporary political campaigns. In the last few years, a wave of mergers and partnerships has taken place among platforms, data brokers, advertising exchanges, ad agencies, measurement firms and companies specialising in advertising technologies (so-called “ad-tech”). This consolidation has helped fuel the unfettered growth of a powerful digital marketing ecosystem, along with an expanding spectrum of software systems, specialty firms, and techniques that are now available to political campaigns. For example, AT&T (n.d.), as part of its acquisition of Time Warner Media, has re-launched its digital ad division, now called Xandr (n.d.). It also acquired the leading programmatic ad platform AppNexus.Leading multinational advertising agencies have made substantial acquisitions of data companies, such as the Interpublic Group (IPG) purchase of Acxiom in 2018 and the Publicis Groupe takeover of Epsilon in 2019. One of the “Big 3” consumer credit reporting companies, TransUnion (2019), bought TruSignal, a leading digital marketing firm. 
Such deals enable political campaigns and others to easily access more information to profile and target potential voters (Williams, 2019).In the already highly consolidated US broadband access market, only a handful of giants provide the bulk of internet connections for consumers. The growing role of internet service providers (ISPs) in the political ad market is particularly troubling, since they are free from any net neutrality, online privacy or digital marketing rules. Acquisitions made by the telecommunications sector are further enabling ISPs and other telephony companies to monetise their highly detailed subscriber data, combining it with behavioural data about device use and content preferences, as well as geolocation. (Schiff, 2018).Increasing sophistication in “identity resolution” technologies, which take advantage of machine learning and artificial intelligence applications, is enabling greater precision in finding and reaching individuals across all of their digital devices. The technologies used for what is known as “identity resolution” have evolved to enable marketers — and political groups — to target and “reach real people” with greater precision than ever before. Marketers are helping perfect a system that leverages and integrates, increasingly in real-time, consumer profile data with online behaviours to capture more granular profiles of individuals, including where they go, and what they do (Rapp, 2018). Facebook, Google and other major marketers are also using machine learning to power prediction-related tools on their digital ad platforms. As part of Google’s recent reorganisation of its ad system (now called the “Google Marketing Platform”), the company introduced machine learning into its search advertising and YouTube businesses (Dischler, 2018; Sluis, 2018). It also uses machine learning for its “Dynamic Prospecting” system, which is connected to an “Automatic Targeting” apparatus that enables more precise tracking and targeting of individuals (Google, n.d.-a-b). Facebook (2019) is enthusiastically promoting machine learning as a fundamental advertising tool, urging advertisers to step aside and let automated systems make more ad-targeting decisions.Political campaigns have already embraced these new technologies, even creating a special category in the industry awards for “Best Application of Artificial Intelligence or Machine Learning”, “Best Use of Data Analytics/Machine Learning”, and “Best Use of Programmatic Advertising” (“2019 Reed Award Winners”, 2019; American Association of American Political Consultants, 2019). For example, Resonate, a digital data marketing firm, was recognised in 2018 for its “Targeting Alabama’s Conservative Media Bubble”, which relied on “artificial intelligence and advanced predictive modeling” to analyse in real-time “more than 15 billion page loads per day. According to Resonate, this process identified “over 240,000 voters” who were judged to be “persuadable” in a hard-fought Senate campaign (Fitzpatrick, 2018). Similar advances in data analytics for political efforts are becoming available for smaller campaigns (Echelon Insights, 2019). WPA Intelligence (2019) won a 2019 Reed Award for its data analytics platform that generated “daily predictive models, much like microtargeting advanced traditional polling. This tool was used on behalf of top statewide races to produce up to 900 million voter scores, per night, for the last two months of the campaign”. 
Deployment of these techniques was a key influence in spending for the US midterm elections (Benes, 2018; Loredo, 2016; McCullough, 2016).Political campaigns are taking advantage of a rapidly maturing commercial geo-spatial intelligence complex, enhancing mobile and other geotargeting strategies. Location analytics enable companies to make instantaneous associations between the signals sent and received from Wi-Fi routers, cell towers, a person’s devices and specific locations, including restaurants, retail chains, airports, stadiums, and the like (Skyhook, n.d.). These enhanced location capabilities have further blurred the distinction between what people do in the “offline” physical world and their actions and behaviours online, giving marketers greater ability both to “shadow” and to reach individuals nearly anytime and anywhere.A political “geo-behavioural” segment is now a “vertical” product offered alongside more traditional online advertising categories, including auto, leisure, entertainment and retail. “Hyperlocal” data strategies enable political campaigns to engage in more precise targeting in communities (Mothership Strategies, 2018). Political campaigns are also taking advantage of the widespread use of consumer navigation systems. Waze, the Google-owned navigational firm, operates its own ad system but also is increasingly integrated into the Google programmatic platform (Miller, 2018). For example, in the 2018 midterm election, a get-out-the-vote campaign for one trade group used voter file and Google data to identify a highly targeted segment of likely voters, and then relied on Waze to deliver banner ads with a link to an online video (carefully calibrated to work only when the app signalled the car wasn’t moving). According to the political data firm that developed the campaign, it reached “1 million unique users in advance of the election” (Weissbrot, 2019, April 10).Political television advertising is rapidly expanding onto unregulated streaming and digital video platforms. For decades, television has been the primary medium used by political campaigns to reach voters in the US. Now the medium is in the process of a major transformation that will dramatically increase its central role in elections (IAB, n.d.-a). One of the most important developments during the past few years is the expansion of advertising and data-targeting capabilities, driven in part by the rapid adoption of streaming services (so-called “Over the Top” or “OTT”) and the growth of digital video (Weissbrot, 2019, October 22). Leading OTT providers in the US are actively promoting their platform capabilities to political campaigns, making streaming video a new battleground for influencing the public. For example, a “Political Data Cloud” offered by OTT specialist Tru Optik (2019) enables “political advertisers to use both OTT and streaming audio to target specific voter groups on a local, state or national level across such factors as party affiliation, past voting behavior and issue orientation. Political data can be combined with behavioral, demographic and interest-based information, to create custom voter segments actionable across over 80 million US homes through leading publishers and ad tech platforms” (Lerner, 2019).While political advertising on broadcast stations and cable television systems has long been subject to regulation by the US Federal Communications Commission, newer streaming television and digital video platforms operate outside of the regulatory system (O’Reilly, 2018). 
According to research firm Kantar “political advertisers will be able to air more spots on these streaming video platforms and extend the reach of their messaging—particularly to younger voters” (Lafayette, 2019). These ads will also be part of cross-device campaigns, with videos showing up in various formats on mobile devices as well.The expanding role of digital platforms enables political campaigns to access additional sources of personal data, including TV programme viewing patterns. For example, in 2018, Altice and smart TV company Vizio launched a new partnership to take advantage of recent technologies now being deployed to deliver targeted advertising, incorporating viewer data from nearly nine million smart TV sets into “its footprint of more than 90 million households, 85% of broadband subscribers and one billion devices in the U.S.” (Clancy, 2018). Vizio’s Inscape (n.d.) division produces technology for smart TVs, offering what is known as “automatic content recognition” (ACR) data. According to Vizio, ACR enables what the industry calls “glass level” viewing data, using “screen level measurement to reveal what programs and ads are being watched in near-real time”, and incorporating the IP address from any video source in use (McAfee, 2019). Campaigns have demonstrated the efficacy of OTT’s role. AdVictory (n.d.) modelled “387,000 persuadable cord cutters and 1,210 persuadable cord shavers” (the latter referring to people using various forms of streaming video) to make a complex media buy in one state-wide gubernatorial race that reached 1.85 million people “across [video] inventory traditionally untouched by campaigns”.Further developments in personalisation techniques are enabling political campaigns to maximise their ability to test an expanding array of messaging elements on individual voters. Micro-targeting now involves a more complex personalisation process than merely using so-called behavioural data to target an individual. The use of personal data and other information to influence a consumer is part of an ever-evolving, orchestrated system designed to generate and then manage an individual’s online media and advertising experiences. Google and Facebook, in particular, are adept at harvesting the latest innovations to advance their advertising capabilities, including data-driven personalisation techniques that generate hundreds of highly granular ad-campaign elements from a single “creative” (i.e., advertising message). These techniques are widely embraced by the digital marketing industry, and political campaigns across the political spectrum are being encouraged to expand their use for targeting voters (Meuse, 2018; Revolution Marketing, n.d.; Schuster, 2015). The practice is known by various names, including “creative versioning”, “dynamic creative”, and “Dynamic Creative Optimization”, or DCO (Shah, 2019). Google’s creative optimisation product, “Directors Mix” (formerly called “Vogon”), is integrated into the company’s suite of “custom affinity audience targeting capabilities, which includes categories related to politics and many other interests”. This product, it explains, is designed to “generate massively customized and targeted video ad campaigns” (Google, n.d.-c). Marketing experts say that Google now enables “DCO on an unprecedented scale”, and that YouTube will be able to “harness the immense power of its data capabilities…” (Mindshare, 2017). 
Directors Mix can tap into Google’s vast resources to help marketers influence people in various ways, making it “exceptionally adept at isolating particular users with particular interests” (Boynton, 2018). Facebook’s “Dynamic Creative” can help transform a single ad into as many as “6,250 unique combinations of title, image/video, text, description and call to action”, available to target people on its news feed, Instagram and outside of Facebook’s “Audience Network” ad system (Peterson, 2017).Implications for 2020 and beyondWe have been able to provide only a partial preview of the digital software systems and tools that are likely to be deployed in US political campaigns during 2020. It’s already evident that digital strategies will figure even more centrally in the upcoming campaigns than they have in previous elections (Axelrod, Burke, & Nam, 2019; Friedman, 2018, June 19). Many of the leading Democratic candidates, and President Trump, who has already ramped up his re-election campaign apparatus, have extensive experience and success in their use of digital technology. Brad Parscale, the campaign manager for Trump’s re-election effort, explained in 2019 that “in every single metric, we’re looking at being bigger, better, and ‘badder’ than we were in 2016,” including the role that “new technologies” will play in the race (Filloux, 2019).On the one hand, these digital tools could be harnessed to create a more active and engaged electorate, with particular potential to reach and mobilise young voters and other important demographic groups. For example, in the US 2018 midterm elections, newcomers such as Congresswoman Alexandria Ocasio-Cortez, with small budgets but armed with digital media savvy, were able to seize the power of social media, mobile video, and other digital platforms to connect with large swaths of voters largely overlooked by other candidates (Blommaert, 2019). The real-time capabilities of digital media could also facilitate more effective get-out-the-vote efforts, targeting and reaching individuals much more efficiently than in-person appeals and last-minute door-to-door canvassing (O’Keefe, 2019).On the other hand, there is a very real danger that many of these digital techniques could undermine the democratic process. For example, in the 2016 election, personalised targeted campaign messages were used to identify very specific groups of individuals, including racial minorities and women, delivering highly charged messages designed to discourage them from voting (Green & Issenberg, 2016). These kinds of “stealth media” disinformation efforts take advantage of “dark posts” and other affordances of social media platforms (Young et al., 2018).Though such intentional uses (or misuses) of digital marketing tools have generated substantial controversy and condemnation, there is no reason to believe they will not be used again. Campaigns will also be able to take advantage of a plethora of newer and more sophisticated targeting and message-testing tools, enhancing their ability to fine tune and deliver precise appeals to the specific individuals they seek to influence, and to reinforce the messages throughout that individual’s “media journey”.But there is an even greater danger that the increasingly widespread reliance on commercial ad technology tools in the practice of politics will become routine and normalised, subverting independent and autonomous decision making, which is so essential to an informed electorate (Burkell & Regan, 2019; Gorton, 2016). 
For example, so-called “dynamic creative” advertising systems are in some ways extensions of A/B testing, which has been a longstanding tool in political campaigns. However, today’s digital incarnation of the practice makes it possible to test thousands of message variations, assessing how each individual responds to them, and changing the content in real time and across media in order to target and retarget specific voters. The data available for this process are extensive, granular, and intimate, incorporating personal information that extends far beyond the conventional categories, encompassing behavioural patterns, psychographic profiles, and TV viewing histories. Such techniques are inherently manipulative (Burkell & Regan, 2019; Gorton, 2016; Susser, Roessler, & Nissenbaum, 2019). The increasing use of digital video, in all of its new forms, raises similar concerns, especially when delivered to individuals through mobile and other platforms, generating huge volumes of powerful, immersive, persuasive content, and challenging the ability of journalists and scholars to review claims effectively. AI, machine learning, and other automated systems will be able to make predictions on behaviours and have an impact on public decision-making, without any mechanism for accountability. Taken together, all of these data-gathering, -analysis, and -targeting tools raise the spectre of a growing political surveillance system, capable of capturing unlimited amounts of detailed and highly sensitive information on citizens and using it for a variety of purposes. The increasing predominance of the big data political apparatus could also usher in a new era of permanent campaign operations, where individuals and groups throughout the country are continually monitored, targeted, and managed.Because all of these systems are part of the opaque and increasingly automated operations of digital commercial marketing, the techniques, strategies, and messages of the upcoming campaigns will be even less transparent than before. In the heat of a competitive political race, campaigns are not likely to publicise the full extent of their digital operations. As a consequence, journalists, civil society groups, and academics may not be able to assess them fully until after the election. Nor will it be enough to rely on documenting expenditures, because digital ads can be inexpensive, purposefully designed to work virally and aimed at garnering “free media”, resulting in a proliferation of messages that evade categorisation or accountability as “paid political advertising”.Some scholars have raised doubts about the effectiveness of contemporary big data and digital marketing applications when applied to the political sphere, and the likelihood of their widespread adoption (Baldwin-Philippi, 2017). It is true we are in the early stages of development and implementation of these new tools, and it may be too early to predict how widely they will be used in electoral politics, or how effective they might be. However, the success of digital marketing worldwide in promoting brands and products in the consumer marketplace, combined with the investments and innovations that are expanding its ability to deliver highly measured impacts, suggest to us that these applications will play an important role in our political and electoral affairs. 
The digital marketing industry has developed an array of measurement approaches to document their impact on the behaviour of individuals and communities (Griner, 2019; IAB Europe, 2019; MMA, 2019). In the no-holds-barred environment of highly competitive electoral politics, campaigns are likely to deploy these and other tools at their disposal, without restraint. There are enough indications from the most recent uses of these technologies in the political arena to raise serious concerns, making it particularly urgent to monitor them very closely in upcoming elections.Industry and legislative initiativesThe largest US technology companies have recently introduced a succession of internal policies and transparency measures aimed at ensuring greater platform responsibility during elections. In November 2019, Twitter announced it was prohibiting the “promotion of political content”, explaining that it believed that “political message reach should be earned, not bought”. CEO Jack Dorsey (2019) was remarkably frank in explaining why Twitter had made this decision: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale”.That same month, Google unveiled policy changes of its own, including restricting the kinds of internal data capabilities available to political campaigns. As the company explained, “we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level)”. Google also announced it was “clarifying” its ads policies and “adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” (Spencer, 2019). It remains to be seen whether such changes as Google’s and Twitter’s will actually alter, in any significant way, the contemporary operations of data-driven political campaigns. Some observers believe that Google’s new policy will benefit the company, noting that “by taking away the ability to serve specific audiences content that is most relevant to their values and interests, Google stands to make a lot MORE money off of campaigns, as we’ll have to spend more to find and reach our intended audiences” (“FWIW: The Platform Self-regulation Dumpster Fire”, 2019).Interestingly, Facebook, the tech company that has been subject to the greatest amount of public controversy over its political practices, had not, at the time of this writing, made similar changes in its political advertising policies. Though the social media giant has been widely criticised for its refusal to fact-check political ads for accuracy and fairness, it has not been willing to institute any mechanisms for intervening in the content of those ads (Ingram, 2018; Isaac, 2019; Kafka, 2019). However, Facebook did announce in 2018 that it was ending its participation in the industry-wide practice of embedding, which involved sales teams working hand-in-hand with leading political campaigns (Ingram, 2018; Kreiss & McGregor, 2017). 
After a research article generated extensive news coverage of this industry-wide marketing practice, Facebook publicly announced it would cease the arrangement, instead “offering tools and advice” through a politics portal that provides “candidates information on how to get their message out and a way to get authorised to run ads on the platform” (Emerson, 2018; Jeffrey, 2018). In May 2019, the company also announced it would stop paying commissions to employees who sell political ads (Glazer & Horowitz, 2019). Such a move may not have a major effect on sales, however, especially since the tech giant has already generated significant income from political advertising for the 2020 campaign (Evers-Hillstrom, 2019).Under pressure from civil rights groups over discriminatory ad targeting practices in housing and other areas, Facebook has undergone an extensive civil rights audit, which has resulted in a number of internal policy changes, including some practices related to campaigns and elections. For example, the company announced in June 2019 that it had “strengthened its voter suppression policy” to prohibit “misrepresentations” about the voting process, as well as any “threats of violence related to voting”. It has also committed to making further changes, including investments designed to prevent the use of the platform “to manipulate U.S. voters and elections” (Sandberg, 2019).Google, Facebook, and Twitter have all established online archives to enable the public to find information on the political advertisements that run on their platforms. But these databases provide only a limited range of information. For example, Google’s (2018) archive contains copies of all political ads run on the platform, shows the amount spent overall and on specific ads by a campaign, as well as age range, gender, area (state) and dates when an ad appeared, but does not share the actual “targeting criteria” used by political campaigns (Walker, 2018). Facebook’s (n.d.-b) Ad Library describes itself as a “comprehensive, searchable collection of all ads currently running across Facebook Products”. It claims to provide “data for all ads related to politics or to issues of national importance” that have run on its platform since May 2018 (Sullivan, 2019). While the data include breakdowns on the age, gender, state where it ran, number of impressions and spending for the ad, no details are provided to explain how the ad was constructed, tested, and altered, or what digital ad targeting techniques were used. For example, Facebook (n.d.-a-e) permits US-based political campaigns to use its “Custom or Lookalike Audiences” ad-targeting product, but it does not report such use in its ad library. Though all of these new transparency systems and ad archives offer useful information, they also place a considerable burden on users. Many of these new measures are likely to be more valuable for watchdog organisations and journalists, who can use the information to track spending, identify emerging trends, and shed additional light on the process of digital political influence.While these kinds of changes in platform policies and operations should help to mitigate some of the more egregious uses of social media by unscrupulous campaigns and other actors, they are not likely to alter in any major way the basic operations of today’s political advertising practices. 
With each tech giant instituting its own set of internal ad policies, there are no clear industry-wide “rules-of-the-game” that apply to all participants in the digital ecosystem. Nor are there strong transparency or accountability systems in place to ensure that the policies are effective. Though platform companies may institute changes that appear to offer meaningful safeguards, other players in the highly complex big data marketing infrastructure may offer ways to circumvent these apparent restrictions. As a case in point, when Facebook (2018, n.d.-c) announced in the wake of the Cambridge Analytica scandal that it was “shutting down Partner Categories”, the move provoked alarm inside the ad-tech industry that a set of powerful applications was being withdrawn (Villano, 2018). The product had enabled marketers to incorporate data provided by Facebook’s selected partners, including Acxiom and Epsilon (Pathak, 2018). However, despite the policy change, Facebook still enables marketers to bring a tremendous amount of third-party data to Facebook for targeting (Popkin, 2019). Indeed, shortly after Facebook’s announcement, LiveRamp offered assurances to its clients that no significant changes had been made, explaining that “while there’s a lot happening in our industry, LiveRamp customers have nothing to fear” (Carranza, 2018).The controversy generated by recent foreign interference in US elections has also fuelled a growing call to update US election laws. However, the current policy debate over regulation of political advertising continues to be waged within a very narrow framework, which needs to be revisited in light of current digital practices. Legislative proposals have been introduced in Congress that would strengthen the disclosure requirements for digital political ads regulated by the Federal Election Commission (FEC). For example, under the Honest Ads Act, digital media platforms would be required to provide information about each ad via a “public political file”, including who purchased the ad, when it appeared, how much was spent, as well as “a description of the targeted audience”. Campaigns would also be required to provide the same information for online political ads that are required for political advertising in other media. The proposed legislation currently has the support of Google, Facebook, Twitter and other leading companies (Ottenfeld, 2018, April 25). A more ambitious bill, the For the People Act is backed by the new Democratic majority in the House of Representatives, and includes similar disclosure requirements, along with a number of provisions aimed at reducing “the influence of big money in politics”. Though these bills are a long-overdue first step toward bringing transparency measures into the digital age, neither of them addresses the broad range of big data marketing and targeting practices that are already in widespread use across political campaigns. And it is doubtful whether either of these limited policy approaches stands a chance of passage in the near future. 
There is strong opposition to regulating political campaign and ad practices at the federal level, primarily because of what critics claim would be violations of the free speech principle of the US First Amendment (Brodey, 2019).While the prospects for regulating political advertising appear dim at the present time, there is a strong bi-partisan move in Congress to pass federal privacy legislation that would regulate commercial uses of data, which could, in turn, affect the operations, tools, and techniques available for digital political campaigns. Google, Facebook, and other digital data companies have long opposed any comprehensive privacy legislation. But a number of recent events have combined to force the industry to change its strategy: the implementation of the EU General Data Protection Regulation (GDPR) and the passage of state privacy laws (especially in California); the seemingly never-ending news reports on Facebook’s latest scandal; massive data breaches of personal information; accounts of how online marketers engage in discriminatory practices and promote hate speech; and the continued political fallout from “Russiagate”. Even the leading tech companies are now pushing for privacy legislation, if only to reduce the growing political pressure they face from the states, the EU, and their critics (Slefo, 2019). Also fuelling the debate on privacy are growing concerns over digital media industry consolidation, which have triggered calls by political leaders as well as presidential candidates to “break up” Amazon and Facebook (Lecher, 2019). Numerous bills have been introduced in both houses of Congress, with some incorporating strong provisions for regulating both data use and marketing techniques. However, as the 2020 election cycle gets underway, the ultimate outcome of this flurry of legislative activity is still up in the air (Kerry, 2019).Opportunities for interventionGiven the uncertainty in the regulatory and self-regulatory environment, there is likely to be little or no restraint in the use of data-driven digital marketing practices in the upcoming US elections. Groups from across the political spectrum, including both campaigns and special interest groups will continue to engage in ferocious digital combat (Lennon, 2018). With the intense partisanship, especially fuelled by what is admittedly a high-stakes-for-democracy election (for all sides), as well as the current ease with which all of the available tools and methods are deployed, no company or campaign will voluntarily step away from the “digital arms race” that US elections have become. Given what is expected to be an extremely close race for the Electoral College that determines US presidential elections, 2020 is poised to see both parties use digital marketing techniques to identify and mobilise the handful of voters needed to “swing” a state one way or another (Schmidt, 2019).Campaigns will have access to an unprecedented amount of personal data on every voter in the country, drawing from public sources as well as the growing commercial big data infrastructure. 
As a consequence, the next election cycle will be characterised by ubiquitous political targeting and messaging, fed continuously through multiple media outlets and communication devices.At the same time, the concerns over continued threats of foreign election interference, along with the ongoing controversy triggered by the Cambridge Analytica/Facebook scandal, have re-energised campaign reform and privacy advocates and engaged the continuing interest of watchdog groups and journalists. This heightened attention on the role of digital technologies in the political process has created an unprecedented window of opportunity for civil society groups, foundations, educators, and other key stakeholders to push for broad public policy and structural changes. Such an effort would need to be multi-faceted, bringing together diverse organisations and issue groups, and taking advantage of current policy deliberations at both the federal and state levels.In other western democracies, governments and industry organisations have taken strong proactive measures to address the use of data-driven digital marketing techniques by political parties and candidates. For example, the Institute for Practitioners in Advertising (IPA), a leading UK advertising organisation, has called for a “moratorium on micro-targeted political advertising online”. “In the absence of regulation”, the IPA explained, “we believe this almost hidden form of political communication is vulnerable to abuse”. Leading members of the UK advertising industry, including firms that work on political campaigns, have endorsed these recommendations (Oakes, 2018). The UK Information Commissioner’s Office (ICO, 2018), which regulates privacy, conducted an investigation of recent digital political practices, and issued a report urging the government to “legislate at the earliest opportunity to introduce a statutory code of practice” addressing the “use of personal information in political campaigns” (Denham, 2018). In Canada, the Privacy Commissioner offered “guidance” to political parties in their use of data, including “Best Practices” for requiring consent when using personal information (Office of the Privacy Commissioner of Canada, 2019). The European Council (2019) adopted a similar set of policies requiring political parties to adhere to EU data protection rules.We recognise that the United States has a unique regulatory and legal system, where First Amendment protections of free speech have limited regulation of political campaigns. However, the dangers that big data marketing operations pose to the integrity of the political process require a rethinking of policy approaches. A growing number of legal scholars have begun to question whether political uses of data-driven digital marketing should be afforded the same level of First Amendment protections as other forms of political speech (Burkell & Regan, 2019; Calo, 2013; Rubinstein, 2014; Zarsky, 2019). 
“The strategies of microtargeting political ads”, explain Jacquelyn Burkell and Priscilla Regan (2019), “are employed in the interests not of informing, or even persuading voters but in the interests of appealing to their non-rational biases as defined through algorithmic profiling”.

Advocates and policymakers in the US should explore a range of legal and regulatory strategies, developing a broad policy agenda that encompasses data protection and privacy safeguards; robust transparency, reporting and accountability requirements; restrictions on certain digital advertising techniques; and limits on campaign spending. For example, disclosure requirements for digital media need to be much more comprehensive. At the very least, campaigns, platforms and networks should be required to disclose fully all the ad and data practices they use (e.g., cross-device tracking, lookalike modelling, geolocation, measurement, neuromarketing), as well as the variations of ads delivered through dynamic creative optimisation and similar AI applications (an illustrative sketch of lookalike modelling follows this article). Some techniques, especially those that are inherently manipulative, should not be allowed in political campaigns at all. Greater attention will also need to be paid to the uses of data and targeting techniques, articulating distinctions between those designed to promote robust participation, such as “Get Out the Vote” efforts, and those whose purpose is to discourage voters from exercising their rights at the ballot box. Limits should be placed on the sources and amount of data collected on voters. Political parties, campaigns, and political action committees should not be allowed unfettered access to consumer profile data, and voters should have the right to provide affirmative consent (“opt-in”) before any of their information can be used for political purposes. Policymakers should be required to stay abreast of fast-moving innovations in the technology and marketing industries, identifying the uses and abuses of digital applications for political purposes, such as the way WhatsApp was deployed for “computational propaganda” during recent elections in Brazil (Magenta, Gragnani, & Souza, 2018).

In addition to pushing for government policies, advocates should place pressure on the major technology industry players and political institutions through grassroots campaigns, investigative journalism, litigation, and other measures. If there is to be any reform in the US, there must be multiple and continuous points of pressure. The two major political parties should be encouraged to adopt a proposed new best-practices code. Advocates should also consider the model developed by civil rights groups and their allies in the US, who negotiated successfully with Google, Facebook and others to develop more responsible and accountable marketing and data practices (Peterson & Marte, 2016); similar efforts could focus on political data and ad practices. NGOs, academics, and other entities outside the US should also be encouraged to raise public concerns.

All of these efforts would help ensure that the US electoral process operates with integrity, protects privacy, and does not engage in discriminatory practices designed to diminish debate and undermine full participation.

Citations available via: https://policyreview.info/articles/analysis/digital-commercialisation-us...

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon: https://policyreview.info/data-driven-elections
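To make the lookalike-modelling technique referenced above concrete, the following is a minimal, purely illustrative sketch of the underlying idea: a “seed” audience of known supporters is expanded to similar voters by comparing behavioural profile vectors. All of the data, feature names and numbers here are hypothetical; the platforms’ production systems are proprietary and far more elaborate.

```python
import numpy as np

# Hypothetical voter-profile matrix: one row per person, one column per
# tracked behavioural signal (time on political pages, donation history,
# location-derived features, etc.). Random values stand in for real data.
rng = np.random.default_rng(0)
profiles = rng.random((1000, 8))   # 1,000 voters, 8 tracked signals
seed_audience = [3, 42, 7]         # indices of known supporters

def lookalike(profiles, seed_idx, k=50):
    """Return the k voters whose profiles most resemble the seed group."""
    seed = set(seed_idx)
    centroid = profiles[seed_idx].mean(axis=0)
    # Cosine similarity between every profile and the seed centroid
    sims = profiles @ centroid / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(centroid)
    )
    ranked = np.argsort(-sims)     # most similar first
    return [int(i) for i in ranked if i not in seed][:k]

expanded = lookalike(profiles, seed_audience)
print(f"{len(expanded)} lookalikes selected, e.g. voters {expanded[:5]}")
```

Even this toy version shows the asymmetry the paper criticises: the voters added to the expanded audience never interacted with the campaign, and have no way of knowing why, or on the basis of what data, they were selected.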
-
Blog
Digital Marketing, Personal Information and Political Campaigns: Advocates should press for policy reforms and not leave it to the “experts”
A new report on how political marketing insiders and platforms such as Facebook view the “ethical” issues raised by the role of digital marketing in elections illustrates why advocates and others concerned about election integrity should make this issue a public-policy priority. We cannot afford to leave it in the hands of “Politech” firms and political campaign professionals, who appear unable to acknowledge the consequences to democracy of their unfettered use of powerful data-driven online-marketing applications.

“Digital Political Ethics: Aligning Principles with Practice” reports on a series of conversations and a two-day meeting last October that included representatives of firms (such as Blue State, Targeted Victory, WPA Intelligence, and Revolution Messaging) that work either for Democrats or Republicans, as well as officials from both Facebook and Twitter. The goal of the project was to “identify areas of agreement among key stakeholders concerning ethical principles and best practices in the conduct of digital campaigning in the U.S.” Perhaps it should not be a surprise that this group appears incapable of critically examining, or even candidly assessing, the problems connected with the role of digital marketing in political campaigns.

Missing from the report is any real concern about how today’s electoral process takes advantage of the absence of meaningful privacy safeguards in the U.S., where a vast commercial surveillance apparatus now operates with virtually no bounds. The same system that is used to market goods and services, driven by data brokers, marketing clouds, real-time ad-decision engines, geolocation identification and other AI-based technologies, along with the clout of leading platforms and publishers, is now also used for political purposes. All of us are tracked and profiled 24/7, including where we go and what we do, with little location privacy left. Political insiders and data-driven ad companies such as Facebook, however, are unwilling to confront this loss of privacy, given how valuable personal data is to their business models and political goals.

Another concern is that these insiders now treat digital marketing as a normative, business-as-usual process, nothing out of the ordinary. But anyone who knows how the system operates should be deeply concerned about the nontransparent and often far-reaching ways digital marketing is constructed to influence our decision-making and behavior, including at emotional and subconscious levels. The report demonstrates that campaign officials have largely accepted as reasonable the invasive and manipulative technologies and techniques that the ad-tech industry has developed over the past decade. Perhaps these officials are simply being pragmatic, but society cannot afford such a cynical position. Today’s political advertising is not yesterday’s TV commercial, nor is it purely an effort to “microtarget” sympathetic market segments. Today’s digital marketing apparatus follows all of us continuously: Democrats, Republicans, and independents alike.
The marketing ecosystem is finely tuned to learn how we react, transforming itself in response and making decisions about us in milliseconds in order to apply, and refine, various tactics to influence us, including entirely new ad formats, each tested and measured to have us think and behave one way or another. And this process is largely invisible to voters, regulators and the news media. To the insiders, however, microtargeting simply helps get out the vote and encourages participation. Nothing much is said about what happened in the 2016 U.S. election, when some political marketers sought to suppress the vote among communities of color, while others engaged in disinformation.

Some of these officials now propose that political campaigns be awarded a digital “right of way” that would guarantee them unfettered access to Facebook, Google and other sites, as well as ensure favorable terms and support. This is partly a response to the recent and much-needed reforms adopted by Twitter and Google that either eliminate or restrict how political campaigns can use their platforms, reforms that many in the politech industry dislike. Some campaign officials see the FCC rules regulating political ads on television as an appropriate model on which to build policies for digital campaigning. That notion should alarm anyone who cares about the role money plays in politics, let alone the nature of today’s politics (as well as anyone who knows the myriad failures of the FCC over the decades).

The U.S. needs to develop a public policy for digital data and advertising that places the interests of the voter and of democracy before those of political campaigns. Such a policy should include protecting the personal information of voters; limiting deceptive and manipulative ad practices (such as lookalike modeling); and prohibiting those contemporary ad-tech practices, such as algorithmically driven real-time programmatic ad systems (sketched below), that can unfairly influence election outcomes.

Also missing from the discussion is the impact of the ever-expanding set of “deep-personalization” digital marketing applications designed to influence and shift consumer behavior more effectively. The use of biodata, emotion recognition, and other forms of what is being called “precision data”, combined with a vast expansion of always-on sensors in an Internet of Things world, will give political groups even more ways to sway electoral outcomes. If civil society doesn’t take the lead in reforming this system, powerful insiders with their own conflicts of interest will shape the future of democratic decision-making in the U.S. We cannot afford to leave it to the insiders to decide what is best for our democracy.
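As a rough illustration of the algorithm-driven, real-time programmatic systems described above, here is a minimal sketch of a second-price auction deciding a single ad impression in milliseconds. The bidders, audience segments and prices are invented for illustration; real-time bidding exchanges involve far more parties, signals and intermediaries.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float   # offered price for this one impression, in USD
    creative: str   # the ad variant shown if this bid wins

def run_auction(user_segments, bidders):
    """Illustrative second-price auction over a single impression.

    `user_segments` is the profile data attached to the bid request;
    each bidder prices the impression from it before the page loads.
    """
    bids = sorted((bidder(user_segments) for bidder in bidders),
                  key=lambda b: b.amount, reverse=True)
    winner, runner_up = bids[0], bids[1]
    # The winner pays just above the second-highest bid (a common RTB rule)
    return winner, runner_up.amount + 0.01

# Hypothetical campaign bidders keyed off tracked segments
def campaign_a(segments):
    return Bid("campaign_a",
               4.50 if "swing_county" in segments else 0.10,
               "turnout_message_v3")

def campaign_b(segments):
    return Bid("campaign_b",
               2.25 if "age_18_24" in segments else 0.05,
               "issue_ad_v1")

winner, price = run_auction({"swing_county", "age_18_24"},
                            [campaign_a, campaign_b])
print(f"{winner.advertiser} wins at ${price:.2f}, shows '{winner.creative}'")
```

Everything that matters to the voter happens inside the bid request: which segments have been attached to them, and which message each campaign is willing to pay to show them, none of it visible outside the exchange.

-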
Press Release
Popular Dating, Health Apps Violate Privacy
Leading Consumer and Privacy Groups Urge Congress, the FTC, State AGs in California, Texas, Oregon to Investigate
For Immediate Release: Jan. 14, 2020
Contact: David Rosen, drosen@citizen.org, (202) 588-7742; Angela Bradbery, abradbery@citizen.org, (202) 588-7741

WASHINGTON, D.C. – Nine consumer groups today asked the Federal Trade Commission (FTC), congressional lawmakers and the state attorneys general of California, Texas and Oregon to investigate several popular apps available in the Google Play Store. A report released today by the Norwegian Consumer Council (NCC) alleges that the apps are systematically violating users’ privacy. The report found that 10 well-known apps – Grindr, Tinder, OkCupid, Happn, Clue, MyDays, Perfect365, Qibla Finder, My Talking Tom 2 and Wave Keyboard – are sharing information they collect on users with third-party advertisers without users’ knowledge or consent, which the European Union’s General Data Protection Regulation forbids. (An illustrative sketch of this kind of hidden data flow follows this release.)

When it comes to drafting a new federal privacy law, the groups maintain, American lawmakers cannot trust input from companies that do not respect user privacy. Congress should use the findings of the report as a roadmap for a new law ensuring that the flagrant violations of privacy found in the EU are not acceptable in the U.S.

The new report alleges that these apps (and likely a great many others) allow commercial third parties to collect, use and share sensitive consumer data in a way that is hidden from the user and involves parties the consumer neither knows about nor has any relationship with. Although consumers can limit some tracking on desktop computers through browser settings and extensions, the same cannot be said for smartphones and tablets. As consumers use their smartphones throughout the day, the devices record information about sensitive topics such as their health, behavior, religion, interests and sexuality.

“Consumers cannot avoid being tracked by these apps and their advertising partners because they are not provided with the necessary information to make informed choices when launching the apps for the first time. In addition, consumers are unable to make an informed choice because the extent of tracking, data sharing, and the overall complexity of the adtech ecosystem is hidden and incomprehensible to average consumers,” the letters sent to lawmakers and regulators warn.

The nine groups are the American Civil Liberties Union of California, Campaign for a Commercial-Free Childhood, the Center for Digital Democracy, Consumer Action, Consumer Federation of America, Consumer Reports, the Electronic Privacy Information Center (EPIC), Public Citizen and U.S. PIRG. In addition to calling for an investigation, the groups are calling for a strong federal digital privacy law that includes a new data protection agency, a private right of action and strong enforcement mechanisms.

Below are quotes from groups that signed the letters:

“Every day, millions of Americans share their most intimate personal details on these apps, upload personal photos, track their periods and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. Industry calls it adtech. We call it surveillance. We need to regulate it now, before it’s too late.” Burcu Kilic, digital rights program director, Public Citizen

“The NCC’s report makes clear that any state or federal privacy law must provide sufficient resources for enforcement in order for the law to effectively protect consumers and their privacy. We applaud the NCC’s groundbreaking research on the adtech ecosystem underlying popular apps and urge lawmakers to prioritize enforcement in their privacy proposals.” Katie McInnis, policy counsel, Consumer Reports

“U.S. PIRG is not surprised that U.S. firms are not complying with laws giving European consumers and citizens privacy rights. After all, the phalanx of industry lobbyists besieging Washington, D.C., has been very clear that its goal is simply to perpetuate a 24/7/365 surveillance-capitalism business model, while denying states the right to protect their citizens better and denying consumers any real rights at all.” Ed Mierzwinski, senior director for consumer programs, U.S. PIRG

“This report reveals how the failure of the U.S. to enact effective privacy safeguards has unleashed an out-of-control and unaccountable monster that swallows up personal information in the EU and elsewhere. The long-unregulated business practices of digital media companies have shredded the rights of people and communities to use the internet without fear of surveillance and manipulation. U.S. policymakers have been given a much-needed wake-up call by Norway: laws that bring meaningful change to the now-lawless digital marketplace are overdue.” Jeff Chester, executive director, Center for Digital Democracy

“For those of us in the U.S., this research by our colleagues at the Norwegian Consumer Council completely debunks the argument that we can protect consumers’ privacy in the 21st century with the old notice-and-opt-out approach, which some companies appear to be clinging to in violation of European law. Business practices have to change, and the first step to accomplishing that is to enact strong privacy rights that government and individuals can enforce.” Susan Grant, director of consumer protection and privacy, Consumer Federation of America

“The illuminating report by our EU ally the Norwegian Consumer Council highlights just how impossible it is for consumers to have any meaningful control over how apps and advertising-technology players track and profile them. That’s why Consumer Action is pressing for comprehensive U.S. federal privacy legislation and subsequent strong enforcement efforts. Enough is enough already! Congress must protect us from ever-encroaching privacy intrusions.” Linda Sherry, director of national priorities, Consumer Action

“For families who wonder what they’re trading off for the convenience of apps like these, this report makes the answer clear. These companies are exploiting us, surreptitiously collecting sensitive information and using it to target us with marketing. It’s urgent that Congress pass comprehensive legislation that puts the privacy interests of families ahead of the profits of businesses. Thanks to our friends at the Norwegian Consumer Council for this eye-opening research.” David Monahan, campaign manager, Campaign for a Commercial-Free Childhood

“This report highlights the pervasiveness of corporate surveillance and the failures of the FTC notice-and-choice model for privacy protection. Congress should pass comprehensive data protection legislation and establish a U.S. Data Protection Agency to protect consumers from the privacy violations of the adtech industry.” Christine Bannan, consumer protection counsel, EPIC
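To illustrate the kind of hidden data flow the NCC report documents, here is a hypothetical sketch of the request an embedded advertising SDK can fire the moment an app launches, before the user has seen any privacy notice. The endpoint, field names and values are all invented for illustration and do not reproduce any specific app’s or vendor’s actual traffic.

```python
import json
from urllib import request

def send_to_ad_partner(advertising_id, app_name, latitude, longitude):
    """Illustrative payload of the sort the NCC report found apps sharing.

    A real SDK assembles and transmits data like this automatically on
    launch, often to dozens of third-party partners at once.
    """
    payload = {
        "ad_id": advertising_id,  # persistent device identifier
        "app": app_name,          # the app name alone can reveal health,
                                  # religion or sexuality
        "lat": latitude,          # precise location
        "lon": longitude,
        "os": "android",
    }
    req = request.Request(
        "https://tracker.example/collect",  # hypothetical third-party endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # request.urlopen(req)  # not executed here: the endpoint is fictitious
    return payload

print(send_to_ad_partner("38400000-8cf0-11bd-b23e-10b96e40000d",
                         "period_tracker", 59.91, 10.75))
```

The sensitivity lies in the combination: an app name such as a period tracker or a prayer app discloses health or religion the moment it is tied to a persistent device identifier that advertisers share across services.

-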
Press Release
Groups Praise Sen. Markey and Google for Ensuring Children on YouTube Receive Key Safeguards
Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397)

BOSTON, MA & WASHINGTON, DC—December 18, 2019—The organizations that spurred the landmark FTC settlement with Google over COPPA violations applauded the announcement of additional advertising safeguards for children on YouTube today. The Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) commended Google for announcing it would apply most of the robust marketing protections it uses on YouTube Kids, including prohibitions on advertising food, beverages and harmful products, to all child-directed content on its main YouTube platform. The groups also lauded Senator Markey for securing a public commitment from Google to implement these long-overdue safeguards. The advocates expressed disappointment, however, that Google did not agree to prohibit paid influencer marketing and product placement to children on YouTube as it does on YouTube Kids.

“Sen. Ed Markey has long been and remains the champion for kids,” said Jeff Chester, CDD’s executive director. “Through the intervention of Sen. Markey, Google has finally committed to protecting children whether they are on the main YouTube platform or using the YouTube Kids app. Google has acted responsibly in announcing that its advertising policies now prohibit any food and beverage marketing on YouTube Kids, as well as ads involving ‘sexually suggestive, violent or dangerous content.’ However, we remain concerned that Google may try to weaken these important child- and family-friendly policies in the near future. Thus we call on Google to commit to keeping these rules in place, and to implement other needed safeguards that children deserve,” added Chester.

Josh Golin, Executive Director of CCFC, said, “We are so grateful to Senator Markey for his leadership on one of the most crucial issues faced by children and families today. And we commend Google for implementing a robust set of advertising safeguards on the most popular online destination for children. We urge Google to take another critical step and prohibit child-directed influencer content on YouTube; if this manipulative marketing isn’t allowed on children’s TV or YouTube Kids, it shouldn’t be targeted to children on the main YouTube platform either.”

###