PX14A6G 1 d512230px14a6g.htm

  

Notice of Exempt Solicitation

 

 

Name of Registrant: Alphabet, Inc.

Name of Person Relying on Exemption: Shareholder Association for Research and Education (SHARE)

Address of Person Relying on Exemption: Unit 412, 401 Richmond Street West, Toronto, ON M5V 3A8, Canada

Date: May 12, 2023

 

 

 

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted

 

 

 

 

Shareholder Proposal Number 11 Regarding Human Rights Impact Assessment of Targeted Ad Policies

 

 

We, the Proponents, urge shareholders to vote FOR Proposal Number 11 – Shareholder Proposal regarding a Human Rights Assessment of Targeted Ad Policies (the “Proposal”) – at the Alphabet, Inc. (“Alphabet” or the “Company”) Shareholder Meeting on June 2, 2023.

 

The Proposal asks Alphabet’s board of directors to:

 

Publish an independent third-party Human Rights Impact Assessment (the “Assessment”) to examine the actual and potential human rights impacts of Google’s targeted advertising policies and practices throughout its business operations. This Assessment should be conducted at a reasonable cost; omit proprietary and confidential information, as well as information relevant to litigation or enforcement actions; and be published on the company’s website by June 1, 2024.

 

Google advertising accounted for approximately 80% of Alphabet’s revenue in 2021. Alphabet’s ad business, including Google Search, YouTube Ads, and Google Network, has grown significantly in recent years, reaching $209 billion in 2021.1 Algorithmic systems are deployed to deliver targeted advertisements, determining what users see. This often results in, and exacerbates, systemic discrimination and other human rights violations. Google’s current ad infrastructure is driven by third-party cookies, which enable other companies to track users across the internet by accumulating vast troves of personal and behavioral data on Google users. This further exposes the Company to violations of users’ privacy.

 

Despite the importance of targeted advertising to Google’s business model and the well-documented human rights risks associated with targeted advertising, Alphabet has not conducted a human rights impact assessment (“HRIA”) or demonstrated a sufficiently robust and transparent equivalent due diligence system to identify, address, and prevent the adverse human rights impacts of its technologies.

 

 _____________________________

1 https://abc.xyz/investor/static/pdf/2021_alphabet_annual_report.pdf?cache=3a96f54

 

   
 


Google has previously published a summary of a third-party HRIA of a celebrity facial recognition algorithm.2 Its targeted ad systems, which affect billions, merit at least the same level of due diligence and public disclosure, particularly as Google and its peers develop new approaches to targeting advertisements.

 

1. Targeted advertising technologies can negatively impact human rights

 

Targeted advertising is a form of online advertising that uses the traits, interests, and preferences of a consumer to display customized ads. Advertisers procure this information by tracking a person’s activity across the Internet,3 most notably through snippets of code known as third-party cookies. Companies and advertisers use cookies and other technological levers to algorithmically infer users’ interests. They can also acquire data through direct purchases, data-sharing agreements, and other contractual relationships that potentially put users’ human rights in jeopardy.4 Ads are predominantly delivered to consumers through automated auctions that factor in the advertiser’s targeting parameters. These bidding processes take place within seconds after a consumer clicks on a link.

 

As targeted advertising has become more widespread and sophisticated, consumers’ awareness of how these systems can compromise their privacy has grown.5 In line with previous trends, a survey published in December 2022 revealed that more than half of Americans are uncomfortable sharing personal information with websites in exchange for a “better online experience.”6

 

A New York Times article investigating targeted advertising described it as follows: “Just by browsing the web, you’re sending valuable data to trackers and ad platforms. Websites can also provide marketers with specific things they know about you, like your date of birth or email address. Ad companies often identify you when you load a website using trackers and cookies, small files containing information about you. Then your data is shared with multiple advertisers who bid to fill the ad space. The winning bid gets to fill the ad slot.” All of this happens in milliseconds.

 

There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded. For example, a 2022 study by NYU researchers found that gender-neutral internet searches nonetheless yield “male-dominated” results.7 Pernicious errors in targeting can lead to algorithmic bias, in which automated systems create consistently unfair outcomes, such as privileging one group over another, often aggravating existing inequities.8

  

_____________________________

2 https://services.google.com/fh/files/blogs/bsr-google-cr-api-hria-executive-summary.pdf

3 https://edu.gcfglobal.org/en/thenow/what-is-targeted-advertising/1/

4 https://www.washingtonpost.com/dc-md-va/2023/03/09/catholics-gay-priests-grindr-data-bishops/

5 https://hbr.org/2018/01/ads-that-dont-overstep

6 https://www.thedrum.com/news/2022/11/15/43-americans-still-accept-all-cookies-despite-growing-privacy-concerns-new-study

7 https://www.nyu.edu/about/news-publications/news/2022/july/gender-bias-in-search-algorithms-has-effect-on-users--new-study-.html

8 https://alumni.berkeley.edu/california-magazine/online/biased-algorithms-exacerbate-racial-inequality-health-care/

 

   
 

 

Targeted ads can also exacerbate racial discrimination. Harvard University research found that online searches for “Black-identifying” names were more likely to return ads from services offering arrest records than searches for “white-identifying” names. The same differential treatment occurred in the micro-targeting of higher-interest credit cards and other financial products when the algorithm inferred that the subjects were African-American, despite their having backgrounds similar to those of people with white-identifying names.9

 

Machine-learning algorithms can treat similarly situated people differently, and current business models provide very little transparency about where personal information ends up.10 Research has highlighted numerous examples of algorithmic decision-making replicating and even amplifying human biases.11 Although the right to privacy is crucial to everyone, privacy violations have particularly negative impacts on demographic groups at higher risk of exclusion.12

 

2. Failure to safeguard human rights exposes shareholders to material risks

 

2.1. Regulatory risks

 

There is a growing consensus among civil society experts, academics, and policymakers that targeted advertising can lead to the erosion of human rights. Legislation in Europe13 and the United States14 is poised to severely restrict or even ban targeted ads, largely due to concerns about the underlying algorithms. Given the importance of advertising for Alphabet’s business model, the failure to implement and demonstrate effective human rights policies and processes may expose shareholders to regulatory risks.

 

Since 2021, several pieces of legislation introduced in the U.S. Congress have focused on enforcing algorithmic accountability and better control of targeted advertising.15

Introduced by Congressman Pallone in June 2022, the American Data Privacy and Protection Act16 would, if passed, create the first set of national standards and safeguards for personal data collected by companies, prohibit targeting ads to minors, and ban the use of “sensitive data” for targeted advertising, potentially including the tracking of users across the Internet.

 

_____________________________

9 https://papers.ssrn.com/abstract=2208240

10 https://www.theguardian.com/world/2019/nov/05/targeted-ads-fake-news-clickbait-surveillance-capitalism-data-mining-democracy

11 https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

12 https://www.brookings.edu/blog/techtank/2022/07/18/examining-the-intersection-of-data-privacy-and-civil-rights/

13 https://www.europarl.europa.eu/news/en/press-room/20220412IPR27111/digital-services-act-agreement-for-a-transparent-and-safe-online-environment

14 https://www.varonis.com/blog/us-privacy-laws

15 1. H.R. 5596 - Justice Against Malicious Algorithms Act
2. S. 3572 / H.R. 6580 - Algorithmic Accountability Act of 2022
3. S. 2024 / H.R. 5951 - Filter Bubble Transparency Act
4. S. 3029 / H.R. 2154 - Protecting Americans from Dangerous Algorithms Act
5. S. 2918 / H.R. 5439 - Kids Internet Design and Safety Act
6. S. 1896 / H.R. 3611 - Algorithmic Justice and Online Platform Transparency Act
7. S. 3663 - Kids Online Safety Act
8. H.R. 6796 - Digital Services Oversight and Safety Act of 2022

16 H.R.8152 - 117th Congress (2021-2022): American Data Privacy and Protection Act | Congress.gov | Library of Congress

 

   
 

 

In January 2022, Congresswoman Eshoo (D-CA) introduced the Banning Surveillance Advertising Act,17 which would outright prevent advertising platforms from targeting individuals based on certain forms of personal information and behavioral data.
In February 2023, President Joe Biden’s State of the Union address called for legislation to stop tech companies from collecting data on kids and teenagers.18

  

U.S. government agencies have also taken focused, assertive action in this direction. For example, in 2019 Google LLC and YouTube, LLC agreed to pay US$170 million in a settlement with the Federal Trade Commission (FTC) over alleged violations of the Children’s Online Privacy Protection Act (COPPA) Rule. According to the complaint filed by the FTC and the New York Attorney General, YouTube allegedly collected personal information from children without their parents’ consent in order to deliver targeted ads on child-directed channels.19

 

Additionally, the European Union has enacted robust legislation on targeted advertising.20 For instance, the Digital Services Act (DSA) prevents online platforms from using sensitive information, such as sexual orientation, race, and religion, for targeted ads.21 The legislation, which comes into force across most EU member states in 2024, aims to better protect users and promote fundamental rights online, establish a powerful transparency and accountability framework for online platforms, and provide a single, uniform framework across the EU.

 

2.2. Legal risks

 

Alphabet’s failure to comply with laws aimed at protecting users’ rights, or to align with the requirements set by internationally recognized human rights standards, exposes the Company to material legal risks. As public scrutiny of privacy rights has increased in recent years, Alphabet has faced legal challenges over its data collection practices and policies. For example:

 

- In 2022, a bipartisan group of Attorneys General from Texas, Indiana, Washington State, and the District of Columbia filed a lawsuit against Google over “deceptive location tracking practices” invading users’ privacy. Noting that Google “has a powerful financial incentive to obscure the details of its location-tracking practices and make it difficult for users to opt out,” the plaintiffs claim that “Google has systematically misled, deceived, and withheld material facts from users in Texas about how their location is tracked and used and how to stop Google from monetizing their movements.”22

  

_____________________________

17 S. 3520 / H.R. 6416 - Banning Surveillance Advertising Act

18 https://www.politico.com/news/2023/02/07/biden-calls-for-ban-of-online-ads-targeting-children-00081731

19 https://www.ftc.gov/news-events/news/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations-childrens-privacy-law

20 1. Digital Markets Act
2. AI Act
3. Political Ads Regulation
4. ePrivacy Regulation
5. Platform Workers Directive
6. Regulation on child sexual abuse material

21 Questions and Answers: Digital Services Act (europa.eu)

22 https://www.texasattorneygeneral.gov/sites/default/files/images/executive-management/DRAFT%20Texas%20Geolocation%20Petition%201.23%20Final%20Redacted.pdf?utm_content=&utm_medium=email&utm_name=&utm_source=govdelivery&utm_term=

  

   
 

 

- In 2022, a coalition of forty Attorneys General entered into a record $391.5 million settlement agreement with Google over its location tracking practices. Through their investigation, the Attorneys General found that “Google violated state consumer protection laws by misleading consumers about its location tracking practices in various ways since at least 2014.” Under the settlement, Google agreed to increase consumer transparency about how location data are tracked and how users can opt out of location tracking. As part of the settlement, the Company will also limit its use and storage of certain types of location information.23

 

2.3. Reputational risk

 

As one of the world’s largest technology companies, Alphabet has an outsized influence on society. This status exposes the Company to significant scrutiny from the public as well as from governments, regulators, and lawmakers. In the past decade, Alphabet subsidiaries, including Google and YouTube, have been subject to high-profile controversies and criticism over human rights-related issues, including data privacy. These issues have resulted in regulatory scrutiny, public backlash, and negative media coverage, all of which can erode the Company’s reputation in the long run.

 

An April 2018 survey24 showed that Google was the second least-trusted technology company when it comes to handling personal information.25 A Washington Post-Schar School poll published in 2021 revealed that 53% of Internet users did not trust YouTube much or at all to handle their personal data, and 47% said the same of Google.26 The poll also found that targeted ads are “widely disliked” by Internet users, 74% of whom called them invasive.

 

Although Alphabet is considered a dominant player in the industry, growing awareness and concern among consumers and regulators about the human rights impacts of its products may create greater opportunities for other technology companies offering alternative revenue models that give users greater control over their data.

 

3. A human rights impact assessment is necessary to reinforce Google’s due diligence and protect long-term shareholder value

  

Given the importance of targeted advertising to Google’s business model and the well-documented human rights risks associated with targeted advertising, a robust and transparent Human Rights Impact Assessment in line with internationally recognized human rights standards is necessary.

 

_____________________________

23 https://www.attorneygeneral.gov/taking-action/attorney-general-josh-shapiro-announces-391-million-settlement-with-google-over-location-tracking-practices/

24 https://www.cultofmac.com/541167/trust-apple-most-trusted-tech-companies-privacy-personal-info/

25 Other companies ranked included Facebook, Google, Uber, Twitter, Snap, Apple, Amazon, Microsoft, Lyft, Tesla and Netflix.

26 https://www.washingtonpost.com/technology/2021/12/22/tech-trust-survey/

 

   
 

 

An independent third-party assessment would help inform Alphabet’s management, the Board of Directors, and shareholders about the human rights risks the Company faces in its ads business and the merits of its human rights approach, including its policies and practices. In addition, such an Assessment would help management and the Board manage the risks associated with failing to respect these rights and guide management’s approach to protecting the human rights of its users, including the steps needed to remedy any negative human rights impacts stemming from its technologies.

 

Considering the material nature of the regulatory, legal, and reputational risks that Alphabet and, by extension, its shareholders face, it is key for the Company to increase the degree of transparency it provides so that investors can make informed investment decisions. Notably, the Proponents believe that the fast pace of technological change and product upgrades27 heightens the need for greater transparency on these issues. For example, in 2021, Google announced a new targeted advertising system called FLoC (“Federated Learning of Cohorts”)28 that was meant to be implemented in 2022. While FLoC was designed to replace third-party cookies by 2023 and to offer greater protection for users’ privacy, many experts identified major flaws with the technology. Some even said that “from a certain perspective you could argue this is actually worse for privacy than cookies.”29 Just a year later, following the concerns expressed by tech and human rights experts,30 Google announced the rollback of FLoC and the implementation of a new advertising system called the Topics API.31 Notably, Google has not conducted a third-party HRIA on any of these systems, relying instead on its own internal resources and collaborative tools, such as the Privacy Sandbox.

 

Alphabet explicitly endorses the UN Guiding Principles on Business and Human Rights (UNGPs),32 the authoritative global standard on the role of businesses in ensuring respect for human rights in their own operations and through their business relationships. The UNGPs explicitly state that companies must conduct human rights due diligence on their products and services, particularly where the scale and scope of potential impacts are significant.33

 

The Proponents believe that the limited steps Alphabet has taken to mitigate risks associated with targeted advertising remain insufficient relative to the scale and materiality of the risks described above. A third-party HRIA would provide the expertise, objectivity, and comprehensiveness34 necessary to address the wide and varied range of human rights risks faced by Alphabet’s billions of global users.

 

 _____________________________

27 https://www.weforum.org/agenda/2020/11/heres-how-technology-has-changed-and-changed-us-over-the-past-20-years/

28 https://blog.google/products/ads-commerce/2021-01-privacy-sandbox/

29 https://www.thedrum.com/news/2022/01/25/wait-wtf-happened-with-google-floc-we-explain

30 https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea

31 https://blog.google/products/chrome/get-know-new-topics-api-privacy-sandbox/; https://techcrunch.com/2022/01/25/google-kills-off-floc-replaces-it-with-topics/

32 https://about.google/intl/ALL_us/human-rights/

33 https://digitallibrary.un.org/record/720245/files/GuidingPrinciplesBusinessHR_EN.pdf?ln=en

34 The UN Global Compact Guide to Human Rights Impact Assessment and Management (HRIAM): issues_doc/human_rights/GuidetoHRIAM.pdf (d306pr3pise04h.cloudfront.net)

 

   
 

 

Conclusion:

 

Alphabet, Inc. has one of the largest footprints of any entity in the world. According to data provider Statista, Google had over 259 million unique visitors and a 61.4 percent market share among the leading U.S. search engine providers as of March 2022.35 As of March 2023, more than 4.3 billion people use Google daily.36

 

This unmatched reach and influence require an equally unmatched commitment to preserving and respecting human rights across all parts of the business model. Given concerns around the fairness, accountability, and transparency of the underlying algorithmic systems, targeted advertising has been heavily scrutinized for its adverse impacts on human rights and will likely face increasing regulatory and legal risks.

 

A robust HRIA will enable the Company to better identify, mitigate, and prevent such adverse human rights impacts that expose the Company to regulatory, legal, and reputational risks while protecting long-term shareholder value.

 

For these reasons, we urge Alphabet’s shareholders to vote FOR Proposal Number 11 Regarding Human Rights Impact Assessment of Targeted Ad Policies.

 

Any questions regarding this exempt solicitation or Proposal Number 11 should be directed to Sarah Couturier-Tanoh, Associate Director of Corporate Engagement at SHARE at scouturier-tanoh@share.ca.

 

 

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted

 

_____________________________

35 https://www.statista.com/topics/1001/google/#topicOverview

36 https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/