People for the Ethical Treatment of Animals v. Tabak

Decision Date: 31 March 2023
Docket Number: Civil Action 21-cv-2380 (BAH)
Parties: PEOPLE FOR THE ETHICAL TREATMENT OF ANIMALS, MADELINE KRASNO, and RYAN HARTKOPF, Plaintiffs, v. LAWRENCE A. TABAK, in his official capacity as Acting Director of the National Institutes of Health, and XAVIER BECERRA, in his official capacity as Secretary of the U.S. Department of Health and Human Services, Defendants.
Court: U.S. District Court, District of Columbia
MEMORANDUM OPINION

BERYL A. HOWELL, U.S. DISTRICT COURT JUDGE

Pending before this Court is a question at the frontier of courts' application of the First Amendment to the internet, an environment that is challenging to navigate under the strict categorization and spatial reasoning of First Amendment doctrine. The foundational case law of the First Amendment developed in the quaint setting of sidewalks and parks, schools and public transit; by comparison, cyberspace is analogizable to the shifting sands of the Sahara. Here, the architectural guideposts in the form of physical “materials, design, and demarcation from the surrounding area,” Hodge v. Talkin, 799 F.3d 1145, 1150 (D.C. Cir. 2015), that steer the traditional analysis of permissible expressive use and access must be found in alternative ways. Courts confronting such issues must tread these sands with caution.

The corner of cyberspace at issue in the instant matter is the official Facebook and Instagram pages maintained by the National Institutes of Health (NIH), where animal-rights activists, including plaintiffs, regularly post comments criticizing the practice of animal testing. See Parties' Joint Stipulation of Facts (“Jt. Stip.”) ¶¶ 62-86, ECF No. 28. Indeed, NIH maintains that animal-rights activists' comments comprise “the lion's share of the comment thread[s] responding to many of the agency's posts.” Defs.' Opp'n Pls.' Mot. Summ. J. & Mem. Supp. Cross-Mot. Summ. J. (“Defs.' Opp'n”) at 12, ECF No. 31-1.[1] In response to these posts, NIH uses custom keyword filters to hide comments containing a range of keywords, including words related to animal testing, such as “animals” and “torture.” Jt. Stip. ¶ 58. Plaintiffs initiated the instant litigation challenging NIH's keyword filters as a “patently unconstitutional practice of surreptitiously censoring speech it does not like.” Pls.' Mem. Supp. Mot. Summ. J. (“Pls.' Mem.”) at 7, ECF No. 30-1. For the reasons set forth below, the Court agrees with defendants that NIH's social media pages' comment threads are limited public fora, and NIH's enforcement of its commenting guidelines through keyword filters is viewpoint-neutral and reasonable. Accordingly, defendants' cross-motion for summary judgment is granted, and plaintiffs' motion for the same is denied.

I. BACKGROUND

The factual background and procedural history relevant to the pending motions are described below.

A. Factual Background

NIH, the primary federal agency charged with performing and supporting biomedical and behavioral research, maintains verified Facebook and Instagram accounts that are viewable by the public. See Jt. Stip. ¶¶ 4, 36-38, 41-43, ECF No. 28. Users of both social media platforms are invited to react to NIH's posts on these platforms (in the case of Facebook, through a spectrum of “reaction” options, and on Instagram, via the humble “like” reaction) and to post comments. Id. ¶¶ 14, 29, 39, 45. Both platforms are used by NIH to “communicate and interact with citizens” about agency-related updates and public health news, id. ¶ 36 (quoting Web Policies and Notices, National Institutes of Health (Jan. 14, 2021), https://perma.cc/BC7R-ZFME), such as, for example, providing updates on research studies related to COVID-19 vaccines and sharing a woman's “Alzheimer's Caregiver story” to promote caregiving resources, see id. ¶ 44; id., Ex. 1, NIH Facebook Post & Viewable Comments (June 4, 2021), ECF No. 28-1; id., Ex. 15, NIH Instagram Post & Viewable Comments (Nov. 12, 2021), ECF No. 28-15.

1. NIH's Social Media Moderation Policy

Comments on NIH's Facebook and Instagram pages are governed by NIH's publicly available guidelines designed to “encourage respectful and constructive dialogue.” Id., Ex. 19, NIH Comment Guidelines, ECF No. 28-19. The guidelines, first posted online in March 2015, see Jt. Stip. ¶ 52, prohibit, inter alia, “[v]ulgar, obscene, profane, threatening, or abusive language,” “[r]epetitive posts,” and, most relevant to the instant matter, “[o]ff-topic posts,” NIH Comment Guidelines at 2. At the time that the parties filed the Joint Stipulation, in February 2022, NIH's Facebook and Instagram accounts contained direct links to the comment guidelines. Jt. Stip. ¶ 53. The accounts also contained direct links to NIH's separate “Web Policies and Notices” page, which adds that “as a practice, comment moderator policy requires the removal from NIH Facebook pages of any comments that contain spam or are improper, inflammatory, off-topic, or offensive.” Id. ¶ 54 (not mentioning NIH's policy vis-a-vis Instagram).

NIH's own comment guidelines supplement content policies maintained by the platforms themselves. Facebook and Instagram prohibit certain content pursuant to their community standards, including content inciting or facilitating serious violence, hate speech, and spam. Id. ¶¶ 20 & n.2, 35 & n.5. Content falling within these prohibited categories is removed by the platforms with notice to users. Id. ¶¶ 20, 35. Instagram also uses an artificial intelligence algorithm to hide automatically any comments similar to those reported as violating the platform's policies; hidden comments are not deleted, but rather remain accessible at the bottom of a post by clicking a “view hidden comments” option. Id. ¶ 35.

Both platforms give account-holders tools to moderate the comments on their account pages. On Facebook, account administrators, like NIH, have the option to “ban” individual users-effectively muting the targeted users by allowing them to access the page but removing their commenting or reacting privileges-and to “block” users, by preventing them from accessing the page altogether. Id. ¶ 19. Administrators can also moderate their pages comment-by-comment by manually deleting or hiding certain comments, with the latter function resulting in the comment remaining visible only to the comment's creator and that user's friends. Id. ¶ 17. On Instagram, similarly, account holders can block other users from commenting on or viewing their posts, such that blocked users' comments are rendered invisible to all but the blocked users, who are not notified of the hidden nature of their comments. Id. ¶¶ 33-34. Instagram also maintains a “restrict comments” feature, allowing account holders to pre-screen particular users' comments before they appear on the account holder's posts. Id. ¶ 32.

Most relevant to the instant litigation are the platforms' keyword filtering functions. On Facebook, the platform offers comment filtering tools such as a “profanity filter,” which account administrators can simply enable to filter all comments containing profanity. Id. ¶ 16. Account administrators can add specified words to the filter. Comments containing filtered words are removed from public view but still accessible to the user who posted them and the user's friends. Id. On Instagram, too, account holders can automatically hide all comments containing specified words or phrases, resulting in the hidden comment only being accessible to the user who posted it and to the account holder upon choosing to “view hidden comments.” Id. ¶ 30. In addition to this function permitting customized lists of blocked keywords, Instagram maintains a “Hide Comments” function that by default identifies comments containing offensive words or phrases reported to violate the platform's terms of service. Those comments, unlike those blocked via the custom keywords list, are still visible to all Instagram users, but they are deprioritized: to view them, users must scroll to the bottom of a post's comments section and choose to view the hidden comments. Id. ¶ 31.

NIH primarily enforces its comment guidelines on Facebook and Instagram via the keyword filtering function, using Facebook and Instagram's default filters, which are supplemented with NIH's custom keyword lists, discussed infra in Part I.A.3. Id. ¶¶ 56-58. Before plaintiffs initiated this litigation, NIH endeavored manually to hide comments that violated the guidelines, but those efforts were “limited” by resource constraints, and “especially limited” in the wake of the further strains created by the COVID-19 pandemic. Id. ¶ 61. NIH has advised that the practice of manually hiding comments violating the agency's guidelines will resume upon the resolution of the instant motions. Id.

2. Plaintiffs' Anti-Animal Testing Social Media Advocacy

Plaintiff People for the Ethical Treatment of Animals (PETA) is a non-profit organization that has launched a social media campaign focusing on NIH's funding of primate testing, and as part of this campaign, its supporters and employees engage on NIH's social media channels to protest animal testing. Jt. Stip. ¶¶ 1, 88. Individual plaintiffs Madeline Krasno and Ryan Hartkopf are animal rights advocates, who use social media as a tool to “raise awareness” about animal testing, NIH's funding of research involving the practice, and “mental health issues often experienced by animal lab workers”; to pressure institutions including NIH to curtail animal testing; and to “signal to potential whistleblowers that they have allies.” Id. ¶¶ 89-90. To achieve these advocacy goals, plaintiffs frequently comment about animal testing on social media, including NIH's Instagram and Facebook pages. Pls.' Mem. at 20-21.

3. NIH's Keyword Filters

In addition to the default...
