710 F.3d 1020 (9th Cir. 2013), 10-55946, Columbia Pictures Industries, Inc. v. Fung
| Citation: | 710 F.3d 1020 |
| Opinion Judge: | BERZON, Circuit Judge |
| Party Name: | COLUMBIA PICTURES INDUSTRIES, INC.; Disney Enterprises, Inc.; Paramount Pictures Corporation; Tristar Pictures, Inc.; Twentieth Century Fox Film Corporation; Universal City Studios LLLP; Universal City Studios Productions, LLLP; Warner Bros. Entertainment, Inc., Plaintiffs-Appellees, v. Gary FUNG; isoHunt Web Technologies, Inc., Defendants-Appellants |
| Attorney: | Ira P. Rothken, Esq. (argued), Robert L. Kovsky, Esq., and Jared R. Smith, Esq. of Rothken Law Firm, Novato, CA, for Defendants-Appellants. Paul M. Smith (argued), Steven B. Fabrizio, William M. Hohengarten, Duane C. Pozza, Garret A. Levin, Caroline D. Lopez, Jenner & Block LLP, Washington, D.C.; ... |
| Judge Panel: | Before: HARRY PREGERSON, RAYMOND C. FISHER, and MARSHA S. BERZON, Circuit Judges. |
| Case Date: | March 21, 2013 |
| Court: | United States Court of Appeals for the Ninth Circuit |
Argued May 6, 2011.
Submitted March 21, 2013.
[Copyrighted Material Omitted]
Appeal from the United States District Court for the Central District of California, Stephen V. Wilson, District Judge, Presiding. D.C. No. 2:06-cv-05578-SVW-JC.
This case is yet another concerning the application of established intellectual property concepts to new technologies. See, e.g., UMG Recordings, Inc. v. Shelter Capital Partners, LLC, --- F.3d ----, 2013 WL 1092793 (9th Cir.2013); Perfect 10, Inc. v. Visa Int'l Serv. Ass'n, 494 F.3d 788 (9th Cir.2007); Viacom Int'l, Inc. v. YouTube, Inc., 676 F.3d 19 (2d Cir.2012). Various film studios alleged that the services offered and websites maintained by Appellants Gary Fung and his company, isoHunt Web Technologies, Inc. (isohunt.com, torrentbox.com, podtropolis.com, and ed2k-it.com, collectively referred to in this opinion as "Fung" or the "Fung sites") induced third parties to download infringing copies of the studios' copyrighted works.[1] The district court agreed, holding that the undisputed facts establish that Fung is liable for contributory copyright infringement. The district court also held as a matter of law that Fung is not entitled to protection from damages liability under any of the "safe harbor" provisions of the Digital Millennium Copyright Act ("DMCA"), 17 U.S.C. § 512, Congress's foray into mediating the competing interests in protecting intellectual property interests and in encouraging creative development of devices for using the Internet to make information available. By separate order, the district court permanently enjoined Fung from engaging in a number of activities that ostensibly facilitate the infringement of Plaintiffs' works.
Fung contests the copyright violation determination as well as the determination of his ineligibility for safe harbor protection under the DMCA. He also argues that the injunction is punitive and unduly vague, violates his rights to free speech, and exceeds the district court's jurisdiction by requiring filtering of communications occurring outside of the United States. We affirm on the liability issues but reverse in part with regard to the injunctive relief granted.
This case concerns a peer-to-peer file sharing protocol[2] known as BitTorrent. We begin by providing basic background information useful to understanding the role the Fung sites play in copyright infringement.
I. Client-server vs. peer-to-peer networks
The traditional method of sharing content over a network is the relatively straightforward client-server model. In a client-server network, one or more central computers (called "servers") store the information; upon request from a user (or "client"), the server sends the requested information to the client. In other words, the server supplies information resources to clients, but the clients do not share any of their resources with the server. Client-server networks tend to be relatively secure, but they have a few drawbacks: if the server goes down, the entire network fails; and if many clients make requests at the same time, the server can become overwhelmed, increasing the time it takes the server to fulfill requests from clients. Client-server systems, moreover, tend to be more expensive to set up and operate than other systems. Websites work on a client-server model, with the server storing the website's content and delivering it to users upon demand.
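The request-response flow just described can be illustrated with a short Python sketch (purely illustrative, and no part of the record in this case; the paths and content are hypothetical):

```python
# The server holds all of the content; clients only request it,
# never supplying resources of their own.
CONTENT = {"/index.html": "<h1>Hello</h1>", "/about.html": "<p>About us</p>"}

def handle_request(path):
    """Server side: look up the requested resource and return it (or a 404).

    Every client request funnels through this one central function,
    which is why a failed or overloaded server stalls the whole network.
    """
    if path in CONTENT:
        return 200, CONTENT[path]
    return 404, "Not Found"

# Client side: each client simply asks the central server.
status, body = handle_request("/index.html")
print(status, body)  # 200 <h1>Hello</h1>
```

Note that the clients contribute nothing; all storage and delivery burden falls on the single server, the drawback the opinion identifies.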
" Peer-to-peer" (P2P) networking is a generic term used to refer to several different types of technology that have one thing in common: a decentralized infrastructure whereby each participant in the network (typically called a " peer," but sometimes called a " node" ) acts as both a supplier and consumer of information resources. Although less secure, P2P networks are generally more reliable than client-server networks and do not suffer from the same bottleneck problems. See generally
Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd. (" Grokster III " ), 545 U.S. 913, 920 & n. 1, 125 S.Ct. 2764, 162 L.Ed.2d 781 (2005). These strengths make P2P networks ideally suited for sharing large files, a feature that has led to their adoption by, among others, those wanting access to pirated media, including music, movies, and television shows. Id. But there also are a great number of non-infringing uses for peer-to-peer networks; copyright infringement is in no sense intrinsic to the technology, any more than making unauthorized copies of television shows was to the video tape recorder. Compare A & M Records v. Napster, Inc., 239 F.3d 1004, 1021 (9th Cir.2001) with Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 456, 104 S.Ct. 774, 78 L.Ed.2d 574 (1984).
II. Architecture of P2P networks
In a client-server network, clients can easily learn what files the server has available for download, because the files are all in one central place. In a P2P network, in contrast, there is no centralized file repository, so figuring out what information other peers have available is more challenging. The various P2P protocols permit indexing in different ways.
A. " Pure" P2P networks
In " pure" P2P networks, a user wanting to find out which peers have particular content available for download will send out a search query to several of his neighbor peers. As those neighbor peers receive the query, they send a response back to the requesting user reporting whether they have any content matching the search terms, and then pass the query on to some of their neighbors, who repeat the same two steps; this process is known as " flooding." In large P2P networks, the query does not get to every peer on the network, because permitting that amount of signaling traffic would either overwhelm the resources of the peers or use up all of the network's bandwidth (or both). See Grokster III, 545 U.S. at 920 n. 1, 125 S.Ct. 2764. Therefore, the P2P protocol will usually specify that queries should no longer be passed on after a certain amount of time (the so-called " time to live" ) or after they have already been passed on a certain number of times (the " hop count" ). Once the querying user has the search results, he can go directly to a peer that has the content desired to download it.
This search method is an inefficient one for finding content (especially rare content that only a few peers have), and it causes a lot of signaling traffic on the network. The most popular pure P2P protocol was Gnutella. StreamCast, a Grokster defendant, used Gnutella to power its software application, Morpheus. See Grokster III, 545 U.S. at 921-22, 125 S.Ct. 2764.[3]
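The flooding search described above, including the hop-limiting role of the "time to live," can be sketched in a few lines of Python (an illustrative model only, not drawn from the record; the peer topology and file name are hypothetical):

```python
class Peer:
    """A node in a simulated "pure" P2P network."""
    def __init__(self, peer_id, files=()):
        self.peer_id = peer_id
        self.files = set(files)
        self.neighbors = []

    def search(self, term, ttl, seen=None, results=None):
        """Flood a query outward, decrementing the TTL at each hop."""
        if seen is None:
            seen, results = set(), []
        if self.peer_id in seen:          # don't process the same query twice
            return results
        seen.add(self.peer_id)
        if term in self.files:            # step 1: report any local match
            results.append(self.peer_id)
        if ttl > 0:                       # step 2: forward until TTL expires
            for n in self.neighbors:
                n.search(term, ttl - 1, seen, results)
        return results

# Build a small chain of peers: p0 - p1 - p2 - p3
peers = [Peer(i) for i in range(4)]
for a, b in [(0, 1), (1, 2), (2, 3)]:
    peers[a].neighbors.append(peers[b])
    peers[b].neighbors.append(peers[a])
peers[3].files.add("song.mp3")

print(peers[0].search("song.mp3", ttl=3))  # the query reaches p3
print(peers[0].search("song.mp3", ttl=2))  # the TTL expires before p3
```

The second query illustrates the opinion's point about rare content: a file held by only one distant peer can be missed entirely once the hop limit cuts the flood short.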
B. " Centralized" P2P networks
" Centralized" P2P networks, by contrast, use a centralized server to index the content available on all the peers: the user sends the query to the indexing server, which tells the user which peers have the content available for download. At the same time the user tells the indexing server what files he has available for others to download. Once the user makes contact with the indexing server, he knows which specific peers to contact for the content sought, which reduces search time and signaling traffic as compared to a " pure" P2P protocol.
Although a centralized P2P network has similarities with a client-server network, the key difference is that the indexing server does not store or transfer the content. It just tells users which other peers
have the content they seek. In other words, searching is centralized, but file transfers are peer-to-peer. One consequent disadvantage of a centralized P2P network is that it has a single point of potential failure: the indexing server. If it fails, the entire system fails. Napster was a centralized P2P network, see generally Napster, 239 F.3d at 1011-13, as, in part, is eDonkey, the technology upon which one of the Fung sites, ed2k-it.com, is based.
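The division of labor described above, centralized searching with peer-to-peer transfers, can be sketched as follows (again purely illustrative; the peer and file names are hypothetical):

```python
class IndexServer:
    """Central index: records which peers claim which files.

    Crucially, it never stores or transfers the files themselves;
    the actual download happens directly between peers.
    """
    def __init__(self):
        self.index = {}  # filename -> set of peer ids

    def announce(self, peer_id, files):
        # On connecting, a peer reports the files it makes available.
        for f in files:
            self.index.setdefault(f, set()).add(peer_id)

    def lookup(self, filename):
        # A searching peer learns which other peers to contact.
        return self.index.get(filename, set())

server = IndexServer()
server.announce("peer-a", ["movie.avi", "song.mp3"])
server.announce("peer-b", ["song.mp3"])

print(sorted(server.lookup("song.mp3")))   # ['peer-a', 'peer-b']
print(sorted(server.lookup("movie.avi")))  # ['peer-a']
```

If the single `server` object disappears, no peer can locate content any longer even though every file remains on the peers, which is the single point of failure the opinion identifies.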
C. Hybrid P2P networks
Finally, there are a number of hybrid protocols. The most common type of hybrid systems use what are called "supernodes." In these systems, each peer is called a "node," and each node is assigned to one "supernode." A supernode is a regular node that has been "promoted," usually because it has more bandwidth available, to perform certain tasks. Each supernode indexes the content available on each of the nodes attached to it, called its "descendants." When a node sends out a search query, it goes just to the supernode to which it is attached. The supernode...