Amazon.com v. Swarm Tech.

Docket Number: IPR2022-00283, Patent 9,852,004 B2
Decision Date: 20 June 2023
Parties: AMAZON.COM, INC. and AMAZON WEB SERVICES, INC., Petitioner, v. SWARM TECHNOLOGY LLC, Patent Owner.
Court: Patent Trial and Appeal Board

FOR PETITIONER: Jonathan M. Strang, Adam M. Greenfield, David Zucker, and Kimberly Q. Li, LATHAM & WATKINS LLP

FOR PATENT OWNER: Daniel R. Pote, JENNINGS STROUSS & SALMON PLC; Michael K. Kelly, BEUS GILBERT MCGRODER PLLC; Daniel J. Anderson, ERNST, BROWN & DRAPER, PLLC

Before MICHAEL R. ZECHER, GREGG I. ANDERSON, and SCOTT B. HOWARD, Administrative Patent Judges.

JUDGMENT

HOWARD, ADMINISTRATIVE PATENT JUDGE

Determining All Challenged Claims Unpatentable

Denying Patent Owner's Motion to Amend

35 U.S.C. § 318(a)

I. INTRODUCTION
A. Background and Summary

Amazon.com, Inc. and Amazon Web Services, Inc. (collectively "Petitioner") filed a Petition requesting inter partes review ("IPR") of claims 1-12 of U.S. Patent No. 9,852,004 B2 (Ex. 1001, "the '004 patent"). Paper 2 ("Pet."). Swarm Technology LLC ("Patent Owner") filed a Preliminary Response. Paper 5. We instituted an inter partes review of claims 1-12 of the '004 patent on all grounds of unpatentability alleged in the Petition. Paper 6 ("Institution Decision" or "Inst. Dec.").

After institution of trial, Patent Owner filed a Corrected Response (Paper 16, "PO Resp."), Petitioner filed a Reply (Paper 19, "Pet. Reply"), and Patent Owner filed a Sur-reply (Paper 24, "PO Sur-reply").

Patent Owner also filed a Contingent Motion to Amend the '004 patent (Paper 12, "MTA"), to which Petitioner filed an Opposition (Paper 20, "MTA Opp."). We issued Preliminary Guidance (Paper 22, "Prelim. Guid.") concerning the Contingent Motion to Amend. Following the Preliminary Guidance, Patent Owner filed a Reply to Petitioner's Opposition (Paper 23, "MTA Reply"), and Petitioner filed a Sur-reply to Patent Owner's Reply (Paper 35, "MTA Sur-reply").

An oral hearing was held on March 29, 2023, and the record contains a transcript of this hearing. Paper 42 ("Tr.").

We have jurisdiction under 35 U.S.C. § 6. This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a). For the reasons that follow, we determine that Petitioner has shown by a preponderance of the evidence that all the challenged claims are unpatentable. Additionally, because we determine that Patent Owner has not met the statutory and regulatory requirements associated with filing a motion to amend and Petitioner has shown by a preponderance of the evidence that the proposed substitute claims are unpatentable, we deny Patent Owner's Contingent Motion to Amend.

B. Real Parties in Interest

Petitioner identifies Amazon.com, Inc. and Amazon Web Services, Inc. as the real parties in interest. Pet. 78.

Patent Owner identifies Swarm Technology LLC as the real party in interest. Paper 3, 1 (Patent Owner's Mandatory Notices).

C. Related Matters

The parties identify the following district court proceedings involving the '004 patent: (1) Juniper Networks, Inc. v. Swarm Technology LLC, No. 3:20-cv-03137-JD (N.D. Cal.) ("California proceeding") and (2) Swarm Technology, LLC v. Amazon.com, Inc., No. 2:21-cv-00438-DJH (D. Ariz.) ("Arizona proceeding"). Pet. 79; Paper 3, 1-2. The parties also identify the following inter partes review proceeding involving the '004 patent: Juniper Networks, Inc. v. Swarm Technology LLC, IPR2021-01445. [1] Pet. 79; Paper 3, 2.

Additionally, the parties identify a number of patents and patent applications related to the '004 patent and inter partes review proceedings involving some of those patents. Pet. 79; Paper 3, 2-3.

D. The '004 Patent

The '004 patent is titled "System and Method for Parallel Processing Using Dynamically Configurable Proactive Co-Processing Cells" and is generally directed to "a processing architecture which involves autonomous co-processors configured to proactively retrieve tasks from a task pool populated by a central processing unit." Ex. 1001, code (54), 1:14-18.

According to the '004 patent, "[c]omputer processors traditionally execute machine coded instructions serially. To run a plurality of applications concurrently, a single processor interleaves instructions from various programs and executes them serially, although from the user's perspective the applications appear to be processed in parallel." Ex. 1001, 1:42-47. The '004 patent further states that "[t]rue parallel or multi-core processing, on the other hand, is a computational approach that breaks large computational tasks into individual blocks of computations and distributes them among two or more processors." Ex. 1001, 1:47-50. "A typical multiprocessor system includes a central processing unit ('CPU') [2] and one or more co-processors. The CPU partitions the computational requirements into tasks and distributes the tasks to co-processors. Completed threads are reported to the CPU, which continues to distribute additional threads to the co-processors as needed." Ex. 1001, 1:56-61 (footnote added).

The '004 patent identifies a problem with using the CPU to control the distribution of tasks:

Presently known multiprocessing approaches are disadvantageous in that a significant amount of CPU bandwidth is consumed by task distribution; waiting for tasks to be completed before distributing new tasks (often with dependencies on previous tasks); responding to interrupts from co-processors when a task is completed; and responding to other messages from co-processors. In addition, co-processors often remain idle while waiting for a new task from the CPU.

Ex. 1001, 1:61-2:3. The '004 patent addresses that problem using a system that reduces CPU management overhead and which "more effectively harnesses and exploits available co-processing resources." Ex. 1001, 2:4-7.

Figure 1 of the '004 patent is reproduced below.

(Image Omitted)

Figure 1 "is a schematic block diagram of a parallel processing architecture including a CPU, memory, task pool, and a plurality of co-processors configured to communicate through a fabric." Ex. 1001, 3:56-59. More specifically, Figure 1 shows "a single or multi-core CPU 11 and one or more solidarity or co-processing cells 12A-12[n] configured to communicate with a task pool 13 through a cross-bar switching fabric 14. The solidarity cells 12 may also communicate with each other through the switching fabric 14 or through a separate cell bus (not shown)." Ex. 1001, 4:30-36. "The CPU 11 may communicate with the task pool 13 directly or through the switching fabric 14. One or more memory units 15 each contain data and/or instructions" to perform computations. Ex. 1001, 4:36-39.

E. Illustrative Claim

Independent claims 1 and 3, reproduced below, are illustrative of the claimed invention.

1. [1pre] A processing system, comprising:
[1a] a task pool;
[1b] a controller configured to populate the task pool with a plurality of first tasks and a plurality of second tasks;
[1c] a first co-processor configured to successively: retrieve a first task from the task pool; deliver the first task to the first co-processor; process the first task; generate first resulting data; and update the task pool to reflect completion of the first task, all without any communication between the first co-processor and the controller; and
[1d] a second co-processor configured to successively: retrieve a second task from the task pool; deliver the second task to the second co-processor; process the second task; generate second resulting data; and update the task pool to reflect completion of the second task, all without any communication between the second co-processor and the controller;
[1e] wherein the processing system is configured to dynamically accept the first co-processor, the second co-processor, and an additional co-processor into the processing system on a plug-and-play basis without any communication with the controller.
3. [3pre] A processing system, comprising:
[3a] a task pool;
[3b] a controller configured to populate the task pool with a plurality of first tasks and a plurality of second tasks;
[3c] a first co-processor configured to successively: retrieve a first task from the task pool; deliver the first task to the first co-processor; process the first task; generate first resulting data; and update the task pool to reflect completion of the first task, all without any communication between the first co-processor and the controller; and
[3d] a second co-processor configured to successively: retrieve a second task from the task pool; deliver the second task to the second co-processor; process the second task; generate second resulting data; and update the task pool to reflect completion of the second task, all without any communication between the second co-processor and the controller;
[3e] wherein:
the processing system is configured to dynamically accept the first co-processor, the second co-processor, and an additional co-processor into the processing system on a plug-and-play basis without any communication with the controller;
[3f] the first task includes indicia of a first task type, the first co-processor is configured to perform tasks of the first type, and the first agent is configured to search the task pool for a task of the first type;
[3g] the second task includes indicia of a second task type, the second co-processor is configured to perform tasks of the second type, and the second agent is configured to search the task pool for a task of the second type;
[3h] the first co-processor includes a first agent comprising a first source address, a first destination address, and a first payload; and
[3i] the second co-processor includes a second agent comprising a second source address, a second destination address, and a second payload;
[3
...
