Patent 6,339,428
DOCKET NO.: 2211726-00131
Filed on behalf of Unified Patents Inc.
By: David L. Cavanaugh, Reg. No. 36,476
Daniel V. Williams, Reg. No. 45,221
Wilmer Cutler Pickering Hale and Dorr LLP
1875 Pennsylvania Ave., NW
Washington, DC 20006
Tel: (202) 663-6000
Email: David.Cavanaugh@wilmerhale.com
Jonathan Stroud, Reg. No. 72,518
Unified Patents Inc.
1875 Connecticut Ave. NW, Floor 10
Washington, DC, 20009
Tel: (202) 805-8931
Email: jonathan@unifiedpatents.com
UNITED STATES PATENT AND TRADEMARK OFFICE
____________________________________________
IPR2016-01374 Petition
Patent 6,339,428
TABLE OF CONTENTS
I. MANDATORY NOTICES
II. ...
III. ...
IV. TECHNOLOGY BACKGROUND
V. ...
VI. CLAIM CONSTRUCTION
VII. SPECIFIC GROUNDS FOR PETITION
A. Ground I: Claims 1, 2, 3, 8, 18, 19, 20, 21, 25, 26, 27, and 28 are rendered obvious by Kuo in view of Griffin
VIII. CONCLUSION
I. MANDATORY NOTICES
A. Real Party-in-Interest
B. Related Matters
Volkswagen Group of America, Inc. on April 15, 2016. IPR2016-01178 was filed
by petitioner Texas Instruments Inc. on June 10, 2016.
C.
Counsel
David L. Cavanaugh (Reg. No. 36,476) will act as lead counsel; Jonathan
Stroud (Reg. No. 72,518) and Daniel Williams (Reg. No. 45,221) will act as backup counsel.
D.
review is sought is available for inter partes review and that Petitioner is not
barred or estopped from requesting an inter partes review challenging the patent
claims on the grounds identified in this Petition.
III.
claims 1, 2, 3, 8, 18, 19, 20, 21, 25, 26, 27, and 28 of the '428 Patent.
IV. TECHNOLOGY BACKGROUND
In graphics systems, object surfaces are usually rendered by first dividing
The '428 patent issued from a patent application filed prior to enactment of the
memory. Texture space contains many sets of independently organized visual descriptions (color, transparency, fog, etc.) of what will eventually become polygon screen pixel data. Before the filing date of the '428 patent, it was known that the process of sequentially accessing and mapping textures to each of the many thousands of polygons that make up an image containing thousands of objects posed a major design challenge: reduce texture mapping rendering time to increase overall performance. (Stark ¶ 20 (EX1005)). To decrease rendering time, a texture caching scheme that stores related or frequently used textures was devised (a scheme that operates directly analogously to the memory caches that were and are standard in general-purpose memory systems). (Id. at ¶ 20 (EX1005)). Since a cache may be accessed much more rapidly than either local memory or system memory (which holds the textures), a dramatic reduction in image processing time results. It was also known that since the cache would necessarily need to be of limited size (to reduce integrated circuit size and cost), the use of data compression techniques would increase the amount of data that could be stored in a given cache, thereby further decreasing overall processing time.
(Id. at ¶ 20 (EX1005)). It was also known that since texture space may already contain compressed texture data (so as to allow more textures to be stored in a limited-size texture space memory), no additional processing time or integrated circuit size penalty would need to be paid to store pre-existing compressed texture data. (Id. at ¶ 20 (EX1005)). It was additionally known that storing compressed data in a cache was advantageous since texture decompression operations occur logically and temporally after compressed data retrieval from texture space memory; i.e., the inclusion of a compressed data cache does not interfere with existing processing flow. (Id. at ¶ 20 (EX1005)).
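For illustration only (this sketch is not part of the evidentiary record): the caching scheme described above, in which texture blocks stay compressed inside a fixed-size cache and are decompressed only after retrieval, can be modeled in a few lines of Python. The class name, cache capacity, and use of zlib are hypothetical stand-ins for the hardware being described.

```python
import zlib

CACHE_CAPACITY_BYTES = 4096  # hypothetical on-chip cache size


class CompressedTextureCache:
    """Keeps texture blocks in compressed form; decompresses only on use."""

    def __init__(self, capacity=CACHE_CAPACITY_BYTES):
        self.capacity = capacity
        self.entries = {}  # texture id -> compressed bytes
        self.used = 0

    def store(self, tex_id, compressed):
        # Storing the compressed form lets more textures fit in the same cache.
        if self.used + len(compressed) > self.capacity:
            raise MemoryError("cache full")
        self.entries[tex_id] = compressed
        self.used += len(compressed)

    def fetch(self, tex_id):
        # Decompression happens after retrieval, so caching compressed data
        # does not disturb the existing processing flow.
        return zlib.decompress(self.entries[tex_id])


texture = bytes(64) * 16            # a highly compressible 1 KB texture block
cache = CompressedTextureCache()
cache.store("grass", zlib.compress(texture))
assert cache.fetch("grass") == texture
assert cache.used < len(texture)    # the compressed entry occupies far less space
```

The point of the sketch is only the ordering of operations: compression saves cache space, and decompression sits after the cache read, exactly where the petition says it does not interfere with the existing flow.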
Graphics circuits for storing image-related data in a compressed format were well known in the industry at the time of invention.
For example,
p. 354, col. 1, ll. 22 (EX1004)). Torborg states "Talisman broadly applies image compression technology" to solve these problems. (Id. at p. 355, col. 2, ll. 18-19 (EX1004)). Torborg further states "we are able to effectively apply compression to images and textures, achieving a significant improvement in price-performance." (Id. at p. 355, col. 2, ll. 23-25 (EX1004)) (Stark ¶ 22 (EX1005)).
Torborg identifies compressed textures that are controlled, stored, and decompressed using the highlighted Command and Memory Control, Compressed Texture Cache, and Decompress blocks found in Figure 1, reproduced below. (Id. at p. 358 (EX1004)). These highlighted blocks found in Torborg exhibit a strong correspondence with the texture address module, cache, and decompression blocks of Fig. 1 of the '428 patent. (Stark ¶ 23 (EX1005)).
Further, US Pat. No. 5,745,125, published on April 28, 1998, discloses "[i]n the preferred embodiment, the command preprocessor 142 receives data transferred from the memory subsystem 106, including both compressed and non-compressed geometry data." ('125 patent at 9:13-16). US Pat. No. 5,790,705, published on August 4, 1998, discloses that "[t]he present invention relates to an improvement in digital image data storage through data compression techniques." ('705 patent at 2:41-42).
US Pat. No. 5,822,452, published on October 13, 1998, discloses that the invention "compresses a texture image, stores the compressed texture image, and quickly and efficiently decompresses the texture image when determining a value of a pixel." ('452 patent at Abstract). Similarly, US Pat. No. 6,111,585, filed on May 14, 1998, discloses that when the texture image is compressed using the JPEG method, 8×8 pixels are compressed to one block package "to be stored in the refill line region 320 of the texture cache of FIG. 2." ('585 patent at 4:46-54).
The above-noted references, in addition to those discussed below, reveal that storing compressed image data, and in particular, using a cache for storing compressed texture data as discussed in the '428 patent, was well known in the prior art. (Stark ¶ 26 (EX1005)).
V.
The '428 patent discloses a video graphics texture mapping circuit 10 used to determine color values for pixels in a display frame. ('428 patent, 3:31-34 (EX1001)). The video graphics texture mapping circuit 10 includes a memory 20, a cache 40, a texture address module 30, and a decompression block 50, as shown below in Figure 1 of the '428 patent. (Id. at 3:34-36; Figure 1 (EX1001)). The '428 patent explains that available bandwidth for accessing memory is often a performance-limiting factor in video graphics circuits. (Id. at 3:36-38 (EX1001)) (Stark ¶ 27 (EX1005)).
can be transferred to the cache 40. (Id. at 3:41-43 (EX1001)). The '428 patent explains that the memory 20 may be local system memory or memory accessible over an accelerated graphics port (AGP) bus. (Id. at 3:43-45 (EX1001)).
The alleged advancement of the '428 patent is based on the texture information in the cache 40 remaining in the compressed format so that the cache 40 can store more information than it could if the texture information were not compressed. (Id. at 4:21-25 (EX1001)) (Stark ¶ 29 (EX1005)). The compressed texture information in the cache 40 is decompressed prior to use. ('428 patent at 4:25-30 (EX1001)).
A texture address module 30 is used to determine whether texture data for a particular texturing operation is currently stored in the cache 40. (Id. at 4:35-38 (EX1001)). When needed texture information is not stored in the cache 40, the texture address module 30 copies texture information from the memory 20 into the cache 40. (Id. at 4:38-41 (EX1001)). The texture address module 30 also provides control information, which may include address and control signals, to the cache 40 so that the cache 40 provides the required texture data 42 at its outputs. (Id. at 4:41-45 (EX1001)).
The '428 patent describes the following three prior art scenarios that were known to reduce memory bandwidth with respect to retrieving stored texture data: (1) storing texture data in a cache (Id. at 1:41-43 (EX1001)); (2) compressing texture data for storage in memory (Id. at 1:51-52 (EX1001)); and (3) compressing texture data for storage, decompressing the texture data, and storing the decompressed texture data in a cache. (Id. at 1:61-65 (EX1001)). The admitted prior art alone would have motivated one of ordinary skill to provide the fundamental feature of the '428 patent, namely, the storing of compressed texture data in a cache. (Stark ¶ 31 (EX1005)).
B.
A person having ordinary skill in the art at the time of the invention would have had a B.S. degree in Electrical Engineering, Computer Science, or an equivalent field, as well as at least 2-3 years of academic or industry experience in computer graphics, image processing hardware, or a comparable field. (Id. at ¶ 33 (EX1005)).
C.
Prosecution History
The '428 Patent issued from US Pat. Appl. No. 09/356,398, which was filed on July 16, 1999 with claims 1-30. (File History, Application (7/16/1999) (EX1006)). Claims 1-30 were allowed on October 1, 2001 without receiving an Office Action on the merits. (File History, Notice of Allowability (10/1/2001) (EX1007)).
VI.
CLAIM CONSTRUCTION
Claim terms of an unexpired patent in inter partes review are given their broadest reasonable construction in light of the specification of the patent in which they appear. 37 C.F.R. § 42.100(b); In re Cuozzo Speed Techs., LLC, 778 F.3d 1271, 1279-81 (Fed. Cir. 2015). Any claim term that lacks a definition in the specification is therefore given a broad interpretation. In re ICON Health & Fitness, Inc., 496 F.3d 1374, 1379 (Fed. Cir. 2007). Under the broadest reasonable interpretation standard, claim terms are given their ordinary and customary meaning, as they would be understood by one of ordinary skill in the art, in the context of the disclosure. In re Translogic Tech., Inc., 504 F.3d 1249, 1257 (Fed. Cir. 2007). Any special definition for a claim term must be set forth in the specification with "reasonable clarity, deliberateness, and precision." In re Paulsen, 30 F.3d 1475, 1480 (Fed. Cir. 1994).
The following proposes a construction and offers support for that
construction.
those of ordinary skill in the art. Should the Patent Owner, to avoid the prior art,
contend that a claim term has a construction different from its broadest reasonable
interpretation, the appropriate course is for the Patent Owner to seek to amend the
claim to expressly correspond to its contentions in this proceeding. See 77 Fed.
Reg. 48764 (Aug. 14, 2012).
A.
"operably coupled"
Claims 1-4, 10-13, and 25 recite the term "operably coupled." In the context of the '428 patent, a person of ordinary skill in the art would have understood this term to mean "connected such that data or information passes from one to another." (Stark ¶ 44 (EX1005)). The '428 patent discloses that texture data and control signal information passes from one place to another. (See, e.g., '428 patent at 4:41-61, 9:1-9 (EX1001)). The '428 patent specification and claims use the term "operably coupled" in a similar manner and demonstrate that the coupling does not need to be direct. (Stark ¶ 44 (EX1005)). For example, the '428 patent discloses that caches 40 and 120 are operably coupled to the memory 20 so that the memory is accessible over an accelerated graphics bus. ('428 patent at 6:13-21 (EX1001)). Because the disclosed caches 40 and 120 are operably coupled to the memory 20 over a bus, the construction should encompass direct and indirect coupling. (Stark ¶ 44 (EX1005)).
VII. SPECIFIC GROUNDS FOR PETITION
Pursuant to Rule 42.104(b)(4)-(5), the following sections (as confirmed in the Stark Declaration ¶¶ 45-105 (EX1005)) detail the grounds of unpatentability, the limitations of the challenged claims of the '428 Patent, and how these claims are rendered obvious in view of the prior art.
A.
Ground I: Claims 1, 2, 3, 8, 18, 19, 20, 21, 25, 26, 27, and 28 are
rendered obvious by Kuo in view of Griffin
1. Overview of Kuo
memory module 140, cache module 220, address module 210 and controller
module 290. (Id. at 4:18-23, Figure 2 (EX1002)).
2. Overview of Griffin
acknowledges the problem of memory bottlenecks in graphics systems. (Id. at 2:30-34 ("[m]emory bandwidth is always a critical bottleneck.") (EX1003)) (Stark ¶ 47 (EX1005)). Griffin addresses problems in the prior art by disclosing an improved memory system that uses compressed information, including compressed texture data. (Griffin at 7:21-25 (EX1003)).
Figure 4A of Griffin is reproduced below to show shared memory 216,
digital signal processor 176, and tiler 200. (Id. at 9:39-44; Figure 4A (EX1003)).
Compressed data retrieved from the shared memory 216 can be temporarily stored in a compressed cache. (Id. at 24:26-28 (EX1003)) (Stark ¶ 48 (EX1005)).
blending for multi-pass rendering. (Griffin at 10:1-4 (EX1003)). Figure 9A of
Griffin is reproduced below to show an enlarged view of the tiler. (Id. at 3:31-32
(EX1003)).
compressed cache 416, a decompression engine 404, and a texture cache 402. (Id. at Figure 9 (EX1003)). The compressed cache 416 is used to buffer compressed data before the decompression engine 404 decompresses and transfers it to the texture cache 402. (Id. at 16:11-13 (EX1003)) (Stark ¶ 49 (EX1005)).
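As an illustration only (not part of the record), the buffering sequence Griffin describes, from shared memory through the compressed cache 416 and decompression engine 404 into the texture cache 402, can be sketched as follows. The function name, the dictionary representation of each stage, and the use of zlib as the codec are hypothetical assumptions, not anything Griffin discloses.

```python
import zlib


def fetch_texture_block(block_id, shared_memory, compressed_cache, texture_cache):
    """Move one texture block through Griffin-style buffering stages:
    shared memory -> compressed cache -> decompression -> texture cache."""
    if block_id not in texture_cache:
        # Compressed data retrieved from shared memory is first buffered
        # in the compressed cache (Griffin's element 416) ...
        compressed_cache[block_id] = shared_memory[block_id]
        # ... then the decompression engine (404) decompresses it and
        # transfers the result to the texture cache (402).
        texture_cache[block_id] = zlib.decompress(compressed_cache[block_id])
    return texture_cache[block_id]


block = b"\x10\x20" * 512                      # a raw 1 KB texture block
shared = {7: zlib.compress(block)}             # shared memory holds it compressed
ccache, tcache = {}, {}
assert fetch_texture_block(7, shared, ccache, tcache) == block
assert 7 in ccache  # the compressed copy was buffered on the way through
```

The sketch captures only the staging order: the compressed cache sits between memory and the decompressor, so decompression latency is hidden from the memory fetch path.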
3.
Kuo and Griffin both disclose graphics rendering systems that utilize aspects of a cache for improving processing, as noted in the first portion of each patent's Abstract.
A caching system for increasing the operation
concurrency between a cache module and a memory
module by comparing received memory block identifiers,
which correspond to texels needed for pixel composition,
with memory block identifiers corresponding to texels
locally stored within the cache module.
(Kuo at Abstract, line 1 (emphasis added) (EX1002)).
A system for accessing texture data in a graphics
rendering system allows texture data to be stored in
memories with high latency or in a compressed format.
The system utilizes a texture cache to temporarily store
blocks of texture data retrieved from an external memory
during rendering operations.
(Griffin at Abstract (emphasis added) (EX1003)).
Kuo acknowledges the problem of memory bottlenecks in graphics systems. (Kuo at 2:30-34 ("bandwidth bottleneck results in a relatively significant latency between the system requesting a plurality of texels (memory block) from the memory module and the system receiving back the requested memory block.") (EX1002)). Griffin similarly acknowledges bottleneck problems. (Griffin at 6:18-19; see also 5:38-42 ("Memory bandwidth is always a critical bottleneck. The cost of high performance systems are often driven by the need to provide numerous banks of interleaved memory to provide adequate bandwidth for pixel and texture data accesses.") (EX1003)).
Griffin acknowledges a significant side effect of the memory bandwidth issue and discloses how it detrimentally affects the system by making it difficult to store compressed texture data. (Id. at 2:30-32 ("A significant side effect of the memory bandwidth problem outlined above is that it makes it difficult, if not impossible, to store texture data in compressed form.") (EX1003)). Griffin further discloses what most skilled artisans already knew, namely, that by failing to effectively take advantage of compressed data, memory requirements are increased. (Id. at 2:35-37 ("As a result, memory requirements to store texture data can be substantial. The need for additional memory adds further to the expense of the system.") (EX1003)) (Stark ¶ 52 (EX1005)).
Griffin then discloses how to compress texture data so as to increase system efficiency. (Griffin at 3:3-8 ("[b]oth approaches outlined above enable texture to be accessed efficiently using a texture cache. Texture data can be stored in lower cost memory with higher latency. In addition, texture data can even be stored in a compressed format despite the additional latency associated with decompressing the texture data.") (EX1003)).
It would have been obvious to a person of ordinary skill in the art to have combined the compression techniques of Griffin with the configuration of Kuo by known methods to obtain the predictable result of reducing bottlenecks. (Stark ¶ 54 (EX1005)). Given the similarities between Kuo and Griffin, one of ordinary skill, who was familiar with Kuo and then read Griffin, would have been motivated to improve Kuo's system by using Griffin's compressed texture data technique. (Id. ¶ 54 (EX1005)). Griffin discloses not only that it is advantageous to use compressed texture data, but how to implement it in a system. Doing so allows the system to overcome memory bottleneck issues known to both Griffin and Kuo. Modifying Kuo's graphics processing system to implement the compression teachings of Griffin was well within the abilities of one of ordinary skill in the art and would have been accomplished with a reasonable chance of success. (Id. ¶ 54 (EX1005)).
4.
(EX1002)). Kuo further discloses that its invention "generally relates to the field of computer graphics systems and more particularly to the generation and processing of textures in computerized graphical images." (Id. at 1:6-8 (EX1002)). Similarly, Griffin discloses a system that accesses texture data in a graphics rendering system to allow texture data to be stored in a compressed format. (Griffin at Abstract (EX1003)). Griffin also discloses that its system is well suited for texture mapping and supports operations that require access to texture memory. (Id. at 2:43-45 (EX1003)). One of ordinary skill in the art would have understood that both Kuo and Griffin implement their graphics texture mapping aspects using a circuit. (Stark ¶ 55 (EX1005)).
b)
The combination of Kuo and Griffin teaches this limitation. Kuo discloses a memory to store texture information. (Id. ¶ 56 (EX1005)). The texture information of Kuo corresponds to at least one texture. (Id. ¶ 56 (EX1005)). For example, Kuo discloses that memory module 140 is comprised of dynamic random access memory (DRAM), which stores each 4×4 matrix of texels in one of a plurality of memory blocks. (Kuo at 4:3-6 (EX1002)). Thus, the memory module 140 stores texture information, as claimed. One of ordinary skill in the art would understand that this texture information corresponds to at least one texture, as further claimed. (Stark ¶ 56 (EX1005)). Kuo does not explicitly disclose that its texture information in its cache is compressed, but does acknowledge the general benefits of compression. (Kuo at 1:39-43 (EX1002)) (Stark ¶ 56 (EX1005)).
Griffin discloses to store compressed texture information and provides the guidance needed for one of ordinary skill to do the same. (Id. ¶ 57 (EX1005)).
Similar to Kuo, Griffin acknowledges bandwidth issues. (Griffin at 5:38-39 (EX1003)). Griffin
compressing its stored texture information, e.g., the texture information stored in Kuo's memory 140. (Stark ¶ 59 (EX1005)). This combination would have been attainable by one of ordinary skill in the art using routine image processing techniques, such as software and circuit implementation. (Id. ¶ 59 (EX1005)). The combination would have been expected to be reasonably successful and would have produced a predictable result. (Id. ¶ 59 (EX1005)). Thus, the combination of Kuo and Griffin teaches each feature in limitation (b). (Id. ¶ 59 (EX1005)).
c)
(Id. ¶ 60 (EX1005)).
(Stark ¶ 60 (EX1005)).
(Id. ¶ 61 (EX1005)). The teachings of Griffin relied upon here are those directed to the compressed information and its beneficial use. In addition to other benefits noted herein, Griffin discloses generally that compression allows much smaller memory size requirements. (Griffin at 30:37-39 (EX1003)).
One of ordinary skill in the art would understand that when Griffin's image compression teachings are implemented in Kuo's system, the texels loaded into Kuo's memory module 140 would first be compressed. (Stark ¶ 62 (EX1005)). This is what Griffin teaches, e.g., that its shared memory is for storing texture data in compressed form. (Griffin at 9:59-60 (EX1003)). Figures 9A-C of Griffin show the compressed cache as element 416. (Id. at Figures 9A-C (EX1003)).
As noted above, it would have been obvious to modify Kuo's memory to include compressed texture data, based on Griffin. (Stark ¶ 63 (EX1005)). Griffin teaches to pass portions of this compressed texture data to its cache. (Id. ¶ 63 (EX1005)). For example, Griffin discloses that the memory control block "fetches the requested texture data, and if it is compressed, stores it in the compressed cache 416 (990)." (Griffin at 30:4-7 (EX1003)). Claim 6 of Griffin also provides details by describing "a compressed cache in communication with the memory, and the decompression unit, the compressed cache operable to temporarily store the compressed texture data retrieved from memory as the decompression unit decompresses compressed blocks of the compressed texture data." (Id. at 37:40-45 (EX1003)).
As one skilled in the art would appreciate, the requested texture data stored in compressed cache 416 represents a portion of the compressed texture information stored in the shared memory, as claimed. (Stark ¶ 64 (EX1005)). This is because Griffin's texture fetch block 420 only requests the appropriate texture blocks from memory that are needed to process the specific geometric primitives that are set up for rasterization. (Griffin at 15:6-20; 17:37-50 (EX1003)) (Stark ¶ 64 (EX1005)).
It would have been obvious for Kuo's system to pass the compressed texture data to its cache, as disclosed by Griffin. (Id. ¶ 65 (EX1005)). Further, Kuo discloses the concept of compression with respect to avoiding potentially overwhelming bandwidth requirements. (Kuo at 1:39-43 (EX1002)). Accordingly, one of ordinary skill in the art would have been taught that the texels in Kuo, requested from the memory module for storage in cache module 220, should remain compressed because this will help address bandwidth requirements. (Stark ¶ 65 (EX1005)). Thus, the combination of Kuo and Griffin teaches each feature in limitation (c). (Id. ¶ 65 (EX1005)).
d)
(Id. ¶ 66 (EX1005)).
In operation, the address module 210 generates a memory block (MB) identifier 530 and a texel identifier 520. The address module 210 utilizes the MB identifier 530 to identify a memory block within the memory module 140. The address module 210 utilizes the texel identifier 520 to locate a specific texel within that memory block. (Id. at 4:40-45 (EX1002)).
The address module 210 then determines whether texture data for a texturing operation is stored in the cache, as claimed. (Stark ¶ 68 (EX1005)). In particular, the address module 210 utilizes the MB identifier 530 "to determine whether the memory block corresponding to the MB identifier 530 already has been locally stored within a cache block of the cache module 220." (Kuo at 4:62-65 (EX1002)). Thus, the combination of Kuo and Griffin teaches each feature of limitation (d). (Stark ¶ 68 (EX1005)).
e)
(Id. ¶ 69 (EX1005)). Kuo discloses that when the texture data is not stored in the cache, the texture address module copies the texture data from the memory to the cache. (Id. ¶ 69 (EX1005)). For example, Kuo discloses the following:
[i]f the memory block is not already locally cached (miss), the address module 210 initiates a request for retrieval of the memory block from the memory module 140 by transmitting a retrieval signal to the request buffer 250, which operates as a FIFO queue. . . . The retrieval signal includes the MB identifier 530 and a linear memory module address.
(Kuo at 5:8-15 (emphasis added) (EX1002)). Kuo discloses that the memory module 140 then responds to the receipt of the retrieval signals by transmitting the memory block, which corresponds to the MB identifier 530, back to the TCM 130. (Id. at 5:24-26 (EX1002)). The TCM 130 is a texture cache module and includes the cache module 220. (Id. at 4:1, 4:18-23, Figure 2 (EX1002)). The address module 210 of Kuo generates a retrieval tag signal that the controller module 290 uses to control the transmission of the memory blocks from the retrieval buffer 260 to the cache block associated with the memory block. (Id. at 5:37-54 (EX1002)). Thus, the combination of Kuo and Griffin teaches each feature of limitation (e). (Stark ¶ 69 (EX1005)).
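The miss-handling sequence described above, checking the cache by MB identifier and, on a miss, queuing a retrieval signal in a FIFO request buffer so the block can be copied in from memory, can be sketched for illustration only. This sketch is not part of the record; the Python names and the dictionary model of the cache and memory modules are hypothetical.

```python
from collections import deque


def request_texels(mb_id, cache_module, memory_module, request_buffer):
    """On a hit the block is read from the cache; on a miss a retrieval
    signal carrying the MB identifier is queued (FIFO) and the memory
    module's block is copied into the cache before being returned."""
    if mb_id not in cache_module:                  # miss
        request_buffer.append(mb_id)               # queue the retrieval signal
        queued = request_buffer.popleft()          # requests serviced in FIFO order
        cache_module[queued] = memory_module[queued]
    return cache_module[mb_id]


# One hypothetical 4x4 texel block keyed by its MB identifier.
memory = {530: [(r, c) for r in range(4) for c in range(4)]}
cache, fifo = {}, deque()
first = request_texels(530, cache, memory, fifo)   # miss -> copied from memory
second = request_texels(530, cache, memory, fifo)  # hit -> served from the cache
assert first == second == memory[530]
assert not fifo                                    # queued request was serviced
```

Only the control flow matters here: the identifier comparison decides hit versus miss, and the FIFO buffer decouples the request from the memory module's response.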
f)
(Id. ¶ 70 (EX1005)). For example, Kuo discloses a controller module 290. (Kuo at 4:22-23, Figure 2 (EX1002)).
Although Kuo does not explicitly label the address module 210 and the controller module 290 as a single module, it would have been obvious to one of ordinary skill in the art that the functions performed by the address module 210 and the control module 290 could be carried out by a module with a name that connotes the address and control functions. (Stark ¶ 71 (EX1005)). For example, Kuo uses the term "module" in a non-restrictive manner. (Id. ¶ 71 (EX1005)). As shown above in annotated Figure 2, a module, e.g., texture cache module 130, may include multiple subparts. These subparts, such as the address module 210 and controller module 290, may also be considered modules. (Id. ¶ 71 (EX1005)). Thus, one of ordinary skill in the art would understand that multiple elements may cumulatively be considered a module and that the term "module" may also be used to describe subparts. (Id. ¶ 71 (EX1005)). One skilled in the art would understand and find obvious that the address module 210 and controller module 290 of Figure 2 are analogous to showing a box, as illustrated above with the dotted lines, that is called an "address and controller module" or "address/controller module." (Id. ¶ 71 (EX1005)).
modifications and variations are possible in light of the above teaching. (Kuo at 8:27-28 (EX1002)). Thus, even if one argues that the address module 210 and controller module 290 are separate modules, a person of ordinary skill in the art would have understood that one of these many modifications would be to incorporate together the functions performed by address module 210 and control module 290 to streamline aspects of the texture filter module 180 in general. (Stark ¶ 71 (EX1005)). Therefore, one of ordinary skill in the art would have understood that the address module 210 and control module 290 cumulatively represent the claimed texture address module. (Id. ¶ 71 (EX1005)).
The texture address module of Kuo, i.e., the address module 210 and control module 290, provides control information to the cache such that the cache outputs the texture data, as claimed. (Id. ¶ 72 (EX1005)). For example, Kuo discloses that when the requested texel is stored in cache module 220, the address module 210 generates a transmit tag signal. (Kuo at 4:65-5:7 (EX1002)). Kuo further discloses that the transmit tag signal includes at least the cache block address, texel identifier, and interpolation factors. The control module 290, which also forms part of Kuo's texture address module, then uses the transmit tag signal to determine when to trigger the transmission to the texture filter module TFM 180 of each texel associated with each transmit tag signal from the cache module 220. (Id. at 5:66-6:3 (EX1002)). The TFM is shown above in Figure 2 as receiving output from the cache module 220.
Further, the address module 210 and control module 290 collectively control when texels are written from memory module 140 to cache module 220, and control when texels are read from cache module 220, such that they are operably coupled to cache module 220 and memory module 140, as claimed. (Stark ¶ 73 (EX1005)). Thus, the combination of Kuo and Griffin teaches each feature of limitation (f). (Id. ¶ 73 (EX1005)).
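As an illustration of how the address and control roles just described could sit in a single module, the two functions can be sketched as one class. This sketch is offered for illustration only, not as evidence; the class name, method names, and the dictionary model of the cache are invented.

```python
class AddressControllerModule:
    """One hypothetical module combining Kuo's address module 210 and
    controller module 290: it forms a transmit tag and then uses that
    tag to drive the cache output toward the texture filter module."""

    def __init__(self, cache_module):
        self.cache_module = cache_module

    def make_transmit_tag(self, cache_block_addr, texel_id):
        # Address-module role: the transmit tag carries at least the
        # cache block address and the texel identifier.
        return (cache_block_addr, texel_id)

    def transmit(self, tag):
        # Controller-module role: the tag determines which texel the
        # cache sends on for filtering.
        cache_block_addr, texel_id = tag
        return self.cache_module[cache_block_addr][texel_id]


cache = {2: {"t7": (255, 128, 0)}}   # cache block 2 holds texel "t7"
module = AddressControllerModule(cache)
tag = module.make_transmit_tag(2, "t7")
assert module.transmit(tag) == (255, 128, 0)
```

The sketch makes concrete the point argued above: nothing about the two functions requires two separately named blocks, since one module can both generate the tag and act on it.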
g)
(Id. ¶ 74 (EX1005)). As noted above, it would have been obvious to modify the cache of Kuo to store compressed texture information. (Id. ¶ 74 (EX1005)). It naturally follows that the compressed information needs to be decompressed. (Id. ¶ 74 (EX1005)). Since Kuo does not disclose this feature, it would have been obvious to rely on Griffin's disclosure of a decompression block. (Id. ¶ 74 (EX1005)).
Figure 9B of Griffin shows a decompression block as element 404 and describes this element by stating that "decompression engine 404 decompresses texture data and transfers it to the texture cache 402." (Griffin at 16:1-2, Figure 9B (EX1003)). Griffin also illustrates and describes that the decompression engine 404 is operably coupled to the cache. (Stark ¶ 75 (EX1005)). For example, Griffin discloses that the compressed cache 416 can be used to buffer compressed data before the decompression engine 404 decompresses and transfers it to the texture cache 402. (Griffin at 16:11-13 (EX1003)).
Griffin further discloses that the decompression engine 404 decompresses the texture data to produce uncompressed texture data for use in the texturing operation, as claimed. (Stark ¶ 76 (EX1005)). For example, Griffin discloses that the texture cache is also in communication with the decompression engine, "which will decompress texture data (which is stored in a compressed format) for use by the texture filter engine." (Griffin at 28:58-62 (EX1003)). One skilled in the art would understand that Griffin's system, including the functions performed by the texture filter engine, includes a texturing operation, as claimed. (Id. at 20:15-17, 20:44-47 (EX1003)) (Stark ¶ 76 (EX1005)).
It would have been obvious to a person of ordinary skill in the art to implement Griffin's decompression engine 404 to decompress data from the cache module 220 in Kuo. (Id. ¶ 77 (EX1005)). The decompression engine 404 would be placed between the cache module 220 and texture filter module 180 of Kuo, as taught by Griffin. (Griffin at Figure 9C, showing the decompression engine 404 after the compressed cache 416 (EX1003)) (Stark ¶ 77 (EX1005)). Modifying Kuo's graphics processing system to implement the decompression teachings of Griffin was well within the abilities of one of ordinary skill in the art and would have been accomplished with a reasonable chance of success using known techniques. (Id. ¶ 77 (EX1005)).
uncompressed texture data for use in the texturing operation, as claimed. (Id. ¶ 77 (EX1005)). It would have been obvious to a person of ordinary skill in the art that Kuo's texture filtering and pixel processing are texturing operations. (Id. ¶ 77 (EX1005)). Thus, the combination of Kuo and Griffin teaches each limitation of claim 1. (Id. ¶ 77 (EX1005)).
5.
(Id. 78
(EX1005)). The texture filtering module (TFM) 180 of Kuo discloses the claimed
32
IPR2016-01374 Petition
Patent 6,339,428
filtering block. (Kuo at 4:2, Figures 1, 2 (EX1002)) (Stark ¶ 78 (EX1005)).
Similar to the '428 patent, the TFM 180 performs bilinear filtering. (Kuo at
1:31-37, 4:51-53, 6:3-9 (EX1002)). The '428 patent explains that bilinear filtering
involves combining different texel color values to produce a resultant texture color.
('428 patent at 4:54-58 (EX1001)). One of ordinary skill in the art would have
understood that because Kuo's TFM 180 is configurable to perform a filtering
operation similar to that disclosed in the '428 patent, Kuo's TFM 180 likewise
combines texture data to produce a texture color, as claimed. (Stark ¶ 78 (EX1005)).
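For illustration only (this sketch is not part of the record or of any cited reference), bilinear filtering of the kind described above can be expressed as two linear interpolations across a 2x2 block of texels; the function names and texel layout here are assumptions for this sketch.

```python
def lerp(a, b, t):
    """Linear interpolation between two color values."""
    return a + (b - a) * t

def bilinear_filter(c00, c10, c01, c11, fx, fy):
    """Combine four neighboring texel colors (a 2x2 block) into a
    single resultant texture color, weighted by the fractional
    texture coordinates (fx, fy) within the block."""
    top = lerp(c00, c10, fx)      # blend along x on the top row
    bottom = lerp(c01, c11, fx)   # blend along x on the bottom row
    return lerp(top, bottom, fy)  # blend the two rows along y

# Example: sampling midway between four texels averages their colors
color = bilinear_filter(0.0, 1.0, 0.0, 1.0, 0.5, 0.5)  # 0.5
```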
(Id. ¶ 79
(EX1005)). Griffin discloses pixel engine 406 that corresponds to the claimed
blending block. (Id. ¶ 79 (EX1005)). For example, Griffin discloses that pixel
engine 406 performs "pixel level calculations including blending, and depth
buffering." (Griffin at 16:14-16 (emphasis added); see also Figure 9A (EX1003)).
The "pixel values" or "texture color" generated by the texture filter engine, or
"filtering block," are sent to pixel engine 406. (Id. at 20:44-47 (EX1003)). The
pixel engine 406 also receives individual pixel addresses along with color and
depth information from the scan convert block. (Id. at 28:66-29:3; 30:23-26
(EX1003)). The pixel engine uses the texture color and the color information to
determine which pixel data to store in the pixel and fragment buffers. (Id. at
27:8-10 (EX1003)). One of ordinary skill in the art would have found it obvious to
incorporate the pixel engine 406 into the system of Kuo for the reason disclosed by
Griffin, e.g., to perform "pixel level calculations including blending, and depth
buffering." (Id. at 16:14-16 (EX1003)) (Stark ¶ 79 (EX1005)). The combination of
Kuo and Griffin therefore teaches each limitation of claim 3. (Id. ¶ 79 (EX1005)).
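For illustration only (not part of the record), the depth-buffered store decision described above, in which a pixel engine uses depth information to determine which pixel data to keep, can be sketched as follows; the names and buffer representation are assumptions for this sketch.

```python
def pixel_engine_store(buffer, x, y, color, depth):
    """Depth-buffering sketch: store the incoming pixel at (x, y)
    only if nothing is there yet or the incoming fragment is nearer
    (smaller depth) than the one already stored."""
    stored = buffer.get((x, y))
    if stored is None or depth < stored[1]:  # nearer fragment wins
        buffer[(x, y)] = (color, depth)

buffer = {}
pixel_engine_store(buffer, 0, 0, "red", 0.8)
pixel_engine_store(buffer, 0, 0, "blue", 0.3)  # nearer: replaces red
pixel_engine_store(buffer, 0, 0, "green", 0.9)  # farther: discarded
```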
7.
(Id. ¶ 80
(EX1005)). Kuo does not explicitly disclose that its cache 220, address module
210, and controller module 290 are part of an integrated circuit. However, Griffin
discloses that tiler 200 is a "VLSI (Very Large Scale Integration) chip which
performs scan-conversion, shading, texturing, hidden-surface removal,
anti-aliasing, translucency, shadowing, and blending for multi-pass rendering."
(Griffin at 10:1-7 (EX1003)). As would be understood by one of ordinary skill in
the art, a VLSI chip is an integrated circuit. (Stark ¶ 80 (EX1005)).
Accordingly, one of ordinary skill in the art would have found it obvious to
incorporate the features of Kuo's texture cache module 130 into an integrated
circuit. (Id. ¶ 80 (EX1005)). The texture cache module 130 is shown in Figure 2
of Kuo, and when modified as discussed above, would include the claimed cache
220, address module 210, controller module 290, and decompression block. (Id.
¶ 80 (EX1005)). The combination of Kuo and Griffin therefore teaches each
limitation of claim 8. (Id. ¶ 80 (EX1005)).
8.
b)
Kuo discloses this limitation. (Id. ¶ 82 (EX1005)). Kuo discloses that the
texture cache module 130 receives pixel identifiers corresponding to both specific
pixels in a three-dimensional graphic image and memory blocks of texels within
the memory module.
Kuo discloses this limitation. (Id. ¶ 83 (EX1005)). Kuo discloses that the
texture cache module 130 receives pixel identifiers corresponding to both specific
pixels in a three-dimensional graphic image and memory blocks of texels within
the memory module. (Id. ¶ 83 (EX1005)). The texture cache module "strips each
of these pixel identifiers into its corresponding memory block (MB) identifier and
texel identifier subcomponents." (Kuo at 3:16-21 (EX1002)). The address module
210 of Kuo then utilizes the MB identifier 530 to identify a memory block within
the memory module 140 and utilizes the texel identifier 520 to locate a specific
texel within that memory block. (Id. at 4:41-45 (EX1002)). One of ordinary skill
in the art would have understood that the pixel identifiers in Kuo, including the
memory block (MB) identifier and texel identifier subcomponents, disclose
"texture coordinates corresponding to a selected pixel in a received graphics
primitive," as claimed. (Stark ¶ 83 (EX1005)).
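For illustration only (not part of the record), the stripping of a pixel identifier into its memory block (MB) identifier and texel identifier subcomponents, as described above, can be sketched as a bit-field split; the field width used here is an assumption for this sketch and is not taken from Kuo.

```python
TEXEL_BITS = 6  # assumed width of the texel-identifier field

def strip_pixel_identifier(pixel_id):
    """Split a pixel identifier into its memory block (MB) identifier
    and texel identifier subcomponents via a simple bit-field split."""
    texel_id = pixel_id & ((1 << TEXEL_BITS) - 1)  # low bits: texel within block
    mb_id = pixel_id >> TEXEL_BITS                 # high bits: memory block
    return mb_id, texel_id

# Example: 0b1011_000101 splits into MB 0b1011 and texel 0b000101
mb, texel = strip_pixel_identifier(0b1011000101)
```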
d)
(Id. ¶ 84
(EX1005)). Moreover, for at least the reasons discussed in Sections VII(A)(3) and
VII(A)(4) above, it would have been obvious to a person of ordinary skill in the art
to incorporate Griffin's compression teachings into Kuo's disclosed systems. (Id.
¶ 84 (EX1005)).
Element (d) of claim 18 requires that when compressed texels
corresponding to the texture coordinates are not present in a cache, copying the
compressed texels from a memory into the cache. The explicit language of claim
18 differs from claim 1 by addressing the absence of "texels corresponding to
the texture coordinates," as opposed to addressing the absence of "texture data" in
the cache, as recited in claim 1. However, Kuo in view of Griffin still teaches
element (d) of claim 18. (Id. ¶ 85 (EX1005)).
In particular, Kuo discloses that when texels corresponding to the texture
coordinates are not present in a cache, copying the texels from a memory into the
cache. (Id. ¶ 86 (EX1005)). For example, Kuo discloses that "[i]f the memory
block is not already locally cached (miss), the address module 210 initiates a
request for retrieval of the memory block from the memory module 140 by
transmitting a retrieval signal to the request buffer 250, which operates as a FIFO
queue. In this embodiment, the request buffer 250 can receive up to 4 retrieval
signals at the same time. The retrieval signal includes the MB identifier 530 and a
linear memory module address." (Kuo at 5:8-15 (EX1002)). Then, Kuo discloses
that memory module 140 responds to the receipt of the retrieval signals by
transmitting the memory block, which corresponds to the MB identifier 530, back
to the TCM 130, which includes cache module 220. (Id. at 5:24-26 (EX1002)).
The address module 210 in Kuo generates a retrieval tag signal that the controller
module 290 uses to control the transmission of the memory blocks from the
retrieval buffer 260 to the cache block associated with the memory block. (Id. at
5:37-54 (EX1002)).
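For illustration only (not part of the record), the miss-handling flow described above, in which a missing memory block triggers a FIFO retrieval request and the block is copied from the memory module into the cache, can be sketched as follows; the class and method names are assumptions for this sketch and are not taken from Kuo.

```python
from collections import deque

class TextureCache:
    """Sketch of a cache that copies memory blocks in on a miss."""
    def __init__(self, memory):
        self.memory = memory          # backing memory module: mb_id -> block
        self.blocks = {}              # locally cached memory blocks
        self.request_queue = deque()  # FIFO of pending retrieval requests

    def fetch_block(self, mb_id):
        if mb_id not in self.blocks:          # miss
            self.request_queue.append(mb_id)  # queue a retrieval signal
            while self.request_queue:         # memory services the FIFO
                pending = self.request_queue.popleft()
                self.blocks[pending] = self.memory[pending]
        return self.blocks[mb_id]             # hit (possibly after the fill)

memory = {0: b"block0", 1: b"block1"}
cache = TextureCache(memory)
block = cache.fetch_block(1)  # miss: block 1 is copied into the cache
```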
e)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(4)(g). (Stark ¶ 88 (EX1005)).
g)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(6)(a). (Id. ¶ 89 (EX1005)).
9.
For at least the reasons discussed in Section VII(A)(6) above, and the
additional reasons noted below, the combination of Kuo and Griffin teaches this
limitation. (Id. ¶ 90 (EX1005)). Griffin discloses blending the uncompressed
texels with additional color data to produce the pixel color for the selected pixel,
as claimed. For example, Griffin discloses that "[a]long with texture processing, the
scan convert engine scan converts the triangle edge data (940) and the individual
pixel addresses along with color and depth information are passed to the pixel
engine for processing (942)." (Griffin at 28:66-29:3 (EX1003)). One of ordinary
skill in the art would have understood that Griffin's disclosure of individual pixel
addresses along with color and depth information, which are received by pixel
engine 406 from the scan convert block, discloses the claimed "additional color
data." (Id. at 28:66-29:3 (EX1003)) (Stark ¶ 90 (EX1005)).
10.
(Id. ¶ 92 (EX1005)).
Griffin discloses blending operations to provide fog, among others. (Griffin at 7:30-37 (EX1003)). A fog effect includes
blending a fog color with another color based on a factor, such as a fog factor.
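For illustration only (not part of the record), a fog effect of the kind described above can be sketched as a linear blend of a fog color with another color, weighted by a fog factor; the exact blend formula is an assumption for this sketch.

```python
def apply_fog(color, fog_color, fog_factor):
    """Blend a fog color with another color based on a fog factor:
    a factor of 0.0 keeps the original color, 1.0 gives pure fog."""
    return tuple(c * (1.0 - fog_factor) + f * fog_factor
                 for c, f in zip(color, fog_color))

# Example: red half-blended toward a gray fog color
fogged = apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 0.5)
```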
Griffin further discloses blending to provide shadows. (Id. (EX1005)).
Griffin's blending operations involve retrieving image data from memory, as
claimed. (Id. ¶ 93 (EX1005)). For example, Griffin discloses that these operations
include "accessing a texture map during a texture mapping operation, accessing a
shadow map during a shadowing operation, and accessing color and/or alpha data
during multi-pass blending operations." (Griffin at 17:1-7 (EX1003)). Griffin also
refers to the image data in memory as "textures" or "texture data." (Id. at 17:7-8
(EX1003)). A person of ordinary skill in the art would have understood that
Griffin's blending operations involve retrieving specific image data, such as the
claimed "second texture that is mapped to the selected pixel," from memory.
(Stark ¶ 93 (EX1005)). It would have been obvious to a person of ordinary skill in
the art to implement Griffin's blending teachings in Kuo for producing a color. (Id.
¶ 93 (EX1005)).
12.
Kuo relates to the field of computer graphics systems and more particularly
to the generation and processing of textures in computerized graphical images.
(Kuo at 1:6-9 (EX1002)). Kuo discloses that "the transmit controller 430 controls
when the texels within a memory block are further processed by the [texture
cache module] TCM 130." (Id. at 7:17-21 (EX1002)). Kuo therefore discloses a
texturing processor comprising a processing module, as claimed. (Stark ¶ 94
(EX1005)).
b)
(Stark ¶ 96 (EX1005)).
at least in order to avoid potentially overwhelming bandwidth requirements.
(Kuo at 1:39-43 (EX1002)) (Stark ¶ 96 (EX1005)).
Kuo does not explicitly state that its memory module 140 stores operating
instructions that, when executed by the processing module, cause the processing
module to perform functions, as claimed. One skilled in the art would understand
that Kuo's system would have this feature, at least based on the teachings of
Griffin. (Id. ¶ 97 (EX1005)). Griffin discloses that "shared memory [216] stores
image data and image processing commands on the image processing board 174."
(Griffin at 9:57-58 (EX1003)).
Griffin further explains that rendering "commands are stored in main memory
buffers and DMAed (Direct Memory
Accessed) to the image processing board over a PCI bus. The rendering
commands are then buffered in the shared memory 216 (FIG. 4A) until needed by
the DSP. The rendering commands are read by the tiler 200 (FIG. 4A) when it is
ready to perform image processing operations." (Id. at 28:1-7 (EX1003)). Griffin
therefore explicitly discloses that the tiler, which is a processing module, reads
primitive data and rendering instructions from the shared memory system 216.
(Id. at 27:52-53 (EX1003)) (Stark ¶ 97 (EX1005)). It would have been obvious to
a person of ordinary skill in the art that Kuo would act in a similar manner, such
that its memory module 140 stores operating instructions that, when executed by
the processing module, cause the processing module to perform functions, as
claimed. (Id. ¶ 97 (EX1005)).
c)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Sections VII(A)(8)(b) and VII(A)(8)(c). (Id. ¶ 98
(EX1005)).
d)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(8)(d). (Id. ¶ 99 (EX1005)).
e)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(8)(e). (Id. ¶ 100 (EX1005)).
f)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(8)(f). (Id. ¶ 101 (EX1005)).
g)
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(8)(g). (Id. ¶ 102 (EX1005)).
13.
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Sections VII(A)(9) and VII(A)(12)(b). (Id. ¶ 103
(EX1005)).
14.
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(10). (Id. ¶ 104 (EX1005)).
15.
The combination of Kuo and Griffin teaches this limitation at least for the
reasons discussed above in Section VII(A)(11). (Id. ¶ 105 (EX1005)).
VIII. CONCLUSION
Based on the foregoing, the challenged claims of the '428 Patent recite
subject matter that is unpatentable. The Petitioner requests institution of an inter
partes review to cancel these claims.
Respectfully Submitted,
/David L. Cavanaugh/
David L. Cavanaugh
Registration No. 36,476
Jonathan Stroud
Registration No. 72,518
Daniel V. Williams
Registration No. 45,221
Table of Exhibits for U.S. Patent 6,339,428 Petition for Inter Partes Review
Exhibit   Description
1001      U.S. Patent 6,339,428
1002      U.S. Pat. No. 6,011,565 ("Kuo") (filed on April 9, 1998; published on January 4, 2000)
1003      U.S. Pat. No. 5,880,737 ("Griffin") (filed on June 27, 1996; published on March 9, 1999)
1004      "Talisman: Commodity Realtime 3D Graphics for the PC," by Jay Torborg and James T. Kajiya (1996)
1005      Declaration of Daniel Stark
1006      File History, Application (7/16/1999)
1007      File History, Notice of Allowability (10/1/2001)
1008      Petitioner's Voluntary Interrogatory Responses
CERTIFICATE UNDER 37 C.F.R. § 42.24(d)
Under the provisions of 37 C.F.R. § 42.24(d), the undersigned hereby certifies
that the word count for the foregoing Petition for Inter Partes Review totals 9,100
words, which is less than the 14,000 words allowed under 37 C.F.R. § 42.24(a)(i).
Respectfully submitted,
/Daniel V. Williams/
Daniel V. Williams
Reg. No. 45,221
CERTIFICATE OF SERVICE
I hereby certify that on July 8, 2016, I caused a true and correct copy of the
foregoing materials:
Petition for Inter Partes Review of U.S. Patent No. 6,339,428 Under 35
U.S.C. § 312 and 37 C.F.R. § 42.104
Exhibit List
Exhibits for Petition for Inter Partes Review of U.S. Patent No. 6,339,428
(EX1001-1008)
Power of Attorney
Fee Authorization
Word Count Certification Under 37 C.F.R. § 42.24(d)
to be served via Express Mail on the following correspondent of record as listed on
PAIR:
MARKISON & RECKAMP, PC
PO BOX 06229
WACKER DRIVE
CHICAGO IL 60606-0029
/Daniel V. Williams/
Daniel V. Williams