
IPR2016-01643

U.S. Patent No. 6,775,745


Filed on behalf of Unified Patents Inc.
By:
P. Andrew Riley
James D. Stein
Finnegan, Henderson,
Farabow, Garrett & Dunner,
L.L.P.
901 New York Avenue, NW
Washington, DC 20001-4413
Telephone: 202-408-4266
Facsimile: 202-408-4400

Jonathan Stroud
Unified Patents Inc.
1875 Connecticut Ave. NW, Floor 10
Washington, D.C., 20009
Telephone: 202-805-8931
Email: IV745-IPR@finnegan.com

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

UNIFIED PATENTS INC.,


Petitioner
v.
INTELLECTUAL VENTURES I LLC,
Patent Owner
Case IPR2016-01643
U.S. Patent No. 6,775,745 B1
METHOD AND APPARATUS FOR
HYBRID DATA CACHING MECHANISM
PETITION FOR INTER PARTES REVIEW

TABLE OF CONTENTS

I. Introduction
II. Mandatory Notices Under 37 C.F.R. § 42.8
 A. Real Parties-in-Interest
 B. Related Matters
 C. Lead and Backup Counsel Information
III. Fee Payment
IV. Statement of Precise Relief Requested for Each Claim Challenged
 A. Claims for Which Review is Requested
 B. Statutory Grounds of Challenge
V. The '745 Patent
 A. Overview of the Disclosure
 B. Prosecution History
 C. Level of Ordinary Skill in the Art
 D. Challenged Claims
VI. Claim Construction
 A. "Reading Extended Segment(s) of Data": Challenged Claims 1, 2, 4, 11, 12, and 14
 B. "Files": All Challenged Claims
 C. "Frequency Factor(s)"
 D. "Self Tuning": Claims 7, 13, and 16
VII. Statement of Relief Requested: 37 C.F.R. § 42.104(b) (Grounds 1–6)
 A. Ground 1: Karedla anticipates challenged claims 1, 3, 4, 6–9, and 11–17
  1. Overview of Karedla
  2. Karedla anticipates the subject matter of illustrative claim 4
  3. Karedla discloses the subject matter of independent claims 1, 8, 12, and 15
  4. Karedla discloses decrementing a frequency factor and self tuning as recited in dependent claims 3, 7, 13, and 16
  5. Dependent claims 6, 9, 11, 14, and 17
 B. Ground 2: Burton and Karedla render claims 1, 4, 6, 9, 11, 12, 15, and 17 as being obvious under pre-AIA 35 U.S.C. § 103(a)
  1. Overview of Burton
  2. Burton and Karedla render obvious the subject matter of claim 4
  3. A POSA would have been motivated to combine Burton and Karedla
  4. Burton further discloses the subject matter of independent claims 1, 8, 12, and 15
 C. Ground 3: Burton anticipates claims 8 and 9 under pre-AIA 35 U.S.C. § 102(e)
 D. Ground 4: Karedla and the Robinson FBR algorithm render claims 3, 6, 7, 13, 15, and 16 as being obvious under pre-AIA 35 U.S.C. § 103(a)
  1. Overview of the Robinson FBR algorithm as disclosed by Robinson '885 and the Robinson Article
  2. The Robinson FBR algorithm discloses each and every element of the challenged claims
  3. A POSA would have been motivated to combine the disclosure of the Robinson FBR algorithm as disclosed in Robinson '885 and the Robinson Article with Karedla
 E. Grounds 5 and 6: Claim 2 is unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) by either (i) Karedla and Otterness, or (ii) Burton, Karedla, and Otterness
  1. Overview
  2. Claim 2 is unpatentable as obvious
  3. A POSA would have been motivated to combine Otterness, Karedla, and Burton
VIII. Conclusion

I. Introduction
Unified Patents Inc. ("Unified" or "Petitioner") requests inter partes review of claims 1–4, 6–9, and 11–17 of U.S. Patent No. 6,775,745 (the "'745 patent") (EX1001), now purportedly assigned to Intellectual Ventures I LLC ("IV" or "Patent Owner"), in accordance with 35 U.S.C. §§ 311–319 and 37 C.F.R. § 42.100 et seq. The '745 patent broadly claims methods and apparatus for caching data using an algorithm. It describes organizing data using a frequency-based LRU (least recently used) replacement scheme while caching large reads of data. However, caching methods using a combination of various frequency-based LRU replacement schemes and large reads of data were known before the earliest-claimed priority date of the '745 patent. This petition establishes that the challenged claims are unpatentable.
II. Mandatory Notices Under 37 C.F.R. § 42.8

A. Real Parties-in-Interest

Pursuant to 37 C.F.R. § 42.8(b)(1), Petitioner certifies that Unified is the real party-in-interest, and further certifies that no other party exercised control or could exercise control over Unified's participation in this proceeding, the filing of this petition, or the conduct of any ensuing trial. In this regard, Unified has submitted voluntary discovery. See EX1010, Petitioner's Voluntary Interrogatory Responses.

B. Related Matters

Upon information and belief, the '745 patent was asserted in these cases:
1. Intellectual Ventures I, LLC; Intellectual Ventures II, LLC v. NetApp, Inc., No. 1:16-cv-10868-IT (D. Mass.); and

2. Intellectual Ventures I, LLC; Intellectual Ventures II, LLC v. Lenovo Grp. Ltd.; Lenovo (United States) Inc.; LenovoEMC Products USA, LLC; EMC Corp., No. 1:16-cv-10860-IT (D. Mass.).
C. Lead and Backup Counsel Information

The signature block of this petition designates lead counsel, backup counsel, and service information. Petitioner designates P. Andrew Riley (Reg. No. 66,290) as lead counsel and designates James D. Stein (Reg. No. 63,782) as backup counsel. All can be reached at Finnegan, Henderson, Farabow, Garrett & Dunner, LLP, 901 New York Avenue, NW, Washington, DC 20001-4413 (phone: 202-408-4000; fax: 202-408-4400). Unified also designates as backup counsel Jonathan Stroud (Reg. No. 72,518). Petitioner consents to e-mail service at IV745-IPR@finnegan.com.
III. Fee Payment

The required fees are submitted under 37 C.F.R. §§ 42.103(a) and 42.15(a).

If any additional fees are due during this proceeding, the Office may charge such
fees to Deposit Account No. 50-6990.

IV. Statement of Precise Relief Requested for Each Claim Challenged

A. Claims for Which Review is Requested

Petitioner requests review under 35 U.S.C. § 311 of claims 1–4, 6–9, and 11–17 of the '745 patent and cancellation of those claims as unpatentable.
B. Statutory Grounds of Challenge

This Petition presents the following grounds:

Ground 1: Claims 1, 3, 4, 6–9, and 11–17 are anticipated under pre-AIA 35 U.S.C. § 102(b) by Ramakrishna Karedla et al., Caching Strategies to Improve Disk System Performance, Computer, vol. 27, no. 3, pp. 38–46, March 1994 ("Karedla," EX1004).

Ground 2: Claims 1, 4, 6–9, and 11–17 are unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) by U.S. Patent No. 6,738,865 ("Burton," EX1006) in view of Karedla.

Ground 3: Claims 8 and 9 are anticipated under pre-AIA 35 U.S.C. § 102(e) by Burton.

Ground 4: Claims 3, 6, 7, 13, 15, and 16 are unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) by Karedla in view of the Robinson FBR algorithm as disclosed in U.S. Patent No. 5,043,885 ("Robinson '885," EX1007).

Ground 5: Claim 2 is unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) by Karedla in view of U.S. Patent No. 6,460,122 ("Otterness," EX1008).

Ground 6: Claim 2 is unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) by Burton in view of Karedla and Otterness.

The '745 patent claims priority to U.S. Patent Application Number 09/949,300, which was filed on September 7, 2001. The '745 patent does not claim priority to any earlier date.
Karedla is a printed publication that was published and publicly available at least by March 1994, before the effective date of the '745 patent. In particular, Karedla was indexed and available to the public in the United States Library of Congress by March 22, 1994. EX1012 ¶ 3; EX1013. Thus, Karedla qualifies as prior art under pre-AIA § 102(b).
Burton is a United States patent stemming from an application filed on June 9, 2000, before the effective date of the '745 patent. Thus, Burton qualifies as prior art under at least pre-AIA § 102(e).
Karedla details a frequency-based LRU replacement algorithm discussed in an article by J.T. Robinson titled "Data Cache Management Using Frequency-Based Replacement." See EX1004, 40 n.4 (citing John T. Robinson et al., Data Cache Management Using Frequency-Based Replacement, Performance Evaluation Rev., vol. 18, no. 1, at 134–142, May 1990 ("Robinson Article," EX1005)); see EX1002 ¶ 75. The Robinson Article is a printed publication that was published and publicly available at least by May 1990, before the effective date of the '745 patent. See EX1002 ¶ 41. The Robinson Article was indexed and available to the public in the United States Library of Congress by June 1, 1992. EX1012 ¶ 4; EX1014.
The subject of the Robinson Article (the "Robinson FBR replacement algorithm") is disclosed in U.S. Patent No. 5,043,885 (EX1007, "Robinson '885"), issued to J.T. Robinson on August 27, 1991. Robinson '885 qualifies as prior art under pre-AIA § 102(b) because it is a United States patent that issued more than a year before September 7, 2001.

Finally, Otterness was filed on September 30, 1999, so Otterness qualifies as prior art under pre-AIA § 102(e).
V. The '745 Patent

A. Overview of the Disclosure

The '745 patent describes methods and apparatus to implement an abstract algorithm for data management, here a self-described "hybrid caching mechanism." EX1001, 4:25. This hybrid caching mechanism, an admitted variation on prior art algorithms, replaces cache data based on a most recently used (MRU) and least recently used (LRU) list, by considering frequency of use as an additional factor.
Frequency factors (highlighted in yellow) are assigned to each of the files in the cache. Id. at 5:66. The frequency factor, in combination with the most recently used-least recently used (MRU-LRU) mechanism, identifies a least frequently used (LFU) file which has not been recently used. Id. at 4:39–42. In the annotated Figure 2A, file F6 134 "[is] the LRU file with the lowest frequency factor," i.e., the least frequently used (LFU) file which has been least recently used. Id. at 6:47–49.

The LFU file is discarded when the cache needs additional capacity to store files. Id. at 4:43–44. According to the '745 patent, using the frequency factor in combination with MRU-LRU caching, "data that gets used heavily, with intervening periods of non-use, is not discarded." Id. at 10:36.
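The mechanism described above (frequency factors layered on an MRU-LRU list, with the least frequently and least recently used file discarded when capacity is needed) can be illustrated with a short sketch. This is a hypothetical simplification for illustration only, not the '745 patent's implementation; the class name, tie-breaking rule, and capacity handling are all assumptions:

```python
from collections import OrderedDict

class HybridCache:
    """Toy frequency-plus-recency cache (illustrative sketch only).

    Files are kept in recency order (LRU end first, MRU end last),
    each with a frequency factor. Eviction removes the entry with the
    lowest frequency factor, scanning from the LRU end and sparing
    the MRU entry itself.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.files = OrderedDict()  # name -> frequency factor

    def access(self, name):
        if name in self.files:
            self.files[name] += 1          # bump the frequency factor
            self.files.move_to_end(name)   # mark as most recently used
            return
        if len(self.files) >= self.capacity:
            self._evict()
        self.files[name] = 1               # new file starts at factor 1

    def _evict(self):
        # Consider every entry except the MRU one; ties resolve toward
        # the LRU end because min() keeps the first minimum it sees.
        entries = list(self.files.items())
        candidates = entries[:-1] or entries
        victim = min(candidates, key=lambda kv: kv[1])[0]
        del self.files[victim]
```

Under this sketch, a file accessed heavily and then left idle keeps its high frequency factor, so it survives eviction even from the LRU end, mirroring the behavior the specification attributes to the hybrid mechanism.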
The exemplary method of caching data by the hybrid caching mechanism is
shown in Figure 4, as reproduced below:


In addition to the caching mechanism, the '745 patent describes performing "large reads," id. at 3:51, and caching the large reads, id. at 4:53–54. According to the '745 patent, large reads are reads of "extended segments of data." Id. at 10:37–38; 11:54–55 ("[T]he extended read may be a read of 32 Kbytes or larger."). For example, the read operation 172 of FIG. 4 may be reads of an extended segment of the directory, id. at 11:18, of an extended segment of the FAT, id. at 11:30, or of an extended segment of the data file, id. at 11:37.
The '745 patent explains that large reads are advantageous because the data "will be present in the cache in order to minimize seeks and reads from the hard drive." Id. at 3:50–52. This is in contrast to small reads, which result because "there is no way that the OS knows ahead of time which piece of which program/data will be needed." Id. at 1:63–65. As discussed below, cache mechanisms commonly combine caching with read-ahead strategies and prefetching to improve disk system performance. See EX1004 at 40 ("Examples include file system caches that prefetch the rest of a file that is being read. Optical disk drives and CD-ROMs commonly employ large read-ahead caches."); see EX1002 ¶ 42.
The concept of reading extended segments of data is similar to prior art cache strategies of reading ahead (or prefetching) data to read additional data related to the requested data. According to the '745 patent, the frequency of use and MRU-LRU, combined with the extended reads, "provide[] superior caching." Id. at 10:23.
B. Prosecution History

The patent examiner, in an Office Action dated August 25, 2003, rejected all of the independent claims under 35 U.S.C. § 103(a) as being unpatentable over an article written by Donghee Lee et al., titled "Implementation and Performance Evaluation of the LRFU Replacement Policy," in view of U.S. Patent Application Number 2002/0078300 A1 to Dharap. EX1003, 73. According to the Examiner, the caching method known as the LRFU (Least Recently/Frequently Used) policy was known in the art. Id. at 74, 75. The applicant never disputed this. See id. at 96. The examiner, instead, found that certain limitations recited in the dependent claims may not have been obvious in view of the prior art of record. Those limitations, among others, included "reading an extended segment of data," "wherein the frequency factor corresponding to the MRU is not considered," and "scanning from a frequency factor corresponding to a LRU file to a frequency factor corresponding to a MRU file, not including the frequency factor corresponding to the MRU file." Id. at 77.
In an Amendment filed September 29, 2003, the applicant amended the independent claims to incorporate the above-noted limitations indicated as allowable by the Examiner. Id. at 96. In particular, some of the newly amended independent claims incorporated the feature of reading "extended segments of data" (e.g., challenged claims 1, 4, 12), while other amended claims incorporated the feature of scanning the frequency factors "without considering a frequency factor associated with a most recently used file" (e.g., challenged claim 8) or the feature of scanning the frequency factor for a LRU file to a frequency factor for a MRU file "not including the frequency factor for the MRU file" (e.g., challenged claim 15). Id. at 96; see EX1001, challenged dependent claims 11 and 14 (reading "extended segment(s) of data"); challenged dependent claim 6 (scanning "from a frequency factor corresponding to a LRU file to a frequency factor corresponding to a MRU file"). Subsequently, the Office allowed the pending claims without giving any additional reasons for allowance. Id. at 99–100.
But Karedla, Burton, and Robinson '885, which form the basis of the grounds presented in this Petition, directly address the limitations identified by the Examiner. None was presented during prosecution or considered by the Examiner on the record.
C. Level of Ordinary Skill in the Art

The factors defining the level of ordinary skill in the art include (1) the types of problems encountered in the art; (2) the prior art solutions to those problems; (3) the rapidity with which innovations are made; (4) the sophistication of the technology; and (5) the educational level of active workers in the field. See In re GPAC, 57 F.3d 1573, 1579 (Fed. Cir. 1995) (cited in M.P.E.P. § 2141.03). The '745 patent claims a priority date of September 7, 2001. At that time, a person having ordinary skill in the art (hereafter, "POSA") of data caching would have had (i) a B.S. degree in electrical engineering, computer engineering, computer science, or equivalent training, or (ii) approximately two years of experience or research related to computer systems. See EX1002 ¶ 44.
D. Challenged Claims

The '745 patent contains seventeen claims, of which claims 1, 4, 8, 12, and 15 are independent. Petitioner requests cancellation of claims 1–4, 6–9, and 11–17. Independent claim 4 illustrates the subject matter of the challenged claims:

4. A caching method for enhancing system performance
of a computer, comprising:
reading an extended segment of data in response to a
request from an operating system;
storing copies of files associated with the extended
segment in a cache;
assigning frequency factors to each of the files stored in
the cache, the frequency factors indicating how
often each of the corresponding files are requested
by the operating system;
scanning the frequency factors, the scanning being
performed in response to a target capacity of the
cache being attained;
identifying a least frequently and least recently used file;
and
eliminating the least frequently and least recently used
file to liberate capacity of the cache.
EX1001, 12:54–13:4 (claim 4). As discussed above, independent claims 1 and 12 similarly incorporated the limitation of "reading an extended segment of data" from claim 4. Id. at 12:33, 13:46; see EX1002 ¶ 65. Thus, claims 1 and 12 are substantially identical to claim 4. See EX1002 ¶ 65. Notably, independent claim 1 does not contain the "scanning" step of claim 4 and further recites the cache being located in a random access memory (RAM). Compare EX1001, 12:31–47 (claim 1) with 12:54–13:4 (claim 4); see EX1002 ¶ 66. Independent claim 12 is a computer-readable medium claim that recites the same substantive limitations as claim 4 but rewritten from the perspective of a method performed by a computer. Compare EX1001, 13:42–14:15 (claim 12) with 12:54–13:4 (claim 4). Independent claim 12 also recites the cache being located in a random access memory. Id. at 14:12; see EX1002 ¶ 67.
Independent claim 8, while substantively identical to claim 4, incorporates the limitation of scanning the frequency factors "without considering a frequency factor corresponding to a most recently used file." EX1001, 13:29–30; see EX1002 ¶ 67. Notably, independent claim 8 is broader in that it generally recites reading the files without reading extended segments of data. Compare EX1001, 13:18 with 12:56; see EX1002 ¶ 68.
Independent claim 15 is an apparatus claim, and it generally contains features similar to independent claim 4. See EX1002 ¶ 69. However, as noted above, claim 15 incorporates the limitation of "scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file." EX1001, 14:39–42; see EX1002 ¶ 69.
The remaining dependent claims add various limitations, summarized below:

Limitations related to extended segments:
- "wherein the extended segments are one of 64 Kbytes, 128 Kbytes and 256 Kbytes in size": claim 2, which depends from independent claim 1. EX1001, 12:48–50.
- "reading extended segment(s) of data": claim 11, which depends from independent claim 8. Id. at 13:41.
- "program instructions for allowing a user to adjust a size of the extended segment that is read": claim 14, which depends from independent claim 12. Id. at 14:23–24.

Limitations related to self tuning:
- "providing a driver for decrementing a frequency factor after a time period of non-use of a corresponding file": claim 3, which depends from independent claim 1. Id. at 12:52–53.
- "wherein the caching method is self tuning": claims 7, 13, and 16, which respectively depend from independent claims 4, 12, and 15. Id. at 13:13, 14:19–20, 14:44.

Other limitations:
- "scanning from a frequency factor corresponding to a LRU file to a frequency factor corresponding to a MRU file": claim 6, which depends from independent claim 4. Id. at 13:10–11.
- "wherein the storage medium is a hard drive": claim 9, which depends from independent claim 8. Id. at 13:33–34.
- "wherein the least frequently and least recently used file is discarded to free cache capacity": claim 17, which depends from independent claim 15. Id. at 14:45–47.

See EX1002 ¶ 70.
VI. Claim Construction

A claim in an unexpired patent subject to inter partes review "shall be given its broadest reasonable construction in light of the specification of the patent in which it appears." 37 C.F.R. § 42.100(b); Cuozzo Speed Techs., LLC v. Lee, 136 S. Ct. 2131, 2145 (2016). Unified proposes the following constructions.
A. "Reading Extended Segment(s) of Data": Challenged Claims 1, 2, 4, 11, 12, and 14

The above-identified challenged claims recite reading "extended segment(s) of data." EX1001, 12:33; 12:48–50; 12:56; 13:41; 14:23–24. The '745 patent specification defines reading an "extended segment of data" as synonymous with performing "large reads" and uses these two terms interchangeably. Id. at 10:37–38; see generally '745 patent; EX1002 ¶ 50. The specification further explains that, in contrast to performing large reads, many operating systems "read[] just the amount of data requested," resulting in reading "a small amount of data" and "on an as needed basis." Id. at 1:42, 1:44, 1:65; see EX1002 ¶ 50.
Large reads, however, allow "additional sectors which are logically contiguous with the data just read" to be cached into memory. EX1001, 5:43–44; see EX1002 ¶ 50. For example, FIG. 5 of the '745 patent shows a flowchart describing the operation of reading files in extended segments:


EX1001, Fig. 5; see EX1002 ¶ 51. The specification explains that read operation 222, 226, 230 reads the first block of the requested data and an additional number of kilobytes of data after. EX1001, 11:22–24 ("In one embodiment, when the first block of the file is read an additional 64, 128 or 256 Kbytes of data are read with it."); 11:29–33 ("The method then advances to operation 226 where an extended segment of the FAT is read. Similar to operation 222, when the first cluster is read an additional 64, 128 or 256 Kbytes comprising a chain of clusters are read with the first cluster."); 11:36–40 ("Next, in operation 230, an extended segment of the data file is read. Here, when the first cluster is read, an additional 64, 128 or 256 Kbytes approximately, of data are included in the read in one embodiment."); see EX1002 ¶ 51.
For these reasons, reading "extended segment(s) of data" should be construed to mean "reading requested block and additional blocks of data." See EX1002 ¶ 52.
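As a rough illustration of the proposed construction (reading the requested block plus additional blocks of data), the sketch below reads a requested block and a fixed extension of logically contiguous data after it. The block size and the 64-Kbyte extension are illustrative assumptions drawn from one disclosed embodiment, not a definitive implementation:

```python
BLOCK_SIZE = 4096        # assumed block size, for illustration only
EXTENSION = 64 * 1024    # extra 64 Kbytes read per one disclosed embodiment

def extended_read(storage: bytes, offset: int) -> bytes:
    """Return the requested block plus logically contiguous data after it.

    A plain (non-extended) read would return only storage[offset :
    offset + BLOCK_SIZE]; the extended read pulls in EXTENSION more
    bytes so later requests can be served from the cache.
    """
    return storage[offset : offset + BLOCK_SIZE + EXTENSION]
```

A caller would then store the files covered by this extended segment in the cache, so subsequent requests for nearby data avoid additional seeks to the hard drive.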
B. "Files": All Challenged Claims

The '745 patent specification defines the term "file" as referring to "a number of blocks of data," i.e., any group of blocks of data. EX1001, 5:61–62; 10:35 (same); 10:52–53 (same); see EX1002 ¶ 54. Thus, "files" should be construed to mean "a group of blocks of data." EX1002 ¶ 55.
C. "Frequency Factor(s)"

The '745 patent explains that the frequency factors indicate "how often each of the corresponding files is accessed by the operating system." EX1001, 2:46–48; see EX1002 ¶ 57. For example, a frequency factor is assigned to each of the copied files, where "the frequency factor reflects how often each of the copied files is accessed." EX1001, 2:59–61; 10:45–47 (noting that frequency factors also indicate "how often each of the corresponding files are requested by the operating system"); see EX1002 ¶ 57.
Further, according to the '745 patent, the frequency factors could be weighted based on other considerations, for example, by "weighting certain files with frequency factors on their initial read into the cache to guarantee they remain above others." EX1001, 7:8–10; see EX1002 ¶ 58. The specification explains that in one embodiment, "the frequency factors for reading directory and FAT data would be weighted heavier, i.e., the frequency factor would be incremented by a factor of more than one for each time the directory or FAT data is accessed." EX1001, 8:67–9:3; see EX1002 ¶ 58. Weighting the files allows the "heavier weighted data [to be] kept in the cache longer." EX1001, 9:45; see EX1002 ¶ 58.

Thus, the term "frequency factor" should be construed to mean "an indicator for distinguishing data based in part on its frequency of use." See EX1002 ¶ 59.
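The weighting described above (incrementing directory or FAT frequency factors by more than one per access, so heavier-weighted data stays in the cache longer) can be sketched as follows; the specific weights and category names are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical weights: directory and FAT data increment by more than
# one per access, so they outrank ordinary data files in the cache.
WEIGHTS = {"directory": 3, "fat": 3, "data": 1}

def bump_frequency(factors: dict, name: str, kind: str = "data") -> None:
    """Increment a file's frequency factor by its category weight."""
    factors[name] = factors.get(name, 0) + WEIGHTS[kind]
```

After a few accesses, directory entries accumulate larger factors than equally accessed data files, which keeps them ahead during the eviction scan and thus cached longer.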
D. "Self Tuning": Claims 7, 13, and 16

The '745 patent provides a list of exemplary features for adjusting the caching mechanism. For example, those features include:
[1] how often to decrement the factors, [2] how far back
to search towards the LRU file, [3] how far forward to
search from the LRU file, [4] how much to decrement the
factors by, [5] scheduling the decrementing during
periods of low disk activity, [6] weighting certain files
with frequency factors on their initial read into the cache
to guarantee they remain above others, [7] changing the
size of the reads, and [8] changing the amount of RAM
that the cache can use, etc.

EX1001, 7:4–11. This passage suggests that the self-tuning process allows the caching mechanism to adjust such parameters without user input. Compare id. at 7:11–13 ("In one embodiment, the user is provided an application to customize the tuning to their particular situation.") with id. at 7:13–15 ("In another embodiment, heuristics are built into a driver for self-tuning the caching mechanism."); see EX1002 ¶ 62.
Accordingly, "self tuning" should be construed to mean "changing a parameter in the caching mechanism without direct user input." See EX1002 ¶ 63.
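Under this construction, a self-tuning mechanism adjusts its own parameters (such as read size or the decrement schedule) from observed behavior rather than user input. The sketch below shows one hypothetical heuristic; the threshold, parameter names, and halving rule are assumptions for illustration, not the patent's driver logic:

```python
def self_tune(params: dict, hit_rate: float) -> dict:
    """Adjust caching parameters from the observed hit rate, with no
    user input. A hypothetical heuristic: on poor locality, read less
    ahead and decrement frequency factors more often.
    """
    tuned = dict(params)
    if hit_rate < 0.5:
        tuned["read_size_kb"] = max(64, tuned["read_size_kb"] // 2)
        tuned["decrement_interval_s"] = max(1, tuned["decrement_interval_s"] // 2)
    return tuned
```

The defining property under the proposed construction is that the parameter change is driven by built-in heuristics (here, the hit-rate check) rather than by a user-facing tuning application.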
VII. Statement of Relief Requested: 37 C.F.R. § 42.104(b) (Grounds 1–6)
A. Ground 1: Karedla anticipates challenged claims 1, 3, 4, 6–9, and 11–17

As discussed below, Karedla anticipates claims 1, 3, 4, 6–9, and 11–17 of the '745 patent.
1. Overview of Karedla

Karedla is a printed publication publicly available as of March 1994, before the '745 patent's earliest priority date of September 7, 2001. EX1002 ¶ 42. Karedla was indexed and publicly available in the Library of Congress by March 22, 1994. EX1012 ¶ 3; EX1004, 23. The Library of Congress Control Number for this journal is LCCN 77008779. EX1012 ¶ 3; EX1013. Accordingly, Karedla is prior art to the '745 patent under pre-AIA 35 U.S.C. § 102(b).

Karedla describes the use of caching as a means to "increase system response time and improve the data throughput" of the disk subsystem. EX1004, 38. In particular, Karedla describes some popular caching strategies and cache replacement algorithms known to increase caching performance. Id. According to Karedla, in order to reduce the number of accesses to a disk drive, caches exploit the principles of spatial and temporal locality of reference. Id. at 39. Spatial locality implies that "if an object is referenced, then nearby objects will also soon be accessed." Id. Examples of caching mechanisms that exploit the principle of spatial locality include reading ahead, prefetching, larger cache line sizes, etc. See EX1002 ¶ 73. Temporal locality implies that "a referenced object will tend to be referenced again in the near future." EX1004, 39. Examples of caching mechanisms that exploit the principle of temporal locality include the LRU replacement algorithms, LFU replacement algorithms, frequency-based LRU replacement (FBR) algorithms, and segmented LRU (SLRU) replacement algorithms. EX1004, 39, 43; see EX1002 ¶ 73.
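Karedla's read-ahead exploitation of spatial locality, fetching blocks near a referenced block before they are requested, can be sketched as follows; the prefetch depth and the dictionary cache representation are illustrative assumptions, not Karedla's implementation:

```python
def read_with_prefetch(cache: dict, backing: list, index: int, ahead: int = 4):
    """Serve a block, prefetching on a miss (illustrative sketch).

    On a cache miss, fetch the requested block and the next `ahead`
    blocks from the backing store, since spatial locality suggests
    nearby blocks will soon be accessed. On a hit, serve from cache.
    """
    if index not in cache:
        end = min(index + 1 + ahead, len(backing))
        for i in range(index, end):
            cache[i] = backing[i]  # prefetch the neighborhood
    return cache[index]
```

With sequential requests, only the first access in each neighborhood touches the backing store; the following `ahead` requests are cache hits, which is why Karedla notes read-ahead is particularly effective for sequential workloads.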
2. Karedla anticipates the subject matter of illustrative claim 4.

a. Karedla discloses "[a] caching method for enhancing system performance of a computer"

Karedla discloses a caching method for enhancing system performance, as claimed. EX1001, 12:54–55; see EX1002 ¶ 75. In particular, Karedla discloses the use of various caching strategies and cache replacement algorithms to improve the computer system. See EX1002 ¶ 76. For example, Karedla describes a caching strategy that "offers the performance of caches twice its size." EX1004, 38; see EX1002 ¶ 76.
b. Karedla discloses "reading an extended segment of data in response to a request from an operating system"

Karedla discloses exploiting the principle of spatial locality to improve the caching mechanism. See EX1002 ¶ 79. For example, Karedla discloses two common cache strategies: (1) reading ahead and (2) providing large cache line sizes. See EX1004, 40; see EX1002 ¶ 79. Both of these cache strategies exploit the principle that, "if [a block of data] is referenced, then nearby [blocks of data] will also soon be accessed." EX1004, 39; see EX1002 ¶ 79. As explained in Karedla, "[a] read-ahead strategy known as prefetching exploits the principle of spatial locality by anticipating future requests for data and bringing it into the cache." EX1004, 40; EX1002 ¶ 80. Thus, Karedla's reading ahead allows the caches to "prefetch the rest of a file that is being read," and is particularly effective when read requests are sequential. EX1004, 40; see EX1002 ¶ 80. Karedla further explains that having large cache line sizes in large caches results in a lower miss rate, faster cache directory searches, and shorter cache lookup time. EX1004, 40; see EX1002 ¶ 81. Indeed, Karedla explains that "[replacement] algorithms generally are combinations of LRU, LFU, and read-ahead strategies, with varying thresholds on the amount of data that is prefetched." EX1004, 40 (emphasis added); see EX1002 ¶ 82.

Thus, by reading ahead to prefetch additional blocks of data and storing the data in larger cache lines, Karedla discloses "reading extended segments of data," as recited in the claims. EX1001, 12:56; see EX1002 ¶ 82.
Further, Karedla discloses that the read operation must occur in response to an input/output (I/O) request. For example, "[a]n I/O request to a storage device, especially a read request, searches the cache first." EX1004, 39; see EX1002, ¶78. But if any of its data blocks are not found in the cache, the data blocks must be obtained from the backing store. EX1004, 39; see EX1002, ¶78. Thus, Karedla discloses that the read operation must occur in response to an I/O request from the operating system. See EX1002, ¶78.
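The read path described above (an I/O read request searches the cache first, and a read-ahead strategy prefetches nearby blocks on a miss) can be illustrated with a short sketch. The class name, block layout, and read-ahead depth below are hypothetical and are not drawn from Karedla or the '745 patent:

```python
# Illustrative sketch of a read-ahead cache servicing I/O read requests.
# Block numbering, segment length, and names are assumptions for illustration.

READ_AHEAD = 4  # blocks prefetched beyond the requested block (assumed value)

class ReadAheadCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # dict: block number -> data
        self.lines = {}                # cache lines keyed by block number

    def read(self, block):
        # An I/O read request searches the cache first.
        if block in self.lines:
            return self.lines[block]
        # On a miss, fetch an extended segment: the requested block plus
        # the next READ_AHEAD blocks, exploiting spatial locality.
        for b in range(block, block + READ_AHEAD + 1):
            if b in self.backing:
                self.lines[b] = self.backing[b]
        return self.lines[block]

store = {n: f"data-{n}" for n in range(16)}
cache = ReadAheadCache(store)
cache.read(0)                      # miss: blocks 0-4 staged into the cache
assert 4 in cache.lines            # nearby blocks were prefetched
assert cache.read(3) == "data-3"   # a sequential read now hits the cache
```

Because the extended segment is staged on the first miss, subsequent sequential reads are serviced at memory speed, which is the benefit the read-ahead strategy targets.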
c. Karedla discloses storing "copies of files associated with the extended segment in a cache"

Karedla discloses storing copies of files associated with the extended segment in a cache, as claimed. EX1001, 12:59–60; see EX1002, ¶83. For example, Karedla discloses that caches keep "copies of backing store data" so that the cache "can service some requests at faster memory speeds." EX1004, 39; see EX1002, ¶83. Thus, in Karedla, copies of data from the disk drive are stored in the cache memory as claimed. See EX1002, ¶83.

d. Karedla discloses assigning "frequency factors to each of the files stored in the cache, the frequency factors indicating how often each of the corresponding files are requested by the operating system"

Karedla discloses assigning "frequency factors to each of the files stored in the cache, the frequency factors indicating how often each of the corresponding files are requested by the operating system," as claimed. EX1001, 12:61–64; see EX1002, ¶84. Karedla discloses that frequency of use may be a factor used in many replacement algorithms. See EX1002, ¶84. For example, Karedla explains that the LFU algorithm typically maintains "a frequency-of-access count for all its lines." EX1004, 40; see EX1002, ¶84. Karedla explains that, in the Robinson FBR algorithm, a reference count related to reference frequency is used. EX1004, 40; accord EX1005, 135 (adjusting reference counts so as to approximate reference frequencies); EX1007, Abstract (noting that the reference count is related to the number of times each block in the cache has been referenced); see EX1002, ¶84.¹ Thus, Karedla discloses assigning frequency factors to each of the files stored in the cache, the frequency factors indicating how often each of the corresponding files is requested. See EX1002, ¶86.

¹ In addition, Karedla discloses that the SLRU algorithm uses a 1-bit flag as a frequency factor in each cache line. EX1004, 43; see EX1002, ¶85. The 1-bit flag reflects how frequently each file is accessed by using the values PROT and PROB to distinguish between lines that have been accessed only once and lines that have been accessed multiple times. EX1004, 43; see EX1002, ¶85.
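The frequency-of-access bookkeeping described above can be sketched as follows; the names and workload are hypothetical, and the point is only the LFU-style per-item count:

```python
# Minimal sketch of LFU-style bookkeeping: each cached file carries a
# frequency-of-access count, incremented on every request.
from collections import defaultdict

class LFUCounts:
    def __init__(self):
        self.counts = defaultdict(int)   # file name -> frequency factor

    def on_request(self, name):
        self.counts[name] += 1           # bump the count on each request

    def least_frequent(self):
        # The file with the smallest count is the LFU candidate.
        return min(self.counts, key=self.counts.get)

c = LFUCounts()
for name in ["a", "b", "a", "c", "a", "b"]:
    c.on_request(name)
assert c.counts["a"] == 3        # "a" was requested three times
assert c.least_frequent() == "c" # "c" was requested once
```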
e. Karedla discloses "scanning the frequency factors, the scanning being performed in response to a target capacity of the cache being attained"

Karedla discloses that the decision as to which cache line to replace is based on the specific replacement algorithm used. See EX1002, ¶87. In particular, Karedla explains that the cache replacement algorithm "decides which cache line to discard when replacement is required." EX1004, 39. "Cache replacement, also known as eviction, is the operation of discarding data from the cache to make room for new data." Id. A POSA reading Karedla would understand that Karedla discloses that cache replacement is performed in response to a target capacity of the cache being attained. EX1002, ¶88. For example, cache replacement is required to make room in the cache when the cache reaches full capacity, which corresponds to "a target capacity of the cache being attained," as claimed. Id.

Moreover, Karedla, in reviewing some of the more popular cache replacement algorithms, discusses the Robinson FBR algorithm, which improves on LRU by partitioning the LRU stack into three sections. EX1004, 40; see EX1002, ¶89. The lines not referenced frequently will "age[]" into the bottom part of the list by LRU replacement and will "finally [be] evicted" if not referenced. EX1004, 40; see EX1002, ¶89. In order to do this, the Robinson FBR algorithm

scans the LRU stack to find a block to replace. EX1007, 5:15–18; see EX1002, ¶90. For example, Robinson '885 teaches that "[f]inding a block to replace then consists of scanning the blocks (from LRU to MRU) in each such LRU chain (in ascending count order) until a block is found in the old section." EX1007, 5:15–18; see EX1002, ¶90. The Robinson Article similarly discusses an efficient method to find the least recently used block with the smallest reference count. EX1007, 5:15–18; see EX1002, ¶90. A POSA reading Karedla would understand that, to identify the LFU and LRU file, the cache mechanism must scan the frequency factors of the cache lines. See EX1002, ¶91. Thus, Karedla discloses scanning the frequency factors as recited. See EX1002, ¶91.
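The scan Robinson '885 describes, walking from the LRU end toward the MRU end and selecting the smallest reference count within the old section, can be sketched as follows (the section size and data are hypothetical):

```python
# Sketch of an FBR-style replacement scan: walk the LRU stack from the
# LRU end toward the MRU end and pick, within the "old" section only,
# the block with the smallest reference count; ties keep the block
# closer to the LRU end. Section length and contents are illustrative.

def find_replacement(stack, old_section_len):
    """stack[0] is the LRU end; each item is (block_id, ref_count)."""
    old = stack[:old_section_len]          # only old-section blocks are eligible
    best = None
    for block_id, count in old:            # LRU-to-MRU scan order
        if best is None or count < best[1]:
            best = (block_id, count)       # strict "<" keeps the more-LRU block on ties
    return best[0]

stack = [("w", 3), ("x", 1), ("y", 1), ("z", 5), ("m", 0)]
# old section = first 4 entries; the MRU block "m" is never considered
assert find_replacement(stack, 4) == "x"
```

Note that the MRU block is excluded by construction: the scan stops at the old-section boundary, so the most recently used block's count never influences the choice.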
f. Karedla discloses identifying "a least frequently and least recently used file"

Karedla discloses identifying "a least frequently and least recently used file," as recited. EX1001, 13:1. For example, Karedla explains that, in the Robinson FBR algorithm, the lines will eventually age into the bottom part of the list by LRU replacement and will be evicted if not referenced again. EX1004, 40; see EX1002, ¶93. The Robinson Article explains that "[t]he basis of our [FBR] algorithm is the LRU replacement algorithm, in which logically the cache consists of a stack of blocks, with the most recently referenced block always pushed on the top of the stack." EX1005, 135; see Figure 2.2 reproduced below.


Unlike LRU replacement, "the least recently used block will not necessarily be selected for replacement. . . . Instead, a reference count is maintained for each block in the cache, and in general the blocks with the smallest reference counts will be candidates for replacement." Id. at 135. "[T]he FBR algorithm can successfully distinguish frequently referenced file blocks from those less frequently referenced" in an LRU cache. Id. at 141. Thus, the evicted lines are the least frequently and least recently used file. Accord EX1004; EX1005; EX1007; see EX1002, ¶94.
In the SLRU (segmented-LRU) algorithm, the more frequently accessed lines are placed at the top of the MRU end of the protected segment. EX1004, 43, FIG. 3; see EX1002, ¶95.

the discarded lines are the "least frequently and least recently used file," as recited. EX1001, 13:1; see EX1002, ¶98.
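The SLRU behavior Karedla describes (a probationary segment for lines accessed once, a protected segment for lines accessed again, and discards taken only from the probationary LRU end) can be sketched as follows. The segment capacities and line names are hypothetical:

```python
# Sketch of an SLRU cache: PROB lines have been accessed once, PROT
# lines more than once; only the probationary LRU entry is discarded.
from collections import OrderedDict

class SLRU:
    def __init__(self, prob_cap=3, prot_cap=3):
        self.prob_cap = prob_cap
        self.prot_cap = prot_cap
        self.prob = OrderedDict()   # PROB: lines accessed only once
        self.prot = OrderedDict()   # PROT: lines accessed more than once

    def access(self, line):
        if line in self.prot:
            self.prot.move_to_end(line)            # refresh to MRU end
        elif line in self.prob:
            del self.prob[line]                    # second access: promote
            self.prot[line] = True
            if len(self.prot) > self.prot_cap:     # overflow demotes the
                demoted, _ = self.prot.popitem(last=False)  # protected LRU line
                self.prob[demoted] = True          # back to probationary
        else:
            self.prob[line] = True                 # new line enters probationary
            if len(self.prob) > self.prob_cap:
                self.prob.popitem(last=False)      # discard probationary LRU only

s = SLRU()
for line in ["a", "b", "a", "c", "d"]:
    s.access(line)
assert "a" in s.prot                      # accessed twice: protected
assert list(s.prob) == ["b", "c", "d"]    # accessed once: probationary
```

Because discards come only from the probationary LRU end, a line that reaches the protected segment must age all the way back through both segments before it can be discarded.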
g. Karedla discloses eliminating "the least frequently and least recently used file to liberate capacity of the cache"

Karedla discloses eliminating "the least frequently and least recently used file to liberate capacity of the cache," as recited. EX1001, 13:3–4. According to Karedla, "[c]ache replacement, also known as eviction, is the operation of discarding data from the cache to make room for new data." EX1004, 39. As discussed above, the least frequently and least recently used file is evicted to make room for additional data. See EX1002, ¶99.

As Karedla discloses each and every element as recited in claim 4, independent claim 4 of the '745 patent is anticipated and unpatentable. See Verdegaal Bros. v. Union Oil Co. of California, 814 F.2d 628, 631 (Fed. Cir. 1987) (as cited in M.P.E.P. § 2131). EX1002, ¶100.
3. Karedla discloses the subject matter of independent claims 1, 8, 12, and 15

Independent claims 1, 8, 12, and 15 are substantively identical to independent claim 4. See EX1002, ¶101. For example, independent claims 1, 8, 12, and 15 are respectively directed to similar methods, media, and apparatus for identifying "a least frequently and least recently used file." EX1001, 12:31, 13:14, 14:43, 14:25; see EX1002, ¶101. Karedla discloses that cache "could be implemented in hardware or it could be implemented and managed in software with some hardware assists." EX1004, 39. Thus, Karedla discloses methods, media having program instructions, and apparatus for caching files in a computer system. See EX1002, ¶102.
a. Independent claims 1 and 12

In particular, as discussed above, independent claims 1 and 12 incorporate the subject matter of reading "an extended segment of data," similar to independent claim 4. EX1001, 12:56, 13:46; see EX1002, ¶103. Karedla, however, discloses this subject matter as discussed above. See § VII.A.2.b; EX1002, ¶¶77–82, 103.

Independent claim 1 further recites "the cache being located in a random access memory (RAM) of the computer." EX1001, 12:35–36. Karedla discloses this. For example, Karedla discloses that "[a] cache buffer is faster memory used to enhance the performance of a slower memory (a disk drive, for example), known as the backing store." EX1004, 39. In particular, Karedla explains that I/O caches are typically "implemented and managed in software with some hardware assists." Id. at 39. Other examples of such caches that are managed entirely by software and reside in main memory include caches for file systems and database management systems. EX1005, 134.

Karedla further discusses possible cache location along the I/O data path. EX1004, 41; see blue in the annotated Figure 1 reproduced below.

and that I/O caches may be located in a RAM. See EX1002, ¶108. Indeed, it was common at that time for cache memory to be RAM. See EX1002, ¶108; see EX1008, 1:49 ("[A]ll of the host's data is cached in RAM 110. We refer to this XOR processor RAM as the cache memory 110."); EX1008, FIG. 1.
Independent claim 12 is a computer readable media claim that recites the same limitations as claim 4 but rewritten from the method format. Claim 12 also recites the cache being located in a random access memory. EX1001, 14:12. As discussed above, Karedla discloses all the limitations recited in claim 12, including that caches may be "implemented and managed in software." EX1004, 39; see EX1002, ¶104.
b. Independent Claim 8

Independent claim 8 does not require the reading of an extended segment of data; instead, it recites the additional limitation of scanning the frequency factor "without considering a frequency factor corresponding to a most recently used file." EX1001, 13:18, 26–30; see EX1002, ¶110. Karedla discloses that the Robinson FBR algorithm partitions the LRU stack into three sections whose sizes are tunable. EX1004, 40; see EX1002, ¶111; see EX1005, annotated Figure 2.1 below.

the block with the smallest reference count [i.e., least frequently used] in the old section for replacement, "choosing the least recently used such block if there is more than one." EX1005, 136; see EX1002, ¶115. Thus, in the Robinson FBR algorithm, the frequency factor corresponding to the MRU file is not considered. See EX1002, ¶115.

The SLRU variation disclosed in Karedla similarly does not consider the frequency factor corresponding to the MRU file. See EX1002, ¶116. According to Karedla, the SLRU list, in practice, runs from the MRU end of the protected segment to the LRU end of the probationary segment. EX1004, 43; see EX1002, ¶116. Because the MRU end of the protected segment is protected and only the LRU probationary entry is chosen for discarding, the SLRU algorithm also does not consider the frequency factor corresponding to the MRU file in the protected segment. EX1004, 43; see EX1002, ¶116.
c. Independent claim 15

Independent claim 15 does not recite reading an "extended segment of data," but instead recites the additional limitation of "scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file." EX1001, 14:40–43; see EX1002, ¶117. In the '745 patent, the MRU file and LRU file merely act as boundaries for the scanning process. Id. at 6:50 (the "MRU file acts as a stop for the scanning process."); see EX1002, ¶118. As

discussed above, a POSA reading Karedla would understand that Karedla discloses the limitation "scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file." See § VII.A.2.e; EX1002, ¶¶118–119.

For example, the Robinson FBR algorithm, discussed in Karedla, specifically scans the blocks from LRU to MRU. See EX1007, 5:15–18 ("Finding a block to replace then consists of scanning the blocks (from LRU to MRU) in each such LRU chain (in ascending count order) until a block is found in the old section."); see EX1002, ¶119. Thus, a POSA reading Karedla would understand that Karedla discloses "scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file." See EX1002, ¶119.
Additionally, a POSA reading Karedla would understand that a disk drive at the time (2001) would include "a directory structure, a file allocation table (FAT) and file data." See EX1002, ¶120. Indeed, it was exceedingly common at that time for Windows and DOS products to use file allocation tables in their file systems. EX1002, ¶120; see U.S. Patent No. 6,629,201 to Dempsey² ("Dempsey," EX1009), 11:21–23 ("For example, in the WINDOWS NT operating system, such data is used for both a FAT file system and an NTFS file system."). Indeed, Block 110 ("Installable File System (e.g. NTFS or FAT)") is one of the standard components of an operating system, such as WINDOWS NT or WINDOWS 2000, with which the cache software may be used. EX1009, 4:11–14; Figure 1. Moreover, it is common for disk storage media to use FAT in its disk file system. EX1002, ¶120. File systems typically have directories and directory structures. EX1002, ¶120.

Nonetheless, Karedla discloses that the I/O data path includes a set of hosts connected to storage devices via controllers. EX1004, 41. For example, the storage devices may be Seagate Elite drives. Id. Further, Karedla notes that one modern file system at that time was Windows NT's (then-)new file system, which used a directory structure, a FAT table, and file data as explained above. EX1002, ¶121. A POSA reading Karedla would thus understand that its disk drive would include "a directory structure, a file allocation table (FAT) and file data," as recited. EX1002, ¶121. Thus, Karedla discloses the subject matter recited in claim 15. See EX1002, ¶121.

² U.S. Patent No. 6,629,201 to Dempsey et al. ("Dempsey"), filed on May 2, 2001, claims priority to U.S. Patent Appl. No. 60/204,266, filed on May 15, 2000. Accordingly, Dempsey has an effective filing date before September 7, 2001. Thus, Dempsey is prior art under pre-AIA 35 U.S.C. § 102(e) and is contemporaneous to the '745 patent.
4. Karedla discloses decrementing a frequency factor and self tuning as recited in dependent claims 3, 7, 13, and 16

Dependent claim 3 recites "providing a driver for decrementing a frequency factor after a time period of non-use of a corresponding file." A POSA reading Karedla would understand that the frequency factors may be decremented periodically in the Robinson FBR algorithm, as described in both the Robinson Article and Robinson '885. See EX1002, ¶122; see EX1007, 9:7–12 ("[I]n order to adapt to changes in the underlying frequency distribution, all counts periodically may be decremented or multiplied by a reduction factor."); EX1005, 136 ("[W]e dynamically maintain the sum of all reference counts" by [the] "reduction of reference counts"). Thus, Karedla discloses the subject matter recited in claim 3. See EX1002, ¶123.
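The periodic count reduction described in Robinson '885 and the Robinson Article can be sketched in a few lines; the reduction factor below is hypothetical:

```python
# Sketch of periodic "aging" of reference counts: after a period, every
# count is scaled down by a reduction factor so the cache adapts to
# changes in the underlying frequency distribution.

def age_counts(counts, factor=0.5):
    """Multiply every reference count by a reduction factor, rounding down."""
    return {name: int(c * factor) for name, c in counts.items()}

counts = {"hot": 8, "warm": 3, "cold": 1}
counts = age_counts(counts)
assert counts == {"hot": 4, "warm": 1, "cold": 0}
```

Scaling rather than resetting preserves the relative ordering of the counts while preventing blocks with historically high counts from being retained forever.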
While dependent claims 7, 13, and 16 recite the broader limitation of "self tuning," Karedla discloses self tuning in the above-noted disclosure of decrementing counts to adapt "to changes in the underlying frequency distribution," EX1007, 9:9–10, and to prevent blocks with high reference counts that are "never [] replaced," EX1005, 136. See EX1002, ¶124. In addition, Karedla discloses that the cache line sizes could be tunable and that one such

LRU file to a frequency factor corresponding to a MRU file.

9. The method as recited in claim 8, wherein the storage medium is a hard drive.

See discussion above for independent claim 4. See § VII.A.2. For example, EX1004, 39: "A cache buffer is faster memory used to enhance the performance of a slower memory (a disk drive, for example), known as the backing store." See id. at 38: "A typical data-storage hierarchy consists of main memory, magnetic disks, and optical and tape drives." See EX1002, ¶130.

11. The method as recited in claim 8, wherein the method operation of reading the files includes, reading extended files.

See discussion above for independent claim 4. See EX1002, ¶130; see § VII.A.2.b.

14. The computer readable media as recited in claim 12 further comprising: program instructions for allowing a user to adjust a size of the extended segment that is read.

See discussion above for independent claim 4. See § VII.A.2. See id. at 40: "[V]endors also offer a tunable read-ahead threshold to minimize both cache pollution and the latency in transferring large amounts of data from the disk to the I/O bus." Also, to avoid cache pollution, vendors offer user-selectable upper bounds on the request size that the cache will process. See EX1002, ¶130.

17. The apparatus as recited in claim 15, wherein the least frequently and least recently used file is discarded to free cache capacity.

See discussion above for claim 4. See EX1002, ¶130; see § VII.A.2.g.
B. Ground 2: Burton and Karedla render claims 1, 4, 6, 9, 11, 12, 15, and 17 obvious under pre-AIA 35 U.S.C. § 103(a)

As discussed below, Burton in view of Karedla renders claims 1, 4, 6, 9, 11, 12, 15, and 17 of the '745 patent obvious.
1. Overview of Burton

U.S. Patent No. 6,738,865 to Burton et al. ("Burton") claims priority to U.S. Appl. No. 09/591,916, filed June 9, 2000, and has an effective filing date before September 7, 2001. Burton, therefore, is prior art under pre-AIA 35 U.S.C. § 102(e).
Burton describes a "method, system, and program for caching data." EX1006, Abstract. In particular, "[f]or each entry in cache, a variable indicates both a time when the cache entry was last accessed and a frequency of accesses to the cache entry." Id. "The variable is used in determining which entry to demote from cache to make room for subsequent entries." Id. In contrast to Karedla, Burton expressly discloses a more complete description of caches being stored in a volatile storage device: data are copied from a storage device, "such as a hard disk drive or other non-volatile storage device," and placed into "a volatile, electronic memory area referred to as cache." EX1006, 1:13–17; see EX1002, ¶133. Further, Burton discloses that random access memory (RAM) is a volatile memory. Id. at 1:26–27. Thus, a POSA reading Burton and Karedla would readily understand that the cache may be located in a volatile storage device such as a random access memory (RAM). See EX1002, ¶133.
2. Burton and Karedla render obvious the subject matter of claim 4
a. Burton discloses "[a] caching method for enhancing system performance of a computer"

Burton discloses a caching method that enhances system performance of a computer. See EX1002, ¶134. For example, Burton discloses a "method, system, and program for caching data." EX1006, Abstract. In fact, "in performance tests, it has been found that the throughput and performance of systems employing the cache management scheme of the preferred embodiments has improved 5–10% over systems that employ the prior art LRU demotion method." Id. at 4:30–34.
b. Burton discloses storing copies of files read in response to a request from an operating system

Burton discloses storing pages or tracks of data in a cache. See EX1002, ¶135. For example, Burton discloses that "[i]n a memory cache, pages or tracks of data are copied from a storage device, such as a hard disk drive or other non-volatile storage device typically comprised of a magnetic medium, and placed into a volatile, electronic memory area referred to as cache." EX1006, 1:13–17. Accordingly, "[w]hen tracks or data are accessed from the storage device they are loaded into cache and returned to the application requesting the data." Id. at 1:17–19; see id. at 2:1–3 ("Data from a device, such as a volatile memory device or nonvolatile storage device, is maintained in entries in a cache.").

Annotated Figure 1 of Burton, reproduced above, illustrates a cache architecture having a cache 2 that includes cache entries 4a, b, c, d into which tracks or pages of data from the storage 6 may be placed. Id. at 2:55–58; see id. at 2:59 (discussing that a track or page of data is staged "into a cache entry 4a, b, c, d").

achieves this by incrementing the time counter 18 for every I/O access, so the value added to the LRU rank for subsequently accessed cache entries increases substantially in a short period of time. See EX1002, ¶144; EX1006, 3:55–58. Thus, the MRU entry will be distinguished from even the frequently accessed entries: LRU entries recently accessed or added to cache "will have a substantially higher LRU rank than the LRU rank of entries that have not been accessed recently, even those entries accessed numerous times." See EX1002, ¶144; EX1006, 3:58–61. Further, the LRU rank allows Burton to differentiate the entries by their frequency of use, as the entry that was accessed more frequently will have a higher LRU rank because its LRU rank "will be weighted with previous accesses." See EX1002, ¶145; EX1006, 3:63–65. Thus, "to the extent entries were last accessed at about the same time, which in terms of cache operations is within fractions of a second of each other, the more frequently accessed entry will have a greater weighting." EX1006, 3:65–4:2.

Thus, Burton discloses assigning frequency factors to each of the files stored in the cache, the frequency factors indicating how often each of the corresponding files are requested by the operating system, as recited. See EX1002, ¶146.
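A variable that blends recency and frequency in the manner just described can be illustrated with a toy sketch. The blending formula below is an assumption for illustration only, not Burton's disclosed computation; only the qualitative behavior (a global time counter bumped per I/O, with prior accesses carrying weight forward) is taken from the text above:

```python
# Toy sketch of a combined recency/frequency rank: a global time counter
# is incremented on every I/O access, and an entry's new rank folds in
# half of its previous rank, so repeated accesses weigh an entry upward.
# The formula (clock + prev/2) is hypothetical.

class RankedCache:
    def __init__(self):
        self.clock = 0     # time counter incremented for every I/O access
        self.rank = {}     # entry -> rank value

    def access(self, entry):
        self.clock += 1
        prev = self.rank.get(entry, 0)
        # Recency dominates (the clock term), but carrying forward part of
        # the previous rank means that, of two entries last accessed at
        # about the same time, the more frequently accessed one ranks higher.
        self.rank[entry] = self.clock + prev / 2

c = RankedCache()
for e in ["a", "b", "a", "b", "a", "c"]:
    c.access(e)
assert c.rank["a"] > c.rank["b"]   # "a" was accessed more often than "b"
```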

d. Burton discloses "scanning the frequency factors, the scanning being performed in response to a target capacity of the cache being attained"

Burton discloses scanning the frequency factors in response to a target capacity of the cache being attained. See EX1002, ¶147. For example, Burton discloses that a determination is made "when a number of entries in cache has reached a threshold." EX1006, 2:6–7. In response to reaching the threshold, "a determination is made of a plurality of cache entries having a relatively low value for the variable with respect to the value of the variable for other cache entries." EX1006, 2:7–11. According to Burton, the threshold may be based on the "cache [] utilization, i.e., number of cached entries," reaching "an upper threshold." EX1006, 4:7–8.

When the cache utilization reaches the threshold, Burton discloses that a scanning process begins to identify the entries that have the lowest LRU rank. See EX1002, ¶148; see block 150 of Fig. 3 below.


In response, the cache determines (at block 152) from "the last 1024 entries in the LRU linked list 8," "thirty-two entries that have the lowest LRU rank 12a, b, c, d." The cache 2 then demotes (at block 152) those determined thirty-two entries from cache 2. EX1006, 4:8–13. "[T]hose entries having the lowest LRU rank marked for demotion are both the least recently accessed and, among those entries least recently accessed, are less frequently accessed." EX1006, 4:19–22. Thus, Burton discloses "scanning the frequency factors, the scanning being performed in response to a target capacity of the cache being attained," as recited. See EX1002, ¶150.
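The threshold-triggered demotion just described can be sketched as follows. The window and batch sizes (1024 and 32 in Burton's example) are scaled down here, and all names are hypothetical:

```python
# Sketch of threshold-triggered demotion: once cache utilization reaches
# an upper threshold, scan a bottom window of the LRU list and demote the
# entries with the lowest ranks to make room for new data.

def demote_if_full(lru_list, ranks, threshold, window=8, batch=2):
    """lru_list is ordered MRU first; returns the entries demoted."""
    if len(lru_list) < threshold:
        return []                                   # below threshold: no scan
    tail = lru_list[-window:]                       # bottom section of the list
    victims = sorted(tail, key=ranks.get)[:batch]   # lowest-ranked entries
    for v in victims:
        lru_list.remove(v)                          # demote from cache
    return victims

lru = [f"e{i}" for i in range(10)]              # e0 is MRU, e9 is LRU
ranks = {e: 10 - i for i, e in enumerate(lru)}  # older entries rank lower
demoted = demote_if_full(lru, ranks, threshold=10)
assert sorted(demoted) == ["e8", "e9"]          # bottom-ranked entries demoted
assert len(lru) == 8
```

Scanning only the bottom window keeps the operation cheap and, by construction, never examines the MRU end of the list.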

e. Burton discloses identifying "a least frequently and least recently used file"

As discussed above, Burton discloses identifying the entries having the lowest LRU rank. See EX1002, ¶151. In particular, Burton identifies "a plurality of cache entries having a relatively low value for the variable with respect to the value of the variable for other cache entries." See EX1002, ¶151; EX1006, 2:9–11. This relatively low value indicates that the cache entry is one of a least recently accessed entry and/or least frequently accessed entry. Id. at 2:11–13. Thus, similar to the '745 patent, Burton discloses that "those entries having the lowest LRU rank marked for demotion are both the least recently accessed and, among those entries least recently accessed, are less frequently accessed." See EX1002, ¶151; EX1006, 4:19–22.
f. Burton discloses eliminating "the least frequently and least recently used file to liberate capacity of the cache"

As discussed above, Burton determines the least recently accessed and least frequently accessed entries for demotion. See EX1002, ¶154. Burton further explains that demoting data from cache will make room for subsequently accessed tracks or data more recently accessed from storage. See EX1002, ¶154; EX1006, 1:39, 2:2–3. Thus, Burton discloses this limitation. See EX1002, ¶154.

g. Karedla discloses reading "an extended segment of data in response to a request from an operating system"

Although Burton does not expressly disclose reading "an extended segment of data," Karedla teaches the same approach to reading an extended segment of data as the '745 patent, as discussed above. See § VII.A.2.b; EX1002, ¶155. Further, as discussed above, Karedla anticipates all of the features of independent claim 4, including reading "an extended segment of data," as recited. See EX1002, ¶155; § VII.A.
3. A POSA would have been motivated to combine Burton and Karedla

Burton and Karedla each relate to the same field of endeavor (using cache to improve computer system performance). See EX1002, ¶156; see EX1004, 38; EX1006, 4:33. Further, Burton and Karedla specifically discuss using frequency with a LRU list. See EX1002, ¶156. Karedla also teaches that replacement "algorithms generally are combinations of LRU, LFU, and read-ahead strategies, with varying thresholds on the amount of data that is prefetched." See EX1002, ¶156; EX1004, 40. Thus, a POSA would have found it obvious to combine the disclosure of a cache replacement mechanism based on least recently accessed and least frequently accessed data in Burton with Karedla's method of reading ahead to prefetch data into a large cache line size. See EX1002, ¶157. The resulting combination would provide: (1) a method for demoting data from cache based on least recently accessed and least frequently accessed data, taught by Burton; and (2) reading extended data, taught by Karedla. See EX1002, ¶157.
Further, a POSA would have been motivated to combine Burton's replacement mechanism with Karedla's cache strategies to improve the performance of systems. See EX1002, ¶158. This combination would have involved combining familiar elements (e.g., computer, volatile memory device, non-volatile storage device, etc.) in a simple manner to yield the same predictable results Burton is directed to produce. See EX1002, ¶158. Namely, this combination provides a cost-effective method to "increase performance and data throughput," EX1006, 1:61–62; minimize latency in data access, EX1004, 40; "reduc[e] cache lookup time," EX1004, 40; and increase the speed of cache directory searches, EX1004, 40. See EX1002, ¶158.
4. Burton further discloses the subject matter of independent claims 1, 8, 12, and 15

As discussed above, independent claims 1, 8, 12, and 15 are substantively identical to independent claim 4 and are respectively directed to similar methods, media, and apparatus for identifying "a least frequently and least recently used file." EX1001, 12:31, 13:14, 14:43, 14:25; see EX1002, ¶159. Burton discloses a "method, system, and program for demoting data from cache based on least recently accessed and least frequently accessed data." EX1006, Title (54). In particular, the disclosure of Burton "may be implemented as a method, apparatus or program using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof." Id. at 4:39–42. Thus, Burton discloses methods, media having program instructions, and apparatus for caching files in a computer system as recited in the '745 patent. See EX1002, ¶159.
a. Independent claim 1

Burton discloses the limitations recited in independent claim 1. See EX1002, ¶160. For example, Burton discloses that cache is "a volatile, electronic memory area" and that random access memory (RAM) is a type of volatile memory. EX1006, 1:16–17, 1:26–27. Thus, it is obvious that the cache in Burton may be located in a random access memory (RAM) of the computer, as recited in claim 1. See EX1002, ¶160. As discussed above, Burton in view of Karedla discloses all of the remaining limitations, such as reading "extended segments of data," storing files in a cache, associating a frequency factor with each of the files in the cache, identifying "a least frequently and least recently used file," and eliminating the least frequently and least recently used file from the cache when a target capacity of the cache is reached. See EX1002, ¶161.

b. Independent claim 8

Independent claim 8, while substantively similar to illustrative claim 4, recites the additional limitation of scanning the frequency factor "without considering a frequency factor corresponding to a most recently used file." EX1001, 13:26–30; see EX1002, ¶162. Burton discloses that "LRU entries recently accessed or added to cache will have a substantially higher LRU rank than the LRU rank of entries that have not been accessed recently, even those entries accessed numerous times." EX1006, 3:58–61. Thus, according to Burton, the MRU entry has a LRU rank that is substantially higher than that of "even those entries accessed numerous times." See EX1002, ¶163. Further, the cache determines the least frequently used and least recently used files from the bottom of the LRU linked list. See EX1002, ¶163. In particular, Burton identifies "thirty-two entries that have the lowest LRU rank" from "the last 1024 entries in the LRU linked list," which in practice likely has thousands of cache entries. See EX1002, ¶163; EX1006, 4:9–11, 3:6–7. Thus, Burton does not "consider[] a frequency factor corresponding to a most recently used file," as required by claim 8. See EX1002, ¶163.
c. Independent claim 12

Independent claim 12 is a computer readable medium claim that recites the same limitations as claim 4, but is rewritten from the method format. See EX1002, ¶164. Claim 12 recites "the cache being located in a random access memory." See EX1001. As discussed above, Burton in view of Karedla discloses all the limitations recited in claim 12. See EX1002, ¶164.
d. Independent claim 15

Independent claim 15 recites that the frequency factor is "scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file." EX1001, 14:40–43. Burton discloses this limitation. See EX1002, ¶165. For example, Burton discloses that a new entry 8a is added to the top of the LRU linked list 10. EX1006, 2:59–60. Thus, the top of the LRU linked list 10 is the MRU entry. See EX1002, ¶165. Further, tracks at the bottom of the list "represent[] those tracks that were accessed the longest time ago." EX1006, 1:47–48. Thus, the bottom of the LRU linked list 10 is the LRU entry. See EX1002, ¶165. Accordingly, Burton discloses that the LRU linked list is sorted from MRU to LRU. EX1002, ¶165. Burton further discloses that the bottom section of the LRU linked list 10 is scanned for the least recently accessed and least frequently accessed data. See EX1002, ¶166. For example, Burton scans "the last 1024 entries in the LRU linked list 8" to determine "thirty-two entries that have the lowest LRU rank 12a, b, c, d." EX1006, 4:9–11; EX1002, ¶166. Because only the bottom portion (e.g., the last 1024 entries) is scanned, a POSA reviewing

53

IPR2016-01643
U.S. Patent No. 6,775,745
Burton would understand that the frequency factor of the MRU file is not scanned
in Burtons scanning process. See EX1002, 166.
As discussed above, the LRU linked list is sorted from MRU to LRU. EX1002, ¶ 167. A POSA reviewing Burton would understand that, in order to find the least recently used file, the cache will start the scanning process from the LRU end of the linked list. EX1002, ¶ 167. As disclosed in Burton, these entries in the list may be within "fractions of a second" of each other, and the LRU rank provides a way for the cache to determine the entry that was accessed more frequently. EX1006, 3:67–4:1, 3:63–64; EX1002, ¶ 167. Accordingly, because Burton discloses scanning the bottom section of the LRU linked list, the scanning process in Burton is neither haphazard nor from the MRU file to the LRU file. See EX1002, ¶ 165. Indeed, a POSA reading Burton would understand that the scanning process necessarily must scan from the LRU to the MRU, and there would be no way for the system to identify this block without such scanning. See EX1002, ¶ 165. Thus, a POSA reading Burton would understand that the scanning process of Burton is "from a frequency factor for a LRU file to a frequency factor for a MRU file," as recited. See EX1002, ¶ 165.

As to the recitation of "a disk drive, the disk drive storing data, the data including a directory structure, a file allocation table (FAT) and file data," it would have been obvious to a POSA for the reasons discussed above. See EX1002, ¶ 168.
11. The method as recited in claim 8, wherein the method operation of reading the files includes, reading extended files.

See discussion above for independent claim 4. See EX1002, ¶ 169; see § VII.B.2.g.

17. The apparatus as recited in claim 15, wherein the least frequently and least recently used file is discarded to free cache capacity.

See discussion above for claim 4. See EX1002, ¶ 169; see § VII.B.2.f.
C. Ground 3: Burton anticipates claims 8 and 9 under pre-AIA 35 U.S.C. § 102(e)

As discussed above, independent claim 8 does not recite reading extended segment(s) of data. See EX1002, ¶ 170. Instead, claim 8 only requires the more general operation of "reading the file." Id. Accordingly, Burton discloses each and every limitation recited in independent claim 8, as discussed above. Id.

Claim 9 depends from claim 8 and recites that the storage medium is a hard drive. EX1001. As discussed, Burton discloses that data may be accessed from a storage device, "such as a hard disk drive, tape, optical disk storage device." EX1006, 4:61–62. Thus, Burton anticipates dependent claim 9. See EX1002, ¶ 171.

D. Ground 4: Karedla and the Robinson FBR algorithm render claims 3, 6, 7, 13, 15, and 16 obvious under pre-AIA 35 U.S.C. § 103(a)

Karedla and the Robinson FBR algorithm (i.e., Robinson '885 and the Robinson Article) render claims 3, 6, 7, 13, 15, and 16 of the '745 patent unpatentable as obvious. See EX1002, ¶ 172.

The above-identified claims can be grouped into two categories. The first category is directed to "self tuning." See EX1002, ¶ 173. For example, claim 3 recites "decrementing a frequency factor after a time period of non-use of a corresponding file," and claims 7, 13, and 16 are directed to "self tuning." Id. The second category is directed to the process of scanning from a frequency factor corresponding to a LRU file to a frequency factor corresponding to a MRU file, which includes claims 6 and 15. Id.
As discussed above in §§ VII.A.4–VII.A.5, Karedla discloses these elements. For example, Karedla discloses that the cache line sizes could be tunable and that caches may be "dynamically adaptive." See § VII.A.4; EX1002, ¶¶ 125–128. Further, a POSA reading Karedla would also understand that the process of scanning would be from a LRU file to a MRU file.

The Robinson FBR algorithm disclosed by Robinson '885 and the Robinson Article expressly teaches a more complete description of these elements. Because Karedla expressly discusses the Robinson FBR algorithm disclosed in the Robinson Article and leads a POSA directly to it, a POSA would find it obvious to combine these references. See EX1004, 40; EX1002, ¶ 176.
1. Overview of the Robinson FBR algorithm as disclosed by Robinson '885 and the Robinson Article

U.S. Patent No. 5,043,885 to Robinson ("Robinson '885") claims priority from U.S. Appl. No. 391,220, which was filed August 8, 1989, and thus has an effective filing date before September 7, 2001. John T. Robinson & Murthy V. Devarakonda, "Data Cache Management Using Frequency-Based Replacement," Performance Evaluation Rev., vol. 18, no. 1, pp. 134–142, May 1990 ("Robinson Article") is a printed publication that was published and publicly available at least by May 1990, before the September 7, 2001 effective date of the '745 patent. Robinson '885 and the Robinson Article are, therefore, prior art under pre-AIA 35 U.S.C. § 102(e).
The Robinson FBR algorithm is a frequency-based replacement algorithm for managing caches. EX1005, 134. For example, the Robinson Article states that "replacement choices are made using a combination of reference frequency and block age." Id. Further, Robinson '885 discloses that the data cache uses "dynamic frequency based replacement and boundary criteria." EX1007, Title (54). In particular, Robinson '885 discloses "methods and apparatus for making cache block replacement decisions based on a combination of least recently used (LRU) stack distance and data reference frequencies." Id. at 1:11–14.
According to the Robinson Article, "[f]or a block read" the block is "fetched from disk (i.e., a block in)" and the block fetched from the disk is stored in the cache block. EX1005, 138. If an unused cache block is unavailable, "a block in use is freed according to the [frequency-based] replacement algorithm." Id. Thus, the FBR algorithm is used to free up capacity of the cache. See EX1002, ¶ 174.

The Robinson FBR algorithm includes at least four concepts similar to the '745 patent. See EX1002, ¶ 175. First, the basis of the Robinson FBR algorithm is the LRU replacement algorithm, with the most recently referenced (MRU) block always pushed on the top of the stack. EX1005, 135; see EX1007, 4:35–37 ("The cache essentially works in LRU fashion where the entry is put in the MRU position each time a block is referenced."). As shown in Figure 1 of Robinson '885, reproduced below, position 14 in the stack is the MRU and position 16 is the LRU. See EX1002, ¶ 176.


Second, "reference counts are maintained for each block" in the cache. EX1005, 135; EX1007, 4:25–26; see EX1002, ¶ 177. Third, replacement choices are confined to those blocks that have aged into an "old section" of the cache, which "confine[s] replacement choices to blocks that have not been too recently referenced." EX1005, 135; EX1007, 5:4–8 ("On a miss, a block is selected to be replaced by finding the block (or blocks) with the smallest count in the old section and then replacing that block (or least recently used such block if there is more than one)."); EX1002, ¶ 178. "Finding a block to replace then consists of scanning the blocks (from LRU to MRU) in each such LRU chain (in ascending count order) until a block is found in the old section." EX1007, 5:15–18.
Finally, the Robinson FBR algorithm provides for "periodically adjusting reference counts so as to approximate reference frequencies and avoid practical problems." EX1005, 135, 136; EX1007, 9:7–12 ("[I]n order to adapt to changes in the underlying frequency distribution, all counts periodically may be decremented or multiplied by a reduction factor.").
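For illustration, the four concepts just described can be combined into a simplified Python sketch of a frequency-based replacement policy. This is a paraphrase, not the Robinson '885 embodiment: the capacity, section size, and reduction factor are hypothetical, and the sketch omits the new-section rule that suppresses count increments for very recently referenced blocks.

```python
class FBRCacheSketch:
    """Simplified frequency-based replacement sketch: an LRU stack with
    the MRU block at the front, per-block reference counts, and eviction
    restricted to an 'old' section at the LRU end of the stack."""

    def __init__(self, capacity=8, old_section=3, reduction=0.5):
        self.capacity = capacity
        self.old_section = old_section  # number of blocks at the LRU end
        self.reduction = reduction      # periodic count-reduction factor
        self.stack = []                 # index 0 = MRU, last index = LRU
        self.counts = {}                # reference count per cached block

    def reference(self, block):
        if block in self.counts:
            self.stack.remove(block)
            self.counts[block] += 1     # count maintained for each block
        else:
            if len(self.stack) >= self.capacity:
                self._replace()
            self.counts[block] = 1
        self.stack.insert(0, block)     # MRU position on every reference

    def _replace(self):
        # Confine the victim search to the old section, iterating from the
        # LRU end so ties go to the least recently used block, and evict
        # the block with the smallest reference count.
        old = self.stack[-self.old_section:]
        victim = min(reversed(old), key=lambda b: self.counts[b])
        self.stack.remove(victim)
        del self.counts[victim]

    def age_counts(self):
        # Periodically reduce all counts so the policy can adapt to
        # changes in the underlying reference-frequency distribution.
        for block in self.counts:
            self.counts[block] *= self.reduction
```

Exercising the sketch with a small cache shows the least frequently used block in the old section being evicted first, while recently referenced blocks stay at the MRU end.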
2. The Robinson FBR algorithm discloses each and every element of the challenged claims

As discussed above, the challenged claims can be grouped into two categories. The first category is directed to "self tuning." See EX1002, ¶ 173. For example, claim 3 recites "decrementing a frequency factor after a time period of non-use of a corresponding file," and claims 7, 13, and 16 are directed to "self tuning." Id. The Robinson FBR algorithm discloses this element. For example, according to Robinson '885, "in order to adapt to changes in the underlying frequency distribution, all counts periodically may be decremented or multiplied by a reduction factor." EX1007, 9:7–12; see EX1005, 135 ("periodically adjusting reference counts so as to approximate reference frequencies and avoid practical problems."); see EX1002, ¶ 183.
The second category is directed to the process of scanning from a frequency factor corresponding to a LRU file to a frequency factor corresponding to a MRU file, which includes claims 6 and 15. See EX1002, ¶ 184. The Robinson FBR algorithm also discloses this element. For example, Robinson '885 expressly discloses that "[f]inding a block to replace then consists of scanning the blocks (from LRU to MRU) in each such LRU chain (in ascending count order) until a block is found in the old section." See EX1007, 5:15–18; EX1002, ¶ 184.
3. A POSA would have been motivated to combine the disclosure of the Robinson FBR algorithm as disclosed in Robinson '885 and the Robinson Article with Karedla

A POSA would have found it obvious to combine the disclosure of the Robinson FBR algorithm as disclosed in Robinson '885 and the Robinson Article with Karedla's method of reading ahead to prefetch data into a large cache line size. See EX1002, ¶ 185. First, Robinson '885, the Robinson Article, and Karedla each relates to the same field of endeavor (using cache to improve computer system performance). See EX1002, ¶ 185; accord EX1007, 3:28–30; EX1005, 134; EX1004, 38.

Indeed, Karedla expressly discusses the Robinson FBR algorithm disclosed in the Robinson Article and thus leads a POSA directly to it. See EX1004, 40; EX1002, ¶ 185. According to Karedla, "[t]hese algorithms generally are combinations of LRU, LFU, and read-ahead strategies, with varying thresholds on the amount of data that is prefetched." See EX1004, 40; EX1002, ¶ 185. Thus, a POSA reading Karedla would have found it obvious to combine its disclosure of cache strategies (e.g., reading ahead, prefetching, and having large cache line sizes) with the Robinson FBR algorithm as disclosed in the Robinson Article and Robinson '885. See EX1002, ¶ 185.

The result of such a combination would provide: (1) a method for replacing data based on least recently accessed and least frequently accessed data; (2) decrementing a frequency factor and self-tuning, as taught by the Robinson Article and Robinson '885; and (3) reading extended data, as taught by Karedla. See EX1002, ¶ 186.

Further, a POSA would have been motivated to combine the self-tuning technique disclosed in the Robinson FBR algorithm with Karedla's cache strategies to potentially improve the performance of systems. See EX1002, ¶ 187. This combination would have involved combining familiar elements (e.g., computer, volatile memory device, non-volatile storage device, etc.) in a simple manner to yield the same predictable results Karedla is directed to produce. See EX1002, ¶ 187. Namely, this combination provides a cost-effective method to improve system and cache performance. See EX1002, ¶ 187; EX1007, 3:28–30 ("By utilizing the disclosed methods and apparatus cache management performance can be significantly improved."); EX1005, 134 ("Simulation results show that this algorithm can offer up to 34% performance improvement over LRU replacement."); EX1004, 38 (a read-ahead strategy known as prefetching "minimize[s] latency in data access by anticipating future requests for data and bringing it into the cache," and "the miss rate is lower for large line sizes in large caches.").
... wherein the caching method is self tuning.

See § VII.D.2.

13. The computer readable media as recited in claim 12 further comprising: program instructions for the caching method are self-tuning.

See claim 3 above. See EX1002, ¶ 188; see § VII.D.2.

[15.P] An apparatus for caching files in a computer system, the computer system including a microprocessor, a random access memory (RAM) and an operating system, the apparatus comprising:

Karedla discloses an apparatus for caching files in a computer system, the computer system including a microprocessor, a random access memory (RAM) and an operating system, as discussed above. See § VII.A.3.c; see EX1002, ¶ 188; see § VII.D.2.

[15.A] a disk drive, the disk drive storing data, the data including a directory structure, a file allocation table (FAT) and file data; and

Karedla discloses a disk drive, the disk drive storing data, the data including a directory structure, a file allocation table (FAT) and file data. See § VII.A.3.c; see EX1002, ¶ 188; see § VII.D.2.

[15.B] a cache, the cache being located in the random access memory (RAM), the RAM being configured to cache files requested by the operating system according to a caching mechanism, the caching mechanism assigning frequency factors to each of the files in the cache, wherein upon a target capacity of the RAM being attained, the frequency factors are scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file.

Karedla discloses a cache, the cache being located in the random access memory (RAM), the RAM being configured to cache files requested by the operating system according to a caching mechanism, the caching mechanism assigning frequency factors to each of the files in the cache, wherein upon a target capacity of the RAM being attained, the frequency factors are scanned. See § VII.A.3.c. The Robinson FBR algorithm expressly discloses that the frequency factors are scanned from a frequency factor for a LRU file to a frequency factor for a MRU file, not including the frequency factor for the MRU file. For example, EX1007, 5:15–18 ("Finding a block to replace then consists of scanning the blocks (from LRU to MRU) in each such LRU chain (in ascending count order) until a block is found in the old section."). See EX1002, ¶ 188; see §§ VII.A.3.c and VII.D.1–2.

16. The apparatus as recited in claim 15, wherein the caching mechanism is self tuning.

See claim 3 above. See EX1002, ¶ 188; see § VII.D.2.

E. Grounds 5 and 6: Claim 2 is unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) over either (i) Karedla and Otterness, or (ii) Burton, Karedla, and Otterness

Claim 2 is unpatentable as obvious under pre-AIA 35 U.S.C. § 103(a) over either (i) Karedla and Otterness, or (ii) Burton, Karedla, and Otterness.
1. Overview

As discussed above in § VII.A.2.b, Karedla discusses that large cache line sizes may have certain advantages over smaller cache line sizes. Karedla, however, does not expressly disclose the Kbyte sizes of such large cache lines. Otterness does.

U.S. Patent No. 6,460,122 to Otterness et al. ("Otterness") claims priority from U.S. Appl. No. 60/127,231, filed on March 31, 1999, and thus has an effective filing date before September 7, 2001. Therefore, Otterness is prior art under pre-AIA 35 U.S.C. § 102(e).
Otterness describes a "multiple level cache structure and multiple level caching method." EX1008, 1:13–14. In particular, Otterness provides an "inventive cache line descriptor," id. at 3:4–15, which may be used in either a "multi-processor" or a "multi-controller storage environment," id. at 3:62–63. Specifically, Otterness discloses that the cache line descriptor (CLD) is "used by the RAID application to keep track of all of the data stored in the cache." Id. at 9:23–24. The CLDs provide various pointers to manage data movement. Id. at 9:47. For example, the CLDs may provide a linked list pointer to the next line in the least recently used (lru) chain. Id. at 9:61–62. According to Otterness, each cache data line has its own associated CLD. Id. at 10:26–27. In the exemplary embodiment, the cache line stores data ("for example 8 kbyte, 16 kbyte, 32 kbyte, 64 kbyte, or the like of data"). EX1008, 10:38–39.
2. Claim 2 is unpatentable as obvious

Claim 2 of the '745 patent recites that "the extended segments are one of 64 Kbytes, 128 Kbytes and 256 Kbytes in size." As discussed above, Otterness discloses that the cache line may store "8 kbyte, 16 kbyte, 32 kbyte, 64 kbyte, or the like of data." EX1008, 10:38–39. A POSA reading Otterness would understand that, while Otterness does not expressly list out cache line sizes of 128 kbyte or 256 kbyte, "or the like of data" refers to such sizes. See EX1002, ¶ 193. Indeed, it would have been obvious to continue the factor-of-two progression of 8, 16, 32, and 64 to reach both 128 and 256. See EX1002, ¶ 193.
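The factor-of-two progression relied on above is easily verified; continuing Otterness's 8–16–32–64 sequence two more steps reaches exactly the sizes recited in claim 2:

```python
# Doubling cache line sizes (in Kbytes): two steps past 64 give 128 and 256.
sizes = [8 * 2 ** i for i in range(6)]
assert sizes == [8, 16, 32, 64, 128, 256]
```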
3. A POSA would have been motivated to combine Otterness, Karedla, and Burton

Otterness, Karedla, and Burton each relates to the same field of endeavor (using cache to enhance computer system performance). See EX1002, ¶ 191. Karedla specifically discusses that one possible cache location is in the multidevice controller. See EX1002, ¶ 191; EX1004, 41. Thus, a POSA reading Karedla and Otterness would understand that the "multi-controller storage environment" of Otterness is an environment where at least two controllers are present for a RAID setup. See EX1002, ¶ 191; EX1008, 4:5; 4:20–21. Indeed, Karedla discusses "multilevel caching" and allocating memory between cache at the host and cache at the controller. EX1004, 42; see EX1002, ¶ 191. Thus, a POSA would have found it obvious to combine the disclosure of the multiple level cache structure of Otterness with the method taught by Karedla. See EX1002, ¶ 191. The result of such a combination would provide (1) a caching method taught by Karedla and Burton, as discussed above, and (2) cache line sizes capable of storing "8 kbyte, 16 kbyte, 32 kbyte, 64 kbyte, or the like of data," as taught by Otterness. See EX1002, ¶ 192.

Further, a POSA would have been motivated to combine Karedla, Burton, and Otterness to improve the performance of systems. See EX1002, ¶ 194. This combination would have involved combining familiar elements (e.g., multidevice controllers, cache memory, storage devices) and familiar concepts (e.g., multilevel I/O caches, multilevel caching, or the like) in a simple manner to yield the same predictable results. See EX1002, ¶ 194. Namely, these combinations may provide an efficient way to provide caching at multiple levels within a system. See EX1002, ¶ 194.

Otterness, therefore, renders obvious claim 2, which recites "the extended segments are one of 64 Kbytes, 128 Kbytes and 256 Kbytes in size." See EX1002, ¶¶ 192–193. Because the range of 64 to 256 Kbytes overlaps with the range disclosed by Otterness, claim 2 is unpatentable as obvious. See In re Wertheim, 541 F.2d 257 (C.C.P.A. 1976); In re Woodruff, 919 F.2d 1575 (Fed. Cir. 1990) ("[W]here the claimed ranges overlap or lie inside ranges disclosed by the prior art a prima facie case of obviousness exists.") (as cited in M.P.E.P. § 2144.05). Thus, a POSA would have found it obvious to combine the disclosures to have cache line sizes larger than 32 kbytes. See EX1002, ¶ 195.
VIII. Conclusion
For these reasons, the challenged claims are unpatentable and Petitioner
respectfully requests that the Board grant this Petition and institute trial.

Date: August 19, 2016

Respectfully submitted,
By: /P. Andrew Riley/
Reg. No. 66,290
Finnegan, Henderson, Farabow, Garrett &
Dunner, LLP
901 New York Avenue, NW
Washington, DC 20001-4413
Telephone: 202-408-4266
Facsimile: 202-408-4400
E-mail: IV745-IPR@finnegan.com
James D. Stein
Reg. No. 63,782
Finnegan, Henderson, Farabow, Garrett &
Dunner, LLP
271 17th Street NW, Suite 1400
Atlanta, GA
Telephone: 404-653-6439
Facsimile: 404-653-6444
E-mail: IV745-IPR@finnegan.com
Jonathan Stroud
Reg. No. 72,518
Unified Patents Inc.
1875 Connecticut Ave. NW, Floor 10
Telephone: 202-805-8931
Facsimile: 650-887-0349
E-mail: jonathan@unifiedpatents.com

CERTIFICATION UNDER 37 C.F.R. § 42.24(d)
Under the provisions of 37 C.F.R. § 42.24(d), the undersigned hereby certifies that the word count for the foregoing Petition for Inter Partes Review totals 13,973, which is less than the 14,000 words allowed under 37 C.F.R. § 42.24(a)(1)(i).

/P. Andrew Riley/


P. Andrew Riley
Finnegan, Henderson, Farabow,
Garrett & Dunner, LLP

CERTIFICATE OF SERVICE
The undersigned certifies that the foregoing Petition for Inter Partes
Review and the associated Exhibits 1001 through 1014 were served on August
19, 2016, by Overnight Express Mail at the following address of record for the
subject patent.
Intellectual Ventures I, LLC
3150 139th Ave SE, Building 4
Bellevue, Washington 98005
(425)467-2300
Schwabe, Williamson & Wyatt/SFC
1211 SW Fifth Ave.
Suite 1900
Portland, Oregon 97204
(503)222-9981
Dated: August 19, 2016

By: /Lauren K. Young/


Lauren K. Young
Legal Assistant
FINNEGAN, HENDERSON, FARABOW,
GARRETT & DUNNER, L.L.P
