Table Of Contents
1: Introduction
2: Performance Measurement
2.1: Measurement Purpose
2.2: Measurement Techniques
2.3: Metrics
3: Performance Factors
3.1: Client
3.2: Network
3.3: Server
4: Apache Performance
5: Server Clusters
6: Links
Introduction
Chapter 3 discussed the HTTP protocol used to transmit web documents on the Internet,
and the Apache web server - an example of an implementation using HTTP, and the
means by which web documents are made available.
Some of Apache's configuration directives were introduced. In this chapter we will
discuss some further directives - those responsible for determining the performance of
Apache.
But first we will consider how we can measure the performance of a web server, the
factors that affect that performance, and, indeed, what actually comprises a web server.
This data will allow us to make informed decisions about the setup of our server.
COSC1300 - Web Servers and Web Technology
Online Tutorials Materials
Copyright 2001 Hugh Williams
All Rights Reserved
2. Performance Measurement
In this chapter, we examine the techniques and value of
measuring web traffic.
2.1. Measurement Purpose
Web measurement encompasses many forms of web traffic, and many ways to measure
it. Web traffic data must not only be collected but also analysed to be of any value.
Content Creators
Content creators use web traffic data to understand the browsing behaviour of the users
of their site, including how they access certain resources (the links they choose) and
how often links to advertisers' sites are selected. This information may
necessitate a change in the design of the site.
Content creators can also gain information about how users access the site, and how
the site performs under different methods of access (eg. telephone modem). This may
necessitate page redesign to improve the performance of downloads.
Web-Hosting Companies
Web traffic data is critical for web hosting companies. Fundamentally this information
would be used to determine the number of bytes transferred for the site of each client,
which is used to determine how much to charge each client, and how to allocate system
resources.
Network Operators
Network operators, for example an Internet Service Provider (ISP), use web
traffic data to manage their networks. This could include determining the benefit of
installing a proxy cache on the local network, or comparing the latency of users
connected with high-bandwidth links (eg. cable modem) to those connected with
low-bandwidth links (eg. telephone modem). Latency details of the network allow the
network operator to decide how to upgrade the network for new users.
Checkpoint
1. What is the primary reason web hosting companies use web traffic data?
2.2. Measurement Techniques
There are many ways to collect web traffic data. Web servers collect data through logs
as they handle requests, although the information logged this way is usually limited. It
is also possible to monitor the traffic moving on a network directly.
Logging
Servers generate logs as they process requests from clients. Every entry in a server
access log represents an HTTP request to the server and includes information about the
client. By default, server log entries contain only basic information, such as the request
method and server status code, to minimise the overhead of logging. It is possible,
however, to log more information, including the request and response headers.
Server logs could be used to analyse user browsing behaviour, to determine, for
example, the most popular page on a site. However, the log cannot record how often a
page is served from a cache (and it is more likely that popular pages will be retrieved
from cache). Without disabling caching for all pages on a site (which would significantly
increase the amount of web traffic - exactly what caching attempts to avoid), logs
cannot be used to provide accurate browsing information.
Common Log Format
Logging is usually done in the Common Log Format (CLF). This is a de facto industry
standard that provides basic information about each HTTP request, as follows:
Remote Host: the identity of the client, either the IP address or the domain name
Remote Identity: the account of the client, usually not implemented
Authenticated User: the username of the client if authentication is used
Time: the time the request was made, at one-second granularity
Request: corresponding to the first line of the HTTP request
Response Code: the server's three-digit response code
Content Length: the number of bytes associated with the response
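The seven CLF fields above can be pulled apart with a simple regular expression. The log line below is a made-up example, not taken from a real server:

```python
# Sketch: parsing one Common Log Format entry into its seven fields.
import re

CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<length>\d+|-)'
)

line = '192.168.0.1 - frank [10/Oct/2001:13:55:36 +1000] "GET /index.html HTTP/1.0" 200 2326'
m = CLF.match(line)
fields = m.groupdict()
print(fields["host"], fields["status"], fields["length"])
```

A log analyser would apply this pattern to every line of the access log and aggregate the results (for example, summing the Content Length field per client to bill for bytes transferred).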
Packet Monitoring
It is possible to use software to directly monitor the traffic on the network. Logs provide
no information about how web traffic affects the TCP and IP layers. A packet monitor
can produce detailed traces of web activity at the HTTP, TCP and IP layers which can
be used to determined the efficiency of the network.
Active Measurement
To allow systematic investigation of the performance of a server, active measurement,
or testing, is required. Requests are generated in a controlled and predetermined manner,
usually with a modified HTTP client, and the performance of the server is observed.
In the process of testing a web server, the following issues should be addressed:
1. location of the test client
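A minimal active-measurement client might be sketched as follows. Requests are generated in a controlled manner and per-request latency is recorded; here a throwaway local test server stands in for the server under test, and in practice `BASE_URL` would point at the real server:

```python
# Sketch: active measurement - issue controlled requests and record latency.
# The local HTTP server below is only a stand-in for the server under test.
import threading, time, urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_port}/"

def measure(n):
    """Issue n requests in a predetermined manner; return per-request latencies."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(BASE_URL) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

lat = measure(5)
print(f"{len(lat)} requests, mean latency {sum(lat)/len(lat):.6f}s")
```

The location of the test client matters precisely because these measured latencies include whatever network path lies between the client and the server.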
Checkpoint
1. The combined log format has two extra fields of information (beyond that of the
CLF). What are they?
2. Why can't logs be used to accurately determine browsing behaviour?
2.3. Metrics
In this section we discuss metrics for measuring web
server performance.
Web performance can be analysed from different viewpoints. For instance, a server
administrator's perception of performance has to do with fast response times and no
refused connections. On the other hand, a webmaster's perception of performance is oriented
towards high connection throughput and high availability. Thus, it is difficult to formulate a
uniform set of metrics to measure web performance that is equally acceptable to both parties.
Here, we concentrate only on the server administrator's point of view.
Performance Metrics
Latency and throughput are the two most important performance metrics for web servers. The
rate at which HTTP requests are serviced represents the connection throughput. It is usually
expressed in HTTP operations per second. Due to the large variability in the size of web
resources, it is sometimes expressed in bits per second (bps) as well.
However, it is unfair to simply compute the overall throughput, since different resource types
(say documents, images, database requests etc.) have typical sizes and hence differ
from each other in their throughputs. Therefore, a more generalised approach is to
compute a class throughput for each resource type and then combine these into an average
throughput:
Class Throughput (Kbps) = (Total Requests X Class Fraction X Average Size in bytes X 8) / (Interval in seconds X 1024)
The following example illustrates how this formula can be used to compute the average
throughput of a web server.
Example: http://yallara.cs.rmit.edu.au:8002 was monitored during a 30 minute
window and 9000 HTTP requests were counted. This server delivers three types of
resources: HTML pages, images and database requests. It was observed that HTML
documents represent 40% of the requests and are about 11200 bytes long on average. Images
accounted for 35% of the requests and their average size was 17200 bytes. Database requests
represent the remaining 25% of requests, and their average size was 100000 bytes. Compute the
web server's average throughput.
HTML Pages = (9000 X 0.4 X 11200 X 8) / (30 X 60 X 1024) Kbps (kilobits per second) = 175 Kbps
Images = (9000 X 0.35 X 17200 X 8) / (30 X 60 X 1024) Kbps = 235 Kbps
DB Results = (9000 X 0.25 X 100000 X 8) / (30 X 60 X 1024) Kbps = 977 Kbps
Average Throughput = 175 + 235 + 977 = 1387 Kbps
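The worked example above can be reproduced with a few lines of code, which also makes the per-class formula easy to reuse for other measurement windows:

```python
# Reproducing the worked example: per-class and total throughput in Kbps.
def class_throughput_kbps(total_requests, fraction, avg_bytes, interval_s):
    """Kilobits per second contributed by one resource class."""
    return (total_requests * fraction * avg_bytes * 8) / (interval_s * 1024)

# 9000 requests observed over a 30-minute (1800 s) window.
classes = {
    "html":   (0.40, 11200),
    "images": (0.35, 17200),
    "db":     (0.25, 100000),
}
total = 0.0
for name, (fraction, size) in classes.items():
    kbps = class_throughput_kbps(9000, fraction, size, 30 * 60)
    total += kbps
    print(f"{name}: {kbps:.0f} Kbps")
print(f"total: {total:.0f} Kbps")  # 1387 Kbps, matching the example
```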
The time required to complete a request is the latency at the server, which is one
component of the total response time. The average latency at the server is the average time it
takes to handle a request.
Apart from the latency at the server, the time communicating over the network and the
processing time at the client machine are also significant components of the client response
time.
Another important metric, though one with a negative connotation, is the error rate. An error
could be any failure by the server, for example an overflow of the pending-connection queue at
the server. In that case, an attempt by a client to connect to the server will be ignored.
Increasing error rates are an indication of degrading performance.
Copyright 2000 RMIT Computer Science
All Rights Reserved
3. Performance Factors
In this chapter, we introduce the factors that affect the
performance of the web server.
Introduction
Each of the elements that make up the web server, from the software to the hardware,
will have an effect on its performance. But the overall performance
of the full process - the client requesting a resource across the network from the web
server - is affected by more than the web server. The other crucial elements in this
process are the client, typically another computer running browser software, and the
network itself.
Client
The client is the computer from which the request is initially made. A client is typically
a computer running browser software, alongside many other applications. Obviously, with
so many users on the World Wide Web, the variation in the configuration of both the
software and hardware of clients is enormous.
Network
The network upon which HTTP transactions are completed is, of course, the Internet.
Because of IP, even though the many networks that make up the Internet are different,
the Internet can be considered one homogeneous network.
Still, an HTTP message must travel through many types of networks, and many routers.
Each leg of its journey adds to the overall time.
The network is usually the most significant bottleneck in the process of requesting and
receiving a web resource.
Server
The server must receive a request and respond with the resource as quickly as possible.
After the network, the server, for a number of reasons, is often the next major
bottleneck.
Checkpoint
1. Although we can tune our web servers, where does the greatest latency usually occur
when data is delivered across the Internet?
3.1. Client
Hardware
The client hardware is the typical hardware of a PC. The general discussion of hardware
here is applicable to servers also.
A computer, primarily, consists of a CPU, RAM (Random Access Memory), a Bus, a
Hard Disk, a Video card, I/O (input/output) and other peripherals.
Central Processing Unit
The Central Processing Unit (CPU) provides the processing power of the computer. The
importance of the CPU is often overrated - it is only significant for processor-intensive
applications, like rendering images.
Memory
RAM (Random Access Memory, or just memory) is where data for current computing
activities is ideally stored when not being used by the CPU. RAM is potentially a very
important factor on a machine involved in a web transaction: the more RAM, the better,
because a larger cache can be supported.
The cache is where frequently accessed data (for example web pages) are kept in case
they are required again soon after they are stored.
Bus
The bus is the part of the motherboard that moves data between the different parts of the
computer. Bus speed is not usually significant for Internet transactions, but a slow bus
may make a fast CPU redundant, as the CPU is forced to wait for the bus.
Disk Drive
The Hard Disk is important because it is significantly slower than Memory. Ideally, all
data would be kept in the cache (in Memory), and therefore the Hard Disk would not be
required. Use of the Hard Disk is more applicable to the Web Server which may have to
retrieve a document from the Disk to deliver it to a client (see below).
Video Card
The video card is important for smooth rendering of graphical software, but it is not a
critical part of the client hardware for web performance.
Input/Output
Input/Output (I/O) is the facility of the computer to send and receive data to and from
remote hosts, through the serial (or COM) port. For a modem connection, which applies
to most hosts on the Internet, it is important to use an up-to-date version
of the UART, the chip that controls the serial port. The UART provides buffering to
manage data coming from outside, through the modem connection, and to deliver it to
the system bus.
Operating System
There are three main operating systems (OS) used on modern PCs - Windows, Macintosh,
and Unix (or a variant). Obviously, Microsoft Windows dominates the market.
Although for performance it is advisable to use a Unix-style operating system (eg.
Linux) over Windows, for most users this is not an option due to the additional expertise
required to install a Unix-style OS.
Software
Web browsers provide a quite basic function. A browser makes a TCP virtual
connection to a web server (using a socket pair), and requests a document following the
syntax specified by HTTP. Once the document arrives, it is parsed by the browser, then
displayed.
First the browser parses the URL entered by the user (or recognised when a link is
clicked).
Then the browser checks the cache to see if the document has been previously stored. If
so, either the page is displayed immediately from cache, or a HEAD request is sent to
the server to check whether the cached page is out of date. If the cached document is
still current, it is displayed immediately from cache. If the requested document isn't in
the cache, or the cached version is out of date, then the browser must request the page
from the server.
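The cache-checking sequence above can be sketched as a small decision function. The cache, `head_request` and `get_request` here are stand-in stubs for illustration, not a real browser implementation:

```python
# Sketch of the browser's cache decision. `cache` maps URL -> (page, fresh),
# `head_request` stands in for a HEAD round-trip (True = page was modified),
# and `get_request` stands in for a full GET to the server.
def fetch(url, cache, head_request, get_request):
    if url in cache:
        page, fresh = cache[url]
        if fresh:
            return page, "from cache"      # display immediately
        if not head_request(url):          # HEAD says: not modified
            return page, "revalidated"     # cached copy is still current
    page = get_request(url)                # not cached or out of date: full GET
    cache[url] = (page, True)
    return page, "from server"

cache = {"/index.html": ("<html>old</html>", False)}
head_request = lambda url: False   # pretend the server reports "unchanged"
get_request = lambda url: "<html>new</html>"
print(fetch("/index.html", cache, head_request, get_request))
```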
Checkpoint
1. The RAM is the most critical piece of hardware on the web server machine. Why?
2. Which protocols on the TCP/IP stack does a web browser directly access?
3.2. Network
Hardware
At the IP layer of the TCP/IP model, the connection between a client and a server
appears direct. But in reality, all data that is transmitted must pass through a series of
networks. IP is responsible for transmitting data one network at a time.
Between hosts are lines, and between networks are routers. Many different types of
network hardware exist, and the variation in quality and type affects the time it takes
for data to travel between the client and server.
Lines
A line is a connection between two points. Every connection on the Internet is made
over a physical line, which may be constructed of metal, fibre optics, or even free space.
At the end of each line is a terminator.
The latency of the Internet is not due to the lines, which carry signals at close to the
speed of light, but to the terminators at each end of the lines, typically a router or
modem. The ideal situation is to have as few terminators as possible between the client
and server.
Router
Routers connect two networks, forwarding data between them. Because a router must
examine each packet of data it receives to determine where to send it, significant latency
can be introduced at each router.
Protocols
A protocol is a set of instructions, or rules, that allows consistent communication (in one
form or another). On the Internet, we use protocols so that devices, such as computers,
can communicate successfully. The value of a protocol is determined by the number of
people who use it, even if the protocol itself is not particularly efficient.
The important protocols of the Internet are the TCP/IP protocol suite, and the protocol
used to send documents across the World Wide Web is HTTP.
TCP
The Transmission Control Protocol was designed to establish a reliable "virtual"
connection for IP packets. To achieve this requires considerable overhead because each
connection must be established with a three-way handshake, and all data exchanged
must be acknowledged.
TCP was created based on the assumptions that connections would be made
infrequently, that on each connection a large amount of data would be transferred, and
that the correctness of the data was more important than performance. Unfortunately,
this is particularly unsuited to HTTP traffic, which consists of many short-lived
connections made in rapid succession.
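The per-connection handshake cost is easy to demonstrate. In the sketch below, a tiny local echo server counts how many connections it accepts: five one-shot requests cost five handshakes, while the same five requests over one persistent connection cost a single handshake (this is the saving that HTTP/1.1 keep-alive exploits):

```python
# Sketch: counting TCP handshakes for one-shot vs persistent connections.
import socket, threading

accepts = []  # one entry per accepted (i.e. handshaken) connection

def serve(listener):
    while True:
        conn, _ = listener.accept()
        accepts.append(1)
        threading.Thread(target=echo, args=(conn,), daemon=True).start()

def echo(conn):
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Five requests, one connection each: a handshake every time.
for _ in range(5):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"x")
        s.recv(1024)
one_shot = len(accepts)

# Five requests over one persistent connection: a single handshake.
with socket.create_connection(("127.0.0.1", port)) as s:
    for _ in range(5):
        s.sendall(b"x")
        s.recv(1024)
print(one_shot, len(accepts) - one_shot)  # prints: 5 1
```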
HTTP
HTTP was designed to transmit HTML documents across the Internet, although it is not
limited to serving HTML documents. This is achieved in a straightforward manner: a
client makes a request for a document and a server returns it.
The problem with HTTP is that it was designed to serve static pages, and this makes it
quite limited for more dynamic uses of the web.
Checkpoint
1. Why does your modem have to translate data from digital to analog and vice
versa?
2. What happens if the nearest DNS server doesn't have a mapping of the requested
domain name to IP address?
3.3. Server
Hardware
Many of the issues discussed under client hardware are relevant to the server. But a
server is not like a client. A client machine will typically be running many software
programs, and needs only to run a browser to connect to the Internet. A server, on the
other hand, should only ever run the software that is absolutely necessary. The needs of
the server machine also differ from the client hardware because the server must deal
with many requests at one time.
Essentially a web server is a location that stores documents remotely. Upon request, it
serves documents from its memory or disk to its connection to the network. So in terms
of hardware, a server can be quite simple: it doesn't necessarily need a mouse, a
keyboard or a monitor. And it certainly doesn't need a window-style display system.
Web server administration can be done remotely with a telnet session.
Memory
To reduce accesses to disk a server should ideally have enough RAM to store all of the
static web documents in cache (if that is reasonable).
Network Interface Card
The network interface card (NIC) provides the interface between the network and the
server machine's bus. Traditionally, data on the network moved much more slowly than
data within the computer, but this may no longer be the case on local area networks
(LANs).
The NIC provides a buffer which typically holds information from the computer until
the network is ready for it. This may be reversed on computers attached to a fast LAN.
The larger the buffer the better, to prevent buffer overflow and therefore the loss of
data.
Disk Drives
The speed of the hard disk can be a major bottleneck in server performance. If access
to the hard disk can't be avoided, then the hard disk should be as fast as possible.
Traditionally SCSI disk drives were the fastest type of drive, but recently EIDE drives
have been approaching or equalling the speed of SCSI disks without the substantial cost.
Operating System
There are two realistic options for web server operating systems (OS) - Unix (or a Unix
variant) and Windows NT (or Windows 2000 which is based on Windows NT).
Unix
Unix uses processes to do work where each process is independent and unique. Unix is
multi-user and multi-tasking, so many processes belonging to many users can run at one
time.
Unix uses a kernel to control fundamental OS functions, such as interfacing to the
hardware and scheduling the processes of the users.
For Apache, a master process (called httpd) runs with root permissions, listening to port
80 for incoming requests. When a request arrives, the master httpd process hands the
request to a child httpd process (with nobody permissions) to deal with it. In terms of
performance, the master httpd process must wait for kernel processes (which have a
higher priority), and share whatever CPU time is left over with other user processes.
Therefore it is critical to minimise the number of non-essential user processes with
which the httpd process has to share CPU time.
Windows NT
Windows NT has the advantage that it is closely tied with other Microsoft software, it
has a consistent look and feel, and provides a graphical interface for controlling the web
server.
However, Windows NT is not especially good as a web server OS. Its performance is
poor, it is unstable compared to Unix, and it does not scale well.
Software
Web servers take requests from clients and return a response. The reply could be the
resource the client requested, either a static or dynamically generated page, or an error.
When a web server is lightly loaded, it is more likely that the performance bottleneck
will be in the modem or the Internet. However, the performance of the web server itself
tends to become the limiting factor as the load increases.
Content
The basic performance principle is to send less data, which translates into less time the
user has to wait. Content creators should be aware of good page design: simple layout,
minimal graphics in the correct format, and the use of cascading style sheets all serve
the goal of sending less data.
Checkpoint
1. Tests have shown that NT and Linux have similar performance as operating
systems for web servers. What is the advantage of Linux and Apache over NT and
IIS?
4. Apache Performance
In this section we discuss configuration options that can
be used to improve the server performance of Apache.
KeepAlive <on|off>
This directive enables persistent connections. In HTTP/1.1 this is the default connection
mode. However, there are a few reasons to disable it in certain cases. For
example, some old browsers do not support this type of connection, in which case
keeping persistent connections open is a waste of resources.
KeepAliveTimeout <seconds>
This directive specifies the amount of time an Apache process will wait for a client to
issue another request before closing the connection and returning to general service.
This should be a relatively short value, for example 15 seconds. If this time is not long
enough for the client's next request, the persistent connection drops and a new
connection must be established; in other words, we lose the advantages of the persistent
connection. On the other hand, if it is too long and the server receives many requests
from other clients, we waste resources by keeping the connection reserved, waiting
until the original client calls again.
MaxKeepAliveRequests <number>
This is similar to the MaxRequestsPerChild directive, and is used to limit the number
of requests served per persistent connection. The server will automatically terminate the
persistent connection when the number of requests specified by
MaxKeepAliveRequests is reached. To maintain server performance, this value should
be kept high; the default is accordingly 100.
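Taken together, these three directives might appear in httpd.conf as follows. The values are purely illustrative, not recommendations for any particular site:

```apache
# Persistent-connection tuning (illustrative values)
KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 100
```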
This directive is important, not only for performance tuning, but for server security as
well. If a malicious client exploits a persistent connection by sending continuous
requests, it may cause a denial of service at the server. Such an attack can be mitigated
by limiting the requests allowed per connection.
Multi-Processing Modules
Since version 2.0, Apache handles requests through one of several Multi-Processing
Modules (MPMs):
prefork
worker
mpm_winnt
Prefork and worker are for Unix, and mpm_winnt is, obviously, for Windows NT.
Process-level Directives
StartServers (prefork, worker)
This determines the number of child processes Apache will create at startup.
However, since Apache controls the number of processes dynamically depending on the
server load, altering this directive does not have much effect.
MinSpareServers (prefork)
This sets the minimum number of idle Apache processes that must be available at any
time. If processes become busy with client requests, Apache will spawn new processes
to keep the pool of available servers at this minimum. Raising this value is useful if
your server expects frequent bursts of requests and needs to serve them rapidly.
MaxSpareServers (prefork)
This sets the maximum number of Apache processes that can be idle at one time. If
many processes are started to handle a burst in demand and the burst then tails off, this
directive ensures that the excess processes will be killed: at fixed time intervals, the
current idle processes are counted, compared with this directive, and any extra servers
are killed. For a site that receives a million or more hits per day and experiences bursts
of requests, a reasonable value could be
MaxSpareServers 64
This directive ensures that the system is not overloaded by spare Apache processes, and
is definitely useful if the computer running the web server is used for other purposes as
well.
MaxClients (prefork, worker)
This is the hard upper bound on the number of Apache processes that can ever be
spawned, either to maintain the pool of spare servers or to handle sudden bursts of
requests. Clients that try to connect when all processes are busy will get a Server
Unavailable error message.
Setting MaxClients lower helps to increase the performance of the client requests that
succeed, at the cost of causing some client requests to fail. It must therefore be tuned
carefully; if settling for a compromise seems difficult, it indicates the server needs
to be tuned for performance elsewhere, upgraded, or clustered.
For prefork this is the total number of child processes. For worker this is the total
number of threads (the number of child processes multiplied by ThreadsPerChild), and
it should be set to a multiple of ThreadsPerChild.
MaxRequestsPerChild (prefork, worker, mpm_winnt)
This limits the maximum number of requests a given Apache process will handle before
voluntarily terminating. The objective of this mechanism is to prevent memory leaks
from causing Apache to consume ever-increasing amounts of memory. By default, this
is set to zero, meaning that processes never terminate themselves. A low value for this
directive will cause performance problems, as Apache will be frequently terminating and
restarting processes. A more reasonable value for platforms that have memory-leak
problems is 1000.
For prefork this is the number of requests each process will serve. For worker and
mpm_winnt this is the number of requests that the threads of a process will serve, in
total.
ThreadsPerChild (worker, mpm_winnt)
This directive sets a static value for the number of threads created for each child process.
For mpm_winnt there is only one child process, so this value represents the total
number of servers and should be set to deal with the maximum load. For worker there
are multiple processes, so this should be set to deal with a common, or average, load.
MinSpareThreads (worker)
This sets the minimum number of idle Apache threads that must be available at any
time. If threads become busy with client requests, Apache will spawn new processes to
keep the pool of available threads at least at the minimum value. It is usually
unnecessary to adjust this directive.
MaxSpareThreads (worker)
This sets the maximum number of Apache threads that can be idle at one time; if many
processes (and therefore threads) are started to handle a burst in demand and the burst
then tails off, this directive ensures that excess threads will be killed. At fixed time
intervals, the current idle threads are counted, compared with this directive, and extra
processes are killed.
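As a summary, a hypothetical prefork configuration for a busy site might combine the directives above as follows. Every value here is illustrative and would need tuning against measured load:

```apache
# Illustrative prefork MPM tuning for a high-traffic server
StartServers          8
MinSpareServers       8
MaxSpareServers       64
MaxClients            256
MaxRequestsPerChild   1000

# Worker equivalent: MaxClients is the total thread count and must be a
# multiple of ThreadsPerChild, e.g. ThreadsPerChild 64 with MaxClients 256
# gives four child processes.
```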
Checkpoint
1. What are the major factors that could affect the performance of a heavily loaded
web server?
5. Server Clusters
If your server receives more requests than it can handle even after tuning it for heavy
loads, the next best alternative is better hardware. Then again, you may reach an upper
bound on what existing hardware (and the budget) can offer. If you still need
higher performance, the next option is to install a web server cluster. In a web
server cluster, we distribute the load, as evenly as possible, between several servers.
Some sophisticated mechanisms are required to set up such a server cluster. In
particular, examining the access patterns of the site is crucial to the performance tuning
and load balancing process.
There are a number of different ways to set up a server cluster. A few of the most
common methods are given below.
In addition, load balancers of this type typically detect when a web server in the
pool has gone down, and can dynamically redirect requests to an identical web
server. With DNS load balancing, by contrast, the client is stuck with a cached IP
address of a downed web server and cannot be redirected to a new one until the web
browser can request another IP address from the DNS server.
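The core of such a balancer, round-robin rotation that skips servers a health check has marked down, can be sketched in a few lines (the server names are illustrative):

```python
# Sketch: round-robin request distribution across a cluster, skipping
# servers that a health check has marked as down.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()                 # servers failing health checks
        self._order = cycle(self.servers)

    def pick(self):
        """Return the next live server in rotation."""
        for _ in range(len(self.servers)):
            server = next(self._order)
            if server not in self.down:
                return server
        raise RuntimeError("no servers available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.down.add("web2")  # a health check has marked web2 as failed
picks = [lb.pick() for _ in range(4)]
print(picks)  # ['web1', 'web3', 'web1', 'web3']
```

A real balancer would also weight servers by capacity and re-admit a server once its health check passes again; this sketch only shows the rotation-with-failover idea.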
6. Links
Apache Tomcat Server (for Java)
Everything you wanted to know about CGI
PHP usage stats
More PHP usage stats
Netcraft web server usage survey
Webstone Benchmarking Software
SPECWeb Benchmarking Software
W3C paper on Network Performance Effects of HTTP/1.1, CSS1, and PNG
A comprehensive directory of Web Site management tools
7. References
Web Protocols and Practice, B. Krishnamurthy and J. Rexford, Addison-Wesley, 2001.