
Andrew File System

Presented by
S. Thirumurugan
Reg.No:9909105006
Distributed File Systems (DFS)
• A special case of distributed system
• Allows multi-computer systems to share files
– Even when no other IPC or RPC is needed
• Sharing devices
– Special case of sharing files
• E.g.,
– NFS (Sun’s Network File System)
– Windows NT, 2000, XP
– Andrew File System (AFS) & others …

2
Distributed file system/service requirements

• Transparency
– Access: client programs should be unaware of the distribution of files; the same operations are used to access local and remote files.
– Location: same name space after relocation of files or processes; client programs should see a uniform file name space.
– Mobility: automatic relocation of files is possible (neither client programs nor system admin tables in client nodes need to be changed when files are moved).
– Performance: satisfactory performance across a specified range of system loads.
– Scaling: service can be expanded to meet additional loads or growth.
• Concurrency
– Changes to a file by one client should not interfere with the operation of other clients simultaneously accessing or changing the same file.
– Concurrency properties: isolation; file-level or record-level locking; other forms of concurrency control to minimise contention.
• Replication
– File service maintains multiple identical copies of files.
– Load-sharing between servers makes the service more scalable; local access has better response (lower latency); fault tolerance.
– Full replication is difficult to implement; caching (of all or part of a file) gives most of the benefits (except fault tolerance).
• Heterogeneity
– Service can be accessed by clients running on (almost) any OS or hardware platform.
– Design must be compatible with the file systems of different OSes.
– Service interfaces must be open: precise specifications of APIs are published.
• Fault tolerance
– Service must continue to operate even when clients make errors or crash: at-most-once semantics, or at-least-once semantics with idempotent operations.
– Service must resume after a server machine crashes; if the service is replicated, it can continue to operate even during a server crash.
• Consistency
– Unix offers one-copy update semantics for operations on local files; caching is completely transparent.
– Difficult to achieve the same for distributed file systems while maintaining good performance and scalability.
• Security
– Must maintain access control and privacy as for local files, based on the identity of the user making the request; identities of remote users must be authenticated; privacy requires secure communication.
– Service interfaces are open to all processes not excluded by a firewall, and hence vulnerable to impersonation and other attacks.
• Efficiency
– Goal for distributed file systems is usually performance comparable to the local file system.

File service is the most heavily loaded service in an intranet, so its functionality and performance are critical.

3
What is a file system?

Figure 8.3 File attribute record structure


Updated by system:
– File length
– Creation timestamp
– Read timestamp
– Write timestamp
– Attribute timestamp
– Reference count
Updated by owner:
– Owner
– File type
– Access control list (e.g. for UNIX: rw-rw-r--)
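
To make the record concrete, here is a minimal Python sketch of such an attribute record; the field names are illustrative and not taken from any particular file system implementation:

    from dataclasses import dataclass

    # Hypothetical attribute record mirroring Figure 8.3.
    @dataclass
    class FileAttributes:
        # Updated by the system:
        length: int            # file length in bytes
        creation_time: float   # POSIX timestamps
        read_time: float       # last read
        write_time: float      # last write
        attribute_time: float  # last attribute change
        reference_count: int   # number of names referring to the file
        # Updated by the owner:
        owner: str
        file_type: str         # e.g. 'regular' or 'directory'
        access_control: str    # e.g. 'rw-rw-r--' for UNIX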

4
History & Definition

• Originated at Carnegie Mellon University (CMU) in 1982


– Named after Andrew Carnegie and Andrew W. Mellon
• A joint project between CMU and IBM.

• An Andrew file system (AFS) is a location-independent file system


that uses a local cache to reduce the workload and increase the
performance of a distributed computing environment. A first request
for data to a server from a workstation is satisfied by the server and
placed in a local cache. A second request for the same data is
satisfied from the local cache.
• AFS is a secure distributed global file system providing location
independence, scalability and transparent migration capabilities for
data.

5
Purposes and Goals
• Support thousands of concurrent Unix workstations
– 5000 (Kazar)
– 7000 (Howard)
– One file server should support at least 50 clients
• Scalability was the primary design goal
– Performance
– Few network requests
• Goal: provide common view of centralized file system, but
distributed implementation.
– Ability to open & update any file on any machine on network
– All of the synchronization issues and capabilities of shared local files

6
Whole-File Serving and Caching
• Whole-file serving: servers transmit entire
files to clients (not blocks, bytes, etc.)
• Whole-file caching: upon receiving a file,
clients cache it
– Applications work with the cached copy
– Session consistency: when a file is closed, an
updated version is sent to the server
– The cache is persistent, surviving system
reboots
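
A minimal Python sketch of this open/close life cycle, using an in-memory dictionary to stand in for the server (all names here are hypothetical):

    # SERVER_FILES stands in for the file server; the cache is kept on
    # local disk in real AFS, so it survives reboots.
    SERVER_FILES = {"/shared/notes.txt": b"draft v1"}
    local_cache = {}

    def afs_open(name):
        if name not in local_cache:
            local_cache[name] = SERVER_FILES[name]   # whole-file fetch
        return bytearray(local_cache[name])          # work on the copy

    def afs_close(name, data):
        local_cache[name] = bytes(data)
        SERVER_FILES[name] = bytes(data)             # write back on close

    buf = afs_open("/shared/notes.txt")
    buf[:] = b"draft v2"                             # purely local updates
    afs_close("/shared/notes.txt", buf)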

7
Environment Assumptions
• Cached files will remain valid for long periods
• The local cache will need to be rather large (textbook:
100 megabytes)
• Observations derived from typical Unix usage:
– Most files are small (under 10 kilobytes)
– Read operations approx. six times more common than writes
– Most files are written and read by a single user; shared files are
typically modified by a single user
– Files are referenced in bursts
• Notably, AFS is explicitly not meant for databases
– Argued that the needs of a distributed file system are different
than those of a distributed database system, so the latter should
be addressed separately
8
Andrew File System (AFS)
• Stateful
• Single name space
– A file has the same name everywhere in the world.
• Lots of local file caching
– On workstation disks
– For long periods of time
– Originally whole files, now 64K file chunks.
• Good for distant operation because of local disk
caching

9
Distribution of processes in the Andrew File System
[Figure: each workstation runs user programs and a Venus process on top of the UNIX kernel; each server runs a Vice process on top of the UNIX kernel; Venus and Vice communicate across the network.]
10
AFS
• Need for scaling led to reduction of client-server
message traffic.
– Once a file is cached, all operations are performed
locally.
– On close, if the file has been modified, the updated copy replaces the
one on the server.
• The client assumes that its cache is up to date!
• The server knows about all cached copies of the file
– Callback messages from the server inform clients when a cached copy is no longer valid.
• …
11
AFS Architecture

12
AFS Architecture

13
Andrew File System
Architecture
[Figure: desktop computers cache volumes served by administrative servers; the AFS namespace is rooted at /afs/hq.firm/, with subtrees such as User/alice, Solaris/bin, Group/research and Transarc.com/pub.]
14
AFS

• AFS – Andrew File System
– workstations are grouped into cells
– note the position of Venus and Vice

[Figure: client's view of AFS]

15
Files
• AFS represents all files by a unique, 96-bit file
identifier (fid)
• Venus (client-side) is responsible for converting
pathnames into fids
• Each file has a version number
• Files can be grouped into logical volumes
– E.g., a volume might be all of the files of a particular
user; or a particular release of all the system binaries
– System administration works with volumes as its unit
(e.g., for backups)
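
As a sketch, one common description splits the 96 bits into a 32-bit volume number, a 32-bit vnode number and a 32-bit uniquifier; treat the exact layout below as an assumption:

    from typing import NamedTuple

    class Fid(NamedTuple):
        volume: int  # which logical volume holds the file
        vnode: int   # index of the file within that volume
        unique: int  # uniquifier, guards against vnode-slot reuse

        def pack(self) -> int:
            """Pack the three 32-bit parts into one 96-bit integer."""
            return (self.volume << 64) | (self.vnode << 32) | self.unique

    fid = Fid(volume=7, vnode=42, unique=1)
    assert fid.pack().bit_length() <= 96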
16
File name space seen by clients
of AFS
[Figure: the name space under / (root) combines a Local part (tmp, bin, . . . , vmunix) with a Shared part (cmu); symbolic links make shared directories such as cmu/bin appear under local names.]
17
Continued….

Files
• Permission control with access lists
– Each directory has an access list
– Specifies whether a given user or user group may:
• Read any file in the directory
• Write (update) any file in the directory
• Insert new files in the directory
• Delete files from the directory
• Lookup files in the directory
• Lock files in the directory
• Administer the directory (i.e., change the access list)
– Note that traditional Unix permissions (chmod) still
apply as well
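
One way to picture an access-list entry is as a bitmask over these seven rights; the following Python sketch is purely illustrative (the names and encoding are assumptions, not AFS's actual format):

    from enum import IntFlag

    class Rights(IntFlag):
        READ = 1; WRITE = 2; INSERT = 4; DELETE = 8
        LOOKUP = 16; LOCK = 32; ADMINISTER = 64

    # A directory's access list maps users/groups to rights masks.
    acl = {"alice": Rights.READ | Rights.LOOKUP | Rights.WRITE,
           "research": Rights.READ | Rights.LOOKUP}

    def may(user, right):
        return bool(acl.get(user, Rights(0)) & right)

    assert may("alice", Rights.WRITE)
    assert not may("research", Rights.WRITE)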
18
Implementation of file system calls in AFS
open(FileName, mode)
– UNIX kernel: if FileName refers to a file in shared file space, pass the request to Venus.
– Venus: check the list of files in the local cache. If the file is not present, or there is no valid callback promise, send a request for the file to the Vice server that is the custodian of the volume containing the file.
– Vice: transfer a copy of the file and a callback promise to the workstation; log the callback promise.
– Venus: place the copy of the file in the local file system, enter its local name in the local cache list, and return the local name to UNIX.
– UNIX kernel: open the local file and return the file descriptor to the application.

read(FileDescriptor, Buffer, length)
– UNIX kernel: perform a normal UNIX read operation on the local copy.

write(FileDescriptor, Buffer, length)
– UNIX kernel: perform a normal UNIX write operation on the local copy.

close(FileDescriptor)
– UNIX kernel: close the local copy and notify Venus that the file has been closed.
– Venus: if the local copy has been changed, send a copy to the Vice server that is the custodian of the file.
– Vice: replace the file contents and send a callback to all other clients holding callback promises on the file.

19
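
A toy Python rendering of the open() path above; ToyVice and the cache layout are hypothetical stand-ins for Vice and the Venus cache:

    class ToyVice:
        def __init__(self, files):
            self.files = files
        def fetch(self, name):
            # Returns the file contents plus a (valid) callback promise.
            return self.files[name], True

    def venus_open(name, cache, vice):
        entry = cache.get(name)
        if entry is None or not entry["promise_valid"]:
            data, promise = vice.fetch(name)      # goes to the server
            cache[name] = {"data": data, "promise_valid": promise}
        return cache[name]                        # kernel opens local copy

    cache = {}
    server = ToyVice({"/afs/hq.firm/User/alice/todo": b"buy milk"})
    venus_open("/afs/hq.firm/User/alice/todo", cache, server)  # fetch
    venus_open("/afs/hq.firm/User/alice/todo", cache, server)  # local hit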
The main components of the Vice service interface
Fetch(fid) -> attr, data: Returns the attributes (status) and, optionally, the contents of the file identified by fid, and records a callback promise on it.
Store(fid, attr, data): Updates the attributes and (optionally) the contents of a specified file.
Create() -> fid: Creates a new file and records a callback promise on it.
Remove(fid): Deletes the specified file.
SetLock(fid, mode): Sets a lock on the specified file or directory. The mode of the lock may be shared or exclusive. Locks that are not removed expire after 30 minutes.
ReleaseLock(fid): Unlocks the specified file or directory.
RemoveCallback(fid): Informs the server that a Venus process has flushed a file from its cache.
BreakCallback(fid): Made by a Vice server to a Venus process; cancels the callback promise on the relevant file.
20
DFS – Replication
• Replicas of the same file reside on failure-independent
machines.
• Improves availability and can shorten service time.
• Naming scheme maps a replicated file name to a
particular replica.
– Existence of replicas should be invisible to higher levels.
– Replicas must be distinguished from one another by different
lower-level names.
• Updates
– Replicas of a file denote the same logical entity
– Update to any replica must be reflected on all other replicas.
21
DFS – File Consistency
• Is locally cached copy of the data consistent with
the master copy?
• Client-initiated approach
– Client initiates a validity check with server.
– Server verifies local data with the master copy
• E.g., time stamps, etc.
• Server-initiated approach
– Server records (parts of) files cached in each client.
– When server detects a potential inconsistency, it
reacts
22
Continued….

Cache Consistency
• Cache consistency is ensured through callbacks
• When vice (server) supplies a copy of a file to a venus
(client) process, the file comes with a callback promise
– The callback promise is a token which is either valid or cancelled
• When venus initially receives a file, it is valid
• If another client modifies the file, vice dispatches a callback to all
clients with a callback promise, setting it to cancelled
• The respective venus processes then know their cached version is
out-of-date
• Advantage: clients are immediately made aware when
their cache is out-of-date, without any unnecessary
network communication
• This is a server-initiated approach
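
A toy model of the promise/cancel cycle (class and method names are illustrative, not the OpenAFS API):

    class Venus:
        def __init__(self):
            self.promise = {}        # fid -> 'valid' or 'cancelled'

    class Vice:
        def __init__(self):
            self.holders = {}        # fid -> clients holding promises

        def fetch(self, fid, client):
            self.holders.setdefault(fid, set()).add(client)
            client.promise[fid] = "valid"    # file arrives with a promise

        def store(self, fid, writer):
            for client in self.holders.get(fid, set()):
                if client is not writer:
                    client.promise[fid] = "cancelled"   # the callback

    a, b, server = Venus(), Venus(), Vice()
    server.fetch("fid-1", a)
    server.fetch("fid-1", b)
    server.store("fid-1", writer=a)
    assert b.promise["fid-1"] == "cancelled"  # b's copy is now stale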
23
Continued….

Cache Consistency
• What about missed callbacks?
– If the client machine is powered off, it will miss any
callbacks issued
• After boot-up, upon the first use of a cached file, venus
compares its version number with the one at the server
• If the file has been modified, venus knows the cached
version needs to be replaced
– Callbacks may be lost due to network errors
• Once in a while, typically every ten minutes, venus checks
the version number of all its cached files against the server’s
copies
• Again, if the file has been modified, venus knows the cached version
is outdated
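
The version check itself is simple; a Python sketch (field names hypothetical) that covers both the post-reboot check and the periodic ten-minute sweep:

    def revalidate(cache, server_versions):
        # Compare each cached file's version with the server's copy.
        for fid, entry in cache.items():
            if entry["version"] != server_versions.get(fid):
                entry["stale"] = True      # must refetch before next use

    cache = {"fid-1": {"version": 3, "stale": False},
             "fid-2": {"version": 5, "stale": False}}
    revalidate(cache, {"fid-1": 4, "fid-2": 5})  # fid-1 changed meanwhile
    assert cache["fid-1"]["stale"] and not cache["fid-2"]["stale"]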
24
Continued….

Cache Consistency
• Not a perfect system
– If two (or more) clients concurrently open the
same file, modify it, and close it (thereby
sending the modified version back to the
server), all but the last-closed update are
silently lost
– In the situations AFS is designed for, this
situation should be rare
25
Cache Consistency: AFS vs. NFS
• Sun NFS assumes its cached files will remain valid for
three seconds, cached directories for thirty seconds
• Kazar notes this is simple and easy to implement, but
has two disadvantages:
– It leads to a lot of unnecessary network communication; for a
rarely-updated but often-read file, the client may be asking
whether the file is valid every three seconds, and the server will
repeatedly be answering ‘yes’
– Its consistency guarantees are too weak for many applications
• Kazar’s example: a distributed make program
• AFS provides stronger consistency, with less network
communication
26
Performance
• AFS today often works with tens of thousands of clients
• In a benchmark test involving eighteen clients, the server
load of AFS is only 40% that of Sun NFS
• The cache hit ratio was between 97 and 99 percent at
CMU
• Users may experience lag with large files
– Howard: the ‘present’ (1988) transfer rate at CMU is fifty
thousand bytes per second, so a one-megabyte file takes twenty
seconds to fetch from the server (which is ‘reasonably fast’ but
‘sometimes annoying’)
– The textbook mentions that AFS version 3 allows partial file
caching, but does not go into detail
27
Questions?
