
Cover Feature

A Survey of Web Security


Developing security methods for the Web is a daunting task, in part because security concerns arose after the fact. The authors offer a survey of Web security issues, focusing on particular areas of concern, such as server security, mobile code, data transfer, and user privacy.

Aviel D. Rubin
AT&T Labs

Daniel E. Geer Jr.


Certco

With no insult intended to the early Web designers, security was an afterthought. At the outset, the Web's highest goal was seamless availability. Today, with an internationally connected user network and rapidly expanding Web functionality, reliability and security are critical. Vendors engaged in retrofitting security must contend with the Web environment's peculiarities, which include location irrelevance, statelessness, code and user mobility, and stranger-to-stranger communication.

In this article, we present a survey of Web-specific security issues. Given the Web's rapid ascent, our offering is necessarily a mix of short-lived techniques and long-lived principles. Our focus is on security in the server and host environments, mobile code, data transport, and anonymity and privacy. We do not delve into cryptography, electronic commerce, or intrusion detection because they are not Web-specific, and they are well covered elsewhere.1,2


SERVER SECURITY
In the client-server environment, we focus on the server side because the server is the central system and the repository of information resources. The server is thus the locus of threats, whereas the client is largely out of sight. Protecting the client side from the server side is generally not an issue, except where client privacy is a concern, as we discuss below.

We describe security issues using a Unix-based Apache server as an exemplar. The Apache server is based on the server at the National Center for Supercomputing Applications (NCSA) and is the most widely deployed server (for more on Apache, see http://apache.org). CERN's server is almost parallel to the Apache server and is well documented. Sadly, many commercial servers have diverged from each other and from the Internet's de facto standards. There is no way for us to cover them all. However, regardless of what Web-server software is running, some things are common, such as security configuration, authentication and access issues, and methods of dealing with active content.

Configuration basics

The biggest cause of security problems is bad management. In distributed systems, the first place management affects security is in the system's configuration. A bad system configuration can mean disaster. If configuration is not controlled, it is difficult to express management policy in the system's operational characteristics. As system complexity increases, the problem becomes acute: The inability to make systems conform to policy ensures increasing disarray and the exploitable holes that result.

The Web-server configuration file lives in the server root. Configuration files are composed entirely of directives and explanatory comments. A directive is just a keyword that the HTTP daemon recognizes, followed by arguments. Directives are insensitive to case and white space. They control which file contains user names and passwords and which file contains group names and passwords, as well as access to the files in the document tree, including default permissions and how to override them locally.
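To make the directive format concrete, here is a minimal sketch, in Python purely for illustration, of reading such a file: keywords are matched without regard to case, arguments are split on white space, and comments are dropped. The directive names in the sample are Apache-style examples chosen for this sketch, not a prescribed list.

```python
def parse_directives(text):
    """Parse a simplified, Apache-style configuration file: one directive per
    line, a case-insensitive keyword followed by whitespace-separated
    arguments, with '#' introducing an explanatory comment."""
    directives = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # discard comments and padding
        if not line:
            continue
        keyword, *args = line.split()
        directives.append((keyword.lower(), args))
    return directives

sample = """
# Hypothetical directives, shown only to illustrate the format.
AccessFileName .htaccess
Group WWW
"""
print(parse_directives(sample))
# [('accessfilename', ['.htaccess']), ('group', ['WWW'])]
```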

Setting up roots
A Web server has two root directories: the server root, which contains the server's control information, and the document root, which contains the content and is usually a subdirectory of the server root. The document root is where a browser goes if the URL contains only the server's name, such as http://web.mit.edu. Configuration files for both the server root and the document root are sensitive and must be visible only on a need-to-know basis.

The most common mistake system administrators make is to run a Web server as root. Binding the conventional WWW port number (privileged port 80) requires root privilege at the outset, but running the Web server as root thereafter creates vulnerability. The most common fix here is to run the Web server as nobody, but if other programs are also running as nobody, privileges can intermix. Thus, the best practice is to run the server as a genuine user, say, Webserver, with both a unique user ID and membership in a group, such as WWW.

Typically, the home directory of user Webserver is the server root and contains the document root. Only the WWW group should have read access to the server log and server configuration files. With this method, the Web server's configuration file specifies which user to actually run as; that is, the server performs a suid Webserver (sets its user ID to Webserver) after the HTTP daemon has bound the HTTP-privileged port. The owner of the Web-server binary is irrelevant and must not have the suid bit set.

By assuming the identification and privileges of a user other than the user who owns the executable, the program can turn privileges on and off, as well as assume the identity of other users. In this way, the Web server runs with a group ID that has read access to all the files it needs, but it does not have write access to itself or its own configuration files. Under such a regime, a compromised Web server will not lead to deeper penetration of the host system.
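The start-up sequence described above can be sketched as follows. This is an illustrative Python fragment rather than production server code, and the Webserver user and WWW group names are taken from the example in the text; substitute whatever accounts your system actually defines.

```python
import grp
import os
import pwd
import socket

# Bind the privileged HTTP port while still root, then permanently switch to
# the dedicated, unprivileged identity before serving any request.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 80))                      # port 80 requires root
listener.listen(5)

www_gid = grp.getgrnam("WWW").gr_gid         # group name from the text's example
web_uid = pwd.getpwnam("Webserver").pw_uid   # user name from the text's example
os.setgroups([])                             # drop supplementary groups first
os.setgid(www_gid)                           # set the group before the user,
os.setuid(web_uid)                           # or the setgid call would fail

# From here on, root privileges are gone; accept() and serve requests as the
# unprivileged Webserver user.
```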

Local control issues


A server-side include is HTML code that pulls either a file or the output of a command into the output stream when the user requests the URL containing the page. In other words, it is dynamic rather than static content. The include contains one of three related calls: <inc srv "foobar">, which inserts the contents of a file foobar; <inc srv "|/bin/date">, which inserts the output of /bin/date; or <inc srvurl "|special">, which inserts the output of special, but runs special with the search keywords as arguments the way server scripts do. Server-side includes can be very useful, but they can also be computationally expensive, demolish portability, and open security holes. If they are necessary, the best practice is to disable the execute feature (a sketch of this policy appears at the end of this subsection).

Many of a Webmaster's security directives can be overridden on a per-directory basis. As with server-side includes, the convenience of being able to make local exceptions to global policy is offset by the threat of a security hole being introduced in a distant subdirectory, which could be controlled by a hostile user. As with server-side includes, Webmasters should disable a subdirectory's ability to override top-level security directives unless that override is required.
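As a sketch of the "disable the execute feature" advice, the Python fragment below expands file includes but refuses the command forms. It follows the simplified directive notation used in this article rather than any particular server's syntax, and the document-root handling is only illustrative.

```python
import re
from pathlib import Path

# Matches the first two include forms described above; the command variants
# begin their argument with "|".
INCLUDE = re.compile(r'<inc srv "([^"]+)">')

def expand_includes(page, docroot):
    def replace(match):
        target = match.group(1)
        if target.startswith("|"):
            # Never run commands on behalf of page authors.
            return "<!-- command include disabled -->"
        return (Path(docroot) / target).read_text()
    return INCLUDE.sub(replace, page)
```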

Authentication

Web services depend on name services. If a hostile party gains control of the name service, any security based solely on correlating names and network addresses will be for naught.

Basic authentication is simply a conventional username and password that is passed openly across a network. This creates the obvious risk of password capture. To control access on a per-directory or per-server basis, a user name can be combined with IP addresses. In this way, a server can evaluate file permission from the bottom up; that is, it looks for directives first in the local directory, then in the enclosing directory, and so on, back to the document root. This is the reverse of how most file systems work.

Like basic authentication, digest authentication verifies that both parties know a shared secret (a password), but here the verification is done without sending the password openly. Digest authentication uses a simple challenge-response exchange to verify a password. In this approach, the digest scheme challenges the user with a nonce value, a one-time authenticator (usually a large random number). A valid response contains a checksum (by default, the MD5 checksum) of the username, the password, the given nonce value, the HTTP method, and the requested URL. (A toy sketch of this exchange appears at the end of this subsection.)

To authenticate the server to its clients, public-key identity certificates can be used in the transport layer. Although not yet widely adopted, client certificates can also be used to authenticate clients to the server. The use of public-key identity certificates assumes a pervasive, highly available public-key infrastructure, including both certifying authorities to issue certificates and revocation authorities to revoke them. This infrastructure does not exist, and is perhaps only possible on a smaller scale, such as within an organization.

When documents are delivered by means of FTP, the FTP daemon (ftpd) is typically set up as an anonymous FTP server. In this case, anonymous actually means unauthenticated. Internet Security Systems' Anonymous FTP FAQ thoroughly documents security issues related to the use of anonymous FTP. At a minimum, system administrators should run a good quality FTP daemon and make sure it is chroot'd. If the FTP area overlaps with or duplicates the HTTP area, uploads must be segregated from served content. Any Web server will automatically read or execute certain filenames in any directory it serves. A clever attacker will merely use FTP to put a command where the HTTP daemon is sure to find it.
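A simplified sketch of the digest challenge-response exchange described above follows; the checksum covers the fields listed in the text, so the password itself never crosses the wire. The real HTTP digest scheme nests its hashes differently, so treat this only as an illustration of the principle.

```python
import hashlib
import secrets

def digest_response(username, password, nonce, method, url):
    # Checksum over the username, password, nonce, HTTP method, and URL.
    material = ":".join([username, password, nonce, method, url])
    return hashlib.md5(material.encode()).hexdigest()

nonce = secrets.token_hex(16)        # the server's one-time challenge
claimed = digest_response("alice", "s3cret", nonce, "GET", "/private/report")
expected = digest_response("alice", "s3cret", nonce, "GET", "/private/report")
assert claimed == expected           # the server compares checksums; the
                                     # password is never sent in the clear
```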

Scripting
Nearly all Web servers must deal with some type of active content. Common Gateway Interface, or CGI, is a sort of metalanguage, or middleware, that allows the interoperability such content requires. The core idea is sound and proven: a platform-independent programming language for scripts. In a sense, CGI is to programming what HTML is to documents.



The only requirement on a CGI-reachable language is that it must use the same environment variables as the HTTP daemon, and it must read from stdin and write to stdout. Because CGI talks to programs, it includes arguments to send and return values to receive; both are carried either in the URL or in HTML forms.

Non-script-aliased CGI means that CGI programs might appear in any directory if they have the proper filename extension. Script-aliased CGI means that CGI programs might appear only in an explicitly named directory, typically the subdirectory /cgi-bin in the server root. This is the more common approach for a security-minded site.

The server evaluates all URLs from left to right: The protocol component tells how to connect to the server and, once there, the server serially evaluates each piece of the path component. As soon as a leading substring of the URL evaluates to a runnable script or a displayable document, the remainder of the URL is handed to the document as an argument. This is called additional path information.

CGI scripts are usually written in Perl, Tcl, Java, Python, or C. Regardless of how many Web pages a server hosts, it runs every CGI script as the same user.
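A minimal CGI script makes that contract concrete: the request arrives through environment variables (and stdin for POST bodies), and the response, headers first, goes to stdout. The sketch below is written in Python for illustration, and the form field name is hypothetical.

```python
#!/usr/bin/env python
import os
import sys
from urllib.parse import parse_qs

# Read the request from the environment the HTTP daemon sets up.
method = os.environ.get("REQUEST_METHOD", "GET")
if method == "POST":
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    raw = sys.stdin.read(length)           # POST bodies arrive on stdin
else:
    raw = os.environ.get("QUERY_STRING", "")

params = parse_qs(raw)
name = params.get("name", ["stranger"])[0]

# Write the response, headers first, to stdout.
sys.stdout.write("Content-Type: text/plain\r\n\r\n")
sys.stdout.write("Hello, %s\n" % name)
```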

Thus, CGI scripts written by different individuals run under the same user identity, and all such scripts have the same permissions with respect to each other.

Many celebrated system intrusions can be traced to the attacker sending data that a program cannot handle. As a Web-specific example, consider forms. It is relatively common to have some master form, such as an order blank, and to send the form to the user with some data already included (perhaps in hidden fields). The user fills out the rest of the form and then posts it to the server, which takes some action based on the form's contents, such as processing an order for a Dilbert coffee mug. An even modestly clever attacker might modify the hidden fields to get, say, 50 Dilbert mugs for the price of one. Another variation is to modify a system command's arguments. CGI scripts in any of the common languages (Perl, C, and C++) can be taken advantage of in this way, unless the input data is carefully scrubbed.
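Two defenses follow from the discussion above, sketched here with hypothetical field names and a hypothetical command: recompute anything that matters on the server instead of trusting hidden fields, and validate anything that becomes a command argument, bypassing the shell entirely.

```python
import re
import subprocess

CATALOG = {"dilbert-mug": 9.95}          # the server's authoritative price list

def order_total(form):
    item = form["item"]
    quantity = int(form["quantity"])
    if item not in CATALOG or not 0 < quantity <= 100:
        raise ValueError("rejected order")
    return CATALOG[item] * quantity       # price comes from the server,
                                          # never from a hidden form field

def safe_finger(user):
    # Accept only the characters a legitimate username can contain, then run
    # the command without a shell so metacharacters have no special meaning.
    if not re.fullmatch(r"[a-z][a-z0-9_-]{0,31}", user):
        raise ValueError("rejected username")
    result = subprocess.run(["finger", user], capture_output=True, text=True)
    return result.stdout
```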

SECURING THE HOST


Host security focuses on the host system's configuration and operational practices and provides a foundation for server security.

Basic threats

Challenges in host system security include complexity, access control, and accountability. Because it is complicated to anticipate each interaction among security-sensitive and security-insensitive applications on a single machine, Webmasters typically run security-sensitive subsystems on dedicated machinery. To our knowledge, the formality of trusted systems (systems designed to substitute formal proof of security in place of experimental satisfaction) has to date found little place on the Web. We know of no Web use of formal evaluation criteria such as those in the US Defense Department's Orange Book.3

Webmasters must give root-privilege access the best possible security protection. If attackers obtain superuser access, their system access is unrestricted. This subject is covered in detail in several places.4,5 Controlling access on the host system is not radically different than on any other kind of distributed system. To do it right requires a sound security policy, along with the proper enforcement mechanisms.6,7

The Web is an intentionally open system, to a fault; components are mixed and matched from many sources. Thus, Webmasters must insist on accountability. They must check all actions of import for access authorization and keep strict records on the facts of every transaction. All such records must be nonmodifiable or be logged off-board, kept somewhere other than on the machine the log protects.

The Webmaster should audit the host system regularly and increase the frequency of audits if the host system changes often and in substantive ways. Any system popular enough to be generally dangerous typically has both commercial audit tools and do-it-yourself freeware such as COPS,8 TAMU,9 and Tripwire.10 Given the Web's burgeoning importance, audit systems can check the more obvious Web-server security issues as well. This is a good area for Webmasters to work closely with system administrators.

Three Vulnerabilities
Cory F. Cohen, Shawn V. Hernan, and Derek K. Simmel, CERT Coordination Center

On the basis of incidents reported to the CERT Coordination Center during 1996 and 1997, we identified three vulnerabilities that left systems open to the most frequent and serious attacks. Even long after we published alerts describing these vulnerabilities and their corresponding patches, intruders continued to exploit them to gain unauthorized access. For example, the first known public discussion of the phf vulnerability1 was in February of 1996, but CERT/CC received reports of incidents involving phf through July of this year.

phf vulnerability

The phf vulnerability is named after a seldom used but often installed example cgi-bin script that was distributed with several Web servers, including Apache. The vulnerability lets an intruder execute arbitrary commands with the privileges of the Web server.2 Using user-supplied arguments, the phf script constructs a command that the OS executes as if it were entered at a shell prompt.

Early versions of the script attempted to guard against shell metacharacters such as ; and &, which might cause the command to appear as two commands. Unfortunately, these versions of the script failed to consider that the newline character, which is a shell command separator, might be included in the input supplied by the user. By embedding a command following a newline in the input to the vulnerable phf script, an attacker succeeds in getting the Web server to execute that command.

Because this vulnerability is commonly exploited to display a copy of the /etc/passwd file, some sites mistakenly believe that if they use strong passwords or shadow passwords, an intruder is unable to use the phf vulnerability to gain access. However, an intruder can choose to execute commands that compromise security in other ways. Other sites assume that even if an intruder can capture and decipher passwords, they are still safe, since their firewall blocks inbound telnet, ftp, or similar kinds of connections. Unfortunately, many of these sites permit arbitrary outbound connections. Therefore, intruders can use the xterm command to open a terminal window on their displays, giving them access to a shell on the Web-server host.

From there, they can attempt to gain greater privileges or access to other machines. Intruders might not even need to probe your site to know if the phf script is accessible, as it might already appear in the indices of popular Internet search engines.

More than 650 phf-related incidents involving thousands of hosts have been reported to the CERT/CC since the publication of CERT advisory CA-96.06 in March 1996. Many sites had simply installed the example cgi-bin scripts and accepted the default configuration of their Web-server software without question. If they had excluded anything they did not explicitly require, most of the vulnerable phf scripts would not have been installed and exploited.
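The flaw is easy to reproduce in a few lines. The sketch below, a Python illustration rather than the phf code itself, shows how a filter that strips only ; and & lets a newline-separated command through, and why an allowlist of expected characters is the sturdier check.

```python
import re

def naive_filter(arg):
    # The flawed approach: strip a few known shell metacharacters but forget
    # the newline, which the shell also treats as a command separator.
    return arg.replace(";", "").replace("&", "")

def allowlist_ok(arg):
    # The sturdier approach: accept only characters you explicitly expect.
    return re.fullmatch(r"[A-Za-z0-9_.@-]+", arg) is not None

payload = "alice\n/bin/cat /etc/passwd"
print(repr(naive_filter(payload)))   # the newline and injected command survive
print(allowlist_ok(payload))         # False: the request would be rejected
```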

INN vulnerability

Some site managers rely on vigilance and chance to defend their networks. By closely monitoring sources of security information and acting on the information quickly, they believe that they are immune to attacks. Some types of vulnerabilities, however, spread faster than people can respond to them. One example is the INN vulnerability, named after a vulnerable implementation of a Usenet News (NNTP) server.




A now-outdated version of the INN news server contains a vulnerability that lets intruders execute a command with news-server privileges by crafting a particular type of message.3 By using the Usenet news-distribution mechanism, a message containing an exploit rapidly reaches news servers all over the world. Consequently, numerous vulnerable sites can be damaged before even the most vigilant system administrators can respond. Unless your firewall is preconfigured to recognize and stop this vulnerability, it can propagate behind your firewall before you have time to respond. One incident reported to the CERT/CC involved the compromise of 2,000 to 5,000 hosts. Many of these hosts were compromised within a few hours after the intruder released an exploit message to a single news server.

IMAP vulnerability

Certain implementations of the Internet Message Access Protocol let intruders execute commands with IMAP-server privileges.4 Vulnerable IMAP servers statically allocate space to hold the username; by providing a username that exceeds the buffer length, an intruder can cause the IMAP server to execute any code. Some sites filter at their network perimeter only those services that present a known risk or that let remote users gain access, like telnet.

These sites might overlook services such as IMAP. Unfortunately, some operating systems install a vulnerable version of an IMAP server by default.

Intruders looking for vulnerable IMAP servers have run many large scans of the Internet involving tens of thousands of hosts. There is some circumstantial evidence that groups of intruders are cooperating to scan large portions of the Internet and are exchanging information about vulnerable hosts. CERT advisory CA-97.09 was released in April 1997. Between September and December of that year, more than 20 percent of incidents reported to CERT/CC involved the IMAP vulnerability. A similar vulnerability,5 described in the July 1998 CERT advisory CA-98.09, shares many of the characteristics of the original IMAP vulnerability and might become just as widely exploited.
References

1. J. Myers, "CGI Security: Escape Newlines," Bugtraq Archives, Feb. 1996; http://geekgirl.com//bugtraq/1996_1/0036.html.
2. IBM Emergency Response Service, ERS-SVA-E01-1996:002.1, Feb. 1996; http://www.ers.ibm.com/tech-info/advisories/sva/1996/.
3. CERT Coordination Center, CERT Advisory CA-97.08, Feb. 1997; http://www.cert.org/advisories/CA-97.08.innd.html.
4. Secure Networks Inc., "Remote Vulnerability in imapd and ipop3d," Mar. 1997; http://www.secnet.com/sni-advisories/imap.advisory.03.02.97.html.
5. CERT Coordination Center, CERT Advisory CA-98.09, July 1998; http://www.cert.org/advisories/CA-98.09.imapd.html.

Cory F. Cohen is a member of the technical staff at the CERT Coordination Center.

Shawn V. Hernan is a member of the technical staff at the CERT Coordination Center, where he leads the vulnerability handling group.

Derek K. Simmel is a member of the technical staff in the Networked Systems Survivability Program at the Software Engineering Institute. Contact Hernan at svh@cert.org.




Notification and recovery

Notification services, which keep running lists of security incidents, play a vital role in host security. Such well-known groups as the Computer Emergency Response Team (CERT) and the Computer Incident Advisory Capability (CIAC) publish bulletins for Webmasters and system administrators. The World Wide Web Consortium provides a similar, though less formal, service. The sidebar, "Three Vulnerabilities," presents examples of common intrusions reported to CERT.

Another host-security tool is an event-management system, which responds to some types of events with evasive action. These systems are deployed mainly in demanding production environments, although several commercial and freeware event-management systems are available.

Finally, because intrusions are inevitable, recovery must be considered. If a Webmaster suspects penetration, it is better to err on the side of prudence. There are a few morning-after resources available. The best place to start is with a checklist for intrusion handling. For Web services that must be available, issues typical of any data center are likely to be reflected on the Web service.

SECURING DATA TRANSPORT


There are two fundamentally different approaches to securing data in transit. In the network-layer approach, encryption and authentication are added directly into the networking stack so that traffic is protected without requiring the application to incorporate it. Traffic reaching the remote system is automatically decrypted and verified by the remote system's networking stack before the operating system passes it to the server application. In the application-level approach, the application itself is modified so that traffic is encrypted before it is submitted to the operating system and network layer. It is then decrypted by the receiving server application. Both approaches have their advantages and disadvantages, and they can be implemented simultaneously.

Transaction security is vulnerable at both ends of a secured connection. Credit card information sent across the Web using an application-layer secure transaction is vulnerable to theft once it has been transferred to the remote server. It is also vulnerable at the user's keyboard or within the application itself. On the other hand, attackers can defeat network-level transaction security by exploiting transitive trust and host-security flaws to steal data before it is securely transmitted over the network. For the Web, application-layer security is the better choice because it makes it easier to define the trust boundaries between transacting agents.

SSL protocol

The best candidate for application-layer security is Netscape Communications' Secure Socket Layer (SSL) protocol, which is currently in its third revision. The SSL protocol is stream-based, which means it consists of an initial handshake phase in which secure communications are established, followed by an application-to-application dialog (with encryption applied to the data) and a closing handshake.

How it works. When the client connects, the server and client exchange hello messages to establish the protocol version in use, define optional encryption algorithms, exchange keys, and define optional data-compression parameters. The server and client can also mutually request X.509 certificates for authentication, including a complete chain of certificates leading to a certification authority. The client generates the bulk encryption keys and sends them to the server encrypted with the server's public key from its certificate. A total of four keys are used, with separate pairs for client-to-server and server-to-client communication. Once SSL completes the initial handshake, it enters into an opaque data mode, in which application data is passed in encrypted, sequenced chunks, each including a cryptographic checksum to prevent tampering. Multiple encryption algorithms, including RC4 and DES, are supported. Following the interaction, the client and server perform a completion handshake and close the connection. Netscape has clearly designed SSL to be a generic protocol, so it can serve applications other than just HTTP, including (potentially) e-mail and database access. If the Netscape vision is realized, SSL could become a ubiquitous security layer for many different application types.

SSL problems. The first two versions of SSL had numerous shortcomings that prevented its wide use, particularly for applications involving substantial risk or funds transfer. For example, the SSL implementations did not use client-side certificates for authentication. Instead, the server authenticated itself to the browser, but there was no reliable mechanism for customer identification. This meant that SSL was mostly used to set up a secure link over which a password was exchanged. Conceptually and technically, this was extremely inelegant. SSL also suffered a number of widely publicized flaws, including protocol flaws and serious deficiencies in its random key-generation routines. To make matters worse, the freely available version of Netscape Navigator includes only the RC4 encryption algorithm with 40-bit keys, in compliance with US export-control regulations. Several individuals set up brute-force attacks against the 40-bit keys and announced the results, creating public doubt about SSL's security. However, this is not a failing of SSL, but rather an outcome of the government regulations. The only fix for this problem is to require customers to purchase the US-only version of Navigator, which includes SSL with the Data Encryption Standard (SSL/DES). Of course, requiring users to purchase Navigator completely defeats the reason many companies want to use SSL: the large installed base of free Navigator versions.

SSL possibilities. Netscape is currently formulating a programming interface for integrating SSL directly into Winsock 2.0. Because Winsock operates at a virtual session layer, Netscape's proposal adds a few calls to the Winsock management routines that allow any program to request that a connection negotiate SSL security as either a client or a server. The additions to the specification would be completely unintrusive to existing applications, but would let future applications take advantage of security with the addition of a single function call. If this capability is completed and becomes widely used, it will provide capabilities similar to network-level transaction security on an application-by-application basis. SSL has garnered significant user support, primarily because of the large Navigator client base. Most commercial Web-server packages support SSL, and many free Web servers include hooks for integrating SSL. To encourage developers to embed SSL in their applications, Netscape has made a reference implementation, SSLref, available for download.
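The handshake-then-encrypted-dialog pattern described above looks like this from a client's point of view. The sketch uses a modern TLS library purely for illustration, since SSL 3.0 itself is long obsolete, and the host name is a placeholder.

```python
import socket
import ssl

context = ssl.create_default_context()        # loads trusted CA certificates
with socket.create_connection(("www.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
        # wrap_socket() performs the handshake: version and cipher
        # negotiation, certificate verification, and key exchange.
        print(tls.version(), tls.cipher())
        # After the handshake, application data travels in encrypted,
        # integrity-protected records.
        tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(200))
```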

Security and export controls

As the SSL experience shows, export controls are a big issue in application-layer transaction security. If users must rely on cryptography that can be broken quickly by a motivated attacker, well-designed security protocols are irrelevant. The legal requirement to use poor-quality encryption is a barrier to the growth of electronic commerce over public networks. However, because government encryption policies in some countries are increasingly subjected to political and legal challenges, the situation seems destined to change. If it doesn't, we might eventually see a booming business in offshore development of software modules that can be incorporated into export-restricted software. Export-control politics are one of the single largest factors influencing the security (or rather, insecurity) of protocols for Web transaction security. It is hard enough to get vendors to agree on standards without having the government actively working to cripple the results.

MOBILE CODE SECURITY

Mobile code comprises general-purpose executables that run in remote locations. That such general-purpose scripts can run on any Internet-connected computer opens up a world of possibilities for distributed applications. However, such functionality comes at a price. From a security perspective, nothing is more dangerous than a global, homogeneous, general-purpose interpreter. The fact that this interpreter is part of a browser (a large, notoriously buggy software package) only increases the risks.

There are basically three practical techniques to secure mobile code: sandboxing, code signing, and firewalling. The sandbox method limits the executable's privileges to a small set of operations. The code signing method checks to see if the executable's source is trustworthy. A combination of these two methods was implemented in JDK 1.2,11 and Netscape's Communicator.12 The firewalling approach limits the programs a client can run based on the executable's properties. Aside from these approaches, there is growing interest in a technique called proof-carrying code,13 where mobile programs carry a proof that certain properties are satisfied. This technique is still in the theoretical stage. It is uncertain as yet whether it can be used, for example, to manage access control between Java applets and client resources.

ANONYMITY AND PRIVACY

Information about a user's activity on the Web is increasingly being recorded and disseminated. Thus, as Web functionality increases, gains achieved in convenience are counterbalanced by privacy losses. Users are often unaware of the deficit. Web users leave a data shadow, with information about what they read, where they shop, what they buy, whom they correspond with, and so on. Cookies, for example, let stateless servers outsource state information to client storage. Thus, a high-volume server can track a significant amount of information about a user's browsing habits. Multiple servers in collaboration can build up even more information. Many targeted advertising companies share information about user profiles and build massive databases containing users and their data shadows.

Attempts to introduce technology to protect user privacy include mixes,14 proxy mechanisms, and Crowds.15 David Chaum's mixes were originally introduced to counter traffic analysis. The same technology has since been adopted to provide anonymous e-mail16 and anonymous Web browsing.17 A mix is a process with a corresponding public-key pair. It accepts messages, decrypts them, and forwards them. A mix network can also be used, where each mix removes an encryption layer and messages are source-routed through the mix network to the destination. (A toy sketch of this layering appears at the end of this section.) Mixes are well suited to the problem of anonymous e-mail. When users select a set of trusted remailers (e-mail mixes), the message recipient cannot tell where the message originated. There are many such remailers available for use today (see the list at http://www.cs.berkeley.edu/~raph/remailer-list.html). Mixes are not as well suited for synchronous communication, such as Web browsing, because collaboration between the first and last mix can reveal the identity of both sender and receiver, based on the content and timing of requests. Also, the use of public-key cryptography impedes the performance of such a system. It remains to be seen whether wide-scale adoption of mixes for Web browsing is possible.

A more lightweight approach to privacy is to interpose a well-known proxy between clients and all servers. The proxy rewrites all client requests so that the end server has no way of identifying the origin of requests. This is the approach taken by the Anonymizer (http://www.anonymizer.com/) and the Lucent Personalized Web Assistant.18 Unfortunately, unless encryption is used, a system administrator or anyone who can eavesdrop on the local network can still observe all content on the link between a client and server. Furthermore, proxy administrators must be trusted not to store or reveal user-activity information.

When users browse the Web in Crowds, any action they take is attributed to the crowd as a whole, thus offering individual users relative anonymity. The Crowds system runs on Windows and Unix platforms and has been deployed on the Internet. Crowds was analyzed against various adversaries (local eavesdroppers, system administrators, groups of collaborating crowd members, and end servers) and to our knowledge is the only method to date that provides some formal guarantees of anonymity.
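The layered encryption behind a mix network, described earlier in this section, can be sketched in a few lines. The fragment below uses the third-party cryptography package, and symmetric Fernet keys stand in for each mix's public key, so it shows only the layering idea, not a real mix.

```python
from cryptography.fernet import Fernet

# One key per mix; a real network would use each mix's public key instead.
keys = {name: Fernet.generate_key() for name in ("mix_a", "mix_b", "mix_c")}

def wrap(message, route):
    # The sender encrypts for the last mix first, so each hop can peel exactly
    # one layer and learns nothing beyond the next destination.
    for name in reversed(route):
        message = Fernet(keys[name]).encrypt(message)
    return message

def peel(name, message):
    return Fernet(keys[name]).decrypt(message)

onion = wrap(b"to: bob -- meet at noon", ["mix_a", "mix_b", "mix_c"])
for hop in ["mix_a", "mix_b", "mix_c"]:
    onion = peel(hop, onion)        # each mix removes one encryption layer
print(onion)
```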



The state of Web security is abysmal. The mushrooming number of security vendors is decreasing the average quality of their wares; only a few vendors can deliver security that is invisible and inescapable. Yet, if we had fewer vendors to begin with, our fate might have been similar to having a few farmers plant the same strain of corn. Thus far, the Web's inherent tendency to diversity and innovation has spared us a security monoculture (and the inherent risks therein). The trend toward a unified public-key infrastructure with regulatory icing should be watched with this in mind.

Web security mechanisms are a collection of clever, ad hoc efforts to retrofit security into a system whose popularity overwhelmed its original design. There is evidence, however, that use of the Web for business will inevitably result in a more serious approach to security. Public-key technology's unique applicability to stranger-to-stranger communication and the Web's relentless blurring of the intramural-extramural distinction strongly favor PKI as the skeleton on which Web security will hang. A trust management paradigm for securing Web commerce will give way to a risk management one in proportion to the value of the transactions moving on the Web.

This will, in a sense, solve the otherwise thorny problem of good security in a user community burgeoning too fast to be anything but skill-poor.

Several online security resources are available, from Lincoln D. Stein's fine advice (http://www.genome.wi.mit.edu/WWW/faqs/www-security-faq.html) to the very helpful aggregation and dissemination work at Rutgers University and Purdue University's Coast Laboratory to the Internet Engineering Task Force's Site Security Handbook.19 The security areas we have discussed here (server and host environments, data transport, mobile code, and anonymity and privacy) are very active, but the amount of investment already sunk into these areas is steadily raising the bar on how good a new idea must be before implementation is worthwhile. These are interesting times.

References

1. A.D. Rubin, D. Geer, and M.J. Ranum, Web Security Sourcebook, John Wiley & Sons, New York, 1997.
2. A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC Press, Boca Raton, Fla., 1997.
3. US Dept. of Defense, Trusted Computer System Evaluation Criteria, National Computer Security Center, DoD 5200.28-STD, Dec. 1985.
4. S. Garfinkel and E. Spafford, Practical UNIX Security, O'Reilly & Associates, Sebastopol, Calif., 1991.
5. D. Curry, UNIX System Security: A Guide for Users and System Administrators, Addison Wesley Longman, Reading, Mass., 1992.
6. B. Lampson et al., "Authentication in Distributed Systems: Theory and Practice," ACM Trans. Computer Systems, Nov. 1992, pp. 265-310.
7. J.G. Steiner, C. Neuman, and J.I. Schiller, "Kerberos: An Authentication Service for Open Network Systems," Proc. Winter 1988 General Conf., Usenix Assoc., Berkeley, Calif., 1988, pp. 191-202.
8. D. Farmer and E.H. Spafford, "The COPS Security Checker System," Proc. Summer 1990 General Conf., Usenix Assoc., Berkeley, Calif., 1990, pp. 165-170; ftp://info.cert.org/pub/tools/cops/.
9. D.R. Safford, D.L. Schales, and D.K. Hess, "The TAMU Security Package: An Ongoing Response to Internet Intruders in an Academic Environment," Proc. Fourth Security Symp., Usenix Assoc., Berkeley, Calif., 1993, pp. 91-118; ftp://net.tamu.edu/pub/security/TAMU/.
10. G.H. Kim and E.H. Spafford, "The Design and Implementation of Tripwire: A File System Integrity Checker," Tech. Report CSD-TR-93-071, Purdue University Coast Laboratory, West Lafayette, Ind., Nov. 1993; ftp://info.cert.org/pub/tools/tripwire/.
11. L. Gong et al., "Going Beyond the Sandbox: An Overview of the New Security Architecture in the Java Development Kit 1.2," Proc. USENIX Symp. Internet Technologies and Systems, Usenix Assoc., Berkeley, Calif., 1997, pp. 103-112.



12. D.S. Wallach, D. Balfanz, D. Dean, and E.W. Felten, "Extensible Security Architectures for Java," Proc. 16th ACM Symp. Operating Systems Principles, ACM Press, New York, 1997.
13. G.C. Necula and P. Lee, "Safe Kernel Extensions Without Run-Time Checking," Proc. Second Symp. Operating Systems Design and Implementation, Usenix Assoc., Berkeley, Calif., 1996, pp. 229-243.
14. D. Chaum, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," Comm. ACM, Feb. 1981, pp. 84-88.
15. M.K. Reiter and A.D. Rubin, "Crowds: Anonymous Web Transactions," ACM Trans. Information and System Security, Apr. 1998; see also http://www.research.att.com/projects/crowds.
16. C. Gulcu and G. Tsudik, "Mixing E-mail with Babel," Proc. 1996 Internet Society Symp. Network and Distributed System Security, IEEE CS Press, Los Alamitos, Calif., 1996, pp. 2-16.
17. P.F. Syverson, D.M. Goldschlag, and M.G. Reed, "Anonymous Connections and Onion Routing," Proc. 1997 IEEE Symp. Security and Privacy, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 44-54.
18. E. Gabber et al., "How to Make Personalized Web Browsing Simple, Secure, and Anonymous," tech. report, May 1997; see also http://www.lpwa.com.
19. D.L. Oppenheimer, D.A. Wagner, and M.D. Crabb, Site Security Handbook, RFC 1244/FYI:8, IETF, 1991.

Aviel D. Rubin is a senior technical staff member at AT&T Labs-Research in the secure systems research department and an adjunct professor of computer science at New York University, where he teaches cryptography and computer security. He is the co-author of the Web Security Sourcebook (John Wiley & Sons, 1998). Rubin received a BS, an MSE, and a PhD from the University of Michigan, all in computer science and engineering. He has served on several program committees for major security conferences and as the program chair for the 1998 Usenix security conference. He will also be the program chair for the 1999 Usenix technical conference.

Daniel E. Geer Jr. is vice president of Certco, LLC, a market leader in digital certification for electronic commerce. He has a long history in network security and distributed computing management as an entrepreneur, consultant, teacher, and architect. He is the co-author of the Web Security Sourcebook.

Contact Rubin at AT&T Labs-Research, 180 Park Ave., Florham Park, NJ 07932; rubin@research.att.com. Contact Geer at Certco, 55 Broad St., New York, NY 10004; geer@world.std.com.
