INTRODUCTION

A firewall is a system or group of systems (router, proxy, gateway, etc.) that implements a set
of security rules to enforce access control between two networks, protecting the "inside"
network from the "outside" network. It may be a hardware device or a software program
running on a secure host computer. In either case, it must have at least two network
interfaces, one for the network it is intended to protect, and one for the network it is
exposed to. A firewall sits at the junction point or gateway between the two networks,
usually a private network and a public network such as the Internet.

Figure: A hardware firewall providing protection to a local network; a computer running
firewall software to provide protection.

A Firewall is…
• A physical manifestation of your security policy.
• One component of an overall security architecture.
• A mechanism for limiting access by network elements and protocols.
A Firewall is not…
• A cure-all for security shortcomings on platforms or in applications.
• A complete security architecture.
• Something that should be configured “on-the-fly”.
• Effective protection against viruses (typically).


Conventional firewalls rely on the notions of restricted topology and control entry
points to function. More precisely, they rely on the assumption that everyone on one side
of the entry point--the firewall--is to be trusted, and that anyone on the other side is, at
least potentially, an enemy.

Distributed firewalls are host-resident security software applications that protect the
enterprise network's servers and end-user machines against unwanted intrusion. They
offer the advantage of filtering traffic from both the Internet and the internal network.
This enables them to prevent hacking attacks that originate from both the Internet and the
internal network. This is important because the most costly and destructive attacks still
originate from within the organization.

They are like personal firewalls except they offer several important advantages
like central management, logging, and in some cases, access-control granularity. These
features are necessary to implement corporate security policies in larger enterprises.
Policies can be defined and pushed out on an enterprise-wide basis.

A feature of distributed firewalls is centralized management. The ability to populate
servers and end-user machines, and to configure and "push out" consistent security
policies helps to maximize limited resources. The ability to gather reports and maintain
updates centrally makes distributed security practical. Distributed firewalls help in two
ways. First, remote end-user machines can be secured. Second, they secure critical servers
on the network preventing intrusion by malicious code and "jailing" other such code by
not letting the protected server be used as a launch pad for expanded attacks.

Usually deployed behind the traditional firewall, they provide a second layer of
defense. They work by enabling only essential traffic into the machine they protect,
prohibiting other types of traffic to prevent unwanted intrusions. Whereas the perimeter
firewall must take a generalist, common denominator approach to protecting servers on
the network, distributed firewalls act as specialists.


EVOLUTION OF THE FIREWALL


In today's world, most businesses, regardless of size, believe that access to the Internet is
imperative if they are going to compete effectively. Even though the benefits of
connecting to the Internet are considerable, so are the risks. When a business connects its
private network to the Internet, it is not just providing its employees access to external
information and Internet services; it is also providing external users with a means to
access the company's own private information. Horror stories abound in the media
regarding companies that have had proprietary information stolen, modified, or otherwise
compromised by attackers who gained access via the Internet. For this reason, any
business that has ever contemplated connecting to the Internet has been forced to deal
with the issue of network security.

Figure: A time line of the major firewall architectures.


In response to these risks, a whole industry has formed during the last several years to
meet the needs of businesses wanting to take advantage of the benefits of being
connected to the Internet while still maintaining the confidentiality, integrity, and
availability of their own private information and network resources. This industry
revolves around firewall technology.
A firewall provides a single point of defence between two networks—it protects one
network from the other. Usually, a firewall protects the company's private network from
the public or shared networks to which it is connected. A firewall can be as simple as a
router that filters packets or as complex as a multi-computer, multi-router solution that
combines packet filtering and application level proxy services.
Firewall technology is a young but quickly maturing industry. The first generation of
firewall architectures has been around almost as long as routers, first appearing around
1985 and coming out of Cisco's IOS software division. These firewalls are called packet
filter firewalls. However, the first paper describing the screening process used by packet
filter firewalls did not appear until 1988, when Jeff Mogul from Digital Equipment
Corporation published his studies.

During the 1989-1990 timeframe, Dave Presotto and Howard Trickey of AT&T Bell
Laboratories pioneered the second generation of firewall architectures with their research
in circuit relays, which are also known as circuit level firewalls. They also implemented
the first working model of the third generation of firewall architectures, known as
application layer firewalls. However, they neither published any papers describing this
architecture nor released a product based upon their work.
As is often the case in research and development, the third generation of firewall
architectures was independently researched and developed by several people across the
United States during the late 1980's and early 1990's. Publications by Gene Spafford of
Purdue University, Bill Cheswick of AT&T Bell Laboratories, and Marcus Ranum
describing application layer firewalls first appeared during 1990 and 1991. Marcus
Ranum's work received the most attention in 1991 and took the form of bastion hosts
running proxy services. Ranum's work quickly evolved into the first commercial product
—Digital Equipment Corporation's SEAL product.
Around 1991, Bill Cheswick and Steve Bellovin began researching dynamic packet
filtering and went so far as to help develop an internal product at Bell Laboratories based
upon this architecture; however, this product was never released. In 1992, Bob Braden
and Annette DeSchon at USC's Information Sciences Institute began independently
researching dynamic packet filter firewalls for a system that they called "Visas." Check
Point Software released the first commercial product based on this fourth generation
architecture in 1994.
During 1996, Scott Wiegel, Chief Scientist at Global Internet Software Group, Inc.,
began laying out the plans for the fifth generation firewall architecture, the Kernel Proxy
architecture. Cisco Centri Firewall, released in 1997, is the first commercial product
based on this architecture.


FIREWALL TAXONOMY

Firewalls come in various sizes and flavors. The most typical idea of a firewall is a
dedicated system or appliance that sits in the network and segments an "internal" network
from the "external" Internet. Most home or SOHO networks use an appliance-based
device for broadband connectivity that includes a built-in firewall. In general, firewalls
can be categorized under one of two general types:

• Desktop or personal firewalls


• Network firewalls

The primary difference between these two types of firewalls simply boils down to the
number of hosts that the firewall protects. Within the network firewall type, there are
primary classifications of devices, including the following:

• Packet-filtering firewalls (stateful and nonstateful)


• Circuit-level gateways
• Application-level gateways

The preceding list describes general classes of firewalls but, as discussed later, many
network firewalls represent hybrids of the preceding classifications. Many firewalls have
characteristics that place them in more than one classification.

Figure below shows a breakdown of the various firewall types currently available. This
figure does not provide complete details of the various capabilities within each firewall
type but rather shows the general taxonomy of the different firewalls available in the two
primary types: personal/desktop firewalls and network firewalls.


Personal Firewalls

Personal firewalls are designed to protect a single host. They can be viewed as a hardened
shell around the host system, whether it is a server, desktop, or laptop. Typically,
personal firewalls assume that outbound traffic from the system is to be permitted and
inbound traffic requires inspection. By default, personal firewalls include various profiles
that accommodate the typical traffic a system might see. For example, ZoneAlarm has
low, medium, and high settings that allow almost all traffic, selected traffic, or nearly no
traffic, respectively, through to the protected system. In a similar vein, IPTables, which
you can set up as a personal firewall as well as in a network firewall role, enables the
installer, during the setup of the Linux system, to choose the level of protection for the
system and to customize the ports that do not fall into a specific profile.

One important consideration with personal firewalls is centralized management. Some
vendors have identified that a significant barrier to deploying a personal firewall on
every end system is the need for centralized management, so that policies can be
developed and applied to end systems remotely, and they have built such capabilities into
their products. Without such management, large enterprises are hesitant to adopt personal
firewall technology for their systems because of the difficulty of maintaining a consistent
firewall policy across the enterprise.

Network Firewalls

Network firewalls are designed to protect whole networks from attack. Network firewalls
come in two primary forms: a dedicated appliance or a firewall software suite installed on
top of a host operating system. Examples of appliance-based network firewalls include
the Cisco PIX, the Cisco ASA, Juniper's NetScreen firewalls, Nokia firewalls, and
Symantec's Enterprise Firewall. The more popular software-based firewalls include
Check Point's Firewall-1 NG or NGX Firewalls, Microsoft ISA Server, Linux-based
IPTables, and BSD's pf packet filter. The Sun Solaris operating system has, in the past,
been bundled with Sun's enterprise firewall, SunScreen. With the release of Solaris 10,
Sun has begun bundling the open source IP Filter (IPF) firewall as an alternative to
SunScreen.

Many network firewalls provide enterprise users the maximum flexibility and protection
in a firewall system. These firewalls have over the past few years incorporated many new
features such as in-line intrusion detection and prevention as well as virtual private
network (VPN) termination capabilities both for LAN-to-LAN VPNs as well as remote-
access-user VPNs. Another feature that has been introduced into network firewalls is a
deep packet-inspection capability. The firewall can identify traffic requirements not just
by looking at Layer 3 and Layer 4 information but by delving all the way into the
application data so that the firewall can make decisions as to how to best handle the
traffic flow. This evolution in firewall design and capabilities has led to the development
of a new firewall product, the integrated firewall, which is covered in more detail in the
next section.


Packet Filters

Packet filters are network devices that filter traffic based on simple packet characteristics.
These devices are typically stateless in that they do not keep a table of the connection
state of the various traffic flows through them. To allow traffic in both directions, they
must be configured to permit return traffic. Simple packet filters include Cisco IOS
access lists as well as Linux's ipfwadm facility to name a few. Although these filters
provide protection against a wide variety of threats, they are not dynamic enough to be
considered true firewalls. Their primary focus is to limit traffic inbound while providing
for outbound and established traffic to flow unimpeded. Example shows a simple access
list for filtering traffic. This list is based on the network example in Figure below. The
access list is applied to the inbound side of the filtering device that connects the LAN to
the Internet.

Figure: Simple Access List Sample Network

Example: Simple Access List

access-list 101 permit icmp any 192.168.185.0 0.0.0.255 echo-reply
access-list 101 permit icmp any 192.168.185.0 0.0.0.255 ttl-exceeded
access-list 101 permit tcp any 192.168.185.0 0.0.0.255 established
access-list 101 permit udp any host 192.168.185.100 eq 53
access-list 101 permit udp any eq 123 192.168.185.0 0.0.0.255

Note that inbound return traffic for DNS (53/UDP) and NTP (123/UDP) are explicitly
stated toward the end of the filter list, as are Internet Control Message Protocol (ICMP)
echo-reply and Time-To-Live (TTL)-exceeded responses. Without these statements, these
packets would be blocked even though they are in response to traffic that originated in
the protected LAN. Finally, note the following rule:

access-list 101 permit tcp any 192.168.185.0 0.0.0.255 established


This rule is required to allow return traffic from any outside system back to the
192.168.185.0/24 subnet as long as the return traffic has the TCP ACK flag set. Packet
filters typically do not have any stateful capabilities to inspect outbound traffic and
dynamically generate rules permitting the return traffic to an outbound flow. Figure
below shows a simple packet-filtering firewall.

Figure: Packet-Filtering Firewall

NAT Filters

A distinct firewall type that existed for a short period is the Network Address Translation
(NAT) firewall; today, NAT is simply a function of the firewall. NAT firewalls automatically provide
protection to systems behind the firewall because they only allow connections that
originate from the inside of the firewall. The basic purpose of NAT is to multiplex traffic
from an internal network and present it to a wider network (that is, the Internet) as though
it were coming from a single IP address or a small range of IP addresses. The NAT
firewall creates a table in memory that contains information about connections that the
firewall has seen. This table maps the addresses of internal systems to an external
address. The ability to place an entire network behind a single IP address is based on the
mapping of port numbers on the NAT firewall. For example, consider the systems shown
in figure below.


The hosts on the "inside" of the NAT firewall (192.168.1.1 and 192.168.1.2) are both
trying to access the web server 10.100.100.44. Host 192.168.1.1 opens up TCP port 3844
and connects to the web server 10.100.100.44 at TCP port 80. Host 192.168.1.2 opens
TCP port 4687 and connects to the web server 10.100.100.44 at TCP port 80. The NAT
firewall is configured to translate the entire 192.168.1.0/24 network to the single IP
address 172.28.230.55. When the firewall sees the outbound connections, it rewrites the
IP layer information in the traffic and replaces 192.168.1.1 and 192.168.1.2 with the
single IP address 172.28.230.55. Internally, the NAT firewall maintains a table that keeps
track of the traffic flows and translates both 192.168.1.1 and 192.168.1.2 to the IP
address 172.28.230.55. It does this by means of network sockets that uniquely identify a
given connection. For the example shown in above figure, there are two unique sockets:
192.168.1.1:3844 and 192.168.1.2:4687. When this traffic is seen by the firewall, the
NAT process replaces the 192.168.1 network addresses with the 172.28.230.55 address.

The last entry in the table above shows what a NAT firewall does when a specific source
port is already taken by a previous connection. In this case, the client 192.168.1.1 is
attempting to make a second connection to the web server 10.100.100.44. The client
opens a connection on TCP port 4687; however, TCP port 4687 on the NAT firewall is
already being used by the connection for the client 192.168.1.2. In this case, the NAT
firewall changes not only the source IP address but also the source port and keeps that
mapping in its translation table.
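
The translation table described above can be pictured as a small lookup structure keyed by
the inside address and port. The following C sketch is meant only as an illustration; the
structure layout, the fixed table size, and the collision handling are assumptions made for
the example and are not taken from any particular NAT implementation.

#include <stdint.h>

#define NAT_TABLE_SIZE 1024

/* One translation entry: an inside (address, port) pair mapped to the
 * source port presented on the single external address. */
struct nat_entry {
    uint32_t inside_addr;   /* e.g. 192.168.1.1 */
    uint16_t inside_port;   /* e.g. 3844 */
    uint16_t outside_port;  /* port used on the external address */
    int      in_use;
};

static struct nat_entry nat_table[NAT_TABLE_SIZE];

static int port_taken(uint16_t port)
{
    for (int i = 0; i < NAT_TABLE_SIZE; i++)
        if (nat_table[i].in_use && nat_table[i].outside_port == port)
            return 1;
    return 0;
}

/* Record an outbound flow and return the external source port to use.
 * If the inside source port is free on the outside, reuse it; otherwise
 * pick the next free port, as in the port-collision case described above. */
uint16_t nat_translate_outbound(uint32_t inside_addr, uint16_t inside_port)
{
    uint16_t candidate = inside_port;
    while (port_taken(candidate))
        candidate++;                    /* simplistic: no wrap-around handling */

    for (int i = 0; i < NAT_TABLE_SIZE; i++) {
        if (!nat_table[i].in_use) {
            nat_table[i].inside_addr  = inside_addr;
            nat_table[i].inside_port  = inside_port;
            nat_table[i].outside_port = candidate;
            nat_table[i].in_use       = 1;
            return candidate;
        }
    }
    return 0;   /* table full; a real implementation would expire idle entries */
}

A real NAT firewall would also record the protocol and destination of each flow and age out
idle entries, so that the table can be reused and return traffic rewritten in the opposite
direction.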

Circuit-level Firewalls

Circuit-level firewalls work at the session layer of the OSI model and monitor
"handshaking" between packets to decide whether the traffic is legitimate. Traffic to a
remote computer is modified to make it appear as though it originated from the circuit-
level firewall. This modification makes a circuit-level firewall particularly useful in
hiding information about a protected network but has the drawback that it does not filter
individual packets in a given connection. Figure below shows an example of a circuit-
level firewall.


Figure: Circuit-Level Firewall

Proxy Firewalls

A proxy firewall acts as an intermediary between two end systems in a similar fashion as
a circuit-level gateway. However, in the case of a proxy firewall, the interaction is
controlled at the application layer, as shown in figure below. Proxy firewalls operate at
the application layer of the connection by forcing both sides of the conversation to
conduct the communication through the proxy. It does this by creating and running a
process on the firewall that mirrors a service as though it were running on an end system.
To support various services, the proxy firewall must have a specific service running for
each protocol: a Simple Mail Transport Protocol (SMTP) proxy for e-mail, a File
Transfer Protocol (FTP) proxy for file transfers, and a Hypertext Transfer Protocol
(HTTP) proxy for web services.

Figure: Proxy Firewall


Whenever a client wants to connect to a service on the Internet, the packets making up
the connection request are processed by the specific proxy service for that protocol before
being forwarded to the target system. Packets returning from the server on the Internet
are similarly processed by the same proxy service before being forwarded to the internal
system. In many proxy firewalls, a generic proxy service can be used by services that do
not have a service specifically tailored to their needs. However, not all services can use
this generic proxy. If there are no proxy capabilities for a specific service running on the
firewall, no connection to outside servers running that service is possible, or the firewall
utilizes other technologies such as circuit-level filtering to filter the connection.

Because of their inspection capabilities, proxy firewalls can look much more deeply into
the packets of a connection and apply additional rules to determine whether a packet
should be forwarded to an internal host. The disadvantages of a proxy firewall can be in
their complex configuration as well as their speed. Because the firewalls look deep into
the application, they can introduce delay into network connections. Finally, if there is no
specific proxy service for a particular network application and it cannot be made to work
with a generic proxy service, and the firewall cannot perform other methods of filtering,
that application cannot be used from behind the firewall. Most modern firewalls include basic proxy server
architecture in their operation by providing some form of proxy capabilities. For
example, PIX OS 6 and earlier had the fixup command, and IPF provided an FTP proxy
service to handle active FTP connections.

Stateful Firewalls
Modern stateful firewalls combine aspects and capabilities of NAT firewalls, circuit-level
firewalls, and proxy firewalls into one system. These firewalls filter traffic initially based
on packet characteristics like the packet-filtering firewall but also include session checks
to make sure that the specific session is allowed. Unlike proxy or circuit-level firewalls,
stateful firewalls are typically designed to be more transparent (like their packet-filtering
and NAT cousins). However, they include proxy-filtering aspects by inspecting the
application layer data as well through the use of specific services.


Stateful firewalls are more complex than their constituent component firewalls; however,
nearly all modern firewalls on the market today are stateful firewalls and represent the
baseline for security in today's networks.
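
Conceptually, the session check is a lookup in a table of active flows: an inbound packet is
accepted if it is the reverse of a flow that an earlier outbound packet created. The sketch
below illustrates only that idea; the 5-tuple structure, the linear search, and the fixed table
size are assumptions and do not reflect how any shipping firewall stores its state.

#include <stdint.h>

#define MAX_SESSIONS 4096

/* A flow is identified by its 5-tuple. */
struct session {
    uint32_t src_addr, dst_addr;
    uint16_t src_port, dst_port;
    uint8_t  protocol;              /* e.g. 6 = TCP, 17 = UDP */
    int      active;
};

static struct session sessions[MAX_SESSIONS];

/* Called for permitted outbound packets: remember the flow. */
void session_add(uint32_t sa, uint16_t sp, uint32_t da, uint16_t dp, uint8_t proto)
{
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (!sessions[i].active) {
            sessions[i] = (struct session){ sa, da, sp, dp, proto, 1 };
            return;
        }
    }
}

/* Called for inbound packets: permit only the reverse of a known flow. */
int session_permits_inbound(uint32_t sa, uint16_t sp, uint32_t da, uint16_t dp,
                            uint8_t proto)
{
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (sessions[i].active && sessions[i].protocol == proto &&
            sessions[i].src_addr == da && sessions[i].src_port == dp &&
            sessions[i].dst_addr == sa && sessions[i].dst_port == sp)
            return 1;               /* reply traffic for an existing session */
    }
    return 0;                       /* no match: fall through to the static rules */
}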

Transparent Firewalls

Transparent firewalls (also known as bridging firewalls) are not a completely new
firewall but rather a subset of stateful firewalls. Whereas nearly all firewalls operate at
the IP layer and above, transparent firewalls sit at Layer 2, the data link layer, and
monitor Layer 3+ traffic. Additionally, the transparent firewall can apply packet-filtering
rules like any other stateful firewall and still appear invisible to the end user. In essence,
the transparent firewall acts as a filtering bridge between two network segments. It
represents an excellent way of applying a security policy in the middle of a network
segment without having to apply a NAT filter. The benefits of a transparent, bridging
firewall fall into three general categories:

• Zero configuration
• Performance
• Stealth

The bridging firewall requires no changes to the underlying network, which is possible
simply because the transparent bridging firewall is plugged in-line with the network it is
protecting. Because it operates at the data link layer, no IP address changes are required.
The firewall can be placed so as to segment a network subnet between low-security and
higher-security systems or to protect a single host if necessary.

Because bridging firewalls tend to be simpler than their Layer 3 cousins, they have a
lower processing overhead. That lower overhead enables them to provide better
performance as well as deeper packet inspection.

Finally, their stealth nature stems directly from the fact that they are Layer 2 devices. The
network interfaces of bridging firewalls have no IP addresses (other than the management
interface) assigned to them and therefore are invisible to an attacker. The firewall cannot
be attacked because, basically, it cannot be reached.

Virtual Firewalls
Virtual firewalls are multiple logical firewalls running on a single physical device. This
arrangement allows for multiple networks to be protected by a unique firewall running a
unique security policy all in one physical appliance. A service provider can provide
firewall services for multiple customers, securing and separating their traffic while
managing the entire system on one device. Service providers do so by defining separate
security domains for each customer with each domain controlled by a separate logical
virtual firewall.


PROBLEMS WITH STANDARD FIREWALL

Conventional firewalls rely on the notions of restricted topology and control entry points
to function. More precisely, they rely on the assumption that everyone on one side of the
entry point--the firewall--is to be trusted, and that anyone on the other side is, at least
potentially, an enemy.

Some of the problems with conventional firewalls that lead to distributed firewalls are as
follows.
• Due to the increasing line speeds and the more computation-intensive protocols
that a firewall must support, firewalls tend to become congestion points. This gap
between processing and networking speeds is likely to increase, at least for the
foreseeable future; while computers (and hence firewalls) are getting faster, the
combination of more complex protocols and the tremendous increase in the
amount of data that must be passed through the firewall has been and likely will
continue to outpace Moore’s Law .
• There exist protocols, and new protocols are designed, that are difficult to process
at the firewall, because the latter lacks certain knowledge that is readily available
at the endpoints. FTP and RealAudio are two such protocols. Although there exist
application-level proxies that handle such protocols, such solutions are viewed as
architecturally “unclean” and in some cases too invasive.
• Likewise, because of the dependence on the network topology, a conventional firewall can only
enforce a policy on traffic that traverses it. Thus, traffic exchanged among nodes
in the protected network cannot be controlled. This gives an attacker that is
already an insider or can somehow bypass the firewall complete freedom to act.


• Worse yet, it has become trivial for anyone to establish a new, unauthorized entry
point to the network without the administrator’s knowledge and consent. Various
forms of tunnels, wireless, and dial-up access methods allow individuals to
establish backdoor access that bypasses all the security mechanisms provided by
traditional firewalls. While firewalls are in general not intended to guard against
misbehavior by insiders, there is a tension between internal needs for more
connectivity and the difficulty of satisfying such needs with a centralized firewall.
• IPsec is a protocol suite, recently standardized by the IETF, which provides
network-layer security services such as packet confidentiality, authentication, data
integrity, replay protection, and automated key management.
• This is an artifact of firewall deployment: internal traffic that is not seen by the
firewall cannot be filtered; as a result, internal users can mount attacks on other
users and networks without the firewall being able to intervene.
• Large networks today tend to have a large number of entry points (for
performance, failover, and other reasons). Furthermore, many sites employ
internal firewalls to provide some form of compartmentalization. This makes
administration particularly difficult, both from a practical point of view and with
regard to policy consistency, since no unified and comprehensive management
mechanism exists.
• End-to-end encryption can also be a threat to firewalls, as it prevents them from
looking at the packet fields necessary to do filtering. Allowing end-to-end
encryption through a firewall implies considerable trust to the users on behalf of
the administrators.
• Finally, there is an increasing need for finer-grained access control which
standard firewalls cannot readily accommodate without greatly increasing their
complexity and processing requirements.


DISTRIBUTED FIREWALL

Distributed firewalls are host-resident security software applications that protect the
enterprise network's critical endpoints against unwanted intrusion, that is, its servers and
end-user machines. In this concept, the security policy is defined centrally and the
enforcement of the policy takes place at each endpoint (hosts, routers, etc). Usually
deployed behind the traditional firewall, they provide a second layer of protection.

Since all the hosts on the inside are trusted equally, if any of these machines are
subverted, they can be used to launch attacks against other hosts, especially against trusted
hosts for protocols such as rlogin. Thus there is a concerted effort by industry security
organizations to move towards a system that has all the aspects of a desktop firewall but
with centralized management, namely distributed firewalls.

Distributed, host-resident firewalls prevent the hacking of both the PC and its use as an
entry point into the enterprise network. A compromised PC can make the whole network
vulnerable to attacks. The hacker can penetrate the enterprise network uncontested and
steal or corrupt corporate assets.


Basic working

Distributed firewalls are often kernel-mode applications that sit at the bottom of the OSI
stack in the operating system. They filter all traffic regardless of its origin -- the Internet
or the internal network. They treat both the Internet and the internal network as
"unfriendly". They guard the individual machine in the same way that the perimeter
firewall guards the overall network.

Distributed firewalls rest on three notions:

a) A policy language that states what sort of connections are permitted or prohibited,
b) Any of a number of system management tools, such as Microsoft's SMS or ASD, and
c) IPSEC, the network-level encryption mechanism for TCP/IP.

The basic idea is simple. A compiler translates the policy language into some internal
format. The system management software distributes this policy file to all hosts that are
protected by the firewall. Incoming packets are then accepted or rejected by each "inside"
host, according to both the policy and the cryptographically-verified identity of each
sender.
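
A rough sketch of the per-host decision step follows: the host looks up the
cryptographically verified identity of the sender and the requested service in the locally
installed policy. The rule structure, the sample entries, and the first-match semantics are
illustrative assumptions; the real format is whatever internal representation the policy
compiler emits.

#include <string.h>

/* One compiled policy rule: who may talk to which local service. */
struct rule {
    const char *peer_id;    /* certificate name, or "*" for anyone */
    const char *service;    /* e.g. "smtp", "ssh", or "*" */
    int         allow;      /* 1 = permit, 0 = deny */
};

/* Policy pushed out by the management station (illustrative content only). */
static const struct rule policy[] = {
    { "mailgw.example.com", "smtp", 1 },
    { "*",                  "ssh",  1 },
    { "*",                  "*",    0 },    /* default deny */
};

/* Decide whether an incoming connection is accepted; peer_id is the
 * cryptographically verified identity of the sender (e.g. from IPSEC). */
int host_accepts(const char *peer_id, const char *service)
{
    for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++) {
        const struct rule *r = &policy[i];
        if ((strcmp(r->peer_id, "*") == 0 || strcmp(r->peer_id, peer_id) == 0) &&
            (strcmp(r->service, "*") == 0 || strcmp(r->service, service) == 0))
            return r->allow;        /* first matching rule wins */
    }
    return 0;                       /* no rule matched: deny */
}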


POLICIES

One of the most frequently used terms in network security, and in distributed firewalls in
particular, is policy, so it is essential to understand it. A “security policy” defines the
security rules of a system. Without a defined security policy, there is no way to know
what access is allowed or disallowed.

A simple example of a firewall policy is:

• Allow all connections to the web server.


• Deny all other access.

The distribution of the policy can be different and varies with the
implementation. It can be either directly pushed to end systems, or pulled when
necessary.

Pull Technique

While booting up, the host pings the central management server to check whether it is up
and active. It then registers with the central management server and requests the policies
that it should implement. The central management server provides the host with its
security policies.

For example, a license server or a security clearance server can be asked if a certain
communication should be permitted. A conventional firewall could do the same, but it
lacks important knowledge about the context of the request. End systems may know
things like which files are involved, and what their security levels might be. Such
information could be carried over a network protocol, but only by adding complexity.

Push Technique

The push technique is employed when the policies are updated at the central management
side by the network administrator and the hosts have to be updated immediately. This
push technique ensures that the hosts always have the updated policies at any time.
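
As an illustration of the push model only, the sketch below has the management side
iterate over its registered hosts and send each one the freshly compiled (and signed)
policy over TCP. The port number, the framing, and the helper names are assumptions
invented for this example; the report does not prescribe a particular transport.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define POLICY_PORT 7011            /* assumed port of the host-side agent */

/* Push one signed policy blob to a single host; returns 0 on success. */
static int push_policy_to(const char *host_ip, const char *policy, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(POLICY_PORT);
    if (inet_pton(AF_INET, host_ip, &sin.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        close(fd);
        return -1;
    }

    ssize_t n = write(fd, policy, len);   /* real code would loop on short writes */
    close(fd);
    return (n == (ssize_t)len) ? 0 : -1;
}

/* Push an updated policy to every registered host immediately. */
void push_policy_to_all(const char *hosts[], int nhosts,
                        const char *policy, size_t len)
{
    for (int i = 0; i < nhosts; i++)
        if (push_policy_to(hosts[i], policy, len) != 0)
            fprintf(stderr, "push to %s failed; retry later\n", hosts[i]);
}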

The policy language defines which inbound and outbound connections on any component
of the network policy domain are allowed, and can affect policy decisions on any layer of
the network, whether that is rejecting or passing certain packets or enforcing policies at the
application layer.


Many possible policy languages can be used, including file-oriented schemes similar to
Firmato, the GUIs that are found on most modern commercial firewalls, and general
policy languages such as KeyNote. The exact nature is not crucial, though clearly the
language must be powerful enough to express the desired policy. A sample is shown in
Figure 1.

inside_net = x509{name="*.example.com"};
mail_gw = x509{name="mailgw.example.com"};
time_server = IPv4{10.1.2.3};
allow smtp(*, mail_gw);
allow smtp(mail_gw, inside_net);
allow ntp(time_server, inside_net);
allow *(inside_net, *);

Figure 1: A sample policy configuration file. SMTP from the outside can only reach
the machine with a certificate identifying it as the mail gateway; it, in turn, can speak
SMTP to all inside machines. NTP--a low-risk protocol that has its own application-level
protection--can be distributed from a given IP address to all inside machines. Finally, all
outgoing calls are permitted.


IDENTIFIERS

What is important is how the inside hosts are identified. Today's firewalls rely on
topology; thus, network interfaces are designated "inside", "outside", "DMZ", etc. We
abandon this notion, since distributed firewalls are independent of topology.

A second common host designator is IP address. That is, a specified IP address may be
fully trusted, able to receive incoming mail from the Internet, etc. Distributed firewalls
can use IP addresses for host identification, though with a reduced level of security.

Our preferred identifier is the name in the cryptographic certificate used with IPSEC.
Certificates can be a very reliable unique identifier. They are independent of topology;
furthermore, ownership of a certificate is not easily spoofed. If a machine is granted
certain privileges based on its certificate, those privileges can apply regardless of where
the machine is located physically.


COMPONENTS OF A DISTRIBUTED FIREWALL

• A central management system for designing the policies.
• A transmission system to transmit these policies.
• Implementation of the designed policies at the client end.

Central management System

Central Management, a component of distributed firewalls, makes it practical to secure
enterprise-wide servers, desktops, laptops, and workstations. Central management
provides greater control and efficiency and it decreases the maintenance costs of
managing global security installations. This feature addresses the need to maximize
network security resources by enabling policies to be centrally configured, deployed,
monitored, and updated. From a single workstation, distributed firewalls can be scanned
to understand the current operating policy and to determine if updating is required.

Policy Distribution

The policy distribution scheme should guarantee the integrity of the policy during
transfer. The distribution of the policy can be different and varies with the
implementation. It can be either directly pushed to end systems, or pulled when
necessary.

Host End Implementation

The security policies transmitted from the central management server have to be
implemented by the host. The host end part of the Distributed Firewall does not provide any
administrative control for the network administrator to control the implementation of
policies. The host allows traffic based on the security rules it has implemented.


IMPLEMENTING A DISTRIBUTED FIREWALL

The distributed firewall uses a central policy, but pushes enforcement towards the edges.
That is, the policy defines what connectivity, inbound and outbound, is permitted; this
policy is distributed to all endpoints, which enforce it.

To implement a distributed firewall, we need a security policy language that can describe
which connections are acceptable, an authentication mechanism, and a policy distribution
scheme. As a policy specification language, we use the KeyNote trust-management
system, described further in the KeyNote section below. As an authentication mechanism, we decided to
use IPsec for traffic protection and user/host authentication. When it comes to policy
distribution, we have a number of choices:
• We can distribute the KeyNote (or other) credentials to the various end users. The
users can then deliver their credentials to the end hosts through the IKE protocol.
The users do not have to be online for the policy update; rather, they can
periodically retrieve the credentials from a repository (web server). Since the
credentials are signed and can be transmitted over an insecure connection, users
could retrieve their new credentials even when the old ones have expired.
• The credentials can be pushed directly to the end hosts, where they would be
immediately available to the policy verifier. Since every host would need a large
number, if not all, of the credentials for every user, the storage and transmission
bandwidth requirements are higher than in the previous case.
• The credentials can be placed in a repository where they can be fetched as needed
by the hosts. This requires constant availability of the repository, and may impose
some delays in the resolution of a request (such as a TCP connection
establishment).

The first case is probably the most attractive from an engineering point of view.
However, some IPsec implementations do not support connection-grained security.
Furthermore, since IPsec is not (yet) in wide use, it is desirable to allow for policy-based
filtering that does not depend on IPsec. Thus, it is necessary to provide a policy
resolution mechanism that takes into consideration the connection parameters, the local
policies, and any available credentials, and determines whether the connection should be
allowed.


KeyNote

Trust Management is a relatively new approach to solving the authorization and security
policy problem. Making use of public key cryptography for authentication, trust
management dispenses with unique names as an indirect means for performing access
control. Instead, it uses a direct binding between a public key and a set of authorizations,
as represented by a safe programming language. This results in an inherently
decentralized authorization system with sufficient expressibility to guarantee flexibility in
the face of novel authorization scenarios.

Figure : Application Interactions with KeyNote. The Requester is typically a user that
authenticates through some application-dependent protocol, and optionally provides
credentials. The Verifier needs to determine whether the Requester is allowed to perform
the requested action. It is responsible for providing to KeyNote all the necessary
information, the local policy, and any credentials. It is also responsible for acting upon
KeyNote’s response.

One instance of a trust-management system is KeyNote. KeyNote provides a simple
notation for specifying both local security policies and credentials that can be sent over
an untrusted network. Policies and credentials contain predicates that describe the trusted
actions permitted by the holders of specific public keys (otherwise known as principals).
Signed credentials, which serve the role of “certificates,” have the same syntax as policy
assertions, but are also signed by the entity delegating the trust.

Applications communicate with a “KeyNote evaluator” that interprets KeyNote
assertions and returns results to applications, as shown in the figure above. However, different
hosts and environments may provide a variety of interfaces to the KeyNote evaluator
(library, UNIX daemon, kernel service, etc.).

A KeyNote evaluator accepts as input a set of local policy and credential assertions, and a
set of attributes, called an “action environment,” that describes a proposed trusted action
associated with a set of public keys (the requesting principals). The KeyNote evaluator
determines whether proposed actions are consistent with local policy by applying the
assertion predicates to the action environment. The KeyNote evaluator can return values
other than simply true and false, depending on the application and the action environment
definition.
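
The sequence described above (load assertions, describe the proposed action, ask for a
decision) maps naturally onto the keynote(3) library shipped with OpenBSD. The sketch
below reflects our reading of that API (kn_init, kn_add_assertion, kn_add_action,
kn_do_query, kn_close); the particular action attributes and flag values used here are
assumptions for illustration and not the policy daemon's actual code.

#include <string.h>
#include <keynote.h>    /* KeyNote trust-management reference implementation */

/* Ask KeyNote whether a proposed connection is authorized.  policy and
 * credential are NUL-terminated KeyNote assertions; the action attributes
 * describe the connection being requested. */
int connection_authorized(char *policy, char *credential,
                          char *local_port, char *remote_address)
{
    static char *return_values[] = { "false", "true" };  /* ordered low to high */
    int sessid, result = 0;

    if ((sessid = kn_init()) == -1)
        return 0;                                         /* fail closed */

    /* Local (unsigned) policy assertions are flagged as such. */
    kn_add_assertion(sessid, policy, strlen(policy), ASSERT_FLAG_LOCAL);

    /* Signed credentials acquired via IKE or fetched from a repository. */
    if (credential != NULL)
        kn_add_assertion(sessid, credential, strlen(credential), 0);

    /* The "action environment": attributes describing the request. */
    kn_add_action(sessid, "app_domain", "Distributed Firewall", 0);
    kn_add_action(sessid, "local_port", local_port, 0);
    kn_add_action(sessid, "remote_address", remote_address, 0);

    /* Returns an index into return_values (1 == "true") or -1 on error. */
    if (kn_do_query(sessid, return_values, 2) == 1)
        result = 1;

    kn_close(sessid);
    return result;
}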

An important concept in KeyNote (and, more generally, in trust management) is
“monotonicity”. This simply means that given a set of credentials associated with a
request, if there is any subset that would cause the request to be approved then the
complete set will also cause the request to be approved. This greatly simplifies both
request resolution (even in the presence of conflicts) and credential management.
Monotonicity is enforced by the KeyNote language (it is not possible to write non-
monotonic policies).

It is worth noting here that although KeyNote uses cryptographic keys as principal
identifiers, other types of identifiers may also be used. For example, usernames may be
used to identify principals inside a host. In this environment, delegation must be
controlled by the operating system (or some implicitly trusted application), similar to the
mechanisms used for transferring credentials in UNIX or in capability-based systems.

Also, in the absence of cryptographic authentication, the identifier of the principal
requesting an action must be securely established. In the example of a single host, the
operating system can provide this information.


KeyNote-Version: 2
Authorizer: "POLICY"
Licensees: "rsa-hex:1023abcd"
Comment: Allow Licensee to connect to local port 23 (telnet) from internal
    addresses only, or to port 22 (ssh) from anywhere. Since this is a
    policy, no signature field is required.
Conditions: (local_port == "23" && protocol == "tcp" &&
    remote_address > "158.130.006.000" &&
    remote_address < "158.130.007.255") -> "true";
    local_port == "22" && protocol == "tcp" -> "true";

KeyNote-Version: 2
Authorizer: "rsa-hex:1023abcd"
Licensees: "dsa-hex:986512a1" || "x509-base64:19abcd02=="
Comment: Authorizer delegates SSH connection access to either of the
    Licensees, if coming from a specific address.
Conditions: (remote_address == "139.091.001.001" &&
    local_port == "22") -> "true";
Signature: "rsa-md5-hex:f00f5673"

Figure: Example KeyNote Policy and Credential. The local policy allows a particular user
(as identified by their public key) connect access to the telnet port from internal addresses, or
to the SSH port from any address. That user then delegates to two other users (keys) the
right to connect to SSH from one specific address. Note that the first key can effectively
delegate at most the same rights it possesses. KeyNote does not allow rights amplification;
any delegation acts as refinement.

In our prototype, end hosts (as identified by their IP address) are also considered
principals when IPsec is not used to secure communications. This allows local policies or
credentials issued by administrative keys to specify policies similar to current packet
filtering rules.

In the context of the distributed firewall, KeyNote allows us to use the same, simple
language for both policy and credentials. The latter, being signed, may be distributed over
an insecure communication channel. In KeyNote, credentials may be considered as an
extension, or refinement, of local policy; the union of all policy and credential assertions
is the overall network security policy. Alternately, credentials may be viewed as parts of
a hypothetical access matrix. End hosts may specify their own security policies, or they
may depend exclusively on credentials from the administrator, or do anything in between
these two ends of the spectrum. Perhaps of more interest, it is possible to “merge”
policies from different administrative entities and process them unambiguously, or to
layer them in increasing levels of refinement. This merging can be expressed in the
KeyNote language, in the form of intersection (conjunction) and union (disjunction) of
the component sub-policies.


Although KeyNote uses a human-readable format and it is indeed possible to write
credentials and policies that way, our ultimate goal is to use it as an interoperability-layer
language that “ties together” the various applications that need access control services.
An administrator would use a higher-level language or GUI to specify correspondingly
higher-level policy and then have this compiled to a set of KeyNote credentials. This
higher-level language would provide grouping mechanisms and network-specific
abstractions that are not present in KeyNote. Using KeyNote as the middle language
offers a number of benefits:

• It can handle a variety of different applications (since it is application-independent
but customizable), allowing for more comprehensive and mixed-level policies (e.g.,
covering email, active code content, IPsec, etc.).
• It provides built-in delegation, thus allowing for decentralized administration.
• It allows for incremental or localized policy updates (as only the relevant
credentials need to be modified, produced, or revoked).

KeyNote-Version: 2
Authorizer: "rsa-hex:1023abcd"
Licensees: "IP:158.130.6.141"
Conditions: (@remote_port < 1024 && @local_port == 22) -> "true";
Signature: "rsa-sha1-hex:bee11984"

Figure: An example credential where an (administrative) key delegates to an IP address.
This would allow the specified address to connect to the local SSH port, if the connection
is coming from a privileged port. Since the remote host has no way of supplying the
credential to the distributed firewall through a security protocol like IPsec, the distributed
firewall must search for such credentials or must be provided with them when policy is
generated/updated.

IMPLEMENTATION

Most of the work was done on the OpenBSD operating system. OpenBSD provides an
attractive platform for developing security applications because of the well-integrated
security features and libraries (an IPsec stack, SSL, KeyNote, etc.). However, similar
implementations are possible under other operating systems.

The system is comprised of three components:
• a set of kernel extensions, which implement the enforcement mechanisms,
• a user level daemon process, which implements the distributed firewall policies, and
• a device driver, which is used for two-way communication between the kernel and
the policy daemon.


Figure: A graphical representation of the system, with all its components. The core of the
enforcement mechanism lives in kernel space and is
comprised of the two modified system calls that interest us, connect and accept. The
policy specification and processing unit lives in user space inside the policy daemon
process. The two units communicate via a loadable pseudo device driver interface.
Messages travel from the system call layer to the user level daemon and back using the
policy context queue.

Kernel Extensions
For our working prototype we focused our efforts on the control of TCP connections.
Similar principles can be applied to other protocols; for unreliable protocols, some form
of reply caching is desirable to improve performance.

In the UNIX operating system, users create outgoing and allow incoming TCP
connections using the connect and accept system calls, respectively. Since any user has access
to these system calls, some “filtering” mechanism is needed. This filtering should be
based on a policy that is set by the administrator. Filters can be implemented either in
user space or inside the kernel. Each has its advantages and disadvantages.

A user level approach, as depicted in Figure below, requires each application of interest
to be linked with a library that provides the required security mechanisms, e.g., a
modified libc. This has the advantage of operating system-independence, and thus does
not require any changes to the kernel code. However, such a scheme does not guarantee
that the applications will use the modified library, potentially leading to a major security
problem.


Figure: Wrappers for filtering the connect and accept system calls are added to a system
library. While this approach offers considerable flexibility, it suffers from its inability to
guarantee the enforcement of security policies, as applications might not link with the
appropriate library.

A kernel level approach, as shown in the left side of the figure, requires modifications to the
operating system kernel. This restricts us to open source operating systems like BSD and
Linux. The main advantage of this approach is that the additional security mechanisms
can be enforced transparently on the applications.

As we mentioned previously, the two system calls we need to filter are connect and
accept. When connect is issued by a user application and the call traps into the kernel, we
create what we call a policy context (see Figure), associated with that connection.

The policy context is a container for all the information related to that specific
connection. We associate a sequence number to each such context and then we start
filling it with all the information the policy daemon will need to decide whether to permit
it or not. In the case of the connect, this includes the ID of the user that initiated the
connection, the destination address and port, etc.

Any credentials acquired through IPsec may also be added to the context at this stage.
There is no limit as to the kind or amount of information we can associate with a context.
We can, for example, include the time of day or the number of other open connections of
that user, if we want them to be considered by our decision–making strategy.

Once all the information is in place, we commit that context. The commit operation adds
the context to the list of contexts the policy daemon needs to handle. After this, the
application is blocked waiting for the policy daemon reply.

Accepting a connection works in a similar fashion. When accept enters the kernel, it
blocks until an incoming connection request arrives. Upon receipt, we allocate a new
context which we fill in similarly to the connect case. The only difference is that we now
also include the source address and port. The context is then enqueued, and the process
blocks waiting for a reply from the policy daemon.


typedef struct policy_mbuf policy_mbuf;
struct policy_mbuf {
    policy_mbuf *next;                 /* next buffer in the chain */
    int length;                        /* number of bytes used in data[] */
    char data[POLICY_DATA_SIZE];
};

typedef struct policy_context policy_context;
struct policy_context {
    policy_mbuf *p_mbuf;               /* buffers holding the request fields */
    u_int32_t sequence;                /* sequence number identifying this context */
    char *reply;                       /* decision returned by the policy daemon */
    policy_context *policy_context_next;
};

/* Create, destroy, and enqueue a context for the policy daemon. */
policy_context *policy_create_context(void);
void policy_destroy_context(policy_context *);
void policy_commit_context(policy_context *);

/* Add named attributes (integers, strings, IPv4 addresses) to a context. */
void policy_add_int(policy_context *, char *, int);
void policy_add_string(policy_context *, char *, char *);
void policy_add_ipv4addr(policy_context *, char *, in_addr_t *);

Figure 6: The connect(2) and accept(2) system calls create contexts which contain
information relevant to that connection. These are appended to a queue from which the
policy daemon will receive and process them. The policy daemon will then return to the
kernel a decision on whether to accept or deny the connection.
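
To show how the interface of Figure 6 might be used, here is a sketch of the kernel-side
check performed in the connect path. The attribute names and the blocking helper
policy_wait_for_reply are assumptions made for the illustration, and the declarations of
Figure 6 are assumed to be in scope; the report does not list the exact attribute set.

#include <sys/types.h>
#include <netinet/in.h>
#include <errno.h>

/* Assumed helper: blocks the calling process until the policy daemon has
 * written its verdict back for this context. */
extern int policy_wait_for_reply(policy_context *);

/* Sketch: called from the connect(2) path once the kernel knows the
 * destination of the pending connection, before it is allowed to proceed. */
int policy_check_connect(uid_t uid, in_addr_t *dst_addr, u_int16_t dst_port)
{
    policy_context *ctx = policy_create_context();
    if (ctx == NULL)
        return EACCES;                      /* fail closed */

    /* Fill the context with the facts the policy daemon will need. */
    policy_add_int(ctx, "uid", (int)uid);
    policy_add_int(ctx, "destination_port", (int)dst_port);
    policy_add_ipv4addr(ctx, "destination_address", dst_addr);

    /* Append the context to the policy context queue and wait for a verdict. */
    policy_commit_context(ctx);
    return policy_wait_for_reply(ctx);      /* 0 = permit, EACCES = deny */
}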

In the next section we discuss how messages are passed between the kernel and the
policy daemon.

Policy Device

To maximize the flexibility of our system and allow for easy experimentation, we
decided to make the policy daemon a user level process. To support this architecture, we
implemented a pseudo device driver, /dev/policy, that serves as a communication path
between the user-space policy daemon and the modified system calls in the kernel.

Our device driver supports the usual operations (open, close, read, write, and ioctl).
Furthermore, we have implemented the device driver as a loadable module. This
increases the functionality of our system even more, since we can add functionality
dynamically, without needing to recompile the whole kernel.

If no policy daemon has opened /dev/policy, no connection filtering is done. Opening the
device activates the distributed firewall and initializes data structures. All subsequent
connect and accept calls will go through the procedure described in the previous section.
Closing the device will free any allocated resources and disable the distributed firewall.


When reading from the device the policy daemon blocks until there are requests to be
served. The policy daemon handles the policy resolution messages from the kernel, and
writes back a reply. The write is responsible for returning the policy daemon's decision to
the blocked connection call, and then waking it up.

It should be noted that both the device and the associated messaging protocol are not tied
to any particular type of application, and may in fact be used without any modifications
by other kernel components that require similar security policy handling.

Finally, we have included an ioctl call for “house-keeping”. This allows the kernel and
the policy daemon to re-synchronize in case of any errors in creating or parsing the
request messages, by discarding the current policy context and dropping the associated
connection.
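
A minimal sketch of the daemon side of this exchange is shown below: open /dev/policy,
block in read until the kernel queues a request, decide, and write the verdict back so the
blocked system call can be woken up. The reply layout used here (the sequence number
followed by a one-byte decision) is an assumption; the text only describes the general
open/read/write behaviour of the device.

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define REQ_MAX 4096

/* Deny-all stub standing in for the KeyNote-based evaluation described in
 * the next section: returns 1 to permit the connection, 0 to deny it. */
static int evaluate_request(const unsigned char *req, ssize_t len)
{
    (void)req; (void)len;
    return 0;
}

int main(void)
{
    unsigned char request[REQ_MAX];

    /* Opening the device activates the distributed firewall. */
    int fd = open("/dev/policy", O_RDWR);
    if (fd < 0) {
        perror("open /dev/policy");
        return 1;
    }

    for (;;) {
        /* read(2) blocks until the kernel queues a policy context. */
        ssize_t n = read(fd, request, sizeof(request));
        if (n < 4)
            break;

        /* Decide, then write the verdict back; the kernel uses it to wake up
         * the blocked connect or accept call. */
        unsigned char reply[5];
        memcpy(reply, request, 4);          /* echo the sequence number */
        reply[4] = evaluate_request(request, n) ? 1 : 0;
        if (write(fd, reply, sizeof(reply)) != sizeof(reply))
            break;
    }

    close(fd);
    return 0;
}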

Policy Daemon
The third and last component of our system is the policy daemon. It is a user level
process responsible for making decisions, based on policies that are specified by some
administrator and credentials retrieved remotely or provided by the kernel, on whether to
allow or deny connections. Policies, as shown in Figure 2, are initially read in from a file.
It is possible to remove old policies and add new ones dynamically. In the current
implementation, such policy changes only affect new connections.

Communication between the policy daemon and the kernel is possible using the policy
device. The daemon receives each request (see Figure 7) from the kernel by reading the
device. The request contains all the information relevant to that connection. Processing of
the request is done by the daemon using the KeyNote library, and a decision to accept or
deny it is reached. Finally the daemon writes the reply back to the kernel and waits for
the next request. While the information received in a particular message is application-
dependent, the daemon itself has no awareness of the specific application.

When using a remote repository server, the daemon can fetch a credential based on the ID
of the user associated with a connection, or with the local or remote IP address. A very
simple approach to that is fetching the credentials via HTTP from a remote web server.
The credentials are stored by user ID and IP address, and provided to anyone requesting
them. If credential “privacy” is a requirement, one could secure this connection using
IPsec or SSL. To avoid potential deadlocks, the policy daemon is not subject to the
connection filtering mechanism.


u_int32_t seq;      /* Sequence Number */
u_int32_t uid;      /* User Id */
u_int32_t N;        /* Number of Fields */
u_int32_t l[N];     /* Lengths of Fields */
char *field[N];     /* Fields */
Figure 7: The request to the policy daemon is comprised of the following fields: a
sequence number uniquely identifying the request, the ID of the user the connection
request belongs to, the number of information fields that will be included in the request,
the lengths of those fields, and finally the fields themselves.
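
Given the layout in Figure 7, the daemon can unpack a request with a routine such as the
following sketch. It assumes the integers arrive in host byte order and that the buffer was
produced by the kernel component described earlier; error handling is reduced to simple
bounds checks, and the structure and function names are our own.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_FIELDS 32

struct request {
    uint32_t seq;                   /* sequence number */
    uint32_t uid;                   /* user ID of the requesting process */
    uint32_t nfields;               /* number of information fields */
    char    *field[MAX_FIELDS];     /* NUL-terminated copies of the fields */
};

/* Unpack a request laid out as in Figure 7: seq, uid, N, N lengths, N fields.
 * Returns 0 on success, -1 on a malformed buffer. */
int parse_request(const unsigned char *buf, size_t len, struct request *req)
{
    uint32_t lengths[MAX_FIELDS];
    size_t off = 0;

    if (len < 3 * sizeof(uint32_t))
        return -1;
    memcpy(&req->seq, buf + off, 4);     off += 4;
    memcpy(&req->uid, buf + off, 4);     off += 4;
    memcpy(&req->nfields, buf + off, 4); off += 4;
    if (req->nfields > MAX_FIELDS || len < off + req->nfields * 4)
        return -1;

    for (uint32_t i = 0; i < req->nfields; i++) {
        memcpy(&lengths[i], buf + off, 4);
        off += 4;
    }
    for (uint32_t i = 0; i < req->nfields; i++) {
        if (off + lengths[i] > len)
            return -1;
        req->field[i] = malloc(lengths[i] + 1);
        if (req->field[i] == NULL)
            return -1;
        memcpy(req->field[i], buf + off, lengths[i]);
        req->field[i][lengths[i]] = '\0';
        off += lengths[i];
    }
    return 0;
}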

EXAMPLE SCENARIO

To better explain the interaction of the various components in the distributed firewall, we
discuss the course of events during two incoming TCP connection requests, one of which
is IPsec-protected. The local host where the connection arrives is part of a distributed
firewall, and has a local policy as shown in Figure 8.

KeyNote-Version: 2
Authorizer: "POLICY"
Licensees: ADMINISTRATIVE_KEY

Figure 8: End-host local security policy. In our particular scenario, the policy simply
states that some administrative key will specify our policy, in the form of one or more
credentials. The lack of a Conditions field means that there are no restrictions imposed on
the policies specified by the administrative key.

In the case of a connection coming in over IPsec, the remote user or host will have
established an IPsec Security Association with the local host using IKE. As part of the
IKE exchange, a KeyNote credential as shown in Figure 9 is provided to the local host.
Once the TCP connection is received, the kernel will construct the appropriate context.
This context will contain the local and remote IP addresses and ports for the connection,
the fact that the connection is protected by IPsec, the time of day, etc.
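
Such a context might be represented along the following lines. The struct and its field
names are an illustrative guess based only on the attributes listed above; they are not the
kernel's actual data structure.

/*
 * Illustrative sketch of a per-connection policy context holding the
 * attributes mentioned above (addresses, ports, IPsec status, user,
 * time of day). Field names are assumptions, not real kernel code.
 */
#include <netinet/in.h>
#include <stdint.h>
#include <sys/types.h>
#include <time.h>

struct connection_context {
    struct in_addr local_addr;   /* local IP address  */
    struct in_addr remote_addr;  /* remote IP address */
    uint16_t       local_port;   /* local TCP port    */
    uint16_t       remote_port;  /* remote TCP port   */
    int            ipsec;        /* 1 if the connection arrived over IPsec */
    uid_t          uid;          /* user the connection request belongs to */
    time_t         when;         /* time of day the request was made */
};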

This information along with the credential acquired via IPsec will be passed to the policy
daemon. The policy daemon will perform a KeyNote evaluation using the local policy
and the credential, and will determine whether the connection is authorized or not. In our
case, the positive response will be sent back to the kernel, which will then permit the TCP
connection to proceed. Note that more credentials may be provided during the IKE
negotiation (for example, a chain of credentials delegating authority).

If KeyNote does not authorize the connection, the policy daemon will try to acquire
relevant credentials by contacting a remote server where these are stored. In our current
implementation, we use a web server as the credential repository. In a large-scale
network, a distributed/replicated database could be used instead.

The policy daemon uses the public key of the remote user (when it is known, i.e., when
IPsec is in use) and the IP address of the remote host as the keys to look up credentials
with; more specifically, credentials where the user's public key or the remote host's
address appears in the Licensees field are retrieved and cached locally (Figure 3 lists an
example credential that refers to an IP address). These are then used in conjunction with
the information provided by the kernel to re-examine the request. If the request is again
denied, the connection is ultimately refused.

KeyNote-Version: 2
Authorizer: ADMINISTRATIVE_KEY
Licensees: USER_KEY
Conditions: (app_domain == "IPsec policy" &&
             encryption_algorithm == "3DES" &&
             local_address == "158.130.006.141") -> "true";
            (app_domain == "Distributed Firewall" &&
             @local_port == 23 && encrypted == "yes" &&
             authenticated == "yes") -> "true";
Signature: ...

Figure 9: A credential from the administrator to some user, authorizing that user to
establish an IPsec Security Association (SA) with the local host and to connect to port 23
(telnet) over that SA. To do this, we use the fact that multiple expressions can be included
in a single KeyNote credential. Since IPsec also enforces some form of access control on
packets, we could simplify the overall architecture by skipping the security check for TCP
connections coming over an IPsec tunnel. In that case, we could simply merge the two
clauses (the IPsec policy clause could specify that the specific user may talk to TCP port
23 only over that SA).
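
For reference, a check against this credential could be driven through the KeyNote library
roughly as follows. This is a sketch that assumes the kn_init/kn_add_assertion/
kn_add_action/kn_do_query interface of the KeyNote reference implementation; the flag
values, the omitted error handling, and the way the policy and credential text are obtained
are all assumptions.

/*
 * Sketch of a KeyNote query for the scenario of Figure 9, assuming the
 * kn_* interface of the KeyNote reference library. Loading of the
 * policy and credential text, and most error handling, are omitted.
 */
#include <keynote.h>
#include <string.h>

int authorize_telnet_over_ipsec(char *policy, char *credential)
{
    char *retvals[] = { "false", "true" };   /* ordered weakest to strongest */
    int sid = kn_init();

    if (sid == -1)
        return 0;

    /* Local policy (Figure 8) and the credential supplied via IKE (Figure 9). */
    kn_add_assertion(sid, policy, strlen(policy), ASSERT_FLAG_LOCAL);
    kn_add_assertion(sid, credential, strlen(credential), 0);

    /* Action attributes drawn from the Conditions field in Figure 9. */
    kn_add_action(sid, "app_domain", "Distributed Firewall", 0);
    kn_add_action(sid, "local_port", "23", 0);
    kn_add_action(sid, "encrypted", "yes", 0);
    kn_add_action(sid, "authenticated", "yes", 0);

    /* Index 1 corresponds to "true", i.e. the connection is authorized. */
    int result = kn_do_query(sid, retvals, 2);

    kn_close(sid);
    return result == 1;
}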

THREAT COMPARISON

Distributed firewalls have both strengths and weaknesses when compared to conventional
firewalls. By far the biggest difference, of course, is the conventional firewall's reliance
on topology. If your topology does not permit reliance on traditional firewall techniques,
there is little choice. A more interesting question is how the two types compare in a
closed, single-entry network. That is, if either will work, is there a reason to choose one
over the other?

Service Exposure and Port Scanning


Both types of firewalls are excellent at rejecting connection requests for inappropriate
services. Conventional firewalls drop the requests at the border; distributed firewalls do
so at the host. A more interesting question is what the host attempting to connect notices.
Today, such packets are typically discarded with no notification. A distributed firewall
may choose to discard the packet, under the assumption that its legitimate peers know to
use IPsec; alternatively, it may send back a response requesting that the connection be
authenticated, which in turn reveals the existence of the host.

Firewalls built on pure packet filters cannot reject some "stealth scans" very well. One
technique, for example, uses fragmented packets that can pass through unexamined
because the port numbers aren't present in the first fragment. A distributed firewall will
reassemble the packet and then reject it. On balance, against this sort of threat the two
firewall types are at least comparable.

Application-level Proxies
Some services require an application-level proxy. Conventional firewalls often have an
edge here; the filtering code is complex and not generally available on host platforms. As
noted, a hybrid technique can often be used to overcome this disadvantage.

In some cases, of course, application-level controls can avoid the problem entirely. If the
security administrator can configure all Web browsers to reject ActiveX, there is no need
to filter incoming HTML via a proxy.

In other cases, a suitably sophisticated IPsec implementation will suffice. For example,
there may be no need to use a proxy that scans outbound FTP control messages for PORT
commands, if the kernel will permit an application that has opened an outbound
connection to receive inbound connections. This is more or less what such a proxy would
do.

A more serious issue concerns inadvertent errors when using application-level policies.
An administrator might distribute configuration restrictions for Netscape Navigator or
Microsoft's Internet Explorer. But what would happen if a user, in all innocence, were to
install the Opera browser? The only real solution here is a standardized way to express
application-level policies across all applications of a given type. We do not see that
happening any time soon.

Denial of Service

There is a variety of DoS attacks, and not all of them can be handled by either kind of
firewall. Although no restrictive assumptions can be made about the overall network
topology, it is likely that a set of hosts inside the network policy domain will be located
physically near each other and thus use the same connection to the untrusted network.
Neither a conventional firewall nor a distributed one can efficiently prevent DoS attacks
on the network perimeter, although intentionally spreading mission-critical hosts across
physically separated networks makes such an attack more difficult for an adversary. On
the other hand, distributed firewalls can cope quite well with DoS attacks that depend on
IP spoofing, assuming the authorization mechanisms do not rely on IP addresses as
credentials. On the contrary, when a trusted repository is used for credentials, policies, or
both, it should be clear that the network devices implementing it will be subject to
extensive attacks, given the end points' overall dependence on its availability.

The "smurf" attack primarily consumes the bandwidth on the access line from an ISP to
the target site. Neither form of firewall offers an effective defense. If one is willing to
change the topology, both can be moderately effective. Conventional firewalls can be
located at the ISP's POP, thus blocking the attack before it reaches the low-bandwidth
access line. Distributed firewalls permit hosts to be connected via many different access
lines, thus finessing the problem.

It may be possible to chew up CPU time by bombarding the IKE process with bogus
security association negotiation requests. While this can affect conventional firewalls,
inside machines would still be able to communicate. Distributed firewalls rely much
more on IKE, and hence are more susceptible.

Conversely, any attack that consumes resources on conventional firewalls, such as many
email attachments that must be scanned for viruses, can bog down such firewalls and
affect all users. For that matter, too much legitimate traffic can overload a firewall. As
noted, distributed firewalls do not suffer from this effect.

IP spoofing

Reliance on network addresses for authentication is not a favored concept. The use of
cryptographic mechanisms largely prevents attacks based on forged source addresses,
under the assumption that the trusted repository containing all necessary credentials has
not itself been compromised. Conventional firewalls can address such attacks with rules
that discard spoofed packets at the network perimeter, but this will not prevent attacks
originating from inside the network policy domain.

Malicious software

With the widespread use of distributed object-oriented systems like CORBA, client-side
use of Java, and weaknesses in mail readers and the like, there is a wide variety of threats
residing at the application and intermediate levels of communication traffic. Firewall
mechanisms at the perimeter can be useful for inspecting incoming e-mail for known
malicious-code fingerprints, but face complex and therefore resource-consuming
decisions when examining other code, such as Java.

The framework of a distributed firewall, especially with a policy language that allows
policy decisions at the application level, can circumvent some of these problems,
provided the contents of such communication can be interpreted semantically by the
policy-verifying mechanisms. Stateful inspection of packets adapts readily to these
requirements and allows finer granularity in decision making. Furthermore, malicious
content may be completely hidden from the screening unit at the network perimeter when
virtual private networks or encrypted traffic in general are used, which can completely
disable such policy enforcement on conventional firewalls.

Intrusion Detection

Many firewalls detect attempted intrusions. If that functionality is to be provided by a
distributed firewall, each individual host has to notice probes and forward them to some
central location for processing and correlation.

The former problem is not hard; many hosts already log such attempts. One can make a
good case that such detection should be done in any event. Collection is more
problematic, especially at times of poor connectivity to the central site. There is also the
risk of coordinated attacks in effect causing a denial of service attack against the central
machine.

Our tentative conclusion is that intrusion detection is somewhat harder than with
conventional firewalls. While more information can be gathered, using the same
techniques on hosts protected by conventional firewalls would gather the same sort of
data.

Insider Attacks
Given that a conventional firewall naturally views the network topology as consisting of
an inside and an outside, problems arise once one or more members of the policy domain
have been compromised. Perimeter firewalls can only enforce policies between distinct
networks and offer no way around problems that arise in this situation.

A distributed firewall's independence from topological constraints, by contrast, supports
the enforcement of policies regardless of whether hosts are members or outsiders of the
overall policy domain, basing its decisions on authentication mechanisms that are not
inherent
characteristics of the network's layout. Moreover, the compromise of an endpoint,
whether by a legitimate user or by an intruder, will not weaken the overall network in a
way that leads directly to the compromise of other machines, since the deployment of
virtual private networks prevents sniffing of communication traffic in which the attacked
machine is not involved.

On the other hand, on the endpoint itself nearly the same problems arise as with
conventional firewalls: once a machine has been taken over by an adversary, one must
assume that the policy enforcement mechanisms themselves may be broken. Backdoors
can be installed quite easily once the security mechanisms are flawed, and in the absence
of a perimeter firewall there is no longer a trusted entity that might prevent arbitrary
traffic from entering or leaving the compromised host.

Additionally, tools such as SSH allow the tunneling of other applications' traffic and
cannot be blocked without knowledge of the decrypting credentials; moreover, once an
attack has succeeded, the verifying mechanisms themselves may no longer be trusted.

At first glance, the biggest weakness of distributed firewalls is their greater susceptibility
to lack of cooperation by users. What happens if someone changes the policy files on
their own?

Distributed firewalls can reduce the threat of actual attacks by insiders, simply by making
it easier to set up smaller groups of users. Thus, one can restrict access to a file server to
only those users who need it, rather than letting anyone inside the company pound on it.

It is also worth expending some effort to prevent casual subversion of policies. If
policies are stored in a simple ASCII file, a user wishing to, for example, play a game
could easily turn off protection. Requiring the would-be uncooperative user to go to more
trouble is probably worthwhile, even if the mechanism is theoretically insufficient.

For example, policies could be digitally signed and verified by a frequently-changing key
kept in an awkward-to-replace location; a minimal sketch of such verification is given
below. For more stringent protection, the policy enforcement can be incorporated into a
tamper-resistant network card.
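
As an illustration of the signed-policy idea only (the report does not specify such a
mechanism), a daemon could verify a detached signature over the policy file before
loading it, for example with OpenSSL's EVP interface. The file-based scheme and the
SHA-256 choice below are assumptions.

/*
 * Sketch of verifying a detached signature over the policy file using
 * OpenSSL's EVP interface. File names, the SHA-256 choice, and the
 * overall scheme are illustrative assumptions, not part of the system.
 */
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <stdio.h>

/* Returns 1 if `sig` (of length siglen) is a valid signature over the
 * contents of `policy_path` under the public key in `key_path`. */
int policy_signature_ok(const char *policy_path, const char *key_path,
                        const unsigned char *sig, size_t siglen)
{
    FILE *kf = fopen(key_path, "r");
    if (kf == NULL)
        return 0;
    EVP_PKEY *pkey = PEM_read_PUBKEY(kf, NULL, NULL, NULL);
    fclose(kf);
    if (pkey == NULL)
        return 0;

    int ok = 0;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    FILE *pf = fopen(policy_path, "r");
    if (ctx != NULL && pf != NULL &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1) {
        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), pf)) > 0)
            EVP_DigestVerifyUpdate(ctx, buf, n);
        ok = (EVP_DigestVerifyFinal(ctx, sig, siglen) == 1);
    }
    if (pf != NULL)
        fclose(pf);
    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok;
}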

CONCLUSION

In a distributed firewall system, network security policy specification remains under the
control of the network administrator; its enforcement, however, is left up to the hosts in
the protected network. Security policy is specified using KeyNote policies and
credentials, and is distributed to the users and hosts in the network. Since enforcement
occurs at the endpoints, various shortcomings of traditional firewalls are overcome:
 Security is no longer dependent on restricting the network topology. This allows
considerable flexibility in defining the “security perimeter,” which can easily be
extended to safely include remote hosts and networks (e.g., telecommuters,
extranets).
 Since we no longer solely depend on a single firewall for protection, we eliminate
a performance bottleneck. Alternatively, the burden placed on the traditional
firewall is lessened significantly, since it delegates a lot of the filtering to the end
hosts.
 Filtering of certain protocols (e.g., FTP) which was difficult when done on a
traditional firewall, becomes significantly easier, since all the relevant
information is present at the decision point, i.e., the end host.
 The number of outside connections to the protected network is no longer a cause of
administration nightmares. Adding or removing links has no impact on the
security of the network. “Backdoor” connections set up by users, either
intentionally or inadvertently, also do not create windows of vulnerability.
 Insiders may no longer be treated as unconditionally trusted. Network
compartmentalization becomes significantly easier.
 End-to-end encryption is made possible without sacrificing security, as was the
case with traditional firewalls. In fact, end-to-end encryption greatly improves the
security of the distributed firewall.
 Application-specific policies may be made available to end applications over the
same distribution channel.
 Filtering (and other policy) rules are distributed and established on an as-needed
basis; that is, only the hosts that actually need to communicate need to determine
what the relevant policy with regard to each other is. This significantly eases the
task of policy updating, and does not require each host/firewall to maintain the
complete set of policies, which may be very large for large networks.
Furthermore, policy distribution scales much better with respect to network size
and user base than a more tightly-coupled and synchronized approach would.

On the other hand, a distributed firewall architecture requires high-quality administration
tools. Also, note that the introduction of a distributed firewall infrastructure in a network
does not completely eliminate the need for a traditional firewall. The latter is still useful
for certain tasks:
 It is easier to counter infrastructure attacks that operate at a level lower than the
distributed firewall. Note that this is mostly an implementation issue; there is no
reason why a distributed firewall cannot operate at arbitrarily low layers, other
than potential performance degradation.

 Denial-of-service attack mitigation is more effective at the network ingress points
(depending on the particular kind of attack).
 Intrusion detection systems are more effective when located at a traditional
firewall, where complete traffic information is available.
 The traditional firewall may protect end hosts that do not (or cannot) support the
distributed firewall mechanisms. Integration with the policy specification and
distribution mechanisms is especially important here, to avoid duplicated filters
and windows of vulnerability.
 Finally, a traditional firewall may simply act as a fail-safe security mechanism.

Since most of the security enforcement has been moved to the end hosts, the task of a
traditional firewall operating in a distributed firewall infrastructure is significantly eased.

A final point is that, from an administrative point of view, a fully distributed firewall
architecture is very similar to a network with a large number of internal firewalls.

FUTURE WORK

There are a number of possible extensions that could be made in the process of building a
more general and complete system. Future work could address the following points.

• High-quality administration tools need to exist for distributed firewalls to be accepted
• Allow per-packet scanning as opposed to per-connection scanning
• Policy updating and revocation
• Credential discovery
