10 Policies to a More Secure Network

Each year I lecture at a number of events and consult for a large number of clients, and one recurring question I hear is how to leverage your investment in the network infrastructure (switches and routers) to secure the network against a variety of threats. The key is not to focus on the unique characteristics of each threat. Instead, we should focus on the common vector these threats rely on for success: client systems speaking server protocols.

A client acting as a server may not sound like an awful thing, but left unchecked it can be. Let's look at a few examples.


  1. A number of viruses use their own mail engine to replicate rather than trying to understand what the victim's computer uses for email.

  2. Worms that spread across data networks often start a TFTP server on the infected system and use TFTP to copy themselves from system to system.

  3. Man-in-the-middle (MITM) attacks often use ARP cache poisoning of victim computers to 'claim' the MAC address of the router for the attacker's own system.

Understanding how the network layer is used in each of these attacks can help us build a defensible network using a very simple approach.
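
To make the third example concrete, here is a minimal sketch of the kind of check a monitoring station could run to notice when another host starts claiming the router's IP address. This is illustrative only: it assumes Python with the Scapy packet library, and the gateway IP and MAC values are placeholders you would replace with your own.

    # Minimal sketch: watch for ARP replies that claim the default gateway's IP
    # with an unexpected MAC address (a common symptom of ARP cache poisoning).
    # Assumes Python with Scapy; the addresses below are placeholders.
    from scapy.all import sniff, ARP

    GATEWAY_IP = "192.168.1.1"               # hypothetical default gateway
    KNOWN_GATEWAY_MAC = "00:11:22:33:44:55"  # hypothetical, learned out of band

    def check_arp(pkt):
        # op == 2 is an ARP reply ("is-at")
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:
            if pkt[ARP].psrc == GATEWAY_IP and pkt[ARP].hwsrc.lower() != KNOWN_GATEWAY_MAC:
                print(f"Possible ARP poisoning: {pkt[ARP].hwsrc} is claiming {GATEWAY_IP}")

    sniff(filter="arp", prn=check_arp, store=0)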


I've taken the position that "clients should not be servers". That sounds simple, but what does it mean? In data networks, there are a number of IP services (represented by protocols on the network) that should only be provided by specific systems located in the data center and managed by the IT organization. If these IP services are accidentally or maliciously attached at the edge of the network, they often disrupt the operation of the network by causing performance, reliability or security problems. A common example is the rogue DHCP server. It can appear on your network for a number of reasons: a contractor who forgot he left a DHCP server running on his laptop from his last demo, an administrative assistant noticing an unplugged cable and attaching a DSL modem to the corporate network, or a person with malicious intent wanting to route traffic on your network through their own device. The first two examples result in downtime for the systems affected and in IT having to chase down the ghost in the network. The third is an attack and may require external assistance to investigate.

Rogue DHCP servers are only one common vector we can easily address at the network edge. Below are the top protocols for which you should consider policy control at the edge of your network; each represents a common mistake or exploit vector in the majority of enterprise networks deployed today. A rough sketch of how these policies could be expressed as a simple ingress filter table follows the list.

DHCP Server Protocol: Automatically mitigate the accidental or malicious connection of a DHCP server at the edge of your network to prevent denial-of-service or data integrity issues.
DNS Server Protocol: DNS is critical to network operations. Automatically protect your name servers from malicious attack or unauthorized spoofing/redirection.
Routing Topology Protocols: RIP, OSPF, BGP and MPLS topology protocols should only originate from authorized router connection points to ensure reliable network operations.
Router Source MAC and Router Source IP Address: Routers and default gateways should not move around your network without an approved change process. Prevent denial-of-service, spoofing, data integrity and other router security issues.
SMTP/POP Server Protocol: Prevent data theft and worm propagation.
SNMP Protocol: Only approved management stations or management data collection points need to speak SNMP. Prevent unauthorized users from using SNMP to view, read or write management information.
FTP/TFTP Server Protocol: Ensure file transfers and firmware upgrades originate only from authorized file and configuration management servers.
Web Server Protocol: Stop malicious proxies and application-layer attacks by ensuring only the right web servers can connect from the right location at the right time.
Legacy Protocols: IPX, AppleTalk, DECnet and other legacy protocols should no longer be running on your network; prevent clients from using them. Some organizations even take the approach that unless a protocol is specifically allowed, it is denied.
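
To make the intent concrete, here is a minimal sketch in Python of how these edge policies could be expressed as an ingress deny table for user-facing ports. This is a sketch, not a product configuration: the rule table and the should_drop helper are hypothetical names, the port numbers are the standard well-known assignments, and protocols matched by EtherType or IP protocol number rather than port (OSPF, IPX, AppleTalk, DECnet) are omitted for brevity.

    # Sketch: the edge policies above expressed as an ingress deny table for
    # user ports. A client sourcing traffic from one of these server-side ports
    # is acting as a server and gets dropped at the edge.
    EDGE_DENY_RULES = [
        # (policy,        L4 protocol, source port a client should not use)
        ("DHCP server",   "udp", 67),    # servers answer from 67; clients use 68
        ("DNS server",    "udp", 53),
        ("DNS server",    "tcp", 53),
        ("RIP routing",   "udp", 520),
        ("BGP routing",   "tcp", 179),
        ("SMTP server",   "tcp", 25),
        ("POP3 server",   "tcp", 110),
        ("SNMP agent",    "udp", 161),   # agent responses are sourced from 161
        ("TFTP server",   "udp", 69),
        ("FTP server",    "tcp", 21),
        ("Web server",    "tcp", 80),
    ]

    def should_drop(l4_proto: str, src_port: int) -> bool:
        """Return True if an ingress frame from a user port matches a deny rule."""
        return any(p == l4_proto and s == src_port for _, p, s in EDGE_DENY_RULES)

With rules like these, a frame sourced from UDP 67 on a user port is dropped, while a normal DHCP client request sourced from UDP 68 still passes.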

These suggested policies improve security and will harden your network against a variety of threats. Optionally, you can create a variety of network profiles based on the person's or device's use of the network. These profiles, more accurately described as 'roles', allow today's network manager to better manage resources and their threat model. I've used the convention "Role Based Network Access Control" to describe this function, as it mirrors the concepts in "Role Based Access Control" published by the US National Institute of Standards and Technology. You can view their version here: http://csrc.nist.gov/groups/SNS/rbac/

I hope you find this information helpful and would welcome your feedback: was it useful, and are you deploying some version of this on your network today? I'd be happy to answer your questions.

Remember, every network is unique. Evaluate your network and consider each policy before deploying it.

Mark Townsend

About Mark Townsend

Mark Townsend's career has spanned the past two decades in computer networking, during which he has contributed to several patents and pending patents in information security. He has established himself as an expert in networking and security for enterprise networks, with a focus on educational environments. Mark is a contributing member of several information security industry standards associations, most notably the Trusted Computing Group (TCG). His work in the TCG Trusted Network Connect (TNC) working group includes co-authoring the Clientless Endpoint Support Profile. He is currently developing virtualization solutions and driving interoperability testing within the industry. Prior to his current position, he served in a variety of roles including service and support, marketing, sales management and business development. He is considered an industry expert and often lectures at universities and industry events, including RSA and Interop. Mark also serves his community as chairman of the local school board for a progressive school district consistently ranked among the top school districts in New Hampshire, with the district high school ranked a "Best High School" by US News & World Report.


3 Responses to 10 Policies to a More Secure Network

  1. Dave Griffith November 1, 2009 at 7:49 pm #

    My response is simply summarized in two words.

    How ridiculous.

    You're supporting a one-way valve in a protocol that was specifically designed to be peer to peer, meaning communication between equals. One host is the same as another host. That's how modern networks function. That step back in time is like wishing your computer were on the end of a POTS line talking by modem to a BBS! One-way traffic only. What a waste of a wire.

    There are numerous applications that add greatly to the functionality of computers; communication and file-sharing programs are the ones that jump immediately to mind.

    Seems to me that you're an office sysadmin who's after the ultimate way of preventing employees from doing anything interesting with their computers.

    Fortunately, however, modern peer-to-peer applications, modern chat programs and modern VoIP systems are purpose-designed to get around office firewalls, precisely because of the accidental imposition of exactly this sort of policy.

    Living behind a NAT/port-translation gateway puts exactly this restriction on people, and the folk who write the software have developed brilliant ways around it.

    In a similar fashion, even if you managed to persuade numerous network operators to litter their networks with one-way valves, the ‘rogue’ software would evolve a way around it.

    In the end it would have no effect on the negative uses of server sockets, and the only medium-term impact would be on positive applications, for the simple reason that the creators of beneficial 'over the counter' software are always the slowest to adapt to new barriers. So the longest-lasting disadvantage would fall on the more desirable applications.

    Dave G.

  2. Mark Townsend November 3, 2009 at 7:12 am #

    Dave,

    I appreciate your response, but I'd like to respond to some of your points.

    What I may not have been clear about is that, for the protocols above, the intent is to prevent client machines from servicing other clients (users) with those protocols, as part of a "defense in depth" strategy. Let me use an example to clarify.

    Let's start with DHCP. Per RFC 2131, DHCP uses two UDP ports, 67 and 68: UDP 67 is sourced by the DHCP server, while UDP 68 is sourced by the client. My claim does not mean shutting down DHCP and running static IP addresses. The proposal is that the DHCP server protocol is only provided/sourced from your allowed area, for example the DHCP server in your data center.

    You can easily apply a short port filter on your user ports to prevent user machines from functioning as DHCP servers: drop ingress packets from users that source UDP 67. The user can still source UDP 68 to exchange information and receive an IP address. I have worked with several hundred customers that have had positive (money-saving) experiences with these simple port filters.
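
    A rough sketch of that logic, written as plain Python rather than a switch configuration (the function name is purely illustrative):

        # Sketch of the user-port ingress check described above: drop frames
        # sourced from UDP 67 (a host acting as a DHCP server), allow frames
        # sourced from UDP 68 (a normal DHCP client).
        def drop_on_user_port(l4_proto: str, src_port: int) -> bool:
            return l4_proto == "udp" and src_port == 67   # DHCP server traffic

        assert drop_on_user_port("udp", 67) is True       # rogue server blocked
        assert drop_on_user_port("udp", 68) is False      # normal client still works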

    So why is a rogue DHCP server so bad? A rogue DHCP server partially disables a network, and it is not easily found: administrators commonly have to wait for the help desk calls to arrive, the ever so common "the internet is down". It is also very unlikely that the help desk will receive many calls at once, because not every computer is on the same cycle to renew its IP address (there are exceptions, such as the Monday morning 8 AM effect). The second factor is that for each broadcast request from a client, the response from the correct DHCP server is in a race with the rogue DHCP server's response. The client accepts the first DHCP offer and ignores the others; this is standard behavior on client operating systems.

    While the calls trickle in over that period, the help desk is busy diagnosing the condition with users who may not be proficient in computer management. Most help desk operators also start the call suspecting the computer's configuration, not a rogue condition on the network. It is often only after learning that more systems in that VLAN are having IP address conflicts that an analyzer is deployed. The analyzer has to be activated in that VLAN to start trapping for the rogue DHCP server; for some companies that means a person dispatched to that area, sometimes in another building. And let's not forget the most important cost to the company: a group of workers unable to complete their jobs.
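
    For what it's worth, the analyzer step can be as simple as the following sketch. It assumes Python with the Scapy packet library, and the authorized server address is a placeholder for your own data-center DHCP server:

        # Sketch: flag DHCP server replies (BOOTP op 2, sourced from UDP 67)
        # that do not come from an authorized server. The BPF filter limits
        # capture to IP/UDP traffic sourced from port 67.
        from scapy.all import sniff, BOOTP, IP

        AUTHORIZED_DHCP_SERVERS = {"10.0.0.5"}   # hypothetical authorized server

        def flag_rogue(pkt):
            if pkt.haslayer(BOOTP) and pkt[BOOTP].op == 2:
                if pkt[IP].src not in AUTHORIZED_DHCP_SERVERS:
                    print(f"Rogue DHCP server candidate: {pkt[IP].src}")

        sniff(filter="udp and src port 67", prn=flag_rogue, store=0)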

    We first saw the results while operating the network at Interop in Las Vegas in 2008. In 2007, IP address problems accounted for one third of the trouble tickets. The most common of those tickets involved chasing down a vendor who had connected to the InteropNet with a DHCP server from a previous demo still running on his machine. Other people on the same network would start getting the wrong IP address and start calling the help desk.

    In 2008 and 2009 we applied the ten policies above, eliminated those trouble tickets and hardened the network. The man-hours we saved allowed the teams to pursue other objectives.

    Here's a great blog entry from a Computerworld reader talking about exactly these problems: http://blogs.computerworld.com/node/2728

    I hope this clarifies my position.

    Thanks and best regards,

    -Mark

  3. Kent November 3, 2009 at 2:16 pm #

    David/Mark,
    The policies or controls that you both identify center on the threats stemming from the potential misuse of common protocols required for everyday network communications. David brings up the point of dynamic application protocol usage and misuse but doesn't clearly differentiate the application protocols from the service protocols. For example, port 80 is for web servers only because it is the common service port for a web server; there is no hard and fast rule that mandates or enforces this common port assignment. The assumption is that these "policies" or "one-way network valves" cannot possibly be implemented in such a way as to effectively reduce misuse of "allowed" common port assignments, and that because of this they won't allow for innovation over time, since there is no way to enforce application protocol usage from the application development perspective. David's arguably flawed assumption is then expanded to argue that static controls have diminishing value because of the morphing nature of today's dynamic protocol assignment and usage. The goal of any control is that it be implemented "in such a way" that it reduces the probability of misuse (including simple and even accidental misconfiguration) in the first place. The next flaw of the argument is assuming that all controls are implemented in a static manner.

    I tend to agree with Mark on the point he makes that all networks require certain basic protocol services that need to be protected at all times. These are protocols that can be very cleanly defined and controlled; they can be characterized as static. DHCP, DNS, Spanning Tree, routing protocols, packet tagging (QoS, DiffServ policing) and so on are the types of service protocols that can and should be controlled by a network administrator, as simple misuse can inhibit network function drastically. I agree with David that there are also sets of application protocols that enable innovative use of the network, like VoIP, torrents or storage area networking. Yes, there are also software development shops, especially those delivering threat vectors or acting with malicious intent, that will morph their own applications' network protocol characteristics (L4 and below) specifically to circumvent the controls that are put in place. Can controls be put in place that are perfect all the time? No. Controls provide incremental value and need to be continually adapted, very much in tune with the adaptation of the network applications themselves. Do you create firewall rule sets and leave them static? Or do we find we need to administer the rule set over time?

    The differentiating point really comes down to how, where and at what cost/effort these controls can be implemented so as to be both effective and practical. This is the holy grail for taming the complicated beast of network security. I don't believe that the holy grail can exist solely as agents on end systems; increasingly, end stations are becoming more abstract (and out of the control of the system administrator). Case in point: consider your network end stations. Is the endpoint a PC, Mac, laptop, printer, iPhone, time clock or video camera? We are just starting to see the proliferation of network-attached end systems and their uses. How many of these end systems can be effectively controlled via software and agents? How effectively and how quickly can such a diverse population be administered? This leads me to believe that, first, the network and the controls implemented will need to provide increased visibility into the use and misuse of the network (all protocol usage and characteristics). Once that visibility is provided, the proper implementation of controls can be applied to be both effective and practical. Once this process is defined, it can be automated, and then we start moving in the direction of finding the holy grail of network security.
    There is no simple policy for all endpoints, and there never can be.

    Mark's 10 policies might very well be called "10 steps" in the right direction, but only if implemented appropriately.
    k/
