1 - Overview

When you set up a File Server, there are advantages to configuring multiple Network Interface Cards (NICs). However, there are many options to consider, depending on how your network and services are laid out. Since networking (along with storage) is one of the most common bottlenecks in a file server deployment, this is a topic worth investigating.

Throughout this blog post, we will look into different configurations for Windows Server 2008 (and 2008 R2) where a file server uses multiple NICs. Next, we'll describe how the behavior of the SMB client can help distribute the load for a file server with multiple NICs. We will also discuss SMB2 Durability and how it can recover from certain network failures in configurations where multiple network paths between clients and servers are available. Finally, we will look closely into the configuration of a Clustered File Server with multiple client-facing NICs.

2 – Configurations

We'll start by examining 8 distinct configurations where a file server has multiple NICs. These are by no means the only possible configurations, but each one has a unique characteristic that is used to introduce a concept on this subject.

2.1 – Standalone File Server, 2 NICs on server, one disabled

This first configuration shows the sad state of many File Servers out there. There are multiple network interfaces available, but only one is actively being used. The other is not connected and possibly disabled. Most server hardware these days includes at least two 1GbE interfaces, but sometimes the deployment planning did not include the additional cabling and configuration to use the second one. Ironically, a single 1GbE interface (which provides roughly 100 megabytes per second of throughput) is a common bottleneck for your file server, especially when reading data from cache or from many disk spindles (physical disk throughput is the other most common bottleneck).

Having a single NIC has an additional performance downside if that NIC does not support Receive-side Scaling (RSS). When RSS is not available, a single CPU services all the interrupts from a network adapter. For instance, if you have an 8-core file server using a single non-RSS NIC, that NIC will affinitize to one of the 8 cores, making it even more likely to become a bottleneck. To learn more about deploying RSS, check http://www.microsoft.com/whdc/device/network/NDIS_RSS.mspx.

2.2 – Standalone File Server, 2 NICs on server, teamed

One simple and effective solution for enabling the multiple NICs on a File Server is NIC Teaming, also known as “Link Aggregation” or “Load Balancing and Failover (LBFO)”. These solutions, provided by vendors like Intel, Broadcom and HP, effectively combine multiple physical NICs into one logical or virtual NIC. The details vary based on the specific solution, but most will provide an increase in throughput and also tolerance to the failure of a NIC or to a network cable being accidentally unplugged.

The NIC team typically behaves as a single NIC, requiring only a single IP address. Once you configure the team itself, the Windows Server and File Server configuration proceeds as if you had only one NIC. However, NIC teaming is not included with Windows Server 2008 or Windows Server 2008 R2. Support for these solutions (the hardware, the drivers and the configuration tools) is provided by the hardware manufacturer. You can find Microsoft’s support policy for these types of solutions at http://support.microsoft.com/kb/254101 and http://support.microsoft.com/kb/968703.
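As a side note to the RSS discussion in 2.1: on Windows Server 2008 and 2008 R2 you can check and change the operating system's global RSS setting from an elevated command prompt. This is only the OS-level switch; the NIC and its driver must also support RSS (check the Advanced tab of the adapter properties).

netsh interface tcp show global
netsh interface tcp set global rss=enabled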
2.3 – Standalone File Server, 2 NICs on server, single subnet

If you don’t have a NIC teaming solution available but you are configuring a File Server, there are still ways to make it work. You can simply enable and configure both NICs, each with its own IP address. If everything is configured properly, both IP addresses will be published to DNS under the name of the File Server. The SMB client will then be able to query DNS for the file server name, find that it has multiple IP addresses and choose one of them. Due to DNS round robin, chances are the clients will be spread across the NICs on the file server.

There are several Windows Server components contributing to make this work. First, there’s the fact that the File Server will listen on all configured network interfaces. Second, there’s dynamic DNS, which automatically registers all the IP addresses under the server’s name (if configured properly). Third, there’s the fact that DNS will naturally round robin through the different addresses registered under the same name. Last but not least, there is the File Client, which will use one of the available IP addresses, giving priority to the first address on the list (to honor the DNS round robin) but using one of the others if the first one does not respond quickly. The SMB client will use only one of the IP addresses at a time. More on this later.

What’s more, due to a feature called SMB2 durability, it’s possible that the SMB client will recover from the failure of a NIC or network path even if it’s right in the middle of reading or writing a file. More on this later as well.

It's important to note that applications other than the file server and file client might not behave properly with this configuration (they might not listen on all interfaces, for instance). You might also run into issues updating routing tables, especially in the case of a failure of a NIC or removal of a cable. These issues are documented in KB article 175767. For these reasons, many will not recommend this specific setup with a single subnet.

2.4 – Standalone File Server, 2 NICs on server, multiple subnets

Another possible configuration is for each of the File Server NICs to connect to a different set of clients. This is useful to give you additional overall throughput, since you get traffic coming into both NICs. However, in this case, you are using different subnets. A typical case would be a small company where the first-floor clients use one NIC and the second-floor clients use the other.

While both of the IP addresses get published to DNS (assuming everything is configured correctly) and each of the SMB clients will learn of both, only one of them will be routable from a specific client. From the SMB client’s perspective, that is fine. If one of them works, you will get connected. However, keep in mind that this configuration won’t give your clients a dual path to the File Server. Each set of clients has only one way to get to the server. If a File Server NIC goes bad or if someone unplugs one of the cables from the File Server, some of your clients will lose access while others will continue to work fine.

2.5 – Standalone File Server, 2 NICs on server, multiple subnets, router

In larger networks, you will likely end up with various networks/subnets for both clients and servers, connected via a router. At this point you have probably done a whole lot of planning, your server subnets can easily be distinguished from your client subnets, and there’s a fair amount of redundancy, especially on the server side.
A typical configuration on the server side would include dual top-of-rack switches, aggregated to a central switching/routing infrastructure. If everything is configured properly, the File Servers will have two IP addresses each, both published to the dynamic DNS. From a client perspective, you have a File Server name with multiple IP addresses. The clients here see something similar to what clients see in configuration 2.3, except for the fact that the IP addresses for the clients and servers are on different subnets.

It is worth noting that, in this configuration and all the following ones, you could choose to leverage NIC teaming, as described in configuration 2.2, if that is an option available from your hardware vendor. This might bring additional requirements, since each of the NICs in the team goes into a different switch. The configuration of Windows Server itself would be simplified due to the single IP address, although additional teaming configuration with the vendor tool would be needed.

2.6 – Standalone File Server, 2 NICs on “clients” and servers, multiple subnets

This last standalone File Server configuration shows both clients and servers with 2 NICs each, using 2 distinct subnets. While this configuration is unusual for regular Windows clients, servers are commonly configured this way for added network fault tolerance. You can imagine these computers as part of configuration 2.5 above, only with more servers to the right of the router this time.

2.7 – Clustered File Servers, 2 NICs on servers, multiple subnets, router

If you are introducing dual network interfaces for fault tolerance, you are also likely interested in clustering. This configuration takes configuration 2.6 and adds an extra file server to create a failover cluster for your file services. If you are familiar with failover clustering, you know that, in addition to the IP addresses required for the cluster nodes themselves, you would need IP addresses for each clustered service (like File Service A and File Service B). More on this later.

Although we’re talking about Clustered File Services with Cluster IP addresses, the SMB clients will essentially see a File Server name with multiple IP addresses for each clustered file service. In fact, the clients here see something similar to configurations 2.3, 2.5 and 2.6. It’s worth noting that, if File Server 1 fails, Failover Clustering will move File Service A to File Server 2, keeping the same IP addresses.

2.8 – Clustered File Server, 2 NICs on “clients” and servers, multiple subnets

This last configuration focuses on file clients that are servers themselves, as described in configuration 2.6. This time, however, the File Servers are clustered. If you are interested in high availability for the file servers, it’s likely you would also be clustering the other servers, if the workload allows for it.

3 – Standalone File Server

3.1 – SMB Server and DNS

A Windows Server file server with multiple NICs enabled and configured for dynamic DNS will publish multiple IP addresses to DNS for its name. In the example below, a server with 3 NICs, each in a different subnet, is shown. You can see how this gets published to DNS: FS1 shows up with 3 DNS A records, for 192.168.1.2, 192.168.2.2 and 192.168.3.2. It’s also important to know that the SMB server will listen on all interfaces by default.
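Since the screenshot does not reproduce here, a representative NSLOOKUP query for this example might look like the following (the domain suffix and the DNS server shown are made up for illustration; the addresses are the ones listed above):

C:\>nslookup fs1
Server:   dns1.contoso.com
Address:  192.168.1.21

Name:     fs1.contoso.com
Addresses: 192.168.1.2
           192.168.2.2
           192.168.3.2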
3.2 – SMB Client and DNS

From an SMB client perspective, the computers will query DNS for the name of the File Server. In the case of the example above, you will get an ordered list of IP addresses. You can query this information using the NSLOOKUP tool, as shown in the screenshot below.

The SMB client will attempt to connect to all routable IP addresses offered. If more than one routable IP address is available, the SMB client will connect to the first IP address for which a response is received. The first IP address in the list is given a time advantage, to favor the DNS round robin sequence.

To show this in action, I created a configuration where the file server has 3 IP addresses published to DNS. I then ran a script to copy a file to a share on the file server, flush the DNS cache, wait a few seconds and start over. Below are the sample script and screenshot, showing how the SMB client cycles through the 3 different network interfaces as an effect of DNS round robin.

@ECHO OFF
:LOOP
IPCONFIG /FLUSHDNS
PING FS1 -n 1 | FIND "Reply"
NET USE F: \\FS1\TEST >NUL
COPY C:\WINDOWS\SYSTEM32\MRT.EXE F:\
NET USE F: /DELETE >NUL
CHOICE /T 30 /C NY /D Y /M "Waiting a few seconds... Continue"
IF ERRORLEVEL 2 GOTO LOOP
:END

3.3 – SMB2 Durability

When the SMB2 client detects a network disconnect or failure, it will try to reconnect. If multiple network paths are available and the original path is now down, the reconnection will use a different one. Durable handles allow the application using SMB2 to continue to operate without seeing any errors. The SMB2 server will keep the handles for a while, so that the client will be able to reconnect to them. Durable handles are opportunistic in nature and offer no guarantee of reconnection. For durability to occur, certain conditions must be met; most notably, the handle must have been granted a batch oplock when the file was opened.

For Windows operating systems, SMB2 is found on Windows Vista and Windows 7 (for client OSes) and on Windows Server 2008 and Windows Server 2008 R2 (for server OSes). Older versions of Windows have only SMB1, which does not have the concept of durability.

To showcase SMB2 durability, I used the same configuration shown in the previous screenshot and copied a large number of files to a share. While the copy was going, I disabled Network1, the client network interface that was being used by the SMB client. SMB2 durability kicked in and the SMB client moved to use Network3. I then disabled Network3 and the client started using Network2. You can see in the screenshot below that, with only one of the three interfaces available, the copy is still going.

4 – Clustered File Server

4.1 – Cluster Networks, Cluster Names and Cluster IP Addresses

In a cluster, in addition to the regular node names and IP addresses, you get additional names for the cluster itself and for every service (Cluster Group) you create. Each name can have one or more IP addresses. You can add an IP address per public (client-facing) Cluster Network for every Name resource. This includes the Cluster name itself, as shown below.

For each File Server resource, you have a name resource, at least one IP address resource, at least one disk resource and the file server resource itself. See the example below for the File Service FSA, which uses 3 different IP addresses (192.168.1.101, 192.168.2.101 and 192.168.3.101). Below is a screenshot of the name resource properties, showing the tab where you can add the IP addresses.

4.2 – How Cluster IP addresses are published to DNS

Note that, in the cluster shown in the screenshots, we have 5 distinct names, each of them using 3 IP addresses, since we are using 3 distinct public Cluster Networks. You have the names of the two nodes (FS1 and FS2), the name of the cluster (FS) and the names of the two clustered file services (FSA and FSB).
Here’s how this shows up in DNS, after everything is properly configured:

4.3 – Cluster Name and Cluster IP Address dependencies

When your clients are using a File Server resource with multiple routable IP addresses, you should make sure the IP addresses are defined as OR dependencies, not AND dependencies, in your dependency definitions. This means that, even if you lose all but one IP address, the file service is still operational on that node. The default is AND, and this will cause the file service to fail over upon the failure of any of the IP addresses, which is typically not desired. Below you can see the File Service resource with only one of the three IP addresses failed. There is an alert, but the file service is still online and will not fail over unless all IP addresses fail.

5. Conclusion

Network planning and configuration play a major role in your File Server deployment. I hope this blog post has allowed you to consider increasing the throughput and the availability of your File Server by enabling and configuring multiple NICs. I encourage you to experiment with these configurations and features in a lab environment.
MRTG stands for The Multi Router Traffic Grapher. It is a tool for monitoring the volume of traffic passing through a given interface, using the SNMP protocol to collect the data; from that data it produces graphs that are easy for a network administrator to review.
The steps to follow are:
1. Enable SNMP on Windows
Go to Control Panel –> Add or Remove Programs –> Add/Remove Windows Components –> check Management and Monitoring Tools, then click Details
2. Download and install Perl
3. Download the MRTG package, then install/extract it
4. Rename the extracted folder to mrtg
5. Create a new folder named mrtghtml
6. Create the configuration with cfgmaker
Open a command prompt (cmd); in this example, mrtg is on drive D:
C:\>d:
D:\>cd mrtg\bin
D:\mrtg\bin>perl cfgmaker public@202.158.170.1 --global "WorkDir: d:\mrtghtml" --output server.cfg
This produces a file named server.cfg.
7. Before running mrtg, it is a good idea to run indexmaker so that all the monitored interfaces can be viewed together on one page instead of one by one:
D:\mrtg\bin>perl indexmaker --output index.htm server.cfg
This produces the file index.htm; copy it to the D:\mrtghtml directory.
8. Go to D:\mrtg\bin and edit the file server.cfg with Notepad.
Add the following:
RunAsDaemon: Yes
Interval: 5
So that mrtg keeps running and refreshes every 5 minutes
Options[_]: bits
ShortLegend[_]: b/s
To change the scale from bytes to bits
9. Run mrtg
D:\mrtg\bin>perl mrtg server.cfg
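Since RunAsDaemon keeps MRTG running in that console session, it can be convenient to wrap the command in a small batch file (the name start-mrtg.cmd is just an example; the paths match the ones used above) and launch it from the Startup folder or a scheduled task:

@ECHO OFF
REM start-mrtg.cmd - example wrapper, paths as used in the steps above
D:
CD \mrtg\bin
START /B perl mrtg server.cfg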
That is all for the MRTG installation on Windows. To view the graphs, go to D:\mrtghtml and double-click index.htm. If the images in the links at the bottom of the page are broken, fix this by copying the images from D:\mrtg\images into D:\mrtghtml.
References:
http://oss.oetiker.ch/mrtg/
http://www.netmon.org/24x7x365.htm
http://www.progsoc.uts.edu.au/lists/progsoc/2000/April/msg00146.html
Bandwidth Limitation on Cisco
Limiting Bandwidth per Host on a Cisco Router
Just sharing with readers: at my office the need for bandwidth keeps growing. The main Internet link started at 2 Mbps and is now 8 Mbps, and demand keeps increasing as the number of users in the office grows, along with their hunger for information from the Internet. Because of that, I set up bandwidth limits on the gateway router to manage the public IPs.
The purpose of the bandwidth management (QoS) I applied is to keep the operational servers that use public IPs from having their bandwidth drained by everything else, especially by the proxy server used for browsing. To do this, the traffic is grouped into 4 classes so that bandwidth can be limited per host. Below is the configuration I added on the gateway router.
# Apply access lists to match each host that will be rate-limited
access-list 102 permit ip any host 114.4.14.244
access-list 103 permit ip any host 114.4.14.245
access-list 104 permit ip any host 114.4.14.248
access-list 105 permit ip any host 114.4.14.242
The access lists above are split into 4, one for each host to be filtered/limited.
# Create the classes that will be rate-limited
class-map match-any SERVER-1
description SERVER-1
match access-group 102
class-map match-any SERVER-2
description SERVER-2
match access-group 103
class-map match-any SERVER-3
description SERVER-3
match access-group 104
class-map match-any SERVER-4
description SERVER-4
match access-group 105
# Apply the following global policy:
policy-map BW-SHAPPING
class SERVER-1
police 3000000 35000 35000 conform-action transmit exceed-action drop violate-action drop
class SERVER-2
police 3000000 35000 35000 conform-action transmit exceed-action drop violate-action drop
class SERVER-3
police 1000000 35000 35000 conform-action transmit exceed-action drop violate-action drop
class SERVER-4
police 1000000 35000 35000 conform-action transmit exceed-action drop violate-action drop
In the configuration above, SERVER-1 and SERVER-2 are limited to 3 Mbps for download and upload, while SERVER-3 and SERVER-4 are limited to a maximum of 1 Mbps.
# Attach the service policy to the outside interface facing the provider
interface FastEthernet0/0
service-policy input BW-SHAPPING
service-policy output BW-SHAPPING
!
To view the bandwidth of an input class that is being policed, run the following command:
show policy-map interface Fa0/0 input class SERVER-1
The command above shows the policing statistics for class SERVER-1.
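To check all four classes at once, you can simply omit the class keyword and view the whole policy on the interface:

show policy-map interface Fa0/0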
done
Transparent Web Proxying with Cisco, Squid, and WCCP
Kerry Thompson, July 2010
Contents
Introduction
A Basic Network and Web Proxy
WCCP Overview
Cisco Configuration
Squid Configuration
Linux Network Configuration
Testing
Closing Notes
References
Introduction
There are a number of good reasons for companies to deploy proxies for user access to the Internet. Amongst these are:
- Monitoring of web sites and traffic volumes
- Restricting web access - by user, web sites, time of day, etc.
- Using caching to reduce traffic volumes
- Managing bandwidth
There are also a number of challenges faced when implementing proxies. Probably the top one is the job of configuring all of the web browsers to use the proxy, and then comes the problem of what to do if the proxy fails.
This article presents a web proxying solution which is transparent to the end user - it requires no browser configuration. It is also resilient to failure: if the proxy server fails, web access continues to be provided without disruption.
A Basic Network and Web proxy
In the network drawing below I show a basic network with access to the Internet; this is a very common configuration for small business networks.
Figure 1: A basic network and proxy
Figure 2: A basic network with DMZ-protected proxy
WCCP Overview
Most Cisco routers support a protocol called the Web Cache Communication Protocol, or WCCP. This protocol is used by a proxy server, such as a Linux server running the Squid proxy, to tell the router that it is alive and ready to process web access requests. WCCP uses the UDP protocol on port 2048 - it is essentially a one-way communication from the proxy to the router.
Figure 3: WCCP between the proxy and router
Using WCCP for web proxying brings a number of advantages:
- You can have multiple proxy servers. In fact, you can have almost any number if your router is big enough to handle them. This means that, for large organisations, the load will be spread amongst them, improving performance.
- Access is resilient to failure. If a proxy fails, then the router will immediately start using another (if you've got more than one configured), otherwise it will stop using proxies and forward requests directly to the Internet. The router can also be configured to block Internet web access if there are no running proxies available.
- Optimised hashing of URLs. When you have more than one proxy a user will request a web page that will then be cached by a proxy. The next time any user requests the same page, the router will send the request to the same proxy with the cached copy of the page.
WCCP proxy traffic flows are a little bit unusual, and can be very confusing to begin with. The following drawing shows the main flows for a WCCP proxy:
Figure 4: WCCP traffic flows
There are some interesting things to note about the traffic flows here.
- The Squid proxy sends a WCCP packet to the router every 10 seconds to tell the router that the proxy is alive and ready to receive web requests. You can now see here that it is easy to have multiple proxy servers that can work with the router.
- When a client makes a request for an Internet web page, it sends it directly to the Internet via the router, as shown in (1) above.
- The router captures the request, encapsulates it in a GRE packet, and forwards it to the proxy as shown in (2) above.
- The Linux system un-encapsulates the GRE packet and sends the request to the Squid proxy by performing a Destination NAT operation on the packet - note that Squid now receives the original packet with its original source and destination IP addresses.
- The Squid proxy now fetches the web page from the Internet server in the normal fashion shown in (3) above - it uses its own IP address as the source and the original destination IP address for the destination. Note that the router does not intercept and attempt to proxy this request.
- Once Squid has downloaded the page, it saves the data in its own cache, then replies directly back to the client on the internal network. And this is the tricky thing right here - when Squid replies, it uses the IP address of the Internet server as the source in the packet and the client IP address as the destination; this is shown in (4) above.
In the remainder of this paper I will briefly show the Cisco, Linux, and Squid configurations required to get this working.
Cisco Configuration
In this example, I will have 2 proxies configured on the internal network (192.168.1.0/24) with IP addresses of 192.168.1.252 and 192.168.1.253. The first step is to define an access list containing the addresses of the proxies, and assign this as the list of WCCP proxies:
access-list 10 permit 192.168.1.252
access-list 10 permit 192.168.1.253
ip wccp web-cache group-list 10
Next we define another access list to control which traffic goes direct and which is WCCP-proxied. The proxies on 192.168.1.252 & 253 are denied WCCP redirection, all other hosts on 192.168.1.0/24 are proxied when going to port 80, and all others are denied. Denial implies direct access to the remote web server.
access-list 120 remark ACL for WCCP proxy access
access-list 120 remark Squid proxies bypass WCCP
access-list 120 deny ip host 192.168.1.253 any
access-list 120 deny ip host 192.168.1.252 any
access-list 120 remark LAN clients proxy port 80 only
access-list 120 permit tcp 192.168.1.0 0.0.0.255 any eq 80
access-list 120 remark all others bypass WCCP
access-list 120 deny ip any any
!
! Assign ACL to WCCP
ip wccp web-cache redirect-list 120
Now set WCCP version 2:
ip wccp version 2
Verify the configuration - it should be active on version 2 with no caches connected until the Squid proxy is configured.
Router#sh ip wccp
Global WCCP information:
Router information:
Router Identifier: -not yet determined-
Protocol Version: 2.0
Service Identifier: web-cache
Number of Service Group Clients: 0
Number of Service Group Routers: 0
Total Packets s/w Redirected: 0
Process: 0
Fast: 0
CEF: 0
Redirect access-list: 120
Total Packets Denied Redirect: 0
Total Packets Unassigned: 0
Group access-list: -none-
Total Messages Denied to Group: 0
Total Authentication failures: 0
Total Bypassed Packets Received: 0
Router#
At this point, client browsers which are not configured to use the Squid proxy explicitly may not be able to reach Internet web sites if the Squid proxy is registered with the router. If this is an issue for the users, then the best option for disabling and re-enabling WCCP proxying is to remove the redirect configuration from the interface (FastEthernet0 in this case):
int f0
!
no ip wccp web-cache redirect in
and to enable it:
int f0
!
ip wccp web-cache redirect in
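Once the Squid proxy described in the next section has registered, it should show up on the router as a cache engine. On most IOS releases you can confirm this with the following command (offered here as a suggested check):

show ip wccp web-cache detail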
Squid Configuration
Now we need to configure a Squid proxy on a Linux server. I won't cover the basic installation - just the configuration part, so I assume you know a little bit about configuring Squid. To start with, check that Squid is installed and is working as a proxy by setting it up in your browser and fetching a few web pages through it. Then check that your Squid has been built ready for WCCP proxying. Run squid -v and verify that the following options are included:
--enable-linux-netfilter
--enable-wccpv2
If those options aren't there then you'll have to download the Squid source code and build it from scratch with these options included in the ./configure build command. Now we can configure WCCP for the Squid proxy. In this example I add a new listening port (port 3127) to Squid for transparent proxying, leaving the default port of 3128 available for normal proxying. Add the following lines to /etc/squid/squid.conf:
# additional port for transparent proxy
http_port 3127 transparent
# WCCP Router IP
wccp2_router 192.168.1.254
# forwarding 1=gre 2=l2
wccp2_forwarding_method 1
# GRE return method gre|l2
wccp2_return_method 1
# Assignment method hash|mask
wccp2_assignment_method hash
# standard web cache, no auth
wccp2_service standard 0
Restart the Squid proxy once the changes have been made, and verify the following:
- Squid is listening on port 3128 & serving normal proxy requests
- Squid is listening on 3127
- Check no errors in Squid logs
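A quick way to run these checks from the shell (the log path assumes a default RedHat-style Squid install; adjust if yours differs):

# confirm Squid is listening on 3127 and 3128
netstat -ltnp | grep squid
# watch the cache log for errors while Squid starts
tail -f /var/log/squid/cache.log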
Linux Network Configuration
Now that Squid is working, we need to get requests redirected from the Cisco router to the proxy. This is done by the router encapsulating the request packet within a GRE packet, then forwarding it to the IP address of the Squid proxy. On the router side, this is automatic. But we need to configure the Linux system to receive these GRE-encapsulated packets, un-encapsulate them, and forward them to the listening proxy. I'm using a RedHat Linux system here, so the configuration files are those used by RedHat. Create a new interface, gre0, for the GRE tunnel: create the file /etc/sysconfig/network-scripts/ifcfg-gre0 with the following contents:
DEVICE=gre0
TYPE=GRE
BOOTPROTO=none
MY_INNER_IPADDR=172.16.1.1
PEER_OUTER_IPADDR=192.168.1.254
PEER_INNER_IPADDR=172.16.1.2
NETMASK=255.255.255.252
ONBOOT=yes
IPV6INIT=no
USERCTL=no
Run "ifdown gre0" and "ifup gre0" to test it, then run "ifconfig gre0" and verify the IP addressing. Enable IP forwarding, disable route packet filters, configure DNAT in IPtables Run the following commands: # bring up GRE interface
ifup gre0
# enable IP forwarding, disable route packet filters
# between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/gre0/rp_filter
# The following line redirects all http packets which exit gre0
# to port 3127 on the local Squid server.
iptables -F -t nat
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 \
-j DNAT --to-destination 192.168.1.253:3127
You'll need to run these at system boot time; add the commands to the start section of the /etc/init.d/squid script.
Testing
tcpdump is your friend when testing this configuration. Check the flows in the order shown in Figure 4 above and verify that each one works. Remember that the Squid proxy will use the IP address of the Internet web server when replying back to the client. If your proxy is behind a firewall, you will probably have to disable anti-spoofing mechanisms to allow the proxy to spoof the web server's IP address. Most problems seem to occur in the Linux GRE & NAT configuration. And don't forget to check the Squid logs for errors.
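For example, the following captures cover the flows from Figure 4 (eth0 and gre0 are the interface names used earlier in this article):

# WCCP announcements from Squid to the router (UDP port 2048)
tcpdump -n -i eth0 udp port 2048
# GRE-encapsulated packets redirected by the router
tcpdump -n -i eth0 ip proto 47
# the un-encapsulated client requests arriving on the tunnel interface
tcpdump -n -i gre0 tcp port 80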
Closing Notes
In this paper I've described a method of transparently caching web requests using a Squid proxy and a WCCP-enabled Cisco router. As described in the introduction, this solution can be used to implement security controls and bandwidth management without having to reconfigure client systems to explicitly use a proxy server.
Cisco Router Failover Connection
There are often requests for information regarding Cisco failovers. The following seems to work with 12.4 and higher. It should also work with 12.3, but this has not been tested.
Cisco provides a little-known feature called a tracking object. None of the following can be done through the SDM; it must be done through the CLI. In any case, it's recommended that you do everything through the CLI, as the SDM has some interesting inadequacies (a topic for another article).
You will need at least two remote IPs to use as a test. For example, you can use 4.2.2.1 for the first and 4.2.2.2 for the second; these two IPs are practically never down. You will need to create a host static route for each of the test IPs going out the interface to be tested. There is an example of this in the config. Also included are the NAT overload statements.
Hopefully the following helps. If you know of a better way to do this, please share it here. This was created using a known good configuration of a real client.
LEGEND
- your_first_test_ip = the IP you will use to test your primary connection.
- your_second_test_ip = the IP you will use to test your secondary connection.
- your_source_ip = the IP address on the router used as the source for the test pings (typically the address of the corresponding Internet-facing interface).
- your_primary_firsthop_ip = the first outside hop of your primary connection. Default route for the primary connection.
- your_secondary_firsthop_ip = the first outside hop of your secondary connection. Default route for the secondary connection.
- primary_interface = the interface name of your primary Internet connection.
- secondary_interface = the interface name of your secondary Internet connection.
- internal_ip_range = the IP range of your internal devices.
This sample config assumes 2 connections: one primary and one secondary.
Config:
ip sla monitor 1
type echo protocol ipIcmpEcho your_first_test_ip source-ipaddr your_source_ip
timeout 2000
threshold 2000
frequency 3
ip sla monitor schedule 1 life forever start-time now
ip sla monitor 2
type echo protocol ipIcmpEcho your_second_test_ip source-ipaddr your_source_ip
timeout 2000
threshold 2000
frequency 3
ip sla monitor schedule 2 life forever start-time now
track 100 rtr 1 reachability
track 200 rtr 2 reachability
ip route 0.0.0.0 0.0.0.0 your_primary_firsthop_ip track 100
ip route 0.0.0.0 0.0.0.0 your_secondary_firsthop_ip track 200
ip route your_first_test_ip 255.255.255.255 your_primary_firsthop_ip
ip route your_second_test_ip 255.255.255.255 your_secondary_firsthop_ip
ip nat inside source route-map primary interface primary_interface overload
ip nat inside source route-map secondary interface secondary_interface overload
ip access-list extended primary-route
10 permit ip internal_ip_range 0.0.0.255 any
ip access-list extended secondary-route
10 permit ip internal_ip_range 0.0.0.255 any
route-map primary permit 10
match ip address primary-route
set ip next-hop your_primary_firsthop_ip
route-map secondary permit 10
match ip address secondary-route
set ip next-hop your_secondary_firsthop_ip
Two notes for troubleshooting.
sh track
- This will show you what state your tracking objects are in.
sh ip access-list
- Watch for hits to your ACLs. This way you can verify that your NAT is working.
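Two additional checks that should work with the legacy ip sla monitor syntax used above (treat these as suggestions rather than part of the original config):
sh ip sla monitor statistics
- This shows the results of the two probes, so you can confirm the test IPs are actually answering.
sh ip route 0.0.0.0
- This shows which default route is currently installed, which is handy right after a failover.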
NetFlow on Cisco IOS
Configuring NetFlow Export on an IOS Device
Follow the steps below to configure NetFlow export on a Cisco IOS device.
Refer to the Cisco Version Matrix for information on Cisco platforms and IOS versions supporting NetFlow.
Enabling NetFlow Export
Enter global configuration mode on the router or MSFC, and issue the following commands for each interface on which you want to enable NetFlow:
interface {interface} {interface_number}
ip route-cache flow
bandwidth {link_speed_in_kbps}
exit
In some recent IOS releases Cisco Express Forwarding has to be enabled. Issue the command ip cef in global configuration mode on the router or MSFC for this.
This enables NetFlow on the specified interface alone. Remember that on a Cisco IOS device, NetFlow is enabled on a per-interface basis. The bandwidth command is optional, and is used to set the speed of the interface in kilobits per second. The interface speed or link speed value is used later to calculate percentage utilization values in traffic graphs.
Exporting NetFlow Data
Issue the following commands to export NetFlow data to the server on which NetFlow Analyzer is running:
Command | Purpose |
---|---|
ip flow-export destination {hostname|ip_address} 9996 | Exports the NetFlow cache entries to the specified IP address. Use the IP address of the NetFlow Analyzer server and the configured NetFlow listener port. The default port is 9996. |
ip flow-export source {interface} {interface_number} | Sets the source of the NetFlow exports sent by the device to the address of the specified interface. NetFlow Analyzer will make SNMP requests of the device on this address. |
ip flow-export version 5 [peer-as | origin-as] | Sets the NetFlow export version to version 5. NetFlow Analyzer supports only version 5, version 7 and version 9. If your router uses BGP you can specify that either the origin or peer AS is included in exports - it is not possible to include both. |
ip flow-cache timeout active 1 | Breaks up long-lived flows into 1-minute fragments. You can choose any number of minutes between 1 and 60. If you leave it at the default of 30 minutes your traffic reports will have spikes. It is important to set this value to 1 minute in order to generate alerts and view troubleshooting data. |
ip flow-cache timeout inactive 15 | Ensures that flows that have finished are periodically exported. The default value is 15 seconds. You can choose any number of seconds between 10 and 600. However, if you choose a value greater than 250 seconds, NetFlow Analyzer may report traffic levels that are too low. |
snmp-server ifindex persist | Enables ifIndex persistence (interface names) globally. This ensures that the ifIndex values are persisted during device reboots. |
For more information on BGP reporting in NetFlow Analyzer, look up the section on Configuring NetFlow for BGP.
Verifying Device Configuration
Issue the following commands in normal (not configuration) mode to verify whether NetFlow export has been configured correctly:
Command | Purpose |
---|---|
show ip flow export | Shows the current NetFlow configuration |
show ip cache flow | These commands summarize the active flows and give an indication of how much NetFlow data the device is exporting |
show ip cache verbose flow |
A Sample Device Configuration
The following is a set of commands issued on a router to enable NetFlow version 5 on the FastEthernet 0/1 interface and export to the machine 192.168.9.101 on port 9996.
router#enable
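A representative command sequence, following the enable prompt above and assembled from the commands described earlier (interface FastEthernet 0/1, export target 192.168.9.101 on port 9996), would be:

router#configure terminal
router(config)#ip flow-export destination 192.168.9.101 9996
router(config)#ip flow-export source FastEthernet 0/1
router(config)#ip flow-export version 5
router(config)#ip flow-cache timeout active 1
router(config)#ip flow-cache timeout inactive 15
router(config)#snmp-server ifindex persist
router(config)#interface FastEthernet 0/1
router(config-if)#ip route-cache flow
router(config-if)#exit
router(config)#exit
router#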
* Repeat these commands to enable NetFlow for each interface
Please note that NetFlow data export has to be enabled on all interfaces of a router in order to see accurate IN and OUT traffic. Suppose you have a router with interfaces A and B. Since NetFlow, by default, is done on an ingress basis, when you enable NetFlow data export on interface A, it will only export the IN traffic for interface A and OUT traffic for interface B. The OUT traffic for interface A will be contributed by the NetFlow data exported from interface B.
Turning off NetFlow
Issue the following commands in global configuration mode to stop exporting NetFlow data:
Command | Purpose |
---|---|
no ip flow-export destination {hostname|ip_address} {port_number} | This will stop exporting NetFlow cache entries to the specified destination IP address on the specified port number |
interface {interface} {interface_number} | This will disable NetFlow export on the specified interface. Repeat the commands for each interface on which you need to disable NetFlow. |
no ip route-cache flow | |
exit |
For further information on configuring your IOS device for NetFlow data export, refer to Cisco's NetFlow commands documentation.