Linux Dual-NIC Bonding

     The bonding driver has been part of the Linux kernel since 2.4.
     The general procedure is:
    1. Load the bonding module into the kernel.

    2. Edit the configuration of the NICs to be bonded and remove their address settings.

    3. Add the bond device and configure its address and other settings.

    4. Restart networking.

    5. Configure matching support on the switch (if the mode requires it).

    For details, see the kernel documentation Documentation/networking/bonding.txt

    Reference example:

    Binding two NICs to a single IP address on Linux essentially virtualizes the two cards into one device that shares one IP address, giving us better and faster service. The technique has long existed on Sun and Cisco equipment, where it is known as Trunking and EtherChannel; the Linux 2.4.x kernel adopted the same idea under the name bonding.

    1. How bonding works

    To understand bonding we need to start from the NIC's promiscuous (promisc) mode. Normally, a NIC only accepts Ethernet frames whose destination hardware address (MAC address) matches its own and drops everything else, to reduce the load on the driver. A NIC also supports promiscuous mode, in which it receives every frame on the wire; tcpdump, for example, runs in this mode. Bonding also runs in this mode, and in addition it changes the MAC addresses in the drivers so that both NICs share the same address and accept frames destined for that MAC; the matching frames are then handed to the bond driver for processing.
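    As a side illustration (not a required bonding step), you can toggle and inspect promiscuous mode on a single interface with the iproute2 tools; eth0 here is just a placeholder name:

    ip link set eth0 promisc on
    ip link show eth0        # the flags shown should now include PROMISC
    ip link set eth0 promisc off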

    2. Bonding module operating modes

    bonding mode=1 miimon=100. The miimon parameter controls link monitoring: with miimon=100 the system checks the link state every 100 ms, and if one link goes down traffic switches to the other. The mode value selects the working mode; there are seven modes, 0 through 6, of which 0, 1 and 6 are the most commonly used.
    mode=0: load-balancing mode with automatic failover, but it requires switch support and configuration.
    mode=1: active-backup mode; if one link fails, the other takes over automatically.
    mode=6: load-balancing mode with automatic failover that requires no switch support or configuration.
    mode=0 (balance-rr)

    Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

    mode=1 (active-backup)

    Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

    mode=2 (balance-xor)

    XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
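    As a purely illustrative worked example of this hash (using only the last byte of each MAC for readability, and assuming two slaves):

    # source MAC ends in 0x1a, destination MAC ends in 0x2f, 2 slaves:
    # (0x1a XOR 0x2f) mod 2 = 0x35 mod 2 = 1, so slave 1 carries this flow
    printf '%d\n' $(( (0x1a ^ 0x2f) % 2 ))    # prints 1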

    mode=3 (broadcast)

    Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

    mode=4 (802.3ad)

    IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. ethtool support in the base drivers for retrieving the speed and duplex of each slave; 2. a switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

    mode=5 (balance-tlb)

    Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

    mode=6 (balance-alb)

    Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
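    When the module is loaded, the mode can be given either by number or by the name shown above; a minimal sketch (run as root):

    modprobe bonding mode=active-backup miimon=100
    # equivalent to: modprobe bonding mode=1 miimon=100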

    3. Installation and configuration on Debian

    3.1 Install ifenslave

    1. apt-get install ifenslave  

    3.2 Load the bonding module automatically at boot

    1. sudo sh -c "echo bonding mode=1 miimon=100 >> /etc/modules"  

    3.3 NIC configuration

    1. sudo vi /etc/network/interfaces  
    2. # Example contents:  
    3. auto lo  
    4. iface lo inet loopback  
    5.   
    6. auto bond0  
    7. iface bond0 inet static  
    8. address 192.168.1.110  
    9. netmask 255.255.255.0  
    10. gateway 192.168.1.1  
    11. dns-nameservers 192.168.1.1  
    12. post-up ifenslave bond0 eth0 eth1  
    13. pre-down ifenslave -d bond0 eth0 eth1  
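    On newer Debian/Ubuntu releases, the ifenslave hooks also understand bond-* options directly in /etc/network/interfaces, which avoids the post-up/pre-down lines; a minimal sketch under that assumption:

    auto bond0
    iface bond0 inet static
        address 192.168.1.110
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-slaves eth0 eth1
        bond-mode active-backup
        bond-miimon 100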

    3.4 Restart networking to finish

    1. # If you did not reboot after installing ifenslave, load the bonding module manually.  
    2. sudo modprobe bonding mode=1 miimon=100  
    3. # Restart networking  
    4. sudo /etc/init.d/networking restart  
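    To confirm the bond is up, the kernel exposes its state under /proc; for example:

    cat /proc/net/bonding/bond0
    ip addr show bond0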

    4. Installation and configuration on Red Hat

    4.1 Install ifenslave
    Red Hat usually ships it by default; if it is missing, install it first.

    1. yum install ifenslave  

    4.2 Load the bonding module automatically at boot

    1. sudo sh -c "echo alias bond0 bonding >> /etc/modprobe.conf"  
    2. sudo sh -c "echo options bond0 miimon=100 mode=1 >> /etc/modprobe.conf"  

    4.3 NIC configuration

    1. sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0  
    2. # eth0 configuration:  
    3. DEVICE=eth0  
    4. ONBOOT=yes  
    5. BOOTPROTO=none  
    6. MASTER=bond0  
    7. SLAVE=yes  
    8.   
    9. sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1  
    10. # eth1 configuration:  
    11. DEVICE=eth1  
    12. ONBOOT=yes  
    13. BOOTPROTO=none  
    14. MASTER=bond0  
    15. SLAVE=yes  
    16.   
    17. sudo vi /etc/sysconfig/network-scripts/ifcfg-bond0  
    18. # bond0 configuration:  
    19. DEVICE=bond0  
    20. ONBOOT=yes  
    21. BOOTPROTO=static  
    22. IPADDR=192.168.1.110  
    23. NETMASK=255.255.255.0  
    24. GATEWAY=192.168.1.1  
    25. TYPE=Ethernet  
    26.   
    27. # Enslave both NICs at boot (redundant when the MASTER/SLAVE lines above are used, but harmless)  
    28. sudo sh -c "echo ifenslave bond0 eth0 eth1 >> /etc/rc.local"  

    4.4 Restart networking to finish

    1. # If you did not reboot after installing ifenslave, load the bonding module manually.  
    2. sudo modprobe bonding mode=1 miimon=100  
    3. # Restart networking  
    4. sudo /etc/init.d/network restart  

    5. Switch EtherChannel configuration

    When using mode=0, the switch ports must be configured for EtherChannel.

    1. Switch# configure terminal  
    2. Switch(config)# interface range fastethernet 0/1 - 2  
    3. Switch(config-if-range)# channel-group 1 mode on  
    4. Switch(config-if-range)# end  
    5. Switch#copy run start  
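    For mode=4 (802.3ad), the port-channel on the switch should negotiate LACP rather than being forced on; a minimal sketch of the usual Cisco IOS commands:

    1. Switch# configure terminal  
    2. Switch(config)# interface range fastethernet 0/1 - 2  
    3. Switch(config-if-range)# channel-group 1 mode active  
    4. Switch(config-if-range)# end  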

    References
    1 http://sapling.me/unixlinux/linux_two_nic_one_ip_bonding.html
    2 http://www.linux-corner.info/bonding.html

    Linux NIC Bonding

    Simply put, several NICs are bound together into one virtual NIC, usually named bond0, bond1, 2, ...; the technique used is bonding. The posts below summarize it well, so they are reposted here.
     
    ===================================
    Dual-NIC Bonding on Linux for Load Balancing and Failover

         Keeping servers highly available is an important factor in enterprise IT environments, and one of the most important aspects is the availability of the server's network connection. NIC bonding helps guarantee that availability and brings additional benefits that improve network performance. 

          The dual-NIC bonding described here virtualizes two NICs into one: the aggregated device looks like a single Ethernet interface. In plain terms, the two NICs share the same IP address and work in parallel, aggregated into one logical link. The technique has long existed on Sun and Cisco equipment, where it is called Trunking and EtherChannel; the Linux 2.4.x kernel adopted it under the name bonding. Bonding was first applied in Beowulf clusters, to speed up data transfer between cluster nodes. Now for how bonding works: it starts from the NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) matches its own and drops everything else, to reduce the load on the driver. A NIC also supports promiscuous mode, in which it receives every frame on the wire; tcpdump, for example, runs in this mode. Bonding also runs in this mode, and in addition changes the MAC addresses in the drivers so that both NICs share the same address and accept frames destined for that MAC; matching frames are then handed to the bond driver for processing. 
        Enough theory; the configuration itself is simple, just four steps.
    The operating system used for the test is Red Hat Enterprise Linux 3.0.
    Prerequisites for bonding: the NICs should use the same chipset model, and each should have its own independent BIOS chip.

    1. Edit the virtual network interface configuration file and assign the IP address to it:
    [root@rhas-13 root]# cd /etc/sysconfig/network-scripts
    [root@rhas-13 network-scripts]# cp ifcfg-eth0 ifcfg-bond0

    2. # vi ifcfg-bond0
    Change the first line to DEVICE=bond0, so that the file reads:
    # cat ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    IPADDR=172.31.0.13
    NETMASK=255.255.252.0
    BROADCAST=172.31.3.255
    ONBOOT=yes
    TYPE=Ethernet
    Note: do not assign an IP address, netmask or NIC ID to the individual NICs; that information belongs only on the virtual (bonding) adapter.
    [root@rhas-13 network-scripts]# cat ifcfg-eth0 
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=dhcp
    [root@rhas-13 network-scripts]# cat ifcfg-eth1
    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=dhcp

    3. # vi /etc/modules.conf 
    Edit /etc/modules.conf and add the following lines so that the system loads the bonding module at boot and exposes the virtual interface as bond0.
     
    Add these two lines: 
    1. alias bond0 bonding
    2. options bond0 miimon=100 mode=1
    Explanation: miimon is for link monitoring. With miimon=100, the system checks the link state every 100 ms and switches to the other link if one goes down. mode selects the working mode; this article uses the two most common ones:
    1.    mode=0 is load balancing (round-robin); both NICs carry traffic.
    2.    mode=1 is fault-tolerance (active-backup), a primary/backup arrangement: by default only one NIC carries traffic while the other stands by.  
    Note that bonding only monitors the link between the host and the switch. If the switch's own uplink goes down while the switch itself is fine, bonding still considers the link healthy and keeps using it.
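    If link monitoring beyond the host-to-switch segment is needed, the bonding driver can also monitor reachability of a remote IP with ARP probes instead of MII; a minimal sketch of the module options (the target IP is a placeholder, typically the gateway):

    options bond0 mode=1 arp_interval=1000 arp_ip_target=172.31.0.1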
    4. # vi /etc/rc.d/rc.local 
    Add these two lines: 
    1. ifenslave bond0 eth0 eth1
    2. route add -net 172.31.3.254 netmask 255.255.255.0 bond0

    Configuration is now complete; reboot the machine.
    If you see the following messages during boot, the configuration succeeded:
    ................ 
    Bringing up interface bond0 OK 
    Bringing up interface eth0 OK 
    Bringing up interface eth1 OK 
    ................


    Now let's look at the behaviour when mode is 0 and when it is 1.

    mode=1
    Running in active-backup mode; eth1, as the backup NIC, is marked NOARP.
      1. [root@rhas-13 network-scripts]# ifconfig    (verify the NIC configuration)
      2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
      3.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
      4.           UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      5.           RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
      6.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
      7.           collisions:0 txqueuelen:0
      8.           RX bytes:1587253 (1.5 Mb) TX bytes:89642 (87.5 Kb)
      9.   
      10. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
      11.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
      12.           UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
      13.           RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
      14.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
      15.           collisions:0 txqueuelen:1000
      16.           RX bytes:833514 (813.9 Kb) TX bytes:89642 (87.5 Kb)
      17.           Interrupt:11
      18.   
      19. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
      20.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
      21.           UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
      22.           RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
      23.           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      24.           collisions:0 txqueuelen:1000
      25.           RX bytes:753739 (736.0 Kb) TX bytes:0 (0.0 b)
      26.           Interrupt:15
        In other words, in active-backup mode, when one network path fails (for example the primary switch loses power), there is no network outage: the system works through the NICs in the order specified in /etc/rc.d/rc.local, the machine keeps serving, and failover protection is achieved.

    mode=0    
    Load-balancing mode; it can provide twice the bandwidth. Let's look at the NIC configuration:
    1. [root@rhas-13 root]# ifconfig
    2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    3. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    5. RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
    6. TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
    7. collisions:0 txqueuelen:0
    8. RX bytes:226957 (221.6 Kb) TX bytes:15266 (14.9 Kb)
    9. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    10. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    11. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    12. RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
    13. TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
    14. collisions:0 txqueuelen:1000
    15. RX bytes:113967 (111.2 Kb) TX bytes:7268 (7.0 Kb)
    16. Interrupt:11
    17. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    18. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    19. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    20. RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
    21. TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
    22. collisions:0 txqueuelen:1000
    23. RX bytes:112990 (110.3 Kb) TX bytes:7998 (7.8 Kb)
    24. Interrupt:15
     
          In this case, if one NIC fails, only the server's outgoing bandwidth drops; network connectivity is not affected.




    You can get a detailed picture of the bonding state by querying bond0's status:
    1. [root@rhas-13 bonding]# cat /proc/net/bonding/bond0
    2. bonding.c:v2.4.1 (September 15, 2003)
    3. Bonding Mode: load balancing (round-robin)
    4. MII Status: up
    5. MII Polling Interval (ms): 0
    6. Up Delay (ms): 0
    7. Down Delay (ms): 0
    8. Multicast Mode: all slaves
    9. Slave Interface: eth1
    10. MII Status: up
    11. Link Failure Count: 0
    12. Permanent HW addr: 00:0e:7f:25:d9:8a
    13. Slave Interface: eth0
    14. MII Status: up
    15. Link Failure Count: 0
    16. Permanent HW addr: 00:0e:7f:25:d9:8b
         NIC bonding on Linux both increases server reliability and adds usable network bandwidth, providing uninterrupted critical services. The method above was tested successfully on several Red Hat releases and works well. Give it a try.

    Reference:
    /usr/share/doc/kernel-doc-2.4.21/networking/bonding.txt
     
     
    -----------------------------

    Finally, today I implemented NIC bonding (binding both NICs so that they work as a single device). Bonding is simply a Linux kernel feature that allows aggregating multiple like interfaces (such as eth0, eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:

    1. RHEL v4 / 5 / 6 amd64
    2. CentOS v5 / 6 amd64
    3. Fedora Linux 13 amd64 and up.
    4. 2 x PCI-e Gigabit Ethernet NICs with Jumbo Frames (MTU 9000)
    5. Hardware RAID-10 w/ SAS 15k enterprise grade hard disks.
    6. Gigabit switch with Jumbo Frame

     


    Say Hello To The Bonding Driver

    This server acts as a heavy-duty FTP and NFS file server. Each night a Perl script transfers lots of data from this box to a backup server. Therefore, the network is set up on a switch using dual network cards. I am using Red Hat Enterprise Linux version 4.0, but the instructions should work on RHEL 5 and 6 too.

    Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding. According to the official bonding documentation:

    The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.

    Step #1: Create a Bond0 Configuration File

    Red Hat Enterprise Linux (and its clone such as CentOS) stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
    # vi /etc/sysconfig/network-scripts/ifcfg-bond0
    Append the following lines:

    1. DEVICE=bond0
    2. IPADDR=192.168.1.20
    3. NETWORK=192.168.1.0
    4. NETMASK=255.255.255.0
    5. USERCTL=no
    6. BOOTPROTO=none
    7. ONBOOT=yes

    You need to replace IP address with your actual setup. Save and close the file.

    Step #2: Modify eth0 and eth1 config files

    Open both configuration files using a text editor such as vi/vim, and make sure the file reads as follows for the eth0 interface:

    1. # vi /etc/sysconfig/network-scripts/ifcfg-eth0
    Modify/append directive as follows:
    1. DEVICE=eth0
    2. USERCTL=no
    3. ONBOOT=yes
    4. MASTER=bond0
    5. SLAVE=yes
    6. BOOTPROTO=none

    Open eth1 configuration file using vi text editor, enter:

    1. # vi /etc/sysconfig/network-scripts/ifcfg-eth1
    Make sure the file reads as follows for the eth1 interface:
    1. DEVICE=eth1
    2. USERCTL=no
    3. ONBOOT=yes
    4. MASTER=bond0
    5. SLAVE=yes
    6. BOOTPROTO=none

    Save and close the file.

    Step # 3: Load bond driver/module

    Make sure bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify kernel modules configuration file:

    1. # vi /etc/modprobe.conf
    Append following two lines:
    1. alias bond0 bonding
    2. options bond0 mode=balance-alb miimon=100
    Save the file and exit to the shell prompt. (All bonding options are documented in the kernel's Documentation/networking/bonding.txt.)

    Step # 4: Test configuration

    First, load the bonding module, enter:

    1. # modprobe bonding
    Restart the networking service in order to bring up bond0 interface, enter:
    1. # service network restart
    Make sure everything is working. Type the following cat command to query the current status of the Linux kernel bonding driver:
    1. # cat /proc/net/bonding/bond0
    Sample outputs:
    1. Bonding Mode: load balancing (round-robin)
    2. MII Status: up
    3. MII Polling Interval (ms): 100
    4. Up Delay (ms): 200
    5. Down Delay (ms): 200
    6. Slave Interface: eth0
    7. MII Status: up
    8. Link Failure Count: 0
    9. Permanent HW addr: 00:0c:29:c6:be:59
    10. Slave Interface: eth1
    11. MII Status: up
    12. Link Failure Count: 0
    13. Permanent HW addr: 00:0c:29:c6:be:63

    To list all network interfaces, enter:

    1. # ifconfig
    Sample outputs:
    1. bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
    2. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
    3. inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
    4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    5. RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
    6. TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
    7. collisions:0 txqueuelen:0
    8. RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
    9. eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
    10. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
    11. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
    12. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    13. RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
    14. TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
    15. collisions:0 txqueuelen:1000
    16. RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
    17. Interrupt:11 Base address:0x1400
    18. eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
    19. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
    20. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
    21. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    22. RX packets:4 errors:0 dropped:0 overruns:0 frame:0
    23. TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
    24. collisions:0 txqueuelen:1000
    25. RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
    26. Interrupt:10 Base address:0x1480

    Read the official bonding howto, which covers the following additional topics:

    • VLAN Configuration
    • Cisco switch related configuration
    • Advanced routing and troubleshooting

    --------------------------------------------------

     
    When working with SUSE you run into many situations; one of them is how to bond two NICs. This section walks through the whole SUSE dual-NIC bonding procedure so that you can repeat it yourself.
     
    1. The simple method
    ---------------------------------------------------------- 
     
    Bind the two fabric NICs into bond1
    # vi /etc/sysconfig/network/ifcfg-bond1
    -------------------- 
    BOOTPROTO='static'
    IPADDR='10.69.16.102'
    NETMASK='255.255.255.0'
    STARTMODE='onboot'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=1 miimon=200'
    BONDING_SLAVE0='eth1'
    BONDING_SLAVE1='eth2'
    -------------------- 
     
    Delete the original NIC configuration files and restart the network service
    cd /etc/sysconfig/network/
    rm ifcfg-eth1
    rm ifcfg-eth2
    rcnetwork restart
    Use ifconfig to check whether the bonding succeeded. If bond1 now holds the IP address, the two original NICs carry no IP, and their MAC addresses are identical, the bond is working.
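    The bonding driver also reports its state under /proc, which is a more direct check:

    cat /proc/net/bonding/bond1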
     
    2. The more formal method
    ---------------------------------------------------------- 
     
    Step 1. Change to the network configuration directory:
    # cd /etc/sysconfig/network
     
    Step 2. Create the ifcfg-bond0 configuration file.
    # vi ifcfg-bond0
     
    Add the following to ifcfg-bond0.
    #suse 9 kernel 2.6 ifcfg-bond0
    BOOTPROTO='static'
    device='bond0'
    IPADDR='10.71.122.13'
    NETMASK='255.255.255.0'
    NETWORK='10.71.122.0'
    BROADCAST='10.71.122.255'
    STARTMODE='onboot'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=1 miimon=200'
    BONDING_SLAVE0='eth0'
    BONDING_SLAVE1='eth1'
     
    Step 3. Save the file and exit.
     
    Step 4. Create the ifcfg-eth0 configuration file.
    (After installing SUSE 9, /etc/sysconfig/network already contains two configuration files named after the NICs' MAC addresses; overwrite those two files with the ifcfg-eth0 contents below instead of creating new ifcfg-eth0 and ifcfg-eth1 files. On SUSE 10, follow the steps below as written.)
    # vi ifcfg-eth0
     
    Add the following to ifcfg-eth0.
    DEVICE='eth0'
    BOOTPROTO='static'
    STARTMODE='onboot'
     
    Step 5. Save the file and exit.
     
    Step 6. Create the ifcfg-eth1 configuration file.
    # vi ifcfg-eth1
     
    Add the following to ifcfg-eth1.
    DEVICE='eth1'
    BOOTPROTO='static'
    STARTMODE='onboot'
     
    Step 7. Save the file and exit.
     
    Step 8. Restart the system's network configuration so the changes take effect.
    # rcnetwork restart
     
    3. The method recommended by SUSE (and the one I prefer)
    ----------------------------------------------------------
     
    Part 1. Configure loading of the NIC drivers
     
    In /etc/sysconfig/kernel, add the NIC drivers to the MODULES_LOADED_ON_BOOT parameter, for example:
    MODULES_LOADED_ON_BOOT="tg3 e1000"
     
    Note: in most cases this step is not needed. It is only required when a NIC driver initializes too slowly during boot and the interface is not recognized in time, so that a slave device fails to join the bond.
     
    Part 2. Create configuration files for the NICs to be bonded
    /etc/sysconfig/network/ifcfg-eth*, where * is a digit, e.g. ifcfg-eth0, ifcfg-eth1, and so on.
     
    Each file contains:
    BOOTPROTO='none'
    STARTMODE='off'
     
    Part 3. Create the bond0 configuration file
    /etc/sysconfig/network/ifcfg-bond0
     
    Contents:
    -------------------- 
    BOOTPROTO='static'
    BROADCAST='192.168.1.255'
    IPADDR='192.168.1.1'
    NETMASK='255.255.255.0'
    NETWORK='192.168.1.0'
    STARTMODE='onboot'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1'
    BONDING_SLAVE0='eth0'
    BONDING_SLAVE1='eth1'
    -------------------- 
     
    # mode=1 is active-backup mode; mode=0 is balance-rr mode
     
    Part 4. For active-backup mode, add the parameter that designates the primary device to BONDING_MODULE_OPTS, for example:
     
    BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1 primary=eth0'
     
    Part 5. Restart the network service
     
    rcnetwork restart
     
    Part 6. Notes
     
    (1) In some cases the NIC driver may take a relatively long time to initialize, which can cause bonding to fail. In that case edit the WAIT_FOR_INTERFACES parameter in the /etc/sysconfig/network/config configuration file and raise its value to 30.
     
    (2) After configuring bonding, you can verify that it works by pinging from a client while unplugging and replugging the network cables on the server.
     
    (3) cat /proc/net/bonding/bond0 shows the bonding status. With that, SUSE dual-NIC bonding is complete.
     
     
    from:http://os.51cto.com/art/200911/165875.htm


    ======================================================
     
    References:
    1. http://www.chinaunix.net/jh/4/371049.html
    2. http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html
      http://os.51cto.com/art/200911/165875.htm