Overview
Building on the previous post, I'm now going to use OVN to create a basic layer-3 network. The end result will be a pair of logical switches connected by a logical router. In addition, the router will be configured to serve IP addresses via OVN's built-in DHCP service.
Re-Architecting the Logical Components
Since what we're building is more complex this time, we need to rework things a bit. The new topology will consist of the following:
- Two logical switches: "dmz" and "inside"
- A logical router "tenant1" connecting the two logical switches
- The IP network 172.16.255.128/26 for "dmz"
- The IP network 172.16.255.192/26 for "inside"
- A pair of "virtual machines" on each logical switch
The end result will look roughly like the following sketch:
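 vm1   vm2                     vm3   vm4
  |     |                       |     |
+-------------------+     +-------------------+
|        dmz        |     |      inside       |
| 172.16.255.128/26 |     | 172.16.255.192/26 |
+-------------------+     +-------------------+
          |                         |
        .129                      .193
          +-------[ tenant1 ]-------+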
A Word On Routing
First we'll create an OVN router, also known as a "distributed logical router" (DLR). A DLR differs from a traditional router in that it is not an actual appliance but purely a logical construct (not unlike a logical switch). DLRs exist solely as a function within OVS: in other words, each OVS instance can simulate a layer-3 router hop locally before forwarding traffic across the overlay network.
Creating the Logical Switches and Router
On ubuntu1, create the two new logical switches:
ovn-nbctl ls-add inside
ovn-nbctl ls-add dmz
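Before moving on, you can confirm the switches landed in the northbound database; ovn-nbctl's standard read-only listing command is handy for this (output formatting varies slightly by version):

# list the logical switches in the northbound database
ovn-nbctl ls-list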
Add the logical router, along with its associated router ports and switch ports:
# add the router
ovn-nbctl lr-add tenant1

# create router port for the connection to dmz
ovn-nbctl lrp-add tenant1 tenant1-dmz 02:ac:10:ff:01:29 172.16.255.129/26

# create the dmz switch port for connection to tenant1
ovn-nbctl lsp-add dmz dmz-tenant1
ovn-nbctl lsp-set-type dmz-tenant1 router
ovn-nbctl lsp-set-addresses dmz-tenant1 02:ac:10:ff:01:29
ovn-nbctl lsp-set-options dmz-tenant1 router-port=tenant1-dmz

# create router port for the connection to inside
ovn-nbctl lrp-add tenant1 tenant1-inside 02:ac:10:ff:01:93 172.16.255.193/26

# create the inside switch port for connection to tenant1
ovn-nbctl lsp-add inside inside-tenant1
ovn-nbctl lsp-set-type inside-tenant1 router
ovn-nbctl lsp-set-addresses inside-tenant1 02:ac:10:ff:01:93
ovn-nbctl lsp-set-options inside-tenant1 router-port=tenant1-inside

ovn-nbctl show
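In addition to ovn-nbctl show, each piece can be inspected individually. A quick verification sketch using the standard listing commands:

# confirm the router exists
ovn-nbctl lr-list

# list the ports attached to the router and to each switch
ovn-nbctl lrp-list tenant1
ovn-nbctl lsp-list dmz
ovn-nbctl lsp-list inside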
Adding DHCP
DHCP in OVN works a bit differently than you might be used to. The general workflow an administrator follows is:
- Define a set of DHCP options for a given subnet
- Create logical switch ports, defining both the MAC address and the IP address
- Assign the DHCP options to those ports
- Set port security so that each port may only use its assigned addresses
First, on ubuntu1, configure the logical ports for our four virtual machines:
ovn-nbctl lsp-add dmz dmz-vm1
ovn-nbctl lsp-set-addresses dmz-vm1 "02:ac:10:ff:01:30 172.16.255.130"
ovn-nbctl lsp-set-port-security dmz-vm1 "02:ac:10:ff:01:30 172.16.255.130"

ovn-nbctl lsp-add dmz dmz-vm2
ovn-nbctl lsp-set-addresses dmz-vm2 "02:ac:10:ff:01:31 172.16.255.131"
ovn-nbctl lsp-set-port-security dmz-vm2 "02:ac:10:ff:01:31 172.16.255.131"

ovn-nbctl lsp-add inside inside-vm3
ovn-nbctl lsp-set-addresses inside-vm3 "02:ac:10:ff:01:94 172.16.255.194"
ovn-nbctl lsp-set-port-security inside-vm3 "02:ac:10:ff:01:94 172.16.255.194"

ovn-nbctl lsp-add inside inside-vm4
ovn-nbctl lsp-set-addresses inside-vm4 "02:ac:10:ff:01:95 172.16.255.195"
ovn-nbctl lsp-set-port-security inside-vm4 "02:ac:10:ff:01:95 172.16.255.195"

ovn-nbctl show
You may have noticed that, unlike in the previous lab, we define both the MAC address and the IP address on each logical switch port. Defining the IP address serves two purposes:
- It enables ARP suppression: OVN can answer ARP requests directly from its known IP/MAC pairs
- It acts as the DHCP host assignment mechanism: OVN answers DHCP requests arriving on a given port with the IP address defined for that port
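If you want to confirm what OVN has recorded for a given port, the address and port-security settings can be read back directly; a quick check using standard ovn-nbctl commands:

# show the MAC/IP pair and the port-security setting for dmz-vm1
ovn-nbctl lsp-get-addresses dmz-vm1
ovn-nbctl lsp-get-port-security dmz-vm1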
Next we'll define our DHCP options and assign them to our logical ports. The process here is a bit different from what we've done so far, in that we'll be interacting directly with the OVN NB database. The reason for this approach is that we need to capture the UUID of the DHCP_Options entry we create, so that we can assign it to our switch ports. To do this, we'll capture the output of the ovn-nbctl command in a pair of bash variables:
dmzDhcp="$(ovn-nbctl create DHCP_Options cidr=172.16.255.128/26 options=""server_id"="172.16.255.129" "server_mac"="02:ac:10:ff:01:29" "lease_time"="3600" "router"="172.16.255.129"")" echo $dmzDhcp insideDhcp="$(ovn-nbctl create DHCP_Options cidr=172.16.255.192/26 options=""server_id"="172.16.255.193" "server_mac"="02:ac:10:ff:01:93" "lease_time"="3600" "router"="172.16.255.193"")" echo $insideDhcp ovn-nbctl dhcp-options-list
If you want to learn more about the OVN NB database, see the ovn-nb man page.
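As an aside, newer builds of ovn-nbctl also ship dedicated dhcp-options-create and dhcp-options-set-options commands that avoid the quoting gymnastics above. A rough sketch, assuming your version has them (since dhcp-options-create may not echo the new UUID, we look it up with find):

# create the entry, then fetch its UUID from the NB database
ovn-nbctl dhcp-options-create 172.16.255.128/26
dmzDhcp="$(ovn-nbctl --bare --columns=_uuid find DHCP_Options cidr=172.16.255.128/26)"

# set the options on the entry
ovn-nbctl dhcp-options-set-options $dmzDhcp \
    server_id=172.16.255.129 server_mac=02:ac:10:ff:01:29 \
    lease_time=3600 router=172.16.255.129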
Now, using the UUIDs stored in the bash variables, assign the DHCP_Options to our logical switch ports:
ovn-nbctl lsp-set-dhcpv4-options dmz-vm1 $dmzDhcp
ovn-nbctl lsp-get-dhcpv4-options dmz-vm1

ovn-nbctl lsp-set-dhcpv4-options dmz-vm2 $dmzDhcp
ovn-nbctl lsp-get-dhcpv4-options dmz-vm2

ovn-nbctl lsp-set-dhcpv4-options inside-vm3 $insideDhcp
ovn-nbctl lsp-get-dhcpv4-options inside-vm3

ovn-nbctl lsp-set-dhcpv4-options inside-vm4 $insideDhcp
ovn-nbctl lsp-get-dhcpv4-options inside-vm4
Configuring the VMs
As in the previous lab, we'll create fake "virtual machines" using OVS internal ports and network namespaces. The difference this time is that we'll use DHCP to obtain their addresses. Let's set up the VMs.
On ubuntu2:
ip netns add vm1
ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip link set vm1 address 02:ac:10:ff:01:30
ip link set vm1 netns vm1
ovs-vsctl set Interface vm1 external_ids:iface-id=dmz-vm1
ip netns exec vm1 dhclient vm1
ip netns exec vm1 ip addr show vm1
ip netns exec vm1 ip route show

ip netns add vm3
ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
ip link set vm3 address 02:ac:10:ff:01:94
ip link set vm3 netns vm3
ovs-vsctl set Interface vm3 external_ids:iface-id=inside-vm3
ip netns exec vm3 dhclient vm3
ip netns exec vm3 ip addr show vm3
ip netns exec vm3 ip route show
On ubuntu3:
ip netns add vm2
ovs-vsctl add-port br-int vm2 -- set interface vm2 type=internal
ip link set vm2 address 02:ac:10:ff:01:31
ip link set vm2 netns vm2
ovs-vsctl set Interface vm2 external_ids:iface-id=dmz-vm2
ip netns exec vm2 dhclient vm2
ip netns exec vm2 ip addr show vm2
ip netns exec vm2 ip route show

ip netns add vm4
ovs-vsctl add-port br-int vm4 -- set interface vm4 type=internal
ip link set vm4 address 02:ac:10:ff:01:95
ip link set vm4 netns vm4
ovs-vsctl set Interface vm4 external_ids:iface-id=inside-vm4
ip netns exec vm4 dhclient vm4
ip netns exec vm4 ip addr show vm4
ip netns exec vm4 ip route show
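If dhclient hangs rather than returning an address, two quick checks are useful before digging further; a small troubleshooting sketch (run on ubuntu1, where the OVN databases live):

# does OVN consider the logical port bound and up?
ovn-nbctl lsp-get-up dmz-vm1

# which chassis is each logical port bound to?
ovn-sbctl show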
Test connectivity from vm1 on ubuntu2:
# ping the default gateway on tenant1
root@ubuntu2:~# ip netns exec vm1 ping 172.16.255.129
PING 172.16.255.129 (172.16.255.129) 56(84) bytes of data.
64 bytes from 172.16.255.129: icmp_seq=1 ttl=254 time=0.689 ms
64 bytes from 172.16.255.129: icmp_seq=2 ttl=254 time=0.393 ms
64 bytes from 172.16.255.129: icmp_seq=3 ttl=254 time=0.483 ms

# ping vm2 through the overlay
root@ubuntu2:~# ip netns exec vm1 ping 172.16.255.131
PING 172.16.255.131 (172.16.255.131) 56(84) bytes of data.
64 bytes from 172.16.255.131: icmp_seq=1 ttl=64 time=2.16 ms
64 bytes from 172.16.255.131: icmp_seq=2 ttl=64 time=0.573 ms
64 bytes from 172.16.255.131: icmp_seq=3 ttl=64 time=0.446 ms

# ping vm3 through the router, via the local ovs bridge
root@ubuntu2:~# ip netns exec vm1 ping 172.16.255.194
PING 172.16.255.194 (172.16.255.194) 56(84) bytes of data.
64 bytes from 172.16.255.194: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 172.16.255.194: icmp_seq=2 ttl=63 time=0.077 ms
64 bytes from 172.16.255.194: icmp_seq=3 ttl=63 time=0.076 ms

# ping vm4 through the router, across the overlay
root@ubuntu2:~# ip netns exec vm1 ping 172.16.255.195
PING 172.16.255.195 (172.16.255.195) 56(84) bytes of data.
64 bytes from 172.16.255.195: icmp_seq=1 ttl=63 time=1.79 ms
64 bytes from 172.16.255.195: icmp_seq=2 ttl=63 time=0.605 ms
64 bytes from 172.16.255.195: icmp_seq=3 ttl=63 time=0.503 ms
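Note the TTL values: pings to VMs on the far side of the router come back with ttl=63, i.e. exactly one router hop, taken locally by OVS as described in "A Word On Routing" above. If your OVN build ships ovn-trace, you can watch that hop being evaluated against the logical flows; a sketch (run on ubuntu1; the MAC and IP values are the ones we assigned earlier):

# trace a packet from vm1, addressed to the router's MAC,
# destined for vm3 on the "inside" network
ovn-trace --minimal dmz 'inport == "dmz-vm1" &&
    eth.src == 02:ac:10:ff:01:30 && eth.dst == 02:ac:10:ff:01:29 &&
    ip4.src == 172.16.255.130 && ip4.dst == 172.16.255.194 && ip.ttl == 64'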
Final Words
OVN makes creating a layer-3 overlay network fairly straightforward. Having services such as DHCP built directly into the system also helps reduce the external dependencies required to build an effective SDN solution. In the next post, I'll discuss how to connect our (so far isolated) overlay network to the outside world.
Original article: http://blog.spinhirne.com/2016/09/an-introduction-to-ovn-routing.html