
Docker networking: bridge

It is recommended to read the following articles before this one; this post does not cover the basics of bridges:

https://blog.csdn.net/u014027051/article/details/53908878/

http://williamherry.blogspot.com/2012/05/linux.html

https://tonybai.com/2016/01/15/understanding-container-networking-on-single-host/

Linux bridge:

  • Create two netns
ip netns add ns0
ip netns add ns1
  • Add an interface of type veth to each netns
ip link add veth0_ns0 type veth peer name veth_ns0
ip link add veth0_ns1 type veth peer name veth_ns1
ip link set veth0_ns0 netns ns0
ip link set veth0_ns1 netns ns1  

  Check the interfaces inside each netns; ns0 and ns1 now have the new interfaces veth0_ns0 and veth0_ns1 respectively:

[root@localhost home]# ip netns exec ns0 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
12: veth0_ns0@if11: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 82:87:07:8f:59:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    
[root@localhost home]# ip netns exec ns1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: veth0_ns1@if13: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9a:14:d8:63:56:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0

On the host, list the interfaces. The interface indexes show that veth0_ns0 (12) and veth_ns0 (11) form one veth pair, and veth0_ns1 (14) and veth_ns1 (13) form the other:

[root@localhost home]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:12:5d:af brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:37:84:0b:5f brd ff:ff:ff:ff:ff:ff
11: veth_ns0@if12: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether c2:e3:ef:a8:9c:08 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth_ns1@if14: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether a6:77:5d:48:10:81 brd ff:ff:ff:ff:ff:ff link-netnsid 3
  • Configure IP addresses on the ns0 and ns1 interfaces and bring them up
ip netns exec ns0 ip addr add 1.1.1.1/24 dev veth0_ns0
ip netns exec ns0 ip link set dev veth0_ns0 up

ip netns exec ns1 ip addr add 1.1.1.2/24 dev veth0_ns1 
ip netns exec ns1 ip link set dev veth0_ns1 up

Check the interface information in ns0 and ns1:

[root@localhost home]# ip netns exec ns0 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
12: veth0_ns0@if11: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 82:87:07:8f:59:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 1.1.1.1/24 scope global veth0_ns0
       valid_lft forever preferred_lft forever

[root@localhost home]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: veth0_ns1@if13: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 9a:14:d8:63:56:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 1.1.1.2/24 scope global veth0_ns1
       valid_lft forever preferred_lft forever

Because the two namespaces are completely isolated from each other, pinging ns1 from ns0 fails at this point:

[root@localhost home]# ip netns exec ns0 ping 1.1.1.2
PING 1.1.1.2 (1.1.1.2) 56(84) bytes of data.
  • Create a Linux bridge and attach the ns0 and ns1 veth peers (veth_ns0 and veth_ns1) to it
ip link add br0 type bridge
ip link set dev br0 up
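
The two commands above only create br0 and bring it up; to reach the state shown in the output below, the host-side ends of the two veth pairs also have to be attached to br0 and brought up. A minimal sketch of those missing steps:

ip link set dev veth_ns0 master br0
ip link set dev veth_ns1 master br0
ip link set dev veth_ns0 up
ip link set dev veth_ns1 up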

Check br0; the peer veth interfaces of both ns0 and ns1 are now attached to it:

[root@localhost home]# ip a show br0
15: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:77:5d:48:10:81 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42e:2dff:fe70:43d7/64 scope link 
       valid_lft forever preferred_lft forever

[root@localhost home]# ip a show master br0
11: veth_ns0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether c2:e3:ef:a8:9c:08 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::c0e3:efff:fea8:9c08/64 scope link 
       valid_lft forever preferred_lft forever
13: veth_ns1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether a6:77:5d:48:10:81 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::a477:5dff:fe48:1081/64 scope link 
       valid_lft forever preferred_lft forever

  Ping ns1 from ns0 again; this time it succeeds:

[root@localhost netns]# ip netns exec ns0 ping 1.1.1.2
PING 1.1.1.2 (1.1.1.2) 56(84) bytes of data.
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.148 ms

The topology at this point: ns0 and ns1 are each connected to the host bridge br0 through their veth pairs.

  • If we ping the external gateway (192.168.80.2 in this environment), it cannot be reached, because br0 only connects the two small networks (ns0 and ns1). To fix this, attach the host NIC ens33 to the bridge (this will break any remote connection to the host, so only do it in a test environment) and delete ens33's address and the related routes (for the reason, see the "connectivity" section of https://blog.csdn.net/sld880311/article/details/77840343).
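
The attach and cleanup commands themselves are not listed here; a minimal sketch, assuming ens33's address is 192.168.80.128/24 (as seen in the host routing table later), would be:

ip link set dev ens33 master br0
ip addr del 192.168.80.128/24 dev ens33

(Deleting the interface's only IPv4 address also removes the routes that depended on it.) Checking the bridge membership afterwards:
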
[root@localhost home]# ip link show master br0
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:12:5d:af brd ff:ff:ff:ff:ff:ff
11: veth_ns0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether c2:e3:ef:a8:9c:08 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth_ns1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether a6:77:5d:48:10:81 brd ff:ff:ff:ff:ff:ff link-netnsid 3

You can see that ens33 is now attached to br0, but pinging the gateway from ns0 still fails: ns0's address 1.1.1.1/24 is not in the gateway's subnet, and ens33's own IP no longer takes effect once the interface is enslaved to br0, so there is no usable path to the gateway.

[root@localhost netns]# ip netns exec ns0 ping 192.168.80.2 -I veth0_ns0
PING 192.168.80.2 (192.168.80.2) from 1.1.1.1 veth0_ns0: 56(84) bytes of data.

  A simple workaround is to add an address in the gateway's subnet to ns0 and ns1, and to give br0 an address in that subnet as well:

[root@localhost netns]# ip netns exec ns0 ip addr add 192.168.80.80/24 dev veth0_ns0
[root@localhost netns]# ip netns exec ns1 ip addr add 192.168.80.81/24 dev veth0_ns1
[root@localhost netns]# ip addr add 192.168.80.82/24 dev br0

Now both ns0 and ns1 can ping the gateway, but the host itself can no longer reach the outside world, so this is not a good solution:

[root@localhost netns]# ip netns exec ns0 ping 192.168.80.2 -I veth0_ns0
PING 192.168.80.2 (192.168.80.2) from 192.168.80.80 veth0_ns0: 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=128 time=0.236 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=128 time=0.239 ms

[root@localhost netns]# ip netns exec ns1 ping 192.168.80.2 -I veth0_ns1
PING 192.168.80.2 (192.168.80.2) from 192.168.80.81 veth0_ns1: 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=128 time=0.288 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=128 time=0.280 ms

Docker bridge:

  On CentOS, Docker keeps its netns files under /var/run/docker/netns; each container created produces a corresponding namespace file there, and entering that namespace with nsenter shows exactly the same network information as inside the container.
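
For example, assuming one of the namespace files is named <ns-file> (use whatever name actually appears in the directory), you can inspect it from the host like this:

ls /var/run/docker/netns
nsenter --net=/var/run/docker/netns/<ns-file> ip addr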

First create a bridge network and start two containers:

[root@localhost home]# docker network create -d bridge --subnet 172.1.1.0/24 my_br
[root@localhost home]# docker run -itd --net=my_br --name=centos0 centos /bin/sh
[root@localhost home]# docker run -itd --net=my_br --name=centos1 centos /bin/sh

Inspecting my_br shows that centos0 has IP 172.1.1.2 and centos1 has 172.1.1.3, both within my_br's subnet:

[root@localhost home]# docker network inspect my_br 
[
    {
        "Name": "my_br",
        "Id": "f830aee4b13fa17479f850ea62d570ea61bc1c7d182a88010709a7285193bb64",
        "Created": "2018-10-17T07:23:53.31341481+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.1.1.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "03cda0f3fdd1fc65d198adb832998e11098bcc8c1bb5a8379f9c2ee82a14be07": {
                "Name": "centos1",
                "EndpointID": "d608d888da293967949340c1d946e92a6be06d525bcec611d0f20a6188de01ff",
                "MacAddress": "02:42:ac:01:01:03",
                "IPv4Address": "172.1.1.3/24",
                "IPv6Address": ""
            },
            "c739d26d51b08a36d3402e32fbe83656a7ac1b3f611a6c228f8ec80c84423439": {
                "Name": "centos0",
                "EndpointID": "9b38292d043fba31a5d04076c9d6a333c5beac08aba68dadeb84a5e17fed4dd6",
                "MacAddress": "02:42:ac:01:01:02",
                "IPv4Address": "172.1.1.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Naturally, container centos0 can ping the gateway directly:

[root@localhost home]# docker exec centos0 /bin/sh -c "ping 192.168.80.2"
PING 192.168.80.2 (192.168.80.2) 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=127 time=0.273 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=127 time=0.642 ms

Check the interfaces of centos0, centos1, and the host. Matching the interface indexes shows that centos0's eth0 and the host's veth00f659d form one veth pair, while centos1's eth0 and the host's veth05377ae form the other:

[root@localhost home]# docker exec centos0 /bin/sh -c "ip link"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:01:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

[root@localhost home]# docker exec centos1 /bin/sh -c "ip link"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:01:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost home]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:12:5d:af brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:2d:4c brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:2d:4c brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:37:84:0b:5f brd ff:ff:ff:ff:ff:ff
6: br-f830aee4b13f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:af:60:4b:4e brd ff:ff:ff:ff:ff:ff
8: veth00f659d@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f830aee4b13f state UP mode DEFAULT group default 
    link/ether 0e:45:69:f8:34:57 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth05377ae@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f830aee4b13f state UP mode DEFAULT group default 
    link/ether aa:ae:fc:5c:dd:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
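
Besides matching the interface indexes by hand, the veth driver also reports its peer's index through ethtool, which is a convenient way to confirm the pairing (assuming ethtool is installed; the peer index below matches the @if7 suffix of veth00f659d):

[root@localhost home]# ethtool -S veth00f659d
NIC statistics:
     peer_ifindex: 7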

  Check centos0's routes: the default gateway is 172.1.1.1, and the interface holding that address is the bridge backing the my_br network:

[root@localhost home]# docker exec centos0 /bin/bash -c "ip route"
default via 172.1.1.1 dev eth0 
172.1.1.0/24 dev eth0 proto kernel scope link src 172.1.1.2
[root@localhost home]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8678329d58ab        bridge              bridge              local
e8476b504e33        host                host                local
f830aee4b13f        my_br               bridge              local
96a70c1a9516        none                null                local

[root@localhost home]# ip a|grep f830aee4b13f
6: br-f830aee4b13f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    inet 172.1.1.1/24 scope global br-f830aee4b13f

The host routes relevant to centos0 are shown below; the 172.1.1.0/24 entry is the inbound route toward centos0, and the default route is the outbound route:

[root@localhost home]# ip route
default via 192.168.80.2 dev ens33 proto dhcp metric 100 
172.1.1.0/24 dev br-f830aee4b13f proto kernel scope link src 172.1.1.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.80.0/24 dev ens33 proto kernel scope link src 192.168.80.128 metric 100 

Meanwhile, the iptables nat table contains the following entry for 172.1.1.0/24: packets whose source address is 172.1.1.0/24 and whose output interface is not the bridge are MASQUERADEd, i.e. packets coming from the containers are SNATed to the host NIC's address:

Chain POSTROUTING (policy ACCEPT 332 packets, 21915 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      !br-f830aee4b13f  172.1.1.0/24         0.0.0.0/0
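
This rule is installed by Docker when the network is created; it is equivalent to something along these lines (the bridge name is taken from the interface listing above):

iptables -t nat -A POSTROUTING -s 172.1.1.0/24 ! -o br-f830aee4b13f -j MASQUERADE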

To summarize, a ping from centos0 to the external gateway (192.168.80.2) is handled as follows: the ICMP packet's destination is 192.168.80.2, for which there is no specific route, so it takes the default route, leaves the container through eth0, and arrives at the default gateway, the my_br bridge (172.1.1.1); the host then forwards it out of ens33 according to its own routing table, while MASQUERADE SNATs the source address to ens33's address. This is the Docker bridge forwarding flow.
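
Note that this forwarding path also relies on IP forwarding being enabled on the host, which Docker turns on by default; it can be verified with:

[root@localhost home]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1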

Reworking the custom bridge

  The networking scheme in the first part is flawed because it sacrifices one of the host's NICs. Based on the analysis of Docker's networking above, rework it as follows:

  • First create a veth pair and attach one end to br0
ip link add veth0 type veth peer name veth1
ip link set dev veth0 up
ip link set dev veth1 up
ip link set dev veth1 master br0
  • On the host, add a route to the 1.1.1.0/24 network, where 1.1.1.3 is br0's address (the step assigning this address is not shown; see the note after this list)
[root@localhost home]# ip route add 1.1.1.0/24 via 1.1.1.3
  • Inside ns0, add a default gateway route (note that it must be a gateway route, not just a device route)
ip netns exec ns0 ip route add default via 1.1.1.3 dev veth0_ns0
  • On the host, add SNAT for the 1.1.1.0/24 subnet
iptables -t nat -A POSTROUTING -s 1.1.1.0/24 ! -o br0 -j MASQUERADE
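
Note: the list above treats 1.1.1.3 as br0's address, but the step that assigns it is not shown; presumably br0 was given this address beforehand, for example:

ip addr add 1.1.1.3/24 dev br0

As with the Docker bridge, net.ipv4.ip_forward must also be enabled on the host for this forwarding to work.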

This builds a network that mimics the Docker bridge, and ns0 can now ping the external gateway:

[root@localhost home]# ip netns exec ns0 ip route
default via 1.1.1.3 dev veth0_ns0 
1.1.1.0/24 dev veth0_ns0 proto kernel scope link src 1.1.1.1 

[root@localhost home]# ip netns exec ns0 ping 192.168.80.2
PING 192.168.80.2 (192.168.80.2) 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=127 time=0.439 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=127 time=0.533 ms

With MASQUERADE in place, how does iptables tell the return packets of different pings apart? The ICMP replies destined for the host and for ns0 carry the same addresses and protocol, and ICMP has no port numbers. The answer is that ip_conntrack distinguishes different ping processes by the id field in the ICMP header (see "ICMP connections"); ip_conntrack is the foundation on which NAT is built. (Newer kernels use nf_conntrack.)
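
As an illustration, the ICMP conntrack entry for a masqueraded ping from ns0 looks roughly like this (addresses taken from this setup; the id value is made up); the id= field is what tells different ping processes apart:

[root@localhost home]# grep icmp /proc/net/nf_conntrack
ipv4     2 icmp     1 29 src=1.1.1.1 dst=192.168.80.2 type=8 code=0 id=1328 src=192.168.80.2 dst=192.168.80.128 type=0 code=0 id=1328 mark=0 zone=0 use=2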

TIPS:

  • You can use iptstate to view the connection state recorded in /proc/net/nf_conntrack.

References:

https://blog.csdn.net/sld880311/article/details/77840343

https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge

http://success.docker.com/article/networking#dockerbridgenetworkdriver

http://vinllen.com/linux-bridgeshe-ji-yu-shi-xian/
