Reading the Source Code of ODL MAC Address Learning (Part 2)

1 Introduction

The previous article (Reading the Source Code of ODL MAC Address Learning, Part 1) analyzed the ARP-request portion of MAC address learning. This article picks up where that one left off and walks through the source code for the ARP response and for flow-entry generation. The scenario is unchanged from Part 1: PC A and PC B are attached to a single OpenFlow switch managed by the ODL controller, with PC B on port 2.

2 ARP Response Source Code Analysis

2.1 ARP Response (Packet-In)

After PC B receives the ARP request forwarded by the OpenFlow switch, it sends an ARP response to PC A. When the switch receives the ARP response on port 2, it sends a Packet-In message to the OpenFlow controller according to its flow entry (see section 2.2 of Part 1). On receiving the Packet-In, the controller parses the ARP response and learns the MAC address.

Reception and decoding of the Packet-In packet were already covered in section 2.2 of Part 1, so we will not repeat them here and go straight to the MAC learning itself. This code lives mainly in the addresstracker directory of the l2switch project. We start with the YANG file.

address-tracker.yang:
module address-tracker {
  ...... 
  grouping address-node-connector {
    list addresses {
      key id;
      leaf id {
        description "A 64-bit key for this observation. This is opaque and should not be interpreted.";
        type uint64;
      }
      leaf mac {
        type yang:mac-address;
        description "MAC address";
      }
      leaf ip {
        type inet:ip-address;
        description "IPv4 or IPv6 address";
      }
      leaf vlan {
        type ethernet:vlan-id;
        description "VLAN id";
      }
      leaf first-seen {
        type int64;
        description "Timestamp (number of ms since January 1, 1970, 00:00:00 GMT) of observing this address for the first time";
      }
      leaf last-seen {
        type int64;
        description "The most recent timestamp (number of ms since January 1, 1970, 00:00:00 GMT) of observing this address";
      }
    }
  }
  augment "/inv:nodes/inv:node/inv:node-connector" {
    ext:augment-identifier "address-capable-node-connector";
    uses address-node-connector;
  }
}

The important part of this YANG file is the augmentation of node-connector into address-capable-node-connector, which is used to store the learned address information in the datastore. Now for the code itself. There are four Java classes here; AddressObserverUsingArp.java, AddressObserverUsingIpv4.java and AddressObserverUsingIpv6.java are nearly identical, differing only in which packet type they listen for, so we analyze just one of them, AddressObserverUsingArp.java, which listens for ARP packets:

public class AddressObserverUsingArp implements ArpPacketListener {
  public AddressObserverUsingArp(org.opendaylight.l2switch.addresstracker.addressobserver.AddressObservationWriter addressObservationWriter) {
    this.addressObservationWriter = addressObservationWriter;
  }
  @Override
  public void onArpPacketReceived(ArpPacketReceived packetReceived) {
    if(packetReceived == null || packetReceived.getPacketChain() == null) {
      return;
    }
    RawPacket rawPacket = null;    // raw packet
    EthernetPacket ethernetPacket = null;  // Ethernet frame
    ArpPacket arpPacket = null;    // ARP packet
    // extract the raw packet, the Ethernet frame and the ARP packet from the chain
    for(PacketChain packetChain : packetReceived.getPacketChain()) {
      if(packetChain.getPacket() instanceof RawPacket) {
        rawPacket = (RawPacket) packetChain.getPacket();
      } else if(packetChain.getPacket() instanceof EthernetPacket) {
        ethernetPacket = (EthernetPacket) packetChain.getPacket();
      } else if(packetChain.getPacket() instanceof ArpPacket) {
        arpPacket = (ArpPacket) packetChain.getPacket();
      }
    }
    if(rawPacket == null || ethernetPacket == null || arpPacket == null) {
      return;
    }
    // call the addAddress() method of AddressObservationWriter
    addressObservationWriter.addAddress(ethernetPacket.getSourceMac(),
        new IpAddress(arpPacket.getSourceProtocolAddress().toCharArray()),
        rawPacket.getIngress());
  }
}

This class listens for ARP packets, extracts the raw packet, the Ethernet frame and the ARP packet from the packet chain, and finally calls the addAddress() method of AddressObservationWriter, passing in the source MAC address, the source IP address and the packet's ingress port.
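The observer's job boils down to walking the decoded chain, picking out one object of each layer, and handing the (MAC, IP, ingress port) triple to the writer. The stand-alone sketch below illustrates that dispatch pattern; the nested types here are simplified stand-ins, not the real ODL packet classes:

```java
import java.util.List;

public class ChainDispatch {
    // Simplified stand-ins for the decoded packet types in the chain.
    interface Packet {}
    record Raw(String ingressPort) implements Packet {}
    record Ethernet(String sourceMac) implements Packet {}
    record Arp(String sourceIp) implements Packet {}

    /** Walks the chain and returns "mac/ip/port", or null if any layer is missing. */
    static String learnInput(List<Packet> chain) {
        Raw raw = null;
        Ethernet eth = null;
        Arp arp = null;
        for (Packet p : chain) {
            if (p instanceof Raw r) raw = r;
            else if (p instanceof Ethernet e) eth = e;
            else if (p instanceof Arp a) arp = a;
        }
        if (raw == null || eth == null || arp == null) return null;
        return eth.sourceMac() + "/" + arp.sourceIp() + "/" + raw.ingressPort();
    }

    public static void main(String[] args) {
        // prints 00:00:00:00:00:02/10.0.0.2/openflow:1:2
        System.out.println(learnInput(List.of(
            new Raw("openflow:1:2"),
            new Ethernet("00:00:00:00:00:02"),
            new Arp("10.0.0.2"))));
    }
}
```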

Next, AddressObservationWriter.java:

public class AddressObservationWriter {
  ......
  public void setTimestampUpdateInterval(long timestampUpdateInterval) {
    this.timestampUpdateInterval = timestampUpdateInterval;  // minimum interval between last-seen updates
  }
  // add an address to the datastore
  public void addAddress(MacAddress macAddress, IpAddress ipAddress, NodeConnectorRef nodeConnectorRef) {
    if(macAddress == null || ipAddress == null || nodeConnectorRef == null) {
      return;
    }
    // per-node-connector lock: a write must wait until the previous write for the same port has finished
    NodeConnectorLock nodeConnectorLock;
    synchronized(this) {
      nodeConnectorLock = lockMap.get(nodeConnectorRef);
      if(nodeConnectorLock == null) {
        nodeConnectorLock = new NodeConnectorLock();
        lockMap.put(nodeConnectorRef, nodeConnectorLock);
      }
    }
    synchronized(nodeConnectorLock) {
      // Ensure previous transaction finished writing to the db
      CheckedFuture<Void, TransactionCommitFailedException> future = futureMap.get(nodeConnectorLock);
      if (future != null) {
        try {
          future.get();
        }
        catch (InterruptedException|ExecutionException e) {
          _logger.error("Exception while waiting for previous transaction to finish", e);
        }
      }
      // initialize the addressBuilder
      long now = new Date().getTime();
      final AddressCapableNodeConnectorBuilder acncBuilder = new AddressCapableNodeConnectorBuilder();
      final AddressesBuilder addressBuilder = new AddressesBuilder()
          .setIp(ipAddress)
          .setMac(macAddress)
          .setFirstSeen(now)
          .setLastSeen(now);
      List<Addresses> addresses = null;
      // read the existing addresses from the datastore
      ReadOnlyTransaction readTransaction = dataService.newReadOnlyTransaction();
      NodeConnector nc = null;
      try {
        Optional<NodeConnector> dataObjectOptional = readTransaction.read(LogicalDatastoreType.OPERATIONAL, (InstanceIdentifier<NodeConnector>) nodeConnectorRef.getValue()).get();
        if(dataObjectOptional.isPresent())
          nc = (NodeConnector) dataObjectOptional.get();
      } catch(Exception e) {
        _logger.error("Error reading node connector {}", nodeConnectorRef.getValue());
        readTransaction.close();
        throw new RuntimeException("Error reading from operational store, node connector : " + nodeConnectorRef, e);
      }
      readTransaction.close();
      if(nc == null) {
        return;
      }
      AddressCapableNodeConnector acnc = (AddressCapableNodeConnector) nc.getAugmentation(AddressCapableNodeConnector.class);
      // case: addresses already exist on this node connector
      if(acnc != null && acnc.getAddresses() != null) {
        // look for this MAC-IP pair among the existing addresses and refresh its last-seen timestamp
        addresses = acnc.getAddresses();
        for(int i = 0; i < addresses.size(); i++) {
          if(addresses.get(i).getIp().equals(ipAddress) && addresses.get(i).getMac().equals(macAddress)) {
            if((now - addresses.get(i).getLastSeen()) > timestampUpdateInterval) {
              addressBuilder.setFirstSeen(addresses.get(i).getFirstSeen())
                  .setKey(addresses.get(i).getKey());
              addresses.remove(i);
              break;
            } else {
              return;
            }
          }
        }
      }
      // case: no addresses exist yet, so a new one must be written
      else {
        addresses = new ArrayList<>();
      }
      if(addressBuilder.getKey() == null) {
        addressBuilder.setKey(new AddressesKey(BigInteger.valueOf(addressKey.getAndIncrement())));
      }
      addresses.add(addressBuilder.build());
      acncBuilder.setAddresses(addresses);
      // the write target IID is the AddressCapableNodeConnector augmentation
      InstanceIdentifier<AddressCapableNodeConnector> addressCapableNcInstanceId =
          ((InstanceIdentifier<NodeConnector>) nodeConnectorRef.getValue())
              .augmentation(AddressCapableNodeConnector.class);
      final WriteTransaction writeTransaction = dataService.newWriteOnlyTransaction();
      // write the addresses to the datastore: type operational, IID AddressCapableNodeConnector
      writeTransaction.merge(LogicalDatastoreType.OPERATIONAL, addressCapableNcInstanceId, acncBuilder.build());
      final CheckedFuture writeTxResultFuture = writeTransaction.submit();
      Futures.addCallback(writeTxResultFuture, new FutureCallback() {
        @Override
        public void onSuccess(Object o) {
          _logger.debug("AddressObservationWriter write successful for tx :{}", writeTransaction.getIdentifier());
        }
        @Override
        public void onFailure(Throwable throwable) {
          _logger.error("AddressObservationWriter write transaction {} failed", writeTransaction.getIdentifier(), throwable.getCause());
        }
      });
      futureMap.put(nodeConnectorLock, writeTxResultFuture);
    }
  }
}

What this class does is look up the existing addresses in the datastore for the given MAC and IP. If the MAC-IP pair is already present, it updates the entry's last-seen timestamp (the most recent observation time); if not, it writes a new address under the AddressCapableNodeConnector IID in the datastore. In this way the MAC address is learned: at this point the ODL controller knows PC B's MAC address.
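Stripped of the datastore transactions and per-port locking, the learning logic above reduces to the following sketch. It uses an in-memory map instead of the operational datastore; the class and field names are illustrative, not the real ODL types:

```java
import java.util.HashMap;
import java.util.Map;

public class AddressTable {
    record Entry(long firstSeen, long lastSeen) {}

    private final Map<String, Entry> table = new HashMap<>();  // key: "mac|ip"
    private final long timestampUpdateInterval;

    public AddressTable(long timestampUpdateInterval) {
        this.timestampUpdateInterval = timestampUpdateInterval;
    }

    /** Returns true when the datastore would be written (new entry or refreshed timestamp). */
    public boolean addAddress(String mac, String ip, long now) {
        String key = mac + "|" + ip;
        Entry old = table.get(key);
        if (old == null) {                      // unseen MAC-IP pair: create it
            table.put(key, new Entry(now, now));
            return true;
        }
        if (now - old.lastSeen() > timestampUpdateInterval) {
            // known pair: keep first-seen, refresh last-seen
            table.put(key, new Entry(old.firstSeen(), now));
            return true;
        }
        return false;                           // seen too recently: skip the write
    }

    public static void main(String[] args) {
        AddressTable t = new AddressTable(1000);
        System.out.println(t.addAddress("00:00:00:00:00:02", "10.0.0.2", 0));     // new pair
        System.out.println(t.addAddress("00:00:00:00:00:02", "10.0.0.2", 500));   // too recent
        System.out.println(t.addAddress("00:00:00:00:00:02", "10.0.0.2", 2000));  // refreshed
    }
}
```

The `timestampUpdateInterval` check mirrors why the real code returns early without writing: it throttles datastore writes for addresses that are observed continuously.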

2.2 ARP Response (Packet-Out)

After the MAC address has been learned, the ODL controller sends a Packet-Out message; the code that sends it is covered in section 2.3 of Part 1. Because the destination MAC address is now known, the Packet-Out action is set to output directly to the destination port, which in our scenario is port 2. This completes PC A's transmission to the destination IPv4 address 10.0.0.2.
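The Packet-Out decision described above comes down to: if the destination MAC has been learned, output to its port; otherwise flood. A minimal sketch of that decision (illustrative names, not the real ODL dispatcher):

```java
import java.util.HashMap;
import java.util.Map;

public class PacketOutDecision {
    static final String FLOOD = "FLOOD";
    private final Map<String, String> macToPort = new HashMap<>();

    /** Record which port a MAC address was observed on. */
    public void learn(String mac, String port) {
        macToPort.put(mac, port);
    }

    /** Port to output the Packet-Out on, or FLOOD while the destination is unknown. */
    public String outputPort(String destMac) {
        return macToPort.getOrDefault(destMac, FLOOD);
    }

    public static void main(String[] args) {
        PacketOutDecision d = new PacketOutDecision();
        System.out.println(d.outputPort("00:00:00:00:00:02"));  // FLOOD: not learned yet
        d.learn("00:00:00:00:00:02", "openflow:1:2");
        System.out.println(d.outputPort("00:00:00:00:00:02"));  // openflow:1:2
    }
}
```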

3 Adding Flow Entries

After the packet has been sent, the ODL controller also generates two flow entries and pushes them to the OpenFlow switch: one matches the packet's destination MAC address and outputs to the egress port; the other matches the packet's source MAC and outputs to the ingress port. This code lives mainly in the l2switch-main directory of the l2switch project; the class to look at is FlowWriterServiceImpl.java:

public class FlowWriterServiceImpl implements FlowWriterService {
  ......
  // add a MAC-to-MAC flow entry
  @Override
  public void addMacToMacFlow(MacAddress sourceMac, MacAddress destMac, NodeConnectorRef destNodeConnectorRef) {
    Preconditions.checkNotNull(destMac, "Destination mac address should not be null.");
    Preconditions.checkNotNull(destNodeConnectorRef, "Destination port should not be null.");
    // do not add a flow when source and destination MAC are identical
    if(sourceMac != null && destMac.equals(sourceMac)) {
      _logger.info("In addMacToMacFlow: No flows added. Source and Destination mac are same.");
      return;
    }
    // build the table key from the table ID
    TableKey flowTableKey = new TableKey((short) flowTableId);
    //build a flow path based on node connector to program flow
    InstanceIdentifier<Flow> flowPath = buildFlowPath(destNodeConnectorRef, flowTableKey);
    // build the body of the flow entry
    Flow flowBody = createMacToMacFlow(flowTableKey.getId(), flowPriority, sourceMac, destMac, destNodeConnectorRef);
    // commit the flow entry to the config datastore
    writeFlowToConfigData(flowPath, flowBody);
  }
  // add a symmetric pair of flow entries
  public void addBidirectionalMacToMacFlows(MacAddress sourceMac,
                                            NodeConnectorRef sourceNodeConnectorRef,
                                            MacAddress destMac,
                                            NodeConnectorRef destNodeConnectorRef) {
    Preconditions.checkNotNull(sourceMac, "Source mac address should not be null.");
    Preconditions.checkNotNull(sourceNodeConnectorRef, "Source port should not be null.");
    Preconditions.checkNotNull(destMac, "Destination mac address should not be null.");
    Preconditions.checkNotNull(destNodeConnectorRef, "Destination port should not be null.");
    if(sourceNodeConnectorRef.equals(destNodeConnectorRef)) {
      _logger.info("In addMacToMacFlowsUsingShortestPath: No flows added. Source and Destination ports are same.");
      return;
    }
    // on the source port: match src=destMac, dst=sourceMac (traffic back toward the source)
    addMacToMacFlow(destMac, sourceMac, sourceNodeConnectorRef);
    // on the destination port: match src=sourceMac, dst=destMac (traffic toward the destination)
    addMacToMacFlow(sourceMac, destMac, destNodeConnectorRef);
  }
  private InstanceIdentifier<Flow> buildFlowPath(NodeConnectorRef nodeConnectorRef, TableKey flowTableKey) {
    // generate the flow entry ID
    FlowId flowId = new FlowId(String.valueOf(flowIdInc.getAndIncrement()));
    FlowKey flowKey = new FlowKey(flowId);
    // build the flow IID
    return InstanceIdentifierUtils.generateFlowInstanceIdentifier(nodeConnectorRef, flowTableKey, flowKey);
  }
  // build one concrete MAC-to-MAC flow entry
  private Flow createMacToMacFlow(Short tableId, int priority,
                                  MacAddress sourceMac, MacAddress destMac, NodeConnectorRef destPort) {
    // create the flow builder instance
    FlowBuilder macToMacFlow = new FlowBuilder() //
        .setTableId(tableId) //
        .setFlowName("mac2mac");
    // use the builder's own hash code as the flow id
    macToMacFlow.setId(new FlowId(Long.toString(macToMacFlow.hashCode())));
    // build the match: the destination MAC
    EthernetMatchBuilder ethernetMatchBuilder = new EthernetMatchBuilder() //
        .setEthernetDestination(new EthernetDestinationBuilder() //
            .setAddress(destMac) //
            .build());
    // also match on the source MAC if one was given
    if(sourceMac != null) {
      ethernetMatchBuilder.setEthernetSource(new EthernetSourceBuilder()
          .setAddress(sourceMac)
          .build());
    }
    EthernetMatch ethernetMatch = ethernetMatchBuilder.build();
    Match match = new MatchBuilder()
        .setEthernetMatch(ethernetMatch)
        .build();
    // get the ID of the destination port
    Uri destPortUri = destPort.getValue().firstKeyOf(NodeConnector.class, NodeConnectorKey.class).getId();
    // define the output action (despite its name, this action outputs to the destination port, not the controller)
    Action outputToControllerAction = new ActionBuilder() //
        .setOrder(0)
        .setAction(new OutputActionCaseBuilder() //
            .setOutputAction(new OutputActionBuilder() //
                .setMaxLength(0xffff) //
                .setOutputNodeConnector(destPortUri) // output to the destination port
                .build()) //
            .build()) //
        .build();
    // wrap the action in an apply-actions
    ApplyActions applyActions = new ApplyActionsBuilder().setAction(ImmutableList.of(outputToControllerAction))
        .build();
    // add the apply-actions to an instruction
    Instruction applyActionsInstruction = new InstructionBuilder() //
        .setOrder(0)
        .setInstruction(new ApplyActionsCaseBuilder()//
            .setApplyActions(applyActions) //
            .build()) //
        .build();
    // assemble the flow entry
    macToMacFlow
        .setMatch(match) //
        .setInstructions(new InstructionsBuilder() //
            .setInstruction(ImmutableList.of(applyActionsInstruction)) //
            .build()) //
        .setPriority(priority) //
        .setBufferId(OFConstants.OFP_NO_BUFFER) //
        .setHardTimeout(flowHardTimeout) //
        .setIdleTimeout(flowIdleTimeout) //
        .setCookie(new FlowCookie(BigInteger.valueOf(flowCookieInc.getAndIncrement())))
        .setFlags(new FlowModFlags(false, false, false, false, false));
    return macToMacFlow.build();
  }
  // commit the flow entry to the config datastore
  private Future<RpcResult<AddFlowOutput>> writeFlowToConfigData(InstanceIdentifier<Flow> flowPath,
                                                                 Flow flow) {
    final InstanceIdentifier<Table> tableInstanceId = flowPath.<Table>firstIdentifierOf(Table.class);
    final InstanceIdentifier<Node> nodeInstanceId = flowPath.<Node>firstIdentifierOf(Node.class);
    final AddFlowInputBuilder builder = new AddFlowInputBuilder(flow);
    builder.setNode(new NodeRef(nodeInstanceId));
    builder.setFlowRef(new FlowRef(flowPath));
    builder.setFlowTable(new FlowTableRef(tableInstanceId));
    builder.setTransactionUri(new Uri(flow.getId().getValue()));
    return salFlowService.addFlow(builder.build());
  }
}

The code above creates the two flow entries described earlier: one matching the packet's destination MAC with the egress port as output, and one matching its source MAC with the ingress port as output.
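addBidirectionalMacToMacFlows installs that symmetric pair with two calls to addMacToMacFlow. A datastore-free sketch of the entries it produces, using a simple value object in place of the real FlowBuilder output (names are illustrative):

```java
import java.util.List;

public class BidirectionalFlows {
    record Flow(String matchSrcMac, String matchDstMac, String outputPort) {}

    /**
     * One flow per direction: each entry matches the MAC pair in one
     * orientation and outputs toward the side that receives the traffic.
     */
    static List<Flow> macToMacPair(String srcMac, String srcPort,
                                   String dstMac, String dstPort) {
        if (srcPort.equals(dstPort)) return List.of();  // same port: nothing to install
        return List.of(
            new Flow(dstMac, srcMac, srcPort),   // reply direction, toward the source
            new Flow(srcMac, dstMac, dstPort));  // forward direction, toward the destination
    }

    public static void main(String[] args) {
        // prints the two symmetric entries for PC A (port 1) and PC B (port 2)
        macToMacPair("00:00:00:00:00:01", "openflow:1:1",
                     "00:00:00:00:00:02", "openflow:1:2")
            .forEach(System.out::println);
    }
}
```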

4 Summary

As these two articles have shown, MAC address learning can be implemented by monitoring ARP requests and ARP responses. The result is a self-learning bridge that can be used efficiently in a network.

Originally published on the WeChat public account SDNLAB.

Original publication date: 2016-08-16
