Sunday, May 26, 2013

CCIE DC: Multicast Part 5, Multicast and its relation to OTV

Hi Guys

So in my four part Multicast Series:
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-1.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-2.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-3.html
http://www.ccierants.com/2013/02/ccie-dc-multicast-part-4.html


We went in depth on multicast: multicast with SSM, multicast with BiDir, and how our RPs and so on work for us. I promised in Part 1 that I would link it all back to OTV, and I am happy to say I have now done so. Let's take a look!

The thing you must remember about OTV is that eventually you might have multiple data centres connected with OTV. So when Cisco designed OTV, they needed a way to make it easy for OTV DCs to discover each other (the OTV control-group), and to ensure that when multicast is distributed between OTV DCs, it can take advantage of an underlying multicast network (the OTV data-group).
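For this model to work, the transport between the DCs must itself be multicast-enabled. A minimal sketch of what a transport-side NX-OS router might run (the RP address and interface below are purely illustrative, not from this lab) could look like this:

 feature pim
 ! The data-groups live in the SSM range, so no RP is needed for them
 ip pim ssm range 232.0.0.0/8
 ! The control-group is an ASM group, so it needs an RP reachable end to end
 ip pim rp-address 169.254.0.254 group-list 224.0.0.0/4
 !
 interface Ethernet1/10
   ip pim sparse-mode

The key point is the split: the control-group relies on ordinary ASM (so RP placement matters), while the data-groups ride SSM.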

Let's look at a typical OTV interface configuration:

 interface Overlay1
  otv join-interface Ethernet1/9
  otv control-group 224.1.1.2
  otv data-group 232.0.0.0/8
  otv extend-vlan 10
  no shutdown

!

So first, let's look at the control-group. The control-group command above means that when OTV is trying to establish an adjacency with its neighbor, it sends its hellos to this mcast group to see if anyone wants to establish a peer relationship with it.

So if you can't establish a peer adjacency over a network you're sure is enabled for multicast, you need to look at this control-group multicast address and check for reachability end to end, using show ip mroute and your knowledge from Parts 1 to 4 of my mcast tutorial :).
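A quick hedged checklist of commands I find useful here (exact output varies by platform and NX-OS version):

 ! Is the OTV adjacency actually up?
 show otv adjacency
 ! Is the control-group being forwarded? Check this on every transport hop
 show ip mroute 224.1.1.2
 ! Is PIM up on the join interface and the transport links?
 show ip pim interface brief

If the (*, 224.1.1.2/32) entry is missing on an intermediate hop, your problem is plain multicast routing, not OTV.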

Here is an example of this in the show ip mroute table:


N7K3# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 224.1.1.2/32), uptime: 00:22:02, otv ip
  Incoming interface: Ethernet1/9, RPF nbr: 169.254.0.71
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:22:02, otv


You can see from the above that this little guy has received incoming traffic for this group via the Eth1/9 interface, and then passes it on to the overlay interface so that the IS-IS adjacency can establish.

This is all fairly straightforward at this point. Let's check out what the data-group is all about.

The astute reader may have noticed the IP addressing for the data-group:

interface Overlay1
  otv data-group 232.0.0.0/8

!

This data-group range sits inside 232.0.0.0/8, which, as we know from the previous tutorials, is the well-known Source Specific Multicast (SSM) range.

This is why, when configuring OTV, we are obliged to put ip igmp version 3 on our join interface:

interface Ethernet1/9
  ip address 169.254.0.71/24
  ip igmp version 3
  no shutdown

!

If we did not have this, source-specific multicast would not work correctly.
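You can sanity-check this on the join interface; something like the following (output trimmed and illustrative, not captured from this lab) should report version 3:

 N7K3# show ip igmp interface ethernet 1/9
 ...
   IGMP version: 3, host version: 3
 ...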

Now that we know how this part works, let's generate some mcast traffic.



Side note: be very careful when testing with iperf. Be sure to run an ip igmp snooping events debug so you can confirm that the multicast receiver (the server side) is sending the appropriate IGMP join messages and that the mroute table is being updated:

N7K3# 2013 May 26 09:04:42.248659 igmp: SN: 10 Noquerier timer expired, remove all the groups in this vlan.
2013 May 26 09:04:42.249253 igmp: SN: 10 Removing group entry (*, 239.1.1.2) which has no oifs


So with iperf we simply set up our source and receiver, and then we can see the mcast traffic flow:

bin/iperf.exe -s -u -P 0 -i 1 -p 5001 -w 41.0K -B 239.1.1.2 -l 1000.0B -f k
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 10.0.0.1
Receiving 1000 byte datagrams
UDP buffer size: 41.0 KByte
------------------------------------------------------------
[148] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 53766



Let's look at what that has done to our ip mroute table:



N7K3# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 224.1.1.2/32), uptime: 00:27:48, otv ip
  Incoming interface: Ethernet1/9, RPF nbr: 169.254.0.71
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:27:48, otv

(169.254.0.72/32, 232.0.0.0/32), uptime: 00:00:01, otv ip
  Incoming interface: Ethernet1/9, RPF nbr: 169.254.0.72
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:00:01, otv



What the heck? We sent mcast traffic to 239.1.1.2, but the mcast routing table shows it under a 232.0.0.0 mcast address! What is going on?

Here it is: OTV encapsulates multicast traffic... inside multicast. It delivers it using the SSM range (the data-group range) you specified, then decapsulates it at the other end and delivers it to your host:


N7K3# show otv mroute

OTV Multicast Routing Table For Overlay1

(10, *, 239.1.1.2), metric: 0, uptime: 00:00:07, igmp
  Outgoing interface list: (count: 1)
    Eth2/11, uptime: 00:00:07, igmp
(10, 10.0.0.2, 239.1.1.2), metric: 0, uptime: 00:01:47, overlay(s)
  Outgoing interface list: (count: 0)



You can see from the above that once OTV receives the mcast traffic (Overlay1 is listed as an outgoing interface for the 232.0.0.0/32 route in our show ip mroute), it hands it to its own internal OTV mroute table, which then actually sends the mcast traffic out the correct interface (in our case Eth2/11).
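You can also inspect the mapping from the site-facing group to the SSM delivery group directly: recent NX-OS releases have a show otv data-group command. The output below is a hedged illustration of the idea only, not captured from this lab, and the exact columns vary by version:

 N7K3# show otv data-group
 Remote Active Sources for Overlay1

  VLAN  Active-Source  Active-Group  Delivery-Source  Delivery-Group
  ----  -------------  ------------  ---------------  --------------
   10   10.0.0.2       239.1.1.2     169.254.0.72     232.0.0.0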

What a hack! It works, and it's actually quite genius.

Why do it this way? Because OTV wants to use Source Specific Multicast for distributing the traffic over the Layer 3 DCI interconnect. Why specifically it has to be SSM I am not sure, but let's see what happens if you try to use a data-group outside the SSM range:

N7K4(config)# int overlay1
N7K4(config-if-overlay)#   otv data-group 231.0.0.0/24
N7K4(config-if-overlay)# exit
N7K4# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 224.1.1.2/32), uptime: 00:30:35, ip otv
  Incoming interface: Ethernet1/9, RPF nbr: 169.254.0.72
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:00:20, otv

(169.254.0.72/32, 231.0.0.0/32), uptime: 00:00:05, igmp ip
  Incoming interface: Ethernet1/9, RPF nbr: 169.254.0.72
  Outgoing interface list: (count: 1)
    Ethernet1/9, uptime: 00:00:05, igmp, (RPF)

N7K4# show otv mroute

OTV Multicast Routing Table For Overlay1

(10, *, 239.1.1.2), metric: 0, uptime: 00:00:14, overlay(r)
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:00:14, isis_otv-default

(10, 10.0.0.2, 239.1.1.2), metric: 0, uptime: 00:00:09, site
  Outgoing interface list: (count: 1)
    Overlay1, uptime: 00:00:09, otv



It still works! Now let's try removing the ip igmp version 3 command:


interface Ethernet1/9
  ip address 169.254.0.72/24
  no shutdown



(Showing no ip igmp version 3)

Multicast traffic still flows just fine; note I even tried a new group:


bin/iperf.exe -s -u -P 0 -i 1 -p 5001 -w 41.0K -B 239.1.1.3 -l 1000.0B -f k
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 10.0.0.1
Receiving 1000 byte datagrams
UDP buffer size: 41.0 KByte
------------------------------------------------------------
[148] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 53770
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[148]  0.0- 1.0 sec   121 KBytes   992 Kbits/sec  0.024 ms 1600613993/  124 (1.3e+009%)
[148]  1.0- 2.0 sec   121 KBytes   992 Kbits/sec  0.007 ms    0/  124 (0%)
[148]  2.0- 3.0 sec   120 KBytes   984 Kbits/sec  0.009 ms    0/  123 (0%)
[148]  3.0- 4.0 sec   121 KBytes   992 Kbits/sec  0.001 ms    0/  124 (0%)
[148]  4.0- 5.0 sec   121 KBytes   992 Kbits/sec  0.001 ms    0/  124 (0%)
[148]  5.0- 6.0 sec   121 KBytes   992 Kbits/sec  0.001 ms    0/  124 (0%)
[148]  6.0- 7.0 sec   120 KBytes   984 Kbits/sec  0.002 ms    0/  123 (0%)
[148]  7.0- 8.0 sec   121 KBytes   992 Kbits/sec  0.001 ms    0/  124 (0%)
[148]  8.0- 9.0 sec   121 KBytes   992 Kbits/sec  0.016 ms    0/  124 (0%)
[148]  9.0-10.0 sec   124 KBytes  1016 Kbits/sec  0.001 ms    0/  127 (0%)
[148]  0.0-10.1 sec  1214 KBytes   988 Kbits/sec  0.001 ms    0/ 1243 (0%)



I can only assume Cisco want you to use SSM because it has benefits for larger topologies; I can't think of any other reason. But I would obviously recommend sticking to 232.0.0.0/8 for the data-group and keeping ip igmp version 3 on your join interface, as per Cisco's recommendations.


What if our provider or DCI link doesn't support multicast? Can we still establish OTV? Since NX-OS 5.2(1), I believe, you can use an "adjacency server", which acts as a central point through which the OTV edge devices learn about each other over plain unicast. The configuration is shown below.


Non-adjacency server side:

interface Overlay1
  otv isis authentication-type md5
  otv isis authentication key-chain OTV
  otv join-interface Ethernet1/9
  otv extend-vlan 10
  otv use-adjacency-server 169.254.0.72 unicast-only
  no shutdown


Adjacency side:



interface Overlay1
  otv isis authentication-type md5
  otv isis authentication key-chain OTV
  otv join-interface Ethernet1/9
  otv extend-vlan 10
  otv adjacency-server unicast-only
  no shutdown
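Side note: a single adjacency server is a single point of failure for discovery. For redundancy you can point the clients at a secondary as well; the secondary address below is purely illustrative, and that device would itself carry the otv adjacency-server unicast-only config:

 interface Overlay1
   otv use-adjacency-server 169.254.0.72 169.254.0.73 unicast-only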


Multicast traffic still works over this; it is just encapsulated inside unicast over the overlay:


N7K3# show otv mroute

OTV Multicast Routing Table For Overlay1


(10, *, 239.1.1.4), metric: 0, uptime: 00:00:08, igmp
  Outgoing interface list: (count: 1)
    Eth2/11, uptime: 00:00:08, igmp
N7K3# show ip mroute
IP Multicast Routing Table for VRF "default"



As you can see, there are no entries in the ip multicast routing table when using the adjacency server, because all multicast traffic stays inside the OTV tunnel. OTV itself still has an mroute table, which takes that unicast-encapsulated multicast and spits it out the IGMP-joined interfaces.


I hope this helps guys!

5 comments:

  1. Wow this looks pretty hardcore, I am struggling with Multicast on R&S at the moment, but DC is going to be my next challenge, thanks for posting the info it really does help!

    Roger
    UK

  2. I agree, it's hardcore stuff.

  3. Ahhh, I always thought SSM was a requirement for OTV! I implemented it for a customer, but never had a chance to really test non SSM ranges for the data-group.
    Excellent post Peter. And also the last one I read before I attempt the DC lab... :O You indirectly helped me a lot consolidating a lot of concepts (that's why you'll find plenty of annoying comments from me in many of your CCIE DC posts :P). So thanks once again and wish me luck!
