<div>Hi Everton,</div> <div>&nbsp;</div> <div>Thanks for the bug fix. We're currently running the latest code from the git repository. I notice that each of the nodes recognizes the other one as a neighbor and the "rfail" counter for "show ip pim hello" is 0, which is much better behavior than before.</div> <div>&nbsp;</div> <div>However, we're still unable to pass multicast traffic. The command "show ip igmp sources" on each node returns nothing. The command "show ip igmp groups" on node 2 lists "239.255.255.250" as a group, which is good. However, the same command on node 1 returns nothing.</div> <div>&nbsp;</div> <div>Is there some configuration missing here?</div> <div>&nbsp;</div> <div>The network setup and the configuration files are the same as stated at the top of the thread. Any hints or help will be highly appreciated.</div> <div>&nbsp;</div> <div>Regards,</div> <div>Pravin<br><br></div> <div class="gmail_quote">On Mon, Nov 23, 2009 at 8:28 AM, Everton Marques <span dir="ltr"><<a href="mailto:***@gmail.com">***@gmail.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Hi Yoda,<br><br>Just to let you know, I think you spotted a byte-ordering bug in qpimd<br>while converting 32-bit values from host to network.<br>
<br>I suppose you are running on a 64-bit cpu?<br><br>I think it is fixed by now in the git repository, but I was unable to spare<br>time for testing it. I hope to be able to test it properly by the next week.<br><br>Cheers,<br>
Everton<br><br><br>On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques<br> <div> <div></div> <div class="h5"><<a href="mailto:***@gmail.com">***@gmail.com</a>> wrote:<br>> Yoda,<br>><br>> I am looking at this.<br>><br>> Thanks a lot,<br>> Everton<br>><br>> On Tue, Nov 17, 2009 at 5:23 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>> wrote:<br>
>> Hi Everton,<br>>><br>>> Seems like the PIM packet options length exceeds what is permitted according<br>>> to the code and the error log. The pimd.log is full of the following<br>>> messages:<br>
>><br>>><br>>> ***@OpenWrt:~# tail -f /usr/local/logs/pimd.log<br>>><br>>> 2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV<br>>> type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0<br>
>><br>>> 2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV<br>>> type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0<br>>><br>>> 2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV<br>
>> type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0<br>>><br>>> 2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV<br>>> type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0<br>
>><br>>> Also - Below are the contents of the PIMV2 packet captured by wireshark.<br>>> Please note that I have stripped off the IP and other headers. Just the PIM<br>>> protocol packet and the offsets from wireshark:<br>
>><br>>> 0000 20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4<br>>> 0010 09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e<br>>> 0020 00 00<br>>><br>>> Thanks again for all your help.<br>
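[Editor's note] The option TLVs in that capture can be walked with a short, illustrative decoder. This is not qpimd's actual parser; the per-type maximum lengths below follow the standard PIM hello options (type 20 is Generation ID, whose value is a fixed 4 bytes, matching the `max=4` in the log):

```python
import struct

# PIM hello from the wireshark capture above: 4-byte PIM header
# (ver/type 0x20, reserved, checksum 0xf656) followed by option TLVs.
pkt = bytes.fromhex(
    "2000f656"          # PIM v2 header, message type 0 (Hello)
    "000100020069"      # type 1 Holdtime,        length 2, value 105s
    "0002000401f409c4"  # type 2 LAN Prune Delay, length 4
    "0013000400010000"  # type 19 DR Priority,    length 4, value 1
    "0014ab04324e0000"  # type 20 Generation ID -- length field reads 0xab04
)

# Fixed value lengths for the standard hello options.
MAX_LEN = {1: 2, 2: 4, 19: 4, 20: 4}

def walk_hello_tlvs(data):
    """Yield (type, declared_length, ok) for each option TLV."""
    off = 4  # skip the 4-byte PIM header
    while off + 4 <= len(data):
        t, length = struct.unpack_from("!HH", data, off)
        ok = length <= MAX_LEN.get(t, length) and off + 4 + length <= len(data)
        yield t, length, ok
        if not ok:
            break  # the length field cannot be trusted any further
        off += 4 + length

tlvs = list(walk_hello_tlvs(pkt))
# The last TLV claims type=20 with length 43780 (0xab04) -- the same
# numbers as the "long PIM hello TLV type=20 length=43780 > max=4" warning.

# For comparison, a correctly encoded Generation ID option packs the
# 32-bit value in explicit network byte order after a length of 4:
good = struct.pack("!HHI", 20, 4, 0xAB04324E)
assert good.hex() == "00140004ab04324e"
```

One plausible reading of the capture is that the Generation ID value bytes were shifted into the length field, which would be consistent with the 32-bit host-to-network conversion bug Everton suspects above.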
>><br>>> Regards,<br>>><br>>> Yoda<br>>><br>>><br>>> On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>> wrote:<br>>>><br>>>> Hi Yoda,<br>>>><br>>>> Thanks.<br>>>><br>>>> Yes, I am looking for the reason why the Rfail counter is increasing.<br>>>><br>>>> When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,<br>
>>> Rfail may increment silently. However, now you undefined<br>>>> PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should<br>>>> have a related log message.<br>>>><br>>>> Can you see if you locate any meaningful message in your pimd logs?<br>
>>><br>>>> If you send me your pimd logs I can try to find something as well.<br>>>><br>>>> Thanks a lot,<br>>>> Everton<br>>>><br>>>><br>>>><br>>>> On Sun, Nov 15, 2009 at 5:23 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> wrote:<br>>>> > Hi Everton,<br>>>> ><br>>>> > I followed the exact directions as you suggested and ran the rebuilt<br>>>> > quagga<br>>>> > on the nodes. However I don't see any difference in behavior. Is there<br>
>>> > anything in particular you're looking for after these changes ?<br>>>> > Below is the output from pimd running on both nodes:<br>>>> ><br>>>> ><br>>>> > Trying 192.168.1.1...<br>
>>> ><br>>>> > Connected to 192.168.1.1.<br>>>> ><br>>>> > Escape character is '^]'.<br>>>> ><br>>>> > Hello, this is Quagga 0.99.15 pimd 0.158<br>
>>> ><br>>>> > Copyright 1996-2005 Kunihiro Ishiguro, et al.<br>>>> ><br>>>> ><br>>>> ><br>>>> > User Access Verification<br>>>> ><br>>>> > Password:<br>
>>> ><br>>>> > node1> enable<br>>>> ><br>>>> > Password:<br>>>> ><br>>>> > node1# show ip pim neighbor<br>>>> ><br>>>> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id<br>
>>> > A=address_list<br>>>> ><br>>>> > T=can_disable_join_suppression<br>>>> ><br>>>> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv<br>>>> ><br>
>>> > node1# show ip pim hello<br>>>> ><br>>>> > Interface Address Period Timer StatStart Recv Rfail Send Sfail<br>>>> ><br>>>> > ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0<br>
>>> ><br>>>> > ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0<br>>>> ><br>>>> > node1# q<br>>>> ><br>>>> > Connection closed by foreign host.<br>
>>> ><br>>>> > Trying 192.168.3.10...<br>>>> ><br>>>> > Connected to 192.168.3.10.<br>>>> ><br>>>> > Escape character is '^]'.<br>>>> ><br>
>>> > Hello, this is Quagga 0.99.15 pimd 0.158<br>>>> ><br>>>> > Copyright 1996-2005 Kunihiro Ishiguro, et al.<br>>>> ><br>>>> ><br>>>> ><br>>>> > User Access Verification<br>
>>> ><br>>>> > Password:<br>>>> ><br>>>> > node2> enable<br>>>> ><br>>>> > Password:<br>>>> ><br>>>> > node2# show ip pim neighbor<br>
>>> ><br>>>> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id<br>>>> > A=address_list<br>>>> ><br>>>> > T=can_disable_join_suppression<br>
>>> ><br>>>> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv<br>>>> ><br>>>> > node2# show ip pim hello<br>>>> ><br>>>> > Interface Address Period Timer StatStart Recv Rfail Send Sfail<br>
>>> ><br>>>> > ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0<br>>>> ><br>>>> > ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0<br>>>> ><br>>>> > node2# q<br>
>>> ><br>>>> > Connection closed by foreign host.<br>>>> ><br>>>> ><br>>>> ><br>>>> > Thanks,<br>>>> ><br>>>> > Yoda<br>>>> ><br>
>>> ><br>>>> > On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques<br>>>> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> > wrote:<br>
>>> >><br>>>> >> Hi Yoda,<br>>>> >><br>>>> >> Based on the Rfail counter you spotted, I suspect the code under<br>>>> >> PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.<br>
>>> >><br>>>> >> Can you experiment with commenting out the following line:<br>>>> >><br>>>> >> PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY<br>>>> >><br>
>>> >> from pimd/Makefile.am ?<br>>>> >><br>>>> >> Then you will need to bootstrap autotools with:<br>>>> >><br>>>> >> autoreconf -i --force<br>>>> >><br>
>>> >> And finally to rebuild quagga.<br>>>> >><br>>>> >> I know this test may be cumbersome since it requires the whole<br>>>> >> autotools<br>>>> >> suite present on your system, but it could help to identify why pimd is<br>
>>> >> missing the hello packets.<br>>>> >><br>>>> >> Thanks,<br>>>> >> Everton<br>>>> >><br>>>> >><br>>>> >> On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> wrote:<br>>>> >> > Hi Everton,<br>>>> >> ><br>>>> >> > Below are the answers :<br>>>> >> ><br>>>> >> > 1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.<br>
>>> >> ><br>>>> >> > 2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2 are<br>>>> >> > receiving PIMv2 "Hello" packets however they are addressed to<br>
>>> >> > 224.0.0.13.<br>>>> >> ><br>>>> >> > 3) Don't see any error logs on nodes 1 and 2. Below is the output of<br>>>> >> > "show<br>>>> >> > ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"<br>
>>> >> > counters.<br>>>> >> ><br>>>> >> > node1# show ip pim hello<br>>>> >> > Interface Address Period Timer StatStart Recv Rfail Send<br>>>> >> > Sfail<br>
>>> >> > ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595<br>>>> >> > 0<br>>>> >> > ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595<br>
>>> >> > 0<br>>>> >> > node1#<br>>>> >> ><br>>>> >> > node2# show ip pim hello<br>>>> >> > Interface Address Period Timer StatStart Recv Rfail Send<br>
>>> >> > Sfail<br>>>> >> > ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593<br>>>> >> > 0<br>>>> >> > ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593<br>
>>> >> > 0<br>>>> >> > node2#<br>>>> >> ><br>>>> >> ><br>>>> >> > Thanks,<br>>>> >> ><br>>>> >> > On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques<br>
>>> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> > wrote:<br>>>> >> >><br>>>> >> >> Hi,<br>>>> >> >><br>
>>> >> >> I think the problem is node2 fails to bring up the node1 as pim<br>>>> >> >> neighbor<br>>>> >> >> on ra_sta0, since node1 is missing from node2 "show ip pim<br>
>>> >> >> neighbor".<br>>>> >> >><br>>>> >> >> Can you please double check the following?<br>>>> >> >><br>>>> >> >> 1) "ip pim ssm" is enabled on node1 ra_sta0 ?<br>
>>> >> >> 2) node2 is receiving pim hello packets from node1 on ra_sta0 ?<br>>>> >> >> 3) node2 pimd is logging any error/warning ? look for messages about<br>>>> >> >> packets from node1, especially hello packets.<br>
>>> >> >><br>>>> >> >> Thanks,<br>>>> >> >> Everton<br>>>> >> >><br>>>> >> >> On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> wrote:<br>>>> >> >> > Below is the output as requested<br>>>> >> >> ><br>>>> >> >> ><br>>>> >> >> > User Access Verification<br>
>>> >> >> ><br>>>> >> >> > Password:<br>>>> >> >> ><br>>>> >> >> > node2> enable<br>>>> >> >> ><br>
>>> >> >> > Password:<br>>>> >> >> ><br>>>> >> >> > node2# show ip igmp interface<br>>>> >> >> ><br>>>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu<br>
>>> >> >> > Prmsc<br>>>> >> >> > Del<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no<br>>>> >> >> ><br>
>>> >> >> > node2# show ip igmp interface group<br>>>> >> >> ><br>>>> >> >> > Interface Address Group Mode Timer Srcs V Uptime<br>>>> >> >> ><br>
>>> >> >> > ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17<br>>>> >> >> ><br>>>> >> >> > node2# show ip igmp group sources<br>
>>> >> >> ><br>>>> >> >> > Interface Address Group Source Timer Fwd Uptime<br>>>> >> >> ><br>>>> >> >> > node2# show ip igmp sources pim designated-router<br>
>>> >> >> ><br>>>> >> >> > NonPri: Number of neighbors missing DR Priority hello option<br>>>> >> >> ><br>>>> >> >> > Interface Address DR Uptime Elections NonPri<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0<br>>>> >> >> ><br>>>> >> >> > ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim designated-router hello<br>>>> >> >> ><br>>>> >> >> > Interface Address Period Timer StatStart Recv Rfail Send Sfail<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0<br>>>> >> >> ><br>>>> >> >> > ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim hello interface<br>>>> >> >> ><br>>>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu<br>
>>> >> >> > Prmsc<br>>>> >> >> > Del<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no<br>>>> >> >> ><br>
>>> >> >> > ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim interface local-membership<br>>>> >> >> ><br>
>>> >> >> > Interface Address Source Group Membership<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim local-membership join<br>>>> >> >> ><br>
>>> >> >> > Interface Address Source Group State Uptime Expire Prune<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim join neighbor<br>>>> >> >> ><br>
>>> >> >> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority<br>>>> >> >> > G=generation_id<br>>>> >> >> > A=address_list<br>>>> >> >> ><br>
>>> >> >> > T=can_disable_join_suppression<br>>>> >> >> ><br>>>> >> >> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv<br>>>> >> >> ><br>
>>> >> >> > node2# show ip pim neighbor rpf<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Delay: 10000 msecs<br>>>> >> >> ><br>
>>> >> >> > RPF Cache Refresh Timer: 0 msecs<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Requests: 6<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Events: 3<br>
>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Last: 00:34:24<br>>>> >> >> ><br>>>> >> >> > Source Group RpfIface RpfAddress RibNextHop Metric Pref<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim rpf upstream<br>>>> >> >> ><br>>>> >> >> > Source Group State Uptime JoinTimer RefCnt<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream-join-desired<br>>>> >> >> ><br>>>> >> >> > Interface Source Group LostAssert Joins PimInclude JoinDesired<br>
>>> >> >> > EvalJD<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream-join-desired rpf<br>>>> >> >> ><br>>>> >> >> > Source Group RpfIface RibNextHop RpfAddress<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream-rpf route 192.168.4.60<br>>>> >> >> ><br>>>> >> >> > Address NextHop Interface Metric Preference<br>
>>> >> >> ><br>>>> >> >> > 192.168.4.60 192.168.3.20 ra_sta0 1 0<br>>>> >> >> ><br>>>> >> >> > node2# q<br>>>> >> >> ><br>
>>> >> >> > On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques<br>>>> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> > wrote:<br>
>>> >> >> >><br>>>> >> >> >> Hi,<br>>>> >> >> >><br>>>> >> >> >> Can you send the following commands from node2 ?<br>
>>> >> >> >><br>>>> >> >> >> show ip igmp interface<br>>>> >> >> >> show ip igmp group<br>>>> >> >> >> show ip igmp sources<br>
>>> >> >> >> show ip pim designated-router<br>>>> >> >> >> show ip pim hello<br>>>> >> >> >> show ip pim interface<br>>>> >> >> >> show ip pim local-membership<br>
>>> >> >> >> show ip pim join<br>>>> >> >> >> show ip pim neighbor<br>>>> >> >> >> show ip pim rpf<br>>>> >> >> >> show ip pim upstream<br>
>>> >> >> >> show ip pim upstream-join-desired<br>>>> >> >> >> show ip pim upstream-rpf<br>>>> >> >> >> show ip route 192.168.4.60<br>>>> >> >> >><br>
>>> >> >> >> Thanks,<br>>>> >> >> >> Everton<br>>>> >> >> >><br>>>> >> >> >> On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek<br>
>>> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> >> wrote:<br>>>> >> >> >> > Hi Everton,<br>
>>> >> >> >> ><br>>>> >> >> >> > I added the entry "ip pim ssm" on ra_ap0 as you suggested. I<br>>>> >> >> >> > still<br>
>>> >> >> >> > don't<br>>>> >> >> >> > see join request coming into the source. Below is what the<br>>>> >> >> >> > configuration<br>
>>> >> >> >> > looks like on the individual nodes:<br>>>> >> >> >> ><br>>>> >> >> >> > Node 1 pimd.conf<br>>>> >> >> >> > -------------------------<br>
>>> >> >> >> > !<br>>>> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> > ! 2009/08/08 05:03:23<br>>>> >> >> >> > !<br>
>>> >> >> >> > hostname node1<br>>>> >> >> >> > password zebra<br>>>> >> >> >> > enable password zebra<br>>>> >> >> >> > log stdout<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface eth0<br>>>> >> >> >> > !<br>>>> >> >> >> > interface eth1<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface lo<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_ap0<br>
>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface ra_sta0<br>>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp query-interval 125<br>
>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> > !<br>>>> >> >> >> > !<br>>>> >> >> >> > ip multicast-routing<br>
>>> >> >> >> > !<br>>>> >> >> >> > line vty<br>>>> >> >> >> > !<br>>>> >> >> >> ><br>>>> >> >> >> ><br>
>>> >> >> >> > Node 2 pimd.conf<br>>>> >> >> >> > -------------------------<br>>>> >> >> >> > !<br>>>> >> >> >> > ! Zebra configuration saved from vty<br>
>>> >> >> >> > ! 2009/08/09 22:38:12<br>>>> >> >> >> > !<br>>>> >> >> >> > hostname node2<br>>>> >> >> >> > password zebra<br>
>>> >> >> >> > enable password zebra<br>>>> >> >> >> > log stdout<br>>>> >> >> >> > !<br>>>> >> >> >> > interface br-lan<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface eth0<br>>>> >> >> >> > !<br>>>> >> >> >> > interface eth1<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface lo<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_ap0<br>
>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp<br>>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_sta0<br>>>> >> >> >> > ip pim ssm<br>
>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> > !<br>>>> >> >> >> > !<br>
>>> >> >> >> > ip multicast-routing<br>>>> >> >> >> > !<br>>>> >> >> >> > line vty<br>>>> >> >> >> > !<br>
>>> >> >> >> > On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques<br>>>> >> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> > wrote:<br>>>> >> >> >> >><br>>>> >> >> >> >> Hi,<br>>>> >> >> >> >><br>>>> >> >> >> >> Yes, pimd should route the join request towards the source.<br>
>>> >> >> >> >><br>>>> >> >> >> >> However, you need to enable "ip pim ssm" on ra_ap0 as well.<br>>>> >> >> >> >> If you enable only "ip igmp" on an interface, pimd won't inject<br>
>>> >> >> >> >> IGMP-learnt membership into the pim protocol.<br>>>> >> >> >> >><br>>>> >> >> >> >> Cheers,<br>>>> >> >> >> >> Everton<br>
>>> >> >> >> >><br>>>> >> >> >> >> On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek<br>>>> >> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> >> wrote:<br>>>> >> >> >> >> > Hi Everton,<br>>>> >> >> >> >> ><br>>>> >> >> >> >> > Thanks for the suggestions. I made the changes to the config<br>
>>> >> >> >> >> > files<br>>>> >> >> >> >> > on<br>>>> >> >> >> >> > both<br>>>> >> >> >> >> > nodes as you suggested. Since it is not possible for me to<br>
>>> >> >> >> >> > force<br>>>> >> >> >> >> > the<br>>>> >> >> >> >> > client<br>>>> >> >> >> >> > to do a source specific join I added the following line at<br>
>>> >> >> >> >> > interface<br>>>> >> >> >> >> > ra_ap0<br>>>> >> >> >> >> > on node 2 where the client is attached:<br>
>>> >> >> >> >> ><br>>>> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> > ip igmp<br>>>> >> >> >> >> > ip igmp query-interval 125<br>
>>> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> >> ><br>
>>> >> >> >> >> > I do see the source-specific IGMPv3 join group<br>>>> >> >> >> >> > 239.255.255.250<br>>>> >> >> >> >> > for<br>
>>> >> >> >> >> > source<br>>>> >> >> >> >> > 192.168.4.60 which is addressed to 224.0.0.22 on the side of<br>>>> >> >> >> >> > node2.<br>
>>> >> >> >> >> > However<br>>>> >> >> >> >> > this join request never makes it to node 1 where the source<br>>>> >> >> >> >> > is<br>
>>> >> >> >> >> > located<br>>>> >> >> >> >> > on<br>>>> >> >> >> >> > ra_ap0.<br>>>> >> >> >> >> > Shouldn't the pimd route this join request to the node where<br>
>>> >> >> >> >> > the<br>>>> >> >> >> >> > source<br>>>> >> >> >> >> > is<br>>>> >> >> >> >> > attached ?<br>
>>> >> >> >> >> ><br>>>> >> >> >> >> > Thanks,<br>>>> >> >> >> >> ><br>>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>>>> >> >> >> >> ><br>>>> >> >> >> >> > On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques<br>>>> >> >> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> >> > wrote:<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hi,<br>>>> >> >> >> >> >><br>
>>> >> >> >> >> >> You did not mention whether you got a source-specific<br>>>> >> >> >> >> >> IGMPv3<br>>>> >> >> >> >> >> join<br>
>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> the<br>>>> >> >> >> >> >> channel (S,G)=(192.168.4.60,239.255.255.250). Please notice<br>
>>> >> >> >> >> >> qpimd<br>>>> >> >> >> >> >> is<br>>>> >> >> >> >> >> unable to program the multicast forwarding cache with<br>
>>> >> >> >> >> >> non-source-specific<br>>>> >> >> >> >> >> groups. Usually the key issue is to instruct the receiver<br>>>> >> >> >> >> >> application<br>
>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> join the source-specific channel (S,G).<br>>>> >> >> >> >> >><br>
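[Editor's note] Everton's point that the receiver application must itself join the source-specific channel can be sketched as follows. This is an illustrative Linux example, not part of qpimd; the (S,G) addresses are the ones from this thread, and the constant 39 is assumed to be the Linux value of IP_ADD_SOURCE_MEMBERSHIP, used only where the socket module does not export it:

```python
import socket
import struct

# Linux value of IP_ADD_SOURCE_MEMBERSHIP, as a fallback when the
# socket module does not export the constant on this platform.
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

def ssm_join_request(group, source, local="0.0.0.0"):
    """Build the ip_mreq_source argument for a source-specific join:
    struct ip_mreq_source { imr_multiaddr; imr_interface; imr_sourceaddr; }."""
    return struct.pack(
        "4s4s4s",
        socket.inet_aton(group),   # imr_multiaddr:  group to join
        socket.inet_aton(local),   # imr_interface:  local interface address
        socket.inet_aton(source),  # imr_sourceaddr: accepted source
    )

# The channel discussed in this thread: (S,G) = (192.168.4.60, 239.255.255.250)
mreq = ssm_join_request("239.255.255.250", "192.168.4.60")

# On a live receiver one would then issue (commented out here):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)
```

A receiver doing this join should make node 2 see an IGMPv3 source-specific report for (192.168.4.60, 239.255.255.250), instead of relying on the static "ip igmp join" workaround in pimd.conf.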
>>> >> >> >> >> >> Regarding the config, the basic rule is:<br>>>> >> >> >> >> >> 1) Enable "ip pim ssm" everywhere (on every interface that<br>
>>> >> >> >> >> >> should<br>>>> >> >> >> >> >> pass<br>>>> >> >> >> >> >> mcast).<br>>>> >> >> >> >> >> 2) Enable both "ip pim ssm" and "ip igmp" on interfaces<br>
>>> >> >> >> >> >> attached<br>>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> the receivers (IGMPv3 hosts).<br>
>>> >> >> >> >> >><br>>>> >> >> >> >> >> An even simpler config rule to remember is to enable both<br>>>> >> >> >> >> >> commands<br>
>>> >> >> >> >> >> everywhere. They should not cause any harm.<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hence, if your mcast receiver is attached to Node 2 at<br>
>>> >> >> >> >> >> ra_ap0, I<br>>>> >> >> >> >> >> think<br>>>> >> >> >> >> >> you will<br>>>> >> >> >> >> >> need at least the following config:<br>
>>> >> >> >> >> >><br>>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> ! Node 1<br>>>> >> >> >> >> >> !<br>
>>> >> >> >> >> >> interface ra_ap0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >> interface ra_sta0<br>
>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> ! Node 2<br>
>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> interface ra_ap0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >> ip igmp<br>
>>> >> >> >> >> >> interface ra_sta0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hope this helps,<br>
>>> >> >> >> >> >> Everton<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek<br>
>>> >> >> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> >> >> >> wrote:<br>>>> >> >> >> >> >> > Hi Everton & Fellow qpimd users,<br>
>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > We're trying to stream multicast video traffic between a<br>>>> >> >> >> >> >> > Tversity<br>
>>> >> >> >> >> >> > server<br>>>> >> >> >> >> >> > and<br>>>> >> >> >> >> >> > a multicast client separated by 2 nodes (node1 and<br>
>>> >> >> >> >> >> > node2).<br>>>> >> >> >> >> >> > Each<br>>>> >> >> >> >> >> > node<br>>>> >> >> >> >> >> > is<br>
>>> >> >> >> >> >> > running quagga suite (version 0.99.15) along with qpimd<br>>>> >> >> >> >> >> > (version<br>>>> >> >> >> >> >> > 0.158)<br>
>>> >> >> >> >> >> > running on top of Linux 2.6.26.<br>>>> >> >> >> >> >> > Node 1 has 3 network interfaces - eth0, ra_ap0 and ra_sta0<br>>>> >> >> >> >> >> > Node 2 has 2 network interfaces - ra_sta0 and ra_ap0<br>
>>> >> >> >> >> >> > The Tversity server talks to interface ra_ap0 on Node 1<br>>>> >> >> >> >> >> > and<br>>>> >> >> >> >> >> > the<br>
>>> >> >> >> >> >> > multicast<br>>>> >> >> >> >> >> > client talks to interface ra_ap0 on Node 2<br>>>> >> >> >> >> >> > Nodes 1 and 2 talk with each other over their ra_sta0<br>
>>> >> >> >> >> >> > interfaces<br>>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > Below is a graphical depiction :<br>
>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > Tversity server -----------ra_ap0--> Node 1<br>>>> >> >> >> >> >> > --ra_sta0-----------------ra_sta0-->Node<br>
>>> >> >> >> >> >> > 2-----ra_ap0------------------------> Video Client<br>>>> >> >> >> >> >> > =========== ======================<br>
>>> >> >> >> >> >> > ====================== =============<br>>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> ><br>
>>> >> >> >> >> >> > Node 1 pimd.conf file<br>>>> >> >> >> >> >> > ==================<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> >> >> > ! 2009/08/01 20:26:06<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > hostname node1<br>>>> >> >> >> >> >> > password zebra<br>>>> >> >> >> >> >> > enable password zebra<br>
>>> >> >> >> >> >> > log stdout<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth0<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth1<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface lo<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> >> > ip pim ssm<br>
>>> >> >> >> >> >> > ip igmp<br>>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_sta0<br>
>>> >> >> >> >> >> > ip igmp<br>>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > ip multicast-routing<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > line vty<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> ><br>
>>> >> >> >> >> >> > Node 2 pimd.conf configuration file<br>>>> >> >> >> >> >> > ============================<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> >> >> > ! 2009/08/02 21:54:14<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > hostname node2<br>>>> >> >> >> >> >> > password zebra<br>>>> >> >> >> >> >> > enable password zebra<br>
>>> >> >> >> >> >> > log stdout<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth0<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth1<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface lo<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> >> > ip igmp<br>
>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_sta0<br>>>> >> >> >> >> >> > ip igmp<br>
>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > ip multicast-routing<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > line vty<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> >
>>> >><br>>>> >> Can you experiment with commenting out the following line:<br>>>> >><br>>>> >> PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY<br>>>> >><br>
>>> >> from pimd/Makefile.am ?<br>>>> >><br>>>> >> Then you will need to bootstrap autotools with:<br>>>> >><br>>>> >> autoreconf -i --force<br>>>> >><br>
>>> >> And finally to rebuild quagga.<br>>>> >><br>>>> >> I know this test may be cumbersome since it requires the whole<br>>>> >> autotools<br>>>> >> suite present on your system, but it could help to identify why pimd is<br>
>>> >> missing the hello packets.<br>>>> >><br>>>> >> Thanks,<br>>>> >> Everton<br>>>> >><br>>>> >><br>>>> >> On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> wrote:<br>>>> >> > Hi Everton,<br>>>> >> ><br>>>> >> > Below are the answers :<br>>>> >> ><br>>>> >> > 1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.<br>
>>> >> ><br>>>> >> > 2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2 are<br>>>> >> > receiving PIMv2 "Hello" packets however they are addressed to<br>
>>> >> > 224.0.0.13.<br>>>> >> ><br>>>> >> > 3) Don't see any error logs on nodes 1 and 2. Below is the output of<br>>>> >> > "show<br>>>> >> > ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"<br>
>>> >> > counters.<br>>>> >> ><br>>>> >> > node1# show ip pim hello<br>>>> >> > Interface Address Period Timer StatStart Recv Rfail Send<br>>>> >> > Sfail<br>
>>> >> > ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595<br>>>> >> > 0<br>>>> >> > ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595<br>
>>> >> > 0<br>>>> >> > node1#<br>>>> >> ><br>>>> >> > node2# show ip pim hello<br>>>> >> > Interface Address Period Timer StatStart Recv Rfail Send<br>
>>> >> > Sfail<br>>>> >> > ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593<br>>>> >> > 0<br>>>> >> > ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593<br>
>>> >> > 0<br>>>> >> > node2#<br>>>> >> ><br>>>> >> ><br>>>> >> > Thanks,<br>>>> >> ><br>>>> >> > On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques<br>
>>> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> > wrote:<br>>>> >> >><br>>>> >> >> Hi,<br>>>> >> >><br>
>>> >> >> I think the problem is node2 fails to bring up the node1 as pim<br>>>> >> >> neighbor<br>>>> >> >> on ra_sta0, since node1 is missing from node2 "show ip pim<br>
>>> >> >> neighbor".<br>>>> >> >><br>>>> >> >> Can you please double check the following?<br>>>> >> >><br>>>> >> >> 1) "ip pim ssm" is enabled on node1 ra_sta0 ?<br>
>>> >> >> 2) node2 is receiving pim hello packets from node1 on ra_sta0 ?<br>>>> >> >> 3) node2 pimd is logging any error/warning ? look for messages about<br>>>> >> >> packets from node1, specially hello packets.<br>
>>> >> >><br>>>> >> >> Thanks,<br>>>> >> >> Everton<br>>>> >> >><br>>>> >> >> On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> wrote:<br>>>> >> >> > Below is the output as requested<br>>>> >> >> ><br>>>> >> >> ><br>>>> >> >> > User Access Verification<br>
>>> >> >> ><br>>>> >> >> > Password:<br>>>> >> >> ><br>>>> >> >> > node2> enable<br>>>> >> >> ><br>
>>> >> >> > Password:<br>>>> >> >> ><br>>>> >> >> > node2# show ip igmp interface<br>>>> >> >> ><br>>>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu<br>
>>> >> >> > Prmsc<br>>>> >> >> > Del<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no<br>>>> >> >> ><br>
>>> >> >> > node2# show ip igmp group<br>>>> >> >> ><br>>>> >> >> > Interface Address Group Mode Timer Srcs V Uptime<br>>>> >> >> ><br>
>>> >> >> > ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17<br>>>> >> >> ><br>>>> >> >> > node2# show ip igmp sources<br>
>>> >> >> ><br>>>> >> >> > Interface Address Group Source Timer Fwd Uptime<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim designated-router<br>
>>> >> >> ><br>>>> >> >> > NonPri: Number of neighbors missing DR Priority hello option<br>>>> >> >> ><br>>>> >> >> > Interface Address DR Uptime Elections NonPri<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0<br>>>> >> >> ><br>>>> >> >> > ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim hello<br>>>> >> >> ><br>>>> >> >> > Interface Address Period Timer StatStart Recv Rfail Send Sfail<br>
>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0<br>>>> >> >> ><br>>>> >> >> > ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim interface<br>>>> >> >> ><br>>>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu<br>
>>> >> >> > Prmsc<br>>>> >> >> > Del<br>>>> >> >> ><br>>>> >> >> > ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no<br>>>> >> >> ><br>
>>> >> >> > ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim local-membership<br>>>> >> >> ><br>
>>> >> >> > Interface Address Source Group Membership<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim join<br>>>> >> >> ><br>
>>> >> >> > Interface Address Source Group State Uptime Expire Prune<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim neighbor<br>>>> >> >> ><br>
>>> >> >> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority<br>>>> >> >> > G=generation_id<br>>>> >> >> > A=address_list<br>>>> >> >> ><br>
>>> >> >> > T=can_disable_join_suppression<br>>>> >> >> ><br>>>> >> >> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv<br>>>> >> >> ><br>
>>> >> >> > node2# show ip pim rpf<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Delay: 10000 msecs<br>>>> >> >> ><br>
>>> >> >> > RPF Cache Refresh Timer: 0 msecs<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Requests: 6<br>>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Events: 3<br>
>>> >> >> ><br>>>> >> >> > RPF Cache Refresh Last: 00:34:24<br>>>> >> >> ><br>>>> >> >> > Source Group RpfIface RpfAddress RibNextHop Metric Pref<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream<br>>>> >> >> ><br>>>> >> >> > Source Group State Uptime JoinTimer RefCnt<br>
>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream-join-desired<br>>>> >> >> ><br>>>> >> >> > Interface Source Group LostAssert Joins PimInclude JoinDesired<br>
>>> >> >> > EvalJD<br>>>> >> >> ><br>>>> >> >> > node2# show ip pim upstream-rpf<br>>>> >> >> ><br>>>> >> >> > Source Group RpfIface RibNextHop RpfAddress<br>
>>> >> >> ><br>>>> >> >> > node2# show ip route 192.168.4.60<br>>>> >> >> ><br>>>> >> >> > Address NextHop Interface Metric Preference<br>
>>> >> >> ><br>>>> >> >> > 192.168.4.60 192.168.3.20 ra_sta0 1 0<br>>>> >> >> ><br>>>> >> >> > node2# q<br>>>> >> >> ><br>
>>> >> >> > On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques<br>>>> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> > wrote:<br>
>>> >> >> >><br>>>> >> >> >> Hi,<br>>>> >> >> >><br>>>> >> >> >> Can you send the following commands from node2 ?<br>
>>> >> >> >><br>>>> >> >> >> show ip igmp interface<br>>>> >> >> >> show ip igmp group<br>>>> >> >> >> show ip igmp sources<br>
>>> >> >> >> show ip pim designated-router<br>>>> >> >> >> show ip pim hello<br>>>> >> >> >> show ip pim interface<br>>>> >> >> >> show ip pim local-membership<br>
>>> >> >> >> show ip pim join<br>>>> >> >> >> show ip pim neighbor<br>>>> >> >> >> show ip pim rpf<br>>>> >> >> >> show ip pim upstream<br>
>>> >> >> >> show ip pim upstream-join-desired<br>>>> >> >> >> show ip pim upstream-rpf<br>>>> >> >> >> show ip route 192.168.4.60<br>>>> >> >> >><br>
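The whole batch above can be collected in one pass. A sketch of a hypothetical helper, assuming quagga's vtysh is wired up to pimd (when vtysh is absent the script still prints the section headers, so the structure stays visible; otherwise use the telnet vty exactly as in the transcripts):

```python
import shutil
import subprocess

# Hypothetical helper: run Everton's command list in one pass and print
# each command's output under a labelled header.
COMMANDS = [
    "show ip igmp interface",
    "show ip igmp group",
    "show ip igmp sources",
    "show ip pim designated-router",
    "show ip pim hello",
    "show ip pim interface",
    "show ip pim local-membership",
    "show ip pim join",
    "show ip pim neighbor",
    "show ip pim rpf",
    "show ip pim upstream",
    "show ip pim upstream-join-desired",
    "show ip pim upstream-rpf",
    "show ip route 192.168.4.60",
]

report = []
for cmd in COMMANDS:
    report.append("=== %s ===" % cmd)
    if shutil.which("vtysh"):  # only invoke vtysh where it exists
        result = subprocess.run(["vtysh", "-c", cmd],
                                capture_output=True, text=True)
        report.append(result.stdout)

print("\n".join(report))
```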
>>> >> >> >> Thanks,<br>>>> >> >> >> Everton<br>>>> >> >> >><br>>>> >> >> >> On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek<br>
>>> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> >> wrote:<br>>>> >> >> >> > Hi Everton,<br>
>>> >> >> >> ><br>>>> >> >> >> > I added the entry "ip pim ssm" on ra_ap0 as you suggested. I<br>>>> >> >> >> > still<br>
>>> >> >> >> > don't<br>>>> >> >> >> > see join request coming into the source. Below is what the<br>>>> >> >> >> > configuration<br>
>>> >> >> >> > looks like on the individual nodes:<br>>>> >> >> >> ><br>>>> >> >> >> > Node 1 pimd.conf<br>>>> >> >> >> > -------------------------<br>
>>> >> >> >> > !<br>>>> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> > ! 2009/08/08 05:03:23<br>>>> >> >> >> > !<br>
>>> >> >> >> > hostname node1<br>>>> >> >> >> > password zebra<br>>>> >> >> >> > enable password zebra<br>>>> >> >> >> > log stdout<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface eth0<br>>>> >> >> >> > !<br>>>> >> >> >> > interface eth1<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface lo<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_ap0<br>
>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface ra_sta0<br>>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp query-interval 125<br>
>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> > !<br>>>> >> >> >> > !<br>>>> >> >> >> > ip multicast-routing<br>
>>> >> >> >> > !<br>>>> >> >> >> > line vty<br>>>> >> >> >> > !<br>>>> >> >> >> ><br>>>> >> >> >> ><br>
>>> >> >> >> > Node 2 pimd.conf<br>>>> >> >> >> > -------------------------<br>>>> >> >> >> > !<br>>>> >> >> >> > ! Zebra configuration saved from vty<br>
>>> >> >> >> > ! 2009/08/09 22:38:12<br>>>> >> >> >> > !<br>>>> >> >> >> > hostname node2<br>>>> >> >> >> > password zebra<br>
>>> >> >> >> > enable password zebra<br>>>> >> >> >> > log stdout<br>>>> >> >> >> > !<br>>>> >> >> >> > interface br-lan<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface eth0<br>>>> >> >> >> > !<br>>>> >> >> >> > interface eth1<br>
>>> >> >> >> > !<br>>>> >> >> >> > interface lo<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_ap0<br>
>>> >> >> >> > ip pim ssm<br>>>> >> >> >> > ip igmp<br>>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> > !<br>>>> >> >> >> > interface ra_sta0<br>>>> >> >> >> > ip pim ssm<br>
>>> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> > !<br>>>> >> >> >> > !<br>
>>> >> >> >> > ip multicast-routing<br>>>> >> >> >> > !<br>>>> >> >> >> > line vty<br>>>> >> >> >> > !<br>
>>> >> >> >> > On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques<br>>>> >> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> > wrote:<br>>>> >> >> >> >><br>>>> >> >> >> >> Hi,<br>>>> >> >> >> >><br>>>> >> >> >> >> Yes, pimd should route the join request towards the source.<br>
>>> >> >> >> >><br>>>> >> >> >> >> However, you need to enable "ip pim ssm" on ra_ap0 as well.<br>>>> >> >> >> >> If you enable only "ip igmp" on an interface, pimd won't inject<br>
>>> >> >> >> >> IGMP-learnt membership into the pim protocol.<br>>>> >> >> >> >><br>>>> >> >> >> >> Cheers,<br>>>> >> >> >> >> Everton<br>
>>> >> >> >> >><br>>>> >> >> >> >> On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek<br>>>> >> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> >> wrote:<br>>>> >> >> >> >> > Hi Everton,<br>>>> >> >> >> >> ><br>>>> >> >> >> >> > Thanks for the suggestions. I made the changes to the config<br>
>>> >> >> >> >> > files<br>>>> >> >> >> >> > on<br>>>> >> >> >> >> > both<br>>>> >> >> >> >> > nodes as you suggested. Since it is not possible for me to<br>
>>> >> >> >> >> > force<br>>>> >> >> >> >> > the<br>>>> >> >> >> >> > client<br>>>> >> >> >> >> > to do a source specific join I added the following line at<br>
>>> >> >> >> >> > interface<br>>>> >> >> >> >> > ra_ap0<br>>>> >> >> >> >> > on node 2 where the client is attached:<br>
>>> >> >> >> >> ><br>>>> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> > ip igmp<br>>>> >> >> >> >> > ip igmp query-interval 125<br>
>>> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> >> ><br>
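For reference, the membership report such a static (S,G) join produces is small enough to build by hand. A hedged sketch of its RFC 3376 layout (illustration only, not what qpimd literally does internally; ALLOW_NEW_SOURCES is one plausible record type for a fresh source-specific join):

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: folded 16-bit one's-complement sum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

group = socket.inet_aton("239.255.255.250")
source = socket.inet_aton("192.168.4.60")

# One group record: ALLOW_NEW_SOURCES (type 5), no aux data, one source.
record = struct.pack("!BBH", 5, 0, 1) + group + source

# IGMPv3 Membership Report header: type 0x22, reserved, checksum
# (0 while computing), reserved, number of group records.
header = struct.pack("!BBHHH", 0x22, 0, 0, 0, 1)
report = header + record
report = report[:2] + struct.pack("!H", inet_checksum(report)) + report[4:]

print(report.hex())
```

The resulting 20-byte report carries the single (192.168.4.60, 239.255.255.250) record and is what a capture addressed to 224.0.0.22 should show.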
>>> >> >> >> >> > I do see the source-specific IGMPv3 join group<br>>>> >> >> >> >> > 239.255.255.250<br>>>> >> >> >> >> > for<br>
>>> >> >> >> >> > source<br>>>> >> >> >> >> > 192.168.4.60 which is addressed to 224.0.0.22 on the side of<br>>>> >> >> >> >> > node2.<br>
>>> >> >> >> >> > However<br>>>> >> >> >> >> > this join request never makes it to node 1 where the source<br>>>> >> >> >> >> > is<br>
>>> >> >> >> >> > located<br>>>> >> >> >> >> > on<br>>>> >> >> >> >> > ra_ap0.<br>>>> >> >> >> >> > Shouldn't the pimd route this join request to the node where<br>
>>> >> >> >> >> > the<br>>>> >> >> >> >> > source<br>>>> >> >> >> >> > is<br>>>> >> >> >> >> > attached ?<br>
>>> >> >> >> >> ><br>>>> >> >> >> >> > Thanks,<br>>>> >> >> >> >> ><br>>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>>>> >> >> >> >> ><br>>>> >> >> >> >> > On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques<br>>>> >> >> >> >> > <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>
>>> >> >> >> >> > wrote:<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hi,<br>>>> >> >> >> >> >><br>
>>> >> >> >> >> >> You did not mention whether you got a source-specific<br>>>> >> >> >> >> >> IGMPv3<br>>>> >> >> >> >> >> join<br>
>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> the<br>>>> >> >> >> >> >> channel (S,G)=(192.168.4.60,239.255.255.250). Please notice<br>
>>> >> >> >> >> >> qpimd<br>>>> >> >> >> >> >> is<br>>>> >> >> >> >> >> unable to program the multicast forwarding cache with<br>
>>> >> >> >> >> >> non-source-specific<br>>>> >> >> >> >> >> groups. Usually the key issue is to instruct the receiver<br>>>> >> >> >> >> >> application<br>
>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> join the source-specific channel (S,G).<br>>>> >> >> >> >> >><br>
>>> >> >> >> >> >> Regarding the config, the basic rule is:<br>>>> >> >> >> >> >> 1) Enable "ip pim ssm" everywhere (on every interface that<br>
>>> >> >> >> >> >> should<br>>>> >> >> >> >> >> pass<br>>>> >> >> >> >> >> mcast).<br>>>> >> >> >> >> >> 2) Enable both "ip pim ssm" and "ip igmp" on interfaces<br>
>>> >> >> >> >> >> attached<br>>>> >> >> >> >> >> to<br>>>> >> >> >> >> >> the receivers (IGMPv3 hosts).<br>
>>> >> >> >> >> >><br>>>> >> >> >> >> >> An even simpler config rule to remember is to enable both<br>>>> >> >> >> >> >> commands<br>
>>> >> >> >> >> >> everywhere. They should not cause any harm.<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hence, if your mcast receiver is attached to Node 2 at<br>
>>> >> >> >> >> >> ra_ap0, I<br>>>> >> >> >> >> >> think<br>>>> >> >> >> >> >> you will<br>>>> >> >> >> >> >> need at least the following config:<br>
>>> >> >> >> >> >><br>>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> ! Node 1<br>>>> >> >> >> >> >> !<br>
>>> >> >> >> >> >> interface ra_ap0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >> interface ra_sta0<br>
>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> ! Node 2<br>
>>> >> >> >> >> >> !<br>>>> >> >> >> >> >> interface ra_ap0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >> ip igmp<br>
>>> >> >> >> >> >> interface ra_sta0<br>>>> >> >> >> >> >> ip pim ssm<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> Hope this helps,<br>
>>> >> >> >> >> >> Everton<br>>>> >> >> >> >> >><br>>>> >> >> >> >> >> On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek<br>
>>> >> >> >> >> >> <<a href="mailto:***@gmail.com">***@gmail.com</a>><br>>>> >> >> >> >> >> wrote:<br>>>> >> >> >> >> >> > Hi Everton & Fellow qpimd users,<br>
>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > We're trying to stream multicast video traffic between a<br>>>> >> >> >> >> >> > Tversity<br>
>>> >> >> >> >> >> > server<br>>>> >> >> >> >> >> > and<br>>>> >> >> >> >> >> > a multicast client separated by 2 nodes (node1 and<br>
>>> >> >> >> >> >> > node2).<br>>>> >> >> >> >> >> > Each<br>>>> >> >> >> >> >> > node<br>>>> >> >> >> >> >> > is<br>
>>> >> >> >> >> >> > running quagga suite (version 0.99.15) along with qpimd<br>>>> >> >> >> >> >> > (version<br>>>> >> >> >> >> >> > 0.158)<br>
>>> >> >> >> >> >> > running on top of Linux 2.6.26.<br>>>> >> >> >> >> >> > Node 1 has 3 network interfaces - eth0, ap0 and ra_sta0<br>>>> >> >> >> >> >> > Node 2 has 2 network interfaces - ra_sta0 and ra_ap0<br>
>>> >> >> >> >> >> > The Tversity server talks to interface ra_ap0 on Node 1<br>>>> >> >> >> >> >> > and<br>>>> >> >> >> >> >> > the<br>
>>> >> >> >> >> >> > multicast<br>>>> >> >> >> >> >> > client talks to interface ra_ap0 on Node 2<br>>>> >> >> >> >> >> > Nodes 1 and 2 talk with each other over their ra_sta0<br>
>>> >> >> >> >> >> > interfaces<br>>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > Below is a graphical depiction :<br>
>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> > Tversity server -----------ra_ap0--> Node 1<br>>>> >> >> >> >> >> > --ra_sta0-----------------ra_sta0-->Node<br>
>>> >> >> >> >> > 2-----ra_ap0------------------------> Video Client<br>>>> >> >> >> >> ><br>>>> >> >> >> >> ><br>
>>> >> >> >> >> >> > Node 1 pimd.conf file<br>>>> >> >> >> >> >> > ==================<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> >> >> > ! 2009/08/01 20:26:06<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > hostname node1<br>>>> >> >> >> >> >> > password zebra<br>>>> >> >> >> >> >> > enable password zebra<br>
>>> >> >> >> >> >> > log stdout<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth0<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth1<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface lo<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> >> > ip pim ssm<br>
>>> >> >> >> >> >> > ip igmp<br>>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_sta0<br>
>>> >> >> >> >> >> > ip igmp<br>>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > ip multicast-routing<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > line vty<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> ><br>
>>> >> >> >> >> >> > Node 2 pimd.conf configuration file<br>>>> >> >> >> >> >> > ============================<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > ! Zebra configuration saved from vty<br>>>> >> >> >> >> >> > ! 2009/08/02 21:54:14<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > hostname node2<br>>>> >> >> >> >> >> > password zebra<br>>>> >> >> >> >> >> > enable password zebra<br>
>>> >> >> >> >> >> > log stdout<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth0<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface eth1<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface lo<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_ap0<br>>>> >> >> >> >> >> > ip igmp<br>
>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > interface ra_sta0<br>>>> >> >> >> >> >> > ip igmp<br>
>>> >> >> >> >> >> > ip igmp query-interval 125<br>>>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> > ip multicast-routing<br>>>> >> >> >> >> >> > !<br>
>>> >> >> >> >> >> > line vty<br>>>> >> >> >> >> >> > !<br>>>> >> >> >> >> >> ><br>>>> >> >> >> >> >> >