<div>Hi Everton,</div> <div>&nbsp;</div> <div>Thanks for the suggestions. I made the changes to the config files on both nodes as you suggested. Since it is not possible for me to force the client to do a source-specific join, I added the following lines at interface ra_ap0 on node 2, where the client is attached:</div> <div>&nbsp;</div> <div>interface ra_ap0<br>ip igmp</div> <div>ip igmp query-interval 125<br>ip igmp query-max-response-time-dsec 100<br>ip igmp join 239.255.255.250 192.168.4.60</div> <div>&nbsp;</div> <div>I do see the source-specific IGMPv3 join for group 239.255.255.250 with source 192.168.4.60, addressed to 224.0.0.22, on the node 2 side. However, this join request never makes it to node 1, where the source is attached on ra_ap0.</div> <div>Shouldn't pimd route this join request to the node where the source is attached?</div> <div>&nbsp;</div> <div>Thanks,</div> <div>&nbsp;</div> <div><br><br></div> <div class="gmail_quote">On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques <span dir="ltr"><<a href="mailto:***@gmail.com">***@gmail.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Hi,<br><br>You did not mention whether you got a source-specific IGMPv3 join to the<br>channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd is<br>
unable to program the multicast forwarding cache with non-source-specific<br>groups. Usually the key issue is to instruct the receiver application to<br>join the source-specific channel (S,G).<br><br>Regarding the config, the basic rule is:<br>
1) Enable "ip pim ssm" everywhere (on every interface that should pass mcast).<br>2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to<br>the receivers (IGMPv3 hosts).<br><br>An even simpler config rule to remember is to enable both commands<br>
everywhere. They should not cause any harm.<br><br>Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think you will<br>need at least the following config:<br><br>!<br>! Node 1<br> <div class="im">!<br>interface ra_ap0<br>&nbsp;ip pim ssm<br></div>interface ra_sta0<br>&nbsp;ip pim ssm<br><br>!<br>! Node 2<br> <div class="im">!<br>interface ra_ap0<br>&nbsp;ip pim ssm<br>&nbsp;ip igmp<br></div>interface ra_sta0<br>&nbsp;ip pim ssm<br><br>Hope this helps,<br><font color="#888888">Everton<br></font> <div> <div></div> <div class="h5"><br>On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek <<a href="mailto:***@gmail.com">***@gmail.com</a>> wrote:<br>> Hi Everton & Fellow qpimd users,<br>><br>> We're trying to stream multicast video traffic between a Tversity server and<br>
> a multicast client separated by 2 nodes (node1 and node2). Each node is<br>> running quagga suite (version 0.99.15) along with qpimd (version 0.158)<br>> running on top of Linux 2.6.26.<br>> Node 1 has 3 network interfaces - eth0, ap0 and ra_sta0<br>
> Node 2 has 2 network interfaces - ra_sta0 and ra_ap0<br>> The Tversity server talks to interface ra_ap0 on Node 1 and the multicast<br>> client talks to interface ra_ap0 on Node 2<br>> Nodes 1 and 2 talk with each other over their ra_sta0 interfaces<br>
><br>> Below is a graphical depiction:<br>><br>> Tversity server --> [ra_ap0] Node 1 [ra_sta0] <--> [ra_sta0] Node 2 [ra_ap0] --> Video Client<br>><br>><br>> Node 1 pimd.conf file<br>> ==================<br>> !<br>> ! Zebra configuration saved from vty<br>
> ! 2009/08/01 20:26:06<br>> !<br>> hostname node1<br>> password zebra<br>> enable password zebra<br>> log stdout<br>> !<br>> interface eth0<br>> !<br>> interface eth1<br>> !<br>> interface lo<br>
> !<br>> interface ra_ap0<br>> ip pim ssm<br>> ip igmp<br>> ip igmp query-interval 125<br>> ip igmp query-max-response-time-dsec 100<br>> ip igmp join 239.255.255.250 192.168.4.60<br>> !<br>> interface ra_sta0<br>
> ip igmp<br>> ip igmp query-interval 125<br>> ip igmp query-max-response-time-dsec 100<br>> !<br>> !<br>> ip multicast-routing<br>> !<br>> line vty<br>> !<br>><br>> Node 2 pimd.conf configuration file<br>
> ============================<br>> !<br>> ! Zebra configuration saved from vty<br>> ! 2009/08/02 21:54:14<br>> !<br>> hostname node2<br>> password zebra<br>> enable password zebra<br>> log stdout<br>
> !<br>> interface eth0<br>> !<br>> interface eth1<br>> !<br>> interface lo<br>> !<br>> interface ra_ap0<br>> ip igmp<br>> ip igmp query-interval 125<br>> ip igmp query-max-response-time-dsec 100<br>
> ip igmp join 239.255.255.250 192.168.4.60<br>> !<br>> interface ra_sta0<br>> ip igmp<br>> ip igmp query-interval 125<br>> ip igmp query-max-response-time-dsec 100<br>> !<br>> !<br>> ip multicast-routing<br>
> !<br>> line vty<br>> !<br>><br>>
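Everton's key point above is that qpimd only programs (S,G) forwarding state when the receiver application itself issues a source-specific join, so that an IGMPv3 membership report for the channel (S,G) is generated. As a minimal sketch of what such a join looks like at the socket level on Linux (the helper names and the example port are illustrative, not from this thread; `IP_ADD_SOURCE_MEMBERSHIP` is defined by hand because not every Python build exports it):

```python
import socket
import struct

# Linux value for IP_ADD_SOURCE_MEMBERSHIP (RFC 3678 API); fall back to it
# when the running Python does not expose the constant.
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

def make_ip_mreq_source(group, source, iface="0.0.0.0"):
    """Pack struct ip_mreq_source in the Linux field order:
    imr_multiaddr, imr_interface, imr_sourceaddr."""
    return struct.pack("4s4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(iface),
                       socket.inet_aton(source))

def ssm_join(sock, group, source, iface="0.0.0.0"):
    """Join the source-specific channel (S,G) = (source, group).

    This is what makes the kernel emit the IGMPv3 source-specific
    membership report (sent to 224.0.0.22) that qpimd listens for.
    """
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                    make_ip_mreq_source(group, source, iface))

# Hypothetical usage with the addresses from this thread:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", 5004))  # example port, not from the thread
#   ssm_join(sock, "239.255.255.250", "192.168.4.60")
```

Note that the IGMPv3 report itself is link-local: it only reaches the directly attached router (node 2), which is why having pimd with `ip pim ssm` on the path matters; propagating the join toward the source is PIM's job, not IGMP's.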