Discussion:
[qpimd-users] Unable to route multicast video streams using qpimd.
Yoda geek
2009-10-26 06:42:09 UTC
Hi Everton & Fellow qpimd users,

We're trying to stream multicast video traffic between a Tversity server and
a multicast client separated by 2 nodes (node1 and node2). Each node is
running the Quagga suite (version 0.99.15) along with qpimd (version 0.158)
on top of Linux 2.6.26.
Node 1 has 3 network interfaces - eth0, ra_ap0 and ra_sta0
Node 2 has 2 network interfaces - ra_sta0 and ra_ap0
The Tversity server talks to interface ra_ap0 on Node 1 and the multicast
client talks to interface ra_ap0 on Node 2
Nodes 1 and 2 talk with each other over their ra_sta0 interfaces

Below is a graphical depiction:

Tversity server ---ra_ap0---> Node 1 ---ra_sta0--------ra_sta0---> Node 2 ---ra_ap0---> Video Client


Node 1 pimd.conf file
==================
!
! Zebra configuration saved from vty
! 2009/08/01 20:26:06
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!

Node 2 pimd.conf configuration file
============================
!
! Zebra configuration saved from vty
! 2009/08/02 21:54:14
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
Everton Marques
2009-10-26 13:44:25 UTC
Hi,

You did not mention whether you got a source-specific IGMPv3 join to the
channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd is
unable to program the multicast forwarding cache with non-source-specific
groups. Usually the key issue is to instruct the receiver application to
join the source-specific channel (S,G).
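
In case it helps, the receiver-side part is just a source-specific membership
on the UDP socket. A rough sketch for Linux is below (the group and source are
the (S,G) from this thread; the UDP port 1234 and the local interface address
192.168.5.11 are made-up placeholders, so adjust them to your setup):

/* ssm_join.c - minimal sketch of a source-specific (IGMPv3) join.
   Placeholders: UDP port 1234 and local address 192.168.5.11. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Bind to the port the video stream is sent to (placeholder). */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(1234);
    if (bind(sock, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind"); return 1;
    }

    /* Source-specific join for (S,G) = (192.168.4.60, 239.255.255.250).
       With IGMPv3 enabled, the kernel sends the corresponding report. */
    struct ip_mreq_source mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr  = inet_addr("239.255.255.250");
    mreq.imr_sourceaddr.s_addr = inet_addr("192.168.4.60");
    mreq.imr_interface.s_addr  = inet_addr("192.168.5.11"); /* placeholder */
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_SOURCE_MEMBERSHIP"); return 1;
    }

    /* Read one datagram just to show traffic arriving on the channel. */
    char buf[2048];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    printf("received %zd bytes\n", n);
    close(sock);
    return 0;
}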

Regarding the config, the basic rule is:
1) Enable "ip pim ssm" everywhere (on every interface that should pass mcast).
2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to
the receivers (IGMPv3 hosts).

An even simpler config rule to remember is to enable both commands
everywhere. They should not cause any harm.

Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think you will
need at least the following config:

!
! Node 1
!
interface ra_ap0
ip pim ssm
interface ra_sta0
ip pim ssm

!
! Node 2
!
interface ra_ap0
ip pim ssm
ip igmp
interface ra_sta0
ip pim ssm

Hope this helps,
Everton
Yoda geek
2009-11-01 09:02:04 UTC
Hi Everton,

Thanks for the suggestions. I made the changes to the config files on both
nodes as you suggested. Since it is not possible for me to force the client
to do a source-specific join, I added the following "ip igmp join" line under
interface ra_ap0 on node 2, where the client is attached:

interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60

I do see the source-specific IGMPv3 join for group 239.255.255.250, source
192.168.4.60, addressed to 224.0.0.22, on the node2 side. However,
this join request never makes it to node 1, where the source is located on
ra_ap0.
Shouldn't pimd route this join request towards the node where the source is
attached?

Thanks,




Everton Marques
2009-11-01 20:44:41 UTC
Hi,

Yes, pimd should route the join request towards the source.

However, you need to enable "ip pim ssm" on ra_ap0 as well.
If you enable only "ip igmp" on an interface, pimd won't inject
IGMP-learnt membership into the pim protocol.
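
In other words, the receiver-facing interface needs both commands, as in the
Node 2 snippet suggested earlier:

interface ra_ap0
 ip pim ssm
 ip igmp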

Cheers,
Everton
Yoda geek
2009-11-02 07:44:35 UTC
Hi Everton,

I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't
see the join request reaching the source. Below is what the configuration
looks like on the individual nodes:

Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!


Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!

Everton Marques
2009-11-03 15:51:00 UTC
Hi,

Can you send the output of the following commands from node2?

show ip igmp interface
show ip igmp group
show ip igmp sources
show ip pim designated-router
show ip pim hello
show ip pim interface
show ip pim local-membership
show ip pim join
show ip pim neighbor
show ip pim rpf
show ip pim upstream
show ip pim upstream-join-desired
show ip pim upstream-rpf
show ip route 192.168.4.60

Thanks,
Everton
Yoda geek
2009-11-11 06:48:50 UTC
Below is the output as requested


User Access Verification

Password:

node2> enable

Password:

node2# show ip igmp interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no

node2# show ip igmp group
Interface Address Group Mode Timer Srcs V Uptime
ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17

node2# show ip igmp sources
Interface Address Group Source Timer Fwd Uptime

node2# show ip pim designated-router
NonPri: Number of neighbors missing DR Priority hello option
Interface Address DR Uptime Elections NonPri
ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0
ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0

node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0
ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0

node2# show ip pim interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no
ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no

node2# show ip pim local-membership
Interface Address Source Group Membership

node2# show ip pim join
Interface Address Source Group State Uptime Expire Prune

node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv

node2# show ip pim rpf
RPF Cache Refresh Delay: 10000 msecs
RPF Cache Refresh Timer: 0 msecs
RPF Cache Refresh Requests: 6
RPF Cache Refresh Events: 3
RPF Cache Refresh Last: 00:34:24
Source Group RpfIface RpfAddress RibNextHop Metric Pref

node2# show ip pim upstream
Source Group State Uptime JoinTimer RefCnt

node2# show ip pim upstream-join-desired
Interface Source Group LostAssert Joins PimInclude JoinDesired EvalJD

node2# show ip pim upstream-rpf
Source Group RpfIface RibNextHop RpfAddress

node2# show ip route 192.168.4.60
Address NextHop Interface Metric Preference
192.168.4.60 192.168.3.20 ra_sta0 1 0

node2# q


Everton Marques
2009-11-11 14:04:55 UTC
Hi,

I think the problem is that node2 fails to bring up node1 as a pim neighbor
on ra_sta0, since node1 is missing from node2's "show ip pim neighbor" output.

Can you please double check the following?

1) "ip pim ssm" is enabled on node1 ra_sta0 ?
2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
3) node2 pimd is logging any error/warning ? look for messages about
packets from node1, specially hello packets.
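
(For item 2, one way to check, assuming tcpdump is available on node2, is to
capture PIM packets on the link; node1's periodic hellos to 224.0.0.13 should
show up there:

tcpdump -i ra_sta0 -n pim

If the hellos appear on the wire but pimd still reports no neighbor, pimd is
most likely discarding them.)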

Thanks,
Everton
Yoda geek
2009-11-13 09:30:48 UTC
Hi Everton,

Below are the answers:

1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.

2) I do see in a wireshark trace that ra_sta0 on both nodes 1 and 2 is
receiving PIMv2 "Hello" packets; however, they are addressed to 224.0.0.13.

3) Don't see any error logs on nodes 1 and 2. Below is the output of "show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail" counters.

node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595 0
ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595 0
node1#

node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593 0
ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593 0
node2#


Thanks,

Everton Marques
2009-11-13 12:16:26 UTC
Hi Yoda,

Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.

Can you experiment with commenting out the following line:

PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY

from pimd/Makefile.am ?

Then you will need to bootstrap autotools with:

autoreconf -i --force

And finally, rebuild quagga.
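
Spelled out, the Makefile.am change is just commenting the line out:

# PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY

followed by the autoreconf above and a normal rebuild (e.g. "./configure && make",
with whatever options you usually pass to configure).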

I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.

Thanks,
Everton
Yoda geek
2009-11-15 07:23:24 UTC
Permalink
Hi Everton,

I followed the exact directions as you suggested and ran the rebuilt quagga
on the nodes. However I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes?
Below is the output from pimd running on both nodes:

Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.

Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.

User Access Verification
Password:
node1> enable
Password:
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.

Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.

Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.

User Access Verification
Password:
node2> enable
Password:
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.

Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets. Can you try
removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am, then run
autoreconf -i --force
and finally rebuild quagga?
I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.
Thanks,
Everton
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2 are
receiving PIMv2 "Hello" packets however they are addressed to 224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the output of
"show ip pim hello" on both nodes 1 and 2. Please notice the "Rfail" counters.
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595 0
ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595 0
node1#
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593 0
ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593 0
node2#
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques <
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up the node1 as pim neighbor
on ra_sta0, since node1 is missing from node2 "show ip pim neighbor".
Can you please double check the following?
1) "ip pim ssm" is enabled on node1 ra_sta0 ?
2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
3) Is node2 pimd logging any error/warning? Look for messages about
packets from node1, especially hello packets.
Thanks,
Everton
Everton Marques
2009-11-16 11:28:56 UTC
Permalink
Hi Yoda,

Thanks.

Yes, I am looking for the reason why the Rfail counter is increasing.

When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now that you have undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should
have a related log message.

Can you see if you can locate any meaningful message in your pimd logs?

If you send me your pimd logs I can try to find something as well.

Thanks a lot,
Everton
Yoda geek
2009-11-17 07:23:18 UTC
Permalink
Hi Everton,

Seems like the PIM packet options length exceeds what is permitted according
to the code and the error log. The pimd.log is full of the following messages:

***@OpenWrt:~# tail -f /usr/local/logs/pimd.log
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0

Also - Below are the contents of the PIMv2 packet captured by wireshark.
Please note that I have stripped off the IP and other headers. Just the PIM
protocol packet and the offsets from wireshark:

0000   20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010   09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020   00 00

Thanks again for all your help.

Regards,
Yoda
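
For reference, here is a small standalone program (not qpimd code) that walks
the option TLVs in that capture; the option-type names used in the comments are
the standard PIM hello options. It shows where the length=43780 in the log comes
from: the Generation ID option (type 20) carries a length field of 0xab04 = 43780
instead of the expected 4.

/* Standalone sketch (not qpimd code): walk the PIM hello option TLVs in the
 * capture above.  Option-type names follow the PIM hello specification. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

int main(void)
{
    /* PIM payload from the wireshark dump (IP header already stripped). */
    const uint8_t pkt[] = {
        0x20, 0x00, 0xf6, 0x56,                         /* PIMv2 Hello header + checksum   */
        0x00, 0x01, 0x00, 0x02, 0x00, 0x69,             /* opt  1: Holdtime = 0x69 = 105 s */
        0x00, 0x02, 0x00, 0x04, 0x01, 0xf4, 0x09, 0xc4, /* opt  2: LAN Prune Delay         */
        0x00, 0x13, 0x00, 0x04, 0x00, 0x01, 0x00, 0x00, /* opt 19: DR Priority             */
        0x00, 0x14, 0xab, 0x04, 0x32, 0x4e, 0x00, 0x00  /* opt 20: Generation ID           */
    };
    size_t off = 4; /* skip the 4-byte PIM header */

    while (off + 4 <= sizeof(pkt)) {
        unsigned type = (unsigned)((pkt[off]     << 8) | pkt[off + 1]);
        unsigned len  = (unsigned)((pkt[off + 2] << 8) | pkt[off + 3]);
        printf("option type=%u length=%u\n", type, len);
        if (off + 4 + len > sizeof(pkt)) {
            printf("  length overruns the packet - malformed TLV\n");
            break;
        }
        off += 4 + len;
    }
    return 0;
}

Compiled with gcc and run, it prints sane lengths for option types 1, 2 and 19,
then type=20 length=43780, which overruns the packet - the same numbers
pim_hello_recv warns about. The DR Priority value in the same capture decodes to
0x00010000 (65536) rather than the usual default of 1, which also looks suspicious.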
Everton Marques
2009-11-19 12:36:08 UTC
Permalink
Yoda,

I am looking at this.

Thanks a lot,
Everton
Everton Marques
2009-11-23 16:28:49 UTC
Permalink
Hi Yoda,

Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.

I suppose you are running on a 64-bit cpu?

I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by next week.

Cheers,
Everton
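
To illustrate the class of bug being described, here is a minimal sketch (purely
illustrative, not the actual qpimd code; the DR Priority default of 1 is an
assumption): converting a 32-bit hello option value as two independent 16-bit
halves instead of one htonl() call puts exactly the DR Priority bytes seen in the
capture above (00 01 00 00, i.e. 65536) on the wire.

/* Illustrative sketch only - NOT the actual qpimd code.  It shows how a
 * host-to-network conversion slip on a 32-bit hello option value can yield
 * the DR Priority bytes seen in the capture (00 01 00 00 = 65536) instead
 * of the expected 00 00 00 01. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

static void dump(const char *tag, const unsigned char *b, size_t n)
{
    printf("%-8s:", tag);
    for (size_t i = 0; i < n; i++)
        printf(" %02x", b[i]);
    printf("\n");
}

int main(void)
{
    uint32_t dr_priority = 1;   /* presumed default DR Priority of 1 */
    unsigned char wire[4];
    uint16_t half[2];

    /* Correct: one full 32-bit conversion -> wire bytes 00 00 00 01. */
    uint32_t ok = htonl(dr_priority);
    memcpy(wire, &ok, sizeof(wire));
    dump("correct", wire, sizeof(wire));

    /* Buggy: each 16-bit half converted separately, halves never exchanged.
     * On a little-endian host this emits 00 01 00 00, matching the capture. */
    memcpy(half, &dr_priority, sizeof(half));
    half[0] = htons(half[0]);
    half[1] = htons(half[1]);
    memcpy(wire, half, sizeof(wire));
    dump("buggy", wire, sizeof(wire));

    return 0;
}

This is only one possible shape of such a slip; the actual fix Everton refers to
lives in the qpimd git repository.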
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Yoda geek
2009-11-26 08:14:23 UTC
Permalink
Hi Everton,

Thanks for the bug fix. We're currently running the latest code from the git
repository. I notice that each node now recognizes the other as a PIM
neighbor, and the "Rfail" counter in "show ip pim hello" is 0 - which is
much better behavior than before.

However, we are still unable to pass multicast traffic. The command "show ip
igmp sources" on each node returns nothing. The command "show ip igmp groups"
on node 2 lists "239.255.255.250" as a group - which is good. However, the
same command on node1 returns nothing.

Is there some configuration missing here?

The network setup and the configuration files are the same as stated at the
top of the thread. Any hints or help will be highly appreciated.

Regards,
Pravin
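
As a sketch of what the receiver side has to do for SSM (an assumption about
the client application, not something taken from the configs above): the
client would need to issue an IGMPv3 source-specific join to the channel
(S,G) = (192.168.4.60, 239.255.255.250), for example with the
IP_ADD_SOURCE_MEMBERSHIP socket option. Only then would "show ip igmp
sources" be expected to list the source on the receiver-facing interface.

/* sketch only: IGMPv3 source-specific join on the receiver host */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ip_mreq_source mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET, "239.255.255.250", &mreq.imr_multiaddr);   /* group G  */
    inet_pton(AF_INET, "192.168.4.60",    &mreq.imr_sourceaddr);  /* source S */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);  /* or the receiving iface address */

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_SOURCE_MEMBERSHIP");
        close(fd);
        return 1;
    }

    printf("joined (S,G)=(192.168.4.60, 239.255.255.250)\n");
    pause();  /* keep the membership alive while the stream is received */
    close(fd);
    return 0;
}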

On Mon, Nov 23, 2009 at 8:28 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
Yoda geek
2009-11-26 08:23:39 UTC
Permalink
We even tried adding interface ra_ap0 on nodes 1 and 2 to the multicast
group 239.255.255.250 with source 192.168.4.60. However, this doesn't help
either.

Thanks,
Yoda
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the
git repository. I notice that each of the nodes recognizes the other one as
a neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However still unable to pass multicast traffic. The command "show ip igmp
sources" on each node returns nothing. The command "show ip igmp groups" on
node 2 lists "239.255.255.250" as a group - which is good. However the same
command on node1 returns nothing.
Is there some configuration missing here ?
The network setup and the configuration files are same as stated in the top
of the thread before. Any hints or help will be highly appreciated.
Regards,
Yoda
Yoda geek
2009-11-26 08:55:40 UTC
Permalink
One more thing: when we take a Wireshark trace on the node 2 interface to
which the client is connected, no SSDP messages are observed. When taking the
Wireshark trace on the node 1 interface to which the source is connected,
there are several SSDP messages sent from the source to address
239.255.255.250.
Post by Yoda geek
We even tried adding interface ra_ap0 on nodes 1 and 2 to the multicast
group 239.255.255.250 with source as 192.168.4.60. However this doesn't help
either.
Thanks,
Yoda
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the
git repository. I notice that each of the nodes recognizes the other one as
a neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However still unable to pass multicast traffic. The command "show ip igmp
sources" on each node returns nothing. The command "show ip igmp groups" on
node 2 lists "239.255.255.250" as a group - which is good. However the same
command on node1 returns nothing.
Is there some configuration missing here ?
The network setup and the configuration files are same as stated in the
top of the thread before. Any hints or help will be highly appreciated.
Regards,
Yoda
On Mon, Nov 23, 2009 at 8:28 AM, Everton Marques <
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
Seems like the PIM packet options length exceeds what is permitted
according
Post by Everton Marques
Post by Yoda geek
to the code and the error log. The pimd.log is full of the following
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM
hello TLV
Post by Everton Marques
Post by Yoda geek
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM
hello TLV
Post by Everton Marques
Post by Yoda geek
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM
hello TLV
Post by Everton Marques
Post by Yoda geek
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM
hello TLV
Post by Everton Marques
Post by Yoda geek
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
Also - Below are the contents of the PIMV2 packet captured by
wireshark.
Post by Everton Marques
Post by Yoda geek
Please note that I have stripped off the IP and other headers. Just
the PIM
Post by Everton Marques
Post by Yoda geek
0000 20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010 09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020 00 00
Thanks again for all your help.
Regards,
Yoda
On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques <
Post by Everton Marques
Hi Yoda,
Thanks.
Yes, I am looking for the reason why the Rfail counter is increasing.
When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now you undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should
have a related log message.
Can you see if you locate any meaningful message in your pimd logs?
If you send me your pimd logs I can try to find something as well.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
I followed the exact directions as you suggested and ran the
rebuilt
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
quagga
on the nodes. However I don't see any difference in behavior. Is
there
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
anything in particular you're looking for after these changes ?
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node1> enable
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.
Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node2> enable
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.
Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
Can you try removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am ? Then run
autoreconf -i --force
and finally rebuild quagga.
I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.
Thanks,
Everton
On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek <
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2
ra_sta0.
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and
2 are
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
receiving PIMv2 "Hello" packets however they are addressed to
224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the
output of
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
Post by Everton Marques
Post by Yoda geek
"show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
counters.
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595
0
ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595
0
node1#
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593
0
ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593
0
node2#
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up the node1 as pim
neighbor
on ra_sta0, since node1 is missing from node2 "show ip pim
neighbor".
Can you please double check the following?
1) "ip pim ssm" is enabled on node1 ra_sta0 ?
2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
3) node2 pimd is logging any error/warning ? look for messages
about
packets from node1, specially hello packets.
Thanks,
Everton
Everton Marques
2009-11-27 13:51:43 UTC
Permalink
Hi Yoda,

I suspect the same kind of byte ordering bug may be affecting the IGMP
code as well. I only verified the PIM code.
I will look at the IGMP byte ordering code as soon as I can, in a few days.
Thanks a lot for confirming the PIM byte ordering seems good now.

Cheers,
Everton
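For reference, the kind of 16-bit read being discussed here can be sketched in isolation. The snippet below is only an illustration, not code from the qpimd tree: read_uint16_host is a made-up name, but it mirrors the memcpy()+ntohl() style of pim_read_uint32_host() in pim_int.c, while read_uint16_raw shows the direct-cast pattern that breaks once byte order (or alignment) matters.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Unsafe pattern: cast straight into the packet buffer. The value is
 * interpreted in host byte order and the access may be unaligned. */
static uint16_t read_uint16_raw(const uint8_t *buf)
{
  return *(const uint16_t *)buf;
}

/* Safe pattern: copy the bytes out, then convert with ntohs(). */
static uint16_t read_uint16_host(const uint8_t *buf)
{
  uint16_t val;
  memcpy(&val, buf, sizeof(val)); /* bytes exactly as received on the wire */
  return ntohs(val);              /* network order -> host order */
}

int main(void)
{
  /* e.g. an IGMPv3 "Number of Sources" field carrying the value 2 */
  const uint8_t wire[2] = { 0x00, 0x02 };
  printf("raw cast : %u\n", (unsigned)read_uint16_raw(wire));  /* 512 on little-endian */
  printf("converted: %u\n", (unsigned)read_uint16_host(wire)); /* 2 everywhere */
  return 0;
}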
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the git
repository. I notice that each of the nodes recognizes the other one as a
neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However still unable to pass multicast traffic. The command "show ip igmp
sources" on each node returns nothing. The command "show ip igmp groups" on
node 2 lists "239.255.255.250" as a group - which is good. However the same
command on node1 returns nothing.
Is there some configuration missing here ?
The network setup and the configuration files are same as stated in the top
of the thread before. Any hints or help will be highly appreciated.
Regards,
Pravin
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
Seems like the PIM packet options length exceeds what is permitted according
to the code and the error log. The pimd.log is full of the following
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
Also - Below are the contents of the PIMV2 packet captured by wireshark.
Please note that I have stripped off the IP and other headers. Just the PIM
0000   20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010   09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020   00 00
Thanks again for all your help.
Regards,
Yoda
On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Thanks.
Yes, I am looking for the reason why the Rfail counter is increasing.
When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now you undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should
have a related log message.
Can you see if you locate any meaningful message in your pimd logs?
If you send me your pimd logs I can try to find something as well.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
I followed the exact directions as you suggested and ran the rebuilt quagga
on the nodes. However I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes ?
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node1> enable
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.
Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node2> enable
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.
Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
Can you try removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am ? Then run
autoreconf -i --force
and finally rebuild quagga.
I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.
Thanks,
Everton
On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2 are
receiving PIMv2 "Hello" packets however they are addressed to
224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the output of
"show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
counters.
node1# show ip pim hello
Interface Address         Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0    192.168.4.20     00:30 00:05  29:57:50    0  3496 3595
0
ra_sta0   192.168.3.20     00:30 00:04  29:57:50 3496  3496 3595
0
node1#
node2# show ip pim hello
Interface Address         Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0    192.168.5.10     00:30 00:04  29:56:48    0  3590 3593
0
ra_sta0   192.168.3.10     00:30 00:07  29:56:48 3590  3590 3593
0
node2#
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up the node1 as pim
neighbor
on ra_sta0, since node1 is missing from node2 "show ip pim neighbor".
Can you please double check the following?
1) "ip pim ssm" is enabled on node1 ra_sta0 ?
2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
3) node2 pimd is logging any error/warning ? look for messages about
packets from node1, specially hello packets.
Thanks,
Everton
On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek
Post by Yoda geek
Below is the output as requested
User Access Verification
node2> enable
node2# show ip igmp interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu
Prmsc
Del
ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no
node2# show ip igmp interface group
Interface Address Group Mode Timer Srcs V Uptime
ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17
node2# show ip igmp group sources
Interface Address Group Source Timer Fwd Uptime
node2# show ip igmp sources pim designated-router
NonPri: Number of neighbors missing DR Priority hello option
Interface Address DR Uptime Elections NonPri
ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0
ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0
node2# show ip pim designated-router hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0
ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0
node2# show ip pim hello interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu
Prmsc
Del
ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no
ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no
node2# show ip pim interface local-membership
Interface Address Source Group Membership
node2# show ip pim local-membership join
Interface Address Source Group State Uptime Expire Prune
node2# show ip pim join neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim neighbor rpf
RPF Cache Refresh Delay: 10000 msecs
RPF Cache Refresh Timer: 0 msecs
RPF Cache Refresh Requests: 6
RPF Cache Refresh Events: 3
RPF Cache Refresh Last: 00:34:24
Source Group RpfIface RpfAddress RibNextHop Metric Pref
node2# show ip pim rpf upstream
Source Group State Uptime JoinTimer RefCnt
node2# show ip pim upstream-join-desired
Interface Source Group LostAssert Joins PimInclude JoinDesired
EvalJD
node2# show ip pim upstream-join-desired rpf
Source Group RpfIface RibNextHop RpfAddress
node2# show ip pim upstream-rpf route 192.168.4.60
Address NextHop Interface Metric Preference
192.168.4.60 192.168.3.20 ra_sta0 1 0
node2# q
On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
Post by Everton Marques
Hi,
Can you send the following commands from node2 ?
show ip igmp interface
show ip igmp group
show ip igmp sources
show ip pim designated-router
show ip pim hello
show ip pim interface
show ip pim local-membership
show ip pim join
show ip pim neighbor
show ip pim rpf
show ip pim upstream
show ip pim upstream-join-desired
show ip pim upstream-rpf
show ip route 192.168.4.60
Thanks,
Everton
On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek
Post by Yoda geek
Hi Everton,
I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't
see join request coming into the source. Below is what the configuration
looks like now:
Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
Post by Everton Marques
Hi,
Yes, pimd should route the join request towards the
source.
However, you need to enable "ip pim ssm" on ra_ap0 as
well.
If you enable only "ip igmp" on a interface, pimd won't
inject
IGMP-learnt membership into the pim protocol.
Cheers,
Everton
On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
Post by Yoda geek
Hi Everton,
Thanks for the suggestions. I made the changes to the config files on both
nodes as you suggested. Since it is not possible for me to force the client
to do a source specific join I added the following line at interface ra_ap0:
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
I do see the source-specific IGMPv3 join group 239.255.255.250 for source
192.168.4.60 which is addressed to 224.0.0.22 on the side of node2. However
this join request never makes it to node 1 where the source is located on
ra_ap0.
Shouldn't the pimd route this join request to the node where the source is
attached ?
Thanks,
On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
Post by Everton Marques
Hi,
You did not mention whether you got a source-specific IGMPv3 join to the
channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd is
unable to program the multicast forwarding cache with non-source-specific
groups. Usually the key issue is to instruct the receiver application to
join the source-specific channel (S,G).
1) Enable "ip pim ssm" everywhere (on every interface that should pass
mcast).
2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to
the receivers (IGMPv3 hosts).
An even simpler config rule to remember is to enable both commands
everywhere. They should not cause any harm.
Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think
you will need at least the following config:
!
! Node 1
!
interface ra_ap0
 ip pim ssm
interface ra_sta0
 ip pim ssm
!
! Node 2
!
interface ra_ap0
 ip pim ssm
 ip igmp
interface ra_sta0
 ip pim ssm
Hope this helps,
Everton
On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek
Hi Everton & Fellow qpimd users,
We're trying to stream multicast video traffic between a Tversity server and
a multicast client separated by 2 nodes (node1 and node2). Each node is
running quagga suite (version 0.99.15) along with qpimd (version 0.158)
running on top of Linux 2.6.26.
Node 1 has 3 network interfaces - eth0, ap0 and ra_sta0
Node 2 has 2 network interfaces - ra_sta0 and ra_ap0
The Tversity server talks to interface ra_ap0 on Node 1 and the multicast
client talks to interface ra_ap0 on Node 2
Nodes 1 and 2 talk with each other over their ra_sta0 interfaces
Tversity server -----------ra_ap0--> Node 1 --ra_sta0-----------------ra_sta0--> Node 2 -----ra_ap0------------------------> Video Client
Node 1 pimd.conf file
==================
!
! Zebra configuration saved from vty
! 2009/08/01 20:26:06
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf configuration file
============================
!
! Zebra configuration saved from vty
! 2009/08/02 21:54:14
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Yoda geek
2009-11-29 09:08:52 UTC
Permalink
Hi Everton,

I have made byte-ordering changes in the igmp portion of the pimd code
similar to what you did. However - in spite of these changes we're
unable to route multicast streams between the ssdp server on node 1 and
the ssdp client on node 2. Below are the changes made to the igmp portion
of the code. It would be great if you could validate these changes for
us:

Regards,
Yoda

--- pim_int_orig.h      2015-11-13 23:04:21.000000000 -0800
+++ pim_int.h   2015-11-13 21:38:22.000000000 -0800
@@ -27,5 +27,7 @@
 
 uint32_t pim_read_uint32_host(const uint8_t *buf);
 void pim_write_uint32(uint8_t *buf, uint32_t val_host);
+uint16_t pim_read_uint16_host(const uint8_t *buf);
+void pim_write_uint16(uint8_t *buf, uint16_t val_host);
 
 #endif /* PIM_INT_H */
--- pim_int_orig.c      2015-11-13 23:04:05.000000000 -0800
+++ pim_int.c   2015-11-13 21:36:54.000000000 -0800
@@ -42,3 +42,21 @@
   /* val_host is in netorder */
   memcpy(buf, &val_host, sizeof(val_host));
 }
+
+uint16_t pim_read_uint16_host(const uint8_t *buf)
+{
+  uint16_t val;
+  memcpy(&val, buf, sizeof(val));
+  /* val is in netorder */
+  val = ntohs(val);
+  /* val is in hostorder */
+  return val;
+}
+
+void pim_write_uint16(uint8_t *buf, uint16_t val_host)
+{
+  /* val_host is in host order */
+  val_host = htons(val_host);
+  /* val_host is in netorder */
+  memcpy(buf, &val_host, sizeof(val_host));
+}
--- pim_igmp_orig.c     2015-11-13 21:25:03.000000000 -0800
+++ pim_igmp.c  2015-11-13 21:39:04.000000000 -0800
@@ -34,6 +34,7 @@
 #include "pim_util.h"
 #include "pim_time.h"
 #include "pim_zebra.h"
+#include "pim_int.h"
 
 #define IGMP_GRP_REC_TYPE_MODE_IS_INCLUDE        (1)
 #define IGMP_GRP_REC_TYPE_MODE_IS_EXCLUDE        (2)
@@ -403,7 +404,8 @@
 
   group = find_group_by_addr(igmp, group_addr);
   if (group) {
-    int recv_num_sources = ntohs(*(uint16_t *)(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET));
+    //int recv_num_sources = ntohs(*(uint16_t *)(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET));
+    int recv_num_sources = pim_read_uint16_host(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET);
 
     /*
       RFC 3376: 6.6.1. Timer Updates
@@ -472,7 +474,7 @@
     return -1;
   }
 
-  num_groups = ntohs(*(uint16_t *) (igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET));
+  num_groups = pim_read_uint16_host(igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET);
   if (num_groups < 1) {
     zlog_warn("Recv IGMP report v3 from %s on %s: missing group records",
               from_str, ifp->name);
@@ -504,7 +506,7 @@
 
     rec_type        = group_record[IGMP_V3_GROUP_RECORD_TYPE_OFFSET];
     rec_auxdatalen  = group_record[IGMP_V3_GROUP_RECORD_AUXDATALEN_OFFSET];
-    rec_num_sources = ntohs(* (uint16_t *) (group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET));
+    rec_num_sources = pim_read_uint16_host(group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET);
 
     rec_group = *(struct in_addr *)(group_record + IGMP_V3_GROUP_RECORD_GROUP_OFFSET);
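The two helpers introduced by the patch can also be exercised outside of pimd with a small standalone check. This is only a sketch: the function bodies are copied from the diff above and the test value (43780, the bogus TLV length seen in the earlier logs) is just a convenient round-trip example.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Local copies of the helpers added by the patch above. */
static uint16_t pim_read_uint16_host(const uint8_t *buf)
{
  uint16_t val;
  memcpy(&val, buf, sizeof(val));  /* bytes in network order */
  return ntohs(val);               /* convert to host order */
}

static void pim_write_uint16(uint8_t *buf, uint16_t val_host)
{
  val_host = htons(val_host);               /* host -> network order */
  memcpy(buf, &val_host, sizeof(val_host)); /* write without alignment issues */
}

int main(void)
{
  uint8_t buf[2];

  /* Round trip: write 43780 (0xAB04) and read it back; the printed
   * bytes and the recovered value must be the same on both endiannesses. */
  pim_write_uint16(buf, 43780);
  printf("bytes on wire: %02x %02x\n", buf[0], buf[1]);     /* ab 04 */
  printf("read back    : %u\n", pim_read_uint16_host(buf)); /* 43780 */
  return 0;
}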





On Fri, Nov 27, 2009 at 5:51 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
I suspect the same kind of byte ordering bug may be affecting the IGMP
code as well. I only verified the PIM code.
I will look at the IGMP byte ordering code as soon as I can, in a few days.
Thanks a lot for confirming the PIM byte ordering seems good now.
Cheers,
Everton
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the git
repository. I notice that each of the nodes recognizes the other one as a
neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However still unable to pass multicast traffic. The command "show ip igmp
sources" on each node returns nothing. The command "show ip igmp groups" on
node 2 lists "239.255.255.250" as a group - which is good. However the same
command on node1 returns nothing.
Is there some configuration missing here ?
The network setup and the configuration files are same as stated in the top
of the thread before. Any hints or help will be highly appreciated.
Regards,
Pravin
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
Seems like the PIM packet options length exceeds what is permitted according
to the code and the error log. The pimd.log is full of the following
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
Also - Below are the contents of the PIMV2 packet captured by wireshark.
Please note that I have stripped off the IP and other headers. Just the PIM
0000   20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010   09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020   00 00
Thanks again for all your help.
Regards,
Yoda
On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Thanks.
Yes, I am looking for the reason why the Rfail counter is increasing.
When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now you undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should
have a related log message.
Can you see if you locate any meaningful message in your pimd logs?
If you send me your pimd logs I can try to find something as well.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
I followed the exact directions as you suggested and ran the rebuilt
quagga
on the nodes. However I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes ?
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node1> enable
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.
Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node2> enable
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.
Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
Can you try removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am ? Then run
autoreconf -i --force
and finally rebuild quagga.
I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.
Thanks,
Everton
On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2
ra_sta0.
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2
are
receiving PIMv2 "Hello" packets however they are addressed to
224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the output
of
"show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
counters.
node1# show ip pim hello
Interface Address         Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0    192.168.4.20     00:30 00:05  29:57:50    0  3496 3595
0
ra_sta0   192.168.3.20     00:30 00:04  29:57:50 3496  3496 3595
0
node1#
node2# show ip pim hello
Interface Address         Period Timer StatStart Recv Rfail Send
Sfail
ra_ap0    192.168.5.10     00:30 00:04  29:56:48    0  3590 3593
0
ra_sta0   192.168.3.10     00:30 00:07  29:56:48 3590  3590 3593
0
node2#
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up the node1 as pim
neighbor
on ra_sta0, since node1 is missing from node2 "show ip pim
neighbor".
Can you please double check the following?
1) "ip pim ssm" is enabled on node1 ra_sta0 ?
2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
3) node2 pimd is logging any error/warning ? look for messages
about
packets from node1, specially hello packets.
Thanks,
Everton
On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek
Post by Yoda geek
Below is the output as requested
User Access Verification
node2> enable
node2# show ip igmp interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop
AllMu
Prmsc
Del
ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no
node2# show ip igmp interface group
Interface Address Group Mode Timer Srcs V Uptime
ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17
node2# show ip igmp group sources
Interface Address Group Source Timer Fwd Uptime
node2# show ip igmp sources pim designated-router
NonPri: Number of neighbors missing DR Priority hello option
Interface Address DR Uptime Elections NonPri
ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0
ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0
node2# show ip pim designated-router hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0
ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0
node2# show ip pim hello interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop
AllMu
Prmsc
Del
ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no
ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no
node2# show ip pim interface local-membership
Interface Address Source Group Membership
node2# show ip pim local-membership join
Interface Address Source Group State Uptime Expire Prune
node2# show ip pim join neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority
G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim neighbor rpf
RPF Cache Refresh Delay: 10000 msecs
RPF Cache Refresh Timer: 0 msecs
RPF Cache Refresh Requests: 6
RPF Cache Refresh Events: 3
RPF Cache Refresh Last: 00:34:24
Source Group RpfIface RpfAddress RibNextHop Metric Pref
node2# show ip pim rpf upstream
Source Group State Uptime JoinTimer RefCnt
node2# show ip pim upstream-join-desired
Interface Source Group LostAssert Joins PimInclude JoinDesired
EvalJD
node2# show ip pim upstream-join-desired rpf
Source Group RpfIface RibNextHop RpfAddress
node2# show ip pim upstream-rpf route 192.168.4.60
Address NextHop Interface Metric Preference
192.168.4.60 192.168.3.20 ra_sta0 1 0
node2# q
On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
Post by Everton Marques
Hi,
Can you send the following commands from node2 ?
show ip igmp interface
show ip igmp group
show ip igmp sources
show ip pim designated-router
show ip pim hello
show ip pim interface
show ip pim local-membership
show ip pim join
show ip pim neighbor
show ip pim rpf
show ip pim upstream
show ip pim upstream-join-desired
show ip pim upstream-rpf
show ip route 192.168.4.60
Thanks,
Everton
On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek
Post by Yoda geek
Hi Everton,
I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't
see join request coming into the source. Below is what the configuration
looks like now:
Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
Post by Everton Marques
Hi,
Yes, pimd should route the join request towards the
source.
However, you need to enable "ip pim ssm" on ra_ap0 as
well.
If you enable only "ip igmp" on a interface, pimd won't
inject
IGMP-learnt membership into the pim protocol.
Cheers,
Everton
On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
Post by Yoda geek
Hi Everton,
Thanks for the suggestions. I made the changes to the config files on both
nodes as you suggested. Since it is not possible for me to force the client
to do a source specific join I added the following line at interface ra_ap0:
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
I do see the source-specific IGMPv3 join group 239.255.255.250 for source
192.168.4.60 which is addressed to 224.0.0.22 on the side of node2. However
this join request never makes it to node 1 where the source is located on
ra_ap0.
Shouldn't the pimd route this join request to the node where the source is
attached ?
Thanks,
On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
Post by Everton Marques
Hi,
You did not mention whether you got a source-specific IGMPv3 join to the
channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd is
unable to program the multicast forwarding cache with non-source-specific
groups. Usually the key issue is to instruct the receiver application to
join the source-specific channel (S,G).
1) Enable "ip pim ssm" everywhere (on every interface that should pass
mcast).
2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to
the receivers (IGMPv3 hosts).
An even simpler config rule to remember is to enable both commands
everywhere. They should not cause any harm.
Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think
you will need at least the following config:
!
! Node 1
!
interface ra_ap0
 ip pim ssm
interface ra_sta0
 ip pim ssm
!
! Node 2
!
interface ra_ap0
 ip pim ssm
 ip igmp
interface ra_sta0
 ip pim ssm
Hope this helps,
Everton
On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek
Hi Everton & Fellow qpimd users,
We're trying to stream multicast video traffic between a Tversity server and
a multicast client separated by 2 nodes (node1 and node2). Each node is
running quagga suite (version 0.99.15) along with qpimd (version 0.158)
running on top of Linux 2.6.26.
Node 1 has 3 network interfaces - eth0, ap0 and ra_sta0
Node 2 has 2 network interfaces - ra_sta0 and ra_ap0
The Tversity server talks to interface ra_ap0 on Node 1 and the multicast
client talks to interface ra_ap0 on Node 2
Nodes 1 and 2 talk with each other over their ra_sta0 interfaces
Tversity server -----------ra_ap0--> Node 1 --ra_sta0-----------------ra_sta0--> Node 2 -----ra_ap0------------------------> Video Client
Node 1 pimd.conf file
==================
!
! Zebra configuration saved from vty
! 2009/08/01 20:26:06
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf configuration file
============================
!
! Zebra configuration saved from vty
! 2009/08/02 21:54:14
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Everton Marques
2009-12-03 13:14:21 UTC
Permalink
Hi Yoda,

I am unsure the portion you modified would improve things.

Can you confirm that node2 is receiving a source-specific IGMPv3 join
to (S,G)=(192.168.4.60,239.255.255.250) ?

I don't understand why the group 239.255.255.250 is seen in "show ip igmp
group", but the source 192.168.4.60 is missing from "show ip igmp sources".
Since the group is in EXCLUDE mode, I fear node2 is receiving an any-source
join.

Are you able to grab a wireshark capture for such an IGMPv3 join?

Please check pimd logs for any related warning. You can mail me the
pimd logs if you want me to have a look as well. Also, you can enable
"debug ip igmp" in order to see detailed logs from pimd.

Please let me know your findings.

Thanks,
Everton
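For readers following the thread: the distinction Everton is asking about can also be forced from the receiver side. A source-specific (S,G) membership is what the kernel emits when the application uses a source-specific join, while a plain any-source join is what shows up as the EXCLUDE-mode group above. The sketch below only illustrates that socket API; it is not part of the Tversity client, and the bind port and the receiver address 192.168.5.20 are made-up examples.

/* Sketch of a receiver issuing a source-specific join for
 * (S,G) = (192.168.4.60, 239.255.255.250) on a Linux host. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) { perror("socket"); return 1; }

  struct sockaddr_in local;
  memset(&local, 0, sizeof(local));
  local.sin_family = AF_INET;
  local.sin_port = htons(1900);               /* example port only */
  local.sin_addr.s_addr = htonl(INADDR_ANY);
  if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
    perror("bind"); return 1;
  }

  /* Source-specific join: the kernel sends an IGMPv3 report with an
   * INCLUDE-mode group record listing 192.168.4.60, which is what
   * qpimd needs in order to build the (S,G) forwarding entry. */
  struct ip_mreq_source mreq;
  memset(&mreq, 0, sizeof(mreq));
  inet_pton(AF_INET, "239.255.255.250", &mreq.imr_multiaddr);
  inet_pton(AF_INET, "192.168.4.60",    &mreq.imr_sourceaddr);
  inet_pton(AF_INET, "192.168.5.20",    &mreq.imr_interface); /* receiver's own address (example) */
  if (setsockopt(fd, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                 &mreq, sizeof(mreq)) < 0) {
    perror("IP_ADD_SOURCE_MEMBERSHIP"); return 1;
  }

  char buf[1500];
  ssize_t n = recv(fd, buf, sizeof(buf), 0);  /* wait for multicast data */
  printf("received %zd bytes\n", n);
  return 0;
}

If the client cannot be changed, the wireshark capture of the reports sent to 224.0.0.22 should show the group record type: an INCLUDE-style record (MODE_IS_INCLUDE or ALLOW_NEW_SOURCES) listing 192.168.4.60 indicates a source-specific join, while MODE_IS_EXCLUDE with no sources is the any-source join that qpimd cannot program.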
Post by Yoda geek
Hi Everton,
I have made byte-ordering changes in the igmp portion of the pimd code
similar to what you did. However - inspite of these changes we're
unable to route multicast streams between ssdp server on node 1 and
ssdp client on node 2. Below are the changes made to the igmp portion
of the code. It would be great if you could validate these changes for
Regards,
Yoda
--- pim_int_orig.h      2015-11-13 23:04:21.000000000 -0800
+++ pim_int.h   2015-11-13 21:38:22.000000000 -0800
@@ -27,5 +27,7 @@
 
 uint32_t pim_read_uint32_host(const uint8_t *buf);
 void pim_write_uint32(uint8_t *buf, uint32_t val_host);
+uint16_t pim_read_uint16_host(const uint8_t *buf);
+void pim_write_uint16(uint8_t *buf, uint16_t val_host);
 
 #endif /* PIM_INT_H */
--- pim_int_orig.c      2015-11-13 23:04:05.000000000 -0800
+++ pim_int.c   2015-11-13 21:36:54.000000000 -0800
@@ -42,3 +42,21 @@
   /* val_host is in netorder */
   memcpy(buf, &val_host, sizeof(val_host));
 }
+
+uint16_t pim_read_uint16_host(const uint8_t *buf)
+{
+  uint16_t val;
+  memcpy(&val, buf, sizeof(val));
+  /* val is in netorder */
+  val = ntohs(val);
+  /* val is in hostorder */
+  return val;
+}
+
+void pim_write_uint16(uint8_t *buf, uint16_t val_host)
+{
+  /* val_host is in host order */
+  val_host = htons(val_host);
+  /* val_host is in netorder */
+  memcpy(buf, &val_host, sizeof(val_host));
+}
--- pim_igmp_orig.c     2015-11-13 21:25:03.000000000 -0800
+++ pim_igmp.c  2015-11-13 21:39:04.000000000 -0800
@@ -34,6 +34,7 @@
 #include "pim_util.h"
 #include "pim_time.h"
 #include "pim_zebra.h"
+#include "pim_int.h"
 
 #define IGMP_GRP_REC_TYPE_MODE_IS_INCLUDE        (1)
 #define IGMP_GRP_REC_TYPE_MODE_IS_EXCLUDE        (2)
@@ -403,7 +404,8 @@
 
   group = find_group_by_addr(igmp, group_addr);
   if (group) {
-    int recv_num_sources = ntohs(*(uint16_t *)(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET));
+    //int recv_num_sources = ntohs(*(uint16_t *)(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET));
+    int recv_num_sources = pim_read_uint16_host(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET);
 
     /*
       RFC 3376: 6.6.1. Timer Updates
@@ -472,7 +474,7 @@
     return -1;
   }
 
-  num_groups = ntohs(*(uint16_t *) (igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET));
+  num_groups = pim_read_uint16_host(igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET);
   if (num_groups < 1) {
     zlog_warn("Recv IGMP report v3 from %s on %s: missing group records",
               from_str, ifp->name);
@@ -504,7 +506,7 @@
 
     rec_type        = group_record[IGMP_V3_GROUP_RECORD_TYPE_OFFSET];
    rec_auxdatalen  = group_record[IGMP_V3_GROUP_RECORD_AUXDATALEN_OFFSET];
-    rec_num_sources = ntohs(* (uint16_t *) (group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET));
+    rec_num_sources = pim_read_uint16_host(group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET);
 
     rec_group = *(struct in_addr *)(group_record + IGMP_V3_GROUP_RECORD_GROUP_OFFSET);
On Fri, Nov 27, 2009 at 5:51 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
I suspect the same kind of byte ordering bug may be affecting the IGMP
code as well. I only verified the PIM code.
I will look at the IGMP byte ordering code as soon as I can, in a few days.
Thanks a lot for confirming the PIM byte ordering seems good now.
Cheers,
Everton
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the git
repository. I notice that each of the nodes recognizes the other one as a
neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However still unable to pass multicast traffic. The command "show ip igmp
sources" on each node returns nothing. The command "show ip igmp groups" on
node 2 lists "239.255.255.250" as a group - which is good. However the same
command on node1 returns nothing.
Is there some configuration missing here ?
The network setup and the configuration files are same as stated in the top
of the thread before. Any hints or help will be highly appreciated.
Regards,
Pravin
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
Seems like the PIM packet options length exceeds what is permitted according
to the code and the error log. The pimd.log is full of the following
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV
type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
Also - Below are the contents of the PIMV2 packet captured by wireshark.
Please note that I have stripped off the IP and other headers. Just the PIM
0000   20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010   09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020   00 00
Thanks again for all your help.
Regards,
Yoda
On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Thanks.
Yes, I am looking for the reason why the Rfail counter is increasing.
When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now you undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should
have a related log message.
Can you see if you locate any meaningful message in your pimd logs?
If you send me your pimd logs I can try to find something as well.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
I followed the exact directions as you suggested and ran the rebuilt
quagga
on the nodes. However I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes ?
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node1> enable
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.
Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node2> enable
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
A=address_list
T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.
Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
Can you try removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am ? Then run
autoreconf -i --force
and finally rebuild quagga.
I know this test may be cumbersome since it requires the whole autotools suite
present on your system, but it could help to identify why pimd is missing the
hello packets.
Thanks,
Everton
On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2
ra_sta0.
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2
are
receiving PIMv2 "Hello" packets however they are addressed to
224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the output
of
"show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
counters.
node1# show ip pim hello
Interface Address      Period Timer StatStart Recv Rfail Send Sfail
ra_ap0    192.168.4.20 00:30  00:05 29:57:50  0    3496  3595 0
ra_sta0   192.168.3.20 00:30  00:04 29:57:50  3496 3496  3595 0
node1#
node2# show ip pim hello
Interface Address      Period Timer StatStart Recv Rfail Send Sfail
ra_ap0    192.168.5.10 00:30  00:04 29:56:48  0    3590  3593 0
ra_sta0   192.168.3.10 00:30  00:07 29:56:48  3590 3590  3593 0
node2#
Thanks,
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up node1 as a pim neighbor on
ra_sta0, since node1 is missing from node2 "show ip pim neighbor".
Can you please double check the following?
1) Is "ip pim ssm" enabled on node1 ra_sta0?
2) Is node2 receiving pim hello packets from node1 on ra_sta0?
3) Is node2 pimd logging any error/warning? Look for messages about packets
from node1, especially hello packets.
Thanks,
Everton
On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek
Post by Yoda geek
Below is the output as requested
User Access Verification
node2> enable
node2# show ip igmp interface
Interface Address      ifIndex Socket Uptime   Multi Broad MLoop AllMu Prmsc Del
ra_ap0    192.168.5.10 5       9      00:34:40 yes   yes   yes   no    no    no
node2# show ip igmp group
Interface Address      Group           Mode Timer    Srcs V Uptime
ra_ap0    192.168.5.10 224.0.0.13      EXCL 00:03:55 0    3 00:34:48
ra_ap0    192.168.5.10 224.0.0.22      EXCL 00:03:55 0    3 00:34:48
ra_ap0    192.168.5.10 239.255.255.250 EXCL 00:03:59 0    3 00:02:17
node2# show ip igmp sources
Interface Address Group Source Timer Fwd Uptime
node2# show ip pim designated-router
NonPri: Number of neighbors missing DR Priority hello option
Interface Address      DR           Uptime   Elections NonPri
ra_ap0    192.168.5.10 192.168.5.10 00:35:16 1         0
ra_sta0   192.168.3.10 192.168.3.10 00:35:16 1         0
node2# show ip pim hello
Interface Address      Period Timer StatStart Recv Rfail Send Sfail
ra_ap0    192.168.5.10 00:30  00:08 00:35:23  0    70    71   0
ra_sta0   192.168.3.10 00:30  00:10 00:35:23  70   70    71   0
node2# show ip pim interface
Interface Address      ifIndex Socket Uptime   Multi Broad MLoop AllMu Prmsc Del
ra_ap0    192.168.5.10 5       10     00:35:30 yes   yes   no    no    no    no
ra_sta0   192.168.3.10 6       11     00:35:30 yes   yes   no    no    no    no
node2# show ip pim local-membership
Interface Address Source Group Membership
node2# show ip pim join
Interface Address Source Group State Uptime Expire Prune
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
            A=address_list T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim rpf
RPF Cache Refresh Delay: 10000 msecs
RPF Cache Refresh Timer: 0 msecs
RPF Cache Refresh Requests: 6
RPF Cache Refresh Events: 3
RPF Cache Refresh Last: 00:34:24
Source Group RpfIface RpfAddress RibNextHop Metric Pref
node2# show ip pim upstream
Source Group State Uptime JoinTimer RefCnt
node2# show ip pim upstream-join-desired
Interface Source Group LostAssert Joins PimInclude JoinDesired EvalJD
node2# show ip pim upstream-rpf
Source Group RpfIface RibNextHop RpfAddress
node2# show ip route 192.168.4.60
Address      NextHop      Interface Metric Preference
192.168.4.60 192.168.3.20 ra_sta0   1      0
node2# q
On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
Post by Everton Marques
Hi,
Can you send the output of the following commands from node2?
show ip igmp interface
show ip igmp group
show ip igmp sources
show ip pim designated-router
show ip pim hello
show ip pim interface
show ip pim local-membership
show ip pim join
show ip pim neighbor
show ip pim rpf
show ip pim upstream
show ip pim upstream-join-desired
show ip pim upstream-rpf
show ip route 192.168.4.60
Thanks,
Everton
On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek
Post by Yoda geek
Hi Everton,
I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't see
the join request coming into the source. Below is what the configuration looks
like now:
Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
Post by Everton Marques
Hi,
Yes, pimd should route the join request towards the source.
However, you need to enable "ip pim ssm" on ra_ap0 as well.
If you enable only "ip igmp" on an interface, pimd won't inject IGMP-learnt
membership into the pim protocol.
Cheers,
Everton
On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
Post by Yoda geek
Hi Everton,
Thanks for the suggestions. I made the changes to the config files on both
nodes as you suggested. Since it is not possible for me to force the client to
do a source-specific join, I added the following line at interface ra_ap0:
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
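(For comparison, if the client application could be modified, a source-specific
join on Linux would look roughly like the sketch below, using the standard
RFC 3678 MCAST_JOIN_SOURCE_GROUP socket option. The interface name and
addresses are just the values from this setup; the code is illustrative, not
taken from the actual client. With a join like that coming from the client
itself, the "ip igmp join" workaround should not be needed.)

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct group_source_req gsr;
    struct sockaddr_in *grp, *src;

    memset(&gsr, 0, sizeof(gsr));
    /* receiver-facing interface in this setup */
    gsr.gsr_interface = if_nametoindex("ra_ap0");

    grp = (struct sockaddr_in *) &gsr.gsr_group;
    grp->sin_family = AF_INET;
    inet_pton(AF_INET, "239.255.255.250", &grp->sin_addr);

    src = (struct sockaddr_in *) &gsr.gsr_source;
    src->sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.4.60", &src->sin_addr);

    /* The kernel then sends an IGMPv3 source-specific membership report,
     * which is what pimd needs to see for the (S,G) channel. */
    if (setsockopt(fd, IPPROTO_IP, MCAST_JOIN_SOURCE_GROUP,
                   &gsr, sizeof(gsr)) < 0)
        perror("MCAST_JOIN_SOURCE_GROUP");

    /* ... bind() to the group's UDP port and recvfrom() the stream ... */
    return 0;
}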
I do see the source-specific IGMPv3 join for group 239.255.255.250 and source
192.168.4.60, which is addressed to 224.0.0.22, on the node2 side. However this
join request never makes it to node 1, where the source is located on ra_ap0.
Shouldn't pimd route this join request to the node where the source is
attached?
Thanks,