Hi Everton,
I have made byte-ordering changes to the IGMP portion of the pimd code,
similar to what you did. However, in spite of these changes we are still
unable to route multicast streams between the SSDP server on node 1 and
the SSDP client on node 2. Below are the changes made to the IGMP portion
of the code. It would be great if you could validate these changes for us:
Regards,
Yoda
--- pim_int_orig.h 2015-11-13 23:04:21.000000000 -0800
+++ pim_int.h 2015-11-13 21:38:22.000000000 -0800
@@ -27,5 +27,7 @@
uint32_t pim_read_uint32_host(const uint8_t *buf);
void pim_write_uint32(uint8_t *buf, uint32_t val_host);
+uint16_t pim_read_uint16_host(const uint8_t *buf);
+void pim_write_uint16(uint8_t *buf, uint16_t val_host);
#endif /* PIM_INT_H */
--- pim_int_orig.c 2015-11-13 23:04:05.000000000 -0800
+++ pim_int.c 2015-11-13 21:36:54.000000000 -0800
@@ -42,3 +42,21 @@
/* val_host is in netorder */
memcpy(buf, &val_host, sizeof(val_host));
}
+
+uint16_t pim_read_uint16_host(const uint8_t *buf)
+{
+ uint16_t val;
+ memcpy(&val, buf, sizeof(val));
+ /* val is in netorder */
+ val = ntohs(val);
+ /* val is in hostorder */
+ return val;
+}
+
+void pim_write_uint16(uint8_t *buf, uint16_t val_host)
+{
+ /* val_host is in host order */
+ val_host = htons(val_host);
+ /* val_host is in netorder */
+ memcpy(buf, &val_host, sizeof(val_host));
+}
--- pim_igmp_orig.c 2015-11-13 21:25:03.000000000 -0800
+++ pim_igmp.c 2015-11-13 21:39:04.000000000 -0800
@@ -34,6 +34,7 @@
#include "pim_util.h"
#include "pim_time.h"
#include "pim_zebra.h"
+#include "pim_int.h"
#define IGMP_GRP_REC_TYPE_MODE_IS_INCLUDE (1)
#define IGMP_GRP_REC_TYPE_MODE_IS_EXCLUDE (2)
@@ -403,7 +404,7 @@
group = find_group_by_addr(igmp, group_addr);
if (group) {
-    int recv_num_sources = ntohs(*(uint16_t *)(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET));
+    int recv_num_sources = pim_read_uint16_host(igmp_msg + IGMP_V3_NUMSOURCES_OFFSET);
/*
RFC 3376: 6.6.1. Timer Updates
@@ -472,7 +474,7 @@
return -1;
}
-  num_groups = ntohs(*(uint16_t *) (igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET));
+  num_groups = pim_read_uint16_host(igmp_msg + IGMP_V3_REPORT_NUMGROUPS_OFFSET);
if (num_groups < 1) {
zlog_warn("Recv IGMP report v3 from %s on %s: missing group records",
from_str, ifp->name);
@@ -504,7 +506,7 @@
rec_type = group_record[IGMP_V3_GROUP_RECORD_TYPE_OFFSET];
rec_auxdatalen = group_record[IGMP_V3_GROUP_RECORD_AUXDATALEN_OFFSET];
-  rec_num_sources = ntohs(*(uint16_t *) (group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET));
+  rec_num_sources = pim_read_uint16_host(group_record + IGMP_V3_GROUP_RECORD_NUMSOURCES_OFFSET);
rec_group = *(struct in_addr *)(group_record + IGMP_V3_GROUP_RECORD_GROUP_OFFSET);
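
To sanity-check the new 16-bit helpers in isolation, here is a small
standalone round-trip test (the helper bodies are copied from the patch
above; the memcpy() form also avoids the unaligned (uint16_t *) casts
the old code used):

#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

uint16_t pim_read_uint16_host(const uint8_t *buf)
{
  uint16_t val;
  memcpy(&val, buf, sizeof(val)); /* safe even if buf is unaligned */
  return ntohs(val);              /* netorder -> hostorder */
}

void pim_write_uint16(uint8_t *buf, uint16_t val_host)
{
  val_host = htons(val_host);     /* hostorder -> netorder */
  memcpy(buf, &val_host, sizeof(val_host));
}

int main(void)
{
  uint8_t buf[3];

  /* buf + 1 is deliberately unaligned, like igmp_msg + offset. */
  pim_write_uint16(buf + 1, 0x1234);
  assert(buf[1] == 0x12 && buf[2] == 0x34); /* big-endian on the wire */
  assert(pim_read_uint16_host(buf + 1) == 0x1234);
  return 0;
}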
On Fri, Nov 27, 2009 at 5:51 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
I suspect the same kind of byte ordering bug may be affecting the IGMP
code as well. I only verified the PIM code.
I will look at the IGMP byte ordering code as soon as I can, in a few days.
Thanks a lot for confirming the PIM byte ordering seems good now.
Cheers,
Everton
Post by Yoda geek
Hi Everton,
Thanks for the bug fix. We're currently running the latest code from the git
repository. I notice that each of the nodes recognizes the other one as a
neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
much better behavior than before.
However, we are still unable to pass multicast traffic. The command "show
ip igmp sources" on each node returns nothing. The command "show ip igmp
groups" on node 2 lists "239.255.255.250" as a group, which is good.
However, the same command on node 1 returns nothing.
Is there some configuration missing here?
The network setup and the configuration files are the same as stated at
the top of the thread. Any hints or help will be highly appreciated.
Regards,
Pravin
Post by Everton Marques
Hi Yoda,
Just to let you know, I think you spotted a byte-ordering bug in qpimd
while converting 32-bit values from host to network.
I suppose you are running on a 64-bit cpu?
I think it is fixed by now in the git repository, but I was unable to spare
time for testing it. I hope to be able to test it properly by the next week.
Cheers,
Everton
On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
Post by Everton Marques
Yoda,
I am looking at this.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
It seems the PIM packet option length exceeds what is permitted,
according to the code and the error log. The pimd.log is full of the
following:
2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
Also, below are the contents of the PIMv2 packet captured by Wireshark.
Please note that I have stripped off the IP and other headers; just the
PIM payload is shown:
0000 20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
0010 09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
0020 00 00
Thanks again for all your help.
Regards,
Yoda
On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Thanks.
Yes, I am looking for the reason why the Rfail counter is increasing.
When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
Rfail may increment silently. However, now that you have undefined
PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should have a
related log message.
Can you check whether you can locate any meaningful messages in your
pimd logs?
If you send me your pimd logs I can try to find something as well.
Thanks a lot,
Everton
Post by Yoda geek
Hi Everton,
I followed the exact directions you suggested and ran the rebuilt quagga
on the nodes. However, I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes?
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node1> enable
node1# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
node1# q
Connection closed by foreign host.
Trying 192.168.3.10...
Connected to 192.168.3.10.
Escape character is '^]'.
Hello, this is Quagga 0.99.15 pimd 0.158
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
node2> enable
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
node2# q
Connection closed by foreign host.
Thanks,
Yoda
On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
Post by Everton Marques
Hi Yoda,
Based on the Rfail counter you spotted, I suspect the code under
PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
Can you try removing the line
PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
from pimd/Makefile.am, then run
autoreconf -i --force
and finally rebuild quagga?
I know this test may be cumbersome since it requires the whole autotools
suite present on your system, but it could help to identify why pimd is
missing the hello packets.
Thanks,
Everton
On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek
Post by Yoda geek
Hi Everton,
1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2
ra_sta0.
2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2
are
receiving PIMv2 "Hello" packets however they are addressed to
224.0.0.13.
3) Don't see any error logs on nodes 1 and 2. Below is the output
of
"show
ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
counters.
node1# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.4.20 00:30 00:05 29:57:50 0 3496 3595 0
ra_sta0 192.168.3.20 00:30 00:04 29:57:50 3496 3496 3595 0
node1#
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:04 29:56:48 0 3590 3593 0
ra_sta0 192.168.3.10 00:30 00:07 29:56:48 3590 3590 3593 0
node2#
Thanks,
On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
Post by Everton Marques
Hi,
I think the problem is node2 fails to bring up node1 as a pim neighbor
on ra_sta0, since node1 is missing from node2's "show ip pim neighbor".
Can you please double check the following?
1) Is "ip pim ssm" enabled on node1 ra_sta0?
2) Is node2 receiving pim hello packets from node1 on ra_sta0?
3) Is node2 pimd logging any error/warning? Look for messages about
packets from node1, especially hello packets.
Thanks,
Everton
On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek
Post by Yoda geek
Below is the output as requested:
User Access Verification
node2> enable
node2# show ip igmp interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no
node2# show ip igmp group
Interface Address Group Mode Timer Srcs V Uptime
ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48
ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17
node2# show ip igmp sources
Interface Address Group Source Timer Fwd Uptime
node2# show ip pim designated-router
NonPri: Number of neighbors missing DR Priority hello option
Interface Address DR Uptime Elections NonPri
ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0
ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0
node2# show ip pim hello
Interface Address Period Timer StatStart Recv Rfail Send Sfail
ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0
ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0
node2# show ip pim interface
Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no
ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no
node2# show ip pim local-membership
Interface Address Source Group Membership
node2# show ip pim join
Interface Address Source Group State Uptime Expire Prune
node2# show ip pim neighbor
Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list T=can_disable_join_suppression
Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
node2# show ip pim rpf
RPF Cache Refresh Delay: 10000 msecs
RPF Cache Refresh Timer: 0 msecs
RPF Cache Refresh Requests: 6
RPF Cache Refresh Events: 3
RPF Cache Refresh Last: 00:34:24
Source Group RpfIface RpfAddress RibNextHop Metric Pref
node2# show ip pim upstream
Source Group State Uptime JoinTimer RefCnt
node2# show ip pim upstream-join-desired
Interface Source Group LostAssert Joins PimInclude JoinDesired EvalJD
node2# show ip pim upstream-rpf
Source Group RpfIface RibNextHop RpfAddress
node2# show ip route 192.168.4.60
Address NextHop Interface Metric Preference
192.168.4.60 192.168.3.20 ra_sta0 1 0
node2# q
On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
Post by Everton Marques
Hi,
Can you send the output of the following commands from node2?
show ip igmp interface
show ip igmp group
show ip igmp sources
show ip pim designated-router
show ip pim hello
show ip pim interface
show ip pim local-membership
show ip pim join
show ip pim neighbor
show ip pim rpf
show ip pim upstream
show ip pim upstream-join-desired
show ip pim upstream-rpf
show ip route 192.168.4.60
Thanks,
Everton
On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek
Post by Yoda geek
Hi Everton,
I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still
don't see the join request coming into the source. Below is what the
configuration looks like now:
Node 1 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/08 05:03:23
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf
-------------------------
!
! Zebra configuration saved from vty
! 2009/08/09 22:38:12
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface br-lan
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip pim ssm
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
Post by Everton Marques
Hi,
Yes, pimd should route the join request towards the source.
However, you need to enable "ip pim ssm" on ra_ap0 as well.
If you enable only "ip igmp" on an interface, pimd won't inject
IGMP-learnt membership into the pim protocol.
Cheers,
Everton
On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
Post by Yoda geek
Hi Everton,
Thanks for the suggestions. I made the changes to the config files on
both nodes as you suggested. Since it is not possible for me to force
the client to do a source-specific join, I added the following line at
interface ra_ap0:
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
I do see the source-specific IGMPv3 join for group 239.255.255.250 with
source 192.168.4.60, addressed to 224.0.0.22, on the side of node2.
However, this join request never makes it to node 1, where the source
is located on ra_ap0.
Shouldn't pimd route this join request to the node where the source is
attached?
Thanks,
On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
Post by Everton Marques
Hi,
You did not mention whether you got a source-specific IGMPv3 join to
the channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd
is unable to program the multicast forwarding cache with
non-source-specific groups. Usually the key issue is to instruct the
receiver application to join the source-specific channel (S,G).
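
For example, on Linux the receiver application can request the (S,G)
channel directly with IP_ADD_SOURCE_MEMBERSHIP. A minimal sketch (the
UDP port and the receiver interface address 192.168.5.20 are only
placeholders for illustration):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) { perror("socket"); return 1; }

  struct sockaddr_in local;
  memset(&local, 0, sizeof(local));
  local.sin_family = AF_INET;
  local.sin_addr.s_addr = htonl(INADDR_ANY);
  local.sin_port = htons(5004); /* placeholder port */
  if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
    perror("bind"); return 1;
  }

  /* Source-specific join for (S,G) = (192.168.4.60, 239.255.255.250).
     This makes the kernel emit an IGMPv3 source-specific report, which
     is what qpimd needs to see on the receiver-facing interface. */
  struct ip_mreq_source mreq;
  memset(&mreq, 0, sizeof(mreq));
  inet_pton(AF_INET, "239.255.255.250", &mreq.imr_multiaddr);
  inet_pton(AF_INET, "192.168.4.60", &mreq.imr_sourceaddr);
  inet_pton(AF_INET, "192.168.5.20", &mreq.imr_interface); /* placeholder */
  if (setsockopt(fd, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                 &mreq, sizeof(mreq)) < 0) {
    perror("IP_ADD_SOURCE_MEMBERSHIP"); return 1;
  }

  /* ... recvfrom(fd, ...) the multicast stream here ... */
  close(fd);
  return 0;
}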
1) Enable "ip pim ssm" everywhere (on every interface
that
should
pass
mcast).
2) Enable both "ip pim ssm" and "ip igmp" on interfaces
attached
to
the receivers (IGMPv3 hosts).
An even simpler config rule to remember is to enable
both
commands
everywhere. They should not cause any harm.
Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think
you will need:
!
! Node 1
!
interface ra_ap0
ip pim ssm
interface ra_sta0
ip pim ssm
!
! Node 2
!
interface ra_ap0
ip pim ssm
ip igmp
interface ra_sta0
ip pim ssm
Hope this helps,
Everton
On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek
Hi Everton & Fellow qpimd users,
We're trying to stream multicast video traffic between a Tversity
server and a multicast client separated by 2 nodes (node1 and node2).
Each node is running the quagga suite (version 0.99.15) along with
qpimd (version 0.158) on top of Linux 2.6.26.
Node 1 has 3 network interfaces: eth0, ra_ap0 and ra_sta0
Node 2 has 2 network interfaces: ra_sta0 and ra_ap0
The Tversity server talks to interface ra_ap0 on Node 1 and the
multicast client talks to interface ra_ap0 on Node 2.
Nodes 1 and 2 talk with each other over their ra_sta0 interfaces:

Tversity server --ra_ap0--> Node 1 --ra_sta0-----------ra_sta0--> Node 2 --ra_ap0--> Video Client
Node 1 pimd.conf file
==================
!
! Zebra configuration saved from vty
! 2009/08/01 20:26:06
!
hostname node1
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip pim ssm
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!
Node 2 pimd.conf configuration file
============================
!
! Zebra configuration saved from vty
! 2009/08/02 21:54:14
!
hostname node2
password zebra
enable password zebra
log stdout
!
interface eth0
!
interface eth1
!
interface lo
!
interface ra_ap0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
ip igmp join 239.255.255.250 192.168.4.60
!
interface ra_sta0
ip igmp
ip igmp query-interval 125
ip igmp query-max-response-time-dsec 100
!
!
ip multicast-routing
!
line vty
!