


Deployment Considerations
A successful multicast deployment requires more forethought than a traditional unicast deployment. This section discusses some subjects to consider before deploying TIBCO Enterprise Message Service with multicast.
Issues in multicast deployment can be separated into three areas: ensuring multicast connectivity, restricting multicast traffic, and managing bandwidth. These can be represented with three basic questions:
1. Can multicast data flow from the senders to every receiver that needs it?
2. Can multicast traffic be kept off the parts of the network where it is not wanted?
3. Is enough bandwidth available, and how should it be allocated?
Connectivity
Like unicast applications, multicast applications require that the network layer provide a path for multicast data to flow from senders to receivers. However, routers and switches may require additional configuration and tuning for multicast use. The first step in ensuring and limiting connectivity is defining channels and assigning multicast group addresses to these channels.
Multicast Addresses
Each multicast channel, defined in the channels.conf configuration file, is assigned a multicast address. TIBCO Enterprise Message Service allows you to assign any valid multicast address in the class D address range, 224.0.0.0 through 239.255.255.255. However, to avoid conflicts, refer to the Internet Assigned Numbers Authority (IANA) list of reserved addresses:
http://www.iana.org/assignments/multicast-addresses
When assigning addresses to your channels, keep these additional considerations in mind:
Multicast addresses 224.0.1.78 and 224.0.1.79 are reserved by TIBCO EMS for internal use. Do not assign these addresses to your channels, as other TIBCO multicast traffic may already be present on them.
Ideally, you should select multicast addresses from 239.0.0.0 to 239.255.255.255. These have been set aside as an administratively scoped block, and IANA will never reserve these. They can be freely used within your enterprise without worry of any external conflict.
There is not a one-to-one mapping between multicast IP addresses and MAC addresses; for this reason you should avoid x.0.0.x addresses, because they map to the MAC addresses of reserved groups and may not work. The class D IP address range assigned to multicasting is 28 bits wide, but the range of MAC addresses assigned to multicast is only 23 bits wide. Because only the 23 low-order bits of the IP address are used to form the MAC address, overlaps result. For example, the multicast address 239.0.0.1 maps to the same MAC address as the reserved 224.0.0.1.
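To see why these overlaps occur, the following sketch (an illustration only, not part of the EMS product) computes the Ethernet multicast MAC address that corresponds to an IP multicast address by combining the fixed 01:00:5e prefix with the low-order 23 bits of the IP address:

    import ipaddress

    def multicast_mac(ip: str) -> str:
        """Map an IPv4 multicast address to its Ethernet multicast MAC address.

        Only the low-order 23 bits of the IP address are carried into the MAC,
        so 32 different multicast IP addresses share each MAC address.
        """
        addr = int(ipaddress.IPv4Address(ip))
        low23 = addr & 0x7FFFFF                      # keep the low-order 23 bits
        return "01:00:5e:%02x:%02x:%02x" % (
            (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

    # 239.0.0.1 and the reserved 224.0.0.1 collide at the MAC layer:
    print(multicast_mac("239.0.0.1"))   # 01:00:5e:00:00:01
    print(multicast_mac("224.0.0.1"))   # 01:00:5e:00:00:01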
Defining Channels
TIBCO Enterprise Message Service does not restrict the number of channels that you can configure and use in the EMS server or the multicast daemon. However, the number of IP multicast group addresses that can be joined by any one host at one time may be constrained by outside factors. Often, the number is limited by the NIC, and typically this limitation is not specified in the NIC documentation.
Experimentation is often the only way to determine what the limit is for a specific NIC and OS. With some NICs, joining too many groups will set the card to "promiscuous mode" which will adversely affect performance.
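One way to run such an experiment is to join test multicast groups one at a time until a join fails. A minimal sketch using standard sockets is shown below; note that the operating system imposes its own membership limits as well, so the result reflects the combination of OS and NIC, and the 239.255.x.y addresses are arbitrary administratively scoped test values. Whether the NIC has fallen back to promiscuous mode must be checked separately with OS or driver tools.

    import socket
    import struct

    def count_joinable_groups(interface_ip="0.0.0.0", max_groups=256):
        """Join test multicast groups (one socket each) and report how many succeed."""
        sockets = []          # keep sockets open so the memberships stay active
        joined = 0
        try:
            for i in range(max_groups):
                group = "239.255.%d.%d" % (1 + i // 254, 1 + i % 254)
                mreq = struct.pack("4s4s",
                                   socket.inet_aton(group),
                                   socket.inet_aton(interface_ip))
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                try:
                    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
                    sockets.append(s)
                    joined += 1
                except OSError:
                    s.close()
                    break
        finally:
            for s in sockets:
                s.close()
        return joined

    if __name__ == "__main__":
        print("Joined %d multicast groups before the first failure"
              % count_joinable_groups())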
It is also important to note that, because a channel represents both an IP multicast group address and a destination port, there is not necessarily a one-to-one correlation between a channel and a multicast group. A group is joined when a multicast daemon listens to an IP multicast group address. For example, if you have 10 multicast channels all using the same multicast group address but different ports, a multicast daemon joins at most one group. However, if the 10 channels all use different multicast group addresses, a multicast daemon may join up to 10 groups.
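As a small illustration of this point, the sketch below uses made-up channel definitions (not taken from an actual channels.conf) and counts how many distinct groups a daemon would join for a given set of group:port pairs:

    # Hypothetical channel definitions: each entry is "multicast-group:port".
    channels = [
        "239.1.1.1:7801", "239.1.1.1:7802", "239.1.1.1:7803",   # same group, different ports
        "239.1.1.2:7801", "239.1.1.3:7801",                     # different groups
    ]

    # A daemon joins one IP multicast group per distinct address,
    # regardless of how many ports (channels) use that address.
    groups_joined = {entry.split(":")[0] for entry in channels}

    print("channels configured:", len(channels))       # 5
    print("groups joined:", len(groups_joined))        # 3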
The multicast IP address and port combinations that you choose should only be used with TIBCO EMS. While the TIBCO Multicast Daemon can filter out corrupt network data, receiving data packets that are not specific to EMS can yield unpredictable results, which could destabilize your network.
Ensuring Multicast Connectivity
As stated earlier, multicast applications require that the network layer provide a path for multicast data to flow from senders to receivers. By default, most routers and switches have multicast routing disabled and require additional configuration to enable it. If you experience connectivity problems, this is the first place to check.
For example, with Cisco routers you must use the ip multicast-routing command to enable multicast routing. Multicast hardware configuration falls outside the scope of this document; consult your network administrator or the TIBCO Professional Services Group for configuration specific to your network and enterprise.
Restricting Multicast Traffic
Multicast deployment often also involves making sure that multicast streams do not go where they are unwanted, especially when high-bandwidth streams are present on a network that also includes some low-bandwidth links, or where access must be controlled at the network layer for security reasons.
Within a LAN, Ethernet switches can direct unicast traffic only to ports where it is wanted. Typically, because routers and switches do not enable multicast packet forwarding by default, restricting multicast traffic is not an issue. However, one must be cognizant of this issue when planning a multicast deployment.
Managing Bandwidth
This section discusses bandwidth considerations that are specific to multicast deployments. There are three main aspects to bandwidth:
Determining Available Bandwidth — determine your available bandwidth and set bandwidth limitations to maximize performance.
Dividing Bandwidth Among Channels — create channels that make the best use of available bandwidth.
Handling Slow Applications — manage a small number of slow applications so that they do not slow the entire multicast network.
Determining Available Bandwidth
Reliable unicast transports, such as TCP, automatically share available network bandwidth among all sessions contending for it. Administrators play no role in this process; the available bandwidth is dynamically determined by the protocol stacks as they measure the round-trip time and packet loss rates. This process is called congestion control. It assumes that all streams have equal priority and it automatically divides bandwidth accordingly.
In contrast, multicast relies on the administrator to ensure that the amount of bandwidth the network delivers is reserved or available. In TIBCO Enterprise Message Service, the administrator allocates network bandwidth for each multicast channel using the maxrate configuration parameter (see channels.conf on page 232). Correctly allocating bandwidth prevents the application from experiencing congestion.
Congestion can cause packet loss, which can in turn cause erratic behavior or even application failure. This is another significant difference between multicast and unicast; with unicast, congestion causes applications to run more slowly, but will not cause them to fail.
You must carefully consider and limit how fast you send, because TIBCO Enterprise Message Service does not impose bandwidth limitations on its own. If you try to send faster than the network can actually deliver the data, you will see substantially lower throughput than if you had asked for slightly less bandwidth than the network can deliver.
It is somewhat paradoxical, but if you ask the EMS server to deliver 900 Mbps over a network layer that can deliver 1 Gbps, it will. If you ask it to deliver more than 1 Gbps over a 1 Gbps network layer, you could get as little as 400 Mbps. What will most likely occur is chaotic behavior based on loss rates and other factors.
This leads to an unusual rule: if throughput is too low, try asking for less—there is a chance you may get more. It is important to perform this test even if your throughput is still well below "wire speed." That is because loss due to congestion can come from many sources other than the wire speed limit, such as TCP data on the same network. It is a simple test and if the results show that actual throughput goes up as the amount of bandwidth requested goes down, it is a very strong sign that there is loss due to congestion somewhere in your network, between the sender and receivers.
Restrict multicast traffic to a rate a little below the maximum capacity of your network. If your throughput rate is slower than expected, restrict the rate further. You may find that throughput actually increases.
You can think of the bandwidth rate specified for a channel as a delivery promise that the network layer makes to EMS. If the network layer breaks that promise, EMS multicast throughput falls to a rate substantially below what the network can actually deliver.
Dividing Bandwidth Among Channels
Ideally, a deployment within a set of routed subnets, or a VLAN, should consist of hosts whose network interfaces all operate at the same speed, even if the interfaces themselves differ. Deployments that do not adhere to this are not recommended, because loss can be introduced when the receiving interfaces are slower than the link and the sending interface: the slower interfaces cannot handle bursts of data on a faster network. We also do not recommend using EMS multicast over WAN links.
For example, if some clients have 100Mb NICs and others have 1Gb NICs, the recommended architecture is to send from a 100Mb NIC to the slower receivers and from a 1Gb NIC to the faster receivers. You can accomplish this by configuring two multicast channels, one for the faster senders and receivers and one for the slower senders and receivers.
Alternatively, you can configure one channel and limit the bandwidth to that of the slowest receiver, or 100Mb. However, the best solution is to use a multi-homed machine, separate the applications by defining a different channel for each of the two interfaces, and allow each channel to operate at its optimum speed.
For example, these two channel configurations are optimized for a 100Mb NIC and a 1Gb NIC:
--- channels.conf ---
[channel_100mb]
    address   = 239.1.1.1:10
    maxrate   = 7MB
    interface = 10.99.99.99

[channel_1Gb]
    address   = 239.1.1.2:10
    maxrate   = 95MB
    interface = 10.99.99.100
Applications running on 100Mb machines would use topics with channel_100mb assigned to them, and applications on machines with 1Gb NICs would use topics with channel_1Gb assigned. Also note that some bandwidth has been left for other TCP data, as suggested in Determining Available Bandwidth.
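The headroom in the example above can be checked with a little arithmetic. The sketch below converts the NIC line rates to megabytes per second and shows roughly how much capacity each maxrate setting leaves for TCP and other traffic; the figures are approximations that ignore protocol overhead.

    def headroom(line_rate_mbps, maxrate_mb_per_sec):
        """Return the fraction of the raw line rate left over after maxrate.

        line_rate_mbps is the NIC speed in megabits/s; maxrate_mb_per_sec is
        the channel maxrate in megabytes/s. Protocol overhead is ignored.
        """
        line_rate_mb = line_rate_mbps / 8.0          # megabits/s -> megabytes/s
        return 1.0 - (maxrate_mb_per_sec / line_rate_mb)

    # channel_100mb: 100 Mb/s NIC is about 12.5 MB/s, maxrate 7MB -> ~44% headroom
    print("channel_100mb headroom: %.0f%%" % (100 * headroom(100, 7)))
    # channel_1Gb: 1000 Mb/s NIC is about 125 MB/s, maxrate 95MB -> ~24% headroom
    print("channel_1Gb headroom: %.0f%%" % (100 * headroom(1000, 95)))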
Handling Slow Applications
If you have a small number of applications or hosts that are known to be slow, or that are on a WAN, but that need to subscribe to the data on a multicast-enabled topic, we recommend disabling EMS multicast in those applications. You can disable multicast in a client through API calls; see the API documentation for your language.
A slow application then receives messages from the server over TCP, effectively removing it from the multicast stream and preventing it from congesting and slowing down other multicast receivers. It is very important to account for the TCP bandwidth used by such applications in your multicast bandwidth calculations.
If an EMS client with multicast disabled subscribes to a topic that is multicast-enabled, messages will be delivered to the client over TCP. Take this TCP traffic into consideration when setting your bandwidth limitations, as described in Determining Available Bandwidth.
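One rough way to budget for this extra TCP traffic is to multiply the topic's message rate by the average message size and the number of TCP-only subscribers. The sketch below uses purely illustrative figures; substitute your own message rates and sizes.

    def tcp_fallback_mbps(msgs_per_sec, avg_msg_bytes, tcp_subscribers):
        """Rough extra TCP bandwidth (megabits/s) used by non-multicast subscribers.

        Unlike multicast, each TCP subscriber receives its own copy of every
        message, so the cost scales with the subscriber count. Protocol and
        retransmission overhead are ignored.
        """
        return msgs_per_sec * avg_msg_bytes * 8 * tcp_subscribers / 1_000_000

    # Illustrative figures: 5,000 msgs/s of 1 KB messages, 3 slow TCP subscribers.
    print("extra TCP load: %.0f Mb/s" % tcp_fallback_mbps(5000, 1024, 3))   # ~123 Mb/s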
