UM Router Configuration Overview <-
When the UM Router daemon launches, it uses configuration option settings to determine its behavior and expectations. You specify option values in an XML configuration file and pass that file to the daemon as a command-line argument.
Typically, you have a separate XML configuration file for each UM Router, containing structured configuration elements that describe aspects of that UM Router. Within this XML configuration file, each endpoint portal definition points to a UM configuration file, which allows the portal to properly connect to its TRD.
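In outline, a UM Router XML configuration file has the following shape (this is only a skeleton; complete, working examples appear later in this document):
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
  <daemon>
    <log type="console"/>
  </daemon>
  <portals>
    <endpoint>
      <name>G1-TRD1</name>
      <domain-id>1</domain-id>
      <lbm-config>trd1.cfg</lbm-config>
    </endpoint>
    <!-- additional <endpoint> or <peer> portals -->
  </portals>
</tnw-gateway>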
Creating Applications for UM Router Compatibility <-
When developing messaging applications that use Ultra Messaging and, in particular, the UM Router, please observe the following guidelines.
Naming and Identification <-
An important part of successfully implementing UM Routers is prudent and error-free naming of TRDs, UM Routers, portals, etc., as well as correct identification of IP addresses and ports. It is good practice to first design the UM Router network by defining all connections and uniquely naming all UM Routers, portals, and TRDs. This works well as a diagram similar to some examples presented in this document. Include the following names and parameters in your design diagram:
- TRD names and IDs
- UM Router names
- Portal names
- Portal costs
For example, a well-prepared UM Router design could look like the following figure.
Portal Costs <-
A network of UM Routers uses forwarding cost as the criterion for determining the best (lowest cost) path over which to resolve a topic and route data. Forwarding cost is simply the sum of all portal costs along a multi-UM Router path. Thus, total cost for the single path in the above example is 34. (Note that this is a non-real-world example, since costs are pointless without alternate routes to compare to.) You assign portal costs via the <cost> configuration option.
After the UM Router network calculates its paths, if a new lower-cost source becomes available, receivers switch to that path.
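For example, the following fragment (drawn from the Parallel Links example later in this document) assigns a cost of 2 to an endpoint portal:
<endpoint>
  <name>G1-TRD1</name>
  <domain-id>1</domain-id>
  <cost>2</cost>
  <lbm-config>TRD1.cfg</lbm-config>
</endpoint>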
Access Control Lists (ACL) <-
You can apply Access Control Lists (ACLs) to a UM Router's portals to filter traffic by certain topics, transports, topic patterns, multicast groups, etc. You configure ACLs in a UM Router's XML configuration file, as children of an <endpoint> or <peer> portal. As traffic arrives at the portal, the portal either forwards or rejects it per the ACL criteria.
Inbound ACLs determine what information the portal forwards to other portals within the UM Router, while outbound ACLs determine (by topic) what information from other portals this portal can send out of the UM Router. Each portal (endpoint or peer) can have at most one inbound ACL and one outbound ACL.
An ACL can contain one or more Access Control Entries (ACEs). ACEs are the filters that let you match, and accept or reject based on, condition elements. For example, to accept only messages for topic ABC:
<acl>
<inbound>
<ace match="accept">
<topic>ABC</topic>
</ace>
</inbound>
</acl>
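An outbound ACL has the same structure, with its ACEs placed inside an <outbound> element instead. For example, a sketch (the topic name XYZ is illustrative) that allows this portal to send out only topic XYZ:
<acl>
  <outbound>
    <ace match="accept">
      <topic>XYZ</topic>
    </ace>
  </outbound>
</acl>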
Possible ACE condition elements are:
- <multicast-group/> *
- <pcre-pattern> (PCRE wildcard patterns)
- <regex-pattern> (Regex wildcard patterns)
- <source-ip/> *
- <tcp-source-port/> *
- <topic>
- <transport/> *
- <udp-destination-port/> *
- <udp-source-port/> *
- <xport-id/> * (for LBT-IPC traffic)
Items marked with an asterisk (*) apply only to inbound ACLs, and are ignored if used with an outbound ACL.
The above elements are all children of the <ace> element. When an ACL has multiple ACEs, the UM Router goes down the list until it finds a match. It then accepts (forwards) or rejects, and is done with that ACL. An implicit "reject all" is at the end of every ACL, so the UM Router rejects any topic not matched. If you place multiple conditions within an ACE, the UM Router performs a logical "and" of them.
Note that the portal ignores a condition element if a) it is inbound-only and used in an outbound ACL, or b) it simply does not apply (such as a <udp-source-port/> if the transport is TCP).
Also note that ACLs can affect topic resolution traffic as well as user messages. They can, for example, block a topic (which prevents the creation of proxy receivers) and, thus, protect remote TRDs from unwanted queries and advertisements. This effect does not apply to wildcard receivers, however, because ACLs match only on discrete topics. Thus, while ACLs can operate on specific topic traffic derived from wildcard topic resolution, they cannot prevent pattern interest from propagating throughout the network.
Consider the following example, where we configure a portal to forward on specific topics. This example also illustrates the parent/child hierarchy for ACLs, ACEs, and ACE conditions.
<endpoint>
<name>LAN1</name>
<lbm-config>lan1.cfg</lbm-config>
<domain-id>1</domain-id>
<acl>
<inbound>
<ace match="accept">
<topic>ABC</topic>
</ace>
<ace match="accept">
<topic>DEF</topic>
<transport value="lbt-rm" comparison="eq"/>
</ace>
<ace match="accept">
<topic>GHI</topic>
</ace>
</inbound>
</acl>
</endpoint>
The above example shows each topic match in a separate ACE. When topic "GHI" arrives, the portal finds a match in the third ACE and forwards the topic. (Placing all three <topic> elements in a single ACE would never match anything, since the ACE would "and" the three conditions together.) Also note that "DEF" is forwarded only if it uses an LBT-RM transport.
Since the behavior for multiple ACEs is "first match, then done", list ACEs in a specific-to-general order. In the example below, to forward topic "ABC123" but reject similar topics such as "ABCD123" or "ABCE123", the ACE for "ABC123" is listed first. If the ACE rejecting "ABC.*123" were listed first, it would also (undesirably) match and reject "ABC123".
<endpoint>
<name>LAN1</name>
<lbm-config>lan1.cfg</lbm-config>
<domain-id>1</domain-id>
<acl>
<inbound>
<ace match="accept">
<topic>ABC123</topic>
</ace>
<ace match="reject">
<pcre-pattern>ABC.*123</pcre-pattern>
</ace>
</inbound>
</acl>
</endpoint>
You can also filter on certain transport types, for example to accept multicast traffic but reject TCP traffic, as shown below.
<endpoint>
<name>LAN1</name>
<lbm-config>lan1.cfg</lbm-config>
<domain-id>1</domain-id>
<acl>
<inbound>
<ace match="accept">
<transport comparison="equal" value="lbtrm"/>
</ace>
<ace match="reject">
<transport comparison="equal" value="tcp"/>
</ace>
</inbound>
</acl>
</endpoint>
Timers and Intervals <-
The UM Router offers a wide choice of timer and interval options to fine tune its behavior and performance. There are interactions and dependencies between some of these, and if misconfigured, they may cause race or failure conditions.
This manual's descriptions of configuration options (see XML Configuration Reference) identify such relationships. Please heed them.
Multicast Immediate Messaging Considerations <-
Multicast Immediate Messages (MIMs) may pass through the UM Router. You cannot filter MIMs with Access Control Lists (ACLs); MIMs are forwarded to all TRDs. Informatica does not recommend using MIM for messaging traffic across the UM Router. MIM is intended for short-lived topics and applications that cannot tolerate a delay between source creation and the sending of the first message. See also Multicast Immediate Messaging.
Persistence Over the UM Router <-
The UM Router supports UMP persistence by routing all necessary control and retransmission channels along with transport and topic resolution traffic. A typical implementation places the UMP persistent store in the same TRD as its registered source, as shown in the following figure.
The UM Router also supports UMP implementations with the store located in a receiver's TRD, as shown in the following figure.
Note: For more reliable operation when using UMP with UM Routers, Informatica recommends enabling OTR.
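For example, a plain text UM configuration fragment that enables OTR on receivers might look like the following sketch (shown as a receiver-scope option; consult the UM configuration reference for the full set of values):
receiver use_otr 1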
Late Join and Off-Transport Recovery <-
The UM Router supports sources and receivers configured for Late Join and/or Off-Transport Recovery (OTR). Retransmission requests and subsequent retransmissions are conducted across the entire path through the UM Router network. A UM Router's proxy sources do not have Late-Join/OTR retention buffers and hence, are not able to provide recovered messages.
Topic Resolution Reliability <-
Topic resolution can sometimes remain in a quiescent phase due to link interruption, preventing needed re-subscription topic resolution activity. Two ways you can address this are:
BOS and EOS Behavior Over the UM Router <-
Through a network of UM Routers, a topic traverses a separate session for each link along its path. Thus, the UM Router reports BOS/EOSs based on the activity between the proxy source transport and its associated receiver. There is no end-to-end, application-to-application reporting of the data path state. Also, in the case of multiple topics being assigned to multiple sessions, topics may find themselves with different session mates from hop to hop. Of course, this all influences when, and for which transport session, a topic's BOSs and EOSs are issued.
UM Router Reliable Loss <-
The UM Router can create a situation where a "reliable" transport (TCP or LBT-IPC) can experience out-of-order message delivery.
The UM Router can perform a "protocol conversion" function. That is, an originating source can use a UDP-based protocol (LBT-RM or LBT-RU), while the proxy source for a remote receiver uses a "reliable" protocol (TCP or LBT-IPC). With a UDP-based protocol, messages can arrive at the UM Router network out of order, usually due to packet loss and recovery. However, when those out-of-order messages are forwarded across a "reliable" protocol (TCP or LBT-IPC), the receiver does not expect the sequence number gap, and immediately declares the out-of-order messages as unrecoverable loss, even though the missing message arrives shortly thereafter.
Starting in UM version 6.12, there are two new configuration options, transport_tcp_dro_loss_recovery_timeout (receiver) and transport_lbtipc_dro_loss_recovery_timeout (receiver), which modify the receiver's behavior. Instead of declaring a gap immediately unrecoverable, a delay is introduced, similar to the delay a UDP-based receiver uses to wait for lost and retransmitted datagrams. If the missing message arrives within the delay time, the messages are delivered to the application without loss.
Be aware that this functionality is only used with "reliable" protocols published by a UM Router's proxy source. If this delay feature is enabled, it will not apply to a "reliable" protocol that is received directly from the originating source.
Note however that you can get genuine gaps in the "reliable" data stream without recovery. For example, an overloaded UM Router can drop messages. Or a UM Router's proxy receiver can experience unrecoverable loss. In that case, the delay will have to expire before the missing messages are declared unrecoverable and subsequent data is delivered.
Attention: The delay times default to 0, which retains the pre-6.12 behavior of immediately declaring sequence number gaps unrecoverable. If you want this new behavior, you must configure the appropriate option.
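For example, a plain text UM configuration fragment enabling a recovery window for both protocols might look like the following sketch (the 500 millisecond value is illustrative, not a recommendation):
receiver transport_tcp_dro_loss_recovery_timeout 500
receiver transport_lbtipc_dro_loss_recovery_timeout 500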
Topology Configuration Examples <-
Following are example configurations for a variety of UM Router topologies. These are the topology examples presented in Routing Topologies.
In a real-world situation, you would have UM Router XML configuration files with their portal interfaces referencing complete UM configuration files. However, for these examples, the referenced domain configuration files are simplified to contain only information relevant to the applicable UM Router. As part of this simplification, domain configuration files show interfaces for only one or two transport types.
Also, IP addresses are provided in some cases and omitted in other cases. This is because initiator peer portals need to know the IP addresses (and port numbers) of their corresponding acceptor portals to establish connections, whereas endpoint portals communicate via topic resolution and thus, do not need to know IP addresses.
Note: Before designing any UM Router implementations based on configurations or examples other than the types presented in this document, please contact your technical support representative.
Direct Link Configuration <-
This example uses a UM Router to connect two LAN-based topic resolution domains.
TRD1 Configuration
This UM configuration file, trd1.cfg, describes TRD1 and is referenced in the UM Router configuration file.
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
This UM Router configuration file defines two endpoint portals. In the daemon section, we have turned on monitoring for all endpoint portals in the UM Router. The configuration specifies that all statistics be collected every 5 seconds, and uses the lbm transport module to send statistics to your monitoring application, which runs in TRD1. See also UM Concepts, Monitoring UMS. The Web Monitor has also been turned on (port 15304) to monitor the performance of the UM Router.
<?xml version="1.0" encoding="UTF-8" ?>
<!-- G1 xml file- 2 endpoint portals -->
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<lbm-license-file>lic0014.txt</lbm-license-file>
<monitor interval="5">
<transport-module module="lbm" options="config=trd1.cfg"/>
</monitor>
<web-monitor>*:15304</web-monitor>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>trd1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>trd2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD2 Configuration
The configuration file trd2.cfg could look something like this.
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
Peer Link Configuration <-
In cases where the UM Router connection between two TRDs must tunnel through a WAN or TCP/IP network, you can implement a UM Router at each end, as shown in the example below.
TRD1 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
Following is an example of two companion peer portals (on different UM Routers) configured via their UM Router XML configuration files for a single TCP setup. Note that one must be an initiator and the other an acceptor.
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G1-G2</name>
<single-tcp>
<interface>10.30.3.100</interface>
<initiator>
<address>10.30.3.102</address>
<port>26123</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
G2 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2-G1</name>
<single-tcp>
<interface>10.30.3.102</interface>
<acceptor>
<listen-port>26123</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD2 Configuration
## TRD2 Configuration Options ##
context request_tcp_interface 10.33.3.0/24
context resolver_multicast_port 13965
Transit TRD Link Configuration <-
This example, like the previous one, configures two localized UM Routers tunneling a connection between two TRDs; however, the UM Routers in this example tunnel through an intermediate TRD. This has the added effect of connecting three TRDs.
TRD1 Configuration
## TRD1 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
This UM Router configuration file defines two endpoint portals, connecting TRD1 to the transit domain, TRD2.
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD2 Configuration
## TRD2 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
G2 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G2-TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD3 Configuration
## TRD3 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85
Parallel Links Configuration <-
This example is similar in purpose to the single link, peer-to-peer example, except that a second pair of UM Routers is added as a backup route. You can set one of these as a secondary route by assigning a higher cost to portals along the path. In this case we set G3 and G4's portal costs to 5, forcing the lower route to be selected only if the upper (G1, G2) route fails.
Also note that we have configured the peer portals for the leftmost or odd-numbered UM Routers as initiators, and the rightmost or even-numbered UM Router peers as acceptors.
TRD1 Configuration
## TRD1 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<cost>2</cost>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G1-G2</name>
<cost>2</cost>
<single-tcp>
<interface>10.30.3.101</interface>
<initiator>
<address>10.30.3.102</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
G2 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2-G1</name>
<cost>2</cost>
<single-tcp>
<interface>10.30.3.102</interface>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<cost>2</cost>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
G3 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G3-TRD1</name>
<domain-id>1</domain-id>
<cost>5</cost>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G3-G4</name>
<cost>5</cost>
<single-tcp>
<interface>10.30.3.103</interface>
<initiator>
<address>10.30.3.104</address>
<port>23746</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
G4 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4-G3</name>
<cost>5</cost>
<single-tcp>
<interface>10.30.3.104</interface>
<acceptor>
<listen-port>23746</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G4-TRD2</name>
<domain-id>2</domain-id>
<cost>5</cost>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD2 Configuration
## TRD2 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
Loop and Spur Configuration <-
TRD1 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G1_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.27</address>
<port>23801</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G1_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.26</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
G2 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.28</address>
<port>23632</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G2_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD2 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
TRD3 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85
G3 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G3_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.28</address>
<port>23754</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G3_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23801</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G3_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
G4 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4_to_G3</name>
<single-tcp>
<acceptor>
<listen-port>23754</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G4_to_TRD4</name>
<domain-id>4</domain-id>
<lbm-config>TRD4.cfg</lbm-config>
</endpoint>
<peer>
<name>G4_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23632</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G4_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.29</address>
<port>23739</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
TRD4 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.4.37.85
G5 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G5_to_TRD5</name>
<domain-id>5</domain-id>
<lbm-config>TRD5.cfg</lbm-config>
</endpoint>
<peer>
<name>G5_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23739</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
TRD5 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.5.37.85
Star Configuration <-
This network consists of four TRDs. Within each TRD, full multicast connectivity exists. However, no multicast connectivity exists between the four TRDs.
G1 Configuration
The configuration for this UM Router also has transport statistics monitoring and the Web Monitor turned on.
<?xml version="1.0" encoding="UTF-8" ?>
<!-- UM GW xml file - 4 endpoint portals -->
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<lbm-license-file>lic0014.txt</lbm-license-file>
<monitor interval="5">
<transport-module module="lbm" options="config=trd1.cfg"/>
</monitor>
<web-monitor>*:15304</web-monitor>
</daemon>
<portals>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>trd1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>trd2.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>trd3.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD4</name>
<domain-id>4</domain-id>
<lbm-config>trd4.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
TRD1 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
TRD2 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
TRD3 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85
TRD4 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.4.37.85
Mesh Configuration <-
The mesh topology utilizes many connections between many nodes, to provide a variety of alternate routes. However, meshes are not the best solution in many cases, as unneeded complexity can increase the chance for configuration errors or make it more difficult to trace problems.
TRD1 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85
G1 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G1_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23880</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G1_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.104</address>
<port>23801</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
<peer>
<name>G1_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.102</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
G2 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23608</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G2_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23831</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G2_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G2_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.103</address>
<port>23632</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G2_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>
G3 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G3_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23739</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G3_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23754</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G3_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23632</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
TRD2 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85
TRD3 Configuration
## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85
G4 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23580</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G4_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G4_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
<peer>
<name>G4_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23801</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G4_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.103</address>
<port>23754</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G4_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.102</address>
<port>23831</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
G5 Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G5_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23580</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23880</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G3</name>
<single-tcp>
<acceptor>
<listen-port>23739</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23608</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>
Using UM Configuration Files with the UM Router <-
Within the UM Router configuration file, the endpoint portal's <lbm-config> element lets you import configurations from either a plain text or XML UM configuration file. However, XML UM configuration files provide the following advantages over plain text UM configuration files:
- You can apply UM attributes per topic and/or per context.
- You can apply attributes to all portals on a particular UM Router using a UM XML template (instead of individual portal settings).
- Using UM XML templates to set options for individual portals lets the UM Router process these settings in the <daemon> element instead of within each portal's configuration.
Setting Individual Endpoint Options <-
When setting endpoint options, first name the context of each endpoint in the UM Router's XML configuration file.
<portals>
<endpoint>
<name>Endpoint_1</name>
<domain-id>1</domain-id>
<source-context-name>G1_E1</source-context-name>
<lbm-attributes>
<option name="request_tcp_interface" scope="context" value="10.29.4.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<receiver-context-name>G1_E2</receiver-context-name>
<lbm-attributes>
<option name="request_tcp_interface" scope="context" value="10.29.5.0/24" />
</lbm-attributes>
</endpoint>
</portals>
Then assign configuration templates to those contexts in the UM XML configuration file.
<application name="dro1" template="global">
<contexts>
<context name="G1_E1" template="G1-E1-options">
<sources />
</context>
<context name="G1_E2" template="G1-E2-options">
<sources />
</context>
</contexts>
</application>
You specify the unique options for each of this UM Router's two endpoints in the UM XML configuration <templates> section, in the templates named G1-E1-options and G1-E2-options.
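For illustration, that <templates> section might look like the following sketch (the option names and port ranges are placeholders, borrowed from the sample configuration later in this section):
<templates>
  <template name="G1-E1-options">
    <options type="context">
      <option name="transport_tcp_port_low" default-value="4400" />
      <option name="transport_tcp_port_high" default-value="4500" />
    </options>
  </template>
  <template name="G1-E2-options">
    <options type="context">
      <option name="transport_tcp_port_low" default-value="4600" />
      <option name="transport_tcp_port_high" default-value="4700" />
    </options>
  </template>
</templates>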
UM Router and UM XML Configuration Use Cases <-
One advantage of using UM XML configuration files with the UM Router is the ability to assign unique UM attributes to the topics and contexts used for the proxy sources and receivers (which plain text UM configuration files cannot do). The following example shows how to assign a different LBTRM multicast address to a source based on its topic.
Create a new UM XML configuration template for the desired topic name.
<template name="AAA-template">
<options type="source">
<option name="transport_lbtrm_multicast_address"
default-value="225.2.37.88"/>
</options>
</template>
Then include this template in the <application> element associated with the UM Router.
<application name="dro1" template="global-options">
<contexts>
<context>
<sources template="source-options">
<topic topicname="AAA" template="AAA-template" />
</sources>
</context>
</contexts>
</application>
It is also possible to assign UM attributes directly in the <application> tag. For example, the following specifies that a particular topic should use an LBT-RU transport.
<application name="dro1" template="dro1-common">
<contexts>
<context>
<sources template="source-template">
<topic topicname="LBTRU_TOPIC">
<options type="source">
<option name="transport" default-value="lbtru" />
</options>
</topic>
</sources>
</context>
</contexts>
</application>
Sample Configuration <-
The following sample configuration incorporates many of the examples mentioned above. The UM Router applies options to all UM objects created. The UM XML configuration file overrides these options for two specific topics. The first topic, LBTRM_TOPIC, uses a different template to change its transport from TCP to LBT-RM, and to set an additional option. The second topic, LBTRU_TOPIC, also changes its transport from TCP, in this case to LBT-RU. However, its new attributes are applied directly in its associated topic tag, instead of referencing a template. In addition, this sample configuration assigns the rm-source template to all sources associated with the context endpt_1.
XML UM Configuration File <-
<?xml version="1.0" encoding="UTF-8" ?>
<um-configuration version="1.0">
<templates>
<template name="dro1-common">
<options type="source">
<option name="transport" default-value="tcp" />
</options>
<options type="context">
<option name="request_tcp_interface" default-value="10.29.5.6" />
<option name="transport_tcp_port_low" default-value="4400" />
<option name="transport_tcp_port_high" default-value="4500" />
<option name="resolver_multicast_address" default-value="225.2.37.88"/>
</options>
</template>
<template name="rm-source">
<options type="source">
<option name="transport" default-value="lbtrm" />
<option name="transport_lbtrm_multicast_address" default-value="225.2.37.89"/>
</options>
</template>
</templates>
<applications>
<application name="dro1" template="dro1-common">
<contexts>
<context>
<sources>
<topic topicname="LBTRM_TOPIC" template="rm-source" />
<topic topicname="LBTRU_TOPIC">
<options type="source">
<option name="transport" default-value="lbtru" />
<option name="resolver_unicast_daemon" default-value="10.29.5.1:1234" />
</options>
</topic>
</sources>
</context>
<context name="endpt_1">
<sources template="rm-source"/>
</context>
</contexts>
</application>
</applications>
</um-configuration>
XML UM Router Configuration File <-
This UM Router uses the above XML UM configuration file, sample-config.xml, to set its UM options. It has three endpoints, one of which has the context endpt_1.
<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<xml-config>sample-config.xml</xml-config>
</daemon>
<portals>
<endpoint>
<name>Endpoint_1</name>
<domain-id>1</domain-id>
<lbm-attributes>
<option name="context_name" scope="context" value="endpt_1" />
<option name="request_tcp_interface" scope="context"
value="10.29.4.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>Endpoint_2</name>
<domain-id>2</domain-id>
<lbm-attributes>
<option name="request_tcp_interface" scope="context"
value="10.29.5.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>Endpoint_3</name>
<domain-id>3</domain-id>
<lbm-attributes>
<option name="request_tcp_interface" scope="context"
value="10.29.6.0/24"/>
</lbm-attributes>
</endpoint>
</portals>
</tnw-gateway>
Running the UM Router Daemon <-
To run the UM Router, ensure the following:
- Library environment variable paths are set correctly (LD_LIBRARY_PATH).
- The license environment variable LBM_LICENSE_FILENAME points to a valid UM Router license file.
- The configuration file is error free.
Typically, you run the UM Router with one configuration file argument, for example:
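The invocation might look like the following (the configuration file name is illustrative):
tnwgd g1-config.xml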
(FYI: "tnwgd" stands for "Twenty Nine West Gateway Daemon", a historical name for the UM Router.)
The UM Router logs version information on startup. The following is an example of this information:
Version 6.0 Build: Sep 26 2012, 00:31:33 (UMS 6.0 [UMP-6.0] [UMQ-6.0] [64-bit] Build: Sep 26 2012, 00:27:17 ( DEBUG license LBT-RM LBT-RU LBT-IPC LBT-RDMA ) WC[PCRE 7.4 2007-09-21, regex, appcb] HRT[gettimeofday()])