Dynamic Routing Guide
DRO Implementation


DRO Configuration Overview  <-

When the DRO daemon launches, it uses configuration option settings to determine its behavior and expectations. You specify option values in an XML configuration file and reference that file as a command-line argument.

Typically, you have a separate XML configuration file for each DRO, containing structured configuration elements that describe aspects of that DRO. Within this XML configuration file, each endpoint portal definition points to a UM configuration file, which allows the portal to properly connect to its TRD.
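For example, a minimal DRO XML configuration might look like the following sketch (the file names, portal names, and domain IDs are illustrative only); each endpoint portal's <lbm-config> element references the UM configuration file for its TRD:

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
  <daemon>
    <log type="console"/>
  </daemon>
  <portals>
    <endpoint>
      <name>G1-TRD1</name>
      <domain-id>1</domain-id>
      <!-- UM configuration file that lets this portal join TRD1 -->
      <lbm-config>trd1.cfg</lbm-config>
    </endpoint>
    <endpoint>
      <name>G1-TRD2</name>
      <domain-id>2</domain-id>
      <!-- UM configuration file that lets this portal join TRD2 -->
      <lbm-config>trd2.cfg</lbm-config>
    </endpoint>
  </portals>
</tnw-gateway>

You would then launch the DRO with this file as its command-line argument, as shown in Running the DRO Daemon.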


Creating Applications for DRO Compatibility  <-

When developing messaging applications that use Ultra Messaging and, in particular, the DRO, please observe the following guidelines.


Naming and Identification  <-

An important part of successfully implementing DROs is prudent, error-free naming of TRDs, DROs, portals, etc., as well as correct identification of IP addresses and ports. It is good practice to first design the DRO network by defining all connections and uniquely naming all DROs, portals, and TRDs. This works well as a diagram similar to the examples presented in this document. Include the following names and parameters in your design diagram:

  • TRD names and IDs
  • DRO names
  • Portal names
  • Portal costs

For example, a well-prepared DRO design could look like the following figure.

design_diagram.png


Portal Costs  <-

A network of DROs uses forwarding cost as the criterion for determining the best (lowest cost) path over which to resolve a topic and route data. Forwarding cost is simply the sum of all portal costs along a multi-DRO path. Thus, total cost for the single path in the above example is 34. (Note that this is a non-real-world example, since costs are pointless without alternate routes to compare to.) You assign portal costs via the <cost> configuration option.

After the DRO network calculates its paths, if a new lower-cost source becomes available, receivers switch to that path.
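For example, suppose a topic can reach a remote TRD over two alternate paths: one whose portal costs sum to 8 and one whose portal costs sum to 20. The DRO network resolves the topic over the lower-cost (8) path and falls back to the other path only if the first becomes unavailable. You assign each portal's cost with a <cost> element in its <endpoint> or <peer> definition, as in this sketch (names and values are illustrative):

<endpoint>
  <name>G1-TRD1</name>
  <domain-id>1</domain-id>
  <!-- This portal contributes 2 to the total cost of any path through it. -->
  <cost>2</cost>
  <lbm-config>trd1.cfg</lbm-config>
</endpoint>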


Access Control Lists (ACL)  <-

In the DRO, an Access Control List (ACL) is a method of blocking traffic from being forwarded from one TRD to another.

Typical applications for this feature are:

  • Prevent unauthorized access to sensitive messages.
  • Prevent overloading of bandwidth-limited WAN links, even in the face of accidental use of overly-permissive wildcard receivers.
  • Limit the amount of Topic Resolution traffic for topics on TRDs that do not need those topics. (Note that the use of wildcard receivers can still generate TR traffic, even for topics that are blocked from being forwarded.)

You can apply Access Control Lists to one or more of a DRO's portals to filter traffic by topic, transport, transport session, etc. You configure an ACL in the <acl> element of a DRO's XML configuration, as a child of an <endpoint> or <peer> portal. As the DRO processes messages, the portals use the ACLs to decide whether to accept or reject the messages.

Inbound vs. Outbound

There are two types of ACLs: inbound and outbound.

acl.png

An inbound ACL tests messages from a source TRD on their way into a DRO portal, and decides whether to reject or accept them. If accepted, the messages can be forwarded to the appropriate destination portal(s).

An outbound ACL tests messages on their way out of a DRO portal, and decides whether to reject them, or transmit them to the destination TRD.

This distinction becomes especially important when a DRO has more than two portals. Messages rejected inbound cannot be forwarded at all, whereas rejecting messages outbound can allow them to be forwarded out some portals but not others.
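For example, consider a DRO with three endpoint portals where topic ABC should be kept out of TRD3 only. The following sketch (assuming an <outbound> element that parallels the <inbound> element shown later in this section) applies an outbound ACL to just that portal, so messages for ABC can still be forwarded out the other portals:

<endpoint>
  <name>G1-TRD3</name>
  <domain-id>3</domain-id>
  <lbm-config>trd3.cfg</lbm-config>
  <acl>
    <outbound>
      <!-- Block topic ABC from leaving this portal only. -->
      <ace match="reject">
        <topic>ABC</topic>
      </ace>
      <!-- Accept everything else (overrides the default reject-all). -->
      <ace match="accept">
        <pcre-pattern>.*</pcre-pattern>
      </ace>
    </outbound>
  </acl>
</endpoint>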

An ACL contains one or more Access Control Entries (ACEs).

Access Control Entry (ACE)

An ACE specifies a set of message matching criteria, and an action to perform based on successful matches. The action is either accept (the message is made available for forwarding, based on interest) or reject (the message is dropped).

When more than one ACE is supplied in an ACL, messages are tested against each ACE in the order defined until a match is found, at which point the ACE specifies what to do (reject or accept).

An ACE contains one or more conditional elements.

Conditional Elements

Conditional elements do the work of testing various characteristics of messages to determine if they should be rejected or accepted (made available for forwarding).

When more than one conditional element is supplied in an ACE, received messages are tested against all of them to determine if the ACE should be applied.

There are two classes of conditional elements:

  • Topic conditionals, which test the topic string of a message.
  • Transport session conditionals, which test network transport session characteristics of a message.

Topic conditionals can be included in both inbound and outbound ACLs. Examples are the <topic> and <pcre-pattern> elements used later in this section.

Transport session conditionals apply only to inbound ACLs (they are ignored for outbound). See the DRO Configuration Reference for the complete list of topic and transport session conditional elements.

Conditional elements are children of the <ace> element. If you place multiple conditions within an ACE, the DRO performs an "and" operation with them. That is, all relevant conditions in the ACE must be true for the ACE to be applied to a message.
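For example, the following sketch shows two conditionals "anded" within a single ACE. The <pcre-pattern> topic conditional is taken from the examples later in this section; the <transport> transport session conditional and its attribute are assumptions shown for illustration only, so consult the DRO Configuration Reference for the actual conditional element names and syntax:

<ace match="accept">
  <!-- Both conditions must be true ("and") for this ACE to apply to a message. -->
  <pcre-pattern>^ORDERS\..*</pcre-pattern>
  <!-- Hypothetical transport session conditional; see the DRO Configuration Reference. -->
  <transport value="lbt-rm"/>
</ace>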

A portal silently ignores conditional elements that do not apply, for example, a transport conditional used in an outbound ACL, or a UDP-based conditional applied to a received TCP message.

Reject by Default

An implicit "reject all" is at the end of every ACL, so the DRO rejects any topic not matched by any ACE. When an ACL is configured for a portal, rejection is the default behavior.

For example, to accept and forward only messages for topic ABC and reject all others:

<acl>
<inbound>
<ace match="accept">
<topic>ABC</topic>
</ace>
</inbound>
</acl>

No "reject" ACE is needed since rejection is the default.

In contrast, to accept all messages except for topic ABC:

<acl>
<inbound>
<ace match="reject">
<topic>ABC</topic>
</ace>
<ace match="accept">
<topic>.*</topic>
</ace>
</inbound>
</acl>

The second ACE is used as a "match all", which effectively changes the default behavior to "accept".

ACE Ordering

Since the behavior for multiple ACEs is to test them in the order defined, ACEs should be ordered from specific to general.

In the example below, a user named "user1" is assigned to the LAN1 TRD. The goal is to forward all non-user-specific messages, but restrict user-specific messages to only that user.

By ordering the ACEs as shown, messages for USER.user1 will be forwarded by the first ACE, but messages for USER.user2, etc. will be rejected due to the second ACE. Messages for topics not starting with "USER." will be forwarded by the third ACE.

<endpoint>
<name>LAN1</name>
<lbm-config>lan1.cfg</lbm-config>
<domain-id>1</domain-id>
<acl>
<inbound>
<ace match="accept">
<topic>USER.user1</topic>
</ace>
<ace match="reject">
<pcre-pattern>^USER\..*</pcre-pattern>
</ace>
<ace match="accept">
<pcre-pattern>.*</pcre-pattern>
</ace>
</inbound>
</acl>
</endpoint>

Note that the string in "<topic>USER.user1</topic>" is not a regular expression pattern, and therefore does not need any special escaping or meta characters. The "<pcre-pattern>^USER\..*</pcre-pattern>" is a regular expression, and therefore needs the "^" anchor and the "\." escape sequence.


Timers and Intervals  <-

The DRO offers a wide choice of timer and interval options to fine tune its behavior and performance. There are interactions and dependencies between some of these options, and if misconfigured, they may cause race or failure conditions.

This manual's descriptions of configuration options (see DRO Configuration Reference) identify such relationships; please heed them.


Multicast Immediate Messaging Considerations  <-

Multicast Immediate Messages (MIMs) may pass through the DRO. You cannot filter MIMs with Access Control Lists (ACLs); MIMs are forwarded to all TRDs. Informatica does not recommend using MIM for messaging traffic across the DRO. MIM is intended for short-lived topics and applications that cannot tolerate a delay between source creation and the sending of the first message. See also Multicast Immediate Messaging.


Persistence Over the DRO  <-

The DRO supports UMP persistence by routing all necessary control and retransmission channels along with transport and topic resolution traffic. A typical implementation places the UMP persistent store in the same TRD as its registered source, as shown in the following figure.

persist_1.png

The DRO also supports UMP implementations with the store located in a receiver's TRD, as shown in the following figure.

persist_2.png

Note: For more reliable operation when using UMP with DROs, Informatica recommends enabling OTR.
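For example, a minimal receiver-side setting in the UM configuration used by your receiving applications might be the following sketch (see the UM configuration documentation for related OTR options and values):

# Enable Off-Transport Recovery on receivers.
receiver use_otr 1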


Late Join and Off-Transport Recovery  <-

The DRO supports sources and receivers configured for Late Join and/or Off-Transport Recovery (OTR). Retransmission requests and subsequent retransmissions are conducted across the entire path through the DRO network. A DRO's proxy sources do not have Late Join/OTR retention buffers and hence cannot themselves provide recovered messages.


Topic Resolution Reliability  <-

Topic resolution can sometimes remain in a quiescent phase due to link interruption, preventing needed re-subscription topic resolution activity. Two ways you can address this are:


BOS and EOS Behavior Over the DRO  <-

Through a network of DROs, a topic's messages traverse a separate transport session for each link along the path. Thus, the DRO reports BOS/EOS based on the activity between a proxy source's transport and its associated receiver; there is no end-to-end, application-to-application reporting of the data path state. Also, when multiple topics are assigned to multiple transport sessions, a topic may have different session mates from hop to hop. All of this influences when, and for which transport session, a topic's BOSs and EOSs are issued.


DRO Reliable Loss  <-

The DRO can create a situation where a "reliable" transport (TCP or LBT-IPC) can experience out-of-order message delivery.

The DRO can perform a "protocol conversion" function. That is, an originating source can use a UDP-based protocol (LBT-RM or LBT-RU), while the proxy source for a remote receiver uses a "reliable" protocol (TCP or LBT-IPC). With a UDP-based protocol, messages can arrive at the DRO network out of order, usually due to packet loss and recovery. However, when those out-of-order messages are forwarded across a "reliable" protocol (TCP or LBT-IPC), the receiver does not expect the sequence number gap and immediately declares the out-of-order messages as unrecoverable loss, even though the missing messages typically arrive shortly thereafter.

Starting in UM version 6.12, there are two new configuration options, transport_tcp_dro_loss_recovery_timeout (receiver) and transport_lbtipc_dro_loss_recovery_timeout (receiver), which modify the receiver's behavior. Instead of immediately declaring a gap unrecoverable, the receiver waits for a delay period similar to the one a UDP-based receiver uses to wait for lost and retransmitted datagrams. If the missing messages arrive within the delay time, the messages are delivered to the application without loss.

Be aware that this functionality applies only to "reliable" protocols published by a DRO's proxy source. Even if this delay feature is enabled, it does not apply to a "reliable" protocol received directly from the originating source.

Note, however, that you can get genuine gaps in the "reliable" data stream that are never recovered. For example, an overloaded DRO can drop messages, or a DRO's proxy receiver can experience unrecoverable loss. In those cases, the delay must expire before the missing messages are declared unrecoverable and subsequent data is delivered.

Attention
The delay times default to 0, which retains the pre-6.12 behavior of immediately declaring sequence number gaps unrecoverable. If you want this new behavior, you must configure the appropriate option.
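For example, a receiver-side UM configuration sketch that allows up to two seconds for a DRO-forwarded TCP or LBT-IPC stream to fill a sequence gap before declaring loss might look like the following (the 2000-millisecond value is illustrative only):

# Wait up to 2000 ms for missing messages on DRO proxy source transports.
receiver transport_tcp_dro_loss_recovery_timeout 2000
receiver transport_lbtipc_dro_loss_recovery_timeout 2000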


Topology Configuration Examples  <-

Following are example configurations for a variety of DRO topologies. These are the topology examples presented in Routing Topologies.

In a real-world situation, you would have DRO XML configuration files with their portal interfaces referencing complete UM configuration files. However, for these examples, the referenced domain configuration files are simplified to contain only information relevant to the applicable DRO. As part of this simplification, domain configuration files show interfaces for only one or two transport types.

Also, IP addresses are provided in some cases and omitted in other cases. This is because initiator peer portals need to know the IP addresses (and port numbers) of their corresponding acceptor portals to establish connections, whereas endpoint portals communicate via topic resolution and thus, do not need to know IP addresses.

Note
Before designing any DRO implementations based on configurations or examples other than the types presented in this document, please contact Informatica Support.


Direct Link Configuration  <-

This example uses a DRO to connect two LAN-based topic resolution domains.

cfg_direct_link.png

TRD1 Configuration

This UM configuration file, trd1.cfg, describes TRD1 and is referenced in the DRO configuration file.

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

This DRO configuration file defines two endpoint portals. In the daemon section, we have turned on monitoring for all endpoint portals in the DRO. The configuration specifies that all statistics be collected every 5 seconds, and uses the lbm transport module to send statistics to your monitoring application, which runs in TRD1. See also UM Concepts, Monitoring UMS. The Web Monitor has also been turned on (port 15304) to monitor the performance of the DRO.

<?xml version="1.0" encoding="UTF-8" ?>
<!-- G1 xml file- 2 endpoint portals -->
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<lbm-license-file>lic0014.txt</lbm-license-file>
<monitor interval="5">
<transport-module module="lbm" options="config=trd1.cfg"/>
</monitor>
<web-monitor>*:15304</web-monitor>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>trd1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>trd2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD2 Configuration

The configuration file trd2.cfg could look something like this.

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85


Peer Link Configuration  <-

In cases where the DRO connection between two TRDs must tunnel through a WAN or TCP/IP network, you can implement a DRO at each end, as shown in the example below.

cfg_single_link.png

TRD1 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

Following is an example of two companion peer portals (on different DROs) configured via DRO XML configuration file for a TCP-only setup. Note that one must be an initiator and the other, an acceptor.

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G1-G2</name>
<single-tcp>
<interface>10.30.3.100</interface>
<initiator>
<address>10.30.3.102</address>
<port>26123</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

G2 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2-G1</name>
<single-tcp>
<interface>10.30.3.102</interface>
<acceptor>
<listen-port>26123</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD2 Configuration

## LAN2 Configuration Options ##
context request_tcp_interface 10.33.3.0/24
context resolver_multicast_port 13965


Transit TRD Link Configuration  <-

This example, like the previous one, configures two localized DROs tunneling a connection between two TRDs; however, the DROs in this example tunnel through an intermediate TRD. This has the added effect of connecting three TRDs.

cfg_transit_trd.png

TRD1 Configuration

## TRD1 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

This DRO configuration file defines two endpoint portals: one in TRD1, and one in the transit domain, TRD2.

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD2 Configuration

## TRD2 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85

G2 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G2-TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD3 Configuration

## TRD3 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85


Parallel Links Configuration  <-

This example is similar in purpose to the single link, peer-to-peer example, except that a second pair of DROs is added as a backup route. You can set one of these as a secondary route by assigning a higher cost to portals along the path. In this case we set G3 and G4's portal costs to 5, forcing the lower route to be selected only if the upper (G1, G2) route fails.

Also note that we have configured the peer portals for the leftmost or odd-numbered DROs as initiators, and the rightmost or even-numbered DRO peers as acceptors.

cfg_parallel.png

TRD1 Configuration

## TRD1 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G1-TRD1</name>
<domain-id>1</domain-id>
<cost>2</cost>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G1-G2</name>
<cost>2</cost>
<single-tcp>
<interface>10.30.3.101</interface>
<initiator>
<address>10.30.3.102</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

G2 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2-G1</name>
<cost>2</cost>
<single-tcp>
<interface>10.30.3.102</interface>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2-TRD2</name>
<domain-id>2</domain-id>
<cost>2</cost>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

G3 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G3-TRD1</name>
<domain-id>1</domain-id>
<cost>5</cost>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<peer>
<name>G3-G4</name>
<cost>5</cost>
<single-tcp>
<interface>10.30.3.103</interface>
<initiator>
<address>10.30.3.104</address>
<port>23746</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

G4 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4-G3</name>
<cost>5</cost>
<single-tcp>
<interface>10.30.3.104</interface>
<acceptor>
<listen-port>23746</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G4-TRD2</name>
<domain-id>2</domain-id>
<cost>5</cost>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD2 Configuration

## TRD2 Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85


Loop and Spur Configuration  <-

cfg_loop_spur.png

TRD1 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G1_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.27</address>
<port>23801</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G1_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.26</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

G2 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.28</address>
<port>23632</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G2_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G2_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD2 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85

TRD3 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85

G3 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G3_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.28</address>
<port>23754</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G3_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23801</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G3_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

G4 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4_to_G3</name>
<single-tcp>
<acceptor>
<listen-port>23754</listen-port>
</acceptor>
</single-tcp>
</peer>
<endpoint>
<name>G4_to_TRD4</name>
<domain-id>4</domain-id>
<lbm-config>TRD4.cfg</lbm-config>
</endpoint>
<peer>
<name>G4_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23632</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G4_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.29</address>
<port>23739</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

TRD4 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.4.37.85

G5 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<endpoint>
<name>G5_to_TRD5</name>
<domain-id>5</domain-id>
<lbm-config>TRD5.cfg</lbm-config>
</endpoint>
<peer>
<name>G5_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23739</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

TRD5 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.5.37.85


Star Configuration  <-

This network consists of four TRDs. Within each TRD, full multicast connectivity exists. However, no multicast connectivity exists between the four TRDs.

cfg_star.png

G1 Configuration

The configuration for this DRO also has transport statistics monitoring and the Web Monitor turned on.

<?xml version="1.0" encoding="UTF-8" ?>
<!-- UM GW xml file - 4 endpoint portals -->
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<lbm-license-file>lic0014.txt</lbm-license-file>
<monitor interval="5">
<transport-module module="lbm" options="config=trd1.cfg"/>
</monitor>
<web-monitor>*:15304</web-monitor>
</daemon>
<portals>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>trd1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>trd2.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>trd3.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD4</name>
<domain-id>4</domain-id>
<lbm-config>trd4.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

TRD1 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

TRD2 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85

TRD3 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85

TRD4 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.4.37.85


Mesh Configuration  <-

The mesh topology uses many connections between nodes to provide a variety of alternate routes. However, a mesh is often not the best solution: the added complexity increases the chance of configuration errors and can make problems harder to trace.

cfg_mesh.png

TRD1 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.1.37.85

G1 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G1_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23880</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G1_to_G4</name>
<single-tcp>
<initiator>
<address>55.55.10.104</address>
<port>23801</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G1_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G1_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
<peer>
<name>G1_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.102</address>
<port>23745</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

G2 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G2_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23608</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G2_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23831</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G2_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23745</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G2_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.103</address>
<port>23632</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G2_to_TRD2</name>
<domain-id>2</domain-id>
<lbm-config>TRD2.cfg</lbm-config>
</endpoint>
</portals>
</tnw-gateway>

G3 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G3_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23739</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G3_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23754</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G3_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23632</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

TRD2 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.2.37.85

TRD3 Configuration

## Global Configuration Options ##
context request_tcp_interface 10.29.3.0/24
context resolver_multicast_port 13965
context resolver_multicast_interface 10.29.3.0/24
context resolver_multicast_address 225.3.37.85

G4 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G4_to_G5</name>
<single-tcp>
<initiator>
<address>55.55.10.105</address>
<port>23580</port>
</initiator>
</single-tcp>
</peer>
<endpoint>
<name>G4_to_TRD1</name>
<domain-id>1</domain-id>
<lbm-config>TRD1.cfg</lbm-config>
</endpoint>
<endpoint>
<name>G4_to_TRD3</name>
<domain-id>3</domain-id>
<lbm-config>TRD3.cfg</lbm-config>
</endpoint>
<peer>
<name>G4_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23801</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G4_to_G3</name>
<single-tcp>
<initiator>
<address>55.55.10.103</address>
<port>23754</port>
</initiator>
</single-tcp>
</peer>
<peer>
<name>G4_to_G2</name>
<single-tcp>
<initiator>
<address>55.55.10.102</address>
<port>23831</port>
</initiator>
</single-tcp>
</peer>
</portals>
</tnw-gateway>

G5 Configuration

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
</daemon>
<portals>
<peer>
<name>G5_to_G4</name>
<single-tcp>
<acceptor>
<listen-port>23580</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G1</name>
<single-tcp>
<acceptor>
<listen-port>23880</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G3</name>
<single-tcp>
<acceptor>
<listen-port>23739</listen-port>
</acceptor>
</single-tcp>
</peer>
<peer>
<name>G5_to_G2</name>
<single-tcp>
<acceptor>
<listen-port>23608</listen-port>
</acceptor>
</single-tcp>
</peer>
</portals>
</tnw-gateway>


Using UM Configuration Files with the DRO  <-

Within the DRO configuration file, the endpoint portal's <lbm-config> element lets you import configuration from either a plain-text or an XML UM configuration file. However, XML UM configuration files provide the following advantages over plain-text UM configuration files:

  • You can apply UM attributes per topic and/or per context.
  • You can apply attributes to all portals on a particular DRO using a UM XML template (instead of individual portal settings).
  • Using UM XML templates to set options for individual portals lets the DRO process these settings in the <daemon> element instead of within each portal's configuration.


Setting Individual Endpoint Options  <-

When setting endpoint options, first name the context of each endpoint in the DRO's XML configuration file.

<portals>
<endpoint>
<name>Endpoint_1</name>
<domain-id>1</domain-id>
<source-context-name>G1_E1</source-context-name>
<lbm-attributes>
<option name="request_tcp_interface" scope="context" value="10.29.4.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>G1-TRD2</name>
<domain-id>2</domain-id>
<receiver-context-name>G1_E2</receiver-context-name>
<lbm-attributes>
<option name="request_tcp_interface" scope="context" value="10.29.5.0/24" />
</lbm-attributes>
</endpoint>
</portals>

Then assign configuration templates to those contexts in the UM XML configuration file.

<application name="dro1" template="global">
<contexts>
<context name="G1_E1" template="G1-E1-options">
<sources />
</context>
<context name="G1_E2" template="G1-E2-options">
<sources />
</context>
</contexts>
</application>

You specify the unique options for each of this DRO's two endpoints in the UM XML configuration <templates> section used for G1-E1-options and G1-E2-options.
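For example, the <templates> section of the UM XML configuration file might define those two templates as in the following sketch (the option shown is illustrative; define whatever options actually differ between the two endpoints):

<templates>
  <template name="G1-E1-options">
    <options type="context">
      <!-- Options unique to endpoint context G1_E1 -->
      <option name="resolver_multicast_address" default-value="225.1.37.85"/>
    </options>
  </template>
  <template name="G1-E2-options">
    <options type="context">
      <!-- Options unique to endpoint context G1_E2 -->
      <option name="resolver_multicast_address" default-value="225.2.37.85"/>
    </options>
  </template>
</templates>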


DRO and UM XML Configuration Use Cases  <-

One advantage of using UM XML configuration files with the DRO is the ability to assign unique UM attributes to the topics and contexts used for the proxy sources and receivers (which plain text UM configuration files cannot do). The following example shows how to assign a different LBTRM multicast address to a source based on its topic.

Create a new UM XML configuration template for the desired topic name.

<template name="AAA-template">
<options type="source">
<option name="transport_lbtrm_multicast_address"
default-value="225.2.37.88"/>
</options>
</template>

Then include this template in the <application> element associated with the DRO.

<application name="dro1" template="global-options">
<contexts>
<context>
<sources template="source-options">
<topic topicname="AAA" template="AAA-template" />
</sources>
</context>
</contexts>
</application>

It is also possible to assign UM attributes directly in the <application> tag. For example, the following specifies that a particular topic should use an LBT-RU transport.

<application name="dro1" template="dro1-common">
<contexts>
<context>
<sources template="source-template">
<topic topicname="LBTRU_TOPIC">
<options type="source">
<option name="transport" default-value="lbtru" />
</options>
</topic>
</sources>
</context>
</contexts>
</application>


Sample Configuration  <-

The following sample configuration incorporates many of the examples mentioned above. The dro1-common template applies default options to all UM objects the DRO creates; the UM XML configuration file then overrides these options for two specific topics. The first topic, LBTRM_TOPIC, uses a different template to change its transport from TCP to LBTRM and to set an additional option. The second topic, LBTRU_TOPIC, also changes its transport from TCP, but its new attributes are applied directly in its associated topic tag instead of referencing a template. In addition, this sample configuration assigns the rm-source template to all sources associated with the context endpt_1.


XML UM Configuration File  <-

<?xml version="1.0" encoding="UTF-8" ?>
<um-configuration version="1.0">
<templates>
<template name="dro1-common">
<options type="source">
<option name="transport" default-value="tcp" />
</options>
<options type="context">
<option name="request_tcp_interface" default-value="10.29.5.6" />
<option name="transport_tcp_port_low" default-value="4400" />
<option name="transport_tcp_port_high" default-value="4500" />
<option name="resolver_multicast_address" default-value="225.2.37.88"/>
</options>
</template>
<template name="rm-source">
<options type="source">
<option name="transport" default-value="lbtrm" />
<option name="transport_lbtrm_multicast_address" default-value="225.2.37.89"/>
</options>
</template>
</templates>
<applications>
<application name="dro1" template="dro1-common">
<contexts>
<context>
<sources>
<topic topicname="LBTRM_TOPIC" template="rm-source" />
<topic topicname="LBTRU_TOPIC">
<options type="source">
<option name="transport" default-value="lbtru" />
<option name="resolver_unicast_daemon" default-value="10.29.5.1:1234" />
</options>
</topic>
</sources>
</context>
<context name="endpt_1">
<sources template="rm-source"/>
</context>
</contexts>
</application>
</applications>
</um-configuration>


XML DRO Configuration File  <-

This DRO uses the above XML UM configuration file, sample-config.xml, to set its UM options. It has three endpoint portals, one of which is assigned the context name endpt_1.

<?xml version="1.0" encoding="UTF-8" ?>
<tnw-gateway version="1.0">
<daemon>
<log type="console"/>
<xml-config>sample-config.xml</xml-config>
</daemon>
<portals>
<endpoint>
<name>Endpoint_1</name>
<domain-id>1</domain-id>
<lbm-attributes>
<option name="context_name" scope="context" value="endpt_1" />
<option name="request_tcp_interface" scope="context"
value="10.29.4.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>Endpoint_2</name>
<domain-id>2</domain-id>
<lbm-attributes>
<option name="request_tcp_interface" scope="context"
value="10.29.5.0/24"/>
</lbm-attributes>
</endpoint>
<endpoint>
<name>Endpoint_3</name>
<domain-id>3</domain-id>
<lbm-attributes>
<option name="request_tcp_interface" scope="context"
value="10.29.6.0/24"/>
</lbm-attributes>
</endpoint>
</portals>
</tnw-gateway>


Running the DRO Daemon  <-

To run the DRO, ensure the following:

  • Library environment variable paths (for example, LD_LIBRARY_PATH) are set correctly; see the example setup after this list.
  • The license environment variable LBM_LICENSE_FILENAME points to a valid DRO license file.
  • The configuration file is error free.
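For example, a minimal environment setup might look like the following (the paths shown are illustrative only; substitute your actual UM installation and license file locations):

# Illustrative environment setup; adjust paths for your installation.
export LD_LIBRARY_PATH=/opt/UMS/lib:$LD_LIBRARY_PATH
export LBM_LICENSE_FILENAME=/opt/UMS/dro_license.txt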

Typically, you run the DRO with one configuration file argument, for example:

tnwgd gw1-config.xml

(FYI: "tnwgd" stands for "Twenty Nine West Gateway Daemon", a historical name for the DRO.)

The DRO logs version information on startup. The following is an example of this information:

Version 6.0 Build: Sep 26 2012, 00:31:33 (UMS 6.0 [UMP-6.0] [UMQ-6.0] [64-bit] Build: Sep 26 2012, 00:27:17 ( DEBUG license LBT-RM LBT-RU LBT-IPC LBT-RDMA ) WC[PCRE 7.4 2007-09-21, regex, appcb] HRT[gettimeofday()])