Dynamic Routing Guide
The diagram below shows a UM Router bridging topic resolution domains TRD1 and TRD2, for topic AAA, in a direct link configuration. Endpoint E1 contains a proxy receiver for topic AAA and endpoint E2 has a proxy source for topic AAA.
Using the above figure as an example, the following sequence typically occurs to establish topic resolution in an already-running UM Router.
When a UM Router starts, its endpoint portals issue a brief series of Topic Resolution Request messages to their respective topic resolution domains. This provokes quiescent receivers (and wildcard receivers) into sending TQRs (and WC-TQRs), indicating interest in various topics. Each portal then records this interest.
After a UM Router has been running, endpoint portals issue periodic Topic Use Queries and Pattern Use Queries (collectively referred to simply as Use Queries). Use Query Responses from UM contexts confirm that the receivers for these topics indeed still exist, thus maintaining these topics on the interest list. Autonomous TQRs also refresh interest and have the effect of suppressing the generation of Use Queries.
In multi-hop UM Router configurations, UM Routers cannot detect interest for remote contexts via Use Queries or TQRs. Instead, they detect it via Interest Messages. An endpoint portal generates periodic Interest Messages, which are picked up by adjacent UM Routers (i.e., the next hop over), refreshing interest there.
You can adjust intervals, limits, and durations for these topic resolution and interest mechanisms via UM Router configuration options (see XML Configuration Reference).
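For example, a UM Router's XML configuration might tune these intervals inside the relevant endpoint portal definition. The element and attribute names below (topic-use-query, pattern-use-query, topic-interest-generate, and their attributes) are illustrative assumptions, not verified names; consult the XML Configuration Reference for the exact options and defaults in your version.

  <endpoint>
    <!-- Assumed element/attribute names; values are examples only. -->
    <!-- How often the portal re-queries its TRD for topic and pattern interest. -->
    <topic-use-query interval="300" timeout="3000" max="10"/>
    <pattern-use-query interval="300" timeout="3000" max="10"/>
    <!-- How often interest messages are generated for adjacent UM Routers. -->
    <topic-interest-generate interval="1000"/>
  </endpoint>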
To maintain a reliable connection, peer portals exchange UM Router Keepalive signals. Keepalive intervals and connection timeouts are configurable on a per-portal basis. You can also set the UM Router to send keepalives only when traffic is idle, which is the default condition. When both traffic and keepalives go silent at a portal ingress, the portal considers the connection lost and disconnects the TCP link. After the disconnect, the portal tries to reconnect. See the <gateway-keepalive> configuration element.
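As a sketch, a peer portal configured to send keepalives only when idle, and to declare the connection lost after 15 seconds of silence, might look like the following. The attribute values are illustrative; verify the element and attribute names against the XML Configuration Reference.

  <peer>
    <name>P1</name>
    <!-- Assumed element/attribute names; values are examples only. -->
    <!-- idle="yes": send keepalives only when no other traffic is flowing. -->
    <gateway-keepalive idle="yes" interval="5000" timeout="15000"/>
    <!-- TCP link configuration omitted for brevity. -->
  </peer>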
UM Router proxy sources on endpoint portals, when deleted, send out a series of final advertisements. A final advertisement tells any receivers, including proxy receivers on other UM Routers, that the particular source has gone away. This triggers EOS and clean-up activities on the receiver relative to that specific source, and causes the receiver to begin sustaining-phase querying according to its topic resolution configuration.
In short, final advertisements let receivers detect that a source has gone away sooner than a transport timeout would. This enables a faster transition to an alternative proxy source on a different UM Router if the routing path changes.
The domain-id is used by Interest Messages and other internal and UM Router-to-UM Router traffic to ensure forwarding of all messages (payload and topic resolution) to the correct recipients. This also has the effect of not creating proxy sources/receivers where they are not needed. Thus, UM Routers create proxy sources and receivers based solely on receiver interest.
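A minimal sketch of how domain IDs are assigned in a UM Router's XML configuration: each endpoint portal declares the domain ID of the TRD it serves. Portal names, domain ID values, and file names here are illustrative.

  <portals>
    <endpoint>
      <name>E1</name>
      <domain-id>1</domain-id>              <!-- TRD1 -->
      <lbm-config>trd1.cfg</lbm-config>
    </endpoint>
    <endpoint>
      <name>E2</name>
      <domain-id>2</domain-id>              <!-- TRD2 -->
      <lbm-config>trd2.cfg</lbm-config>
    </endpoint>
  </portals>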
If more than one source sends on a given topic, the receiving portal's single proxy receiver for that topic receives all messages sent on that topic. The sending portal, however, creates a proxy source for every source sending on the topic. The UM Router maintains a table of proxy sources, each keyed by an Originating Transport ID (OTID), enabling the proxy receiver to forward each message to the correct proxy source. An OTID uniquely identifies a source's transport session, and is included in topic advertisements.
When an application creates a source, it is configured to use one of the UM transport types. When a DRO is deployed, the proxy sources are also configured to use one of the UM transport types. Although users often use the same transport type for sources and proxy sources, this is not necessary. When different transport types are configured for the source and the proxy source, the DRO performs a protocol conversion.
When this is done, it is very important to configure the transports to use the same maximum datagram size. If you don't, the DRO can drop messages, and those messages cannot be recovered through normal means. For example, a source in TRD1 can be configured for TCP, which has a default maximum datagram size of 65536. If the DRO's remote portal is configured to create LBT-RU proxy sources, LBT-RU has a default maximum datagram size of 8192. If the source sends a user message of 10K, the TCP source sends it as a single fragment. The DRO receives it and attempts to forward it on an LBT-RU proxy source, but the 10K fragment is too large for LBT-RU's maximum datagram size, so the message is dropped.
The solution is to override the default maximum datagram sizes to be the same. Informatica generally does not recommend configuring UDP-based transports for datagram sizes above 8K, so it is advisable to set the maximum datagram sizes of all transport types to 8192, like this:
  context transport_tcp_datagram_max_size 8192
  context transport_lbtrm_datagram_max_size 8192
  context transport_lbtru_datagram_max_size 8192
  context transport_lbtipc_datagram_max_size 8192
  source transport_lbtsmx_datagram_max_size 8192
See configuration options: transport_tcp_datagram_max_size (context), transport_lbtrm_datagram_max_size (context), transport_lbtru_datagram_max_size (context), transport_lbtipc_datagram_max_size (context), and transport_lbtsmx_datagram_max_size (source).
Also see Message Fragmentation and Reassembly.
Final note: the resolver_datagram_max_size (context) option must also be set to the same value in all instances of UM, including DROs.
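For example, a single line in each UM configuration file (application configurations and, typically, the UM configuration files referenced by each DRO portal) keeps the resolver datagram size consistent. The value 8192 shown here simply follows the UDP guidance above and is illustrative:

  context resolver_datagram_max_size 8192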
UM can resolve topics across a span of multiple UM Routers. Consider a simple example UM Router deployment, as shown in the following figure.
In this diagram, UM Router A has two endpoint portals connected to topic resolution domains TRD1 and TRD2. UM Router B also has two endpoint portals, which bridge TRD2 and TRD3. Endpoint portal names reflect the topic resolution domain to which they connect. For example, UM Router A endpoint E2 interfaces TRD2.
TRD1 has a source for topic AAA, and TRD3, an AAA receiver. The following sequence of events enables the forwarding of topic messages from source AAA to receiver AAA.
The UM Router supports topic resolution for wildcard receivers in a manner very similar to non-wildcard receivers. Wildcard receivers in a TRD issuing a WC-TQR cause corresponding proxy wildcard receivers to be created in portals, as shown in the following figure. The UM Router creates a single proxy source for the pattern match.
Forwarding a message through a UM Router incurs a cost in terms of latency, network bandwidth, and CPU utilization on the UM Router machine (which may in turn affect the latency of other forwarded messages). Transiting multiple UM Routers adds even more cumulative latency to a message. Other UM Router-related factors such as portal buffering, network bandwidth, switches, etc., can also add latency.
Factors other than latency contribute to the cost of forwarding a message. Consider a message that can be sent from one domain to its destination domain over one of two paths. A three-hop path over 1Gbps links may be faster than a single-hop path over a 100Mbps link. Further, it may be the case that the 100Mbps link is more expensive or less reliable.
You assign forwarding cost values on a per-portal basis. When summed over a path, these values determine the cost of that entire path. A network of UM Routers uses forwarding cost as the criterion for determining the best path over which to resolve a topic.
UM Routers are aware of the other UM Routers in their network and of how they are linked. Thus, each UM Router maintains a topology map, which is periodically confirmed and updated. This map also includes forwarding cost information.
Using this information, the UM Routers can cooperate during topic resolution to determine the best (lowest cost) path over which to resolve a topic or to route control information. They do this by totaling the costs of all portals along each candidate route, then comparing the totals.
For example, the following figure shows two possible paths from TRD1 to TRD2: A-C (total route cost of 11) and B-D (total route cost of 7). In this case, the UM Routers select path B-D.
If a UM Router or link along path B-D should fail, the UM Routers detect this and reroute over path A-C. Similarly, if an administrator revises cost values along path B-D to exceed a total of 12, the UM Routers reroute to A-C.
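Costs are assigned in the UM Router configuration. A minimal sketch, assuming the per-portal <cost> element; portal names and values are illustrative:

  <endpoint>
    <name>E1</name>
    <domain-id>1</domain-id>
    <cost>2</cost>               <!-- added to the total cost of any path through this portal -->
    <!-- remaining endpoint configuration omitted -->
  </endpoint>
  <peer>
    <name>toRouterD</name>
    <cost>1</cost>
    <!-- remaining peer configuration omitted -->
  </peer>

Raising the cost of a single portal raises the total cost of every path that traverses it, which is how an administrator steers traffic onto or away from particular routes.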
If the UM Routers find more than one path with the same lowest total cost value, i.e., a "tie", they select the path based on a node-ID selection algorithm. Since administrators do not have access to node IDs, this will appear to be a pseudo-random selection.
You can configure multiple UM Routers in a variety of topologies. Following are several examples.
The Direct Link configuration uses a single UM Router to directly connect two TRDs. For a configuration example, see Direct Link Configuration.
A Single Link configuration connects two TRDs using a UM Router on each end of an intermediate link. The intermediate link can be a "peer" link, or a transit TRD. For configuration examples, see Peer Link Configuration and Transit TRD Link Configuration.
Parallel Links offer multiple complete paths between two TRDs. However, UM will not load-balance messages across both links. Rather, parallel links are used for failover purposes. You can set preference between the links by setting the primary path for the lowest cost and standby paths at higher costs. For a configuration example, see Parallel Links Configuration.
Loops let you route packets back to the originating UM Router without reusing any paths. Also, if any peer-peer links are interrupted, the looped UM Routers are able to find an alternate route between any two TRDs.
The Loop and Spur has one or more UM Routers tangential to the loop and accessible only through a single UM Router participating in the loop. For a configuration example, see Loop and Spur Configuration.
Adding a TRD to the center of a loop enhances its rerouting capabilities.
A Star with a centralized TRD does not offer rerouting capabilities but does provide an economical way to join multiple disparate TRDs.
The Star with a centralized UM Router is the simplest way to bridge multiple TRDs. For a configuration example, see Star Configuration.
The Mesh topology provides peer portal interconnects between many UM Routers, approaching an all-connected-to-all configuration. This provides multiple possible paths between any two TRDs in the mesh. Note that this diagram is illustrative of the ways the UM Routers may be interconnected, and not necessarily a practical or recommended application. For a configuration example, see Mesh Configuration.
The Palm Tree has a set of series-connected TRDs fanning out to a more richly meshed set of TRDs. This topology tends to pass more concentrated traffic over common links for part of its transit while supporting a loop, star, or mesh near its terminus.
Similar to the Palm Tree, the Dumbbell has a funneled route with a loop, star, or mesh topology on each end.
When designing UM Router networks, do not use any of the following topology constructs.
Two peer-to-peer connections between the same two UM Routers:
Two endpoint connections from the same UM Router to the same TRD:
Assigning two different Domain ID values (from different UM Routers) to the same TRD:
You must install the UM Dynamic Routing Option with its companion Ultra Messaging UMS, UMP, or UMQ product, and versions must match. While most UM features are compatible with the UM Router, some are not. Following is a table of features and their compatibilities with the UM Router.
UM Feature | UM Router Compatible? | Notes |
---|---|---|
Transport Acceleration | Yes | |
Hot Failover (HF) | Yes | The UM Router can pass messages sent by HF publishers to HF receivers; however, the UM Router itself cannot be configured to originate or terminate HF data streams. |
Hot Failover across contexts (HFX) | Yes | |
Late Join | Yes | |
Message Batching | Yes | |
Monitoring/Statistics | Yes | |
Multicast Immediate Messaging (MIM) | Yes | |
Multi-Transport Threads | No | |
Off-Transport Recovery (OTR) | Yes | |
Ordered Delivery | Yes | |
Pre-Defined Messaging (PDM) | Yes | |
Request/Response | Yes | |
Self-Describing Messaging (SDM) | Yes | |
Source Side Filtering | Yes | The UM Router supports transport source side filtering. You can activate this either at the originating TRD source, or at a downstream proxy source. |
Transport LBT-IPC | Yes | |
Transport LBT-RDMA | Yes | |
Transport LBT-RM | Yes | |
Transport LBT-RU | Yes | |
Transport LBT-SMX | Partial | The UM Router does not support proxy sources sending data via LBT-SMX. Any proxy sources configured for LBT-SMX will be converted to TCP, with a log message warning of the transport change. The UM Router does accept LBT-SMX ingress traffic to proxy receivers. |
Transport TCP | Yes | |
Transport TCP-LB | Yes | |
JMS, via UMQ broker | No | |
UM Spectrum | Yes | The UM Router supports UM Spectrum traffic, but you cannot implement Spectrum channels in UM Router proxy sources or receivers. |
UMP Implicit/Explicit Acknowledgements | Yes | |
UMP Persistent Store | Yes | |
UMP Proxy Sources | Yes | |
UMP Quorum Consensus | Yes | |
UMP Registration ID/Session Management | Yes | |
UMP Receiver-Paced Persistence (RPP) | Yes | |
UMP Store Failover | Yes | |
UMQ Brokered Queuing | No | |
UMQ Ultra Load Balancing (ULB) | No | |
Ultra Messaging Desktop Services (UMDS) | Not for client connectivity to the UMDS server | |
Ultra Messaging Manager (UMM) | Yes | Not for UM Router management |
UM SNMP Agent | No | |
UMCache | No | |
Wildcard Receivers | Yes | |
Zero Object Delivery (ZOD) | Yes |