5. Release UMS 5.3.3 / UMP 5.3.3 / UMQ 5.3.3 - March 2013

5.1. UMS 5.3.3

5.1.1. Updates

  • UM no longer sends a Topic Management Record (TMR) when an application deletes a receiver in a standard way (for example, with lbm_rcv_delete()) and no UM Gateways are in use. TMRs specifically inform a UM Gateway that a receiver for a particular topic has been shut down. Without UM Gateways present, TMRs are unnecessary network traffic.

  • The following error log messages have been added to the UM Log Messages section of the Concepts Guide.

    • Core-7421-1: Source Side Filtering Init message with no return IP, using transport IP (%s).

    • Core-7479-1: NOTICE from src (RID:%u: (%s)): store %s:%u reports it has not received TIR. Possible misconfiguration?

    • CoreAPI-6783-1: lbm_socket_sendb send error occurred while sending. The message also contains additional, specific information supplied by the operating system.

    • Gwd-6033-353: endpoint portal [%s] received one or more UIM control messages with no stream information - these will be dropped.

    • Gwd-6033-593: peer portal [%s] received one or more UIM control messages with no stream information - these will be dropped.

5.1.2. Bug Fixes

  • Corrected a problem with UM sockets that could cause a crash while binding TCP ports.

  • Fixed a problem with lbm_set_umm_info() that caused it to always return 0 (success). The function now returns -1 on a parsing error in a returned UMM configuration file, a UMM Daemon logon error, or a communication error with the UMM Daemon (see the sketch after this list).

  • Corrected a problem in which plain text (non-XML) UM configuration files larger than 1024 bytes, when loaded via a URL (http or FTP), produced line malformed errors. This fix resolves the following Known Issue: "Retrieving plain text (not XML) configuration files via an http or FTP request is limited to configuration files of 1K in size. Configuration files over 1K will result in an error: error reading config file. Informatica will fix this in a future release. A workaround is to convert the plain text configuration file to an XML configuration file."
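
  The following is a minimal, hypothetical sketch of checking the corrected return code of lbm_set_umm_info(). The connection-string argument shown is a placeholder only and is not taken from this release note; consult the UMM documentation for the actual parameter format.

      #include <stdio.h>
      #include <stdlib.h>
      #include <lbm/lbm.h>

      int main(void)
      {
          /* Hypothetical placeholder argument; see the UMM documentation for the
           * real parameter format expected by lbm_set_umm_info(). */
          if (lbm_set_umm_info("user:password@127.0.0.1:15701") == -1) {
              /* As of this release, -1 indicates a parsing error in the returned UMM
               * configuration file, a UMM Daemon logon error, or a communication error. */
              fprintf(stderr, "lbm_set_umm_info failed: %s\n", lbm_errmsg());
              return EXIT_FAILURE;
          }
          /* ... continue with normal context creation and application setup ... */
          return EXIT_SUCCESS;
      }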

5.1.3. Known Issues

  • The .NET API does not support message selectors that contain Unicode symbols; using one causes receiver creation to fail.

  • UM does not support the use of Unicode PCRE characters in wildcard receiver patterns on any system that communicates with an HP-UX or AIX system.

  • Hot Failover is not currently supported with Multitransport Threads.

  • Configuring sources with LBT-RU, source-side filtering, and adaptive batching may cause a crash. All three options must be set to cause the problem. Informatica is working to address this issue.

  • When using the UM Gateway, Informatica recommends that you do not enable the Multitransport Threads feature. Informatica neither supports nor tests the operation of Multitransport Threads across the UM Gateway.

  • Multitransport Threads do not support persistent stores (UMP) or queues (UMQ).

  • When using Event Queues with the Java API on Mac OS X kernel 9.4, core dumps have occurred. Mac OS X kernel versions prior to 9.4 have not produced this behavior. Informatica is investigating this issue.

  • Sending LBT-IPC messages larger than 65,535 bytes is not supported when ordered_delivery is set to 0 (zero), unless you also set transport_lbtipc_behavior to receiver_paced (see the configuration sketch after this list).

  • When using the LBT-RDMA transport with Java applications, a segfault can occur if you kill a receiver with Ctrl-C. As a workaround, use the JVM option, -Xrs. Informatica is investigating this problem.

  • If you use the current version of VMS (3.2.8), UMS 4.1 issues the following warning: LOG Level 5: LBT-RDMA: VMS Logger Message (Error): vmss_create_store: 196[E] vms_listen: rdma_bind_addr failed (r=-1). This warning indicates that rdma_bind failed for Ethernet interfaces, which is expected behavior. Currently, VMS attempts rdma_bind on all interfaces. When released, VMS version 3.2.9 will run rdma_bind only on InfiniBand-capable interfaces.

  • When using Automatic Monitoring with ud_acceleration and the epoll file descriptor option, UMS may leave a monitoring thread running after context deletion. Informatica is investigating this problem.

  • At the present time, 32-bit applications cannot interact with 64-bit applications using the LBT-IPC transport. As a result, a 64-bit UM Gateway cannot interact with a 32-bit application using LBT-IPC; it can only interact with a 64-bit application. Likewise, a 32-bit UM Gateway can only interact with a 32-bit application.

  • The UM Gateway does not currently support gateway failover, MIM, persistence, queuing, JMS, or Ultra Load Balancing. Gateway support of these features is in development.

  • The UM Gateway is not supported as a standalone Windows service at this time. This will be resolved in a future release.

  • The UM Gateway does not currently support a four gateway "full-mesh" configuration (i.e., all gateways connected).

  • If using LBT-RDMA across the UM Gateway and you exit the gateway with Ctrl-C, you may see a segfault. Informatica is aware of this and has not observed any ill effects from this segfault.

  • Informatica recommends against configuring UM Gateways in a ring of peer portals. Instead, use configurations that employ gateway endpoints (parallel paths) to break the ring.

  • Informatica recommends not stopping and restarting UM Gateways within the transport's activity timeout period (transport_*_activity_timeout, which defaults to 60 seconds for LBT-RU and LBT-RM).
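
  The LBT-IPC item above refers to two configuration options. The following plain-text UM configuration sketch (option names taken from that item; comments and grouping are illustrative only) combines ordered_delivery 0 with a receiver-paced LBT-IPC transport so that messages larger than 65,535 bytes can be sent:

      # Illustrative UM configuration fragment (plain-text format)
      receiver ordered_delivery 0
      source transport_lbtipc_behavior receiver_paced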

5.1.4. Version Compatibilities

  • Although UMS 5.0 applications should run well with LBM 4.2 UM Gateways, all Gateways should always run the same version.

5.2. UMP 5.3.3

5.2.1. Updates

  • The following error log messages have been added to the Operations Guide.

    • Core-7421-1: Source Side Filtering Init message with no return IP, using transport IP (%s).

    • Core-7479-1: NOTICE from src (RID:%u: (%s)): store %s:%u reports it has not received TIR. Possible misconfiguration?

    • CoreAPI-6783-1: lbm_socket_sendb send error occurred while sending. The message also contains additional, specific information supplied by the operating system.

    • Gwd-6033-353: endpoint portal [%s] received one or more UIM control messages with no stream information - these will be dropped.

    • Gwd-6033-593: peer portal [%s] received one or more UIM control messages with no stream information - these will be dropped.

  • When a receiver tries to register with a store using a RegID already in use by another receiver, the application error log message now includes the RegID, for example: UME registration error: error in UME registration, RegID 1508542809 is in use by a different receiver at store 0. Previously, the message always displayed RegID 0 and no store number.

  • When a source tries to register with a store using a RegID already in use by another source, the application error log message now includes the RegID, store number, and the store's IP address and port, for example: Error registering source with UME store: error in UME registration, RegID 2323214065 is in use by a different source at store 1 192.168.101.130.21112. Previously, the message always displayed RegID 0 and no store information.

5.2.2. Bug Fixes

  • The umestored Web Monitor has been changed to correctly display the Request Receive Rate, Service Rate and Drop Rate. Previously, the rates reflected only the activity of the most recent second rather than a true per-second rate, and as a result often displayed 0 (zero). A link has been added to the Persistent Store Page to reset the rate calculation. The rates can be reset on a per-store basis.

  • Corrected an infrequent problem that could cause a store (umestored) to perpetually send TSNI requests to a UM source at the retransmit_request_interval after the UM source has resumed sending messages.

  • Corrected a problem with the umestored daemon that resulted in the premature deletion of source state information if umestored was restarted. If the source had an active receiver before umestored was shut down, the configured source-state-lifetime was not restored, so the default lifetime of 0 seconds and the default source-activity-timeout of 30 seconds were enforced.

  • Corrected an issue where stability acknowledgements from a persistent store to the UM source during a store proxy source election could be sent with invalid topic and transport indexes, which the source would discard.

5.2.3. Known Issues

  • Creating multiple receivers on the same topic in the same context using the Java API may result in a crash when calling Dispose() if the topic is a persistent data stream. Informatica recommends either protecting the calls to Dispose() to prevent concurrent calls, or using a single receiver per topic and delivering messages to different callbacks in application code. This will be fixed in a future release.

  • UMP stores sometimes fail to send message stability acknowledgements to sources for messages sent while the store was being reported unresponsive by the source. Informatica is investigating this problem.

  • A UMP store daemon, umestored, may crash if you enable proxy sources and the daemon is configured with a very restrictive port or multicast address range (transport_lbtrm_multicast_address_low - transport_lbtrm_multicast_address_high, transport_tcp_port_low - transport_tcp_port_high, etc.). Informatica recommends that you use a wide range (at least 1000 ports) in your UM configuration file if you enable proxy sources for umestored (see the configuration sketch after this list). Informatica is investigating this problem.

  • The UMP store daemon, umestored, stops persisting messages to disk if the store has a loss condition that is not resolved prior to an EOS event. As a workaround, Informatica recommends setting receiver delivery_control_loss_check_interval 2500 in the UM configuration file, not the umestored XML configuration file (see the configuration line after this list). The value of 2500 assumes the default *_nak_generation_interval. See Preventing Undetected Loss for more information. Informatica is investigating this problem.

  • Receivers using event queues and Spectrum with UMP can experience a SIGSEGV while shutting down if events still exist on the event queue when it is deleted. As a workaround, use LBM_EVQ_BLOCK when dispatching event queues. During application shutdown, call lbm_evq_unblock() after deleting receivers associated with the event queue, but before deleting any context objects; once the dispatch thread exits, it is safe to proceed with context deletion (see the shutdown sketch after this list). Informatica is working on a solution to this problem.

  • When running a store on a Solaris machine, you may experience registration failures after a few minutes, with the store repeatedly reporting the error [WARNING]: wait returned error 4 [select: (22) Invalid argument]. Changing fd_management_type to devpoll prevents this problem (see the configuration line after this list). Informatica is investigating this problem.

  • UMP proxy source elections can only occur among stores in the same topic resolution domain connected directly via peer links. Informatica is investigating this problem.
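
  For the proxy source range issue above, the following illustrative UM configuration sketch shows the kind of wide ranges Informatica recommends (the specific values are examples only, not taken from this release note):

      # Illustrative UM configuration fragment: wide ranges for proxy sources
      context transport_tcp_port_low 14000
      context transport_tcp_port_high 15000
      context transport_lbtrm_multicast_address_low 224.10.10.10
      context transport_lbtrm_multicast_address_high 224.10.14.10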
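
  The loss-check workaround above corresponds to a single receiver-scope line in the plain-text UM configuration file (not the umestored XML configuration file); the value 2500 assumes the default *_nak_generation_interval:

      # UM configuration file (plain text), not the umestored XML configuration file
      receiver delivery_control_loss_check_interval 2500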
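
  The event queue shutdown ordering above is sketched below using the call names referenced in that item. This is an outline only, not a complete program; exact signatures and error handling should be taken from the UM C API documentation.

      #include <lbm/lbm.h>

      /* Sketch only: the shutdown order follows the workaround text above. The
       * dispatch thread is assumed to be blocked in
       * lbm_event_dispatch(evq, LBM_EVQ_BLOCK). */
      void shutdown_in_order(lbm_context_t *ctx, lbm_event_queue_t *evq, lbm_rcv_t *rcv)
      {
          lbm_rcv_delete(rcv);         /* 1. delete receivers tied to the event queue  */
          lbm_evq_unblock(evq);        /* 2. unblock the dispatch thread; join/wait for
                                        *    that thread to exit before proceeding
                                        *    (waiting is omitted from this sketch)     */
          lbm_context_delete(ctx);     /* 3. only now delete the context               */
          lbm_event_queue_delete(evq); /* 4. finally, delete the event queue           */
      }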
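
  The Solaris workaround above is a single context-scope line in the UM configuration file, for example:

      # Illustrative UM configuration fragment for Solaris stores
      context fd_management_type devpoll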

5.3. UMQ 5.3.3

5.3.1. Bug Fixes

  • Corrected an infrequent condition that caused the fatal assert [prop_offset >= 0] if a UMQ receiver received a message containing message properties.

  • Corrected an issue that prevented a UMQ receiver for a dead letter topic from registering with a queue if a particular queue instance did not have a record of the creation of the dead letter topic.

  • Corrected a problem that caused a segmentation fault when restarting a queue if the queue daemon (umestored) was configured with a disk sinc log file (sinc-log-filename).

5.3.2. Known Issues

  • The getJMSRedelivered method always returns false for redelivered messages when Queue Session IDs are used and the topic's message-reassignment-timeout is set to 0 (zero).

  • The UM configuration file specified for a queue cannot also contain source attributes for a store, such as source ume_store 127.0.0.1:14567 or source ume_store_name NYstore2.

  • When servicing large message retrieval requests from a queue browser, the queue may become busy enough that the receiver perceives it as unresponsive and re-registers with the queue, canceling the message retrieval request. Informatica recommends limiting large message retrievals to 1000 messages at a time, waiting for each retrieval to finish before beginning the next retrieval request.

Copyright (c) 2004 - 2014 Informatica Corporation. All rights reserved.