25. Release LBM 4.2.2 / UME 3.2.2 / UMQ 2.1.2 - April 2011

25.1. LBM 4.2.2

25.1.1. Bug Fixes

  • In the UM Gateway, log messages that appear upon restart indicating a forwarding entry or ctxinst have been changed from WARNING to INFORMATIONAL. These log messages are normal and appear as the gateway discovers the surrounding environment and sets up the required forwarding information.

  • Resolved a known issue with the UM Gateway that was reported in LBM 4.2.1 Known Issues. Previously, if you brought down a UM Gateway, reconfigured its number and/or type of portals, and then restarted it, you also had to restart any other gateways to which the reconfigured gateway had previously connected or currently connects. Now you do not have to restart any connecting gateways.

  • Fixed a problem with the Java API that could cause a crash of the JVM if a user's source event callback threw an uncaught exception.

  • Corrected a problem that prevented LBM 4.0 receivers from joining multiple pre-LBM 4.0 sources sending on the same topic if the topic advertisements for those sources were sent before the receivers were created.

25.1.2. Known Issues

  • When using Event Queues with the Java API on Mac OS X kernel 9.4, core dumps have occurred. Mac OS X kernel versions prior to 9.4 have not produced this behavior. Informatica is investigating this issue.

  • When using LBT-IPC with ordered_delivery set to 0 (zero), a seg fault can occur when sending messages larger than 65,535 bytes. The seg fault occurs when fragments are lost. Setting transport_lbtipc_behavior to receiver_paced avoids the seg fault by eliminating loss, as shown in the sketch below. Informatica is investigating this issue.
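
    The following is a minimal C-API sketch of this workaround, assuming the source already uses the LBT-IPC transport; the helper function name is hypothetical, error handling is omitted, and only the option name and value come from this item.

      #include <lbm/lbm.h>

      /* Sketch: build source topic attributes with the LBT-IPC transport in
       * receiver-paced mode, so that fragment loss (and the resulting seg
       * fault) cannot occur. */
      static lbm_src_topic_attr_t *create_receiver_paced_attr(void)
      {
          lbm_src_topic_attr_t *sattr;

          lbm_src_topic_attr_create(&sattr);
          lbm_src_topic_attr_str_setopt(sattr, "transport_lbtipc_behavior",
                                        "receiver_paced");
          return sattr;  /* pass to lbm_src_topic_alloc(), delete when done */
      }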

  • When using the LBT-RDMA transport with Java applications, a segfault can occur if you kill a receiver with Ctrl-C. As a workaround, use the JVM option, -Xrs. Informatica is investigating this problem.

  • If you use the current version of VMS (3.2.8), LBM 4.1 issues the following warning: LOG Level 5: LBT-RDMA: VMS Logger Message (Error): vmss_create_store: 196[E] vms_listen: rdma_bind_addr failed (r=-1). This warning indicates that rdma_bind failed for Ethernet interfaces, which is expected behavior. Currently, VMS attempts rdma_bind on all interfaces. When released, VMS version 3.2.9 will run rdma_bind only on InfiniBand-capable interfaces.

  • When using Automatic Monitoring with ud_acceleration and the epoll file descriptor option, LBM may leave a monitoring thread running after context deletion. This situation produces the following error: VMA ERROR : epoll_wait_call:36:epoll_wait_call() epfd 48 not found. Informatica is investigating this problem.

  • A requesting application using a UM Gateway may not receive the expected number of responses. The problem occurs if the responses exceed the maximum datagram size and are therefore fragmented, and the gateway must forward simultaneous fragmented responses. In this case, the fragments may become intermingled, resulting in a response not being delivered as expected. The requesting application receives a warning similar to: WARNING: failed assertion [offset==0] at line 1627 in ../../../../src/lib/lbm/lbmqr.c. Informatica is investigating this problem.

  • At the present time, 32-bit applications cannot interact with 64-bit applications using the LBT-IPC transport. As a result, a 64-bit UM Gateway cannot use LBT-IPC to interact with a 32-bit application; it can only interact with a 64-bit application. Likewise, a 32-bit UM Gateway can only interact with a 32-bit application.

  • The UM Gateway does not currently support queuing; it supports only streaming and persistence. This will be resolved in a future release.

  • The UM Gateway is not supported as a standalone Windows service at this time. This will be resolved in a future release.

  • When using the UM Gateway in a four-gateway "full-mesh" configuration (i.e., all gateways connected to each other), the following instabilities have been observed:

    • Requesting applications can receive multiple or unending responses.

    • Restarting a single gateway has resulted in the crash of other gateways.

    • Running the same set of tests simultaneously in 2 opposite directions over a set of gateways causes fatal asserts in the UM Gateways. Although this is not a typical user configuration, more complex gateway configurations could simulate this effect. The UM Gateway is designed to successfully forward the same set of topic messages sent both ways across a set of gateways.

    These behaviors have not been seen in other, more typical gateway configurations. Informatica is investigating this problem.

  • Under certain extreme conditions manufactured by configuring atypically low timeout settings, the UM Gateway has been observed to terminate with various fatal assertions. We believe this to be the result of a race condition exploited by the unusually low timer settings. This issue has not been seen with less aggressive or default timer settings. Informatica continues to investigate this problem.

  • In some cases, a UM Gateway instance does not establish a reliable peer connection with another UM Gateway instance residing on the same Solaris 64-bit host. At this time, Informatica does not recommend configuring multiple peer-connected gateway instances on a single Solaris 64-bit host. Informatica continues to investigate this problem.

  • If you use LBT-RDMA across the UM Gateway and exit the gateway with Ctrl-C, you may see a segfault. Informatica is aware of this and has not observed any ill effects from this segfault.

  • When using the UM Gateway in a failover configuration using LBT-RDMA, if the gateways experience multiple failures, receivers may experience deafness (unable to discover sources). For this to occur, all gateways must fail once and at least one gateway must fail more than once.

25.2. UME 3.2.2

25.2.1. Bug Fixes

  • Added the -t option to specify store names to the .NET and Java versions of umesrc.

25.2.2. Known Issues

  • Receivers using event queues and Spectrum with UME can experience a SIGSEGV while shutting down if events still exist on the event queue when it is deleted. As a workaround, use LBM_EVQ_BLOCK when dispatching event queues. During application shutdown, call lbm_evq_unblock() after deleting receivers associated with the event queue, but before deleting any context objects. Once the dispatch thread exits, it is safe to proceed with context deletion. 29West is working on a solution to this problem.
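
    The following is a compact C-API sketch of that shutdown ordering. The evq, rcv, and ctx variables and the thread running the dispatch loop are assumed to already exist in the application; LBM_EVQ_BLOCK and lbm_evq_unblock() are used as named above.

      /* Dispatch thread: block in the event queue until unblocked. */
      lbm_event_dispatch(evq, LBM_EVQ_BLOCK);

      /* Application shutdown sequence (main thread): */
      lbm_rcv_delete(rcv);        /* 1. delete receivers bound to the event queue */
      lbm_evq_unblock(evq);       /* 2. wake the blocked dispatch thread */
      /* 3. wait for the dispatch thread to exit (e.g., pthread_join()) */
      lbm_context_delete(ctx);    /* 4. contexts may now be deleted safely */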

  • When running a store on a Solaris machine, you may experience registration failures after a few minutes. The store repeatedly reports the error, [WARNING]: wait returned error 4 [select: (22) Invalid argument]. Changing fd_management_type to devpoll prevents this problem. Informatica is investigating this problem.
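
    For an application context, the workaround can be applied programmatically as in the sketch below (error handling omitted); for the store daemon itself, the same fd_management_type setting would normally be placed in the LBM configuration file that the store loads.

      lbm_context_attr_t *cattr;
      lbm_context_t *ctx;

      /* Use /dev/poll ("devpoll") for file descriptor management instead of
       * the select()-based handling implicated by the error above. */
      lbm_context_attr_create(&cattr);
      lbm_context_attr_str_setopt(cattr, "fd_management_type", "devpoll");
      lbm_context_create(&ctx, cattr, NULL, NULL);
      lbm_context_attr_delete(cattr);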

25.3. UMQ 2.1.2

25.3.1. Bug Fixes

  • Fixed a problem that occurred when using either the Java API or the .NET API and a source that sends using a per-send client object. The problem resulted in a crash or null pointer exception if the source was configured with both ULB and UME settings. If configured with only ULB settings, a memory leak of the per-send client object occurred. The following shows how you might send messages using LBMContext.send() in Java or .NET with a per-send client object.

    LBMSourceSendExInfo exinfo = new LBMSourceSendExInfo();
    exinfo.setClientObject(something);
    context.send(...., exinfo);

Copyright (c) 2004 - 2014 Informatica Corporation. All rights reserved.