New Transport. Added the LBT-RDMA transport, which is a Remote Direct Memory Access (RDMA) LBM transport that allows sources to publish topic messages to a shared memory area from which receivers can read topic messages. LBT-RDMA runs across InfiniBand and 10 Gigabit Ethernet hardware. See LBT-RDMA for more information.
Added three new configuration options that cap the total amount of memory a transport transmission window uses, including both data and overhead.
Accelerated Multicast support. Added another transport acceleration option to LBM with support for Accelerated Multicast, which applies to LBT-RM. You turn this acceleration on with the ud_acceleration option. This feature requires InfiniBand or 10 Gigabit Ethernet hardware.
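As a minimal sketch, assuming the standard LBM plain-text configuration-file syntax (scope, option name, value); the context scope and the value 1 are assumptions, only the option name comes from the note above:

```
# Sketch: enable Accelerated Multicast for LBT-RM
context ud_acceleration 1
```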
Redesigned UM Gateway. The Gateway has been redesigned to allow full bidirectional forwarding, multi-hop forwarding, tunnel compression, encryption, and more. This new design requires a configuration change. For more information, see the new UM Gateway user guide.
Multi-Transport Threads. LBM has added the ability to distribute data delivery across multiple CPUs by using a receiving thread pool. Receivers created with the configuration option use_transport_thread set to 1 use a thread from the thread pool instead of the context thread. The receive_thread_pool_size option controls the pool size. See Multi-Transport Threads.
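The two options above might be combined in an LBM configuration file roughly as follows; the scopes shown and the pool size of 4 are assumptions, not values from the release note:

```
# Sketch: distribute delivery across a receiving thread pool
context receive_thread_pool_size 4   # assumed context scope; pool of 4 threads
receiver use_transport_thread 1      # assumed receiver scope; use a pool thread
```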
Added a sequential mode for LBT-IPC which is controlled by transport_lbtipc_receiver_operational_mode. This mode requires your application to call lbm_context_process_lbtipc_messages() to process LBT-IPC messages instead of LBM automatically spawning a thread to do so.
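A hedged configuration sketch for enabling this mode, assuming LBM config-file syntax; the value name sequential and the context scope are assumptions based on the option's description:

```
# Sketch: application processes LBT-IPC messages itself
# via lbm_context_process_lbtipc_messages()
context transport_lbtipc_receiver_operational_mode sequential
```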
New Monitoring Statistics: Added monitoring statistics for the new LBT-RDMA transport in the lbm_rcv_transport_stats_lbtrdma_t_stct and lbm_src_transport_stats_lbtrdma_t_stct structures. These statistics are available to the C API, the Java API and the .NET API. The new statistics have also been added to the Ultra Messaging® MIB and the InterMapper probe files. In addition, all example applications support the statistics.
Added the ability to name a context with lbm_context_set_name() and advertise its existence at the resolver_context_advertisement_interval. lbm_context_get_name() was also added. This mechanism for naming and advertising LBM contexts facilitates UM Gateway operation, especially for UME and UMQ.
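As an illustrative configuration fragment (the interval value and its units are assumptions; only the option name comes from the note above):

```
# Sketch: advertise the named context at a fixed interval
context resolver_context_advertisement_interval 1000
```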
Zero Object Delivery (ZOD) has been implemented for .NET, which allows .NET messaging receivers to deliver messages to an application with no per-message object creation. For more information, see Zero Object Delivery (ZOD) in the 29West Knowledgebase.
Zero Incoming Copy (ZIC) has been implemented for .NET, which provides access to message data directly through a byte pointer returned by the LBMMessage.dataPointer() method. For more information see Zero Incoming Copy (ZIC) in the 29West Knowledgebase.
TCP-LB has now been enhanced to allow fragmented messages to be delivered as fragments when you set ordered_delivery to zero.
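A minimal configuration sketch, assuming receiver scope for ordered_delivery:

```
# Sketch: deliver individual fragments instead of reassembled messages
receiver ordered_delivery 0
```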
Daemons built on 32-bit Linux previously linked with Smartheap 8.1 are now linked with Smartheap 9.0.1.
Added five new configuration options that establish independent datagram size limits for each transport. Deprecated transport_datagram_max_size.
Added the ability to continually report statistics based on a saved search term, such as a transport ID, to the example application lbmmoncache.c. Also updated the example applications lbmmoncache.c and lbmmon.c to accept the -c config argument, and added lbm_context_topic_resolution_request() to lbmmoncache.c to help resolve any quiescent topics.
Updated event processing to prevent short timers, or a large number of timers expiring at the same time, from starving network processing. Network processing is now interspersed with timer expirations under such conditions.
Corrected an incompatibility problem between LBM 4.0 and pre-4.0 versions. When running LBM 4.0, any pre-4.0 receivers were unable to discover multiple sources of the same topic, while 4.0 receivers were able to discover all sources sending on the same topic. LBM 4.1 resolves this problem so that all receivers discover all sources sending on the same topic.
Corrected a condition where a particular sequence of lbm_src_topic_alloc, lbm_src_delete, and lbm_src_create API calls would result in a fatal assertion.
Corrected a condition that caused a seg fault when using LBT-IPC to send large, fragmented messages (over approximately 65,000 bytes) with ordered_delivery set to 0 (zero).
Corrected a problem that caused spurious context source wakeup events to be delivered for Unicast Immediate Messaging (UIM) when the immediate messaging had never actually been blocked.
Corrected a problem that sometimes resulted in a seg fault if you delete a source immediately after sending a message. The seg fault occurred when LBM flushed the messages for that source out of the batch.
Changed lbm_context_topic_resolution_request() so that a value of 0 (zero) for the duration_sec parameter results in only one request being sent.
Changed the handling of arrival order and arrival order reassembly to expire records about lost packets under some conditions. This change rectifies cases where the last set of lost packets on a stream would not be reported as unrecoverably lost before the loss disappeared. With arrival order reassembly, unrecoverable loss was rarely reported.
Fixed some issues with Java context statistics methods that could result in invalid memory being accessed.
Fixed an issue where calling LBMContext.getReceiverStatistics() could cause an access violation.
When using Event Queues with the Java API on Mac OS X kernel 9.4, core dumps have occurred. Mac OS X kernel versions prior to 9.4 have not produced this behavior. 29West is investigating this issue.
When using LBT-IPC, a seg fault can occur when sending messages larger than 65,535 bytes when ordered_delivery has been set to 0 (zero). The seg fault occurs when fragments are lost. Setting transport_lbtipc_behavior to receiver_paced avoids the seg fault by eliminating loss. 29West is investigating this issue.
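The workaround above could be expressed in a configuration file roughly as follows (the source scope is an assumption):

```
# Sketch: receiver-paced LBT-IPC eliminates loss, avoiding the seg fault
source transport_lbtipc_behavior receiver_paced
```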
When using the LBT-RDMA transport with Java applications, a seg fault can occur if you kill a receiver with Ctrl-C. As a workaround, use the JVM option -Xrs. 29West is investigating this problem.
If you use the current version of VMS (3.2.8), LBM 4.1 issues the following warning: LOG Level 5: LBT-RDMA: VMS Logger Message (Error): vmss_create_store: 196[E] vms_listen: rdma_bind_addr failed (r=-1). This warning indicates that rdma_bind failed for Ethernet interfaces, which is expected behavior. Currently, VMS attempts rdma_bind on all interfaces. When released, VMS version 3.2.9 will run rdma_bind only on InfiniBand-capable interfaces.
Updated the umestored <interface> attribute of a <store> or <queue> element to accept CIDR notation, e.g., 10.29.3.0/24.
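For illustration, a umestored configuration fragment using CIDR notation might look like the following; all names and content other than the interface value are hypothetical:

```
<store name="example-store" interface="10.29.3.0/24">
  ...
</store>
```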
Both UME and UMQ now deliver the Registration Complete source event every time a quorum is established instead of only once. More than one event delivery indicates quorum was lost and re-established.
Implemented an internal function to offload all non-critical log messages to a separate thread to avoid blocking I/O on critical threads inside the umestored and umestoreds daemons.
Fixed a problem that caused a UME source to erroneously register with one more configured store than needed after a failover when using quorum/consensus and the number of stores configured in the group is greater than the configured group size.
Receivers using event queues and Spectrum with UME can experience a SIGSEGV while shutting down if events still exist on the event queue when it is deleted. As a workaround, use LBM_EVQ_BLOCK when dispatching event queues. During application shutdown, call lbm_evq_unblock() after deleting receivers associated with the event queue, but before deleting any context objects. Once the dispatch thread exits, it is safe to proceed with context deletion. 29West is working on a solution to this problem.
Added a new feature called Ultra Load Balancing (ULB) that provides Once And Only Once (OAOO) delivery to receiving applications, but without a queue. The absence of a queue or broker in the message path brings ultra-low latency to the load balancing capabilities of this UMQ feature. Sources perform message assignment, delivery is receiver-paced, and messages are not persisted. For more information, see The Ultra Messaging, Queuing Edition User Guide.
UMQ now supports messages with user-supplied, chained application headers.
Added support for user-defined message headers, called application headers, to the C and Java APIs. Application headers are optional and reside outside the normal message payload.
Corrected a condition that caused the Unicast Topic Resolver (lbmrd) to segfault when processing UMQ store advertisements.
Copyright (c) 2004 - 2014 Informatica Corporation. All rights reserved.