The behavior of the context attribute network_compatibility_mode was enhanced to provide compatibility with each UM release from LBM 3.6 to the present. See also Network Compatibility Mode.
The UM Gateway is now a separate download package.
Added additional Microsoft® redistributable DLLs to UM Microsoft Windows® packages to accompany MSVCR80.
Off Transport Recovery (OTR) - Modified this feature to flag messages recovered by OTR with the OTR flag. In the previous version, OTR flagged recovered messages with the RETRANSMIT flag. (This may involve a code change for some users.) Also, OTR now initiates independently of receivers' NAK behavior. Added three new OTR configuration options.
See also Off-Transport Recovery (OTR).
The SmartHeap library has been removed from the UM distribution, as it is not used by UM.
Changed all UM daemons to print consistent version information when using the -h option.
Important - Users are strongly advised not to configure any port or port range with values that fall within your platform's ephemeral port range. This includes the umestored port, LBT-TCP source ports, resolver UDP ports, LBT-RM destination and source ports, or any other port that you can configure. All UM default ports explicitly avoid all platform ephemeral port values.
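As a minimal illustration, the LBT-TCP source port range can be pinned to a block outside the ephemeral range in the UM configuration file. The values below are examples only, not recommendations, and assume a platform whose ephemeral range begins at 32768:

    context transport_tcp_port_low 14371
    context transport_tcp_port_high 14390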
Resolved the Known Issue: When using the LBT-IPC Resource Manager tool to view the currently allocated IPC transport sessions, the Session ID is presented in the reverse byte order from what is given to the application via the source string. The Session ID byte order and the shared memory region name now present correctly.
Corrected various memory leaks associated with the Arrival Order with Reassembly feature (ordered_delivery = -1).
Corrected a problem with Arrival Order with Reassembly that could cause a crash when reassembling messages with more than 256 fragments (ordered_delivery = -1).
Corrected an issue with the UM Gateway that prevented a persistent source from resolving the name of a store that had been restarted.
Added additional validation logic to Topic Resolution advertisement (TIR) processing to prevent malformed packets from causing an invalid memory access, potentially leading to a crash.
Corrected a problem with the LBT-IPC transport that caused a UM source to abort when multiple receivers were killed while the source was sending data. A lock was added to serialize receiver deletion.
Fixed an issue in the C API only that could cause a SEGV if you reuse a message properties object.
Fixed an issue in the C API only where properties could be duplicated if you reused the same message properties object.
Fixed a small memory leak when creating multiple receivers on the same topic and context.
Corrected a problem with LBT-IPC on Microsoft Windows that caused receivers to stop receiving messages.
Corrected the Java API isFragment() method to return true if a message is a fragment and false if it is an assembled message, just like the .NET API. Also created a new C API function, lbm_msg_is_fragment().
The .NET API does not support message selectors with Unicode symbols; using one causes receiver creation to fail.
UM does not support the use of Unicode PCRE characters in wildcard receiver patterns on any system that communicates with an HP-UX or AIX system.
Hot Failover is not currently supported with Multitransport Threads.
Configuring sources with LBT-RU, source-side filtering, and adaptive batching may cause a crash. All three options must be set to cause the problem. Informatica is working to address this issue.
When using the UM Gateway, Informatica recommends that you do not enable the Multitransport Threads feature. Informatica does not support or test the operation of Multitransport Threads across the UM Gateway.
Multitransport Threads do not support persistent stores (UMP) or queues (UMQ).
When using Event Queues with the Java API on Mac OS X kernel 9.4, core dumps have occurred. Mac OS X kernel versions prior to 9.4 have not produced this behavior. Informatica is investigating this issue.
Sending LBT-IPC messages larger than 65,535 bytes is not supported when ordered_delivery has been set to 0 (zero) unless you set transport_lbtipc_behavior to receiver_paced.
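For example, a minimal configuration sketch of the workaround described above (the option scopes shown are assumptions based on typical UM option scoping):

    receiver ordered_delivery 0
    source transport_lbtipc_behavior receiver_paced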
When using the LBT-RDMA transport with Java applications, a segfault can occur if you kill a receiver with Ctrl-C. As a workaround, use the JVM option, -Xrs. Informatica is investigating this problem.
If you use the current version of VMS (3.2.8), UMS 4.1 issues the following warning: LOG Level 5: LBT-RDMA: VMS Logger Message (Error): vmss_create_store: 196[E] vms_listen: rdma_bind_addr failed (r=-1). This warning indicates that rdma_bind failed for ethernet interfaces, which is expected behavior. Currently, VMS attempts rdma_bind on all interfaces. When released, VMS version 3.2.9 will only run rdma_bind on infiniband-capable interfaces.
When using Automatic Monitoring with ud_acceleration and the epoll file descriptor option, UMS may leave a monitoring thread running after context deletion. Informatica is investigating this problem.
At the present time, 32-bit applications cannot interact with 64-bit applications using the LBT-IPC transport. As a result, a 64-bit UM Gateway cannot interact with a 32-bit application using LBT-IPC; it can only interact with a 64-bit application. Likewise, a 32-bit UM Gateway can only interact with a 32-bit application.
The UM Gateway does not currently support gateway failover, MIM, persistence, queuing, JMS, or Ultra Load Balancing. Gateway support of these features is in development.
The UM Gateway is not supported as a standalone Windows service at this time. This will be resolved in a future release.
The UM Gateway does not currently support a four gateway "full-mesh" configuration (i.e., all gateways connected).
If using LBT-RDMA across the UM Gateway and you exit the gateway with Ctrl-C, you may see a segfault. Informatica is aware of this and has not observed any ill effects from this segfault.
Informatica recommends against configuring UM Gateways in a ring of peer portals - use configurations utilizing gateway endpoints (parallel paths) to break the ring.
Informatica recommends not stopping and restarting UM Gateways within the transport's activity timeout period (transport_*_activity_timeout defaults to 60 seconds for LBTRU and LBTRM).
Although UMS 5.0 applications should run well with LBM 4.2 UM Gateways, all Gateways should always run the same version.
Receiver-paced Persistence (RPP) - This feature retains RPP messages until all RPP receivers acknowledge consumption. The repository maintains an accurate count of all RPP receivers. You enable RPP with UM configuration options. No special API calls are needed. See Receiver-paced Persistence Operations.
Added the repository type reduced-fd, which reduces the number of file descriptors used by UM. See Source Repositories.
Added the UM configuration option, ume_flight_size_bytes.
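A minimal usage sketch in a UM configuration file (the source scope and the value are assumptions for illustration only):

    source ume_flight_size_bytes 1000000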
Enabled UM to send retransmissions from a thread separate from the main context thread so as not to impede live message data processing. The <store> configuration option, retransmission-request-processing-rate, sets the store's capacity to process retransmission requests. See also Options for a Store's ume-attributes Element and Receiver-paced Persistence Operations.
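A sketch of how this option might appear in a umestored XML configuration file, within a store's ume-attributes element (the value and the surrounding element structure are illustrative assumptions):

    <store name="store1" port="14567">
      <ume-attributes>
        <option type="store" name="retransmission-request-processing-rate" value="1000"/>
      </ume-attributes>
    </store>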
Added the UM configuration option, ume_repository_ack_on_reception. When set by a UM source in an RPP configuration, this option's value can reset the repository's option, repository-allow-ack-on-reception. See also Options for a Topic's ume-attributes Element and Receiver-paced Persistence Operations.
Added the UM configuration option, ume_write_delay. When set by a UM source in an RPP configuration, this option's value can reset the repository's option, repository-disk-write-delay. See also Options for a Topic's ume-attributes Element and Receiver-paced Persistence Operations.
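A minimal sketch of these two source options in a UM configuration file (the values are illustrative assumptions; as described above, they can reset the repository's repository-allow-ack-on-reception and repository-disk-write-delay options):

    source ume_repository_ack_on_reception 1
    source ume_write_delay 100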
UMP store daemons (umestored) now include a hexadecimal session ID in log messages that include a registration ID. Also, all registration IDs are now printed as unsigned decimal numbers.
Added improvements to speed the start-up of the UMP store daemon (umestored) when run as a Microsoft Windows service. At startup, the daemon creates a separate thread to communicate with the Microsoft Windows Service Manager while the cache file loads.
Added the ability in Ultra Messaging® Manager to configure the LBM license in the umm.properties file.
Added UMP liveness detection creation and deletion callbacks to the sample applications, umesrc.c, umesrc.cs and umesrc.java.
umestored now logs a warning if it releases a message and that message has not been consumed by at least one receiver whose state is maintained in umestored. The warning has the identifier: Store-6007-2.
Resolved the Known Issue: If a source application stops and you restart it before the receiving application declares the EOS event, the receiving application does not send a new keepalive message. The source requires a keepalive message in order to declare a receiver "alive."
Corrected a problem that caused a restarted store to delete source and topic information after a short period of time. A restarted store now correctly persists the source lifetime timeouts, which prevents the store from prematurely deleting the source and topic information.
Fixed a memory leak that occurred in the UMP store daemon (umestored) when deleting proxy sources.
Fixed a small memory leak on UMP sources related to named store resolution.
Fixed a problem that could cause TSNIs with incorrect sequence numbers to be sent when a UMP source re-registers. This could trigger unrecoverable loss at non-UMP receivers.
Changed a non-fatal assert [nnode!=NULL] to a debug statement that reports the last sequence number in the ASL and the sequence number currently being searched for. For example: [27461:1100937536|1041.662]:nnode is NULL. Last SQN in ASL 37e SQN number requested 37f
Corrected a problem with umestored that caused a receiver's state to be improperly created during store recovery.
Corrected a problem that caused a store, upon startup, to delay persisting messages for up to the configured retransmit_request_generation_interval (default 10 seconds) while attempting to recover messages that the source had already released. Now the source reports the unavailability of the requested messages to the store, and the store resumes persistence without delay.
UMP stores sometimes fail to send message stability acknowledgements to sources for messages sent while the store was being reported unresponsive by the source. Informatica is investigating this problem.
A UMP store daemon, umestored, may crash if you enable proxy sources and the daemon is configured with a very restrictive port or multicast address range (transport_lbtrm_multicast_address_low - transport_lbtrm_multicast_address_high, transport_tcp_port_low - transport_tcp_port_high, etc.). Informatica recommends that you use a wide port range (at least 1000 ports) in your UM configuration file if you enable proxy sources for umestored. Informatica is investigating this problem.
The UMP store daemon, umestored, stops persisting messages to disk if the store has a loss condition that is not resolved prior to an EOS event. As a workaround, Informatica recommends that you set receiver delivery_control_loss_check_interval 2500 in the UM configuration file, not the umestored XML configuration file. The value of 2500 assumes the default *_nak_generation_interval. See Preventing Undetected Loss for more information. Informatica is investigating this problem.
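For reference, the workaround is a single line in the UM configuration file (the value 2500 assumes the default *_nak_generation_interval, as noted above):

    receiver delivery_control_loss_check_interval 2500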
Receivers using event queues and Spectrum with UMP can experience a SIGSEGV while shutting down if events still exist on the event queue when it is deleted. As a workaround, use LBM_EVQ_BLOCK when dispatching event queues. During application shutdown, call lbm_evq_unblock() after deleting receivers associated with the event queue, but before deleting any context objects. Once the dispatch thread exits, it is safe to proceed with context deletion. Informatica is working on a solution to this problem.
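The following C sketch illustrates the shutdown ordering described above. It is not a complete program: the handles are assumed to have been created elsewhere, error handling is omitted, and the unblock function name is taken from the note above (consult the C API reference for the exact signature in your release).

    #include <pthread.h>
    #include <lbm/lbm.h>

    /* Dispatch thread: blocks in the event queue until explicitly unblocked. */
    static void *dispatch_proc(void *arg)
    {
        lbm_event_queue_t *evq = (lbm_event_queue_t *)arg;
        lbm_event_dispatch(evq, LBM_EVQ_BLOCK);  /* returns once unblocked */
        return NULL;
    }

    /* Shutdown: delete receivers, unblock, join, then delete context and event queue. */
    static void shutdown_messaging(lbm_rcv_t *rcv, lbm_event_queue_t *evq,
                                   lbm_context_t *ctx, pthread_t dispatch_thread)
    {
        lbm_rcv_delete(rcv);                  /* 1. delete receivers on the event queue */
        lbm_evq_unblock(evq);                 /* 2. wake the blocked dispatch call */
        pthread_join(dispatch_thread, NULL);  /* 3. wait for the dispatch thread to exit */
        lbm_context_delete(ctx);              /* 4. now safe to delete the context */
        lbm_event_queue_delete(evq);
    }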
When running a store on a Solaris machine, you may experience registration failures after a few minutes. The store repeatedly reports the error, [WARNING]: wait returned error 4 [select: (22) Invalid argument]. Changing fd_management_type to devpoll prevents this problem. Informatica is investigating this problem.
UMP proxy source elections can only occur among stores in the same topic resolution domain connected directly via peer links. Informatica is investigating this problem.
JMS Message Selectors - Support for message selectors was added to Ultra Messaging JMS. The feature can also be used in other APIs by setting the message_selector receiver option to a valid message selector string. This allows receivers to filter messages based on the message properties of each incoming message. See Message Selectors.
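For example, a minimal configuration sketch of the receiver option (the selector string and the property names Symbol and Quantity are hypothetical; selector syntax follows JMS message selector conventions):

    receiver message_selector "Symbol = 'IBM' AND Quantity > 100"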
Automatic Reassignment on Receiver Failure - A receiver that fails can now return with its original assignment ID and continue to receive queued messages in the correct order. To enable this feature, use the new UM configuration option, umq_session_id. See also, Queue Session IDs.
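A minimal usage sketch in a UM configuration file (the context scope and the value are assumptions for illustration only):

    context umq_session_id 0x1234ABCD5678EF01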
Ultra Messaging JMS now supports UMP Session IDs - Implementation of this Ultra Messaging JMS feature involves the new ConnectionFactory configuration option, use_ump_session_ids. See also UMP Session IDs.
Updated the umqsltool UMQ SINC log utility to recognize two SINC log event types new to this release (RCV_REG and CTX_REG), and added printing of a message's topic to the default dump tool.
Added C API and Java API support for message selectors to the queue browser commands lbm_rcv_umq_queue_msg_list() and LBMReceive.queueMessageList(). Apply a message selector by setting the message_selector structure in the API function's input parameter list. This allows the queue to control which messages get listed, using the message selector string to filter based on the message properties of each currently enqueued message.
Changed the createConnection() methods in the ConnectionFactory to be thread safe. Previously, the lack of thread safety caused various exceptions in multithreaded applications.
Corrected a queue master election problem that prevented a former slave queue instance running on the Solaris SPARC platform (Solaris x86 was not affected) from correctly electing itself master in the event of a master queue instance failure.
Corrected a problem with the Dead Letter Queue in which expired messages were correctly placed on the Master Queue's Dead Letter Queue but were not created on one or more Slave Queue instances.
Corrected the LBMMessage.queueIndexInfo() method to now simply return null if no UMQ indexed queuing index information is present for a message, rather than throwing an exception. This matches the existing behavior of the .NET UMQ API.
Changed the Ultra Messaging JMS example application pong.sh to ensure that the JAVA_HOME environment variable has been set.
Corrected problems with the QueueReceiver.java example application. Now the application is consistent with the JMS interfaces and can be recompiled with just the jms.jar.
Fixed a problem that could cause a crash in UM applications with multi-threaded transports enabled if a multi-threaded transport context happened to re-use the request port from an earlier instance of an application and received a UMQ keepalive packet intended for the previous application.
Fixed a possible deadlock in Ultra Messaging JMS that could occur when closing a session that had asynchronous consumers.
Corrected a problem with the umqsltool utility that could cause a crash when working with a SINC log file that contained an application set for which no receivers had ever registered.
Corrected a problem in Ultra Messaging JMS that resulted in the truncation of Unicode Text messages with non-ASCII characters.
Corrected a problem with multiple JMS producers sending on the same topic within the same LBMContext (JMSConnection). A single UM source now caches all JMS producers sending on the topic.
Corrected a problem in which JMS messages received by a durable subscriber from a native UM source resulted in an exception. Now UM checks the source of a JMS message before determining and sending the proper response.
Corrected a problem with the UMQ queue daemon (umestored) that caused a crash in some circumstances when it received a message resubmission for a message it already had.
Corrected a problem with proxy source election that caused proxy sources to bounce between configured stores due to a short proxy election interval. This interval has been lengthened.
Fixed a problem that could cause a fatal assertion when a queue browser receiver or observer received a message from the queue in response to a previously canceled message retrieval request.
Fixed a problem that could cause a [topic->rcv!=NULL] fatal assert when deleting a receiver with an outstanding UMQ deregister event.
Corrected a problem with the UM JMS method setJMSType() that caused getJMSType() to always return NULL.
Fixed an issue that could sometimes cause a UMQ queue daemon (umestored) configured as a disk queue to improperly start up if it had previously received messages that contained application headers (e.g. messages with the total message lifetime option set on the source, messages with an index set, or most JMS messages).
Fixed a UMQ issue that could cause a receiver's portion size to be ignored when assigning messages with indices to that receiver.
Fixed a UMQ issue that could result in several non-fatal asserts, such as Core-5688-8: WARNING: failed assertion [stream->reassembler.full_msg->len==total_len] and Core-5688-8: WARNING: failed assertion [stream->reassembler.first_sqn==first_sqn]. A system crash was also possible.
Fixed a UMQ issue that would prevent context registrations in the queue from succeeding when a new context registered on an IP and port that had already been in use by a previous context.
The getJMSRedelivered() method for a message always returns false for redelivered messages when using Queue Session IDs along with the topic's message-reassignment-timeout set to 0 (zero).
The UM configuration file specified for a queue cannot also contain source attributes for a store, such as source ume_store 127.0.0.1:14567 or source ume_store_name NYstore2.
When servicing large message retrieval requests from a queue browser, the queue may become busy enough that the receiver perceives it as unresponsive and re-registers with the queue, canceling the message retrieval request. Informatica recommends limiting large message retrievals to 1000 messages at a time, waiting for each retrieval to finish before beginning the next retrieval request.
Copyright (c) 2004 - 2014 Informatica Corporation. All rights reserved.