Concepts Guide
UM Objects

Many UM documents use the term object. Be aware that in the C API, this does not refer to formal class-based objects as supported by C++. The term is used here in an informal sense to denote an entity that can be created, used, and (usually) deleted, has functionality and data associated with it, and is managed through the API. The handle that is used to refer to an object is usually implemented as a pointer to a data structure (defined in lbm.h), but the internal structure of an object is said to be opaque, meaning that application code should not read or write the structure directly.

However, the UM Java and .NET APIs are formally object oriented, with Java/C# class-based objects. See the UM Java API and the UM .NET API documentation.

The UM API is designed around a set of UM objects. The most-often used object types are:

  • Context - a fairly "heavy-weight" parent object for most other UM objects. Also represents an "instance" of UM, with all foreground and background functions.
  • Source - a publisher creates one or more Source objects, one per topic to be published.
  • Receiver - a subscriber creates one or more Receivers, one per topic that is subscribed.
  • Wildcard Receiver - a subscriber can optionally create Wildcard Receivers to subscribe to patterns that can match multiple topics.
  • Hot Failover Receiver - a specialized type of Receiver object that allows for redundant publishing.
  • HFX Receiver - a specialized type of Hot Failover Receiver object that allows redundant sources across multiple Context objects. (Note: HFX is deprecated.)
  • Event Queue - an active object (requires one or more threads) which is used to queue and deliver UM events (including received messages) to the application using a separate thread. Event queues are normally not required; by default UM delivers events using the Context's own thread. But there are circumstances where it is useful to transfer the received message or event to an independent thread for delivery.

A typical UM application program manages UM objects following certain ordering rules:

  1. If desired, Event Queues are typically created first. (Most UM applications do not use Event Queues.)
  2. Create a Context object. A typical application creates only one context, but there are some specialized use cases in which a small number of contexts (typically less than 5) are useful.
  3. Create one or more Source and Receiver objects. It is not unusual for an application to create thousands of sources or receivers.
  4. The application's main processing phase consists of publishing messages using the Source objects, and receiving messages using the Receiver objects. UM is most efficiently used if those Source and Receiver objects are created during initialization. Dynamic object creation during normal operation is possible, but can require special coding.
  5. When the application is ready to shut down, Sources and Receivers should be deleted.
  6. Contexts are deleted after all Sources and Receivers are deleted. Note that if the context uses Sequential Mode, the event processing thread should be unblocked and joined prior to Context deletion.
  7. Event Queues, if used, are deleted last. Note that the Event Queue dispatching thread(s) should be unblocked and joined prior to deletion of the Event Queue.
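The ordering rules above can be sketched in C as follows. This is a minimal illustration, not a complete program: error handling and receiver creation are omitted, "my.topic" is a placeholder topic name, and the header path may vary by installation.

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

void lifecycle_sketch(void)
{
    lbm_context_t *ctx;
    lbm_topic_t *topic;
    lbm_src_t *src;

    /* Step 2: create the context. */
    lbm_context_create(&ctx, NULL, NULL, NULL);

    /* Step 3: create a source (one per published topic). */
    lbm_src_topic_alloc(&topic, ctx, "my.topic", NULL);
    lbm_src_create(&src, ctx, topic, NULL, NULL, NULL);

    /* Step 4: main processing phase. */
    lbm_src_send(src, "hello", 5, LBM_MSG_FLUSH);

    /* Steps 5-6: delete sources (and receivers) before the context. */
    lbm_src_delete(src);
    lbm_context_delete(ctx);
}
```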

Note that it is very important to enforce the above deletion order. For example, it can lead to crashes if you delete a context while it still has active Sources or Receivers associated with it. Similarly, crashes can result if you delete an event queue while it still has active Contexts associated with it.

Also, note that deletion of Source objects can affect the reliability of message delivery. UM Receivers are designed to detect lost packets and request retransmission. However, once a Source object is deleted, it can no longer fulfill retransmission requests. It is usually best for an application to delay a few seconds after sending its last messages before deleting the Sources. This is especially important if Implicit Batching is used since outgoing messages might be held in the Batcher.


Context Object  <-

A UM context object conceptually is an environment in which UM runs. An application creates a context, typically during initialization, and uses it for most other UM operations. In the process of creating the context, UM normally starts an independent thread (the context thread) to do the necessary background processing such as the following:

  • Topic resolution
  • Enforce rate controls for sending messages
  • Manage timers
  • Manage state
  • Implement UM protocols
  • Manage Transport Sessions

You create a context with lbm_context_create(). When an application is finished with the context (no more message passing needed), it should delete the context by calling lbm_context_delete().

Warning
Before deleting a context, you must first delete all objects contained within that context (sources, receivers, wildcard receivers). See Deleting UM Objects.

Your application can give a context a name. Context names are optional but should be unique across your UM network. You set a context name before calling lbm_context_create().

UM does not enforce uniqueness; rather, it issues a log warning if it encounters duplicate context names. Application context names are used only to load template and individual option values within an XML UM configuration file.

One of the more important functions of a context is to hold configuration information that is of context scope. See the UM Configuration Guide for options that are of context scope.
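For example, a context-scoped option can be set programmatically with an attribute object before the context is created. The sketch below assumes "context_name" as the example option; error handling is omitted and the header path may vary by installation.

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

void create_named_context_sketch(lbm_context_t **ctxp)
{
    lbm_context_attr_t *attr;

    /* Create an attribute object, override a context-scoped option,
     * then create the context with it. */
    lbm_context_attr_create(&attr);
    lbm_context_attr_str_setopt(attr, "context_name", "my_ctx");
    lbm_context_create(ctxp, attr, NULL, NULL);
    lbm_context_attr_delete(attr);  /* not needed after context creation */
}
```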

Most UM applications create a single context. However, there are some specialized circumstances where an application would create multiple contexts. For example, with appropriate configuration options, two contexts can provide separate topic name spaces. Also, multiple contexts can be used to portion available bandwidth across topic sub-spaces (in effect allocating more bandwidth to high-priority topics).

Attention
Regardless of the number of contexts created by your application, a good practice is to keep them open throughout the life of your application. Do not close them until you close the application.


Topic Object  <-

A UM topic object is conceptually very simple; it is little more than a container for a string (the topic name). However, UM uses the topic object to hold a variety of state information used by UM for internal processing. It is conceptually contained within a context. Topic objects are used by applications in the creation of sources and receivers.

Technically, the user's application does not create or delete topic objects. Their management is handled internally by UM, as needed. The application uses APIs to gain access to topic objects. A publishing application calls lbm_src_topic_alloc() to get a reference to a topic object that it intends to use for creation of a Source Object. A subscribing application calls lbm_rcv_topic_lookup() to get a reference to a topic object that it intends to use for creation of a Receiver Object.

The application does not need to explicitly tell UM when it no longer needs the topic object. The application's reference can simply be discarded.


Source Object  <-

A UM source object is used to send messages to the topic that it is bound to. It is conceptually contained within a context.

You create a source object by calling lbm_src_create(). One of its parameters is a Topic Object. A source object can be bound to only one topic. The application is responsible for deleting a source object when it is no longer needed by calling lbm_src_delete(). See Deleting UM Objects.


Source String  <-

Every source that a publishing application creates has associated with it a unique "source string". Note that if multiple publishing UM contexts (applications) create sources for the same topic, each context's source will have its own unique source string. Similarly, if one publishing UM context (application) creates multiple sources for different topics, each topic's source will have its own unique source string. So a source string identifies one specific instance of a topic within a UM context.

The source string is used in a few different ways in the UM API, for example to identify which Transport Session to retrieve statistics for in lbm_rcv_retrieve_transport_stats(). The source string is made available to the application in several callbacks, for example lbm_src_notify_function_cb, or the "source" field of lbm_msg_t_stct of a received message. See also Sending to Sources.

The format of a source string depends on the transport type:

  • TCP:src_ip:src_port:session_id[topic_idx]
    example: TCP:10.29.3.88:45789:f1789bcc[1539853954]

  • LBTRM:src_ip:src_port:session_id:mc_group:dest_port[topic_idx]
    example: LBTRM:10.29.3.88:14390:e0679abb:231.13.13.13:14400[1539853954]

  • LBT-RU:src_ip:src_port:session_id[topic_idx]
    example: LBT-RU:10.29.3.88:14382:263170a3[1539853954]

  • LBT-IPC:session_id:transport_id[topic_idx]
    example: LBT-IPC:6481f8d4:20000[1539853954]

  • LBT-SMX:session_id:transport_id[topic_idx]
    example: LBT-SMX:6481f8d4:20000[1539853954]

  • BROKER
    example: BROKER

Please note that the topic index field (topic_idx) may or may not be included, depending on the context in which it is presented, your version of UM, and the setting for configuration option source_includes_topic_index (context). For example, a receiver callback probably will include the topic index since the callback is specific to a topic, while Monitoring output will not, since it is referring to a transport session, not an individual topic.

See also lbm_transport_source_format() and lbm_transport_source_parse().
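To illustrate the structure of these strings, here is a hand-rolled parser that extracts the transport type and the optional topic index. This helper is hypothetical (not part of the UM API); real applications should use lbm_transport_source_parse() instead.

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Hypothetical helper: extract the transport type (the text before the
 * first ':') and the optional topic index (the number inside '[...]').
 * Returns 0 on success, -1 if the transport name does not fit. */
int parse_source_string(const char *src, char *transport, size_t tlen,
                        long *topic_idx)
{
    const char *colon = strchr(src, ':');
    size_t n = (colon != NULL) ? (size_t)(colon - src) : strlen(src);
    if (n >= tlen) return -1;
    memcpy(transport, src, n);
    transport[n] = '\0';

    *topic_idx = -1;  /* -1 means no topic index present. */
    const char *lb = strrchr(src, '[');
    if (lb != NULL) {
        *topic_idx = strtol(lb + 1, NULL, 10);
    }
    return 0;
}
```

For example, parsing "LBTRM:10.29.3.88:14390:e0679abb:231.13.13.13:14400[1539853954]" yields transport "LBTRM" and topic index 1539853954; parsing "BROKER" yields transport "BROKER" and no topic index.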


Source Strings in a Routed Network  <-

In a UM network consisting of multiple Topic Resolution Domains (TRDs) connected by DROs, a given source will be uniquely identifiable within a TRD by its source string. However, that same source will have different source strings in different TRDs. For receivers in the same TRD as the source, the source string will refer to the source. But in remote TRDs, that same source's source string will refer to the proxy source of the DRO on the shortest path to the source. The IP information contained in the source string will refer to the DRO.

This can lead to a situation where multiple originating sources located elsewhere in the UM network will have source strings with the same IP information in a given TRD. They will differ by Topic Index number, but even that topic index will be different in different TRDs.


Source Configuration and Transport Sessions  <-

As with contexts, a source holds configuration information that is of source scope. This includes network options, operational options and reliability options for LBT-RU and LBT-RM. For example, each source can use a different transport and would therefore configure a different network address to which to send topic messages. See the UM Configuration Guide for source configuration options.

As stated in UM Transports, many topics (and therefore sources) can be mapped to a single transport. Many of the configuration options for sources actually control or influence Transport Session activity. If many sources are sending topic messages over a single Transport Session (TCP, LBT-RU or LBT-RM), UM only uses the configuration options for the first source assigned to the transport.

For example, if the first source to use an LBT-RM Transport Session sets transport_lbtrm_transmission_window_size (source) to 24 MB and the second source sets the same option to 2 MB, UM assigns 24 MB to the Transport Session's transmission window.

The UM Configuration Guide identifies the source configuration options that may be ignored when UM assigns the source to an existing Transport Session. Log file warnings also appear when UM ignores source configuration options.


Receiver Object  <-

A UM receiver object is used to receive messages from the topic that it is bound to. It is conceptually contained within a context. Messages are delivered to the application by an application callback function, specified when the receiver object is created.

You create a receiver object by calling lbm_rcv_create(). One of its parameters is a Topic Object. A receiver object can be bound to only one topic. The application is responsible for deleting a receiver object when it is no longer needed by calling lbm_rcv_delete(). See Deleting UM Objects.

Multiple receiver objects can be created for the same topic within a single context, which can be used to trigger multiple delivery callbacks when messages arrive for that topic.
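A minimal receive path might look like the following sketch. It assumes a context ctx already exists; error handling is omitted, "my.topic" is a placeholder, and the header path may vary by installation.

```c
#include <stdio.h>
#include <lbm/lbm.h>  /* header path may vary by installation */

/* Receiver callback; called by UM (context thread by default) for each
 * message or event on the topic. */
int app_rcv_callback(lbm_rcv_t *rcv, lbm_msg_t *msg, void *clientd)
{
    if (msg->type == LBM_MSG_DATA) {
        printf("received %lu bytes on topic %s\n",
               (unsigned long)msg->len, msg->topic_name);
    }
    return 0;
}

void create_receiver_sketch(lbm_context_t *ctx)
{
    lbm_topic_t *topic;
    lbm_rcv_t *rcv;

    lbm_rcv_topic_lookup(&topic, ctx, "my.topic", NULL);
    lbm_rcv_create(&rcv, ctx, topic, app_rcv_callback, NULL, NULL);

    /* ... application runs; the callback fires as messages arrive ... */

    lbm_rcv_delete(rcv);
}
```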


Receiver Configuration and Transport Sessions  <-

A receiver holds configuration information that is of receiver scope. This includes network options, operational options and reliability options for LBT-RU and LBT-RM. See the UM Configuration Guide for receiver configuration options.

As stated above in Source Configuration and Transport Sessions, multiple topics (and therefore receivers) can be mapped to a single transport. As with source configuration options, many receiver configuration options control or influence Transport Session activity. If multiple receivers are receiving topic messages over a single Transport Session (TCP, LBT-RU or LBT-RM), UM only uses the configuration options for the first receiver assigned to the transport.

For example, if the first receiver to use an LBT-RM Transport Session sets transport_lbtrm_nak_generation_interval (receiver) to 10 seconds, that value is applied to the Transport Session. If a second receiver using the same Transport Session sets the same option to 2 seconds, that value is ignored.

The UM Configuration Guide identifies the receiver configuration options that may be ignored when UM assigns the receiver to an existing Transport Session. Log file warnings also appear when UM ignores receiver configuration options.


UM Wildcard Receivers  <-

You create a wildcard receiver object by calling lbm_wildcard_rcv_create(). Instead of a topic object, the caller supplies a pattern which UM uses to match multiple topics. Because the application does not explicitly lookup the topics, UM passes the topic attribute into lbm_wildcard_rcv_create() so that it can set options. Also, wildcard receivers have their own set of options, such as pattern type. The application is responsible for deleting a wildcard receiver object when it is no longer needed by calling lbm_wildcard_rcv_delete().
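A wildcard receiver creation might look like the following sketch (error handling omitted; NULL attribute and event queue parameters accept defaults; the header path may vary by installation):

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

void create_wildcard_sketch(lbm_context_t *ctx, lbm_rcv_cb_proc cb)
{
    lbm_wildcard_rcv_t *wrcv;

    /* PCRE pattern matching topics "example0" through "example9".
     * Parameters: receiver, context, pattern, per-topic attributes,
     * wildcard-receiver attributes, callback, client data, event queue. */
    lbm_wildcard_rcv_create(&wrcv, ctx, "^example[0-9]$",
                            NULL, NULL, cb, NULL, NULL);

    /* ... application runs; cb fires for every matching topic ... */

    lbm_wildcard_rcv_delete(wrcv);
}
```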

The wildcard pattern supplied for matching is a PCRE (Perl-Compatible Regular Expressions) pattern. See http://perldoc.perl.org/perlrequick.html for an introduction to this regular expression syntax.

Note
Ultra Messaging has deprecated two other wildcard receiver pattern types, "regex" (POSIX extended regular expressions) and "appcb" (application callback), as of UM Version 6.1. Only PCRE is supported.

Be aware that some platforms may not support all of the regular expression wildcard types. For example, UM does not support the use of Unicode PCRE characters in wildcard receiver patterns on any system that communicates with a HP-UX or AIX system. See the Informatica Knowledge Base article, Platform-Specific Dependencies for details.

For examples of wildcard usage, see Example lbmwrcv.c, Example lbmwrcv.java, and Example lbmwrcv.cs.

For more information on wildcard receivers, see Wildcard Receiver Topic Resolution, and Wildcard Receiver Options.

TIBCO users see the Informatica Knowledge Base articles, Wildcard topic regular expressions and SmartSockets wildcards and Wildcard topic regular expressions and Rendezvous wildcards.

Overlapping Receivers

Suppose an application creates three receivers:

  1. Wildcard receiver1: "^example[0-9]$"
  2. Single-topic receiver: "example1".
  3. Wildcard receiver2: "^[a-z]*1$"

A source for topic "example1" will match all three receivers. Each receiver object's "receiver callback" will be invoked sequentially for each received message. However, be aware that these are simply multiple callbacks from a single underlying UM receiver; they are not independent.

This becomes significant if different "receiver" scoped configuration options are specified for the different receiver objects, a usage that Informatica recommends against. Only one of the receiver objects' configurations is used to create the underlying receiver; the others are ignored. In the above example, it is not necessarily the first receiver object whose configuration is applied. For example, if the source is not discovered until later, UM does not define which of the above three receiver objects' configurations will be used.

Informatica recommends users always use the same configuration options when creating multiple receiver objects that overlap.


Transport Services Provider Object  <-

The Transport Services Provider object ("XSP"), introduced in UM version 6.11, manages sockets, threads, and other receive-side resources associated with subscribed Transport Sessions. The primary purpose of an XSP object is to allow the programmer to control the threading of received messages, based on the Transport Sessions of those messages.

For more information on XSP, see Transport Services Provider (XSP).


UM Hot Failover Across Contexts Objects  <-

Hot Failover Across Contexts objects ("HFX") provide a form of hot failover that can operate across multiple network interfaces.

Note that the HFX feature is deprecated.

For more information, see Hot Failover Across Multiple Contexts (HFX).


Event Queue Object  <-

Most UM events, like received messages, are delivered to the application via event handler callbacks, like a receiver callback. A UM "event queue" object is a queue for moving execution of UM callbacks to a different thread - the event queue dispatch thread.

Note that unlike other UM objects, event queues are not owned by a context. An event queue is a top-level object, and can be associated with contexts, sources, and/or receivers.

Warning
Before deleting an event queue, you must first delete all objects associated with it (contexts, sources, receivers, timers). See Deleting UM Objects.

By default, UM events will be delivered from a variety of different threads, frequently a context or XSP thread. Context/XSP thread callbacks are the most efficient form of event delivery, but place restrictions on your callback code. For example:

  • The application function is not allowed to make certain API calls (mostly having to do with creating or deleting UM objects).
  • The application function must execute very quickly without blocking.
  • The application does not have control over when the callback executes. For example, events might be delivered concurrently from different application and context/XSP threads.

For these reasons, you might want to transfer handling of UM events to a different thread. This can impose strict serialization and remove restrictions.

You could make use of your own queue for this, perhaps a standard library queue (although be careful of multiple threads enqueuing on the same queue), or perhaps LMAX's "Disruptor". Or you can use the UM event queue.

An advantage of the UM event queue is that it retains all of the semantics of the UM callback. The callback code is structured the same, regardless of whether it is called from a context/XSP thread or from an event queue dispatch thread. A disadvantage of the UM event queue is efficiency: it makes a kernel call for each enqueue and each dequeue.

UM event queues are unbounded, non-blocking queues and provide the following features:

  • Queue length monitoring. Informatica strongly recommends using the event queue monitor callback to warn if the configured size or latency thresholds are exceeded. (Note: exceeding a threshold does not prevent new events from being enqueued.) See Event Queue Monitor for more information.
  • The application callback function has no UM API restrictions.
  • Your application can control exactly when UM delivers queued events with lbm_event_dispatch(). And you can have control return to your application either when specifically asked to do so (by calling lbm_event_dispatch_unblock()), or optionally when there are no events left to deliver.
  • Your application can take advantage of parallel processing on multiple processor hardware since UM processes asynchronously on a separate thread from your application's processing of received messages. By using multiple application threads to dispatch an event queue, or by using multiple event queues, each with its own dispatch thread, your application can further increase parallelism.

You create a UM event queue in the C API by calling lbm_event_queue_create(). When finished with an event queue, delete it by calling lbm_event_queue_delete().

In the Java API and the .NET API, use the LBMEventQueue class.

See Event Queue Options for configuration options related to event queues.


Using an Event Queue  <-

To use an Event Queue, an application typically performs the following actions:

  1. Create the Event Queue using lbm_event_queue_create().
    err = lbm_event_queue_create(&evq, NULL, NULL, NULL);
  2. Create a new thread of execution to be the dispatch thread. This new thread should call lbm_event_dispatch() in a loop.
    evq_running = 1;
    while (evq_running) {
        err = lbm_event_dispatch(evq, LBM_EVENT_QUEUE_BLOCK);
        if (err == LBM_FAILURE) { ... handle error ... }
    }
    /* Exit the thread (OS-dependent). */
    Note that the return value should be compared to LBM_FAILURE (-1), and not the normal success value of 0. This is because on success, lbm_event_dispatch() returns the number of events that were dispatched during its execution.
  3. Create other UM objects whose events you want processed by the event queue. For example, creating a UM Receiver object with the Event Queue will call your message receiver callback through the event queue, using your dispatch thread.
    lbm_rcv_t *rcv;
    err = lbm_rcv_create(&rcv, ctx, topic, app_rcv_callback, NULL, evq);
    From this point, your application receiver callback function will be called from the dispatch loop for received messages and other receiver events.
  4. When it is time to shut down the program, the UM objects that refer to the event queue must first be deleted.
    err = lbm_rcv_delete(rcv);
    Remember that an event queue might be handling events for many UM objects; they must all be deleted prior to deleting the event queue.
  5. Now shut down dispatching the event queue.
    evq_running = 0;
    err = lbm_event_dispatch_unblock(evq);
    /* "Join" the dispatch thread (OS-dependent). */
    The unblock forces the lbm_event_dispatch() function to return. Typically at that point, the dispatch thread is "joined", which blocks until the dispatch thread exits.
  6. Delete the event queue.
    err = lbm_event_queue_delete(evq);
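On POSIX systems, the dispatch thread's creation and shutdown (steps 2 and 5 above, described as "OS-dependent") might be realized with pthreads. This is a sketch with error handling omitted; the header path may vary by installation.

```c
#include <pthread.h>
#include <lbm/lbm.h>  /* header path may vary by installation */

static volatile int evq_running;

/* Dispatch thread body: loop in lbm_event_dispatch() until shut down. */
static void *dispatch_thread(void *arg)
{
    lbm_event_queue_t *evq = (lbm_event_queue_t *)arg;
    while (evq_running) {
        if (lbm_event_dispatch(evq, LBM_EVENT_QUEUE_BLOCK) == LBM_FAILURE) {
            /* ... handle error ... */
        }
    }
    return NULL;
}

void start_and_stop_dispatch(lbm_event_queue_t *evq)
{
    pthread_t tid;

    /* Startup (step 2): create the dispatch thread. */
    evq_running = 1;
    pthread_create(&tid, NULL, dispatch_thread, evq);

    /* ... steps 3 and 4: create objects, process messages ... */

    /* Shutdown (step 5): stop dispatching and join the thread. */
    evq_running = 0;
    lbm_event_dispatch_unblock(evq);  /* forces dispatch to return */
    pthread_join(tid, NULL);
}
```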

The following sections describe some lesser-used aspects of event queues.


Deleting an Event Queue  <-

Before you can delete an event queue, you must ensure that your application's dispatch thread is no longer running it. Here is the typical order of operation for deleting an event queue:

  1. Unblock the event queue (C API / Java and .NET API). This causes the application's dispatch thread to return from the dispatch function (C API / Java and .NET API).

  2. Exit the application's dispatch thread, typically with a thread join function.

  3. Delete the event queue (C API / Java and .NET API).


Event Queue Efficiency  <-

A UM event queue introduces a kernel call with each enqueue and dequeue. Kernel calls are considered costly for very low latency or high throughput applications.

Compared with a lockless, busy waiting queue like LMAX's "Disruptor", the UM event queue will have a lower maximum sustainable throughput and higher latency. The latency penalty will be especially apparent at low message rates where the dispatch thread is put to sleep waiting for new events.

The performance of the UM event queue can be improved by using Receive-Side Batching and by polling the event queue in a tight loop (see Event Queue Timeout).


Event Queue Timeout  <-

The second parameter passed to lbm_event_dispatch() is a timeout. There are two special values:

  • LBM_EVENT_QUEUE_BLOCK - dispatch events indefinitely, until lbm_event_dispatch_unblock() is called.
  • LBM_EVENT_QUEUE_POLL - dispatch only those events already queued, then return immediately.

Any other value specifies a timeout in milliseconds.

However, this timeout does not necessarily limit the time spent waiting inside the dispatch function. The purpose of the timeout is to set a minimum time that the dispatch function will process events, not a maximum.

The implementation of the event queue uses an unbounded wait for incoming events. When an event is delivered to the queue, the dispatch function wakes up and processes the event (calls the appropriate application callback). Then the dispatch function checks the time to see if the timeout has been exceeded. If so, the dispatch function returns. Otherwise, it waits for the next event.

However, suppose that no further events are delivered to the event queue. In this case, the dispatch function will wait without bound. The timeout parameter will not cause the dispatch function to stop waiting.

If the application needs an upper limit to the time spent dispatching, the timeout can be combined with the use of an external timer that calls lbm_event_dispatch_unblock() when the maximum time has expired. UM's timer system may be used by calling lbm_schedule_timer().
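This combination might be sketched as follows. It assumes the timer callback signature of the C API's lbm_timer_cb_proc; error handling is omitted and the header path may vary by installation.

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

/* Timer callback: forces the dispatch function to return. */
int stop_dispatch_cb(lbm_context_t *ctx, const void *clientd)
{
    lbm_event_dispatch_unblock((lbm_event_queue_t *)clientd);
    return 0;
}

void dispatch_bounded(lbm_context_t *ctx, lbm_event_queue_t *evq)
{
    /* Schedule the unblock ~500 ms from now, then dispatch. The
     * dispatch function returns within roughly 500 ms even if no
     * further events arrive. */
    lbm_schedule_timer(ctx, stop_dispatch_cb, evq, NULL, 500);
    lbm_event_dispatch(evq, 500);
}
```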


Event Queue Monitor  <-

The UM event queue is unbounded, meaning that it doesn't have a fixed maximum size. Rather, if the incoming event (e.g. message) rate exceeds the callback processing rate, the event queue will grow without bound, eventually consuming the maximum available memory, usually crashing the program.

The only way to prevent this unbounded memory growth is to ensure that your event handler callback can keep up with the incoming event rate. However, this is usually impossible to guarantee, so it is important for applications to monitor queue growth and at least raise an alert if it exceeds some threshold.

The event queue size can change very quickly during traffic bursts. Automatic Monitoring is not a good solution for detecting these rapid spikes in queue length. UM has a special form of monitoring designed specifically for event queues: a monitoring callback.

You set the size threshold, in number of events, using queue_size_warning (event_queue). You can also set a latency threshold, in microseconds that events are waiting in the event queue, using queue_delay_warning (event_queue). It is also possible to be notified every time an event is added to the event queue, using queue_enqueue_notification (event_queue) (no information is available about the event).

UM tests the queue against the thresholds during the process of dispatching events. When the dispatch thread dequeues an event, it checks the size of the queue and the amount of time the dequeued event spent in the queue. If either or both exceed their thresholds, the monitor callback is called. If there is a burst of traffic, the size threshold can be exceeded by many events. In that case, the monitor callback will be called repeatedly for each dequeued event that exceeds a threshold.

Informatica strongly recommends minimizing the monitor callback to perform as little work as possible. Remember that the event queue grows when the event handling callback cannot keep up with the incoming event rate. Given that the monitor callback is called by the dispatch thread, its execution time is added to your event handler, slowing it down even further. We recommend setting a "high water mark" of the maximum detected queue size, and letting a separate thread periodically check that high water mark and raise an alert.
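The recommended high-water mark pattern can be sketched in plain C, independent of the UM API. The function names here are hypothetical; this version assumes a single writer (the dispatch thread), so for stricter guarantees you would use atomics.

```c
#include <stddef.h>

/* Maximum queue size observed since the last check. Written only by
 * the dispatch thread (via the monitor callback); read and reset by a
 * separate monitoring thread. */
static volatile size_t evq_hwm = 0;

/* Called from the monitor callback; does minimal work. */
void record_queue_size(size_t evq_size)
{
    if (evq_size > evq_hwm) {
        evq_hwm = evq_size;
    }
}

/* Called periodically from a separate monitoring thread. Returns the
 * high-water mark and resets it; the caller raises an alert if the
 * returned value exceeds its threshold. */
size_t check_and_reset_hwm(void)
{
    size_t hwm = evq_hwm;
    evq_hwm = 0;
    return hwm;
}
```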

Note that if an application event handling callback were to deadlock and never return, the dispatch thread would be effectively halted, and the monitor callback would never be called. The monitor callback cannot be used to detect a hung application.

C Code Example:

/* Define monitor callback. */
int evq_monitor(lbm_event_queue_t *evq, int event, size_t evq_size,
                lbm_ulong_t evq_delay_usec, void *clientd)
{
    /* The "event" parameter is sometimes used as a bitmap, and other
     * times used as an absolute value. */
    if ((event == LBM_EVENT_QUEUE_ENQUEUE_NOTIFICATION)
            && (evq_size == 1) && (evq_delay_usec == 0)) {
        /* This is an ENQUEUE notification. No info available on the event. */
        ...
    }
    else {
        /* "event" is a bitmap; one or both conditions might be active. */
        if (event & LBM_EVENT_QUEUE_SIZE_WARNING) {
            /* The size threshold is exceeded for the next event to be
             * dispatched. There are evq_size events in the queue. */
            ...
        }
        if (event & LBM_EVENT_QUEUE_DELAY_WARNING) {
            /* The delay threshold is exceeded for the next event to be
             * dispatched; it waited in the queue for evq_delay_usec. */
            ...
        }
    }
    return 0;
} /* evq_monitor */
...
/* Initialization code, create event queue. */
...
err = lbm_event_queue_create(&evq, evq_monitor, NULL, NULL);

Java and .NET Code Example:

/* Define monitor callback. */
class MyEventQueue extends LBMEventQueue implements LBMEventQueueCallback
{
    public MyEventQueue() throws LBMException
    {
        super();
        addMonitor(this);
    }

    public void monitor(Object cbArg, int event, int evqSize, long evqDelayUsec)
    {
        /* The "event" parameter is sometimes used as a bitmap, and other
         * times used as an absolute value. */
        if ((event == LBM.EVENT_QUEUE_ENQUEUE_NOTIFICATION)
                && (evqSize == 1) && (evqDelayUsec == 0)) {
            /* This is an ENQUEUE notification. No info available on the event. */
            ...
        }
        else {
            /* "event" is a bitmap; one or both conditions might be active. */
            if ((event & LBM.EVENT_QUEUE_SIZE_WARNING) != 0) {
                /* The size threshold is exceeded for the next event to be dispatched. */
                ...
            }
            if ((event & LBM.EVENT_QUEUE_DELAY_WARNING) != 0) {
                /* The delay threshold is exceeded for the next event to be dispatched. */
                ...
            }
        }
    } /* monitor */
}
...
/* Initialization, create event queue. */
MyEventQueue evq = new MyEventQueue();


Message Object  <-

When an application subscribes to a topic to which publishers are sending messages, the received messages are delivered to the application by an application callback function (see Event Delivery). One of the parameters that UM passes to the application callback is a message object. This object gives the application access to the content of the message, as well as some metadata about the message, such as the topic.

Unlike other objects described above, the user does not create these message objects by API call. UM creates and initializes the objects internally.

The default life-span of a message object is different between C and Java or .NET.


Message Object Deletion  <-

C API

In C, by default, the message object is deleted when the receiver callback returns. No action is necessary by the application to trigger that deletion.

See Message Reception for details, including code examples.

Java or .NET API

In Java or .NET, the passed-in message object is not automatically deleted when the receiver application callback returns. Instead, the message object is fully deleted only when all references to the object are lost and the garbage collector reclaims the object.

However, applications which allow this kind of garbage buildup and collection usually suffer from large latency outliers (jitter), and while garbage collection can be tuned to minimize its impact, it is usually recommended that latency-sensitive applications manage their objects more carefully to prevent creation of garbage. See Zero Object Delivery.

Also, there are some UM features in which specific actions are triggered by the deletion of messages, and the application designer usually wants to control when those actions are performed (for example, Persistence Message Consumption).

For these reasons, Java and .NET developers are strongly advised to explicitly dispose of a message object when the application is finished with it, by calling the message object's "dispose()" method. In the simple case, this should be done in the receiver application callback just before returning.

See Message Reception for details, including code examples.


Message Object Retention  <-

Some applications are designed to process received messages in ways that cannot be completed by the time the receiver callback returns. In these cases, the application must extend the life span of the message object beyond the return from the receiver application callback. This is called "message retention".

Note that message retention prevents the recycling of the UM receive buffer in the UM library. See Receive Buffer Recycling.

C API

To prevent automatic deletion of the message object when the receiver application callback returns, the callback must call lbm_msg_retain(). This allows the application to transfer the message object to another thread, work queue, or control flow.

When a received message is retained, it becomes the application's responsibility to delete the message explicitly by calling lbm_msg_delete(). Failure to delete retained messages can lead to unbounded memory growth.
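In C, the retain/delete pattern might look like the following sketch. The hand-off function is hypothetical (an application-supplied work queue, for example); error handling is omitted and the header path may vary by installation.

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

void hand_off_to_worker(lbm_msg_t *msg);  /* hypothetical application function */

int app_rcv_callback(lbm_rcv_t *rcv, lbm_msg_t *msg, void *clientd)
{
    if (msg->type == LBM_MSG_DATA) {
        lbm_msg_retain(msg);      /* prevent deletion when callback returns */
        hand_off_to_worker(msg);  /* e.g. enqueue for another thread */
    }
    return 0;
}

/* Later, on the worker thread, when processing is complete: */
void worker_done(lbm_msg_t *msg)
{
    lbm_msg_delete(msg);  /* the application must delete retained messages */
}
```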

See Message Reception for details, including code examples.

Java or .NET

The receiver application callback typically calls the "promote()" method of the message object prior to returning. See Retaining Messages.

See Message Reception for details, including code examples.


Attributes Object  <-

An attribute object is used to programmatically configure other UM objects. Their use is usually optional; omitting them results in default configurations (potentially overridden by configuration files). See Configuration Overview for details.

However, there is a class of configuration options that require the use of an attribute object: configuring application callbacks. With these options, you are setting a value that includes a pointer to a C function. This cannot be done from a configuration file. For example, see source_notification_function (receiver).

See Attributes Objects for more details on the types, creation, use, and deletion of attributes objects.


UM Timers  <-

UM also provides a timer function whereby the application can schedule callbacks to be made some number of milliseconds in the future. When a given timer expires, its callback is made. A timer can also be canceled prior to expiration.

These timers are not designed like other UM objects. For example, in the C API, you don't "create" and "delete" them, you "schedule" and optionally "cancel" them. In the Java and .NET APIs, you do instantiate a timer object in the normal way, but you don't "close" them when done.

Warning
UM timers are not designed for high accuracy. For example, to improve internal efficiency, UM will sometimes expire a timer up to 3 milliseconds early. Also, depending on other work being done, a timer can expire but its callback execution can be delayed an undefined amount of time.

Main timer API functions:

  • lbm_schedule_timer() - schedule a timer callback (C API).
  • lbm_cancel_timer() - cancel a previously scheduled timer (C API).
  • LBMTimer class (Java and .NET APIs).
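Scheduling a timer in C might look like the following sketch. It assumes the callback signature of lbm_timer_cb_proc and that lbm_schedule_timer() returns a timer id; error handling is omitted and the header path may vary by installation.

```c
#include <lbm/lbm.h>  /* header path may vary by installation */

/* Application timer callback (lbm_timer_cb_proc). */
int app_timer_cb(lbm_context_t *ctx, const void *clientd)
{
    /* ... work to do ~250 ms after scheduling ... */
    return 0;
}

void schedule_sketch(lbm_context_t *ctx)
{
    /* Schedule the callback; the returned id can be passed to
     * lbm_cancel_timer() to cancel the timer before expiration. */
    int timer_id = lbm_schedule_timer(ctx, app_timer_cb, NULL, NULL, 250);
    (void)timer_id;
}
```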