Timers

TIBCO BusinessEvents® Extreme provides support for highly available timers. Highly available timers are transparently migrated to new nodes during failover, restoration, and partition migration. A highly available timer is created by installing a partition mapper for the application-defined com.kabira.platform.swtimer.TimerNotifier types and mapping the timers into a valid partition that has replica nodes defined.

[Warning]

Using asynchronous replication of timer operations may cause the loss of timer notifications if the active node fails.

Highly available timers are transactional. If a timer is executing on the currently active node but does not commit before a node failure, the timer is executed on the node that assumes the work of the failed node.

A timer identifier uniquely identifies the timer on all nodes associated with the partition; the application can rely on this identifier being the same on all nodes. Timers are started using the number of seconds from the current time, that is, a relative, not an absolute, time. The timer fires when this time expires. The relative time is transmitted to the timer's replica nodes, and the current time on each replica node is used to calculate when the timer should fire. This minimizes the impact of clock drift between the active and replica nodes.

[Warning]

It is strongly recommended that a network time synchronization protocol be used to keep all system clocks consistent.
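The clock-drift benefit of transmitting relative rather than absolute times can be illustrated with a short, self-contained Java sketch. The names and arithmetic here are illustrative only, not part of the product API:

```java
// Illustrative only -- not the TimerNotifier API. Models the wait a
// replica computes when its clock runs skewSeconds ahead of the active
// node's clock at the moment the timer operation is replicated.
public class DeadlineDrift
{
	// Absolute deadline: the active node sends "fire at time T" using its
	// own clock. The skewed replica reaches T early, shortening the wait
	// by the full skew.
	static long absoluteWait(long relativeSeconds, long skewSeconds)
	{
		return relativeSeconds - skewSeconds;
	}

	// Relative deadline: the active node sends "fire in N seconds" and the
	// replica applies N to its own clock, so the skew cancels out.
	static long relativeWait(long relativeSeconds, long skewSeconds)
	{
		return relativeSeconds;
	}

	public static void main(String[] args)
	{
		long timer = 30; // timer started 30 seconds from now
		long skew = 5;   // replica clock 5 seconds ahead of active node

		System.out.println("absolute wait: " + absoluteWait(timer, skew)); // 25
		System.out.println("relative wait: " + relativeWait(timer, skew)); // 30
	}
}
```

The relative approach is still subject to the replication latency of the timer operation itself, which is one reason the warning above recommends keeping system clocks synchronized.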

An application-defined object is specified when creating a timer. This object is passed to the timerNotify method when the timer fires, providing a mechanism for the application to supply context for each timer that is started. It is strongly recommended that the context object used to start a timer be in the same partition as the timer notifier; this ensures that the context object is still valid if failover or migration occurs. However, it is legal to use context objects in different partitions, or even ones that are not highly available.

Failover and migration

When an active node fails, any pending timers are automatically restarted on the new active node.

A one-shot timer is executed on the failover node only if it had not executed on the original active node before the failure. If the timer expired during the migration to the failover node, it fires immediately; otherwise it is executed at the originally scheduled time. The time is adjusted based on when the timer was started on the original active node.
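This one-shot rule reduces to taking the later of the originally scheduled time and the takeover time. A minimal sketch, using hypothetical names and a shared timeline in seconds:

```java
// Illustrative sketch of the one-shot failover rule -- not product API.
public class OneShotFailover
{
	/**
	 * Returns when the failover node fires a one-shot timer that was
	 * started on the original active node at startedAt with the given
	 * delay, where failoverAt is when the failover node takes over.
	 */
	static long fireTime(long startedAt, long delaySeconds, long failoverAt)
	{
		long scheduled = startedAt + delaySeconds;

		// Expired during migration: fire immediately on takeover.
		// Not yet expired: fire at the originally scheduled time.
		return Math.max(scheduled, failoverAt);
	}

	public static void main(String[] args)
	{
		// Timer started at t=100 for 30 seconds; takeover at t=120:
		// not yet expired, so it fires at the scheduled t=130.
		System.out.println(fireTime(100, 30, 120)); // 130

		// Same timer, but takeover at t=140: it expired during the
		// migration, so it fires immediately at t=140.
		System.out.println(fireTime(100, 30, 140)); // 140
	}
}
```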

A recurring timer executes on the failover node at the next scheduled time. The initial execution on the failover node is adjusted based on when the timer last fired on the original active node; it then continues to execute on the new active node using the original interval. If scheduled executions of a recurring timer were missed because of a delay between the active node failure and the failover node taking over the work, those executions are dropped - there are no "makeup" executions for recurring timers.
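The recurring rule can be sketched the same way: the failover node resumes the schedule anchored at the last firing on the original active node, dropping any executions whose scheduled time has already passed. Again, the names and the timeline in seconds are hypothetical:

```java
// Illustrative sketch of the recurring-timer failover rule -- not product API.
public class RecurringFailover
{
	/**
	 * First firing time on the failover node for a recurring timer that
	 * last fired on the original active node at lastFired, given the
	 * recurrence interval and the time the failover node takes over.
	 */
	static long firstFire(long lastFired, long interval, long takeoverAt)
	{
		long next = lastFired + interval;

		// Executions scheduled before takeover are dropped: there are
		// no "makeup" executions for a recurring timer.
		while (next < takeoverAt)
		{
			next += interval;
		}
		return next;
	}

	public static void main(String[] args)
	{
		// Last fired at t=100, interval 10, takeover at t=135: the
		// t=110, t=120, and t=130 executions are dropped, and the timer
		// next fires at t=140, staying on the original schedule.
		System.out.println(firstFire(100, 10, 135)); // 140

		// Takeover at t=105: nothing was missed; next fire is t=110.
		System.out.println(firstFire(100, 10, 105)); // 110
	}
}
```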

Migrating a partition that contains active timers causes the timers to be canceled on the old active node and restarted on the new active node. The same notifier execution rules described for failover apply.

Timer example

Example 7.16, “Highly available timers” shows how a highly available timer is created, migrated to a new node, and terminated.

Example 7.16. Highly available timers

//     $Revision: 1.1.2.5 $
package com.kabira.snippets.highavailability;

import com.kabira.platform.ManagedObject;
import com.kabira.platform.Transaction;
import com.kabira.platform.annotation.Managed;
import com.kabira.platform.highavailability.Partition;
import com.kabira.platform.highavailability.PartitionManager;
import static com.kabira.platform.highavailability.PartitionManager.EnableAction.JOIN_CLUSTER;
import com.kabira.platform.highavailability.PartitionMapper;
import com.kabira.platform.highavailability.ReplicaNode;
import static com.kabira.platform.highavailability.ReplicaNode.ReplicationType.SYNCHRONOUS;
import com.kabira.platform.property.Status;
import com.kabira.platform.swtimer.TimerNotifier;

/**
 * This snippet shows how to use highly available timers.
 *
 * <p>
 * <h2> Target Nodes</h2>
 * <ul>
 * <li> <b>domainname</b> = Development
 * </ul>
 */
public class Timer
{
	/**
	 * Partition mapper that maps objects to a specific partition
	 */
	private static class TimerPartitionMapper extends PartitionMapper
	{
		@Override
		public String getPartition(Object obj)
		{
			return Timer.PARTITION_NAME;
		}
	}

	/**
	 * Timer notifier
	 * <p>
	 * Timer notifier must be in same partition as the object passed to the
	 * notifier.
	 */
	private static class Notifier extends TimerNotifier
	{
		/**
		 * Timer notifier
		 *
		 * @param timerId Timer identifier
		 * @param object Timer context object
		 */
		@Override
		public void timerNotify(String timerId, Object object)
		{
			Context c1 = (Context) object;
			c1.count += 1;

			System.out.println("Timer Id:" + timerId + " Value: " + c1.count);
		}
	}

	/**
	 * Context passed to timer notifier
	 */
	@Managed
	private static class Context
	{
		int count;
	}

	/**
	 * Main entry point
	 *
	 * @param args Not used
	 * @throws java.lang.InterruptedException
	 */
	public static void main(String[] args) throws InterruptedException
	{
		initialize();

		//
		//	Start timer on node A
		//
		if (m_nodeName.equals("A"))
		{
			startTimer();
		}

		//
		//	Wait for timer to fire a few times
		//
		Thread.sleep(10000);
		
		//
		//	Migrate the partition to node B
		//
		if (m_nodeName.equals("A"))
		{
			migratePartition();
		}
			
		//
		//    Wait for timer to fire a few times
		//
		Thread.sleep(10000);

		//
		//	Stop the timer on node B
		//
		if (m_nodeName.equals("B"))
		{
			stopTimer();
		}

	}

	private static void initialize()
	{
		new Transaction("Initialize")
		{
			@Override
			protected void run() throws Transaction.Rollback
			{
				//
				//	Install a partition mapper
				//
				TimerPartitionMapper mapper = new TimerPartitionMapper();
				PartitionManager.setMapper(Notifier.class, mapper);
				PartitionManager.setMapper(Context.class, mapper);

				//
				//	Define and enable the test partition
				//
				ReplicaNode[] replicas = new ReplicaNode[]
				{
					new ReplicaNode("B", SYNCHRONOUS),
					new ReplicaNode("C", SYNCHRONOUS)
				};
				PartitionManager.definePartition(PARTITION_NAME, null, "A", replicas);
				PartitionManager.enablePartitions(JOIN_CLUSTER);
			}
		}.execute();
	}
	
	private static void startTimer()
	{
		new Transaction("Start Timer")
		{
			@Override
			protected void run() throws Transaction.Rollback
			{
				Notifier notifier = new Notifier();
				Context c1 = new Context();

				System.out.println("Starting one second recurring timer");
				notifier.startRecurring(1, c1);
			}
		}.execute();		
	}
	
	private static void stopTimer()
	{
		new Transaction("Stop Timer")
		{
			@Override
			protected void run() throws Transaction.Rollback
			{
				//
				//	Stop timer - just delete the notifier
				//
				for (Notifier notifier : ManagedObject.extent(Notifier.class))
				{
					System.out.println("Stopping one second recurring timer");
					ManagedObject.delete(notifier);
				}
			}
		}.execute();
	}
	
	private static void migratePartition()
	{
		System.out.println("Migrating partition to node B");

		new Transaction("Migrate Partition")
		{
			@Override
			protected void run() throws Transaction.Rollback
			{
				Partition partition = PartitionManager.getPartition(PARTITION_NAME);
				
				assert partition != null : PARTITION_NAME;
				
				//
				//	Migrate partition to node B
				//

				ReplicaNode[] replicas = new ReplicaNode[]
				{
					new ReplicaNode("C", SYNCHRONOUS),
					new ReplicaNode("A", SYNCHRONOUS)
				};
				partition.migrate(null, "B", replicas);
			}
		}.execute();
	}

	private static final String PARTITION_NAME = "Timer Snippet";
	private static final String m_nodeName = System.getProperty(Status.NODE_NAME);
}

When Example 7.16, “Highly available timers” is run, the output shown in Example 7.17, “Highly available timer output” is generated (annotations added and non-essential output removed).

Example 7.17. Highly available timer output

#
#     Timer started on node A
#
[A] Starting one second recurring timer

#
#     Timer notifier called on node A
#
[A] Timer Id:20799919691:107:0 Value: 1
[A] Timer Id:20799919691:107:0 Value: 2
[A] Timer Id:20799919691:107:0 Value: 3
[A] Timer Id:20799919691:107:0 Value: 4
[A] Timer Id:20799919691:107:0 Value: 5
[A] Timer Id:20799919691:107:0 Value: 6
[A] Timer Id:20799919691:107:0 Value: 7
[A] Timer Id:20799919691:107:0 Value: 8
[A] Timer Id:20799919691:107:0 Value: 9

#
#     Timer migrated to node B
#
[A] Migrating partition to node B

#
#    Timer notifier now called on node B
#  
[B] Timer Id:20799919691:107:0 Value: 10
[B] Timer Id:20799919691:107:0 Value: 11
[B] Timer Id:20799919691:107:0 Value: 12
[B] Timer Id:20799919691:107:0 Value: 13
[B] Timer Id:20799919691:107:0 Value: 14
[B] Timer Id:20799919691:107:0 Value: 15
[B] Timer Id:20799919691:107:0 Value: 16
[B] Timer Id:20799919691:107:0 Value: 17
[B] Timer Id:20799919691:107:0 Value: 18

#
#    Timer stopped on node B
#
[B] Stopping one second recurring timer