Restoring a tibdg Node

When restoring the nodes of a data grid, all nodes must be restored using the same checkpoint to ensure a consistent state between the primary and secondary nodes of each copyset and between copysets.

When a checkpoint is created, each running node saves the files needed to restore the node to the following directory:
<node_data_dir>/checkpoints/<timestamp>_<epoch>_<counter>_<checkpoint_name>/data
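
For example, to identify the checkpoint directory to copy for a node whose data directory is ./cs1_n1_data, you can list the contents of its checkpoints directory. The directory name shown here is illustrative only; the timestamp, epoch, and counter values depend on when the checkpoint was created:
    ls ./cs1_n1_data/checkpoints
    <timestamp>_00000000_00000001_chkpt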

Procedure

  1. Stop the node.
  2. Move the node’s current data directory to a backup location.
  3. Re-create the node's data directory, then copy the appropriate checkpoint directory from the backup location back to its original location under the checkpoints directory.
    <node_data_dir>/checkpoints/<timestamp>_<epoch>_<counter>_<checkpoint_name>
    The nodes read the rollback record in the realm and restore their live directory from the checkpoint directory specified in the rollback record. This ensures that all nodes restore the same checkpoint, because a node fails to start up if it cannot find the checkpoint directory specified by the rollback record.
  4. Restart the node.
    For example, suppose node cs1_n1 is started with the data directory ./cs1_n1_data. Then on UNIX you would do the following:
    tibdg -r http://10.0.1.25:8080 node stop cs1_n1
    mv ./cs1_n1_data cs1_n1_backup
    mkdir -p cs1_n1_data/checkpoints
    cp -R cs1_n1_backup/checkpoints/<timestamp>_00000000_00000001_chkpt cs1_n1_data/checkpoints/.
    tibdgnode -r http://10.0.1.25:8080 -n cs1_n1
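
Because every node in the grid must be restored from the same checkpoint, the same sequence is repeated on each node. The following sketch is illustrative only: it assumes nodes named cs1_n1, cs1_n2, and cs1_n3, each with a data directory named <node>_data, and a checkpoint directory named <timestamp>_00000000_00000001_chkpt; adjust the node names, realm server URL, and checkpoint directory name to match your deployment.
    for n in cs1_n1 cs1_n2 cs1_n3; do
        tibdg -r http://10.0.1.25:8080 node stop $n
        mv ./${n}_data ${n}_backup
        mkdir -p ${n}_data/checkpoints
        cp -R ${n}_backup/checkpoints/<timestamp>_00000000_00000001_chkpt ${n}_data/checkpoints/.
    done
    # then restart each node, for example:
    tibdgnode -r http://10.0.1.25:8080 -n cs1_n1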