Launching TDV Containers (Cluster) - Using Docker Run
This section explains how to start TDV Docker containers in a DV Cluster configuration. If you need to review TDV container sizing guidelines, refer to Sizing Guidelines for TDV. For further information about the TDV Cluster, refer to the TDV Active Cluster Guide.
General example for launching two TDV Docker containers to create a TDV Cluster
Below is a generic example for launching two TDV cluster nodes as Docker containers. Note that you must execute the "docker run" command for all three TDV containers on each node (repo, cache, and server).
For Node 1:
$ docker volume rm -f tdv-clustercache-vol
$ docker volume create --name "tdv-clustercache-vol"
$ docker volume rm -f tdv-clusterrepo-vol
$ docker volume create --name "tdv-clusterrepo-vol"
$ docker volume rm -f tdv-cluster-vol
$ docker volume create --name "tdv-cluster-vol"
$ docker network create --internal tdv-bridge-internal
$ docker network create tdv-bridge
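The six volume commands above all follow one pattern (force-remove any stale volume, then create a fresh one). A minimal sketch that loops over the node 1 volume names, shown here as a dry run so nothing is executed against the Docker daemon:

```shell
#!/bin/sh
# Recreate a named Docker volume from scratch (remove if present, then create).
# With DRY_RUN=1 the commands are printed instead of executed.
recreate_volume() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "docker volume rm -f $1"
    echo "docker volume create --name \"$1\""
  else
    docker volume rm -f "$1"
    docker volume create --name "$1"
  fi
}

# Volume names taken from the node 1 commands above.
for vol in tdv-clustercache-vol tdv-clusterrepo-vol tdv-cluster-vol; do
  DRY_RUN=1 recreate_volume "$vol"
done
```

Remove `DRY_RUN=1` to actually run the commands on a host with Docker installed.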
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 9404:9404 -v tdv-clustercache-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-clustercache tibco/tdvcache:8.5.5 postgres
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 9408:9408 -v tdv-clusterrepo-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-clusterrepo tibco/tdvrepo:8.5.5 postgres
$ docker run -itd --cpus 2.0 --restart unless-stopped --memory 8g --network tdv-bridge-internal -p 9400-9403:9400-9403 -p 9405:9405 -p 9409:9409 -p 9300-9306:9300-9306 -p 9407:9407 -v tdv-cluster-vol:/opt/TIBCO --name tdv-cluster --env TDV_ADMIN_PASSWORD=admin1 --env TDV_MAX_MEMORY=7000 --env TDV_CACHE_HOSTNAME=tdv-clustercache --env TDV_REPO_HOSTNAME=tdv-clusterrepo tibco/tdv:8.5.5 tdv.server
$ docker network connect tdv-bridge tdv-cluster
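After the `docker network connect` step, the server container should be attached to both networks. One way to script the check is to scan `docker network inspect` output for the container name; the helper below reads that output on stdin, so it can be shown (and tested) without a live daemon:

```shell
#!/bin/sh
# Succeeds if the quoted container name appears in `docker network inspect`
# output read from stdin. The surrounding quotes in the pattern prevent
# "tdv-cluster" from also matching "tdv-cluster2".
attached() {
  grep -q "\"$1\""
}

# In practice (requires a running Docker daemon):
#   docker network inspect tdv-bridge | attached tdv-cluster && echo "attached"
```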
To configure node 1 as the timekeeper node on "tdv-cluster:9400" with cluster name "tdv-cluster":
$ docker exec -it tdv-cluster /bin/bash -c "\$TDV_INSTALL_DIR/bin/cluster_util.sh -server tdv-cluster -port 9400 -user admin -password admin1 -create -clusterName tdv-cluster"
For Node 2:
$ docker volume rm -f tdv-cluster2cache-vol
$ docker volume create --name "tdv-cluster2cache-vol"
$ docker volume rm -f tdv-cluster2repo-vol
$ docker volume create --name "tdv-cluster2repo-vol"
$ docker volume rm -f tdv-cluster2-vol
$ docker volume create --name "tdv-cluster2-vol"
Note: The Docker networks tdv-bridge-internal and tdv-bridge were already created for node 1 above; reuse them for node 2. Running "docker network create" again for an existing network fails with an error.
For Linux and Windows:
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 10404:9404 -v tdv-cluster2cache-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-cluster2cache tibco/tdvcache:8.5.5 postgres
For Mac:
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 10404:9404 --hostname=localhost -v tdv-cluster2cache-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-cluster2cache tibco/tdvcache:8.5.5 postgres
For Linux and Windows:
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 10408:9408 -v tdv-cluster2repo-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-cluster2repo tibco/tdvrepo:8.5.5 postgres
For Mac:
$ docker run -itd --cpus 1.0 --restart unless-stopped --memory 2g --network tdv-bridge-internal -p 10408:9408 --hostname=localhost -v tdv-cluster2repo-vol:/var/lib/postgresql/data --env POSTGRES_USER=root --env POSTGRES_PASSWORD=admin1 --env POSTGRES_DB=postgres --env POSTGRES_INITDB_ARGS="-E UTF-8" --env POSTGRES_HOST_AUTH_METHOD="password" --env PGDATA="/var/lib/postgresql/data/tdv" --name tdv-cluster2repo tibco/tdvrepo:8.5.5 postgres
For Linux and Windows:
$ docker run -itd --cpus 2.0 --restart unless-stopped --memory 8g --network tdv-bridge-internal -p 10400-10403:9400-9403 -p 10405:9405 -p 10409:9409 -p 10300-10306:9300-9306 -p 10407:9407 -v tdv-cluster2-vol:/opt/TIBCO --name tdv-cluster2 --env TDV_ADMIN_PASSWORD=admin1 --env TDV_MAX_MEMORY=7000 --env TDV_CACHE_HOSTNAME=tdv-cluster2cache --env TDV_REPO_HOSTNAME=tdv-cluster2repo tibco/tdv:8.5.5 tdv.server
For Mac:
$ docker run -itd --cpus 2.0 --restart unless-stopped --memory 8g --network tdv-bridge-internal -p 10400-10403:9400-9403 -p 10405:9405 -p 10409:9409 -p 10300-10306:9300-9306 -p 10407:9407 --hostname=localhost -v tdv-cluster2-vol:/opt/TIBCO --name tdv-cluster2 --env TDV_ADMIN_PASSWORD=admin1 --env TDV_MAX_MEMORY=7000 --env TDV_CACHE_HOSTNAME=tdv-cluster2cache --env TDV_REPO_HOSTNAME=tdv-cluster2repo tibco/tdv:8.5.5 tdv.server
$ docker network connect tdv-bridge tdv-cluster2
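Node 2 publishes the same container ports as node 1, but offset by +1000 on the host (10400-10409 and 10300-10306 instead of 9400-9409 and 9300-9306) so both nodes can coexist on one Docker host. A small sketch of that mapping:

```shell
#!/bin/sh
# Host port used by node 2 for a given TDV container port, matching the
# -p host:container mappings in the node 2 commands above (offset +1000).
node2_port() {
  echo $(( $1 + 1000 ))
}

# Example: the TDV web manager container port 9400 is published on the
# host as port 10400 for node 2:
#   node2_port 9400
```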
To configure node 2 as a non-timekeeper node on "tdv-cluster2:9400" with cluster name "tdv-cluster":
$ docker exec -it tdv-cluster2 /bin/bash -c "\$TDV_INSTALL_DIR/bin/cluster_util.sh -server tdv-cluster2 -port 9400 -user <user> -password <tdv admin password> -join -memberServer tdv-cluster -memberPort 9400 -memberUser <user> -memberPassword <tdv admin password>"
Note: When launching a container, you can either provide a password directly or use a password file that stores it. Refer to the section TDV Admin Password for the ways to specify the admin password.
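One pattern for keeping the password itself off the command line (a sketch; the exact container-side mechanisms are described in the TDV Admin Password section) is to read it from a file on the host and pass it through the environment:

```shell
#!/bin/sh
# Read the admin password from a file (first line only, newline stripped)
# so the literal value never appears in the docker run command line.
read_password() {
  # $1: path to a file whose first line is the password
  head -n 1 "$1"
}

# In practice, export the value and let docker pick it up from the
# environment (--env VAR with no =value copies it from the caller):
#   TDV_ADMIN_PASSWORD="$(read_password /secure/tdv-admin.pw)" \
#     docker run ... --env TDV_ADMIN_PASSWORD ...
```

The path `/secure/tdv-admin.pw` is only an illustration; use whatever protected location your host provides.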
References
Refer to the table below for a description of the different Docker options used:

| Option | Docker Help Reference |
| --- | --- |
| --restart | Used to configure the restart policy for a container. Refer to https://docs.docker.com/config/containers/start-containers-automatically/ |
| -t | Allocate a pseudo-TTY. https://docs.docker.com/engine/reference/run/ |
| -i | Keep STDIN open even if not attached. https://docs.docker.com/engine/reference/run/ |
| -d | Detach and run the container in the background, printing the container ID. https://docs.docker.com/engine/reference/run/#detached--d |
| -v | (TDV Required) The TDV container requires a persistent storage area when running as a Docker container. See Sizing Guidelines for TDV for size recommendations. https://docs.docker.com/storage/bind-mounts/ Usage: -v <path to the file on the host machine>:<path where the file is mounted in the container>:<optional parameters> Example: -v $CONTAINER_CACHE_VOLUME:$TDV_DATABASE_DIR Note: a valid volume must exist for the mount point before starting the TDV container. |
| --cpus | (TDV Recommended) The TDV container works best with 2 CPUs/cores in general. See Sizing Guidelines for TDV for value recommendations. https://docs.docker.com/config/containers/resource_constraints/ |
| -m, --memory | (TDV Required) The TDV container requires a minimum of 8 GB of memory; higher TDV workloads require more. See Sizing Guidelines for TDV for value recommendations. https://docs.docker.com/config/containers/resource_constraints/ |
| <tdv-container-name> | Container name for your TDV Docker container. The recommendation is to include tdv in the name. Examples: tdv, tdv-1, tdv-2, tdv-dev, tdv-prod, etc. |
| <repo-name> | Repository name for your TDV Docker image. |
| <image-name> | The recommendation is to use tdv, though any name can be used. |
| <image-tag> | The recommendation is to use the TDV version. Example: 8.4 |
Linux
This section explains how to start two TDV Docker containers configured as a DV Cluster on a Docker environment hosted on the Linux platform. Use a Docker network that allows your TDV containers to communicate with each other.
The bridge, host, and user-defined bridge network options in Docker should work for the TDV containers on this platform. Refer to the TDV Active Cluster Guide on how to configure TDV and create a new active cluster.
Note: Ensure that both TDV containers are running and accessible.
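One way to script that check is to list the running container names with `docker ps` and confirm each expected name is present. The helper below reads the name list on stdin, so it can be shown (and tested) without a live daemon:

```shell
#!/bin/sh
# Succeeds only if every argument appears as its own line in the
# container-name list read from stdin.
all_running() {
  names="$(cat)"
  for want in "$@"; do
    echo "$names" | grep -qx "$want" || return 1
  done
}

# In practice (requires a running Docker daemon):
#   docker ps --format '{{.Names}}' | all_running tdv-cluster tdv-cluster2 \
#     && echo "both nodes are running"
```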
TDV Docker Container Example
Resource configuration: small (PoC/demo): 2 CPUs/cores, 8 GB memory, external container volume tdv-vol with 8 GB of persistent readable/writable storage.
TDV configuration: base port (9400), admin password (required), server memory (default). Refer to the Docker container files (Dockerfile.tdv, Dockerfile.tdv.repo and Dockerfile.tdv.cache) for TDV Docker image default values.
Network configuration: user-defined bridge Docker network.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 1 for examples.
Verify you can access port 9400 for Node #1 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Creating a New Active Cluster" section of the TDV Active Cluster Guide.
That will set up a new DV cluster on Node #1.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 2 to set up Node 2.
Verify you can access port 9400 for Node #2 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Adding a TDV Server to an Active Cluster" section of the TDV Active Cluster Guide.
That will set up Node #2 to join the TDV Cluster created on Node #1.
Now your DV Cluster is configured and ready for usage.
You can verify this by opening a browser client and going to http://<IP_NODE_#1>:9400/manager.
Select “Cluster”.
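The browser check can also be scripted with curl. The helper below just classifies the returned HTTP status code (a sketch; whether the manager answers with 200 or a redirect may vary by version):

```shell
#!/bin/sh
# Treat any 2xx or 3xx HTTP status code as "manager reachable".
reachable() {
  case "$1" in
    2??|3??) return 0 ;;
    *)       return 1 ;;
  esac
}

# In practice, substitute your Node #1 address (curl prints 000 when it
# cannot connect at all):
#   status="$(curl -s -o /dev/null -w '%{http_code}' http://<IP_NODE_#1>:9400/manager)"
#   reachable "$status" && echo "cluster manager is up"
```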
MacOS
This section explains how to start two TDV Docker containers configured as a DV Cluster on a Docker environment hosted on the Mac OS platform. Use a Docker network that allows your TDV containers to communicate with each other.
The bridge, host, and user-defined bridge network options in Docker should work for the TDV containers on this platform. Refer to the TDV Active Cluster Guide on how to configure TDV and create a new active cluster.
Note: Ensure that both TDV containers are running and accessible.
TDV Docker container example
Resource configuration: small (PoC/demo): 2 CPUs/cores, 8 GB memory, external container volume tdv-vol with 8 GB of persistent readable/writable storage.
MacOS-specific configuration: -p <host-port>:<container-port> for all DV ports exposed, and --hostname=localhost.
TDV configuration: base port (9400), admin password (required), server memory (default). Refer to the Docker container files (Dockerfile.tdv, Dockerfile.tdv.repo and Dockerfile.tdv.cache) for TDV Docker image default values.
Network configuration: user-defined bridge Docker network.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 1 for examples.
Verify you can access port 9400 for Node #1 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Creating a New Active Cluster" section of the TDV Active Cluster Guide.
That will set up a new DV cluster on Node #1.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 2 to set up Node 2.
Verify you can access port 9400 for Node #2 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Adding a TDV Server to an Active Cluster" section of the TDV Active Cluster Guide.
That will set up Node #2 to join the TDV Cluster created on Node #1.
Now your DV Cluster is configured and ready for usage.
You can verify this by opening a browser client and going to http://<IP_NODE_#1>:9400/manager.
Select “Cluster”.
Windows
This section explains how to start two TDV Docker containers configured as a DV Cluster on a Docker environment hosted on the Windows platform. Use a Docker network that allows your TDV containers to communicate with each other.
The bridge, host, and user-defined bridge network options in Docker should work for the TDV containers on this platform. Refer to the TDV Active Cluster Guide on how to configure TDV and create a new active cluster.
Note: Ensure that both TDV containers are running and accessible.
TDV Docker container example
Resource configuration: small (PoC/demo): 2 CPUs/cores, 8 GB memory, external container volume tdv-vol with 8 GB of persistent readable/writable storage.
Windows-specific configuration: -p <host-port>:<container-port> for all DV ports exposed, and --hostname=localhost or --hostname=<ip-or-hostname>.
TDV configuration: base port (9400), admin password (default), server memory (default). Refer to the Docker container files (Dockerfile.tdv, Dockerfile.tdv.repo and Dockerfile.tdv.cache) for TDV Docker image default values.
Network configuration: user-defined bridge Docker network.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 1 for examples.
Verify you can access port 9400 for Node #1 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Creating a New Active Cluster" section of the TDV Active Cluster Guide.
That will set up a new DV cluster on Node #1.
Refer to General example for launching two TDV Docker containers to create a TDV Cluster - For Node 2 to set up Node 2.
Verify you can access port 9400 for Node #2 from outside of your Docker environment.
Once that is done, follow the TDV configuration steps in the "Adding a TDV Server to an Active Cluster" section of the TDV Active Cluster Guide.
That will set up Node #2 to join the TDV Cluster created on Node #1.
Now your DV Cluster is configured and ready for usage.
You can verify this by opening a browser client and going to http://<IP_NODE_#1>:9400/manager.
Select “Cluster”.