Hawk Container Edition Components YAML Files
Each component of Hawk Container Edition (the Hawk agent, the Hawk Cluster Manager, and the Hawk Console) uses a different deployment strategy, based on its requirements.
Hawk Cluster Manager YAML File for AWS
The Hawk Cluster Manager (hkce_clustermanager) is the seed node to which all other containers connect. This container must therefore be reachable from all other containers, and its network identity must not be lost when it is restarted or rescheduled.
Use a Kubernetes StatefulSet to deploy the Hawk Cluster Manager. Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. For details about StatefulSets, see the Kubernetes documentation.

Also, create a headless service so that all other containers can reach the Hawk Cluster Manager container. In a headless service, you set clusterIP to None. For details about headless services, see the Kubernetes documentation. For Pods created by a StatefulSet with a headless service, DNS entries are created in the form <pod-name>.<headless-service-name>.
For example, for a StatefulSet with N replicas, each Pod in the StatefulSet is assigned an integer ordinal, from 0 through N-1, that is unique over the Set. So a StatefulSet named my-stateful-set with 3 replicas creates three Pods named my-stateful-set-0, my-stateful-set-1, and my-stateful-set-2. If you also create a headless service named my-service, you can connect to these Pods by using the host names my-stateful-set-0.my-service, my-stateful-set-1.my-service, and so on.
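The naming scheme above can be sketched with a minimal headless Service and StatefulSet. This is an illustrative fragment only, using the hypothetical names my-service and my-stateful-set from the example; the app label, port, and image are placeholders:

```yaml
# Headless service: clusterIP is None, so DNS resolves per-Pod names.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
  - port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-set
spec:
  serviceName: my-service   # must match the headless service name
  replicas: 3               # creates my-stateful-set-0, -1, and -2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

With these definitions applied, each Pod is reachable inside the cluster at my-stateful-set-<ordinal>.my-service.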
For the Hawk Cluster Manager, create only one hkce_clustermanager Pod by using a StatefulSet named hkce-clustermanager-set and a headless service named hkce-service. Other components can then connect to this Pod by using the host name hkce-clustermanager-set-0.hkce-service. Set clusterIP to None for the headless service.
Set up the container (the containers tag) with the Docker image (the image tag) of the Hawk Cluster Manager.
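The sample below does not set a replica count, so the single-Pod requirement relies on the StatefulSet default of one replica. If you prefer, you can make this explicit by adding replicas to the StatefulSet spec, as in this optional fragment:

```yaml
spec:
  replicas: 1               # only one hkce_clustermanager Pod is needed
  serviceName: hkce-service
```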
Sample Content for Hawk Cluster Manager YAML File
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hkce-service
spec:
  ports:
  - port: 2561
    protocol: TCP
    targetPort: 2561
  selector:
    app: hkce-clustermanager
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: hkce-clustermanager
  name: hkce-clustermanager-set
spec:
  serviceName: hkce-service
  selector:
    matchLabels:
      app: hkce-clustermanager
  template:
    metadata:
      labels:
        app: hkce-clustermanager
    spec:
      containers:
      - name: hkce-clustermanager
        image: <aws_docker_registry>/hkce_clustermanager:2.0
        imagePullPolicy: Always
        env:
        - name: tcp_self_url
          value: hkce-clustermanager-set-0.hkce-service:2561
        - name: tcp_daemon_url
          value: hkce-clustermanager-set-0.hkce-service:2561
        ports:
        - containerPort: 2561
          protocol: TCP
```
Hawk Agent YAML File for AWS Deployment
The Hawk agent must be deployed on all nodes. Thus, for the Hawk agent, the resource type should be DaemonSet. A DaemonSet ensures that all (or some) nodes run a copy of a Pod.
Set up the container (the containers tag) with the Docker image (the image tag) of the Hawk agent. The tcp_self_url should be set to the address of the Pod in which the Hawk agent container is running. You can use the Kubernetes downward API to get the Pod's IP address: set fieldPath to status.podIP. This ensures that each hkce_agent has a unique tcp_self_url. Similarly, the ami_tcp_session for each agent should be set to the IP address of the node on which the hkce_agent container is deployed. You can use the downward API to get the node's IP address: set fieldPath to status.hostIP. The AMI applications running on that node can use the same status.hostIP in their daemon_url. Specify the protocol and port to connect to this service.
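The downward API settings described above follow this pattern, shown here as an isolated fragment of a container's env list (the variable names match the full sample that follows):

```yaml
env:
- name: HOST_NAME            # Pod IP, used in tcp_self_url
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: HOST_IP              # node IP, used in ami_tcp_session
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
```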
Sample Content for Hawk Agent YAML File
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hkce-agent-set
  labels:
    app: hkce-agent
spec:
  selector:
    matchLabels:
      name: hkce-agent-set
  template:
    metadata:
      labels:
        name: hkce-agent-set
    spec:
      containers:
      - name: hkce-agent
        image: <aws_docker_registry>/hkce_agent:2.0
        imagePullPolicy: Always
        env:
        - name: HOST_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: tcp_self_url
          value: ${HOST_NAME}:2551
        - name: tcp_daemon_url
          value: hkce-clustermanager-set-0.hkce-service:2561
        - name: ami_tcp_session
          value: ${HOST_IP}:2571
        - name: config_path
          value: /tibco.home/hkce/2.0/config/
        - name: DOCKER_HOST
          value: unix:///var/run/docker.sock
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock-volume
        ports:
        - containerPort: 2551
          name: agentport
          protocol: TCP
        - containerPort: 2571
          hostPort: 2571
          protocol: TCP
      volumes:
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
```
Hawk Console YAML File for AWS Deployment
Set up a Pod with a load-balancer service to deploy the Hawk Console. The load balancer provides an externally accessible IP address that sends traffic to the correct port on your cluster nodes.
The load-balancer service makes the console's port 8083 accessible from outside the cluster. Set up the container (the containers tag) with the Docker image (the image tag) of the Hawk Console. Like the Hawk agent, the Hawk Console uses the Kubernetes downward API to get the Pod's address: set the fieldPath of HOST_NAME to status.podIP and use it in the tcp_self_url.
Sample Content for Hawk Console YAML File
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hkce-console-service
spec:
  type: LoadBalancer
  ports:
  - port: 8083
    targetPort: 8083
  selector:
    app: hkce-console
---
apiVersion: v1
kind: Pod
metadata:
  name: hkce-console
  labels:
    name: hkce-console
    app: hkce-console
spec:
  containers:
  - name: hkce-console
    image: <aws_docker_registry>/hkce_console:2.0
    imagePullPolicy: Always
    env:
    - name: HOST_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: tcp_self_url
      value: ${HOST_NAME}:2551
    - name: tcp_daemon_url
      value: hkce-clustermanager-set-0.hkce-service:2561
    - name: hawk_domain
      value: default
    - name: hawk_console_repository_path
      value: /tibco.home/hkce/2.0/repo
    ports:
    - containerPort: 2551
      name: consoleport
      protocol: TCP
    - containerPort: 8083
```