Data Plane Network Policies
You can use Kubernetes network policies to control incoming and outgoing traffic for pods. Network policies improve security by limiting traffic to and from pods. In TIBCO Control Plane, creation of network policies is disabled by default. However, if you want to create the network policies defined by TIBCO Control Plane, they can be enabled by setting a flag as described in the following section.
Enabling Deployment of Network Policies
To enable deployment of network policies, perform the following steps. Network policies must be created in all data plane namespaces, including the primary and application namespaces.
- Procedure
- When registering a data plane, navigate to the last screen of the data plane registration wizard. The command to create the service account is displayed. For more information, see Registering a Kubernetes Cluster.
- Append the following mandatory values to the command:
--set networkPolicy.create=true
--set networkPolicy.nodeCidrIpBlock=<NodeIpCIDR>
Optionally, add podCidrIpBlock if it is different from nodeCidrIpBlock:
--set networkPolicy.podCidrIpBlock=<PodIpCIDR>
NodeIpCIDR is the IP range of the nodes' VPC in AWS or the VNet address space in Azure, in CIDR notation. For example: 10.200.0.0/16. On AWS, this is the VPC CIDR value shown in the AWS console. PodIpCIDR is the IP range of the pod CIDR, in CIDR notation. For example: 192.168.0.0/16. If you use a different subnet for the pod CIDR, you must specify PodIpCIDR explicitly.
After you run the preceding command, the default network policies defined in the dp-configure-namespace chart are created.
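Assembled with the flags above, the tail of the copied command looks like the following sketch. The command itself comes from the registration wizard; the CIDR values here are examples only:

```shell
# Append to the service account creation command copied from the wizard.
# 10.200.0.0/16 and 192.168.0.0/16 are example CIDRs; use your cluster's values.
  ... \
  --set networkPolicy.create=true \
  --set networkPolicy.nodeCidrIpBlock=10.200.0.0/16 \
  --set networkPolicy.podCidrIpBlock=192.168.0.0/16
```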
Implementing Network Policies in EKS
To use network policies on AWS, it is recommended to use the Amazon VPC CNI plugin for Kubernetes. For more information on setting up the Amazon VPC CNI plugin, see the Amazon EKS documentation.
Implementing Network Policies in AKS
Azure provides two ways to implement network policies: Azure Network Policy Manager and Calico Network Policies. You must select a network policy option when you create an AKS cluster. For more information, see the Azure documentation.
Default Network Policies
This section explains the default network policies that are created for the data plane namespaces, and the additional policies that can be enabled by applying a label to pods or namespaces.
Outgoing Traffic (Egress)
The following are allowed destinations for pods in the data plane:
- Egress to pods within namespaces of the same data plane, on all ports
- Egress to pods in a namespace with the label networking.platform.tibco.com/non-dp-ns: enable
- Egress to pods matching the label k8s-app: kube-dns, on port 53 over TCP and UDP
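The DNS egress rule above can be sketched as a standalone NetworkPolicy. This is an illustrative manifest with a hypothetical policy name and a placeholder namespace, not the exact policy the dp-configure-namespace chart creates:

```shell
# Illustrative only: allow all pods in the namespace to reach kube-dns on port 53.
kubectl apply -n <dp-namespace> -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
EOF
```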
Incoming Traffic (Ingress)
In the data plane, the following incoming traffic is allowed for pods:
- Ingress from pods within namespaces of the same data plane, on all ports
- Ingress from pods in a namespace with the label networking.platform.tibco.com/non-dp-ns: enable
Supported Network Policies and Labels
The following network policies and labels are supported in the data plane to control ingress and egress traffic:
| Network Policy | Label | Description |
|---|---|---|
| kubernetes-api | networking.platform.tibco.com/kubernetes-api: enable | Apply this label to pods in data plane namespaces to allow outgoing traffic to, and incoming traffic from, the Kubernetes API server. Egress is allowed on TCP ports 443 and 6443, and ingress on all ports. This enables only network access; authentication to the Kubernetes API server is still handled by Kubernetes RBAC. |
| internet-all | egress.networking.platform.tibco.com/internet-all: enable | Apply this label to pods in data plane namespaces to allow the pods to connect to the internet on all ports. Note that this excludes connecting to the node or pod address space. |
| internet-web | egress.networking.platform.tibco.com/internet-web: enable | Apply this label to pods in data plane namespaces to allow the pods to connect to the web only (HTTP port 80, HTTPS port 443). |
| internet-access | ingress.networking.platform.tibco.com/internet-access: enable | Apply this label to pods to receive traffic from the internet on all ports. Note that this excludes ingress from the node or pod address space. |
| cluster-access | ingress.networking.platform.tibco.com/cluster-access: enable | Apply this label to pods in data plane namespaces to receive traffic from the cluster CIDR (the node CIDR and pod CIDR). |
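For example, a running pod can be granted web-only egress by applying the corresponding label from the table. The pod and namespace names below are placeholders:

```shell
# Allow an existing pod to connect to the web only (ports 80 and 443).
kubectl label pod <pod-name> -n <dp-namespace> \
  egress.networking.platform.tibco.com/internet-web=enable
```

To revoke the access later, remove the label with `kubectl label pod <pod-name> -n <dp-namespace> egress.networking.platform.tibco.com/internet-web-`.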
For the policies related to the Kubernetes API, internet, and cluster CIDR in the preceding table, use the following parameters in the service account creation command:
- Use --set networkPolicy.nodeCidrIpBlock=<NodeIPCIDR>, where NodeIPCIDR is the IP range of the nodes' VPC or VNet address space, in CIDR notation. Example: 10.200.0.0/16
- Use --set networkPolicy.podCidrIpBlock=<PodIPCIDR>, where PodIPCIDR is the IP range of the pod CIDR, in CIDR notation. Example: 192.168.0.0/16
- Use --set networkPolicy.serviceCidrIpBlock=<ServiceIPCIDR>, where ServiceIPCIDR is the IP range of the service CIDR, in CIDR notation. Example: 172.20.0.0/16. The default value for ServiceIPCIDR in the chart values is 172.20.0.0/16.
Apply the following label to namespaces outside the data plane that need to send traffic to and receive traffic from data plane namespaces, for example, the namespaces in which the ingress controller, Elasticsearch, and Prometheus are deployed:
networking.platform.tibco.com/non-dp-ns: enable
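For example, the label can be applied as follows. The namespace name is a placeholder for whichever external namespace (ingress controller, Elasticsearch, Prometheus, and so on) needs to exchange traffic with the data plane:

```shell
# Allow traffic between data plane namespaces and an external namespace,
# such as the one hosting the ingress controller.
kubectl label namespace <external-namespace> \
  networking.platform.tibco.com/non-dp-ns=enable
```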