Planning and Configuring
Follow the steps below to configure an OpenShift cluster on Azure:
1. Sign in to Azure. Follow the instructions for signing in.
Command:
az login
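Optionally, you can confirm that you are signed in to the intended subscription with the standard az account show command (an extra check, not a required step):
Example Command:
az account show --query "{subscription:name, user:user.name}"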
2. Create a resource group for the key vault. The following example command creates a resource group named suneelrg.
Example Command:
az group create --name suneelrg --location eastus
3. Create a key vault. The following example command creates a key vault named suneelvault.
Example Command:
az keyvault create --resource-group suneelrg --name suneelvault --enabled-for-template-deployment true --location eastus
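Optionally, you can confirm that template deployment is enabled on the new vault (a quick check using the vault name above):
Example Command:
az keyvault show --name suneelvault --query properties.enabledForTemplateDeployment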
4. Create an SSH key pair without a passphrase.
Example Command:
ssh-keygen -f ~/.ssh/openshift_rsa -t rsa -N ''
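The public half of this key pair is needed later for the ssh public key template parameter (step 11). To print it, for example:
Example Command:
cat ~/.ssh/openshift_rsa.pub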
5. Store the SSH private key in the Azure key vault that was created in step 3.
Example Command:
az keyvault secret set --vault-name suneelvault --name keysecret --file ~/.ssh/openshift_rsa
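Optionally, confirm that the secret was stored (using the vault and secret names above):
Example Command:
az keyvault secret show --vault-name suneelvault --name keysecret --query id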
6. Get the suneelrg resource group id. Sample output string for the id: /subscriptions/1db82ccd-abfe-46ec-8ad2-7f2d8cf050d5/resourceGroups/suneelrg.
Example Command:
az group show --name suneelrg --query id
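If you want the id without the surrounding quotes, for example to capture it in a shell variable for the --scopes option in step 7, the Azure CLI --output tsv flag can be used:
Example Command:
SCOPE=$(az group show --name suneelrg --query id --output tsv)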
7. Create a service principal and assign it contributor permissions to the key vault resource group created in step 2. Use the output string from step 6 for the --scopes option.
Example Command:
az ad sp create-for-rbac --name suneelSP --role Contributor --password Ocazure@18 --scopes /subscriptions/1db82ccd-abfe-46ec-8ad2-7f2d8cf050d5/resourceGroups/suneelrg
You might get an error if you do not have the appropriate permissions. If the above command runs successfully, you will get output similar to the following. Take note of appId, tenant, and password; these are needed for deployment later on.
{
  "appId": "31bf3682-39b6-4ba1-931d-6d66d8887ad0",
  "displayName": "suneelSP",
  "name": "http://suneelsp",
  "password": "Ocazure@18",
  "tenant": "cde6fa59-abb3-4971-be01-2443c417cbda"
}
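If you misplace the appId later, it can be looked up again by display name (a standard Azure CLI query, using the service principal name above):
Example Command:
az ad sp list --display-name suneelSP --query "[].appId"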
8. Create a resource group for the OpenShift cluster. suneelOSFTCluster is the name of the resource group in the following command.
Example Command:
az group create --name suneelOSFTCluster --location eastus
9. Assign the contributor role to the appId (received in step 7) for the above resource group.
Example Command:
az role assignment create --assignee 31bf3682-39b6-4ba1-931d-6d66d8887ad0 --resource-group suneelOSFTCluster --role Contributor
10. Verify the role assignment. The output of the following command should not be blank.
Example Command:
az role assignment list --resource-group suneelOSFTCluster
11. Go to the page https://github.com/Microsoft/openshift-container-platform/tree/release-3.10 and click the Deploy to Azure link. It takes you to the Azure portal. Fill in all the required parameters. Make sure you take note of your OpenShift admin user name and password; they are required when you log in to the OpenShift console once it is successfully deployed. The following are descriptions of some important parameters:
- Resource Group: Select the "Use Existing" radio button and choose the resource group created in step 8.
- Openshift Password: Enter a password of your choice.
- Key Vault Secret: The name you gave the secret in step 5 ("keysecret" in this example).
- Red Hat user name/organization and password/activation key: If you created your own account for OpenShift on the Red Hat portal, use that. You will need the Red Hat pool id for the subscription.
- ssh public key: This is the public key you created in step 4. The content is in the file ~/.ssh/openshift_rsa.pub.
- Select the VM size as Standard_E2s_V3. Storage kind should be 'unmanaged'.
- Select 'true' in the Enable Azure field.
- Aad client id and secret: Use the appId received in step 7 and the same password you used in step 7.
- masterInstanceCount field value: select 1.
- infraInstanceCount field value: select 1.
- nodeInstanceCount field value: select 3.
Click the 'Purchase' link. If everything goes well, it takes around 45 minutes to 1 hour to create the cluster. If the deployment is successful, go to the OpenShift cluster UI by accessing the successfully-deployed template details. Once deployment is complete, you can also get the OpenShift console URL by running:
Example Command:
az group deployment show --name Microsoft.Template --resource-group suneelOSFTCluster | grep ".azure.com:8443/console"
You can log in to the above URL using the user name ocpadmin and the password you entered above for the Openshift Password.
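If you also want command-line access to the cluster, the standard OpenShift client can log in to the same endpoint. This is an optional sketch; replace the placeholder host with the console URL returned by the command above:
Example Command:
oc login https://<your-cluster-host>:8443 -u ocpadmin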
Mashery Local Components Configuration
Since Kubernetes clusters can be set up in many different ways and can be shared by many applications, it is the Administrator's responsibility to have the cluster set up and ready for deployment.
Note for macOS: If you have already installed minikube or kubectl, it is better to remove them first to do a clean install (remove any trace of a previous Kubernetes installation).
The following settings should be customized:
"tmg_cluster_name": "Tibco Mashery Gateway Reference Cluster",
"k8s_azure_zones": ["westus2"]
Customize TIBCO Mashery Local Cluster
Note: IMPORTANT! Before you customize your TIBCO Mashery Local cluster, make sure the tml_image_tag matches your Docker image build. For example, to verify "tml_image_tag": "v5.0.0.1", check the Docker image build in Jenkins.
The following settings can be customized:
- tml_image_tag must be updated to match your Docker image build.
- tml_image_registry_host is the registry host for Docker images in the OpenShift container registry.
- tml_image_repo is the registry name, which is generally the project name in the OpenShift container registry.
- k8s_deploy_namespace is the namespace in the OpenShift cluster, which again is the project name.
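On OpenShift 3.x, the value for tml_image_registry_host can typically be found from the default registry route. This lookup assumes the registry route has the default name docker-registry in the default project:
Example Command:
oc get route docker-registry -n default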
Sample Cluster Configuration:
"tml_image_tag": "v5.0.0.1", "tml_image_registry_host" : "docker-registry-default.x.x.94.123.nip.io", "tml_image_repo": "ml5-openshift", "tml_cm_count": 1, "tml_tm_count": 2, "tml_cache_count": 1, "tml_sql_count": 1, "tml_log_count": 1, "tml_nosql_count": 1, "k8s_storage_type": "pd-standard", "k8s_storage_provisioner": "kubernetes.io/azure-disk", "k8s_deploy_namespace": "ml5-openshift", "k8s_storage_account_type": "Standard_LRS", "tml_sql_storage_size": "10Gi", "tml_log_storage_size": "10Gi", "tml_nosql_storage_size": "10Gi", "tml_tm_http_enabled": "true", "tml_tm_http_port": 80, "tml_tm_https_enabled": "true", "tml_tm_https_port": 443, "tml_tm_oauth_enabled": "true", "tml_tm_oauth_port": 8083, "cassandra_max_heap": "512M", "cassandra_replication_factor": 1
Single Zone Deployment

Setting | Description
---|---
tml_cm_count | Number of Cluster Manager Containers
tml_tm_count | Number of Traffic Manager Containers
tml_cache_count | Number of Memcached Containers
tml_sql_count | Number of MySQL Containers
tml_log_count | Number of Log Service Containers
tml_nosql_count | Number of Cassandra Containers
Setting up Mashery Service Configuration Data
Mashery Local offers the option of importing service configuration data offline. A sample data.zip is provided with the TIBCO Mashery Local build that can be loaded into the database during TIBCO Mashery Local cluster creation.