Deploy a Semarchy xDM instance in GCP
This guide walks you through deploying Semarchy xDM in GCP. Completing these steps takes approximately two hours.
It is highly recommended not to use the root user for any deployment or operation described below. Always follow the principle of least privilege for all access granted as part of the deployment, such as IAM roles and firewall rules.
Step 1. Check the GCP credentials
Before starting the deployment, check that the Google Cloud CLI (gcloud CLI) is correctly configured:
-
Run the following command to retrieve your GCP user:
$ gcloud auth list
Command output:
Credentialed Accounts
ACTIVE  ACCOUNT
*       <your email>
If the output differs, the gcloud CLI is not configured correctly. Refer to Install the Google Cloud CLI for more information.
-
All the following commands are executed against the default project. If necessary, run the following command to modify the gcloud CLI configuration:
$ gcloud init
Refer to Initializing the gcloud CLI if the project is not set correctly.
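As an additional sanity check, you can print the project currently set as default (a quick sketch; it assumes an initialized gcloud CLI):

```shell
# Show the project that subsequent gcloud commands will target
gcloud config get-value project
```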
Step 2. (Optional) Create the Cloud SQL instance
This step is optional. If you already have a running database instance, you can proceed to the next section.
This guide provides gcloud CLI commands to create the database instance. This can also be achieved using the Google Cloud console UI.
These commands only provide mandatory parameters but you need to ensure they meet your requirements. For more information, see the official Google Cloud SDK documentation.
You must first create the database instance that will store the Semarchy xDM repository and, optionally, the data locations for your applications.
-
Run the following command to create the database cluster:
$ gcloud sql instances create <db_cluster_name> \
    --database-version=POSTGRES_14 \
    --cpu=2 \
    --memory=7680MB \
    --region=<cluster_region> \
    --availability-type=REGIONAL \
    --root-password=<db_cluster_password> \
    --network=default \
    --no-assign-ip
Command output:
Creating Cloud SQL instance for POSTGRES_14...done.
Created [https://sqladmin.googleapis.com/sql/v1beta4/projects/<cluster_project>/instances/<db_cluster_name>].
NAME               DATABASE_VERSION  LOCATION          TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
<db_cluster_name>  POSTGRES_14       <cluster_region>  db-custom-2-7680  -                xxx.xxx.xxx.xxx  RUNNABLE
-
Run the following command to create the database on the cluster:
$ gcloud sql databases create "semarchy_repository" \
    --instance=<db_cluster_name> \
    --charset="UTF8"
Command output:
Creating Cloud SQL database...done.
Created database [semarchy_repository].
charset: UTF8
instance: <db_cluster_name>
name: semarchy_repository
project: <your_gcp_project>
-
(Optional) Add reader instances to distribute read-only workloads, such as dashboards and BI tools, across one or more read replicas:
$ gcloud sql instances create <reader_instance_name> \
    --master-instance-name=<db_cluster_name> \
    --network=default \
    --no-assign-ip
Command output:
Creating Cloud SQL instance for POSTGRES_14...done.
Created [https://sqladmin.googleapis.com/sql/v1beta4/projects/<cluster_project>/instances/<reader_instance_name>].
NAME                    DATABASE_VERSION  LOCATION          TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
<reader_instance_name>  POSTGRES_14       <cluster_region>  db-custom-2-7680  -                xxx.xxx.xxx.xxx  RUNNABLE
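You will need the writer instance's private IP address later when setting the repository JDBC URL. A sketch of how to retrieve it (assuming the default gcloud output schema, where the private address is the first entry of ipAddresses):

```shell
# Print the private IP address of the Cloud SQL writer instance
gcloud sql instances describe <db_cluster_name> \
    --format='value(ipAddresses[0].ipAddress)'
```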
Step 3. (Optional) Create the GKE cluster
This step is optional. If you already have a GKE (Google Kubernetes Engine) cluster configured with your GCP account, you can move to the next section.
This section guides you through creating the GKE cluster needed to deploy the Semarchy xDM images.
-
Run the following command to create a GKE cluster:
$ gcloud beta container clusters create clusterAutoScaled \
    --zone europe-west9-a \
    --enable-autoscaling \
    --num-nodes "2" \
    --min-nodes 1 \
    --max-nodes 2 \
    --cluster-version "1.21.14-gke.8500" \
    --release-channel "None" \
    --machine-type "e2-standard-2"
Command output:
NAME            LOCATION          MASTER_VERSION    MASTER_IP       MACHINE_TYPE   NODE_VERSION      NUM_NODES  STATUS
<cluster_name>  <cluster_region>  1.21.14-gke.8500  xxx.xxx.xxx.xx  e2-standard-2  1.21.14-gke.8500  2          RUNNING
The operation takes approximately 10 minutes.
-
Run the following command to configure kubectl with your cluster. For more information on fetching credentials for a running cluster, see the official Google Cloud SDK documentation.
$ gcloud container clusters get-credentials <cluster_name> --zone <cluster_zone>
Command output:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for <cluster_name>.
-
Run the following command to test your kubectl configuration:
$ kubectl get svc
Command output:
NAME        TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
kubernetes  ClusterIP  172.20.0.1  <none>       443/TCP  23m
Step 4. (Optional) Create the Ingress load balancer
This step is optional. If you already have a Load Balancer configured with your GKE cluster, you can proceed to the next section.
The load balancer is necessary to route users to the active instance or the passive instances. In this example, you will use the NGINX Ingress Controller, but it can be replaced by any load balancer that supports sticky sessions (sticky sessions are mandatory for the passive instances).
-
Run the following command to install the NGINX Ingress Controller:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
Command output:
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
-
Run the following command to retrieve the internal IP of the load balancer:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
Command output:
NAME                      TYPE          CLUSTER-IP    EXTERNAL-IP    PORT(S)                     AGE
ingress-nginx-controller  LoadBalancer  <Cluster_IP>  <External_IP>  80:31347/TCP,443:32449/TCP  29s
Take note of the cluster IP; it will be used later to confirm that the deployment was successful.
Step 5. Set the ConfigMap
At this step, you need to define a Kubernetes ConfigMap to set environment variables shared with every pod you deploy.
All the following Kubernetes commands are executed in the default namespace.
Seven environment variables are defined and used for the Semarchy xDM startup configuration:
-
SEMARCHY_SETUP_TOKEN: the setup token that you need to enter during the Semarchy repository creation.
-
XDM_REPOSITORY_DRIVER: the JDBC driver class for the repository database. Leave the default value for a PostgreSQL database.
-
XDM_REPOSITORY_URL: the JDBC URL for the repository database. It must match your database writer instance endpoint.
-
XDM_REPOSITORY_USERNAME: the database user that connects to the repository database.
-
XDM_REPOSITORY_PASSWORD: the database user password.
-
XDM_REPOSITORY_READONLY_USERNAME: the read-only database user that connects to the repository database.
-
XDM_REPOSITORY_READONLY_PASSWORD: the read-only database user password.
You must modify the above values (except XDM_REPOSITORY_DRIVER).
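The XDM_REPOSITORY_URL value is a standard PostgreSQL JDBC URL built from the writer instance's private IP, port, and database name. A minimal sketch, assuming a hypothetical private IP of 10.0.0.5:

```shell
# Assemble the JDBC URL from its components (placeholder values)
DB_HOST="10.0.0.5"            # writer instance private IP (hypothetical)
DB_PORT="5432"                # default PostgreSQL port
DB_NAME="semarchy_repository"
XDM_REPOSITORY_URL="jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "${XDM_REPOSITORY_URL}"
```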
-
Download the sample manifest file for the ConfigMap.
-
Edit the file and save your modifications.
-
Run the following command from the folder containing your manifest file:
$ kubectl apply -f <configmap_file>.yaml
Command output:
configmap/semarchy-config created
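For reference, a minimal ConfigMap equivalent to the sample manifest might look like the following. This is an illustrative sketch: the ConfigMap name matches the output above and the driver class is the standard PostgreSQL one, but all other values are placeholders to replace with your own:

```shell
# Apply a ConfigMap named semarchy-config in the default namespace (sketch)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: semarchy-config
data:
  SEMARCHY_SETUP_TOKEN: "replace-with-your-setup-token"
  XDM_REPOSITORY_DRIVER: "org.postgresql.Driver"
  XDM_REPOSITORY_URL: "jdbc:postgresql://10.0.0.5:5432/semarchy_repository"
  XDM_REPOSITORY_USERNAME: "semarchy_repository_username"
  XDM_REPOSITORY_PASSWORD: "replace-with-password"
  XDM_REPOSITORY_READONLY_USERNAME: "semarchy_repository_ro_username"
  XDM_REPOSITORY_READONLY_PASSWORD: "replace-with-password"
EOF
```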
Step 6. Execute the SQL init script
At this step, you need to configure the database schemas required to create the Semarchy repository. As the Cloud SQL cluster is not accessible from the internet by default, use a Kubernetes pod to access it and run the SQL initialization script. If necessary, you can configure a public IP to make the instance available outside of the VPC.
-
Download the sample manifest file for the disposable pod (based on a Debian image). This pod will be on the same virtual network as the database instance(s) and will be able to access it.
-
Run the following command to deploy a disposable pod:
$ kubectl apply -f <disposable_pod_file>.yaml
Command output:
pod/semarchy-disposable-pod created
-
Run the following command until the pod is started (Status: Running):
$ kubectl get pod semarchy-disposable-pod
Command output:
NAME                     READY  STATUS   RESTARTS  AGE
semarchy-disposable-pod  1/1    Running  0         16s
The pod can take about 10 to 20 seconds to start running.
-
Download the SQL script and edit it to match the values you have set in the ConfigMap.
-
semarchy_repository: database used for the repository.
-
semarchy_repository_username: database username to connect to the repository database.
-
semarchy_repository_password: database user password.
-
semarchy_repository_ro_username: database read-only user to connect to the repository database.
-
semarchy_repository_ro_password: database read-only user password.
-
Save the file and run the following command to copy the script file to the disposable pod's tmp folder:
$ kubectl cp init-db.sql semarchy-disposable-pod:/tmp
-
Run the following command to open a bash session on the disposable pod:
$ kubectl exec -it semarchy-disposable-pod -- bash
-
Install curl, which you will use later:
$ apt-get update && apt-get install -y curl
-
Run the following command to go to the tmp folder:
$ cd /tmp
-
Run the following command to connect to the Database writer instance and execute the initialization script:
$ psql --host "<writer_instance_endpoint>" --username "<db_cluster_username>" --dbname "semarchy_repository" < init-db.sql
Command output:
Password for user <db_cluster_username>:
-
Enter the database cluster master password (<db_cluster_password>) and press Enter:
Command output:
CREATE SCHEMA
GRANT
ALTER DEFAULT PRIVILEGES
ALTER DATABASE
CREATE EXTENSION
CREATE EXTENSION
CREATE ROLE
GRANT ROLE
CREATE SCHEMA
CREATE ROLE
GRANT
ALTER ROLE
GRANT
-
Run the following command to connect to the database writer instance with the Semarchy repository user (XDM_REPOSITORY_USERNAME) created by the initialization script:
$ psql --host "<writer_instance_endpoint>" --username "<xdm_repository_username>" --dbname "semarchy_repository"
Command output:
Password for user <xdm_repository_username>:
-
Enter the repository user password (<xdm_repository_password>) and press Enter. -
Run the following command to list the existing databases:
$ \l
Command output:
semarchy_repository=> \l
List of databases
Name                | Owner    | Encoding | Collate     | Ctype       | Access privileges
--------------------+----------+----------+-------------+-------------+----------------------
postgres            | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
xxxxxxxx            | xxxxxxxx | UTF8     | en_US.UTF-8 | en_US.UTF-8 | xxxxxx=CTc/xxxxxx
semarchy_repository | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
…
-
Exit psql with the following command:
$ exit
-
Exit the disposable pod with the following command. You will reuse this pod later to confirm the deployment completion:
$ exit
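For reference, the statements echoed when running the initialization script typically correspond to SQL of the following shape. This is an illustrative sketch only, not the actual downloaded script; the role and schema names are the placeholders used earlier:

```shell
# Sketch of the kind of statements init-db.sql runs (names and passwords are placeholders)
psql --host "<writer_instance_endpoint>" --username "<db_cluster_username>" \
     --dbname "semarchy_repository" <<'SQL'
-- Dedicated read-write role and schema for the repository
CREATE ROLE semarchy_repository_username LOGIN PASSWORD 'replace-with-password';
CREATE SCHEMA AUTHORIZATION semarchy_repository_username;
-- Read-only role for dashboards and BI tools
CREATE ROLE semarchy_repository_ro_username LOGIN PASSWORD 'replace-with-password';
GRANT USAGE ON SCHEMA semarchy_repository_username TO semarchy_repository_ro_username;
ALTER DEFAULT PRIVILEGES IN SCHEMA semarchy_repository_username
    GRANT SELECT ON TABLES TO semarchy_repository_ro_username;
SQL
```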
Step 7. Deploy the active pod
Once the database instance and GKE cluster are running, you can deploy the Docker image of the application server.
You need to start by deploying the active node with a unique pod:
-
Download the sample manifest file for the application server active node.
This file defines the deployment of the application server active node and the service that exposes the application. You can edit the content of the file to match your specific requirements. For more information, see the Kubernetes documentation. -
Run the following command from the folder containing the manifest file to deploy the active pod:
$ kubectl apply -f <appserver_active_file>.yaml
Command output:
deployment.apps/semarchy-appserver-active created
service/semarchy-appserver-active created
-
Run the following command to check the deployment progress until the status becomes Ready (this can take a few minutes):
$ kubectl get deployments
Command output:
NAME                       READY  UP-TO-DATE  AVAILABLE  AGE
semarchy-appserver-active  1/1    1           1          13m
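Instead of repeatedly running kubectl get deployments, you can block until the rollout completes (a convenience sketch, assuming the deployment name from the output above):

```shell
# Wait up to 5 minutes for the active deployment to become Available
kubectl wait deployment/semarchy-appserver-active \
    --for=condition=Available --timeout=300s
```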
Step 8. Deploy the passive pods
At this step, deploy two instances of the passive application server image for a high availability configuration.
-
Download the sample manifest file for the application server passive node. This file defines the deployment of the application server passive node and the service that exposes the application. You can edit the content of the file to match your specific requirements. For more information, see the Kubernetes documentation.
-
Run the following command from the folder containing the manifest file:
$ kubectl apply -f <appserver_passive_file>.yaml
Command output:
deployment.apps/semarchy-appserver-passive created
service/semarchy-appserver-passive created
-
Execute the following command to ensure that the passive nodes are deployed and ready:
$ kubectl get deployments
Command output:
NAME                        READY  UP-TO-DATE  AVAILABLE  AGE
semarchy-appserver-active   1/1    1           1          13m
semarchy-appserver-passive  2/2    2           2          12m
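If you later need more (or fewer) passive instances, the replica count can be adjusted without editing the manifest (a sketch, assuming the deployment name from the output above):

```shell
# Scale the passive deployment to three replicas
kubectl scale deployment/semarchy-appserver-passive --replicas=3
```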
Step 9. Configure the load balancer
Finally, you need to expose your Kubernetes pods on your network.
Set up a load balancer using sticky sessions to route users to the active and passive pods:
-
Download the sample Ingress manifest file and edit it to match your requirements.
This file deploys an Ingress resource and configures it to use the sticky sessions for the passive instances.
For more information, see the Kubernetes documentation. -
Run the following command to apply the configuration:
$ kubectl apply -f <ingress_file>.yaml
Command output:
ingress.networking.k8s.io/ingress created
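With the NGINX Ingress Controller, sticky sessions are enabled through annotations on the Ingress resource. The sample manifest likely carries these already; for illustration, they could also be set from the command line. The annotation names are the standard ingress-nginx ones; the Ingress name matches the output above, and the cookie name is a placeholder:

```shell
# Enable cookie-based session affinity on the Ingress resource (illustrative values)
kubectl annotate ingress ingress \
    nginx.ingress.kubernetes.io/affinity="cookie" \
    nginx.ingress.kubernetes.io/session-cookie-name="semarchy-session"
```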
Step 10. Check the platform connection
At this step, you have deployed all the required resources to run Semarchy xDM on GCP with Kubernetes. You need the <cluster_ip> you retrieved at Step 4.
By default, the pods are not exposed to the internet. Hence, you have to use the disposable pod to check the platform connection with the active and passive nodes:
-
Run the following command to confirm that the load balancer is routing to the active instance:
$ kubectl exec -it semarchy-disposable-pod -- curl -v --resolve semarchy-appserver-active:80:<cluster_ip> semarchy-appserver-active:80/semarchy/api/rest/probes/started
Command output:
* Added semarchy-appserver-active:80:<cluster_ip> to DNS cache
* Hostname semarchy-appserver-active was found in DNS cache
* Trying <cluster_ip>:80...
* Connected to semarchy-appserver-active (<cluster_ip>) port 80 (#0)
> GET /semarchy/api/rest/probes/started HTTP/1.1
> Host: semarchy-appserver-active
> User-Agent: curl/7.80.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 204 …
This command executes a curl command on the disposable pod to query the REST API probe endpoint. -
Run the following command to confirm that the load balancer is routing to the passive instances:
$ kubectl exec -it semarchy-disposable-pod -- curl -v --resolve semarchy-appserver-passive:80:<cluster_ip> semarchy-appserver-passive:80/semarchy/api/rest/probes/started
Command output:
* Added semarchy-appserver-passive:80:<cluster_ip> to DNS cache
* Hostname semarchy-appserver-passive was found in DNS cache
* Trying <cluster_ip>:80...
* Connected to semarchy-appserver-passive (<cluster_ip>) port 80 (#0)
> GET /semarchy/api/rest/probes/started HTTP/1.1
> Host: semarchy-appserver-passive
> User-Agent: curl/7.80.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 204 …
-
Delete the disposable pod with the following command:
$ kubectl delete pod semarchy-disposable-pod
Command output:
pod "semarchy-disposable-pod" deleted