Install Percona Server for MongoDB on Amazon Elastic Kubernetes Service (EKS)
This guide shows you how to deploy Percona Operator for MongoDB on Amazon Elastic Kubernetes Service (EKS). The document assumes some experience with the platform. For more information on EKS, see the Amazon EKS official documentation.
Prerequisites
The following tools are used in this guide and therefore should be preinstalled:
- AWS Command Line Interface (AWS CLI) for interacting with the different parts of AWS. You can install it following the official installation instructions for your system.
- eksctl to simplify cluster creation on EKS. You can install it following its installation notes on GitHub.
- kubectl to manage and deploy applications on Kubernetes. Install it following the official installation instructions.

Also, you need to configure AWS CLI with your credentials according to the official guide.
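If you want to confirm that the tools are installed and reachable from your shell, you can check their versions (an optional sanity check; the exact version numbers in the output will vary):

$ aws --version
$ eksctl version
$ kubectl version --client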
Create the EKS cluster
- To create your cluster, you will need the following data:
  - the name of your EKS cluster,
  - the AWS region in which you wish to deploy your cluster,
  - the number of nodes you would like to have,
  - the desired ratio between on-demand and spot instances in the total number of nodes.

  Note

  Spot instances are not recommended for production environments, but may be useful, for example, for testing purposes.

  After you have decided on all the needed details, create your EKS cluster following the official cluster creation instructions (a minimal example command is sketched below).
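For instance, a basic cluster with three on-demand nodes could be created with a single eksctl command similar to the following (a minimal sketch; the node type and node counts are illustrative values to adjust to your needs, and spot capacity, if desired, is typically added as a separate managed node group):

# example values - adjust the cluster name, region, instance type, and node counts
$ eksctl create cluster \
    --name <cluster name> \
    --region <region> \
    --nodegroup-name standard-workers \
    --node-type m5.xlarge \
    --nodes 3 \
    --nodes-min 3 \
    --nodes-max 5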
- After you have created the EKS cluster, you also need to install the Amazon EBS CSI driver on your cluster. See the official documentation on adding it as an Amazon EKS add-on.
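For example, with eksctl the add-on can be created as follows (a sketch; it assumes the cluster already has an IAM OIDC provider and an IAM role for the driver, as described in the AWS documentation, and the placeholders are to be replaced with your real values):

# <role ARN> is the IAM role prepared for the EBS CSI driver service account
$ eksctl create addon --name aws-ebs-csi-driver --cluster <cluster name> --region <region> --service-account-role-arn <role ARN>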
Install the Operator and deploy your MongoDB cluster
- Deploy the Operator. By default, the deployment is done in the default namespace. If that's not the desired one, you can create a new namespace and/or set the context for the namespace as follows (replace the <namespace name> placeholder with some descriptive name):

$ kubectl create namespace <namespace name>
$ kubectl config set-context $(kubectl config current-context) --namespace=<namespace name>

On success, you will see a message that namespace/<namespace name> was created and the context was modified.

Deploy the Operator by applying the deploy/bundle.yaml manifest from the Operator source tree. You can apply it without downloading, using the following command:
$ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.19.1/deploy/bundle.yaml
Expected output
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
serviceaccount/percona-server-mongodb-operator serverside-applied
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
deployment.apps/percona-server-mongodb-operator serverside-applied
Alternatively, if your EKS cluster nodes use the ARM64 architecture, clone the repository with all manifests and source code by executing the following command:
$ git clone -b v1.19.1 https://github.com/percona/percona-server-mongodb-operator
Edit the deploy/bundle.yaml file: add the following affinity rules to the spec part of the percona-server-mongodb-operator Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-server-mongodb-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: percona-server-mongodb-operator
  template:
    metadata:
      labels:
        name: percona-server-mongodb-operator
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
After editing, apply your modified deploy/bundle.yaml file as follows:

$ kubectl apply --server-side -f deploy/bundle.yaml
Expected output
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
serviceaccount/percona-server-mongodb-operator serverside-applied
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
deployment.apps/percona-server-mongodb-operator serverside-applied
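Either way, you can optionally verify that the Operator Pod has reached the Running state before proceeding (a quick check; the label value comes from the Deployment manifest shown above):

$ kubectl get pods -l name=percona-server-mongodb-operator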
- The Operator has been started, and you can deploy your MongoDB cluster:
$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.19.1/deploy/cr.yaml
Expected output
perconaservermongodb.psmdb.percona.com/my-cluster-name created
Note

This deploys the default MongoDB cluster configuration: three mongod, three mongos, and three config server instances. Please see deploy/cr.yaml and Custom Resource Options for the configuration options (a small example follows below). You can clone the repository with all manifests and source code by executing the following command:
$ git clone -b v1.19.1 https://github.com/percona/percona-server-mongodb-operator
After editing the needed options, apply your modified deploy/cr.yaml file as follows:

$ kubectl apply -f deploy/cr.yaml
If your cluster nodes use the ARM64 architecture, edit the deploy/cr.yaml file: set the following affinity rules in all affinity subsections:

....
affinity:
  advanced:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - arm64
Also, set the image and backup.image Custom Resource options to the special multi-architecture image versions by adding a -multi suffix to their tags:

...
image: percona/percona-server-mongodb:7.0.14-8-multi
...
backup:
  ...
  image: percona/percona-backup-mongodb:2.8.0-multi
Please note that monitoring with PMM is currently not supported on ARM64 configurations.
After editing, apply your modified deploy/cr.yaml file as follows:

$ kubectl apply -f deploy/cr.yaml
Expected output
perconaservermongodb.psmdb.percona.com/my-cluster-name created
The creation process may take some time. When the process is over, your cluster will obtain the ready status. You can check it with the following command:

$ kubectl get psmdb
Expected output
NAME              ENDPOINT                                           STATUS   AGE
my-cluster-name   my-cluster-name-mongos.default.svc.cluster.local   ready    5m26s
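If you prefer to block until the cluster is ready instead of polling, a kubectl wait command similar to the following may work (a sketch; it assumes the cluster status is exposed in the .status.state field of the psmdb Custom Resource):

# waits up to 10 minutes for the Custom Resource to report the ready state
$ kubectl wait --for=jsonpath='{.status.state}'=ready psmdb/my-cluster-name --timeout=600s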
Verifying the cluster operation
It may take ten minutes to get the cluster started. When the kubectl get psmdb command finally shows the cluster status as ready, you can try to connect to the cluster.
To connect to Percona Server for MongoDB you need to construct the MongoDB connection URI string. It includes the credentials of the admin user, which are stored in the Secrets object.
- List the Secrets objects:

$ kubectl get secrets -n <namespace>

The Secrets object you are interested in has the my-cluster-name-secrets name by default.

- View the Secret contents to retrieve the admin user credentials:

$ kubectl get secret my-cluster-name-secrets -o yaml

The command returns the YAML file with generated Secrets, including the MONGODB_DATABASE_ADMIN_USER and MONGODB_DATABASE_ADMIN_PASSWORD strings, which should look as follows:

Sample output
...
data:
  ...
  MONGODB_DATABASE_ADMIN_PASSWORD: aDAzQ0pCY3NSWEZ2ZUIzS1I=
  MONGODB_DATABASE_ADMIN_USER: ZGF0YWJhc2VBZG1pbg==
The actual login name and password in the output are base64-encoded. To bring them back to a human-readable form, run the following commands, substituting the encoded values from your Secret for the placeholders:

$ echo 'MONGODB_DATABASE_ADMIN_USER' | base64 --decode
$ echo 'MONGODB_DATABASE_ADMIN_PASSWORD' | base64 --decode
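Alternatively, you can extract and decode a single value in one step using kubectl's jsonpath output (a sketch; the Secret name is the default one mentioned above):

$ kubectl get secret my-cluster-name-secrets -o jsonpath='{.data.MONGODB_DATABASE_ADMIN_USER}' | base64 --decode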
- Run a container with a MongoDB client and connect its console output to your terminal. The following command does this, naming the new Pod percona-client:

$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:7.0.14-8 --restart=Never -- bash -il
Executing it may require some time to deploy the corresponding Pod.
- Now run the mongosh tool inside the percona-client command shell, using the admin user credentials you obtained from the Secret and a proper namespace name instead of the <namespace name> placeholder. The command will look different depending on whether sharding is on (the default behavior) or off:

With sharding on:

$ mongosh "mongodb://databaseAdmin:databaseAdminPassword@my-cluster-name-mongos.<namespace name>.svc.cluster.local/admin?ssl=false"

With sharding off:

$ mongosh "mongodb+srv://databaseAdmin:databaseAdminPassword@my-cluster-name-rs0.<namespace name>.svc.cluster.local/admin?replicaSet=rs0&ssl=false"
Note
If you are using MongoDB versions earlier than 6.x (such as 5.0.29-25 instead of the default 7.0.14-8 variant), substitute the mongosh command with mongo in the above examples.
Troubleshooting
If the kubectl get psmdb command doesn't show the ready status for too long, you can
check the creation process with the kubectl get pods command:
$ kubectl get pods
Expected output
NAME READY STATUS RESTARTS AGE
my-cluster-name-cfg-0 2/2 Running 0 11m
my-cluster-name-cfg-1 2/2 Running 1 10m
my-cluster-name-cfg-2 2/2 Running 1 9m
my-cluster-name-mongos-0 1/1 Running 0 11m
my-cluster-name-mongos-1 1/1 Running 0 11m
my-cluster-name-mongos-2 1/1 Running 0 11m
my-cluster-name-rs0-0 2/2 Running 0 11m
my-cluster-name-rs0-1 2/2 Running 0 10m
my-cluster-name-rs0-2 2/2 Running 0 9m
percona-server-mongodb-operator-665cd69f9b-xg5dl 1/1 Running 0 37m
If the command output shows some errors, you can examine the problematic
Pod with the kubectl describe pod <pod name> command as follows:
$ kubectl describe pod my-cluster-name-rs0-2
Review the detailed information for Warning
statements and then correct the
configuration. An example of a warning is as follows:
Warning FailedScheduling 68s (x4 over 2m22s) default-scheduler 0/1 nodes are available: 1 node(s) didn’t match pod affinity/anti-affinity, 1 node(s) didn’t satisfy existing pods anti-affinity rules.
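You can also check the Operator logs for errors reported while reconciling the cluster (a sketch; the Deployment name matches the bundle manifest applied earlier):

$ kubectl logs deployment/percona-server-mongodb-operator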
Removing the EKS cluster
To delete your cluster, you will need the following data:
- the name of your EKS cluster,
- the AWS region in which you have deployed your cluster.
You can clean up the cluster with the eksctl
command as follows (with
real names instead of <region>
and <cluster name>
placeholders):
$ eksctl delete cluster --region=<region> --name="<cluster name>"
The cluster deletion may take time.
Warning
After deleting the cluster, all data stored in it will be lost!