There may be simpler options for installing ElasticSearch on Docker. However, here we will choose an approach that can easily be expanded for production use: the installation of ElasticSearch on Kubernetes via Helm charts.
That’s where Helm Charts come into play. Helm is an application package manager for Kubernetes, which coordinates the download, installation, and deployment of applications. It does so using standardized templates called Charts: a Chart is a Helm package that contains all of the resource definitions necessary to run an application or tool, and it is used to define, install, and upgrade applications at any level of complexity. In other words, Helm Charts are the way we can define an application as a collection of related Kubernetes resources.
While installing ElasticSearch using Helm implements best-practice rules that make the solution fit for production, the resource needs of such a solution are tremendous. What if we want to start on a small system with 2 vCPUs and 8 GB RAM? In that case, we need to scale down the solution, thus sacrificing the production-readiness. This is what we will do here: instead of creating a production-ready deployment with three masters, two data nodes, and two clients, we will create a scaled-down version of the production solution with only one node of each type. It is clear that this scaled-down solution is not designed for production use. However, once your Kubernetes infrastructure is large enough, it is simple to remove the command-line options and to follow the defaults that are fit for production use.
Contents
- Step 3: Check the Solution
- Appendix: Troubleshooting Helm
- The Helm charts can be found in the GIT Repo: https://github.com/helm/charts.
- Installation scripts used in this article can be found on our own GIT repo: https://github.com/oveits/kubernetes-install-elasticsearch-via-helm.git
- Tested with: 2 vCPUs, 8 GB RAM, 40 GB + 20 GB external disk (however, less than 8 GB of disk is sufficient for Kubernetes, and the ElasticSearch database will need less than 50 MB in this minimalistic hello-world test). Make sure, however, that 8 GB RAM are available: you can expect the installation to fail if, say, only 4 GB RAM is available.
We are following https://github.com/helm/charts/tree/master/stable/elasticsearch:
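The following is only a sketch of how such a scaled-down installation could be prepared. The repository URL is the current mirror of the former stable repo, and the value keys (master/data/client replicas, persistence.storageClass, cluster.env.MINIMUM_MASTER_NODES) follow the stable/elasticsearch chart; they may differ between chart versions, so verify them with `helm inspect values stable/elasticsearch`:

```bash
# Add the (now archived) stable chart repository and refresh the local cache:
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Storage class used for the persistent volumes; adapt to your cluster
# (see 'kubectl get storageclass') or drop the storageClass lines below:
STORAGE_CLASS=default

# Scaled-down values file: one master, one data node, one client:
cat <<EOF > elasticsearch-values.yaml
cluster:
  env:
    MINIMUM_MASTER_NODES: "1"      # single master instead of the default 2
master:
  replicas: 1
  persistence:
    storageClass: "${STORAGE_CLASS}"
data:
  replicas: 1
  persistence:
    storageClass: "${STORAGE_CLASS}"
client:
  replicas: 1
EOF
```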
You most probably will need to adapt the STORAGE_CLASS name to your situation or remove the corresponding storageClass option altogether.
Now let us apply the file:
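A possible install command, assuming the values file and release name from the sketch above (Helm 2 syntax; Helm 3 takes the release name as a positional argument instead of --name):

```bash
helm install --name elasticsearch -f elasticsearch-values.yaml stable/elasticsearch
```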
Step 3.1: Check the Volumes
If everything goes right, you will see that the persistent volumes are bound:
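For example:

```bash
kubectl get pv
kubectl get pvc
# the STATUS column of the persistent volume claims should show 'Bound'
```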
Step 3.2: Check the PODs
If so, the PODs should also come up after two or three minutes:
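For example (the pod names depend on the release name):

```bash
kubectl get pods
# all elasticsearch-* pods should reach STATUS 'Running' and READY '1/1'
```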
Step 3.3: Check the Health
Let us check the health of the installation via the cluster health URL. My first test revealed a red status:
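A health check could look like this; the service name elasticsearch-client is an assumption that depends on your release name:

```bash
# Determine the cluster IP of the ElasticSearch client service:
CLUSTER_IP=$(kubectl get svc elasticsearch-client -o jsonpath='{.spec.clusterIP}')
# Query the cluster health endpoint:
curl "http://${CLUSTER_IP}:9200/_cluster/health?pretty"
```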
However, once I got the MIN_REPLICAS value right, the system showed a yellow state:
Note: the documentation says: "yellow means that the primary shard is allocated but replicas are not." Since we have installed a single-node cluster with no replicas, yellow is the best status value we can expect.
Now we can test whether the ElasticSearch server is functional by adding some data:
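For example, re-using the CLUSTER_IP variable from the health check above; the index name mytest and the document body are arbitrary examples:

```bash
curl -X POST "http://${CLUSTER_IP}:9200/mytest/doc" \
     -H 'Content-Type: application/json' \
     -d '{"message": "hello elasticsearch"}'
```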
We can read all available entries with the following command:
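Again with the example index name mytest from above:

```bash
curl "http://${CLUSTER_IP}:9200/mytest/_search?pretty"
```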
Now let us delete the installation:
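Assuming the release name elasticsearch from above (Helm 2; with Helm 3 the equivalent is helm uninstall):

```bash
helm del elasticsearch
```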
Note: you can add a --purge option if you wish to re-use the same RELEASE name again. However, it is better to increase the version number for the next release.
Also, note that the helm del command does not touch the persistent volumes. Therefore, you might need to delete the persistent volume claims to move the corresponding persistent volumes from the Bound state to the Released state. Here is an example of persistent volume claims that are still bound, and another set that has been released with kubectl delete commands:
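The claim names below are only examples; use the names reported by kubectl get pvc:

```bash
# List the claims left behind by the chart:
kubectl get pvc
# Delete them to release the corresponding persistent volumes:
kubectl delete pvc data-elasticsearch-data-0 data-elasticsearch-master-0
```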
To move all persistent volumes of the node from "Released" to "Available" state, you can apply the shell script named kubernetes-retain-released-volumes found on our GIT repo. It will wipe all data from "Released" volumes. Use at your own risk.
Appendix: Troubleshooting Helm
If you want to know which objects a helm install will create, you can perform the following steps:
Step A.1: Download or Clone the Helm Chart
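Either of the following ways works; helm fetch is the Helm 2 command (Helm 3 calls it helm pull):

```bash
# Option 1: clone the whole charts repository:
git clone https://github.com/helm/charts.git
cd charts/stable

# Option 2: download and unpack just the elasticsearch chart:
helm fetch stable/elasticsearch --untar
```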
Step A.2: Create Templating Results
Now we can render all YAML files that helm would create during an installation:
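A sketch with Helm 2 syntax (Helm 3 drops the --name flag), assuming the chart has been unpacked to ./elasticsearch and the values file from the installation step is re-used:

```bash
helm template --name elasticsearch -f elasticsearch-values.yaml ./elasticsearch > elasticsearch-templated.yaml
```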
Now the YAML file can be reviewed. Also, you can install the YAML files step by step by applying them as follows:
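For example:

```bash
kubectl apply -f elasticsearch-templated.yaml
# alternatively, split the file and apply the individual manifests one by one
```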
The status of the created objects can be inspected with a describe command:
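For example, for a StatefulSet created by the chart (the name elasticsearch-data is an assumption):

```bash
kubectl describe statefulset elasticsearch-data
```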
Here, the kind and name can be read from the YAML file or can be taken from the "created" message after the apply command.
The corresponding objects can be deleted again with:
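For example:

```bash
kubectl delete -f elasticsearch-templated.yaml
```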
In our most recent blog post, we have shown how to install ElasticSearch via Helm. This time, we will add a Kibana server (via Helm again), so we can visualize the data stored in the ElasticSearch database.
Compared to the installation of ElasticSearch, the installation of Kibana is quite simple. The default stable Helm chart for Kibana is just a single-node installation, and the resource needs are not too high.
Contents
- ElasticSearch is installed and the ElasticSearch URL is accessible. You may want to follow the ElasticSearch installation instructions in our previous article.
In case the ElasticSearch server has not been installed via Helm by following our previous article, it might be necessary to define the ElasticSearch URL manually, similar to the following:
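A sketch only; the host name below assumes an ElasticSearch client service named elasticsearch-client in the default namespace and has to be adapted to your environment:

```bash
export ELASTICSEARCH_URL=http://elasticsearch-client.default.svc.cluster.local:9200
```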
For installing Kibana, we need to tell Kibana where to find the ElasticSearch server and which external port to use:
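A possible install command, assuming the stable/kibana chart reads the ElasticSearch endpoint from the env.ELASTICSEARCH_URL value and the service port from service.externalPort; verify the exact keys for your chart version with `helm inspect values stable/kibana`:

```bash
helm install --name kibana stable/kibana \
  --set env.ELASTICSEARCH_URL=${ELASTICSEARCH_URL} \
  --set service.externalPort=5601
```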
Here we have chosen to overwrite the default externalPort 443 with 5601, since we do not plan to install TLS.
The output should look similar to the following:
Shortly after the installation, the Kibana POD should come up:
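For example (the pod name depends on the release name):

```bash
kubectl get pods | grep kibana
# the pod should show STATUS 'Running' and READY '1/1'
```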
We read the cluster IP address from the output of the following command:
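Assuming the release name kibana from above, the service is listed with:

```bash
kubectl get svc kibana
```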
Let us access the service locally via the CLUSTER-IP and port 5601:
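For example, reading the cluster IP into a shell variable first (service name as above):

```bash
KIBANA_IP=$(kubectl get svc kibana -o jsonpath='{.spec.clusterIP}')
curl "http://${KIBANA_IP}:5601"
```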
We can see that we have reached the Kibana server and have received a redirect message pointing to /app/kibana. We can follow the redirect with the -L option:
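Re-using the KIBANA_IP variable from above:

```bash
curl -L "http://${KIBANA_IP}:5601"
```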
There are several options for accessing the service remotely. For a quick test, and if the Kubernetes worker node is located in a secured network, you can follow the instructions that were printed after the helm install command and expose the service as follows:
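A sketch of such a port-forwarding command; the label selector app=kibana is an assumption and may differ for your chart version:

```bash
POD_NAME=$(kubectl get pods -l app=kibana -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward ${POD_NAME} 5601:5601
```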
This will bind the address 127.0.0.1 port 5601 to the POD. However, we want to access the system from outside so we have to add the address 0.0.0.0:
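For example (the --address flag requires a reasonably recent kubectl):

```bash
kubectl port-forward --address 0.0.0.0 ${POD_NAME} 5601:5601
```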
Security Note: we recommend performing these steps only if your server is shielded from the Internet.
The output of the commands is:
Note: the input on the command line is blocked until we stop the port-forwarding with <Ctrl>-C.
As long as the forwarding is active, the service can be accessed:
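For example, from a remote machine (replace the placeholder with the public IP of the Kubernetes node):

```bash
curl http://<node-ip>:5601
```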
For permanent access, other means of access need to be established. In a simple situation with a single-node Kubernetes "cluster", the (IMO) simplest way is to change the service type from ClusterIP to NodePort:
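For example, assuming the service created by the release is named kibana:

```bash
kubectl edit svc kibana
```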
The editor will open, and we can adapt the relevant lines, similar to what is shown below:
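An illustrative excerpt of how the edited service manifest could look; the port name and exact structure depend on the chart version:

```yaml
spec:
  ports:
  - name: kibana
    nodePort: 30000      # must be within the NodePort range 30000-32767
    port: 5601
    protocol: TCP
    targetPort: 5601
  type: NodePort         # changed from ClusterIP
```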
With that, the service can be accessed permanently on port 30000 of the IP address of the Kubernetes node:
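For example:

```bash
curl http://<node-ip>:30000
```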
Note that NodePorts need to be in the range 30000-32767. If you want to use standard ports like 5601, check out this blog post for an example of how to expose services on HTTP or HTTPS via an NginX-based Ingress solution, which can also handle multiple services on a Kubernetes cluster.
In this article, we have learned how easy it is to deploy a Kibana server on Kubernetes via Helm. We just had to adapt the ElasticSearch URL and the external port. We have also shown how to access the service via Kubernetes port-forwarding (a temporary solution) or via a simple NodePort configuration with a high TCP port beyond 30000 (a permanent solution). Better, NginX-based solutions are found here.