
Migrate your OpenShift logging stack from Elasticsearch to Loki

To use the latest features of logging for Red Hat OpenShift 6.0, you must migrate from Elasticsearch to Loki. This article is a guide for users to test these changes in their development and test environments and develop a plan for implementing these changes in production.

Loki is a horizontally scalable, highly available, multitenant log aggregation system offered as a general availability (GA) log store for logging for Red Hat OpenShift. It can be visualized with the Red Hat OpenShift observability UI. The Loki configuration provided by OpenShift logging is a short-term log store designed to help users perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.

Why migrate to Loki?

Our experience is that Loki is highly performant, which we attribute to Loki indexing log labels rather than full log lines. We also prefer how Loki allows multiple tenants to use a single Loki instance, which reduces complexity and uses fewer compute resources. In addition, Elasticsearch and Kibana are deprecated in logging 5.x versions.

Migrate the default log store to Loki

The following describes how to migrate the OpenShift logging storage service from Elasticsearch to LokiStack. This article includes steps to switch forwarding logs from Elasticsearch to LokiStack. It does not include any steps for migrating data between the two. It aims to ensure both log storage stacks run in parallel until the informed user can confidently shut down Elasticsearch.

In summary, after applying the steps:

  • The old logs will still be served by Elasticsearch and visible only through Kibana.
  • The new logs will be served by LokiStack and visible through the OpenShift console logs pages (for example, Admin → Observe → Logs).

Prerequisites

  • Installed logging for Red Hat OpenShift operator (current stable at the time of writing: v5.5.5).
  • Installed OpenShift Elasticsearch operator (current stable at the time of writing: v5.5.5).
  • Installed Loki operator provided by Red Hat (current stable at the time of writing: v5.5.5).
  • Ensure sufficient resources on the target nodes for running Elasticsearch and LokiStack in parallel (consider the LokiStack deployment sizing table).
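
To double-check the installed operator versions before starting, a quick look at the ClusterServiceVersions is usually enough. This is a minimal sketch; the exact CSV names depend on the catalog and channel you installed from:

# List the logging-related operators and their versions across all namespaces.
oc get csv -A | grep -E 'cluster-logging|elasticsearch-operator|loki-operator'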

Current stack

Note: If Fluentd is the collector type, consider reading the Red Hat Knowledgebase article Migrating the log collector from Fluentd to Vector reducing the number of logs duplicated in RHOCP 4.

Assume your current stack looks like the following block, which represents a fully managed OpenShift logging stack with logStore: Elasticsearch and Kibana, including collection, forwarding, storage, and visualization. Disclaimer: The stack might vary regarding the resources, nodes, tolerations, selectors, collector type, and back-end storage used.

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2
        size: 80Gi
      resources:
        requests:
          memory: 16Gi
        limits:
          memory: 16Gi
      redundancyPolicy: "SingleRedundancy"
    retentionPolicy:
      application:
        maxAge: 24h
      audit:
        maxAge: 24h
      infra:
        maxAge: 24h
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection: [...]

Bonus: Using ClusterLogForwarder to forward audit logs

If you are using the Forwarding audit logs to the log store guide to forward audit logs to the default store, you do not need to change anything on the ClusterLogForwarder resource. The collector pods will be reconfigured to forward new audit logs to LokiStack as well. For reference, such a resource is sketched below.
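
A ClusterLogForwarder that sends audit logs (together with application and infrastructure logs) to the default store typically looks like the following sketch; the pipeline name is illustrative, and your existing resource can stay exactly as it is:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default   # Illustrative name; keep whatever your pipeline is called.
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - default              # "default" resolves to the log store managed by ClusterLogging.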

Install LokiStack only

Following the guide Deploying LokiStack, apply only the next two steps. The example below is based on the documented procedure; for more details and options, review the documentation.

Step 1: Create the S3 secret

For this example, the secret created will be for AWS S3, but review the fields needed for other kinds of object storage in the documentation section Loki object storage:

cat << EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
data:
  access_key_id: $(echo -n "PUT_S3_ACCESS_KEY_ID_HERE" | base64 -w0)
  access_key_secret: $(echo -n "PUT_S3_ACCESS_KEY_SECRET_HERE" | base64 -w0)
  bucketnames: $(echo -n "s3-bucket-name" | base64 -w0)
  endpoint: $(echo -n "https://s3.eu-central-1.amazonaws.com" | base64 -w0)
  region: $(echo -n "eu-central-1" | base64 -w0)
EOF
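
Alternatively, the same secret can be created without hand-rolling the base64 encoding by letting oc do the encoding. This is a minimal equivalent sketch, assuming the same placeholder values as above:

oc -n openshift-logging create secret generic logging-loki-s3 \
  --from-literal=access_key_id="PUT_S3_ACCESS_KEY_ID_HERE" \
  --from-literal=access_key_secret="PUT_S3_ACCESS_KEY_SECRET_HERE" \
  --from-literal=bucketnames="s3-bucket-name" \
  --from-literal=endpoint="https://s3.eu-central-1.amazonaws.com" \
  --from-literal=region="eu-central-1"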

Step 2: Deploy LokiStack CR

Deploy the LokiStack Custom Resource (CR), changing the spec.size as needed:

cat << EOF | oc create -f -
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp2
  tenants:
    mode: openshift-logging
EOF
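
Before moving on, it is worth confirming that the LokiStack components come up healthy. A quick check could look like the following; the pod label shown is typical for the Loki operator but may vary between versions:

# The LokiStack CR should eventually report Ready in its status conditions.
oc -n openshift-logging get lokistack logging-loki
# Distributor, ingester, querier, gateway, and related pods should be Running.
oc -n openshift-logging get pods -l app.kubernetes.io/instance=logging-loki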

Disconnect Elasticsearch and Kibana CRs from ClusterLogging

To ensure Elasticsearch and Kibana continue to run on the cluster while you switch ClusterLogging from them to LokiStack/OpenShift console, you need to disconnect the custom resources from being owned by ClusterLogging.

Step 1: Temporarily set ClusterLogging to the Unmanaged state

Enter:

oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge

Step 2: Remove ClusterLogging ownerReferences from the Elasticsearch resource

The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. This means that updates to the ClusterLogging resource’s logStore field will no longer be applied to the Elasticsearch resource.

oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge

Step 3: Remove ClusterLogging ownerReferences from the Kibana resource

The following command ensures that ClusterLogging no longer owns the Kibana resource. This means that updates to the ClusterLogging resource’s visualization field will no longer be applied to the Kibana resource.

oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
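
To verify that both resources are now detached, you can check that their ownerReferences are empty; no output from the following commands means the references were removed:

oc -n openshift-logging get elasticsearch elasticsearch -o jsonpath='{.metadata.ownerReferences}{"\n"}'
oc -n openshift-logging get kibana kibana -o jsonpath='{.metadata.ownerReferences}{"\n"}'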

Step 4: Back up Elasticsearch and Kibana resources

To ensure that no accidental deletions destroy the previous storage and visualization components, namely Elasticsearch and Kibana, the following steps describe how to back up the resources. (This requires the small utility yq.)

Elasticsearch:

oc -n openshift-logging get elasticsearch elasticsearch -o yaml \
  | yq -r 'del(.status, .metadata.resourceVersion, .metadata.uid, .metadata.generation, .metadata.creationTimestamp, .metadata.selfLink)' \
  > /tmp/cr-elasticsearch.yaml

Kibana:

oc -n openshift-logging get kibana kibana -o yaml \
  | yq -r 'del(.status, .metadata.resourceVersion, .metadata.uid, .metadata.generation, .metadata.creationTimestamp, .metadata.selfLink)' \
  > /tmp/cr-kibana.yaml
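
As a quick sanity check that both backups captured complete resource definitions (assuming the files were written to /tmp as above):

# Each file should report its resource kind (Elasticsearch and Kibana respectively).
grep -H '^kind:' /tmp/cr-elasticsearch.yaml /tmp/cr-kibana.yaml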

Switch ClusterLogging to LokiStack

Now that you’ve disconnected the Elasticsearch and Kibana custom resources, you can update the ClusterLogging resource to point to LokiStack.

Step 1: Switch log storage to LokiStack

The manifest applies several changes to the ClusterLogging resource:

  • It sets the management state back to Managed.
  • It switches the logStore spec from elasticsearch to lokistack. In turn, this restarts the collector pods so that they start forwarding logs to LokiStack from now on.
  • It removes the visualization spec (unless you keep it to retain Kibana, as noted in the manifest). In turn, the cluster-logging-operator will install the logging-view-plugin that enables observing LokiStack logs in the OpenShift console.
  • Replace the current spec.collection section with the one available in the running cluster.
cat << EOF | oc replace -f -
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "lokistack"
    lokistack:
      name: logging-loki
  collection: [...]  # <--- Replace with the current collection configuration.
  visualization:     # Keep this section as long as you need to keep Kibana.
    kibana:
      replicas: 1
    type: kibana
EOF
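
After the replace, you can watch the collector pods restart and the console plug-in roll out. A quick check could look like the following; the label and deployment name are typical for the cluster-logging-operator but may differ between versions:

# Collector pods are recreated so that new logs are forwarded to LokiStack.
oc -n openshift-logging get pods -l component=collector
# The operator deploys the console view plug-in backing the Observe → Logs page.
oc -n openshift-logging get deployment logging-view-plugin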

Step 2: Re-instantiate Kibana resource

If you removed the visualization field entirely in the previous step in favor of letting the operator install the OpenShift console integration, the operator will also remove the Kibana resource. This is unfortunate, but it is a non-critical issue as long as you have a backup of the Kibana resource.

The reason is that the operator automatically removes the Kibana resource named kibana from openshift-logging without checking any owner references. This behavior was correct as long as Kibana was the only supported visualization component in logging for Red Hat OpenShift.

oc -n openshift-logging apply -f /tmp/cr-kibana.yaml
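
You can confirm that the Kibana resource and its pod are back with something like the following; the pod label is the usual one in openshift-logging but may vary:

oc -n openshift-logging get kibana kibana
oc -n openshift-logging get pods -l component=kibana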

Step 3: Enable the console view plug-in

You will need to enable the console view plug-in, if it isn’t already enabled, to view the logs from the OpenShift Container Platform console under Observe → Logs. Enter:

oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ "spec": { "plugins": ["logging-view-plugin"] } }'
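
Keep in mind that a merge patch replaces the whole spec.plugins list, so if other console plug-ins are already enabled on your cluster, include them in the patch as well. To inspect the currently enabled plug-ins:

oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'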

Delete the Elasticsearch stack

Once the retention period for the logs stored in the Elasticsearch log store has expired and no more logs are visible in the Kibana instance, it is possible to remove the old stack to release resources.

Step 1: Delete the Elasticsearch and Kibana resources:

oc -n openshift-logging delete kibana/kibana elasticsearch/elasticsearch

Step 2: Delete the PVCs used by the Elasticsearch instances:

oc delete -n openshift-logging pvc -l logging-cluster=elasticsearch
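
To confirm that the old stack is gone and its resources were released, a simple check could be:

# Neither Elasticsearch/Kibana pods nor their PVCs should remain.
oc -n openshift-logging get pods | grep -E 'elasticsearch|kibana' || echo "no Elasticsearch/Kibana pods left"
oc -n openshift-logging get pvc -l logging-cluster=elasticsearch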

Summary

Migrating to Loki is necessary to use the latest logging features in logging for Red Hat OpenShift 6.0, as Elasticsearch and Kibana are deprecated in logging for Red Hat OpenShift 5.x versions. This article described how to test these changes in your development and test environments and to create a plan for production implementation.
