
How To Send OpenShift Logs and Events to Splunk

As a cluster administrator, you will want to aggregate all the logs from your OpenShift Container Platform cluster, such as node system logs, infrastructure container logs, and application container logs. In this article we will deploy the logging pods and other resources needed to send logs, events, and cluster metrics to Splunk.

We will be using Splunk Connect for Kubernetes, which provides a way to import and search your OpenShift or Kubernetes logging, object, and metrics data in Splunk. Splunk Connect for Kubernetes builds on multiple CNCF components to get data into Splunk.

Setup Requirements

For this setup you need the following items.

  • A working OpenShift cluster with the oc command-line tool configured. Administrative access is required.
  • Splunk Enterprise 7.0 or later
  • Helm installed on your workstation
  • At least two Splunk indexes
  • An HEC token used by the HTTP Event Collector to authenticate the event data

There will be three types of deployments on OpenShift for this purpose.

  1. Deployment for collecting changes in OpenShift objects.
  2. One DaemonSet on each OpenShift node for metrics collection.
  3. One DaemonSet on each OpenShift node for logs collection.

The actual implementation will be as shown in the diagram below.

(Diagram: Splunk Connect for Kubernetes components on OpenShift sending data to Splunk)

Step 1: Create Splunk Indexes

You will need at least two indexes for this deployment: one for logs and events, and another for metrics.

Log in to Splunk as an admin user.


Create the events and logs index. The Input Data Type should be Events.


For the metrics index, set the Input Data Type to Metrics.

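If you have shell access to the Splunk server, the same indexes could also be created from the CLI. A minimal sketch, assuming $SPLUNK_HOME points at your Splunk installation and using ocp_logs and ocp_metrics as example index names:

# Run as the splunk user on the Splunk server
$SPLUNK_HOME/bin/splunk add index ocp_logs
$SPLUNK_HOME/bin/splunk add index ocp_metrics -datatype metric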

Confirm the indexes are available.


Step 2: Create Splunk HEC Token

The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. Since HEC uses a token-based authentication model, we need to generate a new token.

This is done under the Data Inputs configuration section.


Select “HTTP Event Collector”, fill in the name, and click Next.


On the next page, allow the token to write to the two indexes we created.


Review and submit the settings.

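If you prefer to script this step, a token can also be created through the Splunk REST API on the management port. This is only a sketch, assuming the default management port 8089, admin credentials, and the data/inputs/http endpoint; verify the parameter names against the REST API reference for your Splunk version:

# openshift-hec is an example token name; <password> and the index placeholders are yours to fill in
curl -k -u admin:<password> https://<splunk-ip>:8089/services/data/inputs/http \
  -d name=openshift-hec \
  -d index=<logging-indexname> \
  -d indexes=<logging-indexname>,<metrics-indexname>

The generated token value appears in the response and under the Data Inputs page in the UI.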

Step 3: Install Helm

If you don’t have Helm already installed on your workstation or bastion server, install it first; one way is shown below.
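
A minimal sketch using Helm’s official installer script (assuming curl and bash are available on the workstation):

# Download and run the Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh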

You can validate the installation by checking the helm version:

$ helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.19.3"}

Step 4: Deploy Splunk Connect for Kubernetes

Create a namespace (project) for Splunk Connect:

oc new-project splunk-hec-logging

Upon creation, the project becomes your current working project, but you can switch to it at any time with:

oc project splunk-hec-logging

Create a values YAML file for the installation:

vim ocp-splunk-hec-values.yaml

Mine has been modified to look similar to the one below.

global:
  logLevel: info
  journalLogPath: /run/log/journal
  splunk:
    hec:
      host: <splunk-ip> # Set Splunk IP address
      port: <splunk-hec-port> # Set Splunk HEC port
      protocol: http
      token: <hec-token> # Hec token created
      insecureSSL: true
      indexName: <indexname> # default index if others not set
  kubernetes:
    clusterName: "<clustername>"
    openshift: true
splunk-kubernetes-metrics:
  enabled: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <metrics-indexname>
  kubernetes:
    openshift: true
splunk-kubernetes-logging:
  enabled: true
  logLevel: debug
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <logging-indexname>
  containers:
    logFormatType: cri
  logs:
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log
splunk-kubernetes-objects:
  enabled: true
  kubernetes:
    openshift: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName:  <objects-indexname>
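
Before deploying, it can help to confirm that the HEC endpoint and token are reachable from your workstation. A quick test using the same placeholders as the values file above:

curl -k "<hec-protocol>://<splunk-ip>:<splunk-hec-port>/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "HEC connectivity test"}'

A healthy endpoint responds with {"text":"Success","code":0}.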

Fill in the values accordingly, then initiate the deployment. Get the latest release URL before installation:

VER=$(curl -s https://api.github.com/repos/splunk/splunk-connect-for-kubernetes/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
helm install splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/$VER/splunk-connect-for-kubernetes-$VER.tgz

Deployment output:

NAME: splunk-kubernetes-logging
LAST DEPLOYED: Thu Aug 24 22:58:53 2023
NAMESPACE: temp
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
███████╗██████╗ ██╗     ██╗   ██╗███╗   ██╗██╗  ██╗██╗
██╔════╝██╔══██╗██║     ██║   ██║████╗  ██║██║ ██╔╝╚██╗
███████╗██████╔╝██║     ██║   ██║██╔██╗ ██║█████╔╝  ╚██╗
╚════██║██╔═══╝ ██║     ██║   ██║██║╚██╗██║██╔═██╗  ██╔╝
███████║██║     ███████╗╚██████╔╝██║ ╚████║██║  ██╗██╔╝
╚══════╝╚═╝     ╚══════╝ ╚═════╝ ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝

Listen to your data.

Splunk Connect for Kubernetes is spinning up in your cluster.
After a few minutes, you should see data being indexed in your Splunk.

If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com

If changes are made to the values file, update the deployment with:

helm upgrade splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/$VER/splunk-connect-for-kubernetes-$VER.tgz

Check the running pods:

$ oc get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
splunk-kubernetes-logging-splunk-kubernetes-metrics-4bvkp         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-4skrm         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-55f8t         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-7xj2n         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-8r2vj         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-agg-5bppqqn   1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-f8psk         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-fp88w         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-s45wx         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-xtq5g         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-objects-b4f8f4m67vg   1/1     Running   0          48s
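
You can also confirm that the two DaemonSets (logging and metrics) and the objects Deployment described earlier were created:

oc get daemonset,deployment -n splunk-hec-logging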

Grant the privileged SCC to the Splunk service accounts:

for sa in $(oc get sa -n splunk-hec-logging --no-headers | grep splunk | awk '{ print $1 }'); do
  oc adm policy add-scc-to-user privileged -n splunk-hec-logging -z $sa
done
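
After granting the SCC, confirm that all the pods, including the logging DaemonSet pods, reach the Running state:

oc get pods -n splunk-hec-logging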

Log in to Splunk and check that logs, events, and metrics are being sent.

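In the Search & Reporting app, a couple of quick searches can confirm data is arriving; the index names are the ones set in the values file, and mcatalog lists the metric names stored in a metrics index:

index=<logging-indexname> | head 10

| mcatalog values(metric_name) WHERE index=<metrics-indexname>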

This might not be the Red Hat recommended way of storing OpenShift events and logs. Refer to the OpenShift documentation for more details on cluster logging.
