
How To Install Metrics Server on a Kubernetes Cluster

The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API exposed by the Kubelet on each node. Resource usage metrics, such as container CPU and memory usage, are helpful when troubleshooting unexpected resource utilization. All of these metrics are made available in Kubernetes through the Metrics API.

The Metrics API exposes the amount of resources currently used by a given node or pod. The API itself doesn't store metric values; Metrics Server is the component that collects and serves them. The deployment YAML files needed for installation are provided in the Metrics Server project source code.
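
A quick way to see whether a Metrics API provider is already registered in your cluster is to query the API groups; if the command below prints nothing, the Metrics API is not available yet and Metrics Server needs to be installed:

kubectl api-versions | grep metrics.k8s.io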

Metrics Server Requirements

Metrics Server has specific requirements for cluster and network configuration that are not the default in every cluster distribution: the kube-apiserver must have the API aggregation layer enabled, Kubelets must allow authenticated and authorized access to their metrics endpoints, and the network must allow traffic from the API server to Metrics Server and from Metrics Server to the Kubelet secure port on every node. Please confirm that your cluster distribution meets these requirements before installing Metrics Server; a couple of quick checks are sketched below.
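
A minimal sketch of such checks, assuming a kubeadm-style cluster where the kube-apiserver runs as a static pod labelled component=kube-apiserver (managed Kubernetes services handle the control plane configuration for you):

# Inspect the kube-apiserver flags for aggregation layer settings (--requestheader-* and friends)
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep requestheader

# List node addresses; Metrics Server must be able to reach the Kubelet secure port (10250) on them
kubectl get nodes -o wide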

For Helm chart installation, check: Deploy Metrics Server in Kubernetes using Helm Chart

Deploy Metrics Server to Kubernetes

There are two deployment methods.

Option 1) Deploy Metrics Server as a single instance

Download the manifest file.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml

Modify the settings to your liking by editing the file.

vim metrics-server-components.yaml

Once you have made the customizations you need, deploy metrics-server in your Kubernetes cluster. Switch to the correct cluster first if you manage multiple Kubernetes clusters: Easily Manage Multiple Kubernetes Clusters with kubectl & kubectx.
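
If you keep several clusters in your kubeconfig, list the contexts and switch to the right one before applying anything (the context name below is just a placeholder):

kubectl config get-contexts
kubectl config use-context my-cluster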

Apply the Metrics Server manifest you downloaded. The same manifests are attached to each Metrics Server release, so they can also be applied directly from the release URL:

kubectl apply -f metrics-server-components.yaml

Here is the output showing the resources being created.

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Option 2) Deploy Metrics Server in HA mode

Note that this configuration requires a cluster with at least 2 nodes on which Metrics Server can be scheduled. To install the latest Metrics Server release in high availability mode, first download the high-availability.yaml manifest:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml -O metrics-server-ha.yaml

To maximize the efficiency of this highly available configuration, it is recommended to add the --enable-aggregator-routing=true flag to the kube-apiserver so that requests sent to Metrics Server are load balanced between the two instances.
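
A minimal sketch of that change, assuming a kubeadm-managed control plane where the kube-apiserver runs as a static pod defined in /etc/kubernetes/manifests/kube-apiserver.yaml (managed Kubernetes services expose this setting differently):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-aggregator-routing=true    # added; existing flags stay as they are

The kubelet notices the manifest change and restarts the kube-apiserver static pod automatically.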

Apply deployment manifest:

kubectl apply -f metrics-server-ha.yaml
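
With the high-availability manifest you should end up with two metrics-server replicas scheduled on different nodes. The k8s-app=metrics-server label used below is the one set by the upstream manifests; adjust it if you changed the labels:

kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide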

Validating Metrics Server installation

Use the following command to verify that the metrics-server deployment is running the desired number of pods:

$ kubectl get deployment metrics-server -n kube-system

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           7m23s

$ kubectl get pods -n kube-system | grep metrics

metrics-server-7cb45bbfd5-kbrt7   1/1     Running   0          8m42s
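
If you are scripting the installation, you can also block until the Deployment becomes available instead of polling manually:

kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s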

Confirm that the v1beta1.metrics.k8s.io APIService reports as available. Note that in the example output below the Metrics API happens to be served by prometheus-adapter (part of kube-prometheus); on a cluster where you have just installed Metrics Server, spec.service will instead point at the metrics-server Service in the kube-system namespace.

$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"metrics-adapter","app.kubernetes.io/name":"prometheus-adapter","app.kubernetes.io/part-of":"kube-prometheus","app.kubernetes.io/version":"0.10.0"},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"prometheus-adapter","namespace":"monitoring"},"version":"v1beta1","versionPriority":100}}
  creationTimestamp: "2022-12-19T18:56:43Z"
  labels:
    app.kubernetes.io/component: metrics-adapter
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.10.0
  name: v1beta1.metrics.k8s.io
  resourceVersion: "83076616"
  uid: 9089293f-48cf-44e9-9c30-bab0b82a544f
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: prometheus-adapter
    namespace: monitoring
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2023-08-19T16:35:55Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
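
A quicker check is to read the AVAILABLE column instead of dumping the whole object; it should show True:

kubectl get apiservice v1beta1.metrics.k8s.io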

The Metrics API can also be accessed using the kubectl top command. This makes it easier to debug autoscaling pipelines.

$ kubectl top --help
Display Resource (CPU/Memory/Storage) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

To display cluster node resource usage (CPU/Memory/Storage), run the command:

$ kubectl top nodes
NAME                                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-192-168-138-244.eu-west-1.compute.internal   50m          2%     445Mi           13%
ip-192-168-176-247.eu-west-1.compute.internal   58m          3%     451Mi           13%

A similar command can be used for pods.

$ kubectl top pods -A
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
kube-system   aws-node-glfrs                    4m           51Mi
kube-system   aws-node-sgh8p                    5m           51Mi
kube-system   coredns-6987776bbd-2mgxp          2m           6Mi
kube-system   coredns-6987776bbd-vdn8j          2m           6Mi
kube-system   kube-proxy-5glzs                  1m           7Mi
kube-system   kube-proxy-hgqm5                  1m           8Mi
kube-system   metrics-server-7cb45bbfd5-kbrt7   1m           11Mi
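
kubectl top also supports a --sort-by flag, which helps surface the heaviest consumers first:

kubectl top pods -A --sort-by=memory
kubectl top nodes --sort-by=cpu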

You can also use kubectl get --raw to pull raw resource usage metrics for all nodes in the cluster.

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "ip-192-168-176-247.eu-west-1.compute.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-176-247.eu-west-1.compute.internal",
        "creationTimestamp": "2023-08-12T11:44:41Z"
      },
      "timestamp": "2023-08-12T11:44:17Z",
      "window": "30s",
      "usage": {
        "cpu": "55646953n",
        "memory": "461980Ki"
      }
    },
    {
      "metadata": {
        "name": "ip-192-168-138-244.eu-west-1.compute.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-138-244.eu-west-1.compute.internal",
        "creationTimestamp": "2023-08-12T11:44:41Z"
      },
      "timestamp": "2023-08-12T11:44:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47815890n",
        "memory": "454944Ki"
      }
    }
  ]
}
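
The same endpoint layout works for pods. For example, to pull raw usage for pods in the kube-system namespace (substitute any namespace):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | jq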

Other Customizations

These are additional customizations that can be made before installing Metrics Server on Kubernetes.

Setting Flags

Metrics Server supports all the standard Kubernetes API server flags, as well as the standard Kubernetes glog logging flags. The most commonly-used ones are:

  • --logtostderr: log to standard error instead of files in the container. You generally want this on.
  • --v=<X>: set log verbosity. It’s generally a good idea to run at log level 1 or 2 unless you’re encountering errors. At log level 10, large amounts of diagnostic information will be reported, including API request and response bodies, and raw metric results from the Kubelet.
  • --secure-port=<port>: set the secure port. If you’re not running as root, you’ll want to set this to something other than the default (port 443).
  • --tls-cert-file, --tls-private-key-file: the serving certificate and key files. If not specified, self-signed certificates will be generated. Use non-self-signed certificates in production.
  • --kubelet-certificate-authority: the path of the CA certificate to use for validating the Kubelet’s serving certificates (see the sketch after this list).
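
As an example of the last flag, below is a minimal sketch (not taken from the upstream manifest) of mounting the cluster CA into the metrics-server container and pointing --kubelet-certificate-authority at it. It assumes the Kubelet serving certificates are signed by the cluster CA, and it uses the kube-root-ca.crt ConfigMap that recent Kubernetes versions publish in every namespace; the mount path is arbitrary.

      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-certificate-authority=/etc/ssl/kubelet/ca.crt
        volumeMounts:
        - name: kubelet-ca
          mountPath: /etc/ssl/kubelet
          readOnly: true
      volumes:
      - name: kubelet-ca
        configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt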

Other flags to change Metrics Server behavior are:

  • --metric-resolution=<duration>: Interval at which metrics are scraped from Kubelets (defaults to 60s).
  • --kubelet-insecure-tls: skip verifying Kubelet CA certificates.
  • --kubelet-port: Port used to connect to the Kubelet (defaults to the default secure Kubelet port, 10250).
  • --kubelet-preferred-address-types: Order in which to consider Kubelet node address types when connecting to the Kubelet.

Setting node address types order

I’ll modify the deployment manifest file to add the order in which to consider different Kubelet node address types when connecting to Kubelet.

vim metrics-server-components.yaml

Modify like below:

...
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:vx.y.z
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

Apply after modification:

kubectl apply -f metrics-server-components.yaml
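
After the rollout completes, you can confirm that the new arguments actually landed on the running Deployment:

kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'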

Disabling Kubelet CA certificate verification

If you’re using self-signed certificates, you can use the --kubelet-insecure-tls flag to skip verifying the Kubelet CA certificates. Without the flag, the metrics-server logs will contain errors similar to the one below:

E0927 00:52:21.463625       1 scraper.go:139] "Failed to scrape node" err="Get \"https://192.168.200.13:10250/stats/summary?only_cpu_and_memory=true\": x509: cannot validate certificate for 192.168.200.13 because it doesn't contain any IP SANs" node="k8s-worker-01.example.com"

Edit the file to add the flag:

...
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls

Apply the change:

kubectl apply -f metrics-server-components.yaml
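
Once the replacement pod is running, the scrape errors should stop appearing in the logs:

kubectl -n kube-system logs deployment/metrics-server --tail=20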

Test Metrics Server installation

Let's display node resource usage (CPU/Memory/Storage):

$ kubectl top nodes
NAME                                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8smaster01.geeksforgeeks.org     196m         4%     1053Mi          14%       
k8sworker01.geeksforgeeks.org     107m         2%     2080Mi          27%       
k8sworker02.geeksforgeeks.org     107m         2%     2080Mi          27%       
k8sworker03.geeksforgeeks.org     107m         2%     2080Mi          27%  

We can do the same for pods. Show metrics for pods in all namespaces:

$ kubectl top pods -A
NAMESPACE     NAME                                                        CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-5c45f5bd9f-dk8jp                    1m           11Mi            
kube-system   calico-node-4h67w                                           32m          27Mi            
kube-system   calico-node-99vkm                                           35m          27Mi            
kube-system   calico-node-qdqb8                                           21m          27Mi            
kube-system   calico-node-sd9r8                                           21m          43Mi            
kube-system   coredns-6955765f44-d4g99                                    2m           12Mi            
kube-system   coredns-6955765f44-hqc4q                                    2m           11Mi            
kube-system   kube-proxy-h87zf                                            1m           12Mi            
kube-system   kube-proxy-lcnvx                                            1m           14Mi            
kube-system   kube-proxy-x6tfx                                            1m           16Mi            
kube-system   kube-proxy-xplz4                                            1m           16Mi            
kube-system   metrics-server-7bd949b8b6-mpmk9                             1m           10Mi        

For more command options, check:

kubectl top pod --help
kubectl top node --help

