
Running Elasticsearch, Fluentd, Kibana (EFK) on Oracle Kubernetes Engine


Fluentd is an open source data collector for building a unified logging layer, and it is a CNCF graduated project. On Kubernetes, pods are created and destroyed frequently, they sometimes crash or fail, and nodes can die or go offline during a node pool upgrade. The challenge for developers is that log data from those pods and nodes is not preserved and is therefore unavailable for later analysis. This is where a tool like Fluentd becomes so handy: it collects the logs and ships them off the nodes for analysis.

EFK, short for Elasticsearch, Fluentd and Kibana, is a solid combination of open source tools that covers the whole pipeline: Fluentd collects and forwards the logs, Elasticsearch indexes and stores them, and Kibana provides a graphical user interface for searching and visualizing them.

Elasticsearch needs stateful storage, so it runs as a StatefulSet, while the log files sitting on each node of your Kubernetes cluster need to be captured and forwarded, so it is advisable to run Fluentd as a DaemonSet so that a collector pod lands on every node.

Installation

Here is the namespace.yaml

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

Apply the above using the following command

kubectl apply -f namespace.yaml
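
To confirm the namespace was created before moving on, you can run

kubectl get namespace kube-logging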

Here is the elasticservice_svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

Apply the above yaml using the following command

kubectl apply -f elasticservice_svc.yaml
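
Note that clusterIP: None makes this a headless service; the StatefulSet below relies on it for stable per-pod DNS names such as es-cluster-0.elasticsearch. A quick check that the service registered:

kubectl get svc elasticsearch -n kube-logging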

Here is the es_statefulsets.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command:
            ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "oci"
        resources:
          requests:
            storage: 100Gi

Apply the above yaml using the following command

kubectl apply -f es_statefulsets.yaml
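
The Elasticsearch pods can take a few minutes to become ready. One way to watch the rollout and then check cluster health is to port-forward one of the pods (run the port-forward in a separate terminal):

kubectl rollout status statefulset/es-cluster -n kube-logging

kubectl -n kube-logging port-forward es-cluster-0 9200:9200

curl http://localhost:9200/_cluster/health?pretty

A healthy three-node cluster should report "status" : "green" and "number_of_nodes" : 3.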

Here is the kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
    - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.5.2
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
          ports:
            - containerPort: 5601

Apply the above kibana.yaml using the following command

kubectl apply -f kibana.yaml
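
To confirm Kibana rolled out successfully before exposing it:

kubectl rollout status deployment/kibana -n kube-logging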

Here is the fluentd.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.kube-logging.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

Apply the above fluentd.yaml using the following command

kubectl apply -f fluentd.yaml
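
The DaemonSet should schedule one Fluentd pod per node, tailing the container logs on that node and forwarding them to the elasticsearch service. You can verify a pod landed on each node with:

kubectl get pods -n kube-logging -l app=fluentd -o wide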

Here is the list of all resources created under the kube-logging namespace

kubectl get all -n kube-logging

NAME                          READY   STATUS    RESTARTS   AGE
pod/es-cluster-0              1/1     Running   0          1d
pod/es-cluster-1              1/1     Running   0          1d
pod/es-cluster-2              1/1     Running   0          1d
pod/fluentd-kiju7             1/1     Running   0          1d
pod/fluentd-hgt54             1/1     Running   0          1d
pod/fluentd-kjhgh             1/1     Running   0          1d
pod/kibana-6c98dcf5ff-huyjh   1/1     Running   0          1d

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   1d
service/kibana          ClusterIP   10.87.166.100   <none>        5601/TCP            1d

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd   3         3         3       3            3           <none>          1d

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana   1/1     1            1           1d

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-6c98dcf5ff   1         1         1       1d

NAME                          READY   AGE
statefulset.apps/es-cluster   3/3     1d

Here are the persistent volumes it created

kubectl get pv

ocid1.volume.oc1.ca-xxx-1.yyy   100Gi      RWO            Delete           Bound    kube-logging/data-es-cluster-0                   oci                     1d   Filesystem
ocid1.volume.oc1.ca-xxx-1.yyy   100Gi      RWO            Delete           Bound    kube-logging/data-es-cluster-2                   oci                     1d   Filesystem
ocid1.volume.oc1.ca-xx-1.yyy   100Gi      RWO            Delete           Bound    kube-logging/data-es-cluster-1                   oci                     1d   Filesystem
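
You can also confirm that each Elasticsearch replica bound its volume claim:

kubectl get pvc -n kube-logging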

Once you confirm that all the resources and PVs have been created, issue the following command to port-forward the Kibana dashboard

kubectl -n kube-logging port-forward $(kubectl -n kube-logging get pod -l app=kibana -o name) 5601:5601

To open the dashboard, point your browser to http://localhost:5601
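
When Kibana loads, create an index pattern for the indices Fluentd writes to; with the defaults of the fluentd-kubernetes-daemonset image these are typically named logstash-YYYY.MM.DD, so a pattern of logstash-* should match them. If you want some traffic to verify the pipeline end to end, a throwaway pod that just writes to stdout will do (the counter name here is only an example):

kubectl run counter --image=busybox --restart=Never -- sh -c 'while true; do echo "EFK demo log $(date)"; sleep 5; done'

Its output should show up under the logstash-* index pattern in Kibana after a short delay, and the pod can be removed afterwards with kubectl delete pod counter.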
