Panduan Linux

Centralized Kubernetes Logging Using Fluentd + Elasticsearch

Yo, what's up! If you've ever messed around with Kubernetes (K8s), you already know how painful logging gets for apps running on it. Container logs just go to stdout, so to see the logs of one container you have to run kubectl logs against that specific pod, one at a time. Crazy, right? And once you're running at scale, it feels like hunting for a needle in a haystack. That's where a centralized logging system comes in: something that collects all the logs from all those containers and nodes in one place. So, in this article, we'll walk through how to set up centralized logging on Kubernetes using Fluentd and Elasticsearch.

Why Use Fluentd + Elasticsearch?

Before we start, you might be wondering: why Fluentd + Elasticsearch? Fluentd is typically used to collect logs from all kinds of sources and forward them to whatever destination you want, Elasticsearch being one of them. Elasticsearch, in turn, is really good at storing and indexing data, so you can query the logs through Kibana.

So here's the deal: Fluentd collects the logs from every container in K8s and enriches them with metadata like the pod name, container name, namespace, and so on. After that, Fluentd forwards those logs to Elasticsearch. We can also use Kibana to browse the logs stored in Elasticsearch.
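As an illustration, the pipeline inside the fluentd-kubernetes-daemonset image we'll use later looks roughly like this. This is a simplified sketch of the three stages (tail, enrich, forward), not the exact config shipped in the image:

```
# Tail the container log files the kubelet writes on each node
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>

# Enrich each record with pod, container, and namespace metadata
# fetched from the Kubernetes API server
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Forward everything to Elasticsearch
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.kube-logging.svc.cluster.local
  port 9200
  logstash_format true
</match>
```

The RBAC rules we create below exist for that middle stage: the kubernetes_metadata filter needs permission to read pods and namespaces from the API server.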

Step-by-Step Guide: Setting Up Fluentd + Elasticsearch on K8s

Now let's go through the setup step by step.

1. Install Elasticsearch

First of all, we need Elasticsearch running on K8s. We can create it with the following manifest (note: emptyDir is fine for a demo, but the data is gone when the pod restarts; for anything real, use volumeClaimTemplates instead):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 1  # discovery.type is single-node, so extra replicas would not form a cluster
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.2
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        env:
        - name: discovery.type
          value: single-node
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-data
        emptyDir: {}
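The kube-logging namespace doesn't exist by default, so create it before applying the manifest. A quick sketch (the filename here is just an example of wherever you saved the manifest):

```shell
# Create the namespace used by all the manifests in this article
kubectl create namespace kube-logging

# Apply the Elasticsearch StatefulSet (example filename)
kubectl apply -f elasticsearch-statefulset.yaml

# Wait until the pod is up and running
kubectl -n kube-logging rollout status statefulset/elasticsearch
```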

Don't forget to create the service too, so Elasticsearch gets an endpoint:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  clusterIP: None  # headless, since the StatefulSet references this service via serviceName
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: transport
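To make sure Elasticsearch is actually reachable before wiring up Fluentd, you can port-forward the service and hit the health endpoint:

```shell
# Forward the service port to localhost in one terminal...
kubectl -n kube-logging port-forward svc/elasticsearch 9200:9200 &

# ...then check that the cluster responds
curl -s http://localhost:9200/_cluster/health?pretty
```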

2. Install Fluentd

After that, we'll install Fluentd. Start by creating the service account, cluster role, and role binding:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ''
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
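After applying the RBAC manifest you can verify the binding actually works by impersonating the service account (the filename below is just an example):

```shell
kubectl apply -f fluentd-rbac.yaml

# Verify the fluentd service account is allowed to list pods;
# this should print "yes"
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-logging:fluentd
```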

Next, we apply the yaml for the fluentd DaemonSet (so there's a log agent on every node):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
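Once the DaemonSet is applied, a fluentd pod should appear on every node, and shortly after that the first indices should show up in Elasticsearch:

```shell
# One fluentd pod should be scheduled per node
kubectl -n kube-logging get daemonset fluentd

# Check fluentd's own output for a successful Elasticsearch connection
kubectl -n kube-logging logs daemonset/fluentd --tail=20

# With the Elasticsearch port-forward from earlier still running,
# logstash-* indices should start appearing
curl -s "http://localhost:9200/_cat/indices?v"
```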

3. Install Kibana (Optional)

Finally, we can install Kibana to visualize the logs sitting in Elasticsearch.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.9.2
        ports:
        - containerPort: 5601
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch.kube-logging.svc.cluster.local:9200"
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
spec:
  selector:
    app: kibana
  ports:
  - port: 5601

Once we've applied all those yaml files, we can check on things with kubectl -n kube-logging get pods. Wait until all the pods are running, and then Kibana is reachable at http://localhost:5601 (after a port-forward).
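The port-forward step looks like this; the logstash-* index pattern matches the default index names Fluentd's Elasticsearch output creates when logstash_format is enabled:

```shell
# Expose Kibana locally
kubectl -n kube-logging port-forward svc/kibana 5601:5601

# Then open http://localhost:5601 in a browser and create an
# index pattern matching logstash-* to browse the container logs
```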

#Kubernetes #Elasticsearch #Fluentd #Tutorial