Collecting Kubernetes Logs with Filebeat


1. Filebeat Overview

    Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

    Filebeat works as follows: when you start Filebeat, it starts one or more inputs, which look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output configured for Filebeat.
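
As a minimal illustration of that flow, here is a standalone filebeat.yml sketch (not the Kubernetes configuration used below) pairing one log input with one output; the path and host are placeholders:

filebeat.inputs:
- type: log                        # a harvester is started for each matching file
  paths:
    - /var/log/*.log               # location the input watches for log data

output.elasticsearch:              # libbeat aggregates events and ships them here
  hosts: ["localhost:9200"]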

[Figure: Filebeat architecture]

2. Running Filebeat on Kubernetes

    Deploy Filebeat as a DaemonSet to ensure there is a running instance on every node of the cluster. The Docker logs host folder (/var/lib/docker/containers) is mounted into the Filebeat container, and Filebeat starts inputs for the files there and begins harvesting them as soon as they appear in the folder.

    The official manifest is used for the deployment here:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml
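
The downloaded manifest bundles everything Filebeat needs to run in the cluster; before editing it, you can list the resource kinds it contains:

grep '^kind:' filebeat-kubernetes.yaml

This should print ConfigMap, DaemonSet, ClusterRoleBinding, ClusterRole, and ServiceAccount entries, matching the resources shown in the modified manifest below.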

3. Settings

By default, Filebeat sends events to an existing Elasticsearch deployment, if present. To specify a different destination, change the following parameters in the manifest file:

env:
- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
- name: ELASTIC_CLOUD_ID
  value:
- name: ELASTIC_CLOUD_AUTH
  value:
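
These values can also be changed on an already-deployed DaemonSet without editing and re-applying the manifest; for example (the host shown is hypothetical):

kubectl -n kube-system set env daemonset/filebeat \
  ELASTICSEARCH_HOST=es.example.internal \
  ELASTICSEARCH_PORT=9200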

To send the output to Logstash instead, use a manifest like the following:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    #cloud.id: ${ELASTIC_CLOUD_ID}
    #cloud.auth: ${ELASTIC_CLOUD_AUTH}

    #output.elasticsearch:
      #hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}
    output.logstash:
      hosts: ["192.168.0.104:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: log                              # input type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      #fields:
      #  app: k8s
      #  type: docker-log
      fields_under_root: true                # promote custom fields to the event root
      json.keys_under_root: true
      json.overwrite_keys: true
      encoding: utf-8
      fields.sourceType: docker-log          # used to build the index name in Logstash
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4       # pull this image in advance; downloading it may require a proxy
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
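
The output.logstash section above assumes a Logstash instance is already listening at 192.168.0.104:5044 with the beats input plugin. A minimal input block on the Logstash side would look like this (the port must match the manifest):

input {
    beats {
            port => 5044
    }
}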

Create and run it:
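
Assuming the edited manifest is saved as filebeat-kubernetes.yaml, applying it and checking the DaemonSet pods looks like this:

kubectl apply -f filebeat-kubernetes.yaml
kubectl -n kube-system get pods -l k8s-app=filebeat
kubectl -n kube-system logs -l k8s-app=filebeat --tail=20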

[Screenshots: Filebeat pod startup logs]

If you see startup logs like the ones above, Filebeat started successfully.

4. Troubleshooting

If it did not start successfully, check the Logstash logs. The error reported was:

[2019-12-20T19:53:14,049][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"dev-%{[fields][sourceType]}-2019-12-20", :_type=>"doc", :routing=>nil}, 
#<LogStash::Event:0x4c8737db>], :response=>{"index"=>{"_index"=>"dev-%{[fields][sourceType]}-2019-12-20", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name 
[dev-%{[fields][sourceType]}-2019-12-20], must be lowercase", "index_uuid"=>"_na_", "index"=>"dev-%{[fields][sourceType]}-2019-12-20"}}}}

The cause is that an index name in the Logstash output must not contain uppercase characters. Here the %{[fields][sourceType]} reference was likely never resolved (with fields_under_root: true, the custom field sits at the event root as sourceType rather than under fields), so the literal placeholder, uppercase letters included, ended up in the index name, and Elasticsearch rejected it.

My original Logstash conf file:

output {
    elasticsearch {
            hosts => ["localhost:9200"]
            index => '%{platform}-%{[fields][sourceType]}-%{+YYYY-MM-dd}'
            template => "/opt/logstash-6.5.2/config/af-template.json"
            template_overwrite => true
    }
}

And after the fix:

output {
    elasticsearch {
            hosts => ["localhost:9200"]
            index => "k8s-log-%{+YYYY.MM.dd}"
            template => "/opt/logstash-6.5.2/config/af-template.json"
            template_overwrite => true
    }
}
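
After restarting Logstash with the corrected output, you can confirm that the daily index is being created (assuming Elasticsearch is reachable on localhost:9200):

curl 'localhost:9200/_cat/indices/k8s-log-*?v'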

A clean finish, with no pitfalls!
