This article walks through building an ELK log collection system for a Docker cluster. The material is detailed and the steps are quick to follow; let's dive in.
Introduction to ELK
ELK is made up of three open-source tools: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.
Kibana is also open-source and free. It provides a friendly web UI for the logs that Logstash and Elasticsearch handle, helping you aggregate, analyze, and search important log data.
Building the ELK platform with Docker
First, edit the Logstash configuration file, logstash.conf:
input {
  udp {
    port => 5000
    type => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"  # send Logstash output to Elasticsearch; replace with your own host
  }
}
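Before wiring this into the stack, you can sanity-check the file's syntax. A minimal sketch, assuming the config lives at ./logstash/config/logstash.conf on the host and that you are on a Logstash 2.x image, where the --configtest flag is available (newer releases renamed it --config.test_and_exit):

# validate the pipeline configuration without actually starting the pipeline
docker run --rm \
  -v "$(pwd)/logstash/config":/etc/logstash/conf.d \
  logstash:latest \
  logstash -f /etc/logstash/conf.d/logstash.conf --configtest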
Next, we need to adjust how Kibana starts up.
Write a startup script (entrypoint.sh) that waits until Elasticsearch is running before launching Kibana:
#!/usr/bin/env bash
# wait for the elasticsearch container to be ready before starting kibana.
echo "stalling for elasticsearch"
while true; do
  nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "starting kibana"
exec kibana
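If you would rather wait until the cluster can actually serve queries, not just accept TCP connections, here is a sketch of an alternative using the cluster health API. This is my own variation, not the original script, and it assumes curl is available in the image (the Dockerfile below installs netcat, not curl):

#!/usr/bin/env bash
# wait until Elasticsearch reports yellow or green cluster health before starting kibana
until curl -s "http://elasticsearch:9200/_cluster/health" | grep -qE '"status":"(yellow|green)"'; do
  sleep 1
done
exec kibana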
Modify the Dockerfile to produce a customized Kibana image:
FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]
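docker-compose will build this image automatically via the build: kibana/ entry further below, but you can also build it by hand to check the plugin install step. Assuming the Dockerfile and entrypoint.sh live in a kibana/ directory; the my-kibana tag here is just a placeholder:

# build the custom Kibana image from the kibana/ directory (tag name is arbitrary)
docker build -t my-kibana ./kibana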
You can also adjust Kibana's configuration file and choose the plugins you need:
# Kibana is served by a back end server. This controls which port to use.
port: 5601
# The host to bind the server to.
host: "0.0.0.0"
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server).
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass
# If your Elasticsearch requires a client certificate and key:
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/ca.pem
# The default application to load.
default_app_id: "discover"
# Time in milliseconds to wait for Elasticsearch to respond to pings; defaults to
# the request_timeout setting.
# ping_timeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch.
# This must be > 0.
request_timeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# startup_timeout: 5000
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true
# SSL for outgoing requests from the Kibana server (PEM formatted).
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt
# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
# This will also turn off the stdout log output.
log_file: ./kibana.log
# Plugins that are included in the build, and no longer found in the plugins/ folder:
bundled_plugin_ids:
  - plugins/dashboard/index
  - plugins/discover/index
  - plugins/doc/index
  - plugins/kibana/index
  - plugins/markdown_vis/index
  - plugins/metric_vis/index
  - plugins/settings/index
  - plugins/table_vis/index
  - plugins/vis_types/index
  - plugins/visualize/index
Now let's write a docker-compose.yml to make building the stack easier.
Adjust the ports and configuration file paths to match your own directory layout. The full stack is fairly resource-hungry, so run it on a reasonably well-provisioned machine.
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
With everything in place, a single command brings up the whole ELK stack:
docker-compose up -d
Visit Kibana on port 5601 (configured above) to check whether everything started successfully.
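You can also smoke-test the whole pipeline from the Docker host by pushing one event into the Logstash UDP input and then searching for it in Elasticsearch. A minimal sketch, assuming netcat is installed on the host; port 5001 and the logstash-* index pattern come from the compose file and Logstash's default index naming, and the app field is just an arbitrary marker invented for this test:

# send one JSON event to the Logstash UDP input (host port 5001 maps to container port 5000)
echo '{"app":"elk-smoke-test","text":"hello elk"}' | nc -u -w1 localhost 5001

# give Logstash a few seconds to flush, then look the event up in Elasticsearch
sleep 5
curl -s 'http://localhost:9200/logstash-*/_search?q=app:elk-smoke-test&pretty'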
Collecting Docker logs with logspout
Next we'll use logspout to collect the Docker logs, customizing the logspout image to our needs.
Write the module configuration file, modules.go:
package main

import (
	// blank imports register the logstash adapter and the udp transport with logspout
	_ "github.com/looplab/logspout-logstash"
	_ "github.com/gliderlabs/logspout/transports/udp"
)
Write the Dockerfile:
FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
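Rebuild the image from the directory that contains this Dockerfile and modules.go; the jayqqaa12/logspout tag matches the run command below:

# rebuild logspout with the logstash adapter baked in
docker build -t jayqqaa12/logspout .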
After rebuilding the image, just run it on every node:
docker run -d --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock \
jayqqaa12/logspout logstash://your-logstash-address
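The address must point at the Logstash UDP input as seen from each node. With the compose file above, container port 5000 is published on the ELK host as 5001/udp, so with a hypothetical ELK host at 192.168.1.10 the command would look like this:

# 192.168.1.10 is a placeholder for your ELK host; 5001 is the published UDP port
docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  jayqqaa12/logspout logstash://192.168.1.10:5001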
Now open Kibana and you can see the collected Docker logs.
Note that containers must write their logs to stdout/stderr (the console); otherwise logspout cannot pick them up.