01-Getting Started with InfluxDB
02-InfluxDB Permission Management
InfluxDB is a time-series database written in Go for storing large volumes of timestamped data, such as DevOps monitoring data, logs, application metrics, and analytics data. InfluxDB also manages retention automatically: you do not have to delete or clean up old data yourself, you simply define a retention period and the database expires the data for you.
Features and other information
InfluxDB listens on port 8086 by default and exposes an HTTP API, which makes it simple and convenient to use.
Download the package: wget 'http://dl.influxdata.com/influxdb/nightlies/influxdb-nightly_darwin_amd64.tar.gz'
Extract the package: tar zxf influxdb-nightly_darwin_amd64.tar.gz
Start InfluxDB: cd influxdb-1.7.0~n201811230800-0/ && ./usr/bin/influxd
Connect to the database: ./usr/bin/influx
View the help:
> help
Usage:
connect <host:port> connects to another node specified by host:port
auth prompts for username and password
pretty toggles pretty print for the json format
chunked turns on chunked responses from server
chunk size <size> sets the size of the chunked responses. Set to 0 to reset to the default chunked size
use <db_name> sets current database
format <format> specifies the format of the server responses: json, csv, or column
precision <format> specifies the format of the timestamp: rfc3339, h, m, s, ms, u or ns
consistency <level> sets write consistency level: any, one, quorum, or all
history displays command history
settings outputs the current settings for the shell
clear clears settings such as database or retention policy. run 'clear' for help
exit/quit/ctrl+d quits the influx shell
show databases show database names
show series show series information
show measurements show measurement information
show tag keys show tag key information
show field keys show field key information
A full list of influxql commands can be found at:
https://docs.influxdata.com/influxdb/latest/query_language/spec/
Log in to InfluxDB
influx -precision rfc3339
-precision rfc3339  # specifies the timestamp format; rfc3339 displays human-readable RFC3339 timestamps
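The same setting is also available inside the shell as the precision command listed in the help output above, so you can switch timestamp formats without restarting the client:
> precision rfc3339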
Now that InfluxDB is up and running, we can walk through the basics: creating a database and writing data. A time-series write can contain zero or more points; each point corresponds to one sample of a metric (for example cpu_load or a temperature reading) and contains at least one key-value pair (such as cpu_load=5).
Here you can think of it this way: a measurement is roughly the equivalent of a table, and tags are the indexed columns of that table.
Unlike MySQL, you can have millions of measurements, you do not need to define a schema ahead of time, and null values are never stored.
1. Create a database
CREATE DATABASE {NAME};
> create database order_record;
> show databases;
name: databases
name
----
_internal
order_record
Notice that a database named "_internal" already exists; it is where InfluxDB stores its own runtime metrics, somewhat like the mysql system database in a MySQL server.
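If you are curious about what it contains, you can switch to it and list its measurements with the commands from the help output above (output omitted here, since the exact measurement names vary by version):
> use _internal
> show measurements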
2. Write data
Note that if you do not supply a timestamp when writing a point, the server assigns the current time by default.
<measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field2-key>=<field2-value>...] [unix-nano-timestamp]
> use order_record;
Using database order_record
> INSERT cpu,host=serverA,region=us_west value=0.64
> SELECT "host", "region", "value" FROM "cpu"
name: cpu
time host region value
---- ---- ------ -----
2018-11-23T16:01:29.995044Z serverA us_west 0.64
3. Read data
As in MySQL, if you want to limit the number of rows returned you can append LIMIT n to a query; InfluxQL also supports Go-style regular expressions (see the sketch after the examples below).
> select * from cpu;
name: cpu
time host region value
---- ---- ------ -----
2018-11-23T16:01:29.995044Z serverA us_west 0.64
> INSERT stock,symbol=AAPL bid=127.46,ask=127.48
> INSERT temperature,machine=unit42,type=assembly external=25,internal=37 1434067467000000000
> SELECT "host", "region", "value" FROM "cpu"
name: cpu
time host region value
---- ---- ------ -----
2018-11-23T16:01:29.995044Z serverA us_west 0.64
Output one row from every measurement:
SELECT * FROM /.*/ LIMIT 1
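As a small sketch of the regex support mentioned above, both the measurement and a tag filter can be regular expressions; the query below reuses the cpu example, and the rows returned depend on what you have written so far:
> SELECT "value" FROM /^cpu.*/ WHERE "region" =~ /us.*/ LIMIT 2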
4. View the database shards
> SHOW SHARDS
name: _internal
id database retention_policy shard_group start_time end_time expiry_time owners
-- -------- ---------------- ----------- ---------- -------- ----------- ------
1 _internal monitor 1 2018-11-24T00:00:00Z 2018-11-25T00:00:00Z 2018-12-02T00:00:00Z
name: zabbix
id database retention_policy shard_group start_time end_time expiry_time owners
-- -------- ---------------- ----------- ---------- -------- ----------- ------
5 zabbix autogen 5 1969-12-29T00:00:00Z 1970-01-05T00:00:00Z 1970-01-05T00:00:00Z
4 zabbix autogen 4 1970-01-12T00:00:00Z 1970-01-19T00:00:00Z 1970-01-19T00:00:00Z
3 zabbix autogen 3 2018-11-19T00:00:00Z 2018-11-26T00:00:00Z 2018-11-26T00:00:00Z
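Shards are grouped by retention policy (the retention_policy column above), which is what drives the automatic expiry mentioned at the start. As a minimal sketch, you can inspect a database's policies and add your own; the policy name one_week below is just an illustrative example:
> SHOW RETENTION POLICIES ON order_record
> CREATE RETENTION POLICY "one_week" ON "order_record" DURATION 7d REPLICATION 1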
0. HTTP API endpoints
Endpoint | Description |
---|---|
/debug/pprof | Go pprof profiling data, used for debugging |
/debug/requests | shows which clients have made requests recently |
/debug/vars | exposes internal statistics collected by InfluxDB |
/ping | health check for the InfluxDB instance (see the example after the table) |
/query | query endpoint (can also be used to create databases) |
/write | write endpoint (writes points to an existing database) |
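As a quick sketch of the /ping endpoint: a healthy instance answers with 204 No Content and reports its build in the X-Influxdb-Version header, the same header seen in the responses below.
curl -i http://localhost:8086/ping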
Status codes: successful queries return 200 OK, successful writes return 204 No Content, client-side problems (for example malformed line protocol or a missing database) return a 4xx code, and server-side failures return a 5xx code.
1. Create a database
curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
HTTP/1.1 200 OK
Content-Type: application/json
Request-Id: 5edd88a8-ef90-11e8-83cd-a0999b0f94e3
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.7.0~n201811230800
X-Request-Id: 5edd88a8-ef90-11e8-83cd-a0999b0f94e3
Date: Sat, 24 Nov 2018 02:26:38 GMT
Transfer-Encoding: chunked
{"results":[{"statement_id":0}]}
2. Write data
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.65 1434055564000000000'
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: 1ae386c4-ef91-11e8-83d8-a0999b0f94e3
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.7.0~n201811230800
X-Request-Id: 1ae386c4-ef91-11e8-83d8-a0999b0f94e3
Date: Sat, 24 Nov 2018 02:31:53 GMT
3. Write multiple points
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02 value=0.67
cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257'
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: 574f52a0-ef91-11e8-83d9-a0999b0f94e3
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.7.0~n201811230800
X-Request-Id: 574f52a0-ef91-11e8-83d9-a0999b0f94e3
Date: Sat, 24 Nov 2018 02:33:34 GMT
4. Import data from a file
When importing from a file it is best to keep each request under 5,000 lines; if the file is larger, split it first (see the sketch after the example below), because the HTTP API times out after 5 seconds and an oversized request makes it impossible to confirm whether the write succeeded.
The file cpu_data.txt contains:
cpu_load_short,host=server02 value=111
cpu_load_short,host=server02,region=us-west value=0.222 1543027130702900257
cpu_load_short,direction=in,host=server01,region=us-west value=111.222 1543027129702900257
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary @cpu_data.txt
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: 4b2ed710-ef92-11e8-83e3-a0999b0f94e3
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.7.0~n201811230800
X-Request-Id: 4b2ed710-ef92-11e8-83e3-a0999b0f94e3
Date: Sat, 24 Nov 2018 02:40:24 GMT
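For files larger than 5,000 lines, one hedged approach is to split them with the standard split tool and post each chunk separately; the chunk prefix cpu_data_part_ is just an illustrative name:
split -l 5000 cpu_data.txt cpu_data_part_
for f in cpu_data_part_*; do
  curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary @"$f"
done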