Introduction to MongoDB Sharded Clusters
Sharding is the process of splitting a database and distributing its data across multiple machines. By spreading the data over several machines, you can store more data and handle a heavier load without needing a single, very powerful server. The basic idea is to split a collection into small chunks and spread those chunks over several shards, each holding only part of the total data; a balancer then keeps the shards even by migrating chunks between them. Operations go through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most deployments use sharding to solve disk-space problems; write performance may actually get worse, and cross-shard queries should be avoided where possible. When to use sharding:
1. The machine's disks are running out of space. Sharding solves the disk-space problem.
2. A single mongod can no longer keep up with the write load. Sharding spreads writes across the shards, using each shard server's own resources.
3. You want to keep a large amount of data in memory to improve performance. As above, sharding lets you use the memory of each shard server.
Server Planning
Three servers are used: 192.168.2.177, 192.168.2.178, and 192.168.2.180. Each server runs a config server (mongosrv, port 21000), a shard1 replica set member (port 27001), and a mongos router (port 30000).
Download MongoDB
https://www.mongodb.com/download-center/community
wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.7.tgz
Extract the MongoDB package
mkdir -p /usr/local/mongodb
tar -zxvf mongodb-linux-x86_64-rhel70-4.0.7.tgz -C /usr/local/mongodb --strip-components=1
Add environment variables
export MONGO_HOME=/usr/local/mongodb
export PATH=$PATH:$MONGO_HOME/bin
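To make the variables persist across logins, they can be appended to a profile file and then verified; a minimal sketch (using /etc/profile is an assumption, adjust to your environment):

# Assumption: persisting the variables in /etc/profile (a per-user ~/.bashrc works too)
cat >> /etc/profile <<'EOF'
export MONGO_HOME=/usr/local/mongodb
export PATH=$PATH:$MONGO_HOME/bin
EOF
source /etc/profile

# Verify the binaries are on the PATH
mongod --version
mongos --version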
Cluster Preparation
Create the directories MongoDB needs
Work on 192.168.2.177; all of the following configuration steps are performed on 192.168.2.177.
mkdir -p /wdata/mongodb/{data,logs,config,keys}
mkdir -p /wdata/mongodb/data/{mongosrv,shard1}
Copy the newly created directories to the other two servers
for i in 178 180; do ssh root@192.168.2.$i "mkdir -p /wdata"; scp -r /wdata/mongodb root@192.168.2.$i:/wdata/; done
Generate the key file for the sharded cluster
openssl rand -base64 90 -out /wdata/mongodb/keys/keyfile
Change the key file permissions
chmod 600 /wdata/mongodb/keys/keyfile
Note: this permission change is required; otherwise mongod may report errors on startup.
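Because the /wdata/mongodb directory tree was copied to the other servers before the key file was generated, the key file itself still has to be distributed; every member of the cluster must use the same key file. A minimal sketch:

for i in 178 180; do
  scp /wdata/mongodb/keys/keyfile root@192.168.2.$i:/wdata/mongodb/keys/
  ssh root@192.168.2.$i "chmod 600 /wdata/mongodb/keys/keyfile"
done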
Edit the mongos.conf file
systemLog:
  destination: file
  # log file location
  path: /wdata/mongodb/logs/mongos.log
  logAppend: true
processManagement:
  # fork and run in background
  fork: true
  pidFilePath: /wdata/mongodb/data/mongos.pid
# port configuration
net:
  port: 30000
  maxIncomingConnections: 500
  unixDomainSocket:
    enabled: true
    # pathPrefix: /tmp
    filePermissions: 0700
  bindIp: 0.0.0.0
security:
  keyFile: /wdata/mongodb/keys/keyfile
# add the config server replica set to the router
sharding:
  configDB: configs/192.168.2.177:21000,192.168.2.178:21000,192.168.2.180:21000
Edit the mongosrv1.conf file
systemLog:
  destination: file
  logAppend: true
  path: /wdata/mongodb/logs/mongosrv.log
storage:
  dbPath: /wdata/mongodb/data/mongosrv
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  # fork and run in background
  fork: true
  # location of pidfile
  pidFilePath: /wdata/mongodb/data/mongosrv/mongosrv.pid
net:
  port: 21000
  # bindIp: 0.0.0.0
  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    filePermissions: 0700
security:
  keyFile: /wdata/mongodb/keys/keyfile
  authorization: enabled
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr
Note: in this environment three copies of this file are needed, one on each of the three servers; only one is shown here.
Edit the shard1.conf file
systemLog:
  destination: file
  logAppend: true
  path: /wdata/mongodb/logs/shard1.log
storage:
  dbPath: /wdata/mongodb/data/shard1
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  # fork and run in background
  fork: true
  # location of pidfile (must not collide with the config server's pid file)
  pidFilePath: /wdata/mongodb/data/shard1/shard1.pid
  # timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27001
  # bindIp: 0.0.0.0
  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  bindIpAll: true
  maxIncomingConnections: 65535
  unixDomainSocket:
    enabled: true
    # pathPrefix: /tmp/mongod1
    filePermissions: 0700
security:
  keyFile: /wdata/mongodb/keys/keyfile
  authorization: enabled
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
Note: you can also configure several shards on a single server by creating separate directories that act as additional shard servers; whether you do so depends on your requirements. A sketch of what that might look like follows.
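For example, a second shard on the same host would need its own data directory, log file, port, and replica set name. A minimal sketch of a hypothetical shard2.conf (the shard2 directory, port 27002, and replica set name shard2 are illustrative assumptions, not part of this walkthrough); the remaining options would mirror shard1.conf:

# Hypothetical shard2.conf -- adjust paths, port, and replica set name as needed
systemLog:
  destination: file
  logAppend: true
  path: /wdata/mongodb/logs/shard2.log
storage:
  dbPath: /wdata/mongodb/data/shard2          # separate data directory
processManagement:
  fork: true
  pidFilePath: /wdata/mongodb/data/shard2/shard2.pid
net:
  port: 27002                                  # separate port
  bindIpAll: true
security:
  keyFile: /wdata/mongodb/keys/keyfile
  authorization: enabled
replication:
  replSetName: shard2                          # separate replica set
sharding:
  clusterRole: shardsvr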
Copy the configuration files to the other two servers
for i in 178 180; do scp -r /wdata/mongodb/config root@192.168.2.$i:/wdata/mongodb/; done
Start the cluster
On 192.168.2.177:
mongod -f /wdata/mongodb/config/mongosrv1.conf
mongod -f /wdata/mongodb/config/shard1.conf
mongos -f /wdata/mongodb/config/mongos.conf

On 192.168.2.178:
mongod -f /wdata/mongodb/config/mongosrv1.conf
mongod -f /wdata/mongodb/config/shard1.conf
mongos -f /wdata/mongodb/config/mongos.conf

On 192.168.2.180:
mongod -f /wdata/mongodb/config/mongosrv1.conf
mongod -f /wdata/mongodb/config/shard1.conf
mongos -f /wdata/mongodb/config/mongos.conf
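To confirm that the three processes came up on each host, you can check the process list and the listening ports; a small sketch (ss is assumed to be available, netstat works as well):

ps -ef | grep -E 'mongod|mongos' | grep -v grep
ss -lntp | grep -E '21000|27001|30000'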
Create the shard replica set
Run on 192.168.2.177
[root@localhost mongodb]# mongo --port 27001
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b56157d2-fbc7-4226-aeb4-4de0b79dfcda") }
MongoDB server version: 4.0.7
> use admin
switched to db admin
> config = {_id: 'shard1', members: [{_id: 0, host: '192.168.2.177:27001'},{_id: 1, host: '192.168.2.178:27001'},{_id: 2, host: '192.168.2.180:27001'}]}
{
  "_id" : "shard1",
  "members" : [
    { "_id" : 0, "host" : "192.168.2.177:27001" },
    { "_id" : 1, "host" : "192.168.2.178:27001" },
    { "_id" : 2, "host" : "192.168.2.180:27001" }
  ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard1:PRIMARY> rs.status()
{
  "set" : "shard1",
  "date" : ISODate("2019-04-03T10:08:16.477Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "syncingTo" : "",
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
    "readConcernMajorityOpTime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
    "appliedOpTime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
    "durableOpTime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) }
  },
  "lastStableCheckpointTimestamp" : Timestamp(1554286040, 2),
  "members" : [
    {
      "_id" : 0,
      "name" : "192.168.2.177:27001",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 2657,
      "optime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:08:10Z"),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "could not find member to sync from",
      "electionTime" : Timestamp(1554286040, 1),
      "electionDate" : ISODate("2019-04-03T10:07:20Z"),
      "configVersion" : 1,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "192.168.2.178:27001",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 67,
      "optime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:08:10Z"),
      "optimeDurableDate" : ISODate("2019-04-03T10:08:10Z"),
      "lastHeartbeat" : ISODate("2019-04-03T10:08:16.259Z"),
      "lastHeartbeatRecv" : ISODate("2019-04-03T10:08:15.440Z"),
      "pingMs" : NumberLong(6),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.2.177:27001",
      "syncSourceHost" : "192.168.2.177:27001",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1
    },
    {
      "_id" : 2,
      "name" : "192.168.2.180:27001",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 67,
      "optime" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1554286090, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:08:10Z"),
      "optimeDurableDate" : ISODate("2019-04-03T10:08:10Z"),
      "lastHeartbeat" : ISODate("2019-04-03T10:08:16.219Z"),
      "lastHeartbeatRecv" : ISODate("2019-04-03T10:08:15.440Z"),
      "pingMs" : NumberLong(4),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.2.177:27001",
      "syncSourceHost" : "192.168.2.177:27001",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1
    }
  ],
  "ok" : 1
}
shard1:PRIMARY> exit
bye
Create the sharded cluster database and users
[root@localhost mongodb]# mongo --port 27001
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a4c12af4-85e2-49e3-9a02-dd481307bcda") }
MongoDB server version: 4.0.7
shard1:PRIMARY> use admin
switched to db admin
shard1:PRIMARY> db.createUser({user:"admin",pwd:"123456",roles:[{role:"userAdminAnyDatabase",db:"admin"}]})
Successfully added user: {
  "user" : "admin",
  "roles" : [ { "role" : "userAdminAnyDatabase", "db" : "admin" } ]
}
shard1:PRIMARY> db.auth("admin","123456")
1
shard1:PRIMARY> use test
switched to db test
shard1:PRIMARY> db.createUser({user:"root",pwd:"123456",roles:[{role:"dbOwner",db:"test"}]})
Successfully added user: {
  "user" : "root",
  "roles" : [ { "role" : "dbOwner", "db" : "test" } ]
}
shard1:PRIMARY> exit
bye
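With authorization enabled, clients that connect directly to the shard must authenticate. A minimal sketch of connecting as the test-database user created above (host and credentials as in this walkthrough):

mongo 192.168.2.177:27001/test -u root -p 123456
# or, inside an already-open shell:
# use test
# db.auth("root", "123456")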
Configure the mongosrv (config server) replica set
[root@localhost config]# mongo --port 21000
MongoDB shell version v4.0.7
connecting to: mongodb://127.0.0.1:21000/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f9896034-d90f-4d52-bc55-51c3dc85aae9") }
MongoDB server version: 4.0.7
> config = {_id: 'configs', members: [{_id: 0, host: '192.168.2.177:21000'},{_id: 1, host: '192.168.2.178:21000'},{_id: 2, host: '192.168.2.180:21000'}]}
{
  "_id" : "configs",
  "members" : [
    { "_id" : 0, "host" : "192.168.2.177:21000" },
    { "_id" : 1, "host" : "192.168.2.178:21000" },
    { "_id" : 2, "host" : "192.168.2.180:21000" }
  ]
}
> rs.initiate(config)
{
  "ok" : 1,
  "$gleStats" : {
    "lastOpTime" : Timestamp(1554288851, 1),
    "electionId" : ObjectId("000000000000000000000000")
  },
  "lastCommittedOpTime" : Timestamp(0, 0)
}
configs:SECONDARY> rs.status(config)
{
  "set" : "configs",
  "date" : ISODate("2019-04-03T10:54:43.283Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "syncingTo" : "",
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "configsvr" : true,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
    "readConcernMajorityOpTime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
    "appliedOpTime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
    "durableOpTime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) }
  },
  "lastStableCheckpointTimestamp" : Timestamp(1554288864, 1),
  "members" : [
    {
      "_id" : 0,
      "name" : "192.168.2.177:21000",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 5454,
      "optime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:54:39Z"),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "could not find member to sync from",
      "electionTime" : Timestamp(1554288863, 1),
      "electionDate" : ISODate("2019-04-03T10:54:23Z"),
      "configVersion" : 1,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "192.168.2.178:21000",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 31,
      "optime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:54:39Z"),
      "optimeDurableDate" : ISODate("2019-04-03T10:54:39Z"),
      "lastHeartbeat" : ISODate("2019-04-03T10:54:41.362Z"),
      "lastHeartbeatRecv" : ISODate("2019-04-03T10:54:41.906Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.2.177:21000",
      "syncSourceHost" : "192.168.2.177:21000",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1
    },
    {
      "_id" : 2,
      "name" : "192.168.2.180:21000",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 31,
      "optime" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1554288879, 4), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2019-04-03T10:54:39Z"),
      "optimeDurableDate" : ISODate("2019-04-03T10:54:39Z"),
      "lastHeartbeat" : ISODate("2019-04-03T10:54:41.362Z"),
      "lastHeartbeatRecv" : ISODate("2019-04-03T10:54:41.597Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.2.177:21000",
      "syncSourceHost" : "192.168.2.177:21000",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1
    }
  ],
  "ok" : 1,
  "operationTime" : Timestamp(1554288879, 4),
  "$gleStats" : {
    "lastOpTime" : Timestamp(1554288851, 1),
    "electionId" : ObjectId("7fffffff0000000000000001")
  },
  "lastCommittedOpTime" : Timestamp(1554288879, 4),
  "$clusterTime" : {
    "clusterTime" : Timestamp(1554288879, 4),
    "signature" : {
      "hash" : BinData(0,"VVM2Bsa9KiZy8Sew9Oa8CsbDBPU="),
      "keyId" : NumberLong("6675619852301893642")
    }
  }
}
configs:PRIMARY>
This completes the deployment of the MongoDB sharded cluster. Common MongoDB commands are listed below.
show dbs                    # list all databases
db                          # show the current database
sh.status()                 # show cluster (sharding) information
sh.enableSharding("dba")    # enable sharding for a database
db.help()                   # show help
db.account.stats()          # show how a collection's data is distributed
db.createUser()             # create a user
use databasename            # switch to (or create) a database
db.shutdownServer()         # shut down the server
db.dropDatabase()           # drop the current database (switch to it first)
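To tie these commands together, here is a minimal sketch of sharding a collection through mongos; the database name dba, collection name account, and hashed _id shard key are illustrative assumptions, not part of the original walkthrough:

mongo --port 30000
use admin
sh.enableSharding("dba")                              # enable sharding on the dba database
sh.shardCollection("dba.account", { _id: "hashed" })  # shard the collection on a hashed _id key
use dba
for (var i = 0; i < 10000; i++) { db.account.insert({ _id: i, balance: i * 10 }) }
db.account.stats()                                    # see how documents are distributed across shards
sh.status()                                           # overall chunk distribution in the cluster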