Sharded Cluster - Planning
10 instances: 38017-38026
(1) config server: 38018-38020, a 3-node replica set (1 primary, 2 secondaries; arbiters are not supported), replica set name: configReplSet
(2) shard nodes:
    sh1: 38021-38023 (1 primary, 1 secondary, 1 arbiter; replica set name: sh1)
    sh2: 38024-38026 (1 primary, 1 secondary, 1 arbiter; replica set name: sh2)
(3) mongos: 38017
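To make the plan above easy to sanity-check, here is a minimal sketch (illustration only, not part of the deployment) that encodes the planned topology as data and verifies the port/role layout:

```python
from collections import Counter

# Each port maps to (role, replica set name); None = standalone mongos router.
instances = {
    38017: ("mongos", None),
    38018: ("configsvr", "configReplSet"),
    38019: ("configsvr", "configReplSet"),
    38020: ("configsvr", "configReplSet"),
    38021: ("shardsvr", "sh1"),
    38022: ("shardsvr", "sh1"),
    38023: ("shardsvr", "sh1"),  # arbiter
    38024: ("shardsvr", "sh2"),
    38025: ("shardsvr", "sh2"),
    38026: ("shardsvr", "sh2"),  # arbiter
}

# 10 instances in total, occupying the contiguous port range 38017-38026.
assert len(instances) == 10
assert sorted(instances) == list(range(38017, 38027))

# Each replica set (config server and both shards) has exactly 3 members.
members = Counter(rs for _, rs in instances.values() if rs)
assert members == {"configReplSet": 3, "sh1": 3, "sh2": 3}
print("topology plan is consistent")
```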
Shard Node Configuration
Create directories:
mkdir -p /mongodb/38021/{conf,log,data}
mkdir -p /mongodb/38022/{conf,log,data}
mkdir -p /mongodb/38023/{conf,log,data}
mkdir -p /mongodb/38024/{conf,log,data}
mkdir -p /mongodb/38025/{conf,log,data}
mkdir -p /mongodb/38026/{conf,log,data}
Create the configuration files:
First replica set: 38021-38023 (1 primary, 1 secondary, 1 arbiter)
cat > /mongodb/38021/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38021/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38021/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.0.10,127.0.0.1
  port: 38021
replication:
  oplogSizeMB: 2048
  replSetName: sh1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
security:
  authorization: enabled
EOF
cp /mongodb/38021/conf/mongodb.conf /mongodb/38022/conf/
cp /mongodb/38021/conf/mongodb.conf /mongodb/38023/conf/
sed -i 's#38021#38022#g' /mongodb/38022/conf/mongodb.conf
sed -i 's#38021#38023#g' /mongodb/38023/conf/mongodb.conf
Second replica set: 38024-38026 (1 primary, 1 secondary, 1 arbiter)
cat > /mongodb/38024/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38024/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38024/data
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.0.10,127.0.0.1
  port: 38024
replication:
  oplogSizeMB: 2048
  replSetName: sh2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
security:
  authorization: enabled
EOF
cp /mongodb/38024/conf/mongodb.conf /mongodb/38025/conf/
cp /mongodb/38024/conf/mongodb.conf /mongodb/38026/conf/
sed -i 's#38024#38025#g' /mongodb/38025/conf/mongodb.conf
sed -i 's#38024#38026#g' /mongodb/38026/conf/mongodb.conf
Start all nodes and initialize the replica sets
mongod -f /mongodb/38021/conf/mongodb.conf
mongod -f /mongodb/38022/conf/mongodb.conf
mongod -f /mongodb/38023/conf/mongodb.conf
mongod -f /mongodb/38024/conf/mongodb.conf
mongod -f /mongodb/38025/conf/mongodb.conf
mongod -f /mongodb/38026/conf/mongodb.conf
ps -ef | grep mongod

mongo --port 38021
use admin
config = {_id: 'sh1', members: [
  {_id: 0, host: '192.168.0.10:38021'},
  {_id: 1, host: '192.168.0.10:38022'},
  {_id: 2, host: '192.168.0.10:38023', "arbiterOnly": true}]
}
rs.initiate(config)

mongo --port 38024
use admin
config = {_id: 'sh2', members: [
  {_id: 0, host: '192.168.0.10:38024'},
  {_id: 1, host: '192.168.0.10:38025'},
  {_id: 2, host: '192.168.0.10:38026', "arbiterOnly": true}]
}
rs.initiate(config)
Config Server Configuration
mkdir -p /mongodb/38018/{conf,log,data}
mkdir -p /mongodb/38019/{conf,log,data}
mkdir -p /mongodb/38020/{conf,log,data}
Create the configuration file:
cat > /mongodb/38018/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38018/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/38018/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.0.10,127.0.0.1
  port: 38018
replication:
  oplogSizeMB: 2048
  replSetName: configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
security:
  authorization: enabled
EOF
cp /mongodb/38018/conf/mongodb.conf /mongodb/38019/conf/
cp /mongodb/38018/conf/mongodb.conf /mongodb/38020/conf/
sed -i 's#38018#38019#g' /mongodb/38019/conf/mongodb.conf
sed -i 's#38018#38020#g' /mongodb/38020/conf/mongodb.conf
Start the nodes and initialize the replica set
mongod -f /mongodb/38018/conf/mongodb.conf
mongod -f /mongodb/38019/conf/mongodb.conf
mongod -f /mongodb/38020/conf/mongodb.conf

mongo --port 38018
use admin
config = {_id: 'configReplSet', members: [
  {_id: 0, host: '192.168.0.10:38018'},
  {_id: 1, host: '192.168.0.10:38019'},
  {_id: 2, host: '192.168.0.10:38020'}]
}
rs.initiate(config)

Note: a config server could historically be a single node, but the official recommendation has always been a replica set. Since MongoDB 3.4 the config server is required to be a replica set, and it cannot contain an arbiter.
mongos Node Configuration
Create the directory:
mkdir -p /mongodb/38017/{conf,log,data}
Configuration file:
cat > /mongodb/38017/conf/mongos.conf <<EOF
systemLog:
  destination: file
  path: /mongodb/38017/log/mongos.log
  logAppend: true
net:
  bindIp: 192.168.0.10,127.0.0.1
  port: 38017
sharding:
  configDB: configReplSet/192.168.0.10:38018,192.168.0.10:38019,192.168.0.10:38020
processManagement:
  fork: true
EOF
Start mongos:
mongos -f /mongodb/38017/conf/mongos.conf
Add Shards to the Cluster
Connect to one of the mongos instances (192.168.0.10) and do the following.
(1) Connect to the admin database via mongos
# su - mongod
mongo 192.168.0.10:38017/admin
(2) Add the shards
db.runCommand( { addshard : "sh1/192.168.0.10:38021,192.168.0.10:38022,192.168.0.10:38023", name: "shard1" } )
db.runCommand( { addshard : "sh2/192.168.0.10:38024,192.168.0.10:38025,192.168.0.10:38026", name: "shard2" } )
(3) List the shards
mongos> db.runCommand( { listshards : 1 } )
(4) Check overall status
mongos> sh.status();
Using the Sharded Cluster
Range Sharding Configuration and Testing
1. Enable sharding on the database:
mongo --port 38017 admin
admin> db.runCommand( { enablesharding : "<database name>" } )
eg:
admin> db.runCommand( { enablesharding : "test" } )
2. Shard the collection on a shard key:
### Create the index
use test
> db.vast.ensureIndex( { id: 1 } )
### Enable sharding on the collection
use admin
> db.runCommand( { shardcollection : "test.vast", key : { id: 1 } } )
3. Verify collection sharding:
admin> use test
test> for(i=1;i<1000000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }
test> db.vast.stats()
4. Check the distribution across shards:
shard1:
mongo --port 38021
db.vast.count();
shard2:
mongo --port 38024
db.vast.count();
5. Hash sharding example:
Hash-shard the large vast collection in the oldboy database using a hashed index.
(1) Enable sharding on the oldboy database
mongo --port 38017 admin
use admin
admin> db.runCommand( { enablesharding : "oldboy" } )
(2) Create a hashed index on oldboy.vast
use oldboy
oldboy> db.vast.ensureIndex( { id: "hashed" } )
(3) Shard the collection
use admin
admin> sh.shardCollection( "oldboy.vast", { id: "hashed" } )
(4) Insert 100,000 rows of test data
use oldboy
for(i=1;i<100000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }
(5) Check the hash sharding results
mongo --port 38021
use oldboy
db.vast.count();
mongo --port 38024
use oldboy
db.vast.count();
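Why prefer a hashed key for a monotonically increasing id? A simulation makes the difference visible. This sketch uses md5 as a stand-in hash (MongoDB's hashed index uses its own 64-bit hash function, not this one) and an assumed chunk boundary, purely to illustrate the write-distribution behavior:

```python
import hashlib
from collections import Counter

def range_shard(doc_id, split_point=50000):
    # Range sharding: a single assumed chunk boundary at split_point.
    return "shard1" if doc_id < split_point else "shard2"

def hash_shard(doc_id):
    # Stand-in hash (md5), NOT MongoDB's actual hash function.
    h = int(hashlib.md5(str(doc_id).encode()).hexdigest(), 16)
    return "shard1" if h % 2 == 0 else "shard2"

recent = range(90000, 100000)  # the 10,000 most recent inserts

# Range key on an increasing id: every new write lands on the last chunk,
# so one shard takes all the insert load (a "hot shard").
assert {range_shard(i) for i in recent} == {"shard2"}

# Hashed key: the same writes spread roughly evenly across both shards.
spread = Counter(hash_shard(i) for i in recent)
print(spread)
```

This is why step (5) above shows comparable `db.vast.count()` results on both shards under hash sharding.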
Querying and Managing the Sharded Cluster
Check whether this is a sharded cluster:
admin> db.runCommand({ isdbgrid : 1})
List all shards:
admin> db.runCommand({ listshards : 1})
List databases with sharding enabled:
admin> use config
config> db.databases.find( { "partitioned": true } )
or:
config> db.databases.find()  // list the sharding status of all databases
Show the shard keys:
config> db.collections.find().pretty()
{
    "_id" : "test.vast",
    "lastmodEpoch" : ObjectId("58a599f19c898bbfb818b63c"),
    "lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
    "dropped" : false,
    "key" : { "id" : 1 },
    "unique" : false
}
Show detailed sharding status:
admin> sh.status()
Remove a shard (use with caution):
(1) Confirm whether the balancer is running
sh.getBalancerState()
(2) Remove the shard2 node (be careful)
mongos> db.runCommand( { removeShard: "shard2" } )
Note: the remove operation immediately triggers the balancer to start draining chunks off the shard.
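Note that `removeShard` does not finish in one call: the first call starts draining, and you re-run the same command to poll progress until it reports `state: "completed"`. The toy model below (a pure simulation, not a MongoDB driver call; class and chunk numbers are invented) sketches that polling cycle:

```python
# Toy model of the removeShard drain cycle: the command returns
# "started" first, then "ongoing" while chunks drain, then "completed".
class FakeCluster:
    def __init__(self, chunks_on_shard):
        self.chunks = chunks_on_shard
        self.started = False

    def remove_shard(self, name):
        if not self.started:
            self.started = True
            return {"state": "started", "remaining": {"chunks": self.chunks}}
        # The balancer drains some chunks between polls.
        self.chunks = max(0, self.chunks - 4)
        if self.chunks > 0:
            return {"state": "ongoing", "remaining": {"chunks": self.chunks}}
        return {"state": "completed"}

cluster = FakeCluster(chunks_on_shard=10)
while True:
    result = cluster.remove_shard("shard2")
    print(result["state"], result.get("remaining", {}))
    if result["state"] == "completed":
        break
```

In a real cluster the loop body would be the same `db.runCommand( { removeShard: "shard2" } )` repeated until completion.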
Balancer Operations
An important function of mongos: it automatically inspects the chunk distribution across all shard nodes and performs chunk migrations automatically.
When does it run?
1. It runs automatically, performing migrations when it detects the system is not busy
2. When a shard is being removed, migration starts immediately
3. The balancer only runs within the predefined time window
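The core idea behind the balancer can be sketched as follows. This is a deliberate simplification (the threshold and donor/recipient selection here are assumptions, not MongoDB's exact migration rules): while the busiest and least busy shards differ by more than a threshold number of chunks, migrate one chunk at a time from donor to recipient.

```python
# Simplified balancer loop: equalize chunk counts across shards.
def balance(chunk_counts, threshold=1):
    """Migrate chunks one at a time until the max/min spread <= threshold."""
    moves = []
    while True:
        donor = max(chunk_counts, key=chunk_counts.get)      # busiest shard
        recipient = min(chunk_counts, key=chunk_counts.get)  # least busy shard
        if chunk_counts[donor] - chunk_counts[recipient] <= threshold:
            return moves
        chunk_counts[donor] -= 1
        chunk_counts[recipient] += 1
        moves.append((donor, recipient))

shards = {"shard1": 9, "shard2": 2}
moves = balance(shards)
print(shards)  # {'shard1': 6, 'shard2': 5} after 3 migrations
```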
The balancer can be stopped and restarted when needed (e.g. during backups):
mongos> sh.stopBalancer()
mongos> sh.startBalancer()
Customize the time window in which automatic balancing runs:
https://docs.mongodb.com/manual/tutorial/manage-sharded-cluster-balancer/#schedule-the-balancing-window
// connect to mongos
use config
sh.setBalancerState( true )
db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "3:00", stop : "5:00" } } }, true )
sh.getBalancerWindow()
sh.status()

Per-collection balancing (good to know):
Disable balancing for a collection:
sh.disableBalancing("students.grades")
Enable balancing for a collection:
sh.enableBalancing("students.grades")
Check whether balancing is enabled or disabled for a collection:
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;