Setting up sharding with MongoDB is itself quite straightforward. Below is a record of my recent sharded deployment:
Shards: shard1, shard2, shard3
Config servers: configsvr1, configsvr2, configsvr3 (there must be an odd number of them)
Routers: route (deployed on two machines)
Machines:
10.191.250.131 (shard1, shard2, shard3)
172.20.143.66 (shard1, shard2, shard3, configsvr1, configsvr2, configsvr3, route)
172.20.143.68 (shard1, shard2, shard3, route)
1. Shard configuration files:
mongod_shard1.conf
bind_ip = local_IP
port = 10001
dbpath = /data1/mongodb/shard1
logpath = /data1/mongolog/shard1/mongo.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_shard1.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
shardsvr = true
replSet = shard1
*************************
mongod_shard2.conf
bind_ip = local_IP
port = 10002
dbpath = /data1/mongodb/shard2
logpath = /data1/mongolog/shard2/mongo.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_shard2.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
shardsvr = true
replSet = shard2
**************************
mongod_shard3.conf
bind_ip = local_IP
port = 10003
dbpath = /data1/mongodb/shard3
logpath = /data1/mongolog/shard3/mongo.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_shard3.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
shardsvr = true
replSet = shard3
Place the three shard configuration files above on each of the three servers (the directory is up to you; here they live in /etc/mongo/).
2. Config server configuration files:
mongod_configsvr1.conf
bind_ip = local_IP
port = 20001
dbpath = /data1/mongodb/configsvr1
logpath = /data1/mongolog/mongo_config1.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_config1.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
configsvr = true
*************************
mongod_configsvr2.conf
bind_ip = local_IP
port = 20002
dbpath = /data1/mongodb/configsvr2
logpath = /data1/mongolog/mongo_config2.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_config2.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
configsvr = true
******************************
mongod_configsvr3.conf
bind_ip = local_IP
port = 20003
dbpath = /data1/mongodb/configsvr3
logpath = /data1/mongolog/mongo_config3.log #log file
logappend = true #append to the log instead of overwriting it
pidfilepath = /data1/mongotmp/mongod_config3.pid
nohttpinterface = true #disable the HTTP status interface
fork = true #run mongod as a daemon
oplogSize = 4096 #maximum oplog size, in megabytes
journal = true
#engine = wiredTiger
#cacheSizeGB = 38
smallfiles = true
configsvr = true
Place the three config server files above on one machine, 172.20.143.66 (the directory is up to you; here /etc/mongo/). Note that running all three config servers on a single host leaves them as a single point of failure; in production they would normally sit on separate machines.
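The dbpath, logpath, and pidfilepath directories referenced by the config files above must exist before mongod will start. A small helper sketch (the /data1 layout is the one used in this post; on the real hosts it would need write access to /data1):

```python
import os

def make_dirs(base):
    """Create the dbpath/logpath/pidfile directories the configs expect."""
    for d in ("shard1", "shard2", "shard3"):
        os.makedirs(os.path.join(base, "mongodb", d), exist_ok=True)
        os.makedirs(os.path.join(base, "mongolog", d), exist_ok=True)
    for i in (1, 2, 3):
        os.makedirs(os.path.join(base, "mongodb", "configsvr%d" % i), exist_ok=True)
    os.makedirs(os.path.join(base, "mongotmp"), exist_ok=True)

# On the real hosts: make_dirs("/data1")
```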
3. Router configuration file:
mongod_route.conf
bind_ip = local_IP
port = 30000
logpath = /data1/mongolog/mongo_route.log #log file
logappend = true #append to the log instead of overwriting it
maxConns = 100
#chunkSize = 16
pidfilepath = /data1/mongotmp/mongod_route.pid
#nohttpinterface = true #disable the HTTP status interface
fork = true #run mongos as a daemon
#oplogSize = 4096 #maximum oplog size, in megabytes
#engine = wiredTiger
#cacheSizeGB = 38
configdb = 172.20.143.66:20001,172.20.143.66:20002,172.20.143.66:20003
Place the router configuration file on both machines (172.20.143.68 and 172.20.143.66).
As the configuration files show, the config server processes know about every shard, while the router processes only need to know about the config server processes.
4. Start the config server, shard, and router processes:
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_configsvr1.conf
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_configsvr2.conf
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_configsvr3.conf
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_shard1.conf
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_shard2.conf
/usr/local/mongo/bin/mongod --config /etc/mongo/mongod_shard3.conf
/usr/local/mongo/bin/mongos --config /etc/mongo/mongod_route.conf
On each server, start the processes that belong on it.
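A hypothetical sanity check, not part of the original post: after starting everything, confirm that each expected port actually accepts TCP connections.

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports used in this deployment: shards, config servers, mongos.
PORTS = [10001, 10002, 10003, 20001, 20002, 20003, 30000]
```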
5. Configure the shards:
Log in to any machine that hosts a shard.
Configure shard1:
./mongo 172.20.143.68:10001
use admin
config = {_id: 'shard1', members: [
    {_id: 0, host: '172.20.143.68:10001'},
    {_id: 1, host: '172.20.143.66:10001'},
    {_id: 2, host: '10.191.250.131:10001'}
]}
rs.initiate(config)
Configure shard2:
./mongo 172.20.143.68:10002
use admin
config = {_id: 'shard2', members: [
    {_id: 0, host: '172.20.143.68:10002'},
    {_id: 1, host: '172.20.143.66:10002'},
    {_id: 2, host: '10.191.250.131:10002'}
]}
rs.initiate(config)
Configure shard3:
./mongo 172.20.143.68:10003
use admin
config = {_id: 'shard3', members: [
    {_id: 0, host: '172.20.143.68:10003'},
    {_id: 1, host: '172.20.143.66:10003'},
    {_id: 2, host: '10.191.250.131:10003'}
]}
rs.initiate(config)
On success, each rs.initiate() returns { "ok" : 1 }.
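The three rs.initiate() documents differ only in the set name and port. The pattern can be sketched in Python (the host list is the one from this deployment; the shell documents above remain the authoritative commands):

```python
# Hosts carrying each shard's replica-set members, as listed above.
HOSTS = ["172.20.143.68", "172.20.143.66", "10.191.250.131"]

def replset_config(name, port):
    """Build the config document passed to rs.initiate() for one shard."""
    return {
        "_id": name,
        "members": [
            {"_id": i, "host": "%s:%d" % (h, port)} for i, h in enumerate(HOSTS)
        ],
    }
```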
6. Configure the router:
Connect to mongos on either machine:
./mongo 172.20.143.68:30000
use admin
db.runCommand({addshard:"shard1/172.20.143.68:10001,172.20.143.66:10001,10.191.250.131:10001",name:"shard1"} )
db.runCommand({addshard:"shard2/172.20.143.68:10002,172.20.143.66:10002,10.191.250.131:10002",name:"shard2"} )
db.runCommand({addshard:"shard3/172.20.143.68:10003,172.20.143.66:10003,10.191.250.131:10003",name:"shard3"} )
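The addshard argument is just "setName/host:port,host:port,...". Assembling it programmatically avoids typos; a sketch using the same host list as above:

```python
# Hosts carrying each shard's replica-set members, as in this deployment.
HOSTS = ["172.20.143.68", "172.20.143.66", "10.191.250.131"]

def addshard_command(name, port):
    """Build the document passed to db.runCommand() to register one shard."""
    members = ",".join("%s:%d" % (h, port) for h in HOSTS)
    return {"addshard": "%s/%s" % (name, members), "name": name}
```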
List the shards:
mongos> use admin
switched to db admin
mongos> db.runCommand( { listshards : 1 } )
{
"shards" : [
{
"_id" : "shard1",
"host" : "shard1/10.191.250.131:10001,172.20.143.66:10001,172.20.143.68:10001"
},
{
"_id" : "shard2",
"host" : "shard2/10.191.250.131:10002,172.20.143.66:10002,172.20.143.68:10002"
},
{
"_id" : "shard3",
"host" : "shard3/10.191.250.131:10003,172.20.143.66:10003,172.20.143.68:10003"
}
],
"ok" : 1
}
With the configuration and deployment steps above, MongoDB sharding is up and running.
7. Enable sharding for a database and collection:
Connect to one of the router machines and enable sharding for the login collection in the logdb database:
./mongo 172.20.143.68:30000
use admin
db.runCommand( { enablesharding : "logdb" } );
db.runCommand({ shardcollection: "logdb.login", key: { wid:1 }})
Switch to the database:
use logdb
Insert some test data:
for(var i=1;i<=20000;i++) db.login.insert({wid:i,name:"xiaoxiao",tel:"123456"});
Check how the data is distributed:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("577e01a52f5cc8b5b0dd209c")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.191.250.131:10001,172.20.143.66:10001,172.20.143.68:10001" }
{ "_id" : "shard2", "host" : "shard2/10.191.250.131:10002,172.20.143.66:10002,172.20.143.68:10002" }
{ "_id" : "shard3", "host" : "shard3/10.191.250.131:10003,172.20.143.66:10003,172.20.143.68:10003" }
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Thu Jul 07 2016 16:43:12 GMT+0800 (CST) by LF-MYSQL-143-68.JD.LOCAL:30000:1467875749:1804289383:Balancer:1681692777
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
2 : Success
1 : Failed with error 'migration already in progress', from shard1 to shard2
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "logdb", "partitioned" : true, "primary" : "shard1" }
logdb.login
shard key: { "wid" : 1 }
chunks:
shard1 1
shard2 1
shard3 1
{ "wid" : { "$minKey" : 1 } } -->> { "wid" : 2 } on : shard3 Timestamp(3, 0)
{ "wid" : 2 } -->> { "wid" : 10 } on : shard1 Timestamp(3, 1)
{ "wid" : 10 } -->> { "wid" : { "$maxKey" : 1 } } on : shard2 Timestamp(2, 0)
{ "_id" : "test", "partitioned" : false, "primary" : "shard1" }
mongos>
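The chunk table in the sh.status() output is effectively a routing function: each half-open wid range maps to one shard. A minimal Python sketch of that lookup, with the ranges copied from the output above (lower bound inclusive, upper bound exclusive):

```python
# Chunk ranges from sh.status(): (lower, upper, owning shard).
MINKEY, MAXKEY = float("-inf"), float("inf")
CHUNKS = [
    (MINKEY, 2, "shard3"),
    (2, 10, "shard1"),
    (10, MAXKEY, "shard2"),
]

def route(wid):
    """Return the shard whose chunk covers this wid value."""
    for lo, hi, shard in CHUNKS:
        if lo <= wid < hi:
            return shard
    raise ValueError("no chunk covers wid=%r" % wid)
```

Note that mongos keeps moving and splitting chunks as the balancer runs, so this table is only a snapshot of the moment sh.status() was taken.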
At this point the whole sharded deployment is complete; hand out the route (mongos) process address and the cluster is ready to use :)
This was my first time using sharding; this post is a simple record of a MongoDB sharded deployment.
From: https://blog.csdn.net/qq315zh/article/details/51852225