Setting Up a MongoDB Replica Set on CentOS 7

Introduction

Before building the replica set, it helps to keep two concepts apart: sharding and replication. Sharding is often mentioned alongside replica sets (and MongoDB's hashed strategy), so what is sharding, and what is the principle behind it? Read on:

Sharding means splitting the data up and spreading it across different machines. For example, 99 documents spread over three nodes: no matter how they are spread, the total amount of data and the number of nodes stay the same. Documents 1-10 go to the first node, 11-20 to the second, 21-30 to the third, and so on; the chunk size of 10 here is only an example and could just as well be 5 or 20.

The principle of sharding, then, is to cut a large data set into small pieces and hand them out to different nodes. Note that sharding is performed by a sharded cluster, not by a replica set on its own, which is why the enablesharding commands attempted near the end of this article are rejected.

A MongoDB replica set consists of a primary, secondaries, and optionally an arbiter; every data-bearing member holds a full copy of the data. The primary handles write requests and replicates them to the secondaries, which users can read from. When the primary goes down, the voting members (including any arbiter) elect one of the secondaries to take over as primary. A replica set can have at most 50 members, of which at most 7 can vote.

Use Cases

  • Data redundancy for failure recovery: when a node goes down because of a hardware fault or some other cause, the data can be restored from a replica.
  • Read/write separation: read requests can be routed to the secondaries, relieving read pressure on the primary (a small connection sketch follows this list).
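
For example, once the replica set built below is running, read traffic can be steered to the secondaries. This is only a sketch under this article's assumptions (replica set name CrystalTest, the hosts listed in section 3, and the root account created in section 4); it is not part of the original walkthrough:

./mongo "mongodb://10.211.55.19:27017,10.211.55.20:27017,10.211.55.21:27017/testdb?replicaSet=CrystalTest" -u root -p root --authenticationDatabase admin
# then, inside the mongo shell:
db.getMongo().setReadPref("secondaryPreferred")    #prefer a secondary for subsequent reads
db.users.find({id: 1})                             #this query may now be served by a secondary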

A typical replica set architecture is shown below:

Diagram of the default routing of reads and writes to the primary database.

2. Two Architecture Modes

1. PSS

Primary + Secondary + Secondary mode: a replica set built from a primary and data-bearing secondaries.

Diagram of a 3 member replica set that consists of a primary and two secondaries.

In this mode the replica set should have an odd number of members, so that a majority can be reached when voting to elect a primary.

2. PSA

Primary + Secondary + Arbiter mode: a replica set built with an arbiter.

Diagram of a replica set with a primary, a secondary, and an arbiter.

A replica set formed by an even number of data-bearing nodes plus one arbiter.
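
As an illustration only (this article's own setup uses three data-bearing nodes and no arbiter), an arbiter could be added from the mongo shell on the primary; the host 10.211.55.22 below is hypothetical:

rs.addArb("10.211.55.22:27017")      #the arbiter votes in elections but stores no data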

3. Building the Environment

Environment:

Hostname    IP address      Role
server01    10.211.55.19    PRIMARY
server02    10.211.55.20    SECONDARY
server03    10.211.55.21    SECONDARY

First install MongoDB; you can refer to the installation method above. I have already installed it on all three machines, so let's start the instances:

server01

[root@Server01 bin]# ./mongod --config /usr/local/mongodb/bin/mongodb.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 12916
child process started successfully, parent exiting

server02

[root@Server02 bin]# ./mongod --config /usr/local/mongodb/bin/mongodb.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 3743
child process started successfully, parent exiting

server03

[root@Server03 bin]# ./mongod --config /usr/local/mongodb/bin/mongodb.conf  
about to fork child process, waiting until server is ready for connections.
forked process: 3869
child process started successfully, parent exiting

All three instances have started successfully.
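
A quick way to double-check that each mongod is really up is to look for the process and the listening port (a simple sketch; 27017 is the port set in the configuration file shown below):

ps -ef | grep [m]ongod        # the forked mongod process should be listed
ss -lntp | grep 27017         # and it should be listening on port 27017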

4. Creating an Administrator User and Permissions

Connect to MongoDB and create the user:

[root@Server01 bin]# ./mongo --host 127.0.0.1:27017
......
> use admin;         #switch to the admin database
switched to db admin
> db.createRole({role:'admin',roles:[],privileges:[{resource:{anyResource:true},actions:['anyAction']}]});      #create a superuser role and grant it the corresponding privileges
{
        "role" : "admin",
        "roles" : [ ],
        "privileges" : [
                {
                        "resource" : {
                                "anyResource" : true
                        },
                        "actions" : [
                                "anyAction"
                        ]
                }
        ]
}
> db.createUser({user:'root',pwd:'root',roles:[{role:'admin',db:'admin'}]});        #create a superuser account and grant it the superuser role defined above; choose your own pwd
Successfully added user: {
        "user" : "root",
        "roles" : [
                {
                        "role" : "admin",
                        "db" : "admin"
                }
        ]
}

Exit and log back in with the newly created administrator account:

[root@Server01 bin]# ./mongo -u "root" -p"root" --host 127.0.0.1:27017   --authenticationDatabase "admin"
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("66435d1b-f37b-45bc-bd5f-f584f4a21ef0") }
MongoDB server version: 4.0.6
Server has startup warnings: 
2019-07-14T11:22:17.734+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-14T11:22:17.734+0800 I CONTROL  [initandlisten] 
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] 
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] 
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-14T11:22:17.735+0800 I CONTROL  [initandlisten] 
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
>

Note: the superuser must be created on all three servers, and you should verify that you can log in with the account and password on each (a non-interactive variant is sketched below).
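
On server02 and server03 the same role and user can be created without retyping everything in an interactive shell; a sketch using the exact values from above:

./mongo 127.0.0.1:27017/admin --eval 'db.createRole({role:"admin",roles:[],privileges:[{resource:{anyResource:true},actions:["anyAction"]}]}); db.createUser({user:"root",pwd:"root",roles:[{role:"admin",db:"admin"}]});'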

5. Creating the KeyFile for Internal Authentication Between Cluster Members

First, generate the keyfile on Server01:

[root@Server01 bin]# pwd
/usr/local/mongodb/bin
[root@Server01 bin]# openssl rand -base64 745 >> /usr/local/mongodb/bin/data/mongodb-keyfile
[root@Server01 bin]# cat data/mongodb-keyfile 
oz3Yhc9Hf+EghDdtJoxhZnj5BQHanSitO4dJeZc0oVbZv8aZbs8of/KyeZ64nRDU
GWYJLA3NDLiixpzctG2FHyFlPT3/UZ5pHlbnNj0u4/4z2eQZTlWlz31jvqfbeXYp
bgrUw4S+tO0lVBQwObNuy2i4fxYjCNqg+DvJBfQH7mnNhb1+xQl6jgmmC5kL9mB3
GPXq22sT7WYFQnbWV6y+5H81//OyEeFlijU6a/Lvm8QA9x7CTzhEtTs/6ddu3RVm
/qE2ogyHW34CzO/6hE2+WmRH0PX0kGOVLcyXCxcXcN7LLRL+7IRutwhigb66vE5x
VViw6hFjAdjgRuz3uu9jDQqlIZjKVo0W2prxx1PCcqHn4F7GEKf4K3nZ6C7i6PPM
gSSViwOm/nq4Ia828szhYXbMdm7gGr37pG9oe9chqP4Q8YUW/rOnyq8T+a/W11y+
rUzE1C7MaHlIrqvI3Au6NGXsodrlBeKO2ELE6kfxAxZt0rz8FPiAj++ec+5l6VOD
OoSVn9dAhqXLl5BY9KTg5KjndS8/HSf+Nqukp+ZzLfX9g6ehqMOhXOxqVva8wy5P
xKrB4dRSXg3Zu1ZwX1jID5nU1I+eAm/81IrWZe1KvWLFbgir4n1u3p5Swp0z1Xqa
P1YBO97/AjNPgyc13je7Za7ZAzrA5ysnw7xG8nRzvXaqUAaV0nI5PqNaRWJVR9ZR
fz8YkWayvy/uvYx/sNG6pgV8drzS0i1au+f/uRv0CkCoti/tAa7WcqJNNag8AJ/x
d1dRZWlNuCp2Vk6ynJoS0huIbSgeLWZmcv+Gj4Xj6ltX+y/pMz9PDzVdNVR/ZneI
lVSvTAuxn4ZYti5wjfJi8OZ40t7DQDoIthc+koRNoOAnuf+Lew3myyhIIUWakbNl
V4KT5DQQIV7+LplXcEWWdjvYIBiHvOjgaYklv4mWB3jt/sRsuBpQFUD15MNAtFqW
jGqXSwAbSe5apIIgWb5614E6F+hPROugYQ==

Copy server01's mongodb-keyfile to server02 and server03. Here I use the same file name and path as on server01:

[root@Server02 bin]# ll data/mongodb-keyfile 
-rw-r--r--. 1 root root 1012 7月  17 11:32 data/mongodb-keyfile
[root@Server03 bin]# ll data/mongodb-keyfile 
-rw-r--r--. 1 root root 1012 Jul 17 11:32 data/mongodb-keyfile

Set the permissions of mongodb-keyfile to 400 on every node; mongod refuses to start if the keyfile is readable by group or others. For example:
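
A sketch of the copy and the permission change, using the paths and hosts from this article:

scp /usr/local/mongodb/bin/data/mongodb-keyfile root@10.211.55.20:/usr/local/mongodb/bin/data/
scp /usr/local/mongodb/bin/data/mongodb-keyfile root@10.211.55.21:/usr/local/mongodb/bin/data/
# on every node, make the keyfile readable only by its owner:
chmod 400 /usr/local/mongodb/bin/data/mongodb-keyfile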

6. Modifying the Configuration File on All Three Hosts

Edit the configuration file to enable replication:

[root@Server01 bin]# vim mongodb.conf 
[root@Server01 bin]# ./mongod --config /usr/local/mongodb/bin/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15143
child process started successfully, parent exiting
[root@Server01 bin]# cat mongodb.conf 
net:
    port: 27017
    bindIp: 0.0.0.0
systemLog:
    destination: file
    path: "/usr/local/mongodb/bin/data/test/logs/mongodb.log"
    logAppend: true
storage:
    journal: 
        enabled: true
    dbPath: /usr/local/mongodb/bin/data/test/db
setParameter:
    enableLocalhostAuthBypass: true
processManagement:
    fork: true
    pidFilePath: "/usr/local/mongodb/bin/data/mongod.pid"
#add the following lines:
replication:                          #enable replication
    replSetName: CrystalTest        #choose your own replSetName
security:
    authorization: enabled
    keyFile: "/usr/local/mongodb/bin/data/mongodb-keyfile"     #the keyfile generated in section 5

Apply the same changes on server02 and server03 as well, then restart mongod on each host. Note that mongod's --repair option repairs data files; it does not validate the configuration, so if the process fails to start after the edit, check the log file configured under systemLog for the actual error.
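
A clean restart can be done with the pid file configured above (a sketch; mongod shuts down gracefully on SIGTERM):

kill $(cat /usr/local/mongodb/bin/data/mongod.pid)       # stop the running instance
./mongod --config /usr/local/mongodb/bin/mongodb.conf    # start it again with the new settings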

7. Initializing the Replica Set

Once the configuration has been updated and the service restarted on all machines, the replica set can be initialized in one go, or you can initialize the PRIMARY first and then add the SECONDARY hosts one at a time.

Method 1: initialize everything at once

config = { _id:"CrystalTest", members:[{_id:0,host:"10.211.55.19:27017"},{_id:1,host:"10.211.55.20:27017"},{_id:2,host:"10.211.55.21:27017"}] };
#CrystalTest here must match the replSetName in the configuration file
rs.initiate(config);

Execution:

> config = { _id:"CrystalTest", members:[{_id:0,host:"10.211.55.19:27017"},{_id:1,host:"10.211.55.20:27017"},{_id:2,host:"10.211.55.21:27017"}] };
{
        "_id" : "CrystalTest",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.211.55.19:27017"
                },
                {
                        "_id" : 1,
                        "host" : "10.211.55.20:27017"
                },
                {
                        "_id" : 2,
                        "host" : "10.211.55.21:27017"
                }
        ]
}
> rs.initiate(config);
{
        "ok" : 1,
        "operationTime" : Timestamp(1563083691, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563083691, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:SECONDARY> 
CrystalTest:PRIMARY>            #a short while after the steps above, the state changes from OTHER to PRIMARY
CrystalTest:PRIMARY> 
CrystalTest:PRIMARY>

Method 2: initialize the PRIMARY first, then add the SECONDARY members.

On the PRIMARY, perform the following steps (because the set in this article was already initialized with Method 1, the commands below return "already initialized" and duplicate-host errors; on a fresh set they would succeed):

> use admin
> config = { _id:"CrystalTest", members:[{_id:0,host:"10.211.55.19:27017"}]};
> rs.initiate(config);

In detail:

> use admin
> config = { _id:"CrystalTest", members:[{_id:0,host:"10.211.55.19:27017"}]};
{
        "_id" : "CrystalTest",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.211.55.19:27017"
                }
        ]
}
> rs.initiate(config);
{
        "operationTime" : Timestamp(1563084412, 1),
        "ok" : 0,
        "errmsg" : "already initialized",
        "code" : 23,
        "codeName" : "AlreadyInitialized",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563084412, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Then add the other two hosts (again, the duplicate-host errors appear because these members are already part of the set):

> rs.add("10.211.55.20:27017")
{
        "operationTime" : Timestamp(1563084452, 1),
        "ok" : 0,
        "errmsg" : "Found two member configurations with same host field, members.1.host == members.3.host == 10.211.55.20:27017",
        "code" : 103,
        "codeName" : "NewReplicaSetConfigurationIncompatible",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563084452, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
> rs.add("10.211.55.21:27017")
{
        "operationTime" : Timestamp(1563084452, 1),
        "ok" : 0,
        "errmsg" : "Found two member configurations with same host field, members.2.host == members.3.host == 10.211.55.21:27017",
        "code" : 103,
        "codeName" : "NewReplicaSetConfigurationIncompatible",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563084452, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:SECONDARY> 
CrystalTest:PRIMARY>            #a short while after the steps above, the state changes from OTHER to PRIMARY
CrystalTest:PRIMARY> 
CrystalTest:PRIMARY> 

CrystalTest:PRIMARY> rs.status()   #check the replica set status
{
        "set" : "CrystalTest",
        "date" : ISODate("2019-07-14T06:08:19.115Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1563084492, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1563084492, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1563084492, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1563084492, 1),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1563084482, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.211.55.19:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2924,
                        "optime" : {
                                "ts" : Timestamp(1563084492, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-07-14T06:08:12Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1563083701, 1),
                        "electionDate" : ISODate("2019-07-14T05:55:01Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "10.211.55.20:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 807,
                        "optime" : {
                                "ts" : Timestamp(1563084492, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1563084492, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-07-14T06:08:12Z"),
                        "optimeDurableDate" : ISODate("2019-07-14T06:08:12Z"),
                        "lastHeartbeat" : ISODate("2019-07-14T06:08:17.805Z"),
                        "lastHeartbeatRecv" : ISODate("2019-07-14T06:08:17.942Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.211.55.19:27017",
                        "syncSourceHost" : "10.211.55.19:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "10.211.55.21:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 807,
                        "optime" : {
                                "ts" : Timestamp(1563084492, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1563084492, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-07-14T06:08:12Z"),
                        "optimeDurableDate" : ISODate("2019-07-14T06:08:12Z"),
                        "lastHeartbeat" : ISODate("2019-07-14T06:08:17.805Z"),
                        "lastHeartbeatRecv" : ISODate("2019-07-14T06:08:17.941Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.211.55.19:27017",
                        "syncSourceHost" : "10.211.55.19:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1563084492, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563084492, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
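
Besides rs.status(), the shell offers a compact view of how far each secondary lags behind the primary (a quick check, run on the primary; shown here as a sketch without its output):

CrystalTest:PRIMARY> rs.printSlaveReplicationInfo()    #prints each secondary and how many seconds it is behind the primary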

Log in to the other two nodes; their MongoDB role has changed to SECONDARY:

[root@Server02 bin]#  ./mongo -u "root" -p"root" --host 127.0.0.1:27017   --authenticationDatabase "admin"
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3948eb3c-adea-40ee-9661-76714074875d") }
MongoDB server version: 4.0.6
Server has startup warnings: 
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:20.371+0800 I CONTROL  [initandlisten] 
CrystalTest:SECONDARY> 
CrystalTest:SECONDARY> 
[root@Server03 bin]#  ./mongo -u "root" -p"root" --host 127.0.0.1:27017   --authenticationDatabase "admin"
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4fd78510-f0b8-43c6-b911-98c033acd2c5") }
MongoDB server version: 4.0.6
Server has startup warnings: 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
CrystalTest:SECONDARY> 
CrystalTest:SECONDARY>

At this point, the replica set is fully set up.

8. Testing the Replica Set

Insert ten thousand documents on the primary:

CrystalTest:PRIMARY>  for(var i=1;i<=10000;i++) db.users.insert({id:i,addr_1:"Beijing",addr_2:"Shanghai"});
WriteResult({ "nInserted" : 1 })
CrystalTest:PRIMARY> show dbs
admin   0.004GB
config  0.000GB
local   0.008GB
testdb  0.000GB
CrystalTest:PRIMARY> use test
switched to db test
CrystalTest:PRIMARY> show collections
CrystalTest:PRIMARY> db.users.find()
CrystalTest:PRIMARY> for(var i=1;i<=10000;i++) db.testdb.insert({id:i,addr_1:"Beijing",addr_2:"Shanghai"});
WriteResult({ "nInserted" : 1 })
CrystalTest:PRIMARY> use testdb
switched to db testdb
CrystalTest:PRIMARY> show collections
users
CrystalTest:PRIMARY> db.users.find()
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e28"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e29"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2a"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2b"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2c"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2d"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2e"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2f"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e30"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e31"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e32"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e33"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e34"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e35"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e36"), "id" : 15, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e37"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e38"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e39"), "id" : 18, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3a"), "id" : 19, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3b"), "id" : 20, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
Type "it" for more

Check whether the other two instances have synchronized the data:

[root@Server02 bin]#  ./mongo -u "root" -p"root" --host 127.0.0.1:27017   --authenticationDatabase "admin"
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c3a26dee-a490-4731-9ba8-57d7e1100f00") }
MongoDB server version: 4.0.6
Server has startup warnings: 
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] 
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] 
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] 
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T14:35:22.969+0800 I CONTROL  [initandlisten] 
CrystalTest:SECONDARY> db.getMongo().setSlaveOk();
CrystalTest:SECONDARY> use testdb
switched to db testdb
CrystalTest:SECONDARY> db.users.find()
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e28"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e29"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2b"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2c"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2e"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2a"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2d"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2f"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e30"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e34"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e35"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e36"), "id" : 15, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e33"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e32"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e31"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e37"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e38"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3f"), "id" : 24, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3c"), "id" : 21, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3e"), "id" : 23, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
Type "it" for more

[root@Server03 bin]#  ./mongo -u "root" -p"root" --host 127.0.0.1:27017   --authenticationDatabase "admin"
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4f44a532-ff5c-4158-86d3-700259f78dfc") }
MongoDB server version: 4.0.6
Server has startup warnings: 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-07-17T13:13:31.530+0800 I CONTROL  [initandlisten] 
CrystalTest:SECONDARY> db.getMongo().setSlaveOk();
CrystalTest:SECONDARY> use testdb
switched to db testdb
CrystalTest:SECONDARY> db.users.find()
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e28"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e29"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2b"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2c"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2e"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2a"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e30"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2d"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e2f"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e34"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e35"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e36"), "id" : 15, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e33"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e32"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e31"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e37"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e38"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3f"), "id" : 24, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3c"), "id" : 21, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5d2ad19f2de537f2c81d3e3e"), "id" : 23, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
Type "it" for more
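
Instead of eyeballing the find() output, the document count can be checked directly on each secondary (after setSlaveOk(), as above):

CrystalTest:SECONDARY> use testdb
CrystalTest:SECONDARY> db.users.count()    #should return 10000 once replication has caught up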

Next, insert 100,000 documents to exercise the replica set further. Note that the enablesharding and shardcollection commands in the transcripts below fail with CommandNotFound: they are sharded-cluster commands that must be sent to a mongos router, and this deployment is a plain replica set, so only the inserts actually take effect.

Insert the data on server01:

CrystalTest:PRIMARY> db.runCommand( { enablesharding :"testdb"});
{
        "operationTime" : Timestamp(1563088722, 1),
        "ok" : 0,
        "errmsg" : "no such command: 'enablesharding'",
        "code" : 59,
        "codeName" : "CommandNotFound",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563088722, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:PRIMARY> db.runCommand({ shardcollection: "testdb.tavle1", key: {id: "hashed"}})
{
        "operationTime" : Timestamp(1563088932, 1),
        "ok" : 0,
        "errmsg" : "no such command: 'shardcollection'",
        "code" : 59,
        "codeName" : "CommandNotFound",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563088932, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:PRIMARY> for(var i = 1;i<=100000;i++){
...     db.table1.insert({id:i,name: "caofei"})
... }
WriteResult({ "nInserted" : 1 })
CrystalTest:PRIMARY> db.tavke1.stats()
{
        "ns" : "testdb.tavke1",
        "ok" : 0,
        "errmsg" : "Collection [testdb.tavke1] not found.",
        "operationTime" : Timestamp(1563089172, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563089172, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:PRIMARY> db.table1.stats()
{
        "ns" : "testdb.table1",
        "size" : 5100000,
        "count" : 100000,         #
        "avgObjSize" : 51,
        "storageSize" : 1650688,
        "capped" : false,
        "wiredTiger" : {
                "metadata" : {
                        "formatVersion" : 1
                },
                "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
                "type" : "file",
                "uri" : "statistics:table:collection-17--6346517315378342346",
                "LSM" : {
                        "bloom filter false positives" : 0,
                        "bloom filter hits" : 0,
                        "bloom filter misses" : 0,
                        "bloom filter pages evicted from cache" : 0,
                        "bloom filter pages read into cache" : 0,
                        "bloom filters in the LSM tree" : 0,
                        "chunks in the LSM tree" : 0,
                        "highest merge generation in the LSM tree" : 0,
                        "queries that could have benefited from a Bloom filter that did not exist" : 0,
                        "sleep for LSM checkpoint throttle" : 0,
                        "sleep for LSM merge throttle" : 0,
                        "total size of bloom filters" : 0
                },
                "block-manager" : {
                        "allocations requiring file extension" : 205,
                        "blocks allocated" : 207,
                        "blocks freed" : 2,
                        "checkpoint size" : 1613824,
                        "file allocation unit size" : 4096,
                        "file bytes available for reuse" : 20480,
                        "file magic number" : 120897,
                        "file major version number" : 1,
                        "file size in bytes" : 1650688,
                        "minor version number" : 0
                },
                "btree" : {
                        "btree checkpoint generation" : 130,
                        "column-store fixed-size leaf pages" : 0,
                        "column-store internal pages" : 0,
                        "column-store variable-size RLE encoded values" : 0,
                        "column-store variable-size deleted values" : 0,
                        "column-store variable-size leaf pages" : 0,
                        "fixed-record size" : 0,
                        "maximum internal page key size" : 368,
                        "maximum internal page size" : 4096,
                        "maximum leaf page key size" : 2867,
                        "maximum leaf page size" : 32768,
                        "maximum leaf page value size" : 67108864,
                        "maximum tree depth" : 3,
                        "number of key/value pairs" : 0,
                        "overflow pages" : 0,
                        "pages rewritten by compaction" : 0,
                        "row-store internal pages" : 0,
                        "row-store leaf pages" : 0
                },
                "cache" : {
                        "bytes currently in the cache" : 4792516,
                        "bytes dirty in the cache cumulative" : 179650,
                        "bytes read into cache" : 0,
                        "bytes written from cache" : 5671371,
                        "checkpoint blocked page eviction" : 0,
                        "data source pages selected for eviction unable to be evicted" : 0,
                        "eviction walk passes of a file" : 206,
                        "eviction walk target pages histogram - 0-9" : 206,
                        "eviction walk target pages histogram - 10-31" : 0,
                        "eviction walk target pages histogram - 128 and higher" : 0,
                        "eviction walk target pages histogram - 32-63" : 0,
                        "eviction walk target pages histogram - 64-128" : 0,
                        "eviction walks abandoned" : 0,
                        "eviction walks gave up because they restarted their walk twice" : 103,
                        "eviction walks gave up because they saw too many pages and found no candidates" : 96,
                        "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
                        "eviction walks reached end of tree" : 295,
                        "eviction walks started from root of tree" : 205,
                        "eviction walks started from saved location in tree" : 1,
                        "hazard pointer blocked page eviction" : 0,
                        "in-memory page passed criteria to be split" : 2,
                        "in-memory page splits" : 1,
                        "internal pages evicted" : 0,
                        "internal pages split during eviction" : 0,
                        "leaf pages split during eviction" : 5,
                        "modified pages evicted" : 6,
                        "overflow pages read into cache" : 0,
                        "page split during eviction deepened the tree" : 0,
                        "page written requiring cache overflow records" : 0,
                        "pages read into cache" : 0,
                        "pages read into cache after truncate" : 1,
                        "pages read into cache after truncate in prepare state" : 0,
                        "pages read into cache requiring cache overflow entries" : 0,
                        "pages requested from the cache" : 100004,
                        "pages seen by eviction walk" : 12999,
                        "pages written from cache" : 202,
                        "pages written requiring in-memory restoration" : 4,
                        "tracked dirty bytes in the cache" : 16382,
                        "unmodified pages evicted" : 0
                },
                "cache_walk" : {
                        "Average difference between current eviction generation when the page was last considered" : 0,
                        "Average on-disk page image size seen" : 0,
                        "Average time in cache for pages that have been visited by the eviction server" : 0,
                        "Average time in cache for pages that have not been visited by the eviction server" : 0,
                        "Clean pages currently in cache" : 0,
                        "Current eviction generation" : 0,
                        "Dirty pages currently in cache" : 0,
                        "Entries in the root page" : 0,
                        "Internal pages currently in cache" : 0,
                        "Leaf pages currently in cache" : 0,
                        "Maximum difference between current eviction generation when the page was last considered" : 0,
                        "Maximum page size seen" : 0,
                        "Minimum on-disk page image size seen" : 0,
                        "Number of pages never visited by eviction server" : 0,
                        "On-disk page image sizes smaller than a single allocation unit" : 0,
                        "Pages created in memory and never written" : 0,
                        "Pages currently queued for eviction" : 0,
                        "Pages that could not be queued for eviction" : 0,
                        "Refs skipped during cache traversal" : 0,
                        "Size of the root page" : 0,
                        "Total number of pages currently in cache" : 0
                },
                "compression" : {
                        "compressed pages read" : 0,
                        "compressed pages written" : 199,
                        "page written failed to compress" : 0,
                        "page written was too small to compress" : 3
                },
                "cursor" : {
                        "bulk-loaded cursor-insert calls" : 0,
                        "close calls that result in cache" : 0,
                        "create calls" : 5,
                        "cursor operation restarted" : 0,
                        "cursor-insert key and value bytes inserted" : 5417635,
                        "cursor-remove key bytes removed" : 0,
                        "cursor-update value bytes updated" : 0,
                        "cursors reused from cache" : 99995,
                        "insert calls" : 100000,
                        "modify calls" : 0,
                        "next calls" : 0,
                        "open cursor count" : 0,
                        "prev calls" : 1,
                        "remove calls" : 0,
                        "reserve calls" : 0,
                        "reset calls" : 200001,
                        "search calls" : 0,
                        "search near calls" : 0,
                        "truncate calls" : 0,
                        "update calls" : 0
                },
                "reconciliation" : {
                        "dictionary matches" : 0,
                        "fast-path pages deleted" : 0,
                        "internal page key bytes discarded using suffix compression" : 584,
                        "internal page multi-block writes" : 0,
                        "internal-page overflow keys" : 0,
                        "leaf page key bytes discarded using prefix compression" : 0,
                        "leaf page multi-block writes" : 7,
                        "leaf-page overflow keys" : 0,
                        "maximum blocks required for a page" : 1,
                        "overflow values written" : 0,
                        "page checksum matches" : 92,
                        "page reconciliation calls" : 11,
                        "page reconciliation calls for eviction" : 5,
                        "pages deleted" : 0
                },
                "session" : {
                        "object compaction" : 0
                },
                "transaction" : {
                        "update conflicts" : 0
                }
        },
        "nindexes" : 1,
        "totalIndexSize" : 958464,
        "indexSizes" : {
                "_id_" : 958464
        },
        "ok" : 1,
        "operationTime" : Timestamp(1563089182, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563089182, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Check the data on the other secondary nodes as well; in my test all three nodes hold the same data. Below is another round of the same write test, again focused on writing data (and again showing the sharding commands being rejected):

use testdb
db.table1.stats()

CrystalTest:PRIMARY> db.runCommand( { enablesharding :"testdb"});
{
        "operationTime" : Timestamp(1563088784, 4140),
        "ok" : 0,
        "errmsg" : "no such command: 'enablesharding'",
        "code" : 59,
        "codeName" : "CommandNotFound",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563088784, 4140),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:PRIMARY> db.runCommand({ shardcollection: "testdb.tavle1", key: {id: "hashed"}})
{
        "operationTime" : Timestamp(1563088932, 1),
        "ok" : 0,
        "errmsg" : "no such command: 'shardcollection'",
        "code" : 59,
        "codeName" : "CommandNotFound",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563088932, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
CrystalTest:PRIMARY> for(var i = 1;i<=100000;i++){
...     db.table1.insert({id:i,name: "caofei"})
... }
WriteResult({ "nInserted" : 1 })

CrystalTest:PRIMARY> use testdb
switched to db testdb
CrystalTest:PRIMARY> db.table1.find()
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c48"), "id" : 1, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c49"), "id" : 2, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4a"), "id" : 3, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4b"), "id" : 4, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4c"), "id" : 5, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4d"), "id" : 6, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4e"), "id" : 7, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c4f"), "id" : 8, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c50"), "id" : 9, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c51"), "id" : 10, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c52"), "id" : 11, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c53"), "id" : 12, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c54"), "id" : 13, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c55"), "id" : 14, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c56"), "id" : 15, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c57"), "id" : 16, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c58"), "id" : 17, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c59"), "id" : 18, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c5a"), "id" : 19, "name" : "caofei" }
{ "_id" : ObjectId("5d2ad8722de537f2c81d8c5b"), "id" : 20, "name" : "caofei" }
Type "it" for more
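
Since the introduction mentioned automatic failover, one more experiment you could run (a sketch, not part of the original test) is to make the primary step down and watch a new primary get elected:

CrystalTest:PRIMARY> rs.stepDown(60)    #the primary steps down and will not stand for election again for 60 seconds
# the shell's connection may be dropped during the step-down; reconnect and run rs.status()
# on any member to see which secondary has been elected the new primary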

Extras:

A common error:

CrystalTest:SECONDARY> show dbs
2019-07-17T14:35:30.075+0800 E QUERY    [js] Error: listDatabases failed:{
        "operationTime" : Timestamp(1563087472, 1),
        "ok" : 0,
        "errmsg" : "not master and slaveOk=false",
        "code" : 13435,
        "codeName" : "NotMasterNoSlaveOk",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1563087472, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:139:1
shellHelper.show@src/mongo/shell/utils.js:882:13
shellHelper@src/mongo/shell/utils.js:766:15
@(shellhelp2):1:1
# By default a SECONDARY does not serve reads (and never accepts writes); to allow reads on this connection, do the following:
CrystalTest:SECONDARY> db.getMongo().setSlaveOk();
CrystalTest:SECONDARY> show dbs
admin   0.004GB
config  0.000GB
local   0.009GB
test    0.000GB
testdb  0.000GB
