# MongoDB replica set over ssh tunnels

Tags: mongodb, replication, ssh

I'm trying to achieve something like this:

The MongoDB instance on dc1.com is currently a standalone, and On-Site periodically pulls a backup via mongodump over an SSH tunnel.

I now want to create two more instances, one On-Site and one in a second datacenter, and turn the three instances into a replica set, so that the backup is almost instant. This way I would have an On-Site backup and one in the second datacenter, both always up to date.

If the main MongoDB instance on dc1.com fails, then that's it: no data can be served, and it will have to be fixed manually. The On-Site instance and the one in the second datacenter should not take over, nor are they meant to be queried. They are there purely as backups.

My problem is that when I add `--replSet` to the main database and then run `rs.initiate()`, the instance is somehow not recognized as bound to localhost but to 172.17.0.1, the Docker interface, which I need to bind to so that the containers can connect to MongoDB. If I then try to add a member via `rs.add("localhost:2001")` (the instance on dc2.com), I get an error about not being able to mix localhost and non-localhost addresses, since the main instance is recognized as 172.17.0.1 instead of localhost:

> Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2

I then followed https://stackoverflow.com/questions/28843496/cant-initiate-replica-set-in-ubuntu and issued `rs.initiate({_id: "rs0", members: [{_id: 1, host: "127.0.0.1:1000"}]})` to force the replica set on dc1.com to be initialized as bound to localhost. But when I then try to add a member via `rs.add("localhost:2001")`, it still fails. It's no longer the hostname error but a different one, which I can't quite remember, as I got tired of trying and had to roll everything back; it was something like "connection rejected". When I run `mongo --port 2001` I get disconnected after a warning about something related to "isMaster".

Is it possible to create this kind of setup? All I'm trying to do is avoid using TLS on MongoDB and avoid binding the backup instances to globally accessible interfaces (the On-Site instance would be behind a firewall, so 0.0.0.0:1002 would not be publicly accessible anyway).

The following setup is possible without using TLS.

Datacenter 1 is the primary datacenter; all clients connect only to it. Datacenter 2 and On-Site exist only for real-time backups via replication.

This setup is NOT meant for failover, only for disaster recovery.

Datacenter 1:

MongoDB

```
mongod --bind_ip 127.0.0.1,172.17.0.1 --replSet test --port 2000 --rest --httpinterface --logpath data/replica/logs/log.txt --dbpath data/replica/wiredTiger --directoryperdb --storageEngine wiredTiger --wiredTigerDirectoryForIndexes --fork
```

127.0.0.1 is for SSH access, 172.17.0.1 is for access from Docker containers.

SSH Tunnels (this server is responsible for creating the tunnels to Datacenter 2):

```
autossh -f -N -L 2001:localhost:2001 mongodb@dc-2.example.com
autossh -f -N -R 2000:localhost:2000 mongodb@dc-2.example.com
```
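Before touching the replica set it's worth confirming that the tunnels are actually up. A minimal check from the dc-1 machine (a sketch; assumes `nc` from netcat is installed):

```shell
# On dc-1: port 2000 is the local mongod, port 2001 is the SSH-forwarded
# port that should reach the mongod running on dc-2.
nc -z localhost 2000 && echo "local mongod reachable"
nc -z localhost 2001 && echo "tunnel to dc-2 reachable"
```

If the second check fails, the autossh forward tunnel is down and any `rs.add` or sync attempt against localhost:2001 will be rejected.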

Datacenter 2:

MongoDB

```
mongod --bind_ip 127.0.0.1 --port 2001 --replSet test --rest --httpinterface --logpath data/replica/logs/log.txt --dbpath data/replica/wiredTiger --directoryperdb --storageEngine wiredTiger --wiredTigerDirectoryForIndexes --fork
```

On-Site:

MongoDB

```
mongod --bind_ip 127.0.0.1 --port 2002 --replSet test --rest --httpinterface --logpath data/replica/logs/log.txt --dbpath data/replica/wiredTiger --directoryperdb --storageEngine wiredTiger --wiredTigerDirectoryForIndexes --fork
```

SSH Tunnels (On-Site is responsible for creating the tunnels to Datacenter 1; no persistent connection exists between Datacenter 2 and On-Site):

```
autossh -f -N -R 2002:localhost:2002 mongodb@dc-1.example.com
autossh -f -N -L 2000:localhost:2000 mongodb@dc-1.example.com
```
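Since replication depends entirely on these tunnels, it helps to make dead connections get noticed and rebuilt quickly. One way (a sketch; the host alias must match the tunnel commands) is SSH keepalives in `~/.ssh/config` on the machine that opens the tunnels:

```
# ~/.ssh/config on the On-Site machine (and analogously on dc-1)
Host dc-1.example.com
    ServerAliveInterval 10    # probe the server every 10 seconds
    ServerAliveCountMax 3     # drop the connection after 3 missed probes
    ExitOnForwardFailure yes  # exit if a forward cannot be set up, so autossh retries
```

When ssh exits because of the failed keepalives, autossh re-establishes the tunnel.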

The server in Datacenter 1 contains all the data to be replicated; Datacenter 2 and On-Site start out as empty databases.

On the machine in Datacenter 1 I enter `mongo --port 2000`, then issue:

```
rs.initiate(
  {
    _id: "test",
    version: 1,
    members: [
      { _id: 0, host: "localhost:2000", priority: 1, votes: 1 },
      { _id: 1, host: "localhost:2001", priority: 0, votes: 0, hidden: true },
      { _id: 2, host: "localhost:2002", priority: 0, votes: 0, hidden: true }
    ]
  }
)
```
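Once initiated, the member states can be checked from dc-1. A quick sketch using the mongo shell's `--eval` (members show up under the localhost:port names they were registered with):

```shell
# Print each member's name and replication state (PRIMARY/SECONDARY/...).
mongo --port 2000 --eval 'rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); })'
```

All three members should eventually report SECONDARY (except the primary itself); a member stuck in another state usually means its tunnel is down.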


I guess `priority: 0, votes: 0, hidden: true` on the secondaries is optional here, but it does what I want: the two secondaries act as silent, invisible data collectors for backup purposes. I'm not sure whether changing this would have side effects, since it might then require a connection between the two secondaries, which does not exist in this setup (they can't see each other).

I could use `slaveDelay` on one of the hidden members to make it lag by, say, half a day, but since I will be making plenty of backups from both hidden databases via mongodump, it's not really necessary.
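If a delayed member is wanted after all, it can be added via a reconfig from the primary. A sketch (member index 2 is the On-Site host from the config above; delayed members must keep `priority: 0`, which it already has):

```shell
# Run against the primary on dc-1.
mongo --port 2000 --eval '
  var cfg = rs.conf();
  cfg.members[2].slaveDelay = 43200;  // apply writes with a 12-hour delay
  rs.reconfig(cfg);
'
```

A delayed hidden member gives a window to recover from accidental deletes that have already replicated to the other members.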

In order to connect to the secondaries via the MongoDB tools I needed to issue `rs.slaveOk()` first.
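mongodump itself can be pointed straight at the local hidden secondary. A sketch of the On-Site backup job (the output path and date-stamped naming are assumptions):

```shell
# Run on the On-Site machine against its local hidden secondary.
mongodump --port 2002 --out "/backup/mongo-$(date +%F)"
```

This keeps the dump traffic entirely on the On-Site machine instead of pulling it over the tunnel from Datacenter 1.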

Nothing had to be changed on the clients; they still connect to 172.17.0.1:2000.

I tested rebooting the machine in Datacenter 2 and inserted new items into the DB during the reboot; about 30 seconds after coming back up, the rebooted machine had synced from Datacenter 1 as expected.

Datacenter 1 has a fast server with lots of RAM and CPU but only a little SSD storage, while Datacenter 2 has little RAM but lots of HDD space, so this seems to be a good fit.