Securing MongoDB With TLS (Part 3 of 3)

Carl Tashian

This post is the third in a series about securing MongoDB with TLS.

In the intro post on mongodb.com, we covered why mutual TLS is such a good fit for securing MongoDB. In part one we created a Certificate Authority (running step-ca) that will create certificates for the MongoDB server and clients. In part two we set up a single-node MongoDB server that uses TLS to encrypt traffic with its clients.

In this post, we're going to set up a MongoDB replication cluster that uses TLS between cluster members and with clients. The cluster will have three nodes, using a Primary-Secondary-Secondary (PSS) topology.

In MongoDB, enabling X.509 cluster authentication also enables role-based access control (RBAC), so we'll need users and roles. We're also going to enable X.509 user authentication, so that our clients can authenticate to the database using certificates. This cluster configuration gets us closer to what a production MongoDB cluster might need in order to use TLS everywhere.

Before you begin

You'll need a Certificate Authority before you can set up the MongoDB cluster with TLS. The cluster TLS certificates are issued by an ACME provisioner in step-ca. Follow part one of this series to set up and configure the CA itself.
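
Any ACME client can talk to step-ca's ACME endpoint, including the step CLI itself. As a rough sketch of what a node's certificate request might look like, assuming the ACME provisioner from part one is named acme and that port 80 is free for the HTTP-01 challenge (both assumptions, not details from this post):

$ sudo step ca certificate "$(hostname -f)" mongo.crt mongo.key --provisioner acme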

Creating the MongoDB cluster

I've written a system init script that creates the MongoDB cluster, configures it for TLS, and enables replication.

Because this setup is similar to the single-node MongoDB server from part two, I'm not going to go through the script line-by-line.

The big differences with clustering are as follows:

  • Each cluster member has its own cluster membership certificate: a client certificate that the node uses to authenticate to the other cluster members, with the node's DNS name as its subject Common Name (CN).
  • Normally you'd have each cluster node on its own host, but since this is a demo, I've used Docker Compose to build the entire cluster on a single Ubuntu 20.04 (Focal) machine. Each cluster node has exactly the same configuration; in production, the only difference between nodes would be the cluster certificate's Common Name (see the mongod sketch just after this list).
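
I won't reproduce the compose file here, but conceptually each node runs mongod with flags along these lines. This is a sketch, not the literal command from the script, and the /certs/* paths are placeholders:

# Sketch of a per-node mongod invocation (placeholder paths):
#   server.pem  - the node's server certificate and key
#   member.pem  - the node's cluster membership (client) certificate and key
#   root_ca.crt - your CA's root certificate
mongod --replSet rs0 --bind_ip_all \
  --tlsMode requireTLS \
  --tlsCertificateKeyFile /certs/server.pem \
  --tlsClusterFile /certs/member.pem \
  --tlsCAFile /certs/root_ca.crt \
  --clusterAuthMode x509 \
  --transitionToAuth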

Start up a node and run the script on it. I use AWS User Data to provide the script, so it runs as part of the VM launch process.

Once the node is running, you can follow the init script output by running tail -f /var/log/cloud-init-output.log. When it's finished, make sure the Docker Compose containers are running, and that the replica set has been created in MongoDB:

$ sudo su
# docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"
CONTAINER ID   IMAGE     STATUS         NAMES
43eff57f8f2a   mongo     Up 6 minutes   mongo_mongo_rs0_0_1
2ce5f88b1470   mongo     Up 6 minutes   mongo_mongo_rs0_2_1
78cabaebdf92   mongo     Up 6 minutes   mongo_mongo_rs0_1_1

If the replica set is initialized properly, you should be able to connect to MongoDB using the /root/admin.pem file, and you'll see the rs0:PRIMARY> prompt that indicates you're connected to the primary of a replica set.

# LOCAL_HOSTNAME=`curl -s http://169.254.169.254/latest/meta-data/local-hostname`
# mongosh "mongodb://${LOCAL_HOSTNAME},${LOCAL_HOSTNAME}:27018,${LOCAL_HOSTNAME}:27019/?replicaSet=rs0" \
        --tls --tlsCertificateKeyFile admin.pem \
        --tlsCAFile /var/lib/mongo/ca-certs/root_ca.crt
Current Mongosh Log ID:	6112bdfb57e9aaeb125d16e5
Connecting to:		mongodb://ip-172-31-20-26.us-east-2.compute.internal:27017,ip-172-31-20-26.us-east-2.compute.internal:27018,ip-172-31-20-26.us-east-2.compute.internal:27019/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0
Using MongoDB:		5.0.2
Using Mongosh:		1.0.4
.... { output truncated } ...
rs0:PRIMARY>
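
As an extra sanity check beyond the prompt, rs.status() (run in that same shell) lists each member and its replication state; in a healthy PSS cluster you should see one PRIMARY and two SECONDARY members:

rs0:PRIMARY> rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))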

Add X.509 Users

Now that the cluster is up and running, we can add one or more X.509 users. In MongoDB, enabling cluster authentication automatically enables role-based access control. In the example above, we were only able to access the cluster without authenticating as a user because the init script passes the --transitionToAuth parameter to mongod, which temporarily allows unauthenticated access to the cluster. In Bash, create a certificate that you'll use to sign in to MongoDB:

$ step ca certificate carl@smallstep.com carl.crt carl.key \
   --provisioner "MongoDB Service User" \
   --provisioner-password-file /var/lib/mongo/ca-password.txt
✔ Provisioner: MongoDB Service User (JWK) [kid: olSMKTIvJo8XgiDAqwjhyLzDDSefqcfnLhvF4bcYD4k]
✔ CA: https://ip-172-31-40-201.us-east-2.compute.internal
✔ Certificate: carl.crt
✔ Private Key: carl.key

For now, we're using the MongoDB Service User provisioner on the CA. Later, we could add support for getting X.509 user certificates for any user via OpenID Connect (OIDC). Let's take a look at the certificate we just created:

$ step certificate inspect carl.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3227398783534046043792414045290620976 (0x26d930b9d3c96acd1334d50411fe830)
    Signature Algorithm: ECDSA-SHA256
        Issuer: O=Smallstep,CN=Smallstep Intermediate CA
        Validity
            Not Before: Jul 21 20:41:52 2021 UTC
            Not After : Oct 19 20:42:52 2021 UTC
        Subject: O=Smallstep,OU=MongoDB,CN=carl@smallstep.com
        Subject Public Key Info:
            Public Key Algorithm: ECDSA
                Public-Key: (256 bit)
                X:
                    90:9b:88:2e:29:97:45:55:93:48:8b:1b:b5:79:e6:
... output truncated ...

Note the certificate Subject (O=Smallstep,OU=MongoDB,CN=carl@smallstep.com). That subject is what you'll use as the MongoDB username in a moment. Finally, be sure to concatenate the certificate and private key before using it with MongoDB:

$ cat carl.crt carl.key > carl.pem
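
A quick note on ordering: MongoDB expects the X.509 username to be the certificate subject in RFC 2253 format, which puts the most specific component (the CN) first. If you want to see the exact string to use, openssl will print it for you; the output should look something like this:

$ openssl x509 -in carl.crt -noout -subject -nameopt RFC2253
subject=CN=carl@smallstep.com,OU=MongoDB,O=Smallstep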

Let's add an administrative user in MongoDB. In the mongosh console, run:

db.getSiblingDB("$external").runCommand(
  {
    createUser: "CN=carl@smallstep.com,OU=MongoDB,O=Smallstep",
    roles: [
         { role: "readWrite", db: "local" },
         { role: "userAdminAnyDatabase", db: "admin" }
    ],
    writeConcern: { w: "majority" , wtimeout: 5000 }
  }
)

The username here (CN=carl@smallstep.com,OU=MongoDB,O=Smallstep) must exactly match the subject of the certificate you just created, in RFC 2253 order. Replace carl@smallstep.com with your email address. The output of this command should include "ok" : 1. Now let's test the connection. Reconnect to MongoDB using X.509 user authentication:

$ mongosh "mongodb://${LOCAL_HOSTNAME},${LOCAL_HOSTNAME}:27018,${LOCAL_HOSTNAME}:27019/?replicaSet=rs0" \
    --tls --tlsCertificateKeyFile carl.pem \
    --tlsCAFile /var/lib/mongo/ca-certs/root_ca.crt \
    --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509
Current Mongosh Log ID:	6112bdfb57e9aaeb125d16e5
Connecting to:		mongodb://ip-172-31-20-26.us-east-2.compute.internal:27017,ip-172-31-20-26.us-east-2.compute.internal:27018,ip-172-31-20-26.us-east-2.compute.internal:27019/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0
Using MongoDB:		5.0.2
Using Mongosh:		1.0.4
... output truncated ...
rs0:PRIMARY>

Great. Now run db.runCommand({connectionStatus : 1}) to see that you're signed in as the user shown on the certificate:

rs0:PRIMARY> db.runCommand({connectionStatus : 1})
{
	"authInfo" : {
		"authenticatedUsers" : [
			{
				"user" : "CN=carl@smallstep.com,OU=MongoDB,O=Smallstep",
				"db" : "$external"
			}
		],
		"authenticatedUserRoles" : [
			{
				"role" : "readWrite",
				"db" : "local"
			},
			{
				"role" : "userAdminAnyDatabase",
				"db" : "admin"
			}
		]
	},
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1627936502, 2),
		"signature" : {
			"hash" : BinData(0,"vvGz5AzmdrSKUvl6a1ByEopozpw="),
			"keyId" : NumberLong("6991933082571898883")
		}
	},
	"operationTime" : Timestamp(1627936502, 2)
}

Now that you've created an initial administrative user and signed in with its certificate, you can remove the --transitionToAuth flag from /var/lib/mongo/compose.yml and recreate the containers so the change takes effect (note that a plain docker compose restart won't pick up changes to the compose file). You now have a database that requires TLS client certificates issued by your CA, and that requires X.509 user authentication to access the cluster. The beauty of this setup is that a single client certificate handles both TLS client validation and user authentication. There's no need for SSH tunnels: you can expose the MongoDB cluster directly on your network, or even on the public internet.
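
For reference, applying that change might look something like this, assuming the compose file is where the init script put it; edit compose.yml to remove the flag, then recreate the containers:

# cd /var/lib/mongo
# docker compose up -d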

Carl Tashian (Website, LinkedIn) is an engineer, writer, exec coach, and startup all-rounder. He's currently an Offroad Engineer at Smallstep. He co-founded and built the engineering team at Trove, and he wrote the code that opens your Zipcar. He lives in San Francisco with his wife Siobhan and he loves to play the modular synthesizer 🎛️🎚️