The command line really wasn’t designed for secrets. So, keeping secrets secret on the command line requires some extra care and effort. The other day in my homelab I was configuring a TLS client certificate for a Grafana datasource. The intention was to write something I could run on a timer whenever the certificate is renewed. The command needed to:
- Build some JSON with the renewed certificate and private key injected into it
- PUT it to Grafana’s API to update the datasource configuration.
I thought I was being very clever when I wrote this lil’ Bash pipeline:
```shell
BEARER_TOKEN=MhY3b3i3gFpa9otnLQVznJYoWLxpGJUod3iDJwCKRFUVtuALGJooBJuCUf7w9HJfbu; \
jq -n \
  --arg ca_cert "$(< $STEPPATH/certs/root_ca.crt)" \
  --arg client_cert "$(< $CERT_LOCATION)" \
  --arg client_key "$(< $KEY_LOCATION)" \
  -f ./datasource.jq \
| curl -s -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -d "$(< /dev/stdin)" \
  --cacert $STEPPATH/certs/root_ca.crt \
  https://grafana:3000/api/datasources/2
```
It uses `jq` to populate a JSON template for an API request (`datasource.jq`) with variables passed in via `--arg`, and then PUTs the result to the API with `curl`. Nice, huh?
It didn’t take long to discover that this pipeline leaks secrets all over the place:
- The contents of the private key file are leaked by `--arg client_key "$(< $KEY_LOCATION)"` (and likewise the certificates by the other `--arg` substitutions)
- The bearer token environment variable is leaked by `-H "Authorization: Bearer $BEARER_TOKEN"`
- And the secret JSON piped from `jq` is leaked by `-d "$(< /dev/stdin)"`

All of these values, including the precious contents of the private key file, can be seen via `ps` while these commands are running. `ps` finds them in `/proc/<pid>/cmdline`, which is world-readable for any process ID.
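You can demonstrate this yourself without `ps`. The sketch below (with a made-up secret string) launches a throwaway process that carries a "secret" in its arguments, then recovers it straight from `/proc/<pid>/cmdline`, exactly as any other user on the machine could:

```shell
# Start a throwaway process that holds a fake secret in its argv:
sh -c 'sleep 5' demo 'hunter2-not-a-real-secret' &
pid=$!
sleep 1   # give it a moment to start
# argv entries in /proc/<pid>/cmdline are NUL-separated; make them readable:
leaked=$(tr '\0' ' ' < "/proc/$pid/cmdline")
echo "any user on this machine can read: $leaked"
kill "$pid" 2>/dev/null
```

No special privileges are needed to read another user's `/proc/<pid>/cmdline`, which is the whole problem.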
To make atonement, I’m writing this post. We’ll look at three methods for handling secrets on the command line: Using piped data, credential files, and environment variables. We’ll look at some of the risks of these approaches, and how to use each of them as safely as possible.
But first, let’s look at a sanitized version of the above pipeline:
```shell
jq -n \
  --rawfile ca_cert $STEPPATH/certs/root_ca.crt \
  --rawfile client_cert $CERT_LOCATION \
  --rawfile client_key $KEY_LOCATION \
  -f ./datasource.jq \
| curl -s -X PUT \
  -H @api_headers \
  -d @- \
  --cacert $STEPPATH/certs/root_ca.crt \
  https://grafana:3000/api/datasources/2
```
`jq`'s `--rawfile` option moves the credentials closer to where they are used, by delegating the responsibility for reading the certificate and key files to `jq` itself. `jq` pushes the secret JSON into the pipe, and `curl`'s `-d @-` flag pulls the secret data directly from the pipe and uses it as the HTTPS request body. Finally, `-H @api_headers` reads the static bearer token header from a file instead of taking it on the command line. Now, no secrets appear in `/proc/<pid>/cmdline`.
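For completeness, here's a sketch of what that `api_headers` file might contain (the token value is a placeholder; `curl` 7.55.0 and later can read extra headers from a file with `-H @filename`, one header per line):

```shell
# Create the headers file with owner-only permissions, then verify them.
rm -f api_headers
umask 077
cat > api_headers <<'EOF'
Content-Type: application/json
Authorization: Bearer placeholder-token-value
EOF
header_perms=$(stat -c '%a' api_headers)
echo "api_headers permissions: $header_perms"
```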
Piped data

As the sanitized example shows, a pipeline is generally an excellent way to pass secrets around, provided the program you’re using will accept a secret via `STDIN`. A pipe only has two ends, right? Imagine yourself whispering a secret into one end of a pipe while a friend puts their ear up to the other. It’s just like that.
But the `$(< /dev/stdin)` leak shown above uses a neat Bash substitution to make an otherwise secure pipe insecure. For example, if you run:

```shell
$ echo "secret-data" | curl -d "$(</dev/stdin)" https://example.com:3000
```

then the command line of `curl` will show:

```shell
curl -d secret-data https://example.com:3000
```
Other than that, there’s not too much to worry about with pipes.
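To see why the substitution defeats the pipe: the shell expands `$(< /dev/stdin)` *before* it executes `curl`, so the pipe's contents become an ordinary argument. A harmless stand-in (using `set --` instead of a real `curl` invocation, with a made-up secret) makes the expansion visible:

```shell
# The shell expands the substitution before exec'ing the target program,
# so the file's contents land in argv.
printf '%s' 'hunter2-demo' > /tmp/demo_stdin_secret
set -- -d "$(cat /tmp/demo_stdin_secret)" https://example.com:3000
expanded="$*"
echo "curl's argv would be: curl $expanded"
```

The secret is right there in the argument list, before the target program has even started.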
Credential files

What’s not to love about a file? It’s got an owner. It has permissions and access control. Give each secret a file! Any program that accepts secrets should be able to accept them via a filename, or via a file redirected into `STDIN`. You can also use files to pass secrets into Docker containers, with mounted volumes.
A few notes about storing and retrieving file secrets:
- You’d better get the file permissions right
- Avoid leaking the secret into the command string, e.g. with a `"$(< secret_file)"` substitution
- Be sure your disk is encrypted at rest, e.g. with LUKS
- You may want to encrypt the contents of the file — but then you need to figure out how to handle the encryption key.
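On the first point, a common mistake is to write the secret and *then* tighten permissions, leaving a window where the file is group- or world-readable. A minimal sketch (with a placeholder token and path) that avoids this by setting the umask before the secret ever hits disk:

```shell
# Restrict permissions *before* the secret is written, by setting the
# umask in a subshell (so your interactive session's umask is untouched).
rm -f /tmp/demo_token
(
  umask 077
  printf '%s' 'hunter2-demo-token' > /tmp/demo_token
)
token_perms=$(stat -c '%a' /tmp/demo_token)
echo "permissions: $token_perms"
```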
Environment variables

Using environment variables for secrets is very convenient. And we don’t recommend it, because it’s too easy to leak them:
- Some operating systems still make every process’s environment variables world-readable. (But in all the Linuxes I’ve seen, `/proc/<pid>/environ` is not world-readable.)
- In Docker, anyone with access to the Docker daemon can use `docker inspect` to see all of the environment variables of any running container.
- In systemd, environment variables set in unit files are available to users via the D-Bus interface (see the recently introduced `LoadCredential=` option for an alternative that uses credential files).
- Exported environment variables get passed to every new process, and then who knows what will happen to them. They might get dumped to `STDOUT` or logged to a debug logfile.
- Local (unexported) environment variables are also easy to leak into `/proc/<pid>/cmdline`, because the shell expands them before the target program runs:

```shell
$ BEARER_TOKEN=MhY3b3i3gFpa9otnLQVznJYoWLxpGJUod3iDJwCKRFUVtuALGJooBJuCUf7w9HJfbu; curl -s -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer $BEARER_TOKEN" ...
```

- Environment variables can easily end up in shell history. In many shells, adding an extra space before a command will hide it from shell history. (In Bash, the `HISTCONTROL` variable must be set to `ignorespace` or `ignoreboth` for this to work.)
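The "passed to every new process" point is easy to demonstrate. In this sketch (with a made-up variable and value), a child shell that has nothing to do with our secret still receives a full copy of it:

```shell
# An exported variable is copied into the environment of every child
# process, whether or not that child has any business seeing the secret.
export DEMO_TOKEN='hunter2-demo'
child_sees=$(sh -c 'printf "%s" "$DEMO_TOKEN"')
echo "child process sees: $child_sees"
unset DEMO_TOKEN
```

Every subprocess in a pipeline, every helper script, every spawned tool gets the same copy.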
If a command takes a password filename, you can use a password environment variable, if you’re careful about it. For example, say we have a `$STEP_CA_PASSWORD` environment variable, and we run the following in Bash:

```shell
$ STEP_CA_PASSWORD=amazingpw; step-ca --password-file <(echo -n "$STEP_CA_PASSWORD") $(step path)/config/ca.json
```

`/proc/<pid>/cmdline` for this process will contain something like:

```shell
step-ca --password-file /dev/fd/11 /home/carl/.step/config/ca.json
```

Because of the `<()` syntax of process substitution, Bash creates a file from the output of `echo -n "$STEP_CA_PASSWORD"` and supplies that file’s name to `step-ca`.
Great, right? Except… wouldn’t the `/proc/<pid>/cmdline` for the inner `echo` process contain the secret, as `echo -n my-secret-ca-password`?

Thankfully, in Bash, the answer is no. We are saved by the fact that `echo` is a shell builtin, so a separate process is never created. If you are able to otherwise secure your environment variables, this approach is safe. The downside is that it *appears* unsafe, because `$STEP_CA_PASSWORD` is still getting substituted into something that certainly looks like a command.
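Here's a minimal, self-contained sketch of the same pattern, with `cat` standing in for a tool that takes a password file. The "program" only ever receives a `/dev/fd/NN` path; the secret travels over an inherited pipe, never through `argv`:

```shell
# Run under bash explicitly, since <() process substitution is a bashism.
# DEMO_PW is passed via the environment, not the command line.
received=$(DEMO_PW='amazingpw' bash -c 'cat <(printf "%s" "$DEMO_PW")')
echo "the program read: $received"
```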
What About A Secrets Manager?
Secrets managers can be great because they can make it easier to get secrets closer to where they are used. For example, a Docker container can call out to a secrets manager for its secrets. But, a secrets manager is an extra dependency. Often you need to run a secrets manager server and hit an API. And even with a secrets manager, you may still need Bash to shuttle the secret into your target application. For this post I’m focused on more lightweight solutions.
The Linux kernel keyring

Speaking of lightweight solutions, there is a keyring facility in the Linux kernel. The kernel keyring offers several scopes for storing keys safely in memory that will never be swapped to disk. A process, or even a single thread, can have its own keyring, or you can have a keyring that is inherited across all processes in a user’s session. To manage keyrings and keys, use the `keyctl` command or the `keyctl` system calls.
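A quick sketch of the round trip with the `keyctl` command (from the `keyutils` package, which may not be installed; the key name and payload here are made up). `@s` is the session keyring:

```shell
# Store a secret in the session keyring, read it back, then remove it.
if command -v keyctl >/dev/null 2>&1; then
  key_id=$(keyctl add user demo-token hunter2-demo @s)  # returns a key ID
  stored=$(keyctl print "$key_id")                      # read the payload
  keyctl unlink "$key_id" @s                            # remove the key
else
  stored='keyctl-not-installed'
fi
echo "stored: $stored"
```

The payload lives in kernel memory, scoped to your session, rather than in a file or an environment variable.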
Directly in the command
In case it isn’t already abundantly clear, this is very unsafe. There is no way for the caller of a command to choose to hide the command line from being world readable. So, any CLI command worth its salt should not accept passwords directly.
You could maybe pass a password directly in a command if you were running inside a secure container with sandboxing around everything. But, why take the risk?
Now, even for this one there’s a caveat. Have you ever run `mysql` this way?

```shell
$ mysql --user carl --password amazingpw db.smallstep.com
```

Or `curl` this way?

```shell
$ curl -u carl:password https://example.com:3000
```
These commands accept passwords against their own better judgment, for convenience. But immediately upon startup, they overwrite their `argv` with blank values, effectively hiding the secret. If you run `ps` during the `curl` command shown here, you’ll see:

```shell
curl -u https://example.com:3000
```

Now, technically, if a system is loaded enough, it could be possible to grab the secret from `/proc/<pid>/cmdline` before `curl` has a chance to overwrite it. So this approach is better than nothing at all — but these passwords can still easily end up in audit logs or shell history, so it’s better to avoid it entirely.
The alternative for `curl` is a credential file: a `.netrc` file can be used to store credentials for the servers you need to connect to. For `mysql`, you can create option files: a `.my.cnf` or an obfuscated `.mylogin.cnf` will be read on startup and can contain your passwords.
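For example, a `.netrc` entry looks like this (hostname and password are placeholders; `curl` reads the file with `-n`/`--netrc`, or from an explicit path with `--netrc-file`):

```shell
# Write a demo .netrc with owner-only permissions.
rm -f /tmp/demo_netrc
umask 077
cat > /tmp/demo_netrc <<'EOF'
machine example.com
login carl
password amazingpw
EOF
netrc_perms=$(stat -c '%a' /tmp/demo_netrc)
echo "netrc permissions: $netrc_perms"
# usage would be: curl --netrc-file /tmp/demo_netrc https://example.com:3000
```

As with any credential file, the permissions are the whole game: keep it `0600`.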