Dealing with secrets and sensitive data in Puppet is daunting, right? Nope, not at all. Let me show you how to do it. I’ve wrapped my head around the options available and want to share my journey in hopes of saving you from a few trials and tribulations. Just interested in the end result? Feel free to scroll down to the last section fittingly entitled The final product.

Preface

Puppet’s InfraCore team manages engineering’s core infrastructure from hardware to the actual host configuration of running applications and services. The InfraCore team was already utilizing hiera-eyaml in their control repo when I joined them last April. That control repo is used by all of SRE at Puppet so this meant other teams could take advantage of having secrets, like passwords and AWS keys, versioned in git without fear of exposure to everyone with access to the repo. That’s a big deal since we open up our repositories to the entire company by default.

InfraCore’s hiera-eyaml setup

The maintainers of hiera-eyaml, Vox Pupuli, describe one way to set it up in its readme. We bootstrapped our eyaml setup by placing the keys on our master of masters (MoM) and manually running puppetserver gem install hiera-eyaml. Once the MoM was bootstrapped, everything else was managed by our control repo. To do this we added puppetlabs-puppetserver_gem to our Puppetfile and then added the code below to the profile used by our MoM and compile masters:

file {
  default:
    ensure => file,
    owner  => 'root',
    group  => 'root',
  ;
  '/etc/puppetlabs/puppet/eyaml':
    ensure => directory,
    mode   => '0755',
  ;
  '/etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem':
    group   => 'pe-puppet',
    mode    => '0440',
    content => lookup('profile::pe::master::eyaml_private_key'),
  ;
  '/etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem':
    mode   => '0444',
    source => 'puppet:///modules/profile/pe/master/eyaml_public_key.pkcs7.pem',
  ;
}

package { 'hiera-eyaml puppetserver_gem':
  ensure   => '2.1.0',
  name     => 'hiera-eyaml',
  provider => 'puppetserver_gem',
  notify   => Service['pe-puppetserver'],
}

This took care of installing the needed gem and distributing the keys since all the compile masters get their catalogs from the MoM.

Deploying secrets

We have a Confluence page to help anyone who wants to store sensitive data in our control repo. It contains a copy of our eyaml public key and explains how they can use it:

  1. Install the hiera-eyaml gem via gem install hiera-eyaml
  2. Place the public key in ~/.eyaml/key.pub
  3. Create ~/.eyaml/config.yaml and add a line like pkcs7_public_key: '/Users/<your username>/.eyaml/key.pub' to it

They can then encrypt their string, file, or password as shown in the examples below. Note that:

  • -f encrypts a file.
  • -s encrypts a string.
  • -p prompts for a password (or other string) and encrypts it.

eyaml encrypt [-f|-s] $my_string_or_file

eyaml encrypt -p
Enter password: 

This generates single-line and multi-line versions of the ciphertext that you can place in the relevant yaml file.
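Dropped into a Hiera yaml file, the single-line form looks something like this (the key name and truncated ciphertext below are illustrative, not real output):

```yaml
---
profile::jenkins::agent::fog::default_some_password: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIB...]
```

At lookup time, hiera-eyaml transparently decrypts any value wrapped in ENC[PKCS7,...] using the private key on the master.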

The journey

My journey began a couple of weeks ago when I was working on converting a ~/.fog file used by our Jenkins servers from a flat file to one generated by a Puppet profile. I created a new profile with parameters for each setting. The values that were not sensitive were given sane defaults, while the sensitive values were pulled from Hiera via automatic parameter lookup, where they are stored as eyaml-encrypted strings. In the body of the profile I created a local variable with a hash that mirrored the structure of the fog file, then created a file resource that renders the hash as YAML for its content. It looked something like this:

class profile::jenkins::agent::fog (
  String[1]           $default_some_password,
  Array[String[1], 1] $default_some_array    = [ 'item1', 'item2' ],
  String[1]           $default_some_username = 'jdoe',
  ) {
  $fog_hash = {
    'default' => {
      'some_array'    => $default_some_array,
      'some_username' => $default_some_username,
      'some_password' => $default_some_password,
    }
  }

  $agent_home = '/var/lib/jenkins'

  file { "${agent_home}/.fog":
    ensure  => file,
    mode    => '0640',
    owner   => 'jenkins',
    group   => 'jenkins',
    content => to_yaml($fog_hash),
    require => User['jenkins'],
  }
}

That code generates this file:

---
default:
  some_array:
  - item1
  - item2
  some_username: jdoe
  some_password: pa$$w0rd

I tested this new profile out locally and all looked good so I ran it on one of the targeted nodes. Nothing about the file changed, which was what I expected, so I created a pull request and a little later it was merged. Success!

Coincidentally, right as I finished the code above one of our devs hit me up about adding a new section to the file. I told them “no problem!” since facilitating that process was part of why I was reworking the file’s management anyhow. So… I created a branch, made the change, and tested it on the same node as before.

Oh crap! Passwords are showing up in diffs…

When I ran puppet agent -t --noop -E new_fog_section two things happened:

  1. I saw the new section being added to the file (as expected)
  2. I saw the new section, including the passwords, right there on my monitor which also meant it was in the report sent back to the PE console.

The whole point of encrypting the password in Hiera was to keep it out of plain text in places it shouldn’t be. The current process was obviously failing at this so I figured it was time to learn how to put this thing I’d heard people talk about called “the Sensitive data type” to use.

HELP!

I read everything I could find on this subject but was still confused, so I posted the message below to an internal mailing list:

Subject: Sensitive and Class params from Hiera

How can I use the sensitive data type to keep strings that are encrypted via eyaml in Hiera from being displayed in the diff of a file? The use case here is for the params in a manifest that do not have defaults assigned to them.

Later that same day I got two tips that set the path for the rest of my journey:

  1. One of our solutions consultants suggested I have a look at Ben Ford’s binford2k-node_encrypt module.
  2. A long-time professional services engineer (PSE) expanded on that by telling me that
    • in Puppet 5.5.0 we added the ability to cast a parameter looked up via automatic parameter lookup to Sensitive by way of lookup_options. (PUP-7675)
    • when building a file using the content attribute, you can’t restrict the diff to hide Sensitive types and show non-Sensitive types so I’d need to not show the diff at all. To do that I’d need to add show_diff => false to the file resource.
    • if I want to hide the Sensitive values from the catalog and from PuppetDB then I’d need to use the node_encrypt module mentioned earlier.
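The second tip, applied to the fog file resource from earlier, would look something like this sketch (show_diff is a standard attribute of Puppet's file type):

```puppet
file { "${agent_home}/.fog":
  ensure    => file,
  mode      => '0640',
  owner     => 'jenkins',
  group     => 'jenkins',
  content   => to_yaml($fog_hash),
  show_diff => false, # suppress the diff so values never appear in output or reports
  require   => User['jenkins'],
}
```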

Stumbling around

This was all great info and sounded like what I was looking for, but I was also not quite getting it yet. In particular, I didn’t understand how to do the casting, and I had a total memory lapse when it came to how automatic parameter lookup works. Regarding the latter, I somehow got it in my head that parameters were only automatically looked up if defaults were not supplied, so I was baffled when declaring a parameter like Sensitive[String[1]] $bar = Sensitive(lookup('profile::foo::bar')) produced the error parameter 'bar' expects a Sensitive[String] value, got String. That’s where the casting via lookup_options comes in.

lookup_options to the rescue

One of our other PSEs was kind enough to hop in chat with me and helped me understand the flaw in how I was doing the casting. It turns out that since the values are in Hiera I needed to do the casting in Hiera too. The result was me going into the Hiera file where the sensitive values were stored and adding this:

---
lookup_options:
  '^profile::jenkins::agent::fog::default_some_password':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::second_special_param':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::another_param_thats_encrypted':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::p4_special_param':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::special_param_no_5':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::special_param_no_6':
    convert_to: 'Sensitive'
  '^profile::jenkins::agent::fog::special_param_no_7':
    convert_to: 'Sensitive'

Back in the manifest I adjusted it to look something like this:

class profile::jenkins::agent::fog (
  Sensitive[String[1]] $default_some_password,
  Array[String[1], 1]  $default_some_array    = [ 'item1', 'item2' ],
  String[1]            $default_some_username = 'jdoe',
  # several more parameters omitted here
  ) {
  $fog_hash = {
    'default' => {
      'some_array'    => $default_some_array,
      'some_username' => $default_some_username,
      'some_password' => $default_some_password,
    }
  }

  # file resource unchanged below here
}

Regex for the win

Notice some redundancy in those Hiera entries? Yeah, me too. Along the way there was mention that you could use a regex with lookup_options so I hunted around and found a docs page that showed me how this worked. Thanks to that info I refactored the parameter names to always start with “sensitive_” and replaced the entries above with this:

---
lookup_options:
  '^profile::jenkins::agent::fog::sensitive_.*':
    convert_to: 'Sensitive'

Great. Now any param in that profile that starts with sensitive_ and has an entry in this particular yaml file will be cast to Sensitive… but what about parameters saved in other files?

Going global

Our hiera.yaml hierarchy has common.yaml as its lowest level. With that in mind I took the lookup_options block and moved it there. I then refactored it to cover any parameter in any profile that begins with sensitive_ by expanding the regex:

---
lookup_options:
  '^profile::.+::sensitive_\w+$':
    convert_to: 'Sensitive'
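lookup_options regex keys are Ruby regular expressions, so you can sanity-check the pattern outside of Puppet. A quick sketch (the parameter names are the ones from this post):

```ruby
# Verify which Hiera keys the lookup_options regex would match.
pattern = /^profile::.+::sensitive_\w+$/

puts pattern.match?('profile::jenkins::agent::fog::sensitive_default_some_password') # true
puts pattern.match?('profile::jenkins::agent::fog::default_some_username')           # false: no sensitive_ prefix
puts pattern.match?('role::foo::sensitive_token')                                    # false: not under profile::
```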

Implementing node_encrypt

At this stage I was quite happy with how things were coming along but still wanted to implement node_encrypt before declaring victory. To that end, I added the latest versions of binford2k-node_encrypt and puppetlabs-puppet_authorization from Puppet Forge to our Puppetfile and replaced my file resource with a node_encrypt::file resource. Doing so just required prefixing file with node_encrypt::. The updated resource looked like this:

node_encrypt::file { "${agent_home}/.fog":
  ensure  => file,
  mode    => '0640',
  owner   => 'jenkins',
  group   => 'jenkins',
  content => to_yaml($fog_hash),
  require => User['jenkins'],
}

I then modified the profile for our Puppet masters again (the one used by the MoM and compile masters) and added these two lines:

include node_encrypt::certificates

The node_encrypt readme has this to say about the above line:

The node_encrypt::certificates class can synchronize certificates across your infrastructure so that encryption works from all compile masters.

Puppet_authorization::Rule <| |> ~> Service['pe-puppetserver']

This line ensures that authorization rules like the one added by node_encrypt::certificates get applied right away by collecting them all and then notifying the pe-puppetserver service when needed.

Our MoM has an alias…

Like many people, we have set up an alias for our certificate authority (and most everything else) so that configurations don’t have to change when a server is replaced. This means we need to override a parameter on the node_encrypt::certificates class on our MoM, but nowhere else. To do this I added this line to its node-specific Hiera file:

---
node_encrypt::certificates::ca_server: "%{facts.fqdn}"

Testing time

With the code completed it’s time to make sure tests still pass. We use Onceover to test all pull requests to our control repo. The node_encrypt module does some unique things like access a node’s public Puppet certificates during catalog compilation. This is awesome for protecting sensitive data but caused me quite a bit of consternation when trying to figure out how to get tests to pass again. Fortunately, I work with some awesome people who are always willing to lend a hand. I reached out to Dylan Ratcliffe (the author of Onceover) and Ben Ford (the author of node_encrypt) to see if they could help get me unstuck and, sure enough, they did. In the end they helped me write some code in spec/onceover.yaml to mock the internal functions and prevent the catalog compilation from reading the file system during testing:

functions:
  node_encrypt:
    type: rvalue
    returns: "-----BEGIN PKCS7-----\nencrypted"

before:
  - "require_relative '../../etc/puppetlabs/code/environments/production/modules/node_encrypt/lib/puppet_x/binford2k/node_encrypt.rb'"
  - "Puppet_X::Binford2k::NodeEncrypt.stubs(:encrypt).with('foobar','testhost.example.com').returns('encrypted')"
  - "Puppet_X::Binford2k::NodeEncrypt.stubs(:decrypt).with('-----BEGIN PKCS7-----\nencrypted').returns('decrypted')"

Tests pass, let’s roll

Everything seemed to be good so after a couple of code review approvals I applied the environment associated with this code to our MoM so that node_encrypt::certificates could do its magic. Next I dipped into our tool bag and used bolt to do the same on all the compile masters:

# bolt gets updated often...
$ gem update bolt
$ bolt command run 'sudo -i puppet agent --onetime --no-daemonize --no-usecacheonfailure --no-splay --verbose -E sensitive_and_node_encrypt' --nodes @/Users/<my username>/Downloads/prod-compilers.txt

With the certificates distributed, I did a --noop run in this environment followed by a real run on the same server that was used for testing earlier. Everything looked good so we merged the pull request and let it roll out to all the places fog is used.

Annnnd I broke it

As it turns out I missed something in my first implementation of the Sensitive data type… I didn’t realize I had to apply the unwrap function to each parameter when using it inside of a hash. This caused every place that should look like some_password: pa$$w0rd to look like this instead:

some_password: !ruby/object:Puppet::Pops::Types::PSensitiveType::Sensitive
  value: pa$$w0rd

Not surprisingly, this broke fog. I could have reverted the change and triggered a Puppet run on all the Jenkins agents but our developers were okay with letting it sit until I fixed the issue. To get rid of all the extra text I just needed to go back and surround each of the sensitive values in the hash with unwrap() like so:

$fog_hash = {
  'default' => {
    'some_array'    => $default_some_array,
    'some_username' => $default_some_username,
    'some_password' => unwrap($sensitive_default_some_password), # function added here
  }
}
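Why the stray text? Any Ruby object that isn’t a plain scalar or collection gets serialized by Psych with a !ruby/object tag plus its instance variables. A minimal sketch, using a hypothetical Wrapper class as a stand-in for Puppet’s Sensitive type:

```ruby
require 'yaml'

# Hypothetical stand-in for Puppet's Sensitive wrapper type.
class Wrapper
  def initialize(value)
    @value = value
  end
end

# Dumping the wrapper object itself, instead of the unwrapped string,
# reproduces the tagged output that broke the .fog file.
puts YAML.dump('some_password' => Wrapper.new('pa$$w0rd'))
```

Calling unwrap first hands to_yaml a plain String, so the output is just the value with no tag.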

With the fix in place we stepped through the PR process and then merged it to fix the other servers.

The final product

The final product is a setup that:

  • has secrets stored as encrypted strings in Hiera via hiera-eyaml
  • automatically casts all Hiera keys that start with profile:: and whose final segment starts with sensitive_ to Sensitive
  • allows utilizing the binford2k-node_encrypt module to encrypt secrets in a node’s catalog and prevent them from being disclosed in reports and diffs

This is done by:

  1. Deploying hiera-eyaml to the master of masters manually as described in the backend’s readme file
  2. Using puppetserver_gem and some file resources to deploy the hiera-eyaml gem and the needed keys to the compile masters
  3. Adding a lookup_options entry to common.yaml that utilizes a regex matcher
  4. Refactoring parameters on profiles that access secrets in Hiera to enforce the Sensitive data type
  5. Adding code to the master of masters and compilers to make the public certs from nodes available when compiling catalogs
  6. Adding a collector to the same servers to ensure authorization rules like the one added by node_encrypt::certificates trigger a restart of the pe-puppetserver service
  7. Switching from a file resource to a node_encrypt::file resource for sensitive files

Examples of each of these are below:

Profile for masters

include node_encrypt::certificates  

Puppet_authorization::Rule <| |> ~> Service['pe-puppetserver']

file {
  default:
    ensure => file,
    owner  => 'root',
    group  => 'root',
  ;
  '/etc/puppetlabs/puppet/eyaml':
    ensure => directory,
    mode   => '0755',
  ;
  '/etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem':
    group   => 'pe-puppet',
    mode    => '0440',
    content => lookup('profile::pe::master::eyaml_private_key'),
  ;
  '/etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem':
    mode   => '0444',
    source => 'puppet:///modules/profile/pe/master/eyaml_public_key.pkcs7.pem',
  ;
}

package { 'hiera-eyaml puppetserver_gem':
  ensure   => '2.1.0',
  name     => 'hiera-eyaml',
  provider => 'puppetserver_gem',
  notify   => Service['pe-puppetserver'],
}

Hiera node file for master of masters

---
node_encrypt::certificates::ca_server: "%{facts.fqdn}"  

Hiera common.yaml

---
lookup_options:
  '^profile::.+::sensitive_\w+$':
    convert_to: 'Sensitive'

Profile with sensitive params

class profile::jenkins::agent::fog (
  Sensitive[String[1]] $sensitive_default_some_password,
  Array[String[1], 1]  $default_some_array    = [
    'item1',
    'item2',
  ],
  String[1]            $default_some_username = 'jdoe',
  # several more parameters omitted here
  ) {
  $agent_home = '/var/lib/jenkins'

  $fog_hash = {
    'default' => {
      'some_array'    => $default_some_array,
      'some_username' => $default_some_username,
      'some_password' => unwrap($sensitive_default_some_password),
    }
  }

  node_encrypt::file { "${agent_home}/.fog":  
    ensure  => file,
    mode    => '0640',
    owner   => 'jenkins',
    group   => 'jenkins',
    content => to_yaml($fog_hash),
    require => User['jenkins'],
  }
}

What’s next?

This is all great stuff… if it’s used. Next up is ensuring our Confluence page on deploying secrets is updated to help others utilize the Sensitive data type and the options provided by node_encrypt. After that it’s all about communication. Many of the people who work with secrets have been doing so long enough they probably don’t need to look at the directions anymore so I need to ensure they know that the process has been enhanced.

I am a senior site reliability engineer at Puppet.