Category Archives: Puppet

Using PKGNG on FreeBSD with Puppet

This is how I installed pkgng, the new package manager on FreeBSD, and how to use it with Puppet.
This has been tested on a FreeBSD 8.3 jail with Puppet 3.2.

Pkgng setup

The official documentation is here.

Pkgng installation:

# portsnap fetch update
# portmaster -dB ports-mgmt/pkg

You then need to convert your package database to the new pkgng format. Warning! As mentioned in the documentation, this step is not reversible: you won’t be able to use pkg_add anymore afterwards.

# pkg2ng

To use the pkgng format by default, add the following to your make.conf:

# echo "WITH_PKGNG=yes" >> /etc/make.conf

Define a new repository for pkgng:

# mkdir -p /usr/local/etc/pkg/repos
# cat << 'EOF' > /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
  url: "${ABI}/latest",
  mirror_type: "srv",
  enabled: "yes"
}
EOF

That’s it, you should be able to use pkgng:

# pkg update
# pkg install sl # This is my favorite test package!
# sl

pkg may display an error message (it is only a warning):

pkg: Invalid configuration format, ignoring the configuration file

This is a known bug related to the empty /usr/local/etc/pkg.conf file.
pkg 1.2 should fix this problem.
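Until that fix lands, a possible workaround (an assumption on my part, not from the official docs) is to remove the empty file, since pkg.conf is optional. The sketch below demonstrates the idea on a temporary copy; on a real system the file is /usr/local/etc/pkg.conf:

```shell
# Hypothetical workaround sketch: pkg emits the warning when pkg.conf exists
# but is empty, so deleting the empty file silences it. Shown here on a
# throwaway temp file instead of the real /usr/local/etc/pkg.conf.
tmpdir=$(mktemp -d)
PKG_CONF="$tmpdir/pkg.conf"
: > "$PKG_CONF"                      # simulate the empty pkg.conf

if [ -f "$PKG_CONF" ] && [ ! -s "$PKG_CONF" ]; then
    rm -f "$PKG_CONF"                # empty, so it is safe to delete
    echo "removed empty $PKG_CONF"
fi
```

A non-empty pkg.conf would be left alone by the `! -s` test.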

Add pkgng as package provider in Puppet

The new provider already exists in Puppet.
You can find the github repository here.

To install the module, you can clone the repo or use puppet module install:

# cd ~puppet/modules/
# puppet module install xaque208/pkgng

I personally don’t use the module’s manifests (init.pp and params.pp); I only use the new package provider defined in the lib directory.

You now have to change the default package provider for FreeBSD in Puppet. I did it in site.pp:

if $::operatingsystem == 'FreeBSD' {
        Package {
                provider => 'pkgng',
        }
}
You can now define package resources in Puppet; they will be installed by pkgng!

package { 'sl':    # e.g. the test package installed earlier
        #ensure => installed,
        ensure  => absent,
}

Distributed monitoring with Nagios and Puppet

In the past I had only one Nagios3 server to monitor all my production servers. The configuration was
fully generated with Puppet by Naginator.
This solution, even with its drawbacks (hard to set specific alert thresholds, appliances without Puppet, etc.),
is very powerful. I never had to worry about the monitoring configuration:
thanks to Puppet, I am always sure that every host in production is monitored by Nagios.
However my needs have evolved and I began to have distributed monitoring problems:
4 datacenters spread between Europe and the USA, with network outages between datacenters
raising a lot of false positive alerts.
I didn’t have any performance issues as I have fewer than 200 hosts and 2K services.

I tried Shinken, really I tried: 2 years ago and again these last few months.
I had to package it as a Debian package because all of our servers are
built unattended: the installation script was not an option for me.

On paper, Shinken was perfect:
* fully compatible with the Nagios configuration
* support for Shinken-specific parameters in Puppet (e.g. poller_tag)
* built-in support for distributed monitoring with realms and poller_tag
* built-in support for HA
* built-in support for Livestatus
* very nice community and developers

In my experience:
* the configuration was not fully compatible (but adjustments were easy)
* Shinken uses a lot more RAM than Nagios (even if Jean Gabès took the time to write me a very long mail explaining this behavior)
* most important to me: the whole stack was IMHO not stable and robust enough for my use case: in case of a netsplit, the daemons did not resynchronize after the outage, some modules crashed quite often without explanation, there were some problems with Pyro, etc.

In the end I was not confident enough in my monitoring POC and chose not to put it in production.

To be clear:
* I still believe that Shinken will, in the (near?) future, be one solution (or THE solution) to replace the old Nagios, but it was not ready for my needs.
* Some people are running Shinken in production, some of them at very large scale, without any problem. My experience should not discourage you from trying this product: you need to form your own opinion!

Having decided not to use Shinken, I had to find another solution.

I chose this architecture:
* One Nagios per datacenter for polling
* Puppet to manage the whole distributed configuration (it takes the role of Shinken’s arbiter)
* Livestatus + Check_MK Multisite to aggregate the monitoring views from all datacenters

Puppet tricks

We use a lot of custom Facts in Puppet, including a Fact “$::hosting”
which tells us in which datacenter the host is located.
In order to split our monitoring configuration between the pollers, I use a dynamic target for all Puppet resources bound to datacenters (hosts, services, hostescalation, serviceescalation):

Here is a simplified example of a host configuration using Exported Resources:

        $puppet_conf_directory = '/etc/nagios3/conf.puppet.d'
        $host_directory = "$puppet_conf_directory/$::hosting"

        @@nagios_host { "$::fqdn":
                tag     => 'nagios',
                address => $::ipaddress_private,
                alias   => $::hostname,
                use     => 'generic-host',
                notify  => Service['nagios3'],
                target  => "$host_directory/$::fqdn-host.cfg",
        }

All resources common to every poller (contacts, contactgroups,
commands, timeperiods, etc.) are generated in one common directory
that all Nagios pollers read (i.e. ‘/etc/nagios3/conf.puppet.d/common’).
Finally, in nagios.cfg, each poller reads the right directories for its datacenters.

# ie for nagios1 : 
# for nagios2 :
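For example, assuming two datacenters whose $::hosting values are ‘paris’ and ‘newyork’ (hypothetical names, the real ones are not in this post), each poller’s nagios.cfg would source the common directory plus its own datacenter directory:

```
# nagios1 (polls the 'paris' datacenter):
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/paris

# nagios2 (polls the 'newyork' datacenter):
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/newyork
```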

I decided not to use per-datacenter tags on the exported resources:
this lets me have exactly the same configuration files on each Nagios poller in “/etc/nagios3/conf.puppet.d”; only nagios.cfg changes between pollers.
In case of a problem on one of my pollers, I can very simply add the monitoring of another datacenter by sourcing one more directory in nagios.cfg.
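On the poller side, the exported resources can then be collected without any per-datacenter filter. The post does not show the collection site, so this is only a sketch:

```puppet
# Collect every exported resource tagged 'nagios', with no per-datacenter
# filter: each resource already carries its own datacenter-specific 'target',
# so every poller writes the exact same files under conf.puppet.d.
Nagios_host <<| tag == 'nagios' |>>
Nagios_service <<| tag == 'nagios' |>>
```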

With that configuration I have very simple distributed monitoring, thanks once again to Puppet :)

I will explain in a future blog post how to aggregate the views with Livestatus and Check_MK Multisite.


Howto integrate Puppet, Foreman and Mcollective

Since we deployed Foreman in production, we hadn’t used the ‘Run puppet’ button
in Foreman’s interface because we run Puppet from a crontab.

However the Foreman 1.2 release changed that: the smart-proxy now has
native mcollective integration.

This is how to set it up. I assume that you already have a working Foreman and mcollective installation.

On all your ‘puppet’ proxies, you need to install the mcollective client and its puppet plugin:

# apt-get install mcollective-client mcollective-puppet-client

You need to configure your mcollective client (/etc/mcollective/client.cfg). This configuration should be
quite similar to the one you have on your desktop.
You then need to grant the foreman-proxy user the right to run the mcollective client:

# visudo 
Defaults:foreman-proxy !requiretty
foreman-proxy ALL = NOPASSWD: /usr/bin/mco puppet runonce *
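For reference, here is a minimal sketch of the /etc/mcollective/client.cfg mentioned above. The ActiveMQ middleware, hostnames, credentials and PSK security are all assumptions; adjust every value to match your existing mcollective deployment:

```
# Example only: assumes an ActiveMQ middleware and PSK security.
main_collective = mcollective
collectives = mcollective
libdir = /usr/share/mcollective/plugins
logger_type = console
loglevel = warn

connector = activemq
plugin.activemq.pool.size = 1 = = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = secret

securityprovider = psk
plugin.psk = changeme
```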

In your smart-proxy configuration:

:puppet: true
:puppet_provider: mcollective

Then restart your smart-proxy (I run it with apache/passenger):

# service apache2 restart

You should be able to test your new installation with a simple
curl command:

$  curl   -d "" https://myproxy:8443/puppet/run

In order to use the mcollective integration, I had to add the following directive
to my mcollective daemon configuration, in /etc/mcollective/server.cfg:

identity =

Finally, in Foreman’s settings, you
need to set the ‘puppetrun’ directive to ‘true’:

That should be it: you just need to click the ‘Run puppet’ button on your host page!


How to generate Puppet SSL certificate with “Alternative Name”

I needed to add a DNS Alt Name in order to set up full SSL communication between my 2 Foreman servers and their proxies.
My problem was that my Foreman servers are used in failover (with a VIP) and the clients use a generic DNS record and not
their FQDN directly. This was a problem because the address didn’t match the certificate’s CN.

In order to fix that, I set up a Puppet certificate whose CN is the FQDN of the server (ie: and which has a
‘Subject Alternative Name’ with the VIP address (ie:

This is really simple to do, but not that easy to find on the internet.
You first need to revoke the certificate on the master and remove it on the client.
On the client (on Debian):

# rm -rf /var/lib/puppet/ssl

On the master:

# puppet cert clean

You should add the following to the client’s puppet.conf:

dns_alt_names =

Then you need to kick off Puppet on the client to force the generation of a new certificate, and ask the puppet master to sign it:

# puppet agent -t --report --pluginsync

On the master, you can see the certificate signing request and sign it:

# puppet cert list
  "" (SHA256) 2C:76:5B:85:67:28:1C:92:48:AA:10:22:44:C7:9B:A7:0D:9B:E2:A5:5F:10:71:87:B9:3F:46:E4:70:4B:43:6C (alt names: "", "")
# puppet cert sign --allow-dns-alt-names

You now have a Puppet CA signed certificate with DNS Alt Name.
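You can check the result yourself with openssl. The sketch below creates a throwaway certificate carrying a Subject Alternative Name (mimicking what the puppet master signs once dns_alt_names is set) and then inspects it the same way you would inspect the real agent certificate in /var/lib/puppet/ssl/certs/<fqdn>.pem. All hostnames here are examples:

```shell
# Create a demo certificate with a SAN (requires OpenSSL 1.1.1+ for -addext),
# then print its Subject Alternative Name extension. On a real agent you
# would run only the second command, against the cert under /var/lib/puppet/ssl.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=foreman1.example.com" \
    -addext "subjectAltName=DNS:foreman1.example.com,DNS:foreman.example.com" \
    -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# The SAN line should list both the FQDN and the generic VIP record:
openssl x509 -noout -text -in "$tmpdir/cert.pem" \
    | grep -A1 "Subject Alternative Name"
```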


Auto validation and monitoring of Nagios configuration by itself

This is quite a complicated title for a very simple bash script.

My problem was the following. I heavily use Puppet and its Naginator module to write my Nagios configuration. When I create a new host in Foreman, Puppet dynamically writes the Nagios configuration with all the necessary checks depending on the Puppet classes associated with that new host (system monitoring, application monitoring, …). After Puppet has written all the changes, it notifies the Nagios daemon to reload its configuration.

However, as my Puppet nagios module is not perfect, a human error can produce an invalid Nagios configuration. Luckily, Nagios always checks its configuration before restarting, so I don’t lose the monitoring service. However, the configuration is then not reloaded and stops being updated.

That’s the reason why I wrote a 3-line bash script which just checks the Nagios configuration. It basically runs the nagios binary with the ‘-v’ option. You can find this script on GitHub.
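The script itself is on GitHub; its core logic amounts to something like the sketch below. NAGIOS_BIN and NAGIOS_CFG are parameterized here purely for illustration, the real script can hard-code the paths:

```shell
#!/bin/sh
# Hypothetical reconstruction of the idea: run 'nagios -v' on the live
# configuration and map the result to Nagios plugin semantics (0 = OK,
# 2 = CRITICAL), so the check can itself be scheduled by Nagios.
check_nagios_conf() {
    nagios_bin=${NAGIOS_BIN:-/usr/sbin/nagios3}
    nagios_cfg=${NAGIOS_CFG:-/etc/nagios3/nagios.cfg}
    if "$nagios_bin" -v "$nagios_cfg" >/dev/null 2>&1; then
        echo "OK: nagios configuration is valid"
        return 0
    else
        echo "CRITICAL: nagios configuration is invalid"
        return 2
    fi
}
```

Called from a Nagios check (or cron), the function’s return code becomes the plugin exit code.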

With this simple solution I am always sure that I have a valid Nagios configuration and that this configuration always matches my production.


How to check Puppet run with Foreman and Nagios

I needed to make sure that Puppet was running smoothly on all production servers. For that purpose I needed to check 2 things:

– First, that Puppet runs every 30 minutes (I use cron and not the Puppet daemon). For that I simply use the Nagios ‘check_file_age’ plugin to check the age of the “state.yaml” file. Here is the command configuration on a Debian server:

/usr/lib/nagios/plugins/check_file_age -w 3780 -c 43200 -f /var/lib/puppet/state/state.yaml
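Wired into Nagios, this can be declared as a command definition like the one below. The command_name is my own choice, not from the original post, and since state.yaml is local to each host the check would typically be executed through NRPE:

```
define command {
    command_name    check_puppet_freshness
    command_line    /usr/lib/nagios/plugins/check_file_age -w 3780 -c 43200 -f /var/lib/puppet/state/state.yaml
}
```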

– The first check ensures that Puppet runs on a regular basis. However, it does not tell me whether it runs without problems. That’s why I decided to use the Foreman report status.

You can find my script on GitHub. To use it on a Debian server, you should install the dependencies:

# apt-get install libhttp-server-simple-perl libjson-perl
# wget
# tar xvf REST-Client-243.tar.gz
# cd REST-Client-243
# make
# make install

To use it, it’s very simple:

$ /usr/lib/nagios/plugins/ -H -F -w 3 -c 5 -u username -p password

This command checks the latest Puppet run reports. If the number of runs in an error state is greater than the warning or critical threshold, the Nagios check returns the corresponding status.

Now you can monitor your Puppet runs thanks to Nagios and Foreman!