Using PKGNG on FreeBSD with Puppet

This is how I installed the new package manager on FreeBSD, pkgng, and how to use it with Puppet.
This has been tested on a FreeBSD 8.3 jail with Puppet 3.2.

Pkgng setup

The official documentation is here.

Pkgng installation:

# portsnap fetch update
# portmaster -dB ports-mgmt/pkg

You then need to convert your package database to the new pkgng format. Warning! As mentioned in the documentation, this step is not reversible. You won’t be able to use pkg_add anymore afterwards.

# pkg2ng

To use the pkgng format by default, you must add the following to your make.conf:

# echo "WITH_PKGNG=yes" >> /etc/make.conf

Define a new repository for pkgng:

# mkdir -p /usr/local/etc/pkg/repos
# cat << 'EOF' > /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
  url: "http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  enabled: "yes"
}
EOF
# pkg update

That’s it, you should be able to use pkgng:

# pkg update
# pkg install sl # This is my favorite test package!
# sl

pkg may display an error message (it is only a warning):

pkg: Invalid configuration format, ignoring the configuration file

This is a known bug related to the empty /usr/local/etc/pkg.conf file.
pkg 1.2 should fix this problem.

Add pkgng as a package provider in Puppet

A pkgng package provider already exists for Puppet.
You can find the GitHub repository here.

To install the module, you can clone the repo or use puppet module install:

# cd ~puppet/modules/
# puppet module install xaque208/pkgng

I personally don’t use the module’s manifests (init.pp and params.pp); I only use the new package provider defined in the lib directory.
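
For the provider shipped in lib/ to reach the agents, pluginsync must be enabled. It is already the default on Puppet 3, but a minimal puppet.conf sketch to make the requirement explicit:

# /etc/puppet/puppet.conf on the agents
[agent]
    pluginsync = true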

You now have to change the default package provider for FreeBSD in Puppet. I did it in site.pp:

if $::operatingsystem == 'FreeBSD' {
  Package {
    provider => 'pkgng',
  }
}

You can now define package resources in Puppet; they will be installed by pkgng!

package { 'sl':
  #ensure => installed,
  ensure  => absent,
}

Foreman 1.3 has been released

What’s new in this release?

Foreman 1.3 has just been released; let’s have a look at the contents of this new version:

  1. The installer is now based on the Kafo project. I didn’t test it because I always install Foreman from a git checkout.
  2. The Hammer project (the Foreman CLI) is moving forward! This is great because Foreman lacked a good CLI at the beginning. However, the core team still warns that the CLI is limited; to be continued.
  3. At the Compute Resource level, the much-requested Amazon EC2 VPC (Virtual Private Cloud) support has been included. On top of that, a first shot at GCE (Google Compute Engine) support has been released. It is quite limited for now, as it doesn’t support VMs that require persistent disk creation.
  4. Spice support for the libvirt Compute Resource is now available.
  5. Foreman now allows you to transform a VM that was seen as a bare-metal host in Foreman into … a VM associated with a Compute Resource!
  6. There are also some changes on the API side. The API v2 is still ‘experimental’ but now supports REMOTE_USER authentication, smart classes, network interface management, power management and boot devices. The API v1 is still the default one (see the example after this list).
  7. There is now a new foreman-rake command. The goal is to provide a generic wrapper for all Foreman-related rake commands.
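
Since v1 remains the default, API v2 has to be requested explicitly. A minimal sketch of both ways to do it (hostname and credentials are placeholders):

$ curl -s -u admin:password -H "Accept: version=2,application/json" https://foreman.example.com/api/hosts
$ curl -s -u admin:password https://foreman.example.com/api/v2/hosts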

Translations have been improved: Foreman is now translated into 6 languages. Don’t hesitate to contribute on Foreman’s Transifex.

Important before migration

The report and fact format structures have changed with the removal of Puppet from Foreman core.

  • You have to change your external node script (/etc/puppet/node.rb) on your puppet master and use the new one that you can find there.
  • You also need to change your report upload script on your puppet master and use the new one. The puppet.conf wiring itself (recalled below) does not change.
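
As a reminder, this is roughly how both scripts are hooked into the puppet master; a minimal puppet.conf sketch (paths assume the usual Debian layout):

# /etc/puppet/puppet.conf on the puppet master
[master]
    # ENC: ask Foreman for the node classification through node.rb
    node_terminus  = exec
    external_nodes = /etc/puppet/node.rb
    # Reports: the 'foreman' report processor uploads run reports to Foreman
    reports        = log, foreman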

The removal of Puppet from Foreman core is a really good thing. It now opens the door to supporting other configuration management systems such as Chef or CFEngine. It should be noted that initial Chef support has already begun. You can find the project here.

The official release notes and changelog can be found here.
Let’s migrate!


Distributed monitoring with Nagios and Puppet

In the past I had only one Nagios3 server to monitor all my production servers. The configuration was
fully generated with Puppet by Naginator.
Even with its drawbacks (hard to set specific alert thresholds, appliances without Puppet, etc.),
this solution is very powerful. I never had to worry about the monitoring configuration:
thanks to Puppet, I am always sure that every host in production is monitored by Nagios.
However, my needs evolved and I began to have distributed monitoring problems:
4 datacenters spread between Europe and the USA, and network outages between datacenters
raising a lot of false-positive alerts.
I didn’t have any performance issues, as I have fewer than 200 hosts and 2K services.

I tried Shinken, really I tried: 2 years ago and again these last few months.
I had to package it as a Debian package because all of our servers are
built unattended: the installation script was not an option for me.

On paper Shinken was perfect:
* fully compatible with the Nagios configuration
* support for Shinken-specific parameters in Puppet (e.g. poller_tag)
* built-in support for distributed monitoring with realms and poller_tag
* built-in support for HA
* built-in support for Livestatus
* very nice community and developers

In my experience:
* the configuration was not fully compatible (but adjustments were easy)
* Shinken takes a lot more RAM than Nagios (even if Jean Gabès took the time to write me a very long mail to explain this behavior)
* most importantly: the whole stack was, IMHO, not stable and robust enough for my use case: in case of a netsplit, daemons did not resynchronize after the outage, some modules crashed quite often without explanation, some problems with Pyro, etc.

In the end I was not confident enough about my monitoring POC, and I chose not to put it in production.

To be clear:
* I still believe that Shinken will be, in the (near?) future, one (or THE) solution to replace the old Nagios, but it was not ready
for my needs.
* Some people are running Shinken in production, some of them on very big setups, without any problem. My experience should not
convince you not to try this product! You need to form your own opinion!

Having decided not to use Shinken, I had to find another solution.

I chose this architecture:
* one Nagios per datacenter for polling
* Puppet to manage the whole distributed configuration (it takes on the role of the arbiter in Shinken)
* Livestatus + Check_MK Multisite to aggregate the monitoring views from all datacenters

Puppet tricks

We use a lot of custom facts in Puppet, and we have a fact “$::hosting”
which tells us in which datacenter the host is located.
In order to split our monitoring configuration between the pollers, I use a dynamic target for all Puppet resources bound to datacenters (hosts, services, hostescalation, serviceescalation).

Here is a simplified example of a host configuration with exported resources:

        $puppet_conf_directory = '/etc/nagios3/conf.puppet.d'
        $host_directory        = "${puppet_conf_directory}/${::hosting}"

        @@nagios_host { $::fqdn:
                tag     => 'nagios',
                address => $::ipaddress_private,
                alias   => $::hostname,
                use     => 'generic-host',
                notify  => Service['nagios3'],
                target  => "${host_directory}/${::fqdn}-host.cfg",
        }
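
The collection side is not shown above. On each Nagios poller, the exported resources are realized with a collector, which in my setup looks roughly like this (a minimal sketch reusing the ‘nagios’ tag from the export):

        # On every Nagios poller: realize all exported Nagios resources.
        # No per-datacenter filtering here; nagios.cfg decides which directories are loaded.
        Nagios_host    <<| tag == 'nagios' |>>
        Nagios_service <<| tag == 'nagios' |>>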

All the resources common to every poller (contacts, contactgroups,
commands, timeperiods, etc.) are generated in one common directory
that all Nagios pollers read (i.e. ‘/etc/nagios3/conf.puppet.d/common’).
Finally, in nagios.cfg, each poller reads the right directories for its datacenter.

# e.g. for nagios1:
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/hosting1
# for nagios2:
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/hosting2

I decided not to use per-datacenter tags in the exported resources:
this lets me have exactly the same configuration files in “/etc/nagios3/conf.puppet.d” on every Nagios poller; only nagios.cfg changes between pollers.
In case of a problem on one of my pollers, I can very simply add the monitoring of another datacenter by sourcing one more directory in nagios.cfg.

With this configuration I have a very simple distributed monitoring setup, thanks to Puppet once again :)

I will explain in a future blog post how to aggregate the views with Livestatus and Check_MK Multisite.


Howto integrate Puppet, Foreman and Mcollective

Since we deployed Foreman in production, we have not used the ‘Run puppet’ button
in Foreman’s interface, because we run puppet with a crontab.

However, the Foreman 1.2 release changed that: the smart-proxy now has
native mcollective integration.

This is how to set it up. I assume that you already have a working Foreman and MCollective
setup.

On all your ‘puppet’ proxies, you need to install the mcollective client and the puppet plugin:

# apt-get install mcollective-client mcollective-puppet-client

You need to configure your mcollective client (/etc/mcollective/client.cfg). This configuration should be
quite similar to the one you already use from your desktop.
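
For reference, a minimal client.cfg sketch, assuming an ActiveMQ middleware and PSK security (hostnames and secrets are placeholders; adapt them to your existing MCollective setup):

# /etc/mcollective/client.cfg
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = secret

securityprovider = psk
plugin.psk = a_shared_secret

libdir = /usr/share/mcollective/plugins
logger_type = console
loglevel = warn
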
You then need to allow the foreman-proxy user to run the mcollective client:

# visudo 
Defaults:foreman-proxy !requiretty
foreman-proxy ALL = NOPASSWD: /usr/bin/mco puppet runonce *

In your smart-proxy configuration:

:puppet: true
:puppet_provider: mcollective

Then restart your smart-proxy (I run it with apache/passenger):

# service apache2 restart

You should be able to test your new installation with a simple
curl command:

$ curl -d "nodes=myserver.example.com" https://myproxy:8443/puppet/run

In order to be able to use the mcollective integration, I had to add the following directive to my mcollective
daemon configuration, in /etc/mcollective/server.cfg:

identity = myserver.example.com

Finally, in the Foreman settings, you need to set the ‘puppetrun’ directive to ‘true’.

That should be it: you just need to click the ‘Run puppet’ button on your host page!


How to generate Puppet SSL certificate with “Alternative Name”

I needed to add a DNS alt name in order to set up full SSL communication between my 2 Foreman servers and their proxies.
My problem was that my Foreman servers are used in failover (with a VIP) and the clients use a generic DNS record rather than
their FQDN directly. This was a problem because the address didn’t match the certificate’s CN.

In order to fix that, I set up a Puppet certificate whose CN is the FQDN of the server (e.g. foreman1.example.com) and which has a
‘Subject Alternative Name’ with the VIP address (e.g. foreman.example.com).

This is really simple to do, but not that easy to find on the internet.
You first need to revoke the certificate on the master and remove it on the client.
On the client (on Debian):

# rm -rf /var/lib/puppet/ssl

On the master:

# puppet cert clean foreman1.example.com

You should then add the following to the client’s puppet.conf:

dns_alt_names = foreman.example.com

Then you need to kick puppet on the client to force the generation of a new certificate, and ask the puppet master to sign it:

# puppet agent -t --report --pluginsync

On the master, you can see the certificate signing request and sign it:

# puppet cert list
  "foreman1.example.com" (SHA256) 2C:76:5B:85:67:28:1C:92:48:AA:10:22:44:C7:9B:A7:0D:9B:E2:A5:5F:10:71:87:B9:3F:46:E4:70:4B:43:6C (alt names: "DNS:foreman.example.com", "DNS:foreman1.example.com")
# puppet cert sign foreman1.example.com --allow-dns-alt-names

You now have a Puppet CA signed certificate with DNS Alt Name.


Foreman migration without problems.

I just migrated my Foreman instances to 1.1 in production (I’ll write later about the nice new features in 1.1).

One of the most important tests I do before upgrading production is checking the non-regression of the ENC output. What I mean is that I check that the new Foreman server sends the same YAML to the Puppet master during the ENC lookup. I wrote a small ruby script (using the external controller script from the Foreman community) which compares the YAML responses of 2 Foreman instances (e.g. production and QA).

In order to support parameterized classes, Foreman changed the YAML structure a bit, but this script supports that change.
You can find it in my GitHub repo. You just have to change the 2 URLs of the Foreman instances and set your login and password.
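
For illustration, the core idea boils down to something like the following (a minimal sketch; hostnames and credentials are placeholders, and the real script also handles the parameterized-class structure differences):

#!/bin/bash
# Fetch the ENC answer for one node from two Foreman instances and diff them.
NODE="webserver.example.com"
curl -s -k -u user:password "https://foreman-prod.example.com/node/${NODE}?format=yml" > /tmp/prod.yml
curl -s -k -u user:password "https://foreman-qa.example.com/node/${NODE}?format=yml"   > /tmp/qa.yml
diff -u /tmp/prod.yml /tmp/qa.yml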

This script stops automatically if it finds a different node definition between dev and production. This tool allows me to be more confident before a major Foreman migration.


Fully automated deployment of a webserver using Foreman

The aim of this post is to show the level of automation that is now possible for application server deployment on our virtualization layer with Foreman @ Yakaz.

This video shows how easy it is for us to create a new virtual machine and how Puppet manages it to fully configure the server to run our application. The video shows the Compute Resource system that was introduced with Foreman 1.0.

Compared to the deprecated virtualization support in Foreman before 1.0, Compute Resources allow you to:

  • Deploy new VMs with multiple NICs and disks
  • Get direct access to the VNC console from Foreman’s interface
  • Set ACLs on Compute Resource usage. This is useful for us to have an ‘open’ virtualization cluster for every developer who needs a VM, and an isolated production cluster
  • Manage VM power cycles directly from Foreman
  • Have the VM deleted on the virtualization cluster when the host is deleted in Foreman
For now, the supported compute resources are:
  • oVirt / RHEV-M
  • libvirt
  • Amazon EC2
  • VMware
  • Rackspace OpenCloud
As a reminder, with the ‘Unattended Installation’ feature, Foreman manages:
  • A and PTR DNS records for multiple domains
  • the DHCP lease for the host
  • the Puppet configuration with the ENC
  • PuppetCA management: no need to sign and revoke certificates by hand anymore
Once the VM is built, Puppet configures it so that it complies with the selected configuration (in this case, our webserver for the Yakaz website).

This video shows that the ‘operator time’ required for a new server deployment is really, really short. On top of that, our Foreman integration gives us great flexibility and elasticity in managing our production cluster.


Auto validation and monitoring of Nagios configuration by itself

This is quite a complicated title for a very simple bash script.

My problem was the following: I heavily use Puppet and its Naginator module to write my Nagios configuration. When I create a new host in Foreman, Puppet dynamically writes the Nagios configuration with all the necessary checks, depending on the puppet classes associated with that new host (system monitoring, application monitoring, …). After Puppet has written all the changes, it notifies the nagios daemon to reload its configuration.

However, as my puppet nagios module is not perfect, a human error can produce an invalid nagios configuration. Fortunately, nagios always checks its configuration before restarting, so I don’t lose the monitoring service. However, the configuration is not reloaded and stops being updated.

That’s the reason why I wrote a 3-line bash script which just checks the nagios configuration. It basically runs the nagios binary with the ‘-v’ option. You can find this script on GitHub.
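
The idea fits in a few lines; a minimal sketch of what such a script can look like (paths assume a Debian nagios3 installation, and the plugin-style output is my own convention):

#!/bin/bash
# Validate the Nagios configuration and report the result as a Nagios check.
if /usr/sbin/nagios3 -v /etc/nagios3/nagios.cfg > /dev/null 2>&1; then
    echo "OK - nagios configuration is valid"
    exit 0
else
    echo "CRITICAL - nagios configuration is invalid"
    exit 2
fi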

With this simple solution, I am always sure that I have a valid nagios configuration and that this configuration always matches my production.


How to check Puppet run with Foreman and Nagios

I needed to make sure that Puppet was running smoothly on all production servers. For that purpose I needed to check 2 things:

– First, that puppet runs every 30 minutes (I use cron, not the Puppet daemon). For that I simply use the nagios ‘check_file_age’ plugin and check the age of the “state.yaml” file. Here is the command configuration on a Debian server:

/usr/lib/nagios/plugins/check_file_age -w 3780 -c 43200 -f /var/lib/puppet/state/state.yaml

– The first check makes sure that Puppet runs on a regular basis. However, it does not tell me whether those runs completed without problems. That’s the reason why I decided to use the Foreman report status.

You can find my script on GitHub. To use it on a Debian server, you should install the dependencies:

# apt-get install libhttp-server-simple-perl libjson-perl
# wget http://search.cpan.org/CPAN/authors/id/M/MC/MCRAWFOR/REST-Client-243.tar.gz
# tar xvf REST-Client-243.tar.gz
# cd REST-Client-243
# perl Makefile.PL
# make
# make install

To use it, it’s very simple:

$ /usr/lib/nagios/plugins/check_foreman_puppet_failure.pl -H webserver.example.com -F http://foreman.example.com -w 3 -c 5 -u username -p password

This command checks the host’s last Puppet run reports in Foreman. If the number of runs in an error state is greater than the warning or critical threshold, the nagios check returns the corresponding status.
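
To wire it into Nagios, a command definition along these lines can be used (a sketch; the Foreman URL, thresholds and credentials are placeholders):

# e.g. /etc/nagios3/conf.d/check_foreman_puppet.cfg
define command {
    command_name    check_foreman_puppet_failure
    command_line    /usr/lib/nagios/plugins/check_foreman_puppet_failure.pl -H '$HOSTNAME$' -F http://foreman.example.com -w $ARG1$ -c $ARG2$ -u username -p password
}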

Now you can monitor your Puppet runs thanks to Nagios and Foreman!
