Category Archives: Planet Libre

How to use the new virsh provider in Foreman 1.4

This morning I decided to play with a new Foreman 1.4 feature: a TFTP, DHCP and DNS provider for my local workstation: virsh.
The virsh provider lets you manage the DHCP and DNS of libvirt's network (via dnsmasq) for local development. It gives you a full provisioning workflow without having to install bind, tftpd and dhcpd.

This post is largely inspired by the Foreman 1.4 manual.

Libvirt configuration

The first thing to do is to define a persistent virtual network in libvirt.
Copy the following into a file named net-definition.xml. You can of course change the network name, IP range, domain name, and so on.

$ cat net-definition.xml
<network>
  <name>default</name>
  <uuid>16b7b280-7462-428c-a65c-5753b84c7545</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:b2:fa:27'/>
  <domain name='fitzdsl.local'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <tftp root='/var/tftproot'/>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <bootp file='pxelinux.0'/>
    </dhcp>
  </ip>
</network>

Then, you need to create and start the default network on libvirt:

# virsh net-define --file net-definition.xml
# virsh net-start default
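
Optionally, you can also make the network start automatically with libvirt and double-check the resulting definition (a quick sanity check, not strictly required):

# virsh net-autostart default
# virsh net-dumpxml default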

We need to set up the TFTP directory (from the Foreman manual, for Fedora):

mkdir -p /var/tftproot/{boot,pxelinux.cfg}
yum -y install syslinux
cp /usr/share/syslinux/{pxelinux.0,menu.c32,chain.c32} /var/tftproot
chgrp -R nobody /var/tftproot
find /var/tftproot/ -type d | xargs chmod g+s

Smart-Proxy configuration

We now need to configure the local smart-proxy to manage TFTP, DNS and DHCP with this new provider.
Set the following in the smart-proxy configuration (settings.yml):

:tftp: true
:tftproot: /var/tftproot
:tftp_servername: 192.168.122.1
:dns: true
:dns_provider: virsh
:dhcp: true
:dhcp_vendor: virsh
:virsh_network: default

Finally, make sure your smart-proxy has sudo rights to run virsh:

Defaults !requiretty
foreman-proxy ALL=/usr/bin/virsh
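
After restarting the foreman-proxy service, you can quickly check that it exposes the expected features (a minimal check; the URL matches the proxy registered in Foreman below):

$ curl http://localhost:8443/features
# should list "dhcp", "dns" and "tftp" among the enabled features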

Foreman configuration

First you need to add your proxy (or refresh its feature list if it already exists):

In Infrastructure:
New proxy: http://localhost:8443

Then you need to create a new domain and subnet:

  • Create a new domain: name it according to the “domain name” in net-definition.xml (fitzdsl.local in my example).
  • Create a new subnet matching your net-definition.xml file.

In my case:

Name: Home
Network address: 192.168.122.0
Netmask: 255.255.255.0
Start IP Range: 192.168.122.2
Stop IP Range: 192.168.122.254
  • In the Domains tab, check the domain you just created.
  • In the Proxies tab, select your new proxy for DHCP, TFTP and DNS.

Create a new VM

When creating a new host, take care to select, in the “Virtual Machine” tab under Network Interfaces:

  • Network Type => Virtual (NAT)
  • Network => “default”

You now have the ability to set up a full local provisioning environment. The only missing piece is that the PTR DNS record is not created.

Many thanks to Lukas (@lzap), who implemented this great new feature!


Using PKGNG on FreeBSD with Puppet

This is how I installed pkgng, the new package manager for FreeBSD, and how to use it with Puppet.
This has been tested on a FreeBSD 8.3 jail with Puppet 3.2.

Pkgng setup

The official documentation is here.

Pkgng installation:

# portsnap fetch update
# portmaster -dB ports-mgmt/pkg

You then need to convert your package database to the new pkgng format. Warning! As mentioned in the documentation, this step is not reversible: you won't be able to use pkg_add anymore after that.

# pkg2ng

To use the pkgng format by default, you must add this to your make.conf:

# echo "WITH_PKGNG=yes" >> /etc/make.conf

Define a new repository for pkgng:

# mkdir -p /usr/local/etc/pkg/repos
# cat << 'EOF' > /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
  url: "http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  enabled: "yes"
}
EOF
# pkg update

That’s it, you should be able to use pkgng:

# pkg update
# pkg install sl # This is my favorite test package!
# sl

pkg may display an error message (it is only a warning):

pkg: Invalid configuration format, ignoring the configuration file

This is a known bug related to the empty /usr/local/etc/pkg.conf file.
pkg 1.2 should fix this problem.

Add pkgng as a package provider in Puppet

A pkgng package provider already exists for Puppet.
You can find the GitHub repository here.

To install the module, you can clone the repo or use puppet module install:

# cd ~puppet/modules/
# puppet module install xaque208/pkgng

I personally don't use the module's manifests (init.pp and params.pp); I only use the new package provider defined in the lib directory.

You now have to change the default package provider for FreeBSD in Puppet. I did it in site.pp:

if $::operatingsystem == 'FreeBSD' {
  Package {
    provider => pkgng,
  }
}

You can now define package resources in Puppet; they will be managed by pkgng!

package { 'sl':
  ensure => installed,  # or 'absent' to remove it
}

Foreman 1.3 has been released

What's new in this release?

Foreman 1.3 has just been released; let's have a look at the content of this new version:

  1. The installer is now based on the Kafo project. I didn't test it because I always install Foreman from a git checkout.
  2. The Hammer project (the Foreman CLI) is moving forward! This is great because Foreman lacked a good CLI at the beginning. However, the Core Team warns that the CLI is still limited; to be continued.
  3. At the Compute Resource level, the much-requested Amazon EC2 VPC (Virtual Private Cloud) support has been included. On top of that, a first version of GCE (Google Compute Engine) support has been released. For now it is quite limited, as it doesn't support VMs that require persistent disk creation.
  4. Spice support for the Libvirt Compute Resource is now available.
  5. Foreman now allows you to transform a VM that was seen as a bare-metal host in Foreman into... a VM associated with a Compute Resource!
  6. There are also some changes on the API side. The API v2 is still 'experimental' but now supports REMOTE_USER, smart classes, network interface management, power management and boot devices. The API v1 is still the default.
  7. A new foreman-rake command now exists. The goal is to provide a generic wrapper for all Foreman-related rake commands (see the example below).
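
For instance, the usual database maintenance tasks can now be run through the wrapper instead of calling rake from the Foreman directory (two illustrative invocations):

# foreman-rake db:migrate
# foreman-rake db:seed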

Translations have been improved. Foreman is now translated into 6 languages. Don’t hesitate to participate on Foreman’s Transifex.

Important before migration

The format of the Reports and Facts structures has changed with the removal of Puppet from Foreman Core.

  • You have to replace your external node script (/etc/puppet/node.rb) on your Puppet master with the new one that you can find there.
  • You also need to replace the report upload script on your Puppet master with the new one.

The removal of Puppet from Foreman Core is a really good thing. It will now allow support for other configuration management systems such as Chef or CFEngine. It should be noted that initial Chef support has already begun; you can find the project here.

The official release notes and changelog can be found here.
Let's migrate!


New webservice to manage monitoring downtimes with Livestatus

To follow up on my previous post about distributed monitoring, I had to update my script that manages Nagios downtimes. I explained my first method in a previous article.
I completely rewrote the webservice in Python using Livestatus. The sources are available on my GitHub.
This script supports multiple Livestatus daemons.
Using this webservice is quite similar to the old one: you send an HTTP GET request with multiple arguments.
The query format is the following:

ACTION=(schedule-svc-downtime|remove-svc-downtime|schedule-servicegroup-downtime)&MANDATORY_ARGUMENTS

The MANDATORY_ARGUMENTS depend on the ACTION:

  • If ACTION=schedule-svc-downtime, the mandatory arguments are HOSTNAME, SERVICEDESC, DURATION, AUTHOR, COMMENT
  • If ACTION=remove-svc-downtime, the mandatory arguments are HOSTNAME, SERVICEDESC
  • If ACTION=schedule-servicegroup-downtime, the mandatory arguments are SERVICEGROUP, DURATION, AUTHOR, COMMENT

Where:

  • HOSTNAME is the host_name defined in Nagios
  • SERVICEDESC is the service_description defined in Nagios
  • DURATION is the duration in seconds
  • SERVICEGROUP is the servicegroup defined in Nagios
  • AUTHOR is the contact defined in Nagios
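
For example, scheduling a one-hour downtime on a service could look like this (the webservice hostname is illustrative; adapt it to wherever you deploy the script):

$ curl "http://downtime-ws.example.com/?ACTION=schedule-svc-downtime&HOSTNAME=web1.example.com&SERVICEDESC=HTTP&DURATION=3600&AUTHOR=admin&COMMENT=planned%20maintenance"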

The advantages of this new script are:

  • Support for managing multiple monitoring servers from a single endpoint
  • You don't need to run the script on the monitoring server
  • The script is compatible with any monitoring server that has a Livestatus broker. It has been tested with Shinken and Nagios.

If you have any feedback, don’t hesitate to post a comment!


Power management of Bare Metal servers with Foreman

Power management of bare-metal servers is a new feature that comes with Foreman 1.2. You will need to have upgraded both Foreman and the smart-proxy to 1.2 to enjoy it.
With this feature you can start, stop and reboot servers directly from Foreman's web interface or from the REST API.
This has been tested on Dell servers by configuring DRAC.

Smart-proxy configuration

On your smart-proxy, you need:

* The rubyipmi gem installed

# gem install rubyipmi

* ipmitool installed:

# apt-get install ipmitool

Configuration in Foreman

You need to edit the host you want to manage with BMC:

  • Go to foreman/hosts/server.example.com/edit
  • Click on the Network tab
  • Click on 'Add Interface'
  • Set Type to 'BMC'
  • You then need the MAC address of your BMC interface. Connect to your server (server.example.com in my example) over SSH and use the ipmitool command:
# ipmitool lan print

Foreman will automatically add a DNS record and a DHCP lease for your interface.
However, there is a bug in 1.2.0 where Foreman doesn't correctly append the domain to the name of the card (see the open bug). As a workaround, you can put the FQDN of your interface in the Name field (e.g. server-ipmi.example.com).

  • Finally, choose a user/password, then save.

IPMI configuration

On the server itself, you still need to configure the BMC to allow Foreman to use it.
You can use the following ipmitool commands:

# ipmitool lan set 1 ipsrc dhcp
# ipmitool lan set 1 access on
# ipmitool user set name 3 your_user
# ipmitool user set password 3 your_password
# ipmitool channel setaccess 1 3 callin=on ipmi=on link=on privilege=4
# ipmitool user enable 3
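
Before testing from Foreman, you can check that the BMC answers over the network from another machine (a quick sanity check; the hostname is the BMC interface created above):

# ipmitool -I lanplus -H server-ipmi.example.com -U your_user -P your_password chassis power status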

How to use that feature

On the host page, if all went well, you should be able to manage the server's power directly from Foreman.


This is a new feature that integrates Foreman more deeply with all your servers. Foreman is getting better and better!


Distributed monitoring with Nagios and Puppet

In the past I had only one Nagios3 server to monitor all my production servers. The configuration was fully generated with Puppet by Naginator.
This solution, even with its drawbacks (hard to set specific alert thresholds, appliances without Puppet, etc.), is very powerful: I never had to worry about the monitoring configuration, and I am always sure that every host in production is monitored by Nagios thanks to Puppet.
However my needs have evolved and I began to have distributed monitoring problems: 4 datacenters spread between Europe and the USA, and network outages between datacenters raising a lot of false-positive alerts.
I didn't have any performance issues, as I have fewer than 200 hosts and 2K services.

I tried Shinken, really I tried: 2 years ago, and again these last few months.
I had to package it as a Debian package because all of our servers are built unattended: the installation script was not an option for me.

On paper, Shinken was perfect:
* fully compatible with the Nagios configuration
* support for Shinken-specific parameters in Puppet (e.g. poller_tag)
* built-in support for distributed monitoring with realms and poller_tag
* built-in support for HA
* built-in support for Livestatus
* a very nice community and developers

In my experience:
* the configuration was not fully compatible (but adjustments were easy)
* Shinken uses a lot more RAM than Nagios (even if Jean Gabès took the time to write me a very long mail to explain this behavior)
* most importantly for me: the whole stack was, IMHO, not stable and robust enough for my use case: in case of a netsplit, daemons did not resynchronize after the outage, some modules crashed quite often without explanation, there were some problems with Pyro, etc.

In the end I was not confident enough in my monitoring POC, and I chose not to put it in production.

To be clear:
* I still believe that Shinken will, in the (near?) future, be one solution (or THE solution) to replace the old Nagios, but it was not ready for my needs.
* Some people are running Shinken in production, some of them on very large setups, without any problem. My experience should not convince you not to try this product! You need to form your own opinion!

Having decided not to use Shinken, I had to find another solution.

I chose this architecture:
* one Nagios per datacenter for polling
* Puppet to manage the whole distributed configuration (it plays the role of the arbiter in Shinken)
* Livestatus + Check_MK Multisite to aggregate the monitoring views from all datacenters

Puppet tricks

We use a lot of custom Facts in Puppet, and we have a Fact “$::hosting” which tells us in which datacenter a host is located.
In order to split the monitoring configuration between pollers, I use a dynamic target for all Puppet resources bound to datacenters (hosts, services, hostescalation, serviceescalation).

Here is a simplified example of a host configuration as an exported resource:

$puppet_conf_directory = '/etc/nagios3/conf.puppet.d'
$host_directory        = "${puppet_conf_directory}/${::hosting}"

@@nagios_host { $::fqdn:
  tag     => 'nagios',
  address => $::ipaddress_private,
  alias   => $::hostname,
  use     => 'generic-host',
  notify  => Service['nagios3'],
  target  => "${host_directory}/${::fqdn}-host.cfg",
}
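
For completeness, each poller then collects these exported hosts with the usual Naginator collector; this is a sketch of that side, not copied from my manifests (the datacenter split happens through the 'target' directory, not through the tag):

Nagios_host <<| tag == 'nagios' |>>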

All resources common to every poller (contacts, contactgroups, commands, timeperiods, etc.) are generated in one common directory that all Nagios pollers read (i.e. '/etc/nagios3/conf.puppet.d/common').
Finally, in nagios.cfg, each poller reads the right directories for its datacenters.

# ie for nagios1 : 
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/hosting1
# for nagios2 :
cfg_dir=/etc/nagios3/conf.puppet.d/common
cfg_dir=/etc/nagios3/conf.puppet.d/hosting2
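
Once Puppet has deployed the generated files, each poller's configuration can be validated before reloading Nagios (assuming the Debian nagios3 package):

# nagios3 -v /etc/nagios3/nagios.cfg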

I decided not to use tags to filter the exported resources:
this lets me have exactly the same configuration files on each Nagios poller in “/etc/nagios3/conf.puppet.d”; only nagios.cfg changes between pollers.
In case of a problem on one of my pollers, I can very simply take over the monitoring of another datacenter just by adding one more directory to source in nagios.cfg.

With that configuration I have very simple distributed monitoring, thanks to Puppet once again :)

I will explain in a future blog post how to aggregate the views with Livestatus and Check_MK Multisite.


How to integrate Puppet, Foreman and MCollective

Since deploying Foreman in production, we haven't used the 'Run puppet' button in Foreman's interface, because we run Puppet from a crontab.

However, the Foreman 1.2 release changed that: the smart-proxy now has native MCollective integration.

Here is how to set it up. I assume that you already have a working Foreman and MCollective setup.

On all your 'puppet' proxies, you need to:
Install the MCollective client and the Puppet plugin:

# apt-get install mcollective-client mcollective-puppet-client

You need to configure your MCollective client (/etc/mcollective/client.cfg). This configuration should be quite similar to the one you have on your desktop.
You then need to grant the foreman-proxy user the right to run the MCollective client:

# visudo 
Defaults:foreman-proxy !requiretty
foreman-proxy ALL = NOPASSWD: /usr/bin/mco puppet runonce *
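
You can check the sudo rule and the MCollective Puppet plugin by hand before wiring them into the proxy (a quick sanity check; replace the node name with one of yours):

# su foreman-proxy -s /bin/sh -c 'sudo /usr/bin/mco puppet runonce -I myserver.example.com'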

In your proxy configuration :

:puppet: true
:puppet_provider: mcollective

Then restart your smart-proxy (I run it with Apache/Passenger):

# service apache2 restart

You should be able to test your new installation with a simple curl command:

$ curl -d "nodes=myserver.example.com" https://myproxy:8443/puppet/run

In order to be able to use the MCollective integration, I had to add the following directive to my MCollective daemon configuration.

In /etc/mcollective/server.cfg:

identity = myserver.example.com

Finally, in the Foreman settings, you need to set the 'puppetrun' directive to 'true'.

That should be it: you just need to click the 'Run puppet' button on your host page!


How to run foreman-proxy with passenger

I recently decided to run my foreman-proxy daemon with Passenger instead of the commonly used WEBrick.

As we will see, the setup is quite simple. I assume that you already have Apache and Passenger installed (for Foreman, the puppetmaster, ...).

As I install from Git, my smart-proxy is located in /opt; adjust the paths to your own setup!
My Apache configuration (for Apache 2.4) is:

Listen 8444
<VirtualHost *:8444>
  ServerName foreman-proxy.example.com
  ServerAlias proxy1.example.com

  DocumentRoot /opt/smart-proxy/public

  RailsAutoDetect On
  PassengerTempDir /opt/smart-proxy/tmp
  AddDefaultCharset UTF-8
  HostnameLookups On

  SSLEngine on
  SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/proxy1.example.com.pem
  SSLCertificateFile /var/lib/puppet/ssl/certs/proxy1.example.com.pem
  SSLCACertificateFile /var/lib/puppet/ssl/certs/ca.pem

  <Directory /opt/smart-proxy/public>
     Require local
     Require ip 192.168.0.0/16 10.0.0.0/8
  </Directory>

  CustomLog ${APACHE_LOG_DIR}/foreman-proxy.example.com/access.log combined
  ErrorLog ${APACHE_LOG_DIR}/foreman-proxy.example.com/error.log
</VirtualHost>

I decided to use a different listening port for Apache, but you can use the default 8443 port.

As you can see, the SSL configuration is now done at the Apache level and no longer in the smart-proxy.

On the proxy configuration side, it is important to know that the “:trusted_hosts” directive raises a '500 Internal Error'.

The bug has been reported here: http://projects.theforeman.org/issues/2259

Now, you only have to stop your WEBrick smart-proxy daemon and restart Apache.
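
A minimal sequence, assuming a Debian-style init script for the WEBrick-based proxy and the port used above:

# service foreman-proxy stop
# service apache2 restart
# curl -k https://proxy1.example.com:8444/features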

Be careful: if you changed the listening port, remember to update the smart proxies' configuration in Foreman.


How to generate Puppet SSL certificate with “Alternative Name”

I needed to add a DNS alt name in order to set up full SSL communication between my 2 Foreman servers and their proxies.
My problem was that my Foreman servers are used in failover (behind a VIP) and the clients use a generic DNS record rather than the servers' FQDNs directly. This was a problem because the address didn't match the certificate's CN.

In order to fix that, I set up a Puppet certificate whose CN is the FQDN of the server (i.e. foreman1.example.com) and which has a 'Subject Alternative Name' with the VIP address (i.e. foreman.example.com).

This is really simple to do but not that easy to find on the internet.
You first need to revoke the certificate on the master and remove it on the client.
On the client (on Debian):

# rm -rf /var/lib/puppet/ssl

On the master:

# puppet cert clean foreman1.example.com

You should add the following to the client's puppet.conf:

dns_alt_names = foreman.example.com

Then you need to kick Puppet on the client to force the generation of a new certificate and ask the Puppet master to sign it:

# puppet agent -t --report --pluginsync

On the master, you can see the certificate signing request and sign it:

# puppet cert list
  "foreman1.example.com" (SHA256) 2C:76:5B:85:67:28:1C:92:48:AA:10:22:44:C7:9B:A7:0D:9B:E2:A5:5F:10:71:87:B9:3F:46:E4:70:4B:43:6C (alt names: "DNS:foreman.example.com", "DNS:foreman1.example.com")
# puppet cert sign foreman1.example.com --allow-dns-alt-names
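
To double-check that the alt names made it into the signed certificate, you can inspect it with openssl (the path assumes Debian's default Puppet ssldir):

# openssl x509 -in /var/lib/puppet/ssl/ca/signed/foreman1.example.com.pem -noout -text | grep -A1 'Subject Alternative Name'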

You now have a Puppet CA-signed certificate with a DNS alt name.


Foreman migration without problems.

I just migrated my Foreman instances to 1.1 in production (I'll write later about the nice new features in 1.1).

One of the most important tests I do before upgrading production is checking for non-regression of the ENC output. What I mean is that I check that the new Foreman server sends the same YAML to the Puppet master during the ENC lookup. I wrote a small Ruby script (using the external node script from the Foreman community) which compares the YAML responses of 2 Foreman instances (i.e. production and QA).

In order to support parameterized classes, Foreman changed the YAML structure a bit, but the script handles that change.
You can find it on my GitHub repo. You just have to change the 2 URLs of the Foreman instances and set your login and password.

The script stops automatically if it finds a node definition that differs between dev and production. This tool allows me to be more confident before a major Foreman migration.
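
The underlying idea can be sketched with plain curl: fetch the ENC output for one node from both instances and diff them (hostnames are illustrative, and depending on your authentication setup you may need the Puppet certificates instead of basic auth):

$ curl -s -k -u admin:password "https://foreman-prod.example.com/node/myserver.example.com?format=yml" > prod.yml
$ curl -s -k -u admin:password "https://foreman-qa.example.com/node/myserver.example.com?format=yml" > qa.yml
$ diff -u prod.yml qa.yml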
