ReleaseEngineering/PuppetAgain/Puppetmasters

Within releng, the puppet master should respond at the unqualified hostname <tt>puppet</tt>. This is adjustable through <tt>manifests/settings.pp</tt> for other environments.

== Master System ==

Masters are defined by the 'puppetmaster' module. The module is designed to run on CentOS 6, but in principle there's no reason it couldn't run on any supported operating system (with some modifications). See that module for all of the gory details of how that works. This page just highlights some of the cooler parts.

The puppet manifests at http://hg.mozilla.org/build/puppet are checked out at <tt>/etc/puppet/production</tt>. The masters update their manifests from mercurial once every 5 minutes, with a bit of "splay" added (so the update does not always occur exactly on the 5-minute mark). Any errors during the update are emailed, as is a diff of the manifests when they change; the latter forms a kind of change control.
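
As a rough sketch only (this is not the actual deployment tooling; the script path and mail recipient below are invented for illustration), the update job could look something like this:

 # /etc/cron.d/update-puppet-manifests (illustrative)
 # run the update script every 5 minutes; the script adds its own splay
 */5 * * * * root /usr/local/bin/update-puppet-manifests.sh
 
 # /usr/local/bin/update-puppet-manifests.sh (illustrative)
 #!/bin/bash
 sleep $((RANDOM % 240))              # splay: don't all pull exactly on the 5-minute mark
 cd /etc/puppet/production || exit 1
 old=$(hg identify -i)
 if ! hg pull -u -q 2>/tmp/manifest-update-errors.log; then
     # mail any errors encountered during the update (placeholder address)
     mail -s "puppet manifest update failed on $(hostname)" releng@example.com \
         < /tmp/manifest-update-errors.log
     exit 1
 fi
 new=$(hg identify -i)
 if [ "$old" != "$new" ]; then
     # mail a diff of the change as a lightweight change-control record
     hg diff -r "$old" -r "$new" | mail -s "puppet manifest changes on $(hostname)" releng@example.com
 fi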

The puppet configuration includes 'node_name = cert' and 'strict_hostname_checking = true' to ensure that a host can only get manifests for the hostname in its certificate (which the deployment system gets from DNS).
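
In <tt>puppet.conf</tt> terms, that corresponds to an excerpt along these lines (a sketch, not the complete master configuration):

 # /etc/puppet/puppet.conf (excerpt, sketch)
 [master]
     # use the certificate name, not the agent-reported hostname, to select the node definition
     node_name = cert
     # only match nodes on the complete hostname from the certificate
     strict_hostname_checking = true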

== Hostnames ==

The closest master is available at the unqualified hostnames <tt>puppet</tt> and <tt>repos</tt> (assuming the DNS search path is set correctly), on ports 8140 (puppet), 80 (http), and 443 (https). The http/https URI space looks like this:
* / - see [[ReleaseEngineering/PuppetAgain/Data]]
* /repos - RPM repositories; see [[ReleaseEngineering/PuppetAgain/Repositories]]
** /yum - yum repositories; see ''modules/packages/manifests/setup.pp'' for the list
* /deploy (HTTPS only) - deployment CGI script
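
For example, assuming the DNS search path makes the short names resolve, an agent run and a quick check of the http/https side might look like:

 # agent run against the closest master (puppet protocol, port 8140)
 puppet agent --test --server=puppet --masterport=8140
 
 # data served over plain http
 curl http://repos/
 # the deploy CGI is https-only; -k because the certificate is issued by the internal puppet CA
 curl -k https://puppet/deploy/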

== Environments ==

{{note|Environments don't work yet in 3.2.0.}}

For each member of release engineering, an environment is set up, e.g.:

 [jford]
     modulepath = /etc/puppet/environments/jford/env/modules
     templatedir = /etc/puppet/environments/jford/env/templates
     manifestdir = /etc/puppet/environments/jford/env/manifests
     manifest = $manifestdir/site.pp

and per-user logins are enabled. A clone of the puppet hg repository at this location, along with any necessary secrets and settings, can be used to test and develop changes to puppet. (See also HowTo: Set up a user environment.)
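
To try out changes from such an environment, an agent can be pointed at it explicitly for a one-off run, e.g. (using the <tt>jford</tt> environment shown above):

 # one-off agent run against a user environment, without touching the node's puppet.conf
 puppet agent --test --server=puppet --environment=jford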

Releng users will all have sudo access on the puppet masters, allowing them to diagnose and solve any small issues that come up without depending on IT, although IT is happy to help (and will be required for any changes to the sysadmins puppet configs).

== Synchronization ==

One master in a cluster is designated as the "distinguished master" (DM). This host serves as the hub in a hub-and-spoke synchronization model, which is much easier to implement than a full mesh. If the distinguished master is down for a short time, no harm is done: the other masters can't synchronize, but agents can continue to generate catalogs and receive files.

Masters synchronize secrets by rsyncing the secrets file from the distinguished master periodically. Similarly, data is synchronized from the DM periodically using rsync. If desired, the DM can itself sync from http://puppetagain.pub.build.mozilla.org periodically.
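
As an illustration only (the hostname and paths below are placeholders, not the real values), the periodic sync from the DM might be little more than:

 # on each non-DM master, from cron: pull the secrets file and the data tree from the DM
 rsync -a puppetmaster-dm:/etc/puppet/secrets /etc/puppet/
 rsync -a --delete puppetmaster-dm:/data/ /data/
 
 # if desired, the DM itself can mirror the public data from
 # http://puppetagain.pub.build.mozilla.org on a similar schedule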

All of the SSL key and certificate materials are synchronized using git. There are two git repositories (one bare, one for editing) under <tt>/var/lib/puppetmaster/ssl/</tt>. See the manifests for details on how all of this fits together.
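
A sketch of how the two repositories are used (directory names here are illustrative; the real layout is defined in the puppetmaster module):

 # under /var/lib/puppetmaster/ssl/ (names illustrative):
 #   ssl.git/   bare repository that the other masters pull from
 #   working/   clone used for editing, e.g. adding newly signed certificates
 cd /var/lib/puppetmaster/ssl/working
 git add certs/ private_keys/
 git commit -m "add newly signed certificates"
 git push origin master      # into the bare repository; the other masters then pull from the DM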

== Cert Signing ==

All of our installation tools are scriptable. These tools are responsible for fetching a signed certificate from the puppet master and installing it on the client before its first boot. This transaction is authenticated with a shared secret (a password). For systems where the base image is access-restricted, the password is embedded in the image. For other systems (e.g., kickstart), the password must be supplied by the person doing the imaging at the beginning of the process.
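
The exact deploy transaction is described in the pages linked below; on a master outside this infrastructure, the same result can be had by signing certificates by hand, e.g. (the hostname is illustrative):

 # on the new agent: generate a key and submit a certificate signing request
 puppet agent --test --server=puppet --waitforcert=60
 
 # on the master: inspect and sign the pending request
 puppet cert --list
 puppet cert --sign newhost.build.example.com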

See [[ReleaseEngineering/PuppetAgain/Puppetization Process|Puppetization Process]] and [[ReleaseEngineering/PuppetAgain/Certificate Chaining|Certificate Chaining]] for details on this system.