Puppet 6 cluster setup


This solution includes a complete configuration to set up a Puppet 6 cluster.

The Puppet server is dockerized, and the clients can run any Puppet-compatible operating system. The solution includes a preconfigured set of recipes and a flexible folder/module structure for adding custom client configurations.

Features

Functionalities are distributed to nodes based on the following criteria:

  • hostname
  • node role (api node, logging node, database node, vpn node)
  • app role (logging master, logging worker, user-facing app, internal-api)
  • environment (staging, production, qa)

Other criteria can be added quickly, since all of them are based on Facter. Examples of additional criteria are IP addresses, CPU types, AWS tags (when running on AWS), or any other parameter that Facter can detect (this also includes custom facts provided via dynamic bash/sh scripts).
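
As an illustration, a custom fact can be as small as a shell script dropped into Facter's external-facts directory. The `node_role` fact and the hostname naming convention below are hypothetical, shown only as a sketch:

```shell
#!/bin/sh
# Facter executes every executable file in /opt/puppetlabs/facter/facts.d/
# and turns each key=value line printed on stdout into a fact.
# This sketch derives a node_role fact from the hostname prefix,
# e.g. a host named "web-03" would report node_role=web.
hn="$(hostname -s)"
echo "node_role=${hn%%-*}"
```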

The example application comes with the following features working out of the box:

  • NTP synchronization
  • Docker installation
  • User management (users, ssh access, sudoers)
  • Encrypted secrets distribution (using asymmetric key encryption)
  • Networking management (static or dynamic IP configuration using networkd)
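
For the static-IP case, the networkd template (see `templates/networkd/static.conf` in the tree below) boils down to a systemd-networkd unit along these lines; the interface name and addresses here are placeholder values:

```ini
# Sketch of a systemd-networkd .network unit with a static address.
[Match]
Name=eth0

[Network]
Address=10.0.0.10/24
Gateway=10.0.0.1
DNS=10.0.0.1
```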

Other features pre-installed:

  • AWS EC2 tags usable as Facter facts
  • PuppetDB/PuppetExplorer enabled to visualize the Puppet status
  • node setup script (init_node.sh) to add nodes to the cluster with a convenient step-by-step process

Testing

To test changes locally, this package provides a set of Vagrant files that let you spin up VMs as if they were part of the cluster. The VMs can be used to validate changes before committing them and before applying them to the production cluster.
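
A typical local test loop, assuming one of the preconfigured test VMs (the exact Vagrant workflow may differ per Vagrantfile):

```shell
cd test/test1
vagrant up          # boot the VM; it registers against the Puppet server
vagrant provision   # re-run provisioning after editing manifests or data
vagrant destroy -f  # tear the VM down when done
```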

Preview

.
├── code
│   └── environments
│       └── production
│           ├── data
│           │   ├── app_role
│           │   │   └── myapp.yaml
│           │   ├── clientcert
│           │   │   ├── test1.eyaml
│           │   │   └── test1.yaml
│           │   ├── common.yaml
│           │   ├── env
│           │   │   ├── prod.yaml
│           │   │   └── staging.yaml
│           │   └── node_role
│           │       ├── logging.yaml
│           │       ├── puppetmaster.yaml
│           │       └── web.yaml
│           ├── keys
│           │   ├── public_key.pkcs7.pem
│           │   └── private_key.pkcs7.pem
│           ├── local-modules
│           │   ├── app
│           │   │   ├── manifests
│           │   │   │   ├── init.pp
│           │   │   │   └── network.pp
│           │   │   └── templates
│           │   │       └── networkd
│           │   │           ├── dhcp.conf
│           │   │           └── static.conf
│           │   ├── app_accounts
│           │   │   └── manifests
│           │   │       ├── init.pp
│           │   │       └── user.pp
│           │   ├── app_docker
│           │   │   └── manifests
│           │   │       └── init.pp
│           │   └── app_openvpn
│           │       ├── files
│           │       │   └── vpn_keys
│           │       ├── manifests
│           │       │   └── init.pp
│           │       └── templates
│           ├── manifests
│           │   └── site.pp
│           ├── modules
│           ├── metadata.json
│           └── Puppetfile.lock
├── docker
│   └── puppetserver
│       └── Dockerfile
├── docker-compose.dev.yml
├── docker-compose.dev.yml.example
├── docker-compose.live.yml
├── docker-compose.yml
├── init_node.sh
├── Makefile
├── README.md
└── test
    ├── test1
    │   └── Vagrantfile
    ├── test2
    │   └── Vagrantfile
    ├── test3
    │   └── Vagrantfile
    └── Vagrantfile.example
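
The `data/` layout above suggests a Hiera 5 hierarchy roughly like the following sketch. The fact names (`app_role`, `node_role`, `env`) and lookup order are inferred from the directory names; the repository's actual hiera.yaml may differ:

```yaml
# Hypothetical hiera.yaml matching the data/ directories shown above.
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node encrypted secrets"
    lookup_key: eyaml_lookup_key
    paths:
      - "clientcert/%{trusted.certname}.eyaml"
    options:
      pkcs7_private_key: keys/private_key.pkcs7.pem
      pkcs7_public_key: keys/public_key.pkcs7.pem
  - name: "Per-node data"
    path: "clientcert/%{trusted.certname}.yaml"
  - name: "App role"
    path: "app_role/%{facts.app_role}.yaml"
  - name: "Node role"
    path: "node_role/%{facts.node_role}.yaml"
  - name: "Environment"
    path: "env/%{facts.env}.yaml"
  - name: "Common defaults"
    path: "common.yaml"
```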
