Server upgrades or configuration changes should never be performed on live servers. Instead, you should create new servers that include the upgrades, and then stop using the old ones. The benefit is immutability at the infrastructure level: you program against fresh, known-good servers and are never affected by configuration drift.
Nodes
Our infrastructure project will contain a file named nodes.yml, which defines the node names and the AWS security groups they belong to. Keeping this data in a simple format is useful because it can be consumed by multiple other tools, such as Vagrant. The file looks like this:
```yaml
elasticsearch:
  group: logging
zookeeper:
  group: zookeeper
redis:
  group: redis
  size: m2.2xlarge
```
Rakefile
We will use nodes.yml together with rake to produce the Packer templates that build out new AMIs. Most Packer templates share similar or related features, so generating them all from one source lets us manage them as a unit. The Rakefile is given below:
```ruby
require 'erb'
require 'yaml'

namespace :packer do
  task :generate do
    current_dir = File.dirname(__FILE__)
    nodes = YAML.load_file("#{current_dir}/nodes.yml")
    nodes.each_key do |node_name|
      include ERB::Util
      template = File.read("#{current_dir}/packs/template.json.erb")
      erb = ERB.new(template)
      File.open("#{current_dir}/packs/#{node_name}.json", "w") do |f|
        f.write(erb.result(binding))
      end
    end
  end
end
```
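With the Rakefile in place, the generation task would be invoked from the project root along these lines (the exact invocation is an assumption; it follows standard rake namespace syntax):

```shell
rake packer:generate
```

This writes one packs/<node_name>.json template for every node listed in nodes.yml.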
The Rakefile renders a simple ERB template, injecting each node name into it. The template is shown below:
```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-10314d79",
    "instance_type": "t1.micro",
    "ssh_username": "ubuntu",
    "ami_name": "<%= node_name %> {{.CreateTime}}",
    "security_group_id": "packer"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "packs/install_puppet.sh"
  }, {
    "type": "shell",
    "inline": [
      "sudo apt-get upgrade -y",
      "sudo sed -i /etc/puppet/puppet.conf -e \"s/nodename/<%= node_name %>-$(hostname)/\"",
      "sudo puppet agent --test || true"
    ]
  }]
}
```
With the above code, a Packer template is generated for each node. Each generated template installs Puppet via a shell script, upgrades the installed packages, writes the node name into puppet.conf, and runs the Puppet agent once.
The Puppet agent should not be enabled as a service, so that the server does not keep polling for updates. Once Puppet has completed its run, we can remove it from the server so that it does not get baked into the AMI.
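The removal step is not shown in the article; a final provisioning command of roughly this shape, run just before Packer captures the image, would accomplish it:

```shell
# hypothetical last provisioner step: strip the Puppet agent
# so it is not baked into the resulting AMI
sudo apt-get remove --purge puppet -y
```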
The Script
With Packer, the user can specify shell files and shell commands to be run. Shell is ideal for this kind of bootstrapping, but it is no substitute for the configuration management that Puppet provides. Our Packer templates therefore call a shell script that ensures we are not stuck with the old Ruby version shipped by the Linux distribution. The server name of the Puppet master is also written to the configuration as part of the installation. The script is given below:
```shell
#!/bin/bash
sleep 20
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
sudo apt-get remove ruby1.8 -y
sudo apt-get install ruby1.9.3 puppet -y
sudo su -c 'echo """[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
[agent]
server = ip-10-xxx-xx-xx.ec2.internal
report = true
certname=nodename""" >> /etc/puppet/puppet.conf'
```
The next step in our process is to build a new AMI for the redis node by running the following command:
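The command itself did not survive in the article; given that the Rakefile writes the generated template to packs/redis.json, it would presumably be:

```shell
packer build packs/redis.json
```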
Once you execute that command, a server is created, configured, imaged, and finally terminated. Note that every AMI you keep incurs a storage cost. The cost of a single AMI may be small, but with many of them it adds up quickly, which is why old images have to be cleaned up. This is a simple task, done as shown below:
```python
import os

import boto
from fabric.api import task


class Images(object):
    def __init__(self, **kwargs):
        self.conn = boto.connect_ec2(**kwargs)

    def get_ami_for_name(self, name):
        (keys, AMIs) = self.get_amis_sorted_by_date(name)
        return AMIs[keys[0]]

    def get_amis_sorted_by_date(self, name):
        # AMI names have the form "<node_name> <creation_time>",
        # as set by the "ami_name" field in the Packer template
        amis = self.conn.get_all_images(filters={'name': '{}*'.format(name)})
        AMIs = {}
        for ami in amis:
            (name, creation_date) = ami.name.split(' ')
            AMIs[creation_date] = ami
        keys = list(AMIs.keys())
        keys.sort()
        keys.reverse()  # newest first
        return (keys, AMIs)

    def remove_old_images(self, name):
        (keys, AMIs) = self.get_amis_sorted_by_date(name)
        # keep the newest image, deregister everything older
        while len(keys) > 1:
            key = keys.pop()
            print("deregistering {}".format(key))
            AMIs[key].deregister(delete_snapshot=True)


@task
def cleanup_old_amis(name):
    '''
    Usage: cleanup_old_amis:name={{ami-name}}
    '''
    images = Images(
        aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'])
    images.remove_old_images(name)
```
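Since cleanup_old_amis is a Fabric 1.x task, it would be run from the command line roughly as follows (the task name and credential variables match the code above; placing the module in a fabfile is an assumption):

```shell
# assumes the module above is importable by Fabric (e.g. saved as fabfile.py)
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
fab cleanup_old_amis:name=redis
```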
You can run the above on a schedule, and it will ensure that only the latest AMI for each node is kept. If you want to retain, say, your last five AMIs for archival purposes, the retention threshold is easy to tweak. Data stores would make this trickier: you would have to boot replicas of the primary instances, promote the replicas to primaries, and then retire the old primaries.
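The "keep the last five" tweak only changes the retention threshold in remove_old_images. A minimal sketch of that logic (the helper name is hypothetical; the real code would still deregister each AMI via boto):

```python
def amis_to_deregister(keys, keep=5):
    # keys are the creation-date strings returned by get_amis_sorted_by_date,
    # sorted newest-first; retain the `keep` most recent and return the rest
    return keys[keep:]

# newest-first list of seven fake creation dates
dates = ["2014-07-0{}".format(d) for d in range(7, 0, -1)]
print(amis_to_deregister(dates))           # the two oldest dates
print(amis_to_deregister(dates, keep=1))   # everything but the newest
```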
Ravindra Savaram is a Content Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.
Copyright © 2013 - 2022 MindMajix Technologies