---
---

[toc]

### Introduction and Definitions

### Introduction to Configuration Management

#### Why CM / Automation

:::warning
- Reduce human errors
- Increase consistency
- Reduce time and effort
- Reusability
- Improve productivity
- Ease of maintenance
:::

#### Popular CM Tools

:::warning
**On-prem**
- Ansible
- Chef
- Puppet
- SaltStack
- CFEngine
- PowerShell DSC

**Cloud-based**
- AWS OpsWorks
- Hosted Chef Server
- Hosted Puppet Enterprise Server
- AWS Systems Manager
:::

#### IaC vs Config Management

:::warning
- Infrastructure-as-Code --> provisioning of the infrastructure
    - Terraform
    - AWS CloudFormation
    - Azure ARM templates
- Configuration Management --> management and maintenance of the state of the infrastructure
    - Chef
    - Ansible
    - Puppet
:::

#### Configuration Management vs Scripting (PowerShell or bash)

:::warning
- Config management code is easier to write than traditional shell or PowerShell scripts, because it is declarative rather than imperative: you describe the desired end state, and the tool works out the steps to reach it.
:::

#### Terminology in different Config Management tools

![CM Tool Mapping](https://i.imgur.com/AHxRxbk.png)

### Introduction to Ansible

#### Benefits / Features of Ansible

:::warning
- Lower learning curve
- Playbooks are written in YAML
- Works over SSH (no need to open a special port)
- Agentless (push-based)
- Written in Python
:::

#### Ansible Architecture

![AnsibleArchitecture](https://i.imgur.com/NBgAvlC.png)

#### Ansible Terminology

:::warning
- Ansible Controller
- Ansible Modules
- Ansible Ad-hoc commands
```!
Ad-hoc commands are used for executing simple, immediate tasks on remote systems without the need to create and maintain playbooks.
They provide flexibility for running one-time tasks or quick fixes, and are suitable for tasks that don't require complex orchestration or conditionals.
```
- Ansible Playbooks
```!
Ansible playbooks are written in YAML syntax and are used for defining and orchestrating complex configurations, deployments, and management tasks.
They provide a declarative and reusable way of managing infrastructure and application deployments.
Playbooks are suitable for managing tasks across multiple hosts: tasks can be defined once and applied to multiple hosts or groups of hosts.
Playbooks are highly scalable, supporting the management of large and complex infrastructures. They can handle dependencies between tasks, parallelize execution, and provide better control over the order and grouping of tasks.
```
- Ansible Tasks
- Ansible Roles
:::

#### Ansible CLI

:::warning
ansible --> run ad-hoc commands
ansible-playbook --> command-line tool to work with Ansible playbooks
ansible-doc --> check the documentation (similar to man pages)
ansible-inventory --> check and work with your hosts and host groups
ansible-galaxy --> work with Ansible roles
:::

### Installation and Configuration

#### Renaming your Lab machines on Simplilearn

````bash=
## Change hostname (temporary)
sudo hostname controller

## Change hostname (permanent)
sudo hostnamectl set-hostname controller

## Change hostname of the worker nodes
sudo hostnamectl set-hostname worker1
sudo hostnamectl set-hostname worker2
````

#### If you're getting an error message that a signature couldn't be verified

````bash=
Error message:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05

Solution:
cd /etc/apt/sources.list.d/
sudo rm kubernetes.list
sudo rm kubernetes.list.save
sudo apt update
````

#### If you are getting an error message regarding the dpkg lock

````bash=
Error message:
sudo apt install ansible
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 24072 (unattended-upgr)...
Solution:
Syntax: sudo kill -9 <processId>
Example: sudo kill -9 24072
````

#### Step 1: Ansible installation (on the controller node)

````bash=1
sudo apt update
sudo apt install -y software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible

## Validation
ansible --version
````

#### Step 2: Set up password-less authentication for localhost

````bash=1
cd ~
ssh-keygen -t rsa
## Press Enter when asked for the file location and the passphrase
cat .ssh/id_rsa.pub >> .ssh/authorized_keys

Validation 1:
ssh localhost
Expected result: the ssh command above should log you into the target node without asking for a password.

Validation 2:
labsuser@controller:~$ ansible -m ping localhost
Expected output:
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
````

#### Step 3: Copy the SSH public key to the worker nodes

:::warning
**Method 1**

1. Enable password-based authentication via SSH

````bash=
## Edit the sshd_config file (on the worker nodes).
## Note: this step is needed to enable password-based authentication for the first SSH login.
sudo vi /etc/ssh/sshd_config

## change
PasswordAuthentication no
## to
PasswordAuthentication yes

## Restart the service:
sudo systemctl restart sshd
## OR
sudo service sshd restart
````

2. Set a password for labsuser via passwd

````bash=
sudo passwd labsuser
## <provide a password of your choice when prompted>
## <confirm the password of your choice when prompted>
````

3. Copy the public key using the following command:

````bash=1
ssh-copy-id -i .ssh/id_rsa.pub labsuser@172.31.49.9
ssh-copy-id -i .ssh/id_rsa.pub labsuser@172.31.57.187

## Validation
ssh <worker1-ip>
ssh <worker2-ip>
````

**Method 2**

You can also manually copy the public key from the controller to the worker nodes:

````
1. On the controller: cat .ssh/id_rsa.pub --> copy the content
2. On the worker nodes: vi .ssh/authorized_keys --> paste the content and save the file.
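3. Optional: if ssh still prompts for a password after pasting the key, tighten the
   permissions on the worker nodes (with OpenSSH's default StrictModes, sshd ignores
   a .ssh directory or authorized_keys file that is writable by others):
   chmod 700 ~/.ssh
   chmod 600 ~/.ssh/authorized_keys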
````

**Validation**
ssh <worker1-ip>
ssh <worker2-ip>
:::

#### Step 4: Configure the Ansible inventory

:::warning
sudo vi /etc/ansible/hosts

Put the following content in the hosts file and save it:

````
[nodes]
172.31.0.243
172.31.13.243

[webservers]
localhost
172.31.0.243
172.31.13.243
````

Validation:

````bash=
ansible -m ping all
ansible -m ping nodes
ansible -m ping webservers
ansible --list-hosts all
ansible-inventory --list
ansible-inventory --graph
````
:::

:arrow_right: *By the end of the above exercise you should have a 3-node cluster with 1 Ansible controller and 2 worker nodes. We will use this cluster for our future demos and hands-on activities.*

### Working with Multiple inventory files

#### Providing a different inventory file using the -i flag

````bash=1
Syntax:
ansible -m ping -i <filename> <group-name>

Sample command:
ansible -m ping -i newhosts new-nodes
````

#### Setting a new inventory location in ansible.cfg

````ini=1
## /etc/ansible/ansible.cfg
[defaults]
inventory = /etc/ansible/hosts
````

:mag: *Note: check out the following GitHub page for more Ansible configuration defaults and customizations --> https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg*

#### Class activity

:::warning
1. Create a new file called 'dbhosts' containing a sample inventory with 2 or 3 nodes which are your database nodes:
```
## Sample content for the dbhosts file
[db]
172.31.0.243
172.31.13.243
```
2. Call this new file using the -i flag:
```
Command: ansible-inventory -i dbhosts --graph
```
```
Expected output:
labsuser@controller:/etc/ansible$ ansible-inventory -i dbhosts --graph
@all:
  |--@db:
  |  |--172.31.0.243
  |  |--172.31.13.243
  |--@ungrouped:
```
3. Change the default inventory file in the Ansible config file to dbhosts:
```
sudo vi /etc/ansible/ansible.cfg
```
Add the following content:
````
[defaults]
inventory = /etc/ansible/dbhosts
````
Save the file and run the ansible-inventory command again. This time dbhosts should be shown by default.
4. Edit ansible.cfg and revert the inventory file back to hosts.
:::

### Configure DNS-compliant names in the hosts file

````bash=1
sudo vi /etc/ansible/hosts

[nodes]
w1 ansible_host=178.62.122.74
w2 ansible_host=159.65.51.250
````

### References

:::info
- https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html
- https://medium.com/trabe/use-your-local-ssh-keys-inside-a-docker-container-ea1d117515dc
- https://linuxize.com/post/how-to-setup-passwordless-ssh-login/
- https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
:::