One of the tools that was suggested is Ansible, which is an automation tool (something like a remote command executor). Like any other new tech that involves infra, I made a virtual machine as a sandbox to experiment with it.
I also found this book which I will be using as my guide.
I also installed Ansible according to the book; this one command handled everything:
sudo pip install ansible
The only prerequisites were Python and pip.
To have something to act as my "server" to deploy applications on, the guide suggested using Vagrant as a starter, and so I did. Too bad the book was a bit outdated on the versioning, so I had to improvise some bits myself.
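The Vagrant setup itself is tiny. A minimal Vagrantfile in that spirit would be something like the following (the box name and forwarded ports are my own assumptions, picked so the site is reachable on localhost later):

Vagrant.configure("2") do |config|
  # Ubuntu guest; any recent box works
  config.vm.box = "ubuntu/focal64"
  # forward HTTP and HTTPS so the site is reachable from the host browser
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "forwarded_port", guest: 443, host: 8443
end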
We did get the VM running, and we are able to SSH into it. It is important to know the SSH credentials, because they will be used by Ansible to access the machines. Here are the SSH credentials that I obtained
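They come straight out of Vagrant: running vagrant ssh-config in the project directory prints the hostname, port, user, and private key path it generated for the VM.

vagrant ssh-config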
Before we are able to test the connectivity between Ansible and our VM (or any server), we first need to let Ansible know about our server. We can do that by creating an inventory file, plus a small config file.
The first file is the hosts file, which defines the hostnames as well as the SSH credentials. The second file is the config file, which tells ansible how to behave.
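A minimal version of the two files, assuming Vagrant's usual defaults (localhost on forwarded SSH port 2222, user vagrant, and the auto-generated private key), would look something like this.

hosts:

[webservers]
testserver ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant ansible_private_key_file=.vagrant/machines/default/virtualbox/private_key

ansible.cfg:

[defaults]
inventory = hosts
host_key_checking = False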
Once everything seems good, we can use this to test the connectivity.
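With those files in place, the connectivity test is just Ansible's ping module run against everything in the inventory:

ansible all -m ping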
So now that my Ansible instance can connect to my VM, it's time to use that connection to run some actual commands. With that said, we define the imperative instructions in a YAML file which follows a certain format that constitutes it as a playbook.
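I won't claim this is exactly the playbook from the book, but a minimal playbook.yml in that spirit (assuming an Ubuntu guest, hence the apt module) looks like:

- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true
    - name: Copy our site configuration
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/sites-available/default
    - name: Restart nginx and enable it on boot
      service:
        name: nginx
        state: restarted
        enabled: true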
Here, Ansible will SSH into every machine in the webservers group, then install and configure nginx using built-in Ansible modules. To run this playbook, use the command ansible-playbook playbook.yml.
Once the command completes, you should be able to access the webpage on localhost
To support HTTPS in production, we would need to buy a certificate from a certificate authority. For dev, we can generate our own using Let's Encrypt. I ran the commands below to generate a new SSL cert.
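(I didn't keep the exact invocation; with certbot, the Let's Encrypt client, it would be roughly the following, where the domain is a placeholder and must be publicly reachable for validation to succeed.)

sudo apt install certbot
sudo certbot certonly --standalone -d myproject.example.com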
I then changed the playbook accordingly.
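A sketch of what the upgraded playbook grows into (file paths and variable names are illustrative, not necessarily the exact ones used):

- hosts: webservers
  become: true
  vars:
    tls_dir: /etc/nginx/ssl
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Create the TLS directory
      file:
        path: "{{ tls_dir }}"
        state: directory
        mode: "0750"
    - name: Copy the certificate and key
      copy:
        src: "{{ item }}"
        dest: "{{ tls_dir }}"
      loop:
        - files/cert.pem
        - files/key.pem
      notify: restart nginx
    - name: Template out the nginx config (the template can use facts like ansible_host)
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted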
The new playbook unlocks a lot of new features such as handlers, templating, facts and variables, and so on. At this point, we should be able to connect to our website over HTTPS.
And there we go
According to the subject, we have gathered the requirements: we need a docker-compose file to contain all the containers' build instructions. I don't know if that's correct, but there's only one way to find out, I guess.
There are many cloud providers out there which can do this for free, with the most notorious ones being AWS and GCP. No Azure because it's Windows-based, and no Scaleway because I'm not French.
After comparing the free tier plans, I noticed that GCP has more flexibility but fewer features than AWS. Since this is a short-term project, I gave AWS the edge, since flexibility isn't going to give me an advantage if I don't plan to maintain the project or scale it further.
We need something to host our containers, so I plan to use EC2 instances for general-purpose computing to host the containers.
For load balancing with auto scaling, we can use ELB to provide the routing logic in front of an EC2 Auto Scaling group.
For a CDN, AWS has CloudFront, which is needed to improve load times of static files and brings security features such as AWS Shield Standard and WAF.
To make sure I don't have any data loss, I will store my MySQL data on an EBS volume, which is isolated from the EC2 instance and recoverable after the instance is destroyed.
My initial plan is like so
While creating your first EC2 instance, you need to create a new key pair to be able to SSH into it. Here it is on the EC2 instance creation screen
For now, I'm going to allow internet access to the SSH port for debugging. In production, this will live inside a VPC and will only be exposed to the frontend servers.
I am able to SSH into the DB instance using the command
ssh -i ssh/jng.pem ec2-user@ec2-52-64-169-82.ap-southeast-2.compute.amazonaws.com
where ssh/jng.pem is the private key I got from the AWS console when I created the new key pair.
However, the VM is currently barren: no git or docker, only yum and python3. This might be a good time to write an Ansible playbook to bootstrap the EC2 instance.
I need to create a hosts file to tell Ansible which server to connect to; the contents are below
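Something along these lines, reusing the public hostname, user, and key from the SSH command above (the group name database is just what I called it, matching the ping command further down):

[database]
ec2-52-64-169-82.ap-southeast-2.compute.amazonaws.com ansible_user=ec2-user ansible_ssh_private_key_file=ssh/jng.pem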
I also made an ansible.cfg to provide context to the ansible application
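A minimal version just tells Ansible where the inventory lives and skips the host key prompt:

[defaults]
inventory = hosts
host_key_checking = False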
Sanity check to test connectivity
ansible database -m ping
Create a playbook with the contents
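A minimal playbooks/bootstrap.yaml that just gets git onto the box (Amazon Linux, hence the yum module) would be:

- hosts: database
  become: true
  tasks:
    - name: Install git
      yum:
        name: git
        state: present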
To run the playbook, run ansible-playbook playbooks/bootstrap.yaml
After running, you can SSH into the machine manually to verify that git is indeed installed
I proceeded to write a docker-compose.yml with the instructions to start a MySQL container
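Roughly like this; the credentials and database name are placeholders, and the volume path is wherever you want the data to live on the instance:

services:
  mysql:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a real secret
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress           # placeholder
      MYSQL_PASSWORD: changeme        # placeholder
    ports:
      - "3306:3306"
    volumes:
      - ./mysql_data:/var/lib/mysql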
and I modified the playbook to install docker, docker compose and to run the container
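The playbook grows into roughly the following (assuming Amazon Linux 2023, where docker is in the default repos; the docker-compose release pinned here is just an example):

- hosts: database
  become: true
  tasks:
    - name: Install git and docker
      yum:
        name:
          - git
          - docker
        state: present
    - name: Start and enable docker
      service:
        name: docker
        state: started
        enabled: true
    - name: Install the docker-compose binary
      get_url:
        url: https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-linux-x86_64
        dest: /usr/local/bin/docker-compose
        mode: "0755"
    - name: Copy the compose file to the instance
      copy:
        src: ../docker-compose.yml
        dest: /home/ec2-user/docker-compose.yml
    - name: Bring the MySQL container up
      command: /usr/local/bin/docker-compose up -d
      args:
        chdir: /home/ec2-user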
And now we should be able to see the container running in the EC2 instance
To simulate a hardware failure, let's make the instance inaccessible by disabling its network interface: sudo ifconfig enX0 down
Now, to make sure this incident gets detected, we can use CloudWatch to monitor the EC2 instance status and generate alerts if something is not right (in this case, our instance being unreachable).
With this in place, we now want a task that runs once an alert appears. For the first part (a task to run), we can use Lambda functions, which are pieces of code that run in the cloud and are triggered by events happening in your infra, which covers the second part (once an alert appears).
Here is an example of a bootstrap lambda function, which just prints hello world for the time being. I want to configure this to run once an alert is generated by my CloudWatch.
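The bootstrap version is about as small as a Python handler gets:

import json

def lambda_handler(event, context):
    # placeholder: log whatever triggered us; real remediation logic comes later
    print("hello world")
    print(json.dumps(event))
    return {"statusCode": 200}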
To have my CloudWatch events picked up by the Lambda function, I need to create an EventBridge rule where the source is CloudWatch and the target is the Lambda function I made.
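The rule's event pattern just has to match alarm state changes coming from CloudWatch, something like:

{
  "source": ["aws.cloudwatch"],
  "detail-type": ["CloudWatch Alarm State Change"]
}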
And once the two are ready, I added that EventBridge rule as a trigger to my Lambda function
Let's goo, looks like I got a hit
You can't have one alarm for multiple instances together, which means every time a new instance is created, a new alarm must be created. This is tedious. I'm just going to reboot the instance on failure instead.
And as we can see, the error state resolved itself
And the database container started up.
So now we will create an EC2 instance to run phpMyAdmin, with the compose file below
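Roughly as follows; PMA_HOST points at the database instance's private IP (which is about to bite us):

services:
  phpmyadmin:
    image: phpmyadmin:latest
    restart: always
    ports:
      - "80:80"
    environment:
      PMA_HOST: 172.31.0.82
      PMA_PORT: 3306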
We will also create an ansible playbook to automate the provisioning of the instance, with the contents below
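It follows the same pattern as the database bootstrap playbook: install docker (plus the docker-compose binary, same as before), copy the phpMyAdmin compose file over, and bring it up. A rough sketch, with the group name and paths as placeholders:

- hosts: phpmyadmin
  become: true
  tasks:
    - name: Install docker
      yum:
        name: docker
        state: present
    - name: Start and enable docker
      service:
        name: docker
        state: started
        enabled: true
    - name: Copy the phpMyAdmin compose file
      copy:
        src: ../phpmyadmin/docker-compose.yml
        dest: /home/ec2-user/docker-compose.yml
    - name: Start the container
      command: /usr/local/bin/docker-compose up -d
      args:
        chdir: /home/ec2-user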
When all is said and done, we should be able to connect to the public IP of the phpMyAdmin instance
And as we might have noticed, there seems to be an error with the MySQL server connectivity configuration. This is because we are trying to connect to our database instance at 172.31.0.82, which is not exposed to other instances.
After I added the rule to allow the phpMyAdmin instance, I should be able to view the data now.
This is getting a bit repetitive. We will create a docker-compose section and an Ansible playbook to provision a WordPress container.
There will be new directories created:
./nginx.conf will contain the files needed for our nginx webserver configuration
./ssl will contain our self-signed SSL credentials
./wordpress_data will contain the WordPress installation files
Of course, don't forget to change the security group of the database instance to allow your WordPress instance to connect.
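The compose side of it, roughly (the DB credentials are placeholders and must match what the MySQL container was created with):

services:
  wordpress:
    image: wordpress:latest
    restart: always
    environment:
      WORDPRESS_DB_HOST: 172.31.0.82
      WORDPRESS_DB_USER: wordpress      # placeholder
      WORDPRESS_DB_PASSWORD: changeme   # placeholder
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wordpress_data:/var/www/html
  nginx:
    image: nginx:latest
    restart: always
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - wordpress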
And with the configuration above, we get these
This isn't exactly correct, since the CSS should be loaded as well. Upon inspecting the network tab, it turns out there are some hostname errors
Our nginx server just reverse-proxies all requests to the WordPress container. While the nginx container itself knows where to route to, the client does not. Hence we are unable to load things like fonts and CSS.
Then I remembered that for our Inception project, we copied the pre-installed files into nginx's static folder, and we would just use that as the root instead.
For the PHP files, we will route them to WordPress's FPM service as a FastCGI backend.
Hence, we will change our docker-compose like so:
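The relevant change is swapping the WordPress image to its FPM variant and letting nginx mount the same files, so it can serve the static assets directly (credentials are placeholders as before):

services:
  wordpress:
    image: wordpress:fpm
    restart: always
    environment:
      WORDPRESS_DB_HOST: 172.31.0.82
      WORDPRESS_DB_USER: wordpress      # placeholder
      WORDPRESS_DB_PASSWORD: changeme   # placeholder
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wordpress_data:/var/www/html
  nginx:
    image: nginx:latest
    restart: always
    ports:
      - "443:443"
    volumes:
      - ./wordpress_data:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/conf.d:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - wordpress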
which only exposes the FastCGI application on port 9000. We will handle the static files on the nginx side. We change the nginx config to:
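A sketch of the server block, assuming the cert and key live in the mounted ./ssl directory and the shared WordPress files are the document root:

server {
    listen 443 ssl;
    server_name _;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    root  /var/www/html;
    index index.php;

    # static files (CSS, JS, fonts, uploads) are served straight from disk
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # PHP requests go to the WordPress FPM container over FastCGI on port 9000
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass wordpress:9000;
    }
}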
And voilà, looks better now
Before we create an auto scaling group, a launch template needs to be made to tell new EC2 instances how to provision themselves.
They consist of just the initial configuration of the EC2 instance (OS to use, security rules) and a user data script. The user data script is a shell script that is run as root to install and run stuff.
The user data script that I used for my launch template is:
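Something along these lines, assuming Amazon Linux 2023; the repo URL is a placeholder for wherever the compose files live, and the docker-compose release is just an example pin:

#!/bin/bash
# runs once as root on first boot of every instance the template launches
dnf install -y docker git
systemctl enable --now docker
curl -L https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-linux-x86_64 \
    -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
git clone https://github.com/example/my-infra.git /opt/app   # placeholder repo
cd /opt/app/wordpress
/usr/local/bin/docker-compose up -d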
I launched a test EC2 instance and got it to work, but it's not detecting any sessions because of the absence of load balancing and a domain name
To get it to work, I have to create an auto scaling group that specifies the minimum number of instances I want. It also creates a Target Group to group all our AWS instances and specify the routed destinations (port 443 of those instances)
Then, I will need to make two load balancers, one for HTTP and one for HTTPS traffic. The HTTP load balancer will redirect to the HTTPS load balancer, and the HTTPS load balancer shall distribute traffic to the target group.
Then, I would need to create a default SSL certificate key pair for the load balancers that listen on HTTPS, using openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365
and then register them to ACM
Everything looks OK, except that for some reason my WordPress can't access wp-admin and just redirects me to the login page even after a successful login. I suspect it's something to do with my nginx
I applied the fix like so; it's also a good time to test the auto scaling feature now.
Actually it was not fixed until I enabled this option in my target group settings
And it turns out my instances get recreated if I stop them, and the target group auto-updates, which is nice.
To test the load balancing feature, I will display the docker logs for the nginx containers side by side and make sure both of them get hits when I make a request to the load balancer's DNS name.
As you can see, both servers get hits when I make a request, hence validating the load balancer.
I also made an alarm to alert the system when any of the EC2 instances in the auto scaling group is down, and added a dynamic scaling policy which is triggered by the alarm. Here are the activity logs showing that it's working
I created an SNS topic to allow my infra to send out emails, and I attached it to the CloudWatch alarm like so
And once an instance fails, here is the email
I set up AWS CloudFront to act as a CDN for my system.
I also enabled WAF on CloudFront with default settings, which by itself does not do much; however, properly configured alongside other services like AWS Shield, it can protect against common attacks like DDoS, SQL injection, XSS, and much more.
This is unfortunate; I guess I'll need to reach out to support for this matter. Guess I'll have to wait for now.
Solved within 2 hours, but I reread the PDF and found out I might not need it, lol
I have changed all the SSH rules to only allow my own IP, and that should be it.