Deploying End-to-End Infrastructure on AWS Using Terraform
Most of us have used public cloud services like AWS, Azure, and Google Cloud Platform. Creating infrastructure on these platforms is pretty easy and straightforward when done manually.
When we talk about infrastructure, we talk about networks, subnets, firewalls, storage, load balancers, and so on; and when we talk about automating our infrastructure, we talk about reusability, reliability, and shareability.
There are already many configuration management tools on the market, such as Chef, Ansible, and Puppet. These tools can automate the services running inside a virtual machine, but we need a reusable process to build the infrastructure itself. The idea is to treat infrastructure the same way as an application.
So all the principles that apply to software development, such as version control, can be applied to infrastructure too. Because the infrastructure is code, it can be shared and rolled back to a previous version if needed.
What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can help with multi-cloud by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like AWS, Microsoft Azure, and Google Cloud Platform, or on-premises in private clouds such as VMware vSphere, OpenStack, or CloudStack. Terraform treats infrastructure as code (IaC), so you never have to worry about your infrastructure drifting away from its desired configuration.
Task: Create and launch an application using Terraform
1. Create a key pair and a security group that allows ports 80 and 22.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Launch an EBS volume and mount it on /var/www/html.
5. The developer has uploaded the code to a GitHub repo, and the images to another repo.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and update the code in /var/www/html with the CloudFront URL.
To perform this practical, you should have Terraform and the AWS CLI installed on your local machine.
STEP 1: First, we declare our cloud provider and give our account details so that Terraform can access our AWS account. We also provide the region where we want to work.
Note: I assume you have already configured your AWS CLI profile.
Here I am using my profile:
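The original snippet is not shown here; a minimal provider block could look like the following. The profile name "myprofile" and the region "ap-south-1" are assumptions; substitute your own.

```hcl
# Assumed CLI profile name and region; adjust to your setup.
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
```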
STEP 2: Now we create a security group that allows HTTP and SSH inbound traffic from all sources. Here, I have also enabled the ICMP protocol from all sources, which allows us to ping the instance.
Similarly, we create an outbound rule allowing traffic to all IPs and ports.
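A sketch of such a security group, assuming the default VPC and an older (pre-4.x) AWS provider (the resource and group names here are my own choices):

```hcl
# Security group allowing SSH (22), HTTP (80), and ICMP ping from anywhere.
resource "aws_security_group" "allow_http_ssh" {
  name        = "allow_http_ssh"
  description = "Allow HTTP, SSH and ICMP inbound traffic"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ICMP ping"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound rule: allow all traffic to all IPs and ports.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```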
STEP 3: Then we create a private/public key pair using the tls_private_key resource and save the private key on our local machine for future reference.
We provide the public key created above to the aws_key_pair resource, which creates a key pair for us on AWS. This key pair will be used to connect to our instance whenever required.
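A sketch of this step, using the hashicorp/tls and hashicorp/local providers (the key and file names are my own assumptions):

```hcl
# Generate an RSA key pair locally.
resource "tls_private_key" "webserver_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Save the private key on the local machine for future SSH access.
resource "local_file" "private_key" {
  content         = tls_private_key.webserver_key.private_key_pem
  filename        = "webserver_key.pem"
  file_permission = "0400"
}

# Register the public key with AWS as a key pair.
resource "aws_key_pair" "webserver_key" {
  key_name   = "webserver_key"
  public_key = tls_private_key.webserver_key.public_key_openssh
}
```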
STEP 4: Now we launch our instance using the key and security group created in the above steps. I have used the Amazon Linux 2 AMI, which is similar to RHEL 8.
After creation, we connect to the instance using the key and its public IP. Our main aim here is to configure the OS so that it can host our web page.
For this, we install the required software (the Apache web server, PHP, and Git) and enable the httpd service. I have used the below script to do the same.
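A sketch combining the instance launch with a remote-exec provisioner for the installation. The AMI ID below is an assumption (an Amazon Linux 2 image; look up the current ID for your region), as is the instance name MyWebServer, which matches the attribute reference used later in this article:

```hcl
# Assumed Amazon Linux 2 AMI ID; verify the current one for your region.
resource "aws_instance" "MyWebServer" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.webserver_key.key_name
  security_groups = [aws_security_group.allow_http_ssh.name]

  # SSH connection details used by the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webserver_key.private_key_pem
    host        = self.public_ip
  }

  # Install the web stack and enable the httpd service.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd php git",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "MyWebServer"
  }
}
```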
STEP 5: Now it's time to create a new EBS volume and attach it to our instance. Here, I have created a volume of size 1 GiB. One thing to note is that the volume must be created in the same availability zone as the instance, but at the time of writing the configuration we don't know which zone the instance will land in. To solve this, Terraform lets us reference attributes of other resources; here, I have used aws_instance.MyWebServer.availability_zone.
Finally, we attach the created volume to our instance using the aws_volume_attachment resource. I have enabled force_detach so that the volume can be detached even while in use when destroying the infrastructure.
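A sketch of the volume and its attachment (the resource names and the /dev/sdf device name are assumptions):

```hcl
# 1 GiB EBS volume in the same availability zone as the instance.
resource "aws_ebs_volume" "web_volume" {
  availability_zone = aws_instance.MyWebServer.availability_zone
  size              = 1

  tags = {
    Name = "web_volume"
  }
}

# Attach the volume; force_detach lets `terraform destroy` succeed
# even if the volume is still mounted inside the instance.
resource "aws_volume_attachment" "web_volume_attach" {
  device_name  = "/dev/sdf"
  volume_id    = aws_ebs_volume.web_volume.id
  instance_id  = aws_instance.MyWebServer.id
  force_detach = true
}
```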
STEP 6: Save the public IP address of the instance to a text file on our local machine.
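One way to sketch this, using the hashicorp/local provider (the output filename is an assumption):

```hcl
# Write the instance's public IP to a text file in the project folder.
resource "local_file" "public_ip" {
  content  = aws_instance.MyWebServer.public_ip
  filename = "public_ip.txt"
}
```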
STEP 7: Now we have to format the attached volume and mount it on the default web server directory (/var/www/html). For this, we again connect to our instance and run a script. We also clone our GitHub repository into that directory.
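A sketch using a null_resource with a remote-exec provisioner. The device path /dev/xvdf (how Amazon Linux typically exposes /dev/sdf) and the repo URL (taken from the GitHub link at the end of this article; swap in your own code repo) are assumptions:

```hcl
# Format the attached volume, mount it on /var/www/html,
# and clone the web code into it.
resource "null_resource" "mount_and_clone" {
  depends_on = [aws_volume_attachment.web_volume_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webserver_key.private_key_pem
    host        = aws_instance.MyWebServer.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      # Assumed repo URL; replace with your own code repository.
      "sudo git clone https://github.com/adityamg16/hybridcloud-using-terraform.git /var/www/html/",
    ]
  }
}
```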
STEP 8: Next, we create the S3 bucket and upload the image to it. Here too, I have set the ACL to "public-read" to make the object publicly readable.
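A sketch of the bucket and upload, assuming an older (pre-4.x) AWS provider where aws_s3_bucket_object and bucket-level ACLs are still available. The bucket name and image filename are placeholders:

```hcl
# S3 bucket for images; bucket names must be globally unique.
resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-terraform-image-bucket-12345"
  acl    = "public-read"
}

# Upload the image; "image.png" is a placeholder for your actual file.
resource "aws_s3_bucket_object" "web_image" {
  bucket = aws_s3_bucket.image_bucket.id
  key    = "image.png"
  source = "image.png"
  acl    = "public-read"
}
```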
STEP 9 : Now for the final step, we have to create the CloudFront distribution for the image on our bucket.
It was quite complicated, but finally I managed to create it using the below code -
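A minimal sketch of such a distribution, assuming the bucket and object names from the earlier steps:

```hcl
locals {
  s3_origin_id = "S3-image-origin"
}

# CloudFront distribution serving the image from the S3 bucket.
resource "aws_cloudfront_distribution" "image_cdn" {
  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  default_root_object = "image.png"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

The distribution's domain name (aws_cloudfront_distribution.image_cdn.domain_name) can then be substituted into the code in /var/www/html.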
STEP 10: Now that the project file is done, save it with the .tf extension, i.e., the Terraform file extension.
This is it. We have created the whole infrastructure without touching the web console or the AWS CLI. Now we just have to run this file.
On the terminal, just run the following commands:
Note: Make sure you are in the correct directory in the terminal, i.e., the one containing your project.tf file.
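The standard Terraform workflow for a project like this (plan is optional but a good habit):

```
# Download provider plugins and initialize the working directory
terraform init

# Preview the changes Terraform will make (optional)
terraform plan

# Build the whole infrastructure; -auto-approve skips the confirmation prompt
terraform apply -auto-approve

# Tear everything down when finished
terraform destroy -auto-approve
```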
Now, just copy the IP address and run it in any browser to see your web page.
Note: The IP address will be stored in the same project folder where you saved project.tf.
Here’s mine -
GITHUB REPO :
Below you can find the complete code -
https://github.com/adityamg16/hybridcloud-using-terraform. You are free to use the code and the repository.
You can open the web page and test it. You can change the GitHub repos and file names to suit your needs; everything else is dynamic for the RHEL 8 image.
CONNECT WITH ME ON LINKEDIN:-