A DevOps pipeline is a set of automated processes and tools that allows both developers and operations professionals to work cohesively to build and deploy code to a production environment. While a DevOps pipeline can differ by organization, it typically includes build automation/continuous integration, automation testing, validation, and reporting. It may also include one or more manual gates that require human intervention before code is allowed to proceed. (based on atlassian.com)
Shout out to BillNgo, my mentor, for briefing me on how a pipeline works and the components inside it.
Pipelines come with:
Stages: the biggest component in a pipeline; a pipeline can contain many stages.
Jobs: a smaller unit than a stage; one stage can have many jobs, and each job runs on a runner (a VM such as a GitHub Actions VM, GitLab VM, Azure VM, …).
Steps: the smallest unit in a pipeline; this is where the actual automated work happens –> producing the artifacts.
And triggers are one of the best things that make pipelines so helpful, because you can trigger them on a commit or pull request, so the code ships exactly where you need it, 100% automatically.
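As a quick illustration, here is a minimal Azure Pipelines YAML skeleton showing how a trigger, stages, jobs, and steps fit together (the branch name and the echo commands are just placeholders):

```yaml
# Run the pipeline on every commit to main
trigger:
  - main

stages:
  - stage: Build            # a stage groups related jobs
    jobs:
      - job: BuildJob       # a job runs on one agent/runner
        steps:              # steps are the actual automated work
          - script: echo "building the code..."
            displayName: Build
  - stage: Test
    jobs:
      - job: TestJob
        steps:
          - script: echo "running the tests..."
            displayName: Test
```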
Now let's go through Azure: in Azure DevOps we meet Azure Pipelines, which is pretty strong, so let's go deep into it.
Like I said, it includes everything from the concepts above, but it takes some work to set up a runner for the pipeline the first time.
Agent
An agent is a precondition for any pipeline to do the job it should do.
In Azure, we can get a free Microsoft-hosted VM configured as the default agent, but there are a few things we may not expect:
You must submit a request to get this VM.
You only get 1,800 pipeline minutes per month.
The request takes 3-4 days to get a response.
Identity management for the hosted VM is complicated, and it can expose your important information. So I think it is better to set up the agent yourself to do all this work.
Steps to set up a VM as a pipeline agent: I will provision everything with Terraform so it is easy to deploy and turn the VM into an agent.
First, you should create a pool to put the agent in.
You could set this up on your own machine, but to grant an identity over all the resources in the cloud –> I will configure a VM in Azure –> then I can use IAM for RBAC on the VM, which makes everything easy to manage while restricting anything that could increase the damage to the cloud, so –> let's get started.
Set up the VM for the agent –> In this section I will show you how to reproduce the code for the VM.
You can provision anything you want with this setup: I pass the values in through variables and provide them to each module –> easy.
Linux-Agent is the name of the agent I want to create and use for the pipeline. You could use a Windows agent or a CentOS agent instead –> whatever you want; just customize the variables you pass into main and provide to the modules.
So the agent needs just three modules:
IAM: used to set up RBAC, which grants permissions to your resources. Really important.
Network: as you know, a VM needs a network to do anything. Really basic.
VM: the virtual machine itself, which will be configured as the agent.
###Linux-Agent
## provider.tf
# Get the backend state from the private storage backend
terraform {
  backend "azurerm" {
    resource_group_name  = <rg-name>
    storage_account_name = <sa-name>
    container_name       = <container-name>
    key                  = <key-storeConfig>
  }
}

provider "azurerm" {
  features {}
}
But I ran into trouble here: when you don't pin a version on your provider, it will cost you a lot of time to understand and debug. So do it like the following, and try again next time if you hit something like this with a provider.
Choose a version you know is stable for your project:
###Linux-Agent
## provider.tf
# Get the backend state from the private storage backend
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.43.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = <rg-name>
    storage_account_name = <sa-name>
    container_name       = <container-name>
    key                  = <key-storeConfig>
  }
}

provider "azurerm" {
  features {}
}
## variables.tf
variable "resource_group_name" {
  type        = string
  description = "Resource group name for the module"
}

variable "location" {
  type        = string
  description = "Location of the resource group"
  default     = "southeastasia"
}

variable "os" {
  type        = string
  description = "OS of the module"
  default     = "linux"
}

variable "tag" {
  type        = map(string)
  description = "Tags of the module"
}

variable "url_org" {
  type        = string
  description = "URL of the organization that grants access to the pool"
  sensitive   = true
}

variable "auth_type" {
  type        = string
  description = "Authentication type for the pool"
  default     = "pat"
}

variable "token" {
  type        = string
  description = "Token for the identity"
  sensitive   = true
}

variable "pool" {
  type        = string
  description = "Agent pool to create and register the agent in"
}

variable "agent" {
  type        = string
  description = "Name of the agent to create in the pool"
}
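To show how the values flow in, here is an example `terraform.tfvars` for these variables (every value is a placeholder you must replace with your own; never commit the PAT token to version control):

```hcl
# terraform.tfvars - example values only, all placeholders
resource_group_name = "rg-linux-agent"
location            = "southeastasia"
os                  = "linux"

tag = {
  environment = "devops"
  owner       = "your-name"
}

url_org   = "https://dev.azure.com/<your-organization>"
auth_type = "pat"
token     = "<your-pat-token>" # keep this secret, out of version control
pool      = "Linux-Agent-Pool"
agent     = "Linux-Agent"
```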
## main.tf
# Create the new resource group in Azure
resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
  tags     = var.tag
}

module "iam" {
  source              = "../modules/iam"
  os                  = var.os
  resource_group_name = var.resource_group_name
  resource_group_id   = azurerm_resource_group.main.id
  location            = var.location
  subscription_target = data.azurerm_subscription.main.id

  depends_on = [
    azurerm_resource_group.main
  ]
}

module "network" {
  source                  = "../modules/network"
  os                      = var.os
  resource_group_name     = var.resource_group_name
  address_prefixes_subnet = ["10.0.4.0/24"]
  service_endpoints       = ["Microsoft.Storage"]
  tag                     = var.tag

  depends_on = [
    module.iam
  ]
}

module "vm" {
  source              = "../modules/vm"
  os                  = var.os
  resource_group_name = var.resource_group_name
  nic_id              = module.network.nic_id
  user_identity_id    = module.iam.user_identity_id
  public_key          = data.azurerm_ssh_public_key.main.public_key
  tag                 = var.tag
  url_org             = var.url_org
  token               = var.token
  pool                = var.pool
  agent               = var.agent

  depends_on = [
    module.network,
    module.iam
  ]
}
The module blocks just reference the source directory of whichever module you want and wire everything together in one place. data.tf is an easy file where you just fetch the data you need –> so in this case I won't go through it.
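For completeness, a sketch of what data.tf could contain, based on the data sources main.tf references (the SSH key name and its resource group are assumptions you would fill in):

```hcl
## data.tf
# The current subscription, used by the iam module for subscription_target
data "azurerm_subscription" "main" {}

# An existing SSH public key, used by the vm module for the admin login
data "azurerm_ssh_public_key" "main" {
  name                = "<ssh-key-name>"
  resource_group_name = "<rg-of-ssh-key>"
}
```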
After writing all this code –> the next step is to run terraform init –> plan –> apply. After about 10 minutes (I don't remember exactly how long it took) –> you have the agent infrastructure you wanted.
But one more important thing is missing: how to apply your userdata so the script runs and turns the VM into an agent without touching the UI.
This script is a bit tricky, because config.sh behaves differently depending on which user runs it. Remember this when setting it up so you finish early.
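As a sketch, the userdata could look like the script below (assumptions: an Ubuntu VM, agent version 3.232.0, and placeholder values that Terraform would normally inject, e.g. via templatefile). Note the user-type gotcha mentioned above: config.sh refuses to run as root unless you export AGENT_ALLOW_RUNASROOT.

```shell
#!/bin/bash
set -euo pipefail

# Placeholder values - in practice Terraform injects these (url_org, token, pool, agent)
URL_ORG="https://dev.azure.com/<your-organization>"
TOKEN="<your-pat-token>"
POOL="<pool-name>"
AGENT="<agent-name>"
AGENT_VERSION="3.232.0" # assumed version; pin the one you need

# Download and unpack the Azure Pipelines agent
mkdir -p /opt/myagent && cd /opt/myagent
curl -fsSL -o agent.tar.gz \
  "https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz"
tar zxf agent.tar.gz

# config.sh will not run as root by default; either run it as a normal
# user (e.g. azureuser) or explicitly allow root as below
export AGENT_ALLOW_RUNASROOT=1
./config.sh --unattended \
  --url "$URL_ORG" \
  --auth pat \
  --token "$TOKEN" \
  --pool "$POOL" \
  --agent "$AGENT" \
  --acceptTeeEula

# Install and start the agent as a service so it survives reboots
./svc.sh install
./svc.sh start
```

Because it is userdata, this runs once at first boot; after it finishes, the agent should appear online in the pool.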
Those are all the options you can choose when creating and editing a pipeline.
For this task I have two pipelines, one to apply Terraform and one to destroy it. They share the same process; only the environment changes. The script below sets up what I wrote:
Trigger: this is where you put the trigger you want; by default it will trigger on your branch if you don't specify this section.
Pool: where you want the pipeline to run; it is the name of the agent pool we created in the previous step.
And you will have a structure that goes stages –> jobs –> steps, repeated as your pipeline's purpose requires.
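Putting those pieces together, the apply pipeline could be sketched like this (the pool name, the $(environment) variable, and the step layout are assumptions; the destroy pipeline would be the same with `terraform destroy` in place of plan/apply):

```yaml
trigger:
  - main                    # run on commits to main; change to your branch

pool:
  name: Linux-Agent-Pool    # the self-hosted agent pool from the previous step

stages:
  - stage: Terraform
    jobs:
      - job: Apply
        steps:
          - script: terraform init
            displayName: Terraform init
          - script: terraform plan -out=tfplan -var "environment=$(environment)"
            displayName: Terraform plan
          - script: terraform apply -auto-approve tfplan
            displayName: Terraform apply
```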
You can see $(anything_else): this is a variable, and you can set it in the Variables tab of the pipeline when you set it up or edit it.
Click Save, and the variable is passed into the stage's session; when you change stages it will disappear, so look up a method to carry it across stages.
And the result: wait and get the outcome from the pipeline.
You can go into the details to check how it worked and the scripts it ran.
Conclusion
So this session talked about how pipelines work in the cloud.
CI/CD is the bigger picture and the pipeline is a part of it –> when you build something, you need and want CI/CD in your DevOps practice, because it helps you automate everything and reduces the time spent on repetitive work.
Ending
I hope this session delivered some reasons and skills for you if you want to use pipelines in your project, and showed what they can do for you. I will be back with a new session on Packer, and I hope you'll see it. Peace and happy implementing!!!