# DevOps Training Session 11: Cloud - Pipeline (**Azure-Pipeline**)
###### tags: `devops` `reliable` `research`
Yo! Today I am bringing a new, powerful piece of DevOps: Pipelines, where we make everything automatic --> Let's start :stars:
## Pipeline overview
- A DevOps pipeline is a set of automated processes and tools that allows both developers and operations professionals to work cohesively to build and deploy code to a production environment. While a DevOps pipeline can differ by organization, it typically includes build automation/continuous integration, automation testing, validation, and reporting. It may also include one or more manual gates that require human intervention before code is allowed to proceed. (*based on [atlassian.com](https://www.atlassian.com/)*)
![](https://i.imgur.com/TFNu8Gz.png)
- So, for the cloud side of my platform: if you use Azure, you should check out Azure Pipelines. Cool stuff, and not bad for configuring pipelines
![](https://i.imgur.com/Iszkf1s.png)
- CI/CD is the process that watches over every commit brought into the project, and it runs again and again during development until the product is complete and released
![](https://i.imgur.com/hcJvwTp.png)
- Shout out to **BillNgo** - my mentor - for the briefing on how a pipeline works and what its components are
- A pipeline comes with:
    - Stages: the biggest component; a pipeline can contain many stages
    - Jobs: smaller than a stage; one stage can have many jobs, and each job runs on a runner, i.e. a VM such as a GitHub Actions VM, a GitLab runner, an Azure VM, ...
    - Steps: the smallest unit of a pipeline; this is where the actual automated work happens --> it produces the artifacts
- And triggers are one of the best things that make pipelines so helpful: you can trigger by a commit or a pull request, so the code is checked out and shipped exactly where we need it, 100% automatic :cool:
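- To make the `stages - jobs - steps` hierarchy and the trigger concrete, here is a minimal Azure Pipelines YAML sketch (the names `Build` and `BuildJob` are placeholders I made up):

```yaml
# Run automatically on every push to the main branch (the trigger)
trigger:
  branches:
    include:
      - main
stages:
  - stage: Build             # biggest unit: a stage
    jobs:
      - job: BuildJob        # a stage holds one or more jobs, each runs on an agent
        steps:               # a job is a list of steps, the smallest unit
          - script: echo "doing the automatic work"
            displayName: Build step
```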
## Azure-Pipeline
- So let's go through Azure: in Azure DevOps we meet Azure Pipelines, and it is pretty strong, so let's dive deep into it
- Like I said, it includes everything from the concepts above, but it is a bit of work to set up the runner for your pipeline the first time
### Agent
- An agent is the precondition for any pipeline to do the job it should do.
- Azure offers a free Microsoft-hosted VM as the default agent, but there are a few things you might not expect:
    - You must submit a request to get this free VM
    - You only get 1800 pipeline minutes per month
    - The request takes 3-4 days to get a response
    - Identity on the hosted VM is complicated, and it can expose your important information. So I think it is better to set up the agent yourself and do everything on it
- Here are the steps to set up a VM as a pipeline agent; I will provision everything with Terraform so it is easy to deploy and turn the VM into an agent
- First, create an agent pool to put the agent inside
![](https://i.imgur.com/jE1kcYx.png)
- After that, create an agent by clicking "New agent" and configuring it like this
![](https://i.imgur.com/Tc87Ni0.png)
- You can set this up on your own machine, but to grant an identity over all the resources in the cloud --> I will configure it on an Azure VM --> then I can use IAM/RBAC on the VM and manage everything easily, while restricting what it can do to limit the damage to the cloud --> Let's get started
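- For example, the kind of RBAC grant the IAM module performs can be expressed as an equivalent Azure CLI call. This is a hedged sketch with placeholder IDs (not my real module), printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: grant the VM's managed identity a role on a scope.
# Both IDs below are placeholders, not real values.
set -eu

PRINCIPAL_ID="00000000-0000-0000-0000-000000000000"   # managed identity principal ID
SCOPE="/subscriptions/00000000-0000-0000-0000-000000000000"

# Printed instead of run, because the IDs are fake:
echo az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Contributor" \
  --scope "$SCOPE"
```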
***Set up the VM for the agent --> in this session I will show you how to reproduce this setup from code***
![](https://i.imgur.com/BeOW4SF.png)
- You can provision anything you want with a layout like this: I pass everything in through variables and provide them to each module --> easy
- `Linux-Agent` is the name of the agent I want to create and use for the pipeline. You could just as well have a Windows agent or a CentOS agent --> whatever you want, just customize the variables you pass into `main` and provide to the modules
- The agent needs just three modules:
1. IAM: sets up RBAC to grant permissions over your resources. **Really important**
2. Network: as you know, a VM needs a network to do anything. **Really basic :smiley:**
3. VM: your agent is built on top of this
- In detail:
```
### Linux-Agent
## provider.tf
# Get the state backend from the private storage account
terraform {
  backend "azurerm" {
    resource_group_name  = <rg-name>
    storage_account_name = <sa-name>
    container_name       = <container-name>
    key                  = <key-storeConfig>
  }
}

provider "azurerm" {
  features {}
}
```
But I ran into trouble when I left the provider version unpinned: it cost me a lot of time to understand and debug --> so do it like this, and you will not miss it like I did next time :smile:. Pin a version you know is stable for your project.
```
### Linux-Agent
## provider.tf
# Get the state backend from the private storage account
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.43.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = <rg-name>
    storage_account_name = <sa-name>
    container_name       = <container-name>
    key                  = <key-storeConfig>
  }
}

provider "azurerm" {
  features {}
}

## variables.tf
variable "resource_group_name" {
  type        = string
  description = "Resource group name of the module"
}
variable "location" {
  type        = string
  description = "Location of the resource group"
  default     = "southeastasia"
}
variable "os" {
  type        = string
  description = "OS of the module"
  default     = "linux"
}
variable "tag" {
  type        = map(string)
  description = "Tags of the module"
}
variable "url_org" {
  type        = string
  description = "URL of the organization that grants access to the pool"
  sensitive   = true
}
variable "auth_type" {
  type        = string
  description = "Authentication type for the pool"
  default     = "pat"
}
variable "token" {
  type        = string
  description = "Token for the identity"
  sensitive   = true
}
variable "pool" {
  type        = string
  description = "Name of the agent pool to create"
}
variable "agent" {
  type        = string
  description = "Name of the agent to create"
}

## main.tf
# Root resource group that holds the agent resources
resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
  tags     = var.tag
}
module "iam" {
  source              = "../modules/iam"
  os                  = var.os
  resource_group_name = var.resource_group_name
  resource_group_id   = azurerm_resource_group.main.id
  location            = var.location
  subscription_target = data.azurerm_subscription.main.id
  depends_on = [
    azurerm_resource_group.main
  ]
}
module "network" {
  source                  = "../modules/network"
  os                      = var.os
  resource_group_name     = var.resource_group_name
  address_prefixes_subnet = ["10.0.4.0/24"]
  service_endpoints       = ["Microsoft.Storage"]
  tag                     = var.tag
  depends_on = [
    module.iam
  ]
}
module "vm" {
  source              = "../modules/vm"
  os                  = var.os
  resource_group_name = var.resource_group_name
  nic_id              = module.network.nic_id
  user_identity_id    = module.iam.user_identity_id
  public_key          = data.azurerm_ssh_public_key.main.public_key
  tag                 = var.tag
  url_org             = var.url_org
  token               = var.token
  pool                = var.pool
  agent               = var.agent
  depends_on = [
    module.network,
    module.iam
  ]
}
```
- Everything else is just `module` blocks that point at the module source directories --> group whatever you want into each one. `data.tf` is simply the file where you read the data sources you need --> so I will not go over it in this case
- Once you have all this code --> the next step is running Terraform: init --> plan --> apply. After about 10 minutes (I don't remember exactly how long it took) --> you get the agent VM you want
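- Locally, that flow looks like the sketch below. The backend values in `backend.hcl` are placeholders I made up (using `-backend-config` is just one way to keep them out of `provider.tf`), and the Terraform commands are guarded so the sketch is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the init --> plan --> apply flow for the agent code.
# All backend values are placeholders, not real Azure resources.
set -eu

# Partial backend config, kept out of provider.tf
cat > backend.hcl <<'EOF'
resource_group_name  = "rg-tfstate"
storage_account_name = "satfstate"
container_name       = "tfstate"
key                  = "linux-agent.tfstate"
EOF

# Only run Terraform if it is actually installed on this machine
if command -v terraform >/dev/null 2>&1; then
  terraform init -backend-config=backend.hcl
  terraform plan -out=tfplan
  terraform apply tfplan
else
  echo "terraform not installed; skipping"
fi
```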
- But one important thing is still missing: how to apply your userdata so a script turns the VM into an agent, without touching the UI
```
#!/bin/bash
# Userdata: turn a fresh VM into an Azure Pipelines agent
apt update && apt upgrade -y
apt install -y pass gnupg2
# Install the Azure CLI and sign in with the VM's managed identity
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az login --identity
# Download and configure the agent as the normal user (config.sh refuses root)
su -c "curl https://vstsagentpackage.azureedge.net/agent/2.214.1/vsts-agent-linux-x64-2.214.1.tar.gz --output /home/${user}/download.tar.gz" ${user}
su -c "cd /home/${user} && tar zxvf download.tar.gz && ./config.sh --unattended --url ${url} --auth ${auth} --token ${token} --pool ${pool} --agent ${agent} && ./run.sh &" ${user}
```
- Let's talk about the tricky parts of this script, because `config.sh` is picky about which user runs it. Remember these points to get the setup done early :smiley:
    - First, it does not permit running with sudo, so you need `su` to switch from root down to a normal user
    - It has a non-UI mode --> run `./config.sh --help` and you can see all the flags to use after that :coffee:. Quite tricky, but it makes sense
    - When you use these flags with `config.sh`, you must run it via `su` as a normal user; if you run it with a sudo/root account it will reject the process
- Don't waste your time like I did LOL :coffee:. After that you get what you want
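- One more note: `./run.sh &` dies if the VM reboots. The agent package also ships `svc.sh`, which registers the agent as a systemd service; a hedged sketch (it assumes you are inside the extracted agent directory after `config.sh`, so it checks first):

```shell
#!/usr/bin/env bash
# Register the agent as a systemd service instead of backgrounding run.sh.
# Assumes config.sh has already been run in this extracted agent directory.
set -eu

if [ -x ./svc.sh ]; then
  sudo ./svc.sh install   # create the systemd unit for this agent
  sudo ./svc.sh start     # start the agent service
  sudo ./svc.sh status    # confirm it is listening for jobs
else
  echo "svc.sh not found: extract the agent package and run config.sh first"
fi
```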
![](https://i.imgur.com/spHyGvk.png)
### Pipeline
- So, like we said in the overview, let's walk through what we actually do with a pipeline
- After setting up your own agent, the next step is using that agent to produce pipelines for you :small_airplane:
- Azure Pipelines has some extensions I have met; some are quite useful and some are not, it depends on your choice :sweat_smile:. I will give an overview of setting up a pipeline:
1. Choose the place where you push your code; this is used for the trigger and whatever else the pipeline needs
![](https://i.imgur.com/LB9tCzZ.png)
2. Select the project and branch you want
![](https://i.imgur.com/1RUoHQA.png)
3. Set up the environment for your pipeline (not obligatory); you can also pick the basic option with a blank config
![](https://i.imgur.com/vtRf3S1.png)
4. In this step a window opens where you put the YAML file for the pipeline and whatever variables you want to give it.
![](https://i.imgur.com/LSUHSKP.png)
5. Save and run
6. You can edit any pipeline later and change its YAML file for whatever purpose you have :coffee:. It can be configured like in step 3 above
![](https://i.imgur.com/FAA4Zoj.png)
- That is all for the options you can choose when creating and editing a pipeline.
- For this task I have two pipelines, one to apply Terraform and one to destroy it. They share the same procedure; only the environment changes. The script below is what I wrote
```
## azure-pipelines.yaml
trigger: none

pool:
  name: linuxAgent

stages:
  - stage: terraform_plan
    jobs:
      - job: terraform_plan
        steps:
          - task: CmdLine@2
            inputs:
              script: |
                export TF_LOG=DEBUG
                sudo apt install unzip -y
                az login --identity
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: 'latest'
          - task: TerraformTaskV3@3
            displayName: 'Terraform init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              workingDirectory: $(workingDirectory)
              backendServiceArm: $(serviceConnection)
              backendAzureRmResourceGroupName: $(resourceGroup)
              backendAzureRmStorageAccountName: $(storageAccount)
              backendAzureRmContainerName: $(storageContainer)
              backendAzureRmKey: $(storageKey)
          - task: TerraformTaskV3@3
            displayName: Terraform Validate
            inputs:
              provider: 'azurerm'
              command: 'validate'
              workingDirectory: $(workingDirectory)
          - task: TerraformTaskV3@3
            displayName: Terraform Plan
            inputs:
              provider: 'azurerm'
              command: 'plan'
              workingDirectory: $(workingDirectory)
              environmentServiceNameAzureRM: $(serviceConnection)
            env:
              TF_VAR_resource_group_root_name: $(resourceGroupRoot)
              TF_VAR_resource_group_name: $(resourceGroupDev)
              TF_VAR_container_registry_name: $(containerRegistry)
              TF_VAR_source_image_name: $(sourceImage)
              TF_VAR_ssh_public_key_name: $(sshPublicKey)
              TF_VAR_allowed_ips: $(allowedIPs)
  - stage: terraform_apply
    dependsOn: [terraform_plan]
    condition: succeeded('terraform_plan')
    jobs:
      - deployment: terraform_apply
        environment: $(environmentName)
        strategy:
          runOnce:
            deploy:
              steps:
                - task: TerraformInstaller@0
                  inputs:
                    terraformVersion: 'latest'
                - task: TerraformTaskV3@3
                  displayName: 'Terraform Init'
                  inputs:
                    provider: 'azurerm'
                    command: 'init'
                    workingDirectory: $(workingDirectory)
                    backendServiceArm: $(serviceConnection)
                    backendAzureRmResourceGroupName: $(resourceGroup)
                    backendAzureRmStorageAccountName: $(storageAccount)
                    backendAzureRmContainerName: $(storageContainer)
                    backendAzureRmKey: $(storageKey)
                - task: TerraformTaskV3@3
                  displayName: Terraform Apply
                  inputs:
                    provider: 'azurerm'
                    command: 'apply'
                    commandOptions: '-auto-approve'
                    workingDirectory: $(workingDirectory)
                    environmentServiceNameAzureRM: $(serviceConnection)
                  env:
                    TF_VAR_resource_group_root_name: $(resourceGroupRoot)
                    TF_VAR_resource_group_name: $(resourceGroupDev)
                    TF_VAR_container_registry_name: $(containerRegistry)
                    TF_VAR_source_image_name: $(sourceImage)
                    TF_VAR_ssh_public_key_name: $(sshPublicKey)
                    TF_VAR_allowed_ips: $(allowedIPs)
```
- I will explain some parts of this; for the rest you can refer to the [yaml-schema](https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/?view=azure-pipelines)
- Trigger: where you declare the trigger you want; if you leave it out, the default is to trigger on every push to your branch (here `trigger: none` turns that off)
- Pool: where the pipeline runs; this is the name of the agent pool we created in the previous step
- The structure goes `stages - jobs - steps`, repeated as often as your pipeline's purpose requires
- You can see `$(anything_else)`: these are variables, and you set them in the Variables tab of the pipeline when you create or edit it
![](https://i.imgur.com/QRCX6et.png)
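- Under the hood, the `env:` block of the Terraform tasks simply exports each pipeline variable as a `TF_VAR_<name>` environment variable, which Terraform maps onto the matching `variable "<name>"` block. A quick local sketch (values are made up):

```shell
#!/usr/bin/env bash
# Terraform picks up any TF_VAR_<name> env var as the value of variable <name>;
# this is exactly what the env: mapping in the pipeline does.
set -eu

export TF_VAR_resource_group_name="rg-linux-agent"   # placeholder value
export TF_VAR_pool="linuxAgent"                      # placeholder value

# Show what Terraform would see:
env | grep '^TF_VAR_' | sort
```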
- Click save; the variables are passed in for that stage's run, and they do not carry over automatically when the stage changes, so keep this method in mind to get them where you need
- Then just wait and collect the result from the pipeline
![](https://i.imgur.com/iBy7Iog.png)
- You can go into the details of a run to check how it worked and the script output of each step
![](https://i.imgur.com/QYr3EPJ.png)
## Conclusion
- So this session was about how pipelines work on the cloud.
- CI/CD is the big picture and the pipeline is a part inside it --> when you build products you need and want CI/CD in your DevOps, because it automates everything and cuts the time spent on repetitive work
![](https://i.imgur.com/YkOPBwE.png)
![](https://i.imgur.com/avqHjqh.png)
## Ending
I hope this session delivered some reasons and skills for you, if you want to use pipelines in your project, and showed what they can do for you. I will be back with a new session on Packer, and I hope you will see it. Peace and happy implementing !!! :coffee:
## Reference
[yaml-schema](https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/?view=azure-pipelines)
[pipeline azure-devops](https://learn.microsoft.com/en-us/azure/devops/pipelines/?view=azure-devops&viewFallbackFrom=azure-pipelines)