# DevNet Associate 3

## 37. Automate Everything with NSO

![](https://i.imgur.com/5r8rqmz.png)

- Network Services Orchestrator: a solution for building your network controller from the ground up.
- Why do I need this when I already have a network controller?
  - Cross-architecture
    - ACI worked in the DC with Nexus 9K.
    - DNA Center worked with APIC and Cat9K.
  - Multi-vendor
    - IOS-XR devices have NETCONF or RESTCONF for programmability.
    - NX-OS has its own API.
  - Model-driven programmability: standardize device configuration regardless of vendor or platform. NSO works with anything, even legacy devices that don't have APIs. NSO can connect to IOS, IOS-XR, and NX-OS platforms, pull the running-config off each device, parse it, convert it to a common YANG data model, and store that model in a configuration database.
- Architecture of NSO
  - It only runs on Linux or macOS (Unix).
  - Installing NSO
    - Code: sh nso-5.1.0.1.linux.x86_64.installer.bin $HOME/nso-5.1 --local-install
    - There is no DevNet Sandbox for NSO, which is why we're installing it ourselves.
  - Two core components
    - Service Manager ![](https://i.imgur.com/q7kKvVo.png)
      - Manages network services such as wireless. An entire wireless service needs everything from gateway configuration to WLC configuration and switch configuration. (Out of scope for DevNet.)
    - Device Manager ![](https://i.imgur.com/s1B4VXl.png)
      - Use NSO to manage individual devices themselves.
      - For example, when NSO connects to a Cisco IOS device it runs 'show run', 'show version', and 'show inventory', parses the results, and stores them in the configuration database as a YANG data model.
      - If you use NSO to make changes, it keeps track of them like version control, so you can roll back to previous versions of the config even on IOS, which doesn't support rollbacks natively. ![](https://i.imgur.com/xVjp3Q1.png)
      - For example, Service Instance 1 is a wireless service. It connects to individual devices such as the WLC, gateway, and switches.
        It configures them in order to get the service up and running in its desired state. So you're effectively configuring the service, and NSO takes it from there and configures the individual devices.
- NSO has a web GUI, a REST API, a Python API, and a Java API/SDK. All of these APIs support CRUD operations for the service and device managers, but the Python and Java APIs go a step further and give you access to the configuration database.

### Installing and Configuring NSO

- NSO is free to install if you're not running it in production.
- There are some install dependencies: you need Java, Ant, and Python 2 or 3.
- Link: github.com/dataknox/codesamples
- NCS: configuration commands

### Deeper Exploration of NSO Components

- Netsim here has three IOS routers.
- @ NSO ![](https://i.imgur.com/UuS6afg.png)
  - NSO connected to the devices through Telnet/SSH (depending on how I configured it) and synced the current running-config from those devices into the database.
  - I can also navigate the config tree via 'Configuration' at the top right.
  - Configuration editor: change config; I can choose which module I want to configure.
  - Commit manager: actually commit the configuration changes.
- Enter the CLI commands below for router information.
  - > ncs-netsim cli-i c1 // c1 represents router 1 ![](https://i.imgur.com/d2FUJ2C.png)
    - We can see that the config is prepopulated on this particular router, c1.
  - > ncs_cli -u admin // Enter the NSO command line
  - admin@ncs> show configuration devices device // Show the three device objects and their info
- Summary: NSO pulls data off the IOS devices and converts it to a YANG data model. We can also browse it from the GUI.

### Python Scripting for NSO

- Code ![](https://i.imgur.com/O8jxoHh.png) ![](https://i.imgur.com/b76Q5TI.png)
  - line 4: remote IP address, specifying port 8080; NSO's REST endpoint is 'api'.
  - line 12: the media type starts with vnd.yang.[what I want]. I'm getting a collection of data here.
  - line 15: endpoint of 'devices/device'.
    This returns the three routers we have. The line ends with ".json()".
- Return ![](https://i.imgur.com/2RXpUci.png)

## 38. Automate Cisco Platforms with PowerShell

### Getting PowerShell Core v7

- PowerShell handles connecting to, authenticating against, and querying devices.
- PowerShell is open source.
- Download it from the 'PowerShell Core 7' GitHub releases page.
- Pick the correct installer: Windows, Mac, Unix, or Linux (command line; google the command).

### RESTCONF

- Use PowerShell to implement YANG model-driven programmability against REST API endpoints.
- Code ![](https://i.imgur.com/Fjfv17W.png)
  - line 7: 'Invoke-RestMethod' is the PowerShell cmdlet that handles the REST API connection, passing structured data back and forth.
    - v7: any response it gets back (JSON or XML payloads) is automatically converted to PowerShell objects.
    - v7: it also handles sessions and authentication much better than previous versions.
    - 'Invoke-RestMethod' stores the result as a PowerShell object in the 'response' variable. We'll receive a JSON response back, but the cmdlet converts it to a PowerShell object.
  - line 1: the URI is the DevNet sandbox endpoint for an IOS-XE device, which already has a RESTCONF interface on it. 'Cisco-IOS-XE-interfaces-oper' is a YANG data model, and we're going into the 'interfaces' container to get the data for 'GigabitEthernet1'.
    - interfaces container: config data
    - interfaces-state container: status and operational statistics data
  - line 3: create an encrypted password object by calling ConvertTo-SecureString, specifying the actual password ('D_Vay!_10&'); '-AsPlainText' forces the conversion from plain text to a secure string; the result is stored in the 'password' variable.
  - line 4: create a PSCredential object with two parameters: first the username 'root', second the encrypted password object.
  - line 5: API header. We only accept YANG data in JSON format back.
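As an aside, the RESTCONF request assembled in lines 1–5 can be sketched in Python (the course's other scripting language). This is a hedged, offline sketch: the URL builder and the sample response shape are assumptions modeled on the walkthrough, not the course's exact code, and no live device is contacted.

```python
import json

# Hypothetical sketch of the RESTCONF call described above, in Python.
# Host and interface name are illustrative; adjust for your own device.

def restconf_interface_url(host, interface):
    """Build the RESTCONF URL for one interface in the IOS-XE oper model."""
    base = f"https://{host}/restconf/data"
    model = "Cisco-IOS-XE-interfaces-oper:interfaces/interface"
    return f"{base}/{model}={interface}"

# Only accept YANG data formatted as JSON back (the 'Accept' header in line 5).
HEADERS = {"Accept": "application/yang-data+json"}

def admin_status(payload):
    """Parse name and 'admin-status' out of an interface response body."""
    iface = payload["Cisco-IOS-XE-interfaces-oper:interface"][0]
    return iface["name"], iface["admin-status"]

# Assumed sample response body, so the parsing can be exercised offline:
sample = json.loads("""
{"Cisco-IOS-XE-interfaces-oper:interface":
  [{"name": "GigabitEthernet1", "admin-status": "if-state-up"}]}
""")

name, status = admin_status(sample)
print(f"{name} admin-status: {status}")
```

A live call would pass `restconf_interface_url(...)` and `HEADERS` to an HTTP client along with the sandbox credentials, skipping certificate verification for the self-signed certificate, just as the PowerShell version does.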
  - line 7: '-SkipCertificateCheck' because we're using a self-signed certificate.
  - line 15: take the 'response' variable, convert it to JSON, and write the output. This writes the entire response body out as JSON.
  - line 17: parse out the specific data I want. The quotes are needed to keep '-' and ':' inside the string.
  - line 19: if 'admin-status' equals 'up', write "interface.name is up".
- Result: the PowerShell structure is converted to a JSON object. ![](https://i.imgur.com/bvap5jv.png)

### Meraki

- Goal: get all of the Meraki devices in a specific location.
- Logging in to the Meraki Dashboard.
  - There are no usernames and passwords when dealing with Meraki programmatically.
  - Use an API key provisioned from the Meraki portal to get started.
- Code ![](https://i.imgur.com/4LUu6qn.png) ![](https://i.imgur.com/JmnoluY.png)
  - line 1: endpoint
  - lines 2~5: headers
  - line 7: pick out the correct organization within the Meraki query structure. We don't need any further specification because authentication is handled in the headers (API key). I'll get multiple orgs returned from the DevNet Sandbox.
  - line 12: save the organization ID whose name is 'DevNet Sandbox'.
  - line 18: get the list of networks within that orgId. A network is a branch or location in Meraki terms.
  - line 31: get all devices located in that branch.
  - line 37: get the list of devices and write them out in JSON format.
- Result: all of the devices on DNSMB2. ![](https://i.imgur.com/JEgWt4S.png)

### ACI

- Goal: get the attributes of an Application Profile.
- Code ![](https://i.imgur.com/PO8z2Ud.png) ![](https://i.imgur.com/Tkx3EJ6.png)
  - line 1: endpoint URL
  - lines 2~9: payload. A structured object in PowerShell uses @{}.
  - line 11: header
  - lines 14~15: we POST the 'payload' in as the body, converting it to JSON first.
  - line 19: ACI returns a token rather than a plain body or header value, so PowerShell creates the session variable 's' and stores the token as part of that web session object.
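The login step just described can also be sketched in Python. This is a hedged illustration: the `aaaUser`/`aaaLogin` payload shapes follow the standard APIC login exchange, but the helper names and the sample token are invented for the example.

```python
import json

# Hedged sketch of the APIC login exchange described above.

def build_login_payload(user, pwd):
    """Python equivalent of the PowerShell @{} structured login object."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def extract_token(login_response):
    """Pull the session token out of an aaaLogin response body."""
    return login_response["imdata"][0]["aaaLogin"]["attributes"]["token"]

# Assumed sample response, so the parsing can be exercised offline:
sample = json.loads(
    '{"imdata": [{"aaaLogin": {"attributes": {"token": "abc123"}}}]}'
)
print(extract_token(sample))
```

In a live script the extracted token (or the session the HTTP client keeps for you) is then presented on subsequent GET requests, which is exactly the role the `s` web session plays in the PowerShell version.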
  - line 24: Managed Object tree structure, specifying the Application Profile named 'Save_The_Planet'.
  - line 26: this GET returns a JSON object, but PowerShell automatically converts it to a PowerShell object.
  - line 31: use the session 's' as a WebSession. We don't need to pass the token explicitly because the session is saved in a variable and we can reference it directly.
  - line 33: write that PowerShell object out; I only want to return 'attributes'.
- Return: the token, and the attributes of the 'Save_The_Planet' application. ![](https://i.imgur.com/iQRCJ58.png)

### DNA Center Platform

- Goals
  1. Create an authentication token to use for later requests.
  2. Get client health details.
- Code ![](https://i.imgur.com/8YWuQR3.png) ![](https://i.imgur.com/SSdUkq5.png)
  - The response body from that POST request is JSON, so we need to parse the token out of it.
  - line 1: specify the DNA Center authentication URL.
  - line 4: secured (hashed) password object.
  - line 5: instantiate a new object from PSCredential.
  - lines 9~19: DNA Center authentication requires 'X-Auth-Token'. To get this token we POST with basic authentication. No structured object like ACI.
  - line 21: get the client health of all clients in my DNA Center.
  - line 24: passing the token in the header shows that I'm already validated and authenticated.
  - line 27: the response is a list of scores with details: whose score is good or bad.
  - line: parse out the categories and print how they're doing.
- Result ![](https://i.imgur.com/JtlUaXe.png)

### SD-WAN

- Goal: print all of the devices in the SD-WAN topology.
- Code ![](https://i.imgur.com/rpK6qWo.png) ![](https://i.imgur.com/GzpYe9M.png)
  - vManage handles authentication for SD-WAN.
  - lines 2~5: for SD-WAN authentication we don't POST a JSON payload; we POST a form-style dictionary (object) of j_username and j_password.
  - line 12: no conversion to JSON; we leave this as a PowerShell object.
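The vManage form-style login above can be sketched in Python as well. This is a hedged sketch: the `j_username`/`j_password` field names come from the notes, while the helper names and the assumed `"data"` wrapper around the device list are illustrative.

```python
# Hedged sketch of the vManage authentication step described above:
# the login POST is form-encoded (j_username / j_password), not JSON.

def vmanage_login_form(user, pwd):
    """Form payload posted to vManage for authentication."""
    return {"j_username": user, "j_password": pwd}

def device_names(device_response):
    """Pull host names out of an assumed device-list response body."""
    return [d["host-name"] for d in device_response.get("data", [])]

# Assumed sample response, so the parsing can be exercised offline:
sample = {"data": [{"host-name": "vedge1"}, {"host-name": "vedge2"}]}
print(device_names(sample))
```

The key point mirrored here is the one the notes make: the login body stays a plain dictionary (sent form-encoded) rather than being serialized to JSON first.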
  - line 19: get a list of all edge devices used in the SD-WAN topology.
  - line 25: the WebSession uses the session variable created on line 10.
- Output: all of the edge devices come back as a PowerShell object that we can parse to make programmatic decisions about (configure, manage) the SD-WAN topology. ![](https://i.imgur.com/6th4GdT.png)

## 39. Computing and Application Deployment Models

### Virtual Machines and Containers

- Bare metal refers to the physical computer components: CPU, memory, NIC, disk.
- Virtual machine (popular)
  - A virtual machine runs in the memory of a bare-metal server. VMs share the underlying hardware with other VMs.
  - Completely software-defined, so it's flexible.
  - Software running in a VM is independent.
  - A hypervisor is the software that manages virtual environments on the bare-metal computer. ex. VMware vSphere, VirtualBox, Hyper-V (Windows, Microsoft)
    - Type 1: runs directly on the bare metal. Enterprise-grade software; not something you typically run on a personal machine.
    - Type 2: runs inside the OS of the computer. ex. Hyper-V: you need Windows on the bare-metal machine, the hypervisor runs in the Windows environment, and VMs are shared inside of that.
- Container
  - Containers are another abstraction layer above VMs. A container holds all the software you need to run an application, except the OS.
  - A container is an isolated execution environment. Multiple containers on the same computer won't conflict or interact with each other.
  - A container shares the underlying kernel (OS) of the VM or host.
  - More portable and lightweight than a VM because it doesn't carry all the OS baggage with it.
  - But it requires more management and maintenance.
  - Docker is the management tool for containers. Docker bundles a container up into a single file on your OS; you can duplicate it or send it up to the cloud. All you need to run a container in another environment is the bare-metal resources and an OS.
  - Docker has a CLI. With Docker commands we can interact directly with containers,
    like starting and stopping containers, pulling new container images down from the Docker Hub online repository, or building our own Docker containers from scratch using a Dockerfile.

### Edge Computing

- Edge computing
  - An architectural approach where data processing and storage are moved closer to where the data is collected and used. Processing is pushed out from the centralized server to edge locations.
- Why is it important?
  - Problem with the previous computing model: processing happens centrally and requires my connection to the network.
- Pros
  - Speed/latency
  - User experience
  - Privacy (security): keeping data out at the edge helps keep your data more private.
  - Resiliency: if one location goes down, other locations are still up and running, with automatic failover to them.
  - Scalability: easily add nodes.
- Cons
  - Resource requirements at each edge location
  - Infrastructure complexity to support communications and processing
  - Distribution of security, knowledge, etc.: you lose complete central control of security, access to systems, and the knowledge needed to support and maintain them.
- Content Delivery Network (CDN)
  - A platform of densely distributed servers that minimizes delays in loading web page content by reducing the physical distance between server and user. This lets users around the world view the same high-quality content without slow load times.
  - ex. YouTube.

### Cloud Computing

- Cloud computing
  - Leveraging on-demand resources (processing and storage) and economies of scale to deploy computing solutions.
  - Cost savings: we pay only for what we use but get the full benefit of an entire, enormous infrastructure that's already been built.
  - The cloud provides an entire suite of infrastructure support for application development and deployment. Application companies can quickly and easily scale up to meet demand for their service without worrying about deploying new servers, managing them, or keeping them secure.
  - The cloud supports databases, storage, and even entire computing platforms like VMs.

### Deployment Models

- Cloud computing relies on an accessible infrastructure, accessed via, ex.
    web browser, CLI, or a script using a RESTful API.

1. Public cloud
   - Public cloud deployments leverage publicly available cloud infrastructure over the internet.
   - ex. AWS, Microsoft Azure, Google Cloud Platform
   - Pros
     - It's huge. You can have resources physically close to where your users are.
     - Robust and mature software tools to interface with those services.
2. Private cloud
   - Private cloud deployments leverage privately owned infrastructure within an organization's network.
   - It's for your company only.
   - Pros
     - Privacy.
   - Cons
     - Not as scalable.
     - Less robust software to interface with that cloud.
3. Hybrid cloud (popular)
   - Public + private cloud.
   - For example, a company has an Active Directory server locally inside its network and synchronizes those identities up into Azure.
   - Use cases: backup servers, failover from public to private, duplicating storage for better accessibility.

## 40. Understand the Basics of Docker

### Introducing Containers and Docker

![](https://i.imgur.com/zLb3ed4.png)

- VM: virtualizes hardware.
  - Infrastructure, also called the host, represents the physical server. ex. Dell
  - The hypervisor abstracts the hardware away from the server.
  - A VM virtualizes hardware and creates a new server. Each VM has its own OS.
- Container: virtualizes the OS.
  - A container is the bare minimum needed to run the application. Since it doesn't need its own OS, it's more scalable than a VM.
- Docker: Docker is a container-based open-source virtualization platform. It abstracts programs and their runtime environments into containers and provides a consistent interface, which simplifies deploying and managing programs. Any program - a backend service, a database server, a message queue - can be abstracted into a container and run anywhere: a home-built PC, AWS, Azure, Google Cloud, and so on.
- What we'll do ![](https://i.imgur.com/IcxU6ku.png)
  - Host: Ubuntu.
  - Install Docker on top of it.
  - Turn a Flask REST API into a container. The Flask REST API will use the underlying Ubuntu OS kernel. Python, Flask, and the other packages my API needs will be installed within the container itself.

### Installing Docker

- Windows 2016: get WSL (Windows Subsystem for Linux) up and running.
    With WSL, you can use both Windows and Linux containers.
- Code
  - line 1: remove all old instances of Docker.
  - line 2: update the package repository.
  - lines 5~10: all of the dependencies Docker requires.
  - line 12: check that we've downloaded the correct information.
  - lines 17~20: add the repository.
  - line 22: update our package database so it knows where to get the Docker repository.
  - line 24: install Docker; 'ce' stands for Community Edition.
  - line 27: make sure Docker is up and running. It pulls down the nyancat image from Docker Hub.
    - 'docker run' runs a container if the image has already been downloaded to your machine. If it hasn't, it pulls the image from Docker Hub first and then runs it.
- Docker Hub
  - Docker advantage: you can create a container image and publish it to Docker Hub. It works much like GitHub: anybody can push/pull.
  - With '> docker images' you can see which Docker images have been pulled down to your machine.

### Containerize Your App with a Dockerfile

- The application I have
  - Flask API code ![](https://i.imgur.com/dGUTEFJ.png)
    - I'm listening on /api/endpoint.
  - Result ![](https://i.imgur.com/uNE0iAF.png)
    - Listening on 0.0.0.0, port 5000.
    - Go to the local (loopback) address 127.0.0.1:5000/api/endpoint ![](https://i.imgur.com/6HWTIc4.png)
    - I get the JSON object defined in the code.
  - I need to turn this into a container by using a Dockerfile.
- Create a Dockerfile in the application folder
  - Get the Docker extension for VS Code; it gives you syntax highlighting.
  - Create a file named 'Dockerfile'. A Dockerfile works much like a shell script: it defines a series of commands to run in order to construct my Docker container.
- Code ![](https://i.imgur.com/KtB3I98.png)
  - line 1: base OS - Ubuntu (Linux).
  - line 3: contact information.
  - line 5: make sure the repository is up to date and install the 'python3-pip' and 'python3-dev' packages.
  - line 7: specify the port to allow in and its parameters.
    My Flask API is listening on port 5000, so I will allow traffic in on port 5000 in this container.
- We need the correct version of Flask.
  - @ terminal, in the Docker folder:
    - sudo pip3 freeze > requirements.txt
  - requirements.txt ![](https://i.imgur.com/FWwrjc6.png)
    - Every single library available to pip, with the version I currently have.
    - Keep only 'Flask==1.1.1' in the text file.
  - line 9: copy the 'requirements.txt' file into the container's app folder with the same name, '/app/requirements.txt'.
  - line 11: tell the container to change paths and make '/app' the base (working) directory.
  - line 13: install the requirements; '-r' says we're using a requirements file.
  - line 15: set up my environment; copy the './myAPI/myAPI.py' API file directly into the '/app' folder.
  - The environment is all set; start the API.
  - line 17: specify which program to execute on entry. My entrypoint is the 'python3' program.
  - line 19: the parameter will be 'myAPI.py', so the container runs 'myAPI.py'.
- Result: we turned a Flask API running as a Python script into a containerized (virtualized) application via a Dockerfile. When the container launches, it'll have everything it needs to get up and running.

### Working with Docker Commands

- In the Dockerfile folder, build a Docker container from that Dockerfile. ![](https://i.imgur.com/3IBGRBn.png)
- sudo docker build -t dockerfile:latest .
  - Tag this as the latest version.
  - This builds an image with everything we told it to do: update the repository, install Python, pip-install the correct Flask version, firewall items, etc.
- sudo docker images ![](https://i.imgur.com/jraqXAS.png)
  - See all of the Docker images I've downloaded or created.
- sudo docker run -d -p 5000:5000 dockerfile
  - Run a Docker container from an image already in my repository. By running this command multiple times with different host-side port numbers (like 5001:5000), we can start multiple containers.
  - -d: detached mode. I don't want to enter the Docker container instance; I just want it to run.
  - -p: port forward. "Hey Ubuntu, if you see traffic on port 5000, forward it to the container listening on port 5000." ![](https://i.imgur.com/JPluQNS.png)
    - There's NAT (Network Address Translation) between the host and the container.
    - To reach the REST API application on 172.20.21.49:5000, there are two options:
      - Port forwarding: all traffic comes to 10.10.21.24 and is forwarded based on the port number.
      - Transparent network: the container is bridged out to the physical network and grabs an IP address from DHCP, so every container has its own IP address.
  - dockerfile: the image we want to run.
- sudo docker ps
  - See which Docker containers are up and running.
  - Every container gets a randomly generated name. You can interact with it directly by name.
- Check ![](https://i.imgur.com/Q3033bs.png)
  - Address: 127.0.0.1:5000
  - The container we built from the Dockerfile is up and running.
- sudo docker stop 6ld
  - Stop the container.
  - We can put a name or container ID in '6ld''s place.
- sudo docker ps -a
  - See all of the containers I've run or that are currently running.
- sudo docker container rm nervous_hodgkin
  - Remove a container we no longer need.
- sudo docker images
  - Containers and images are different: containers are the unique instances we've created from an image.

### Pushing and Pulling with Docker Hub

- Purpose: deploy the image to Docker Hub so we can scale our application out to multiple hosts.
- Docker commands
  - sudo docker login --username=khutch6 // log in
  - sudo docker images // pull up the image list
  - sudo docker tag [image ID] [repo:latest] // prep the image for Docker Hub by giving it a Docker Hub tag
    - [repo:latest] == khutch6/flaskdemorepo:latest
  - sudo docker push khutch6/flaskdemorepo // push the containerized Flask API up to the repository
  - sudo docker pull khutch6/flaskdemorepo:latest // download the image from Docker Hub

## 41.
Describe the Components of a CI/CD Pipeline

### What is CI/CD?

- CI/CD
  - Continuous integration, continuous delivery.
  - A set of practices that automates delivery of the software we create to our users.
  - Goal: make deployment easy, predictable, routine, and on-demand.
- CI
  - Automating the combination of code changes from multiple contributors into a single code base.
  - Components
    1. Source code control - git
    2. Build automation - automatic compilation of code. Anything that needs to happen to build or package your software should happen automatically once you check your code changes into source control.
    3. Unit testing - testing of individual components or functions of the software (test-driven development). Ensure each unit produces the result you expect.
    4. Branch merging
    5. Integration testing
- CD
  - Continuous delivery/deployment.
  - Automating the delivery of IT services (code, files, infrastructure such as databases, networks, servers) to users.
  - Components
    1. Central repository - push merged code up to a central repository.
    2. System testing - ex. ensure your code has access to the different services it needs and can communicate with all the processes on your network it must talk to.
    3. Deployment
    4. User-acceptance testing - deliver the software into users' hands and see that it works as expected.

### Integrating, Building and Testing

- Rules
  - Maintain a single source repository. Everything you need to run the source code should be included in that single repository:
    - Source code
    - Test scripts
    - DB schema files
    - 3rd-party libraries
  - A commit to source control kicks off the CI/CD pipeline.
  - Automate the build process.
    - Manual processes are susceptible to mistakes, ex. mistyped commands.
    - It includes compiling, executing DB schema scripts, and creating config scripts.
    - Tools for this: Jenkins, Travis CI (integrates with GitHub).
  - Unit test the built software.
    - All programming languages have unit testing libraries available.
      - JavaScript - Jest
      - Python - unittest
      - C# - NUnit
      - Java - JUnit

### Delivering and Deploying

- Continuous delivery (CD) is about putting software into users' hands.
- Rules
  - Ensure code is stored in a centralized, backed-up location. ex. a git repository
  - Automate system testing to validate the system deployment.
    - Test the overall interaction of the entire system.
    - ex. test that different components can communicate with one another
    - ex2. make sure the right ports are open on the firewall so traffic can come through and reach the web servers
    - ex3. the database is up and running on its server, listening on the correct port
  - Environment cloning: your test environment matches your production environment.
    - Infrastructure as code.
    - Immutable infrastructure: your infrastructure never changes in place; use infrastructure as code every time you need to make a configuration change.
  - Additional pushes to
    - UAT: user acceptance testing - an environment separate from testing that is dedicated to your users.
    - Production

## 42. Secure Data in Your Applications

### Resting or Moving

- Securing data is legally required.
  - PII: Personally Identifiable Information. ex. names + (dates of birth, SSNs, payment data).
  - PHI: Personal Health Information. Regulated by HIPAA.
- Data in transit
  - Protected by
    - Usernames/passwords
    - Keys. ex. SSH
    - Certificates
  - Use HTTPS, SFTP, and SSH instead of HTTP, FTP, and Telnet.
- Data at rest
  - Data stored somewhere should be encrypted.
    - ex. files on disk - disk encryption
    - ex2. data in a database - database encryption
    - ex3. data processed by an application in RAM - should be purged
    - ex4. authentication data in an application - should be stored as environment variables

### Secure Data in Transit

- Clear text can be captured as it moves across the wire. ex. usernames and passwords, data in CLI/REST API sessions, file transfers
- Tool to capture traffic: Wireshark
- Filter the traffic. ex.
    Telnet, SSH
- Get SSL certificates to encrypt data in transit:
  - gogetssl.com
  - letsencrypt.org

### Environment Variables

- Without environment variables, username and password values are stored in code.
- An environment variable is stored in the OS environment.
- Commands
  - export SWITCHUSER='cisco' // Set an environment variable
  - echo $SWITCHUSER // Get an environment variable
- Python can grab environment variables. ![](https://i.imgur.com/WYLF1sl.png)
  1. Get from code
     - line 4: os.environ.get() // Get the environment variable directly from the OS environment
  2. Get from a .env file
     - @ terminal: sudo pip3 install python-dotenv
     - @ '.env' code ![](https://i.imgur.com/ndcComf.png)
     - @ main code
       - line 7: get the environment variable from the .env file. We need to import load_dotenv at the top of the file.

### Encrypting Data at Rest

- Disk encryption
  - Usually built directly into the OS.
  - How? Take a key or password and encrypt every single bit on the disk. Only someone who knows the key can decrypt it.
  - What disk encryption options are there?
    - Windows - BitLocker. You can also use Group Policy to deploy BitLocker across your entire environment and store the encryption keys and passwords in Active Directory; that way you can recover a hard drive from the secure Active Directory database.
    - macOS - FileVault. Recover data with your Apple ID.
    - Linux - Tomb. Install the Tomb application and you'll have a password to decrypt your disk.
- Database encryption
  - How?
    - On SQL Server - with a master key and a database key. The database key is decrypted by the master key and is what's used to encrypt the database.
    - On Azure SQL (cloud) - Always Encrypted. Only the right users/applications have access to the keys to decrypt the Azure SQL database.

## 43. Identify OWASP Standard Threats

### What is OWASP?

- Open Web Application Security Project.
- A non-profit charity dedicated to educating about different types of common attacks and the things you can do to mitigate them.
- owasp.org
  - Information: what each attack does, examples, prevention cheat sheets, etc.

### Cross-Site Scripting (XSS) Attacks

- What is XSS?
  - An injection-style attack.
  - Attackers put malicious code in a submit form so the code gets injected into the backend server.
  - For example: put JavaScript code in a website's comments section. Anyone who browses the site downloads all of its content, including the JavaScript, which then executes. If I'm logged in, my web browser is caching a login token, so the JavaScript can send my token to the attacker. My identity is stolen.
  - XSS steals or alters data.

### SQL Injection Attacks

- A commonly used attack.
- Inject malicious database code into our website. When a person clicks the submit button, it goes back to the database and performs SQL actions (SELECT, INSERT, UPDATE, DELETE). ![](https://i.imgur.com/HV9bL1d.png)
- To avoid this, you need to sanitize your database inputs.
- SQL injection steals or alters data.

### Cross-Site Request Forgery (XSRF) Attacks

- An injection-style attack.
- Forging a request as if it came from someone else.
- For example: a comments section with JavaScript. You're logged in, an auth token is stored in the browser, the JavaScript gets downloaded and executed, and it now has access to all of the permissions and rights that you have.
  - Rather than stealing data, it executes a request as if it were you.
  - ex. purchase something and change the shipping address, transfer money to an unknown account

## 44. Understanding the Basics of Linux and Bash

### Linux Distros

- The first open-source OS.
- GNU + Linux: Linux provides the underlying OS kernel, and GNU provides the 3rd-party packages/libraries on top of it (the "apps").
- Linux keeps a repository of all applications from which you can install them.
- Linux distribution
  - On top of the Linux kernel, people add their own apps, GUIs, and libraries. They can distribute their own OS.
- Variations of Linux
  - Ubuntu
    - Comes from the Debian family.
    - Forms: desktop, server (run Active Directory, DNS, file share, email).
    - Package manager: APT. It's like an app store.
      - > apt install MyApp // The package manager knows where to find the MyApp application and downloads it.
  - CentOS
    - Comes from Red Hat (owned by IBM).
    - CentOS vs. Red Hat
      - CentOS: usually a release or two behind Red Hat. No enterprise support, but it's a lot like working with Red Hat Enterprise Linux.
      - Red Hat: professional-grade support.
    - Package manager: YUM (Yellowdog Updater, Modified).
  - Other Linux distros: SUSE, Kali, Mint, Red Hat, Debian, etc.
- Most network devices, whether from Juniper, Cisco, or others, use Linux under the hood, so knowing basic bash and filesystem navigation will help you.

### Filesystem

- Root folder
  - In Windows: C:\, D:\
  - In Linux: /
- In the GUI ![](https://i.imgur.com/3i4Hi9G.png)
  - Everything on a Linux machine starts in the root folder.
  - A USB drive shows up in the 'dev' folder.
    - To make its contents available, we mount the device from 'dev' into the 'mnt' folder; then you can browse it as folders under 'mnt'.
- In the terminal
  - Prompt: username@machineName
  - Commands
    - pwd // Print the working directory I'm currently in
    - ls -l // List the contents here. Blue denotes folders; others are files. '-l' shows detailed information such as permissions, owner, file size, and creation/modification date.
    - cd // Change directory
    - cd .. // Move up one folder
    - cd ../.. // Move up two folders
    - Relative path: describes the location of a file or folder relative to the current working directory. ex. cd Documents
    - Absolute path: contains the full location of a file or directory. ex. cd /usr/local/bin
    - cd $HOME // Environment variable
      - Environment variable: a dynamic variable on the machine. ex.
        $HOME = /home/knox

### Creating, Editing, Moving and Deleting Files

- Commands ![](https://i.imgur.com/NejUQeg.png)
  - touch basic.txt // Create 'basic.txt'
  - nano basic.txt // Open 'basic.txt' in the nano text editor; Ctrl+X exits
  - cat basic.txt // Output the content straight to the shell
  - vi basic.txt // vi is a popular text editor. Enter insert mode with the 'i' key; ':wq' writes the file and quits.
  - mkdir demofolder // Create a new folder
  - mv ./basic.txt ./demofolder // Move basic.txt into demofolder; './' means the folder I'm currently in
  - cp ./basic.txt ../ // Copy basic.txt one folder up; '../' is the parent folder
  - rm basic.txt // Remove the file

### File Permissions

- Windows uses NTFS for file permissions.
- Linux describes permissions in two dimensions.
  - Who
    - We can assign permissions to the individual user (owner), the group, or others (neither the user nor the group).
  - What action to permit
    - (R)ead [value: 4]
    - (W)rite [value: 2]
    - e(X)ecute - open or run [value: 1]
  - ex. permissions of basic.txt: 7 5 0
    - User has the full rwx (read, write, and execute) privileges because it's 7 (4+2+1).
    - Group has r-x (read and execute), which is 5 (4+1).
    - Others have --- (no permissions) at all. ![](https://i.imgur.com/EYk0Qmc.png)
    - 'd': directory
    - 'rwx': my user has read, write, and execute privileges.
    - 'r-x': the group only has read and execute privileges.
  - chmod 755 basic.txt // Change the permissions on basic.txt to rwxr-xr-x

### Working with Packages

- Ubuntu: in the terminal (apt commands) or the Ubuntu Software app store.
- CentOS: YUM commands.
- Terminal
  - sudo apt update // Make sure all of the repository indexes are up to date
  - apt list --upgradable // Show the packages that need to be updated
  - sudo apt upgrade // Upgrade to the latest software
  - sudo apt install python3 // Install the package I want; apt automatically knows where to get the latest Python 3 and installs it
  - sudo apt install nginx // nginx is a popular Linux web server
- sudo apt remove nginx // Uninstall the nginx app. - echo 'deb https://www.ui.com/downloads/unifi/debian ~' // Add a repository line (appended to your APT sources) to specify the exact download location when you want to install an application Ubuntu doesn't know where to get. ## 45. Identify the Principles of DevOps Practices - Issues with a single big release day - Large set of changes - Long delay between development and deployment - Slow time-to-market - Zero work/life balance - Solution - High-performance teams consistently deliver services faster and more reliably - Agile - Taking your project management and turning it into an iterative and incremental process. Quickly define, build, test, and release changes, updates and fixes to your software. - In contrast: Waterfall process - Use source code control in order to routinely, frequently merge your code changes. - Advantages - Easier integration - Faster time to deployment - Human-centric: no need to block off a four-day weekend for a release ### Continuous and Automated - Continuous makes deployment **routine** - CI/CD - Less risk in development & deployment - Automated - Automation makes deployment **reliable** - **Faster** development & deployment - ex. Software Defined Networking, Infrastructure as Code ### End-to-End Responsibility - Principle - Organizational and functional visibility - Everyone can see the feedback and instrumentation in the production environment - Cross-functional training and skills development - User-centered planning and development. What does the user need to accomplish? - Summary - End-to-end responsibility means every stakeholder in the project lifespan, from development to deployment, has visibility into and responsibility for the entire product lifespan. ## 46. Unit Tests and Test-Driven Development (TDD) ### Unit Testing - Unit testing is the initial testing phase.
- Unit (function, method of a class, module) testing is - Granular - Modular - Automated - Vocabulary ![](https://i.imgur.com/vXDmiQN.png) ![](https://i.imgur.com/CNnPfiN.png) - Unit: The piece of code/automation being tested. ex. lines 4~6, 'calcCircumfrence' - Test case: The code performing the test. ex. lines 7~9, 'TestMyCode' inherits from (subclasses) unittest.TestCase. The method 'test_circumfrence' runs when 'unittest.main()' runs because the method name starts with the word 'test'. - Test runner: The script/execution engine for automating test cases and reporting which of your units passed/failed. ex. line 11 ### Using the Python unittest Module - Test suite - A library or third-party framework that automatically tests our code. - We'll write test cases and compare the result with the expected result. - unittest is the built-in unit testing framework that ships with Python. - Steps - Create test cases by subclassing unittest.TestCase - Define tests by creating methods beginning with 'test'. It's only going to execute methods that begin with the word 'test'. - Assertion methods ![](https://i.imgur.com/LGvgL72.png) - Code ![](https://i.imgur.com/mQGLwWy.png) ![](https://i.imgur.com/ADW3wG6.png) - Each method inside TestMyCode is a test case (unit test). ### Unit Testing with Postman - Postman uses the 'Chai Assertion Library' to enable unit testing in its post-response scripts. - Chai is a Behavior-Driven Development (BDD) assertion library that relies on language chains. ![](https://i.imgur.com/jsYBqQv.png) - @ Postman > Tests ![](https://i.imgur.com/1mcifkW.png) ![](https://i.imgur.com/HDMQ8VG.png) - These are JavaScript scripts that execute after the response comes back from the server. - line 1: The first parameter is the test name and the second is the function that'll actually run our test. - line 2: My response has to have status 200. - Result ![](https://i.imgur.com/3IFXm6d.png) - Another test: Test if my API is responding quickly enough.
- Code ![](https://i.imgur.com/GFWNfYC.png) ![](https://i.imgur.com/N7pPv51.png) - Result ![](https://i.imgur.com/HjDTaPl.png) ![](https://i.imgur.com/1nRsXSm.png) - Another test: This endpoint should have a name of 'C-3P0'. - Code ![](https://i.imgur.com/vFeXkgl.png) ### Test-Driven Development (TDD) ![](https://i.imgur.com/gj5pE2W.png) - Difference: Write a test before the code. Write a test that should absolutely fail when it first runs, then write code to make it pass. - TDD helps to enable or enforce many sound practices for development: - Small, iterative changes - Automated test suite/runner - Tests become design exploration tools: tests become descriptive, documentary objects, things that help you understand what the system is supposed to do. ## 47. Install and Configure Ansible ### Infrastructure as Code - Infrastructure as Code - Rather than configuring devices on the CLI or GUI, our systems and networks are deployed from code and scripts like Python or PowerShell. - Why Infrastructure as Code? - Problem: Developers and infrastructure (operations) teams were not on the same page. ![](https://i.imgur.com/oAstnnC.png) - During development, there'll be a lot of web applications (web pages) and web servers duplicated from the main web app and server. It's hard to manage all of them because developers and operations teams keep making changes in the environments they're using. - Benefit ![](https://i.imgur.com/sS2ZRdR.png) - We have a base level of production code which tells how the infrastructure works "right now". - Quick and reliable environment setup - because they all came from the same code - When things are changed in an individual environment, the changes have to be checked back in, approved and tested in that code. - As they're checked in to this code, the other environments get updated too. - Ansible - A base-code automator and orchestrator. - It deploys all of your infrastructure from the base code. - Ansible checks in and verifies that the code is maintained in its desired state.
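The desired-state idea above can be sketched in plain Python — a toy, illustrative `ensure_line` function (not how Ansible is implemented) that only touches the system when the actual state differs from the declared state, which is why re-running it reports no change:

```python
from pathlib import Path

def ensure_line(path: str, line: str) -> bool:
    """Idempotently ensure `line` exists in the file at `path`.

    Returns True if a change was made (Ansible would report 'changed'),
    False if the system already matched the desired state ('ok').
    """
    f = Path(path)
    current = f.read_text().splitlines() if f.exists() else []
    if line in current:          # actual state already matches desired state
        return False             # -> no action taken on a re-run
    current.append(line)         # converge toward the declared state
    f.write_text("\n".join(current) + "\n")
    return True
```

Running it twice with the same arguments changes the file only the first time — the same "changed" vs "ok" behavior Ansible reports in its play recap.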
### CICD Pipeline - How is source code managed? - CI/CD? - NetDevOps Steps - Change code - Commit to the source code repo - Other developers sync (pull & merge) with the source code & the code that's been committed is built and tested against a testing environment in an automated fashion - Succeeded - Production - To build and test the network, we may use tools like VIRL. It spins up a copy of our environment as a simulated environment. - Then we may create tests using Cisco PyATS (Automated Testing System) to run simulated tests against this environment. ex. OSPF connectivity, ping testing, throughput testing ### Installing Ansible - Goal: I have an Ubuntu machine that's running Ansible. I want Ansible to automate and orchestrate two other Linux machines. One is another Ubuntu machine, the other one is CentOS. - Same username/password on all of the devices - Steps ![](https://i.imgur.com/MzM6LMb.png) - Install Ansible - Configure Ubuntu (172.16.1.40) and CentOS (172.16.1.3) to allow SSH access with keys instead of a password. That way, Ansible can run automatically and log in to the Ubuntu or CentOS box. - Code - line 9: Install a prerequisite library. - line 11: Add the Ansible repo, which knows where to go to get the latest version of Ansible. - line 13: Make sure we have the most up-to-date package index. - line 15: Install Ansible. - line 18: Confirm Ansible is working by running the ping test against localhost. ### Configuring Remote Devices - Code ![](https://i.imgur.com/b2spaMc.png) - line 22: Configure local hostnames, like local DNS. With this, my machine can resolve the names to IP addresses, and vice versa. ![](https://i.imgur.com/LKTfZx1.png) - line 26: Generate an SSH key. - line 29: List the keys. We have the id_rsa and id_rsa.pub keys, and we'll use .pub since it's the public key. - line 32: Copy the generated key 'id_rsa.pub' to the Ubuntu machine. Do the same thing on CentOS.
(to enable SSH on the Ubuntu machine, use the command 'sudo apt-get install openssh-server') - line 37: SSH into the Ubuntu machine and change the sudo config via visudo. Do the same thing on CentOS. - As a result, - We have end-to-end connectivity with the ability to run our commands as sudo without needing a password. Ansible is installed, and the remote connection to our servers is set up. ### Inventory File - Ansible knows all of the devices and network items based on an inventory file. - Format - [group name] - IP address or hostname - Code ![](https://i.imgur.com/i68Dtc8.png) - line 45: Go to Ansible's hosts file and create an inventory there. ![](https://i.imgur.com/5AqZ3y7.png) - line 54: Ping against all items in the hosts file. -m stands for module, which makes Ansible run the ping module for all hosts. ### Modules ![](https://i.imgur.com/K0xqj2Y.png) - Ansible was built in Python, so there's a collection of pre-built Python modules you can use as your use case requires. - Code ![](https://i.imgur.com/KpdPKX6.png) - line 57: See what the uptime is on all of my linuxhosts. Specify the raw shell module to use; '-a' specifies the action (command) we'll run. - line 59: Shell command. Validate what version of Python we're using. - line 61: Run the 'whoami' command. It'll return what user context we're using right now. - line 63: Elevate the user context to root (sudo user) with '-b' for become. ### Playbook File - An Ansible playbook is where automation actually happens on our infrastructure. - Playbook - A YAML file that just outlines the instructions it needs to run. - Goal: Connect to all of my linuxhosts, get some configuration info and basic stats on each machine and write it to a text file on the desktop of those machines. - Code: YAML Playbook ![](https://i.imgur.com/25D9WIl.png) - line 1: An Ansible playbook starts with --- - line 3: on what devices - line 6: Specify the actual module name itself. We're using the shell module here, with the 'uname -a' command. Then save the output in the 'output.txt' file.
- Run the playbook @CLI ![](https://i.imgur.com/XEwNUKj.png) - TASK [Gathering Facts]: When Ansible connects to devices, it gathers system information on each one of them. You can make decisions about what you want to execute based on this information. For example, Ubuntu uses apt as its package manager whereas CentOS uses yum. So I'll use this OS information to determine which module I want to run. (Use the 'ansible ubuntu -m setup' command to see all of the gathered information for ubuntu.) - Ran the two tasks I've created on each of the devices in linuxhosts. - changed=2: It made a change to this environment two times because it created a file and wrote to the file. - ok=3: Successful output on three tasks. ### Variables and Facts - Code: Variables ![](https://i.imgur.com/azWNDHd.png) - lines 4~6: Set variables - line 9: Run a shell command that echoes to a file 'vars.txt'. - Result ![](https://i.imgur.com/O2Davtq.png) - Code: Facts ![](https://i.imgur.com/dPMJmn6.png) - line 8: Write the OS family information for the correct user, which is 'ansible_user_id'. - line 10: Debug command to write a message with the default NIC interface. - line 10: The default NIC name is the name this VM is using to send traffic out. - line 12: Get the result of the ls command and register (save) it to the variable 'dirs'. - line 15: Debug command to write all of the directories out to the Ansible output - Result ![](https://i.imgur.com/3fCiv7q.png) ### Conditionals - Code: If statement ![](https://i.imgur.com/3NOWCfH.png) - line 3: Become the root (admin) user. - line 6: Use the apt module because it's an Ubuntu machine. This will throw an error on CentOS, and by default an error stops the play for that host. - line 8: Ignore the error - line 11: Register the outputs, so ubuntu will get success but centos will get failed. - lines 13~15: Install nginx when the registered result has failed. It uses the yum module to install nginx. - line 15: Specifying the condition under which to run the conditional task.
- Run the code - ansible-playbook condition.yml ![](https://i.imgur.com/JWuTrxw.jpg) - Code: Another if statement - ![](https://i.imgur.com/cpwYSWO.png) - Run each 'Uninstall nginx' task based on the os_family. ### Loops - with_items == for item in items - Code ![](https://i.imgur.com/M2FaQQP.png) - line 7: Create a file. - line 10: Run the shell command (line 9) echo, which writes (appends) "{{item}}" to a 'loops.txt' file, passing in different data every time the loop executes. - In these plays, the 'shell' module is used to create or write files and 'apt' is used to install packages. - Result ![](https://i.imgur.com/YcFgtLk.png) ## 48. Automate Your Entire Network with Ansible ### host_vars and group_vars - Folder: Ansible>networking>host_vars - Environment ![](https://i.imgur.com/32LOA5w.png) - First, make sure Ansible knows these devices by adding them to the hosts file. - @ CLI - > sudo nano /etc/ansible/hosts - Add the below to this 'hosts' file. ![](https://i.imgur.com/8jNv5CV.png) ![](https://i.imgur.com/ivdCSaV.png) - Use the DevNet sandbox 'ios-xe-mgmt-latest.cisco.com'. - host_vars - To repeat the same tasks on multiple devices, we can use host_vars. It is default functionality of Ansible. ![](https://i.imgur.com/haM1yrG.png) - When I run one of these playbooks, Ansible automatically looks for a folder 'host_vars'. Inside that folder, I have a YAML file named for the IP address (hostname). Below is the script inside that YAML file. - ![](https://i.imgur.com/r6F5pip.png) - Code ![](https://i.imgur.com/hsZHFCu.png) ![](https://i.imgur.com/r6F5pip.png) - line 3 in the above image: Ansible looks at what host it needs to interact with and goes to the matching file in host_vars for each individual device there is. Across tasks, Ansible can automatically change the variables used, based on the host_vars. - Ansible - "OK, I'm on a Nexus 9K device. It has 172.16.1.68 for a hostname. Let me look up 172.16.1.68 in the host_vars folder." - Go to 172.16.1.68.yml - "There is a file. And it has an object called local_loopback.
Within that I can see two subobjects." - So it imports the object {{local_loopback}} as a variable that we can use. - We handle authentication this way too. You can specify authentication and connection parameters and how to actually connect to this particular network device. - It works both for custom-made objects that we're creating, like loopback interfaces, and for Ansible environment variables like authentication parameters. - group_vars - Instead of setting variables for each host, we can set variables for the whole group. ### Network Modules - When you build your playbook, use Ansible network modules. In this module, we're focusing on Nexus, which has its own API, NX-API. It can interact via CLI commands or a REST API. - Ansible has modules for Nexus (NX-API), IOS, Juniper JUNOS, Arista EOS, Netconf and Restconf. - Code ![](https://i.imgur.com/qGw5qaF.png) - lines 8~11: Required variables to use the nxos_facts module. - Goal: Connect to the Nexus 9K device 172.16.1.68 and gather the facts of the host that's specified in line 8. Get the NX-OS facts and store them in the variable nxos_data. And in the next task, print out all the data. - For authentication, we need to specify the password value in the task or in the environment variable $ANSIBLE_NET_PASSWORD. It's described in the Ansible module documentation. 1. In the host_vars itself 2. Create a file in the same folder as the playbook, '.ansible_env' ![](https://i.imgur.com/uZJqdYQ.png) - When I run any playbook in the same folder, it's going to automatically have these environment variables available to the playbook. - @CLI > source .ansible_env // Load the .ansible_env values into the current shell ### Creating Loopbacks on Nexus Devices - Goal: Create 2 loopbacks on each device. Loopbacks are on 172.20.1.0/24 and 172.22.1.0/24. - Steps - Connect to a Nexus device. ex. Nexus9K - Create two empty loopback interfaces on the device.
- Code - 172.16.1.68.yml (Nexus9K) ![](https://i.imgur.com/BXSVWRK.png) ![](https://i.imgur.com/wSJUW4G.png) - line 3: The switches group includes nexus9k and nexus3k. As we're using the switches group, Ansible will find the matching 'host_vars' or 'group_vars' object. There it'll import local_loopback. - line 8: The nxos_interface module creates an empty loopback interface. Specify the device name and description. - line 9: item = loopback1 and then loopback2 - line 16: The nxos_ip_interface module configures the IP protocol (IPv4), address and subnet mask (prefix). - line 17: Make sure this item is created. Idempotent - Ansible will make sure these configurations are present on the network devices. If one's not present, it'll create it. It won't duplicate itself. - ![](https://i.imgur.com/9yBojYm.png) - Nothing changed. - All of these will run again on nexus3k (172.16.1.4). - Code - Goal: Delete the loopbacks. ![](https://i.imgur.com/0Ci2cpU.png) - Changed the loopbacks' state to absent. ### Gather Facts and Issue CLI Commands to IOS-XE Devices - IOS-XE devices have Netconf and Restconf. - Purpose: Programmatically interact with the Cisco CLI. - Module: ios_facts - Function: Collect facts from remote devices that are running IOS. - Cisco IOS CLI mode - privileged EXEC mode (entered via >enable) - Code - line 7: Gathering facts - line 8: Specify the provider item, which holds the connectivity details - line 10: port 22 = SSH, used to perform all of these operations. - line 11: Entering privileged mode - line 12: Enable secret password - lines 13,14: SSH username and password - lines 16~: Ansible stores IOS-specific variables that we can then query to get more from the IOS device, like what version of IOS is running, the hostname of the device, IP address, serial number, the entire running config, etc. - Summary - Gathering facts from an IOS command-line device: enter privileged mode, issue show commands, and parse out the results into variables that we can work with. - Code ![](https://i.imgur.com/iZK54v3.png) - Goal: Execute commands on the command line from Ansible.
- line 7: Use the ios_command module. - line 17: if_data is stored as a Python object. ### Restconf and connection variables - How to interact with our IOS devices using industry-standard protocols, Netconf or Restconf - Endpoint - http://ip/restconf/interfaces - Credentials and port will be configured in environment variables. And these variables will be specific to the devices we're connecting to, in other words, host_vars. - Code: The 172.16.1.25.yml host_var contains the connectivity. ![](https://i.imgur.com/8TQbygf.png) - Goal: Set up connectivity to the Restconf endpoint using Ansible and environment variables. - lines 2~9: Ansible needs this information in order to reach the IOS-XE device. - line 7: http://ip (host) + /restconf + /[task's individual endpoint] - line 8: The variable name is ansible_user, not ansible_net_user like it was in the environment variable. ### Restconf GET Playbooks - With the Restconf connection set up, now we can run requests. - Code ![](https://i.imgur.com/ddcbeJM.png) - Goal: Run an HTTP GET request - line 3: Specify the host I want to connect to. - line 7: restconf_get - GET request for config data on all of the interfaces in the path. Return the output as JSON. - line 10: URL: http://172.16.1.25/restconf/[path] - line 14: Print out the running config of our interfaces. ### Jinja Templates - What are we doing here? - A POST request with JSON (that has the information for a loopback interface) to create a new loopback interface. - Jinja Templating - Build the JSON payload dynamically to apply payloads (configurations) differently on each device. - Jinja template file (.j2) - ![](https://i.imgur.com/RN25Zqe.png) - Code ![](https://i.imgur.com/Yy0r1B4.png) - line 8: Bring items from the host_vars file and store the loopbacks as items. - line 9: Grab a Jinja file from the src, inject those items into the template and spit it out as a file called ./output.json.
- Output ![](https://i.imgur.com/pDGRxv8.png) - Created a JSON file with the data from host_vars based on the template and Jinja's capabilities. ### Restconf Config Playbooks - Code ![](https://i.imgur.com/1ukHuMr.png) - line 7: The first task is creating the dynamic JSON template. - line 11: The second task is creating the loopbacks by POSTing the template to the correct endpoint for interfaces. - line 12: Iterate over which loopback we want to create. - lines 13~: Using the restconf_config module. - line 17: [host_vars] Use the YAML object new_loop in host_vars, and convert it to JSON. - line 18: [Jinja templating] The content is the JSON file we've just created. The way you do this in Ansible is to specify that we're looking up the file 'output.json' and piping that file's content to a string, so the payload is sent as a string of JSON rather than being re-parsed. - lines 25~: Delete the interfaces we've created using the dynamic variable {{item.name}}. ## 49. Describe How a Switch Performs Layer 2 Forwarding ![](https://i.imgur.com/eCtUCPX.png) - TCP/IP layers - MAC address = Ethernet address (12 hex digits, assigned by the manufacturer) - Hexadecimal (base-16) - Commands @ CMD - ipconfig /all - ping 10.16.0.1 - tracert -d 192.168.1.200 // The path the packet is taking from source to destination, without name resolution. - Commands on a network device - show ip int brief - show int vlan 10 - show mac address-table - ping 10.0.0.1 - Switch roles 1. Learn 2. Forward 3. Flood - VLAN - Segmenting the L2 network. After we create VLANs, we assign ports to them. - Each VLAN maps to its own IP subnet. - L2 VLAN = L2 broadcast domain, because broadcasts stay inside the VLAN. - Commands - show vlan brief - conf t // Create a VLAN - vlan 2 - name sales - int range gi 2/0-3 - switchport access vlan 2 - end - 802.1Q Trunk - A trunk uses an 802.1Q header (tag) that identifies the VLAN. - 802.1Q tagging adds additional information, most importantly the VLAN ID, to frames that are sent between two switches over a configured trunk link.
- Trunk port = a port that goes to the core switch - dot1q: The type of encapsulation we use for trunking - Commands - show int trunk - show run int gig 0/1 - switchport trunk encapsulation dot1q - switchport mode trunk // We want it to be a trunk, not dynamically negotiating ## 50. Describe How a Router Performs Layer 3 Forwarding - Don't need to know the configuration in depth. - IP addresses work at L3, the Network layer. - An IP address is 32 bits in length, represented by 4 dotted decimal numbers. ex. 10.16.0.10 - Host address: 10.16.0.10 - Network prefix: 10.16.0.0/16 - Training a Router 1. Directly Connected Network - By configuring an IP address on interfaces. - show ip route - show ip int brief - conf t - int gig 1/0 - ip address 10.12.0.1 255.255.255.0 // Now the router has a direct connection to this network - end 2. Static - Manually configure the route. - conf t - ip route 10.23.0.0 255.255.255.0 10.12.0.2 // network address, subnet mask, next hop 3. Dynamic - Routers communicate with each other. We don't need to manually configure every route. - Two ways - RIP: Distance vector. Slow to converge - OSPF: Link state, uses adjacencies (formal agreements). Quick to converge. - Commands - conf t - int g 0/0 - no shut - ip add 10.1.0.1 255.255.255.0 - end - show ip route - conf t - router ospf 1 - network 0.0.0.0 255.255.255.255 area 0 // Every network on this router plays OSPF. So by this we can get OSPF routes to another router which is running OSPF. - When a packet matches the prefix, that packet will be sent along that route. - Default route - Used when there's no matching prefix (route) in the routing table. - Command - conf t - ip route 0.0.0.0 0.0.0.0 10.12.0.2 ## 51. Describe Transport Layer Functions and Protocols - Transport layer: Layer 4 - TCP (Transmission Control Protocol) - Reliable - Connection-oriented protocol: Verifies that the packets actually get there - For example, - Application layer protocols: FTP, HTTP, HTTPS, SSH - 3-way handshake: SYN - SYN,ACK - ACK - But it adds a little overhead, so it's slower.
- UDP (User Datagram Protocol) - For example, - Voice, video, live streams, DNS - Connection-less: Sends the packet and doesn't check whether or not the packet got there. - A little faster than TCP. ## 52. Describe Application Layer Functions and Protocols - Application layer services - protocols - Web - HTTP, HTTPS - Remote connectivity - Telnet, SSH (tool: PuTTY) - DNS (Domain Name System. www.com -> IP address) - NTP (Network Time Protocol): Time synchronization - Port - Well Known Ports (WKP) for well-known services that we've agreed to. - DNS: UDP 53 - NTP: UDP 123 - SSH: TCP 22 - Telnet: TCP 23 - HTTP: TCP 80 - HTTPS: TCP 443 - Netconf (uses SSH): TCP 830 - Source and destination ports are written in the L4 TCP/UDP header. - DHCP (Dynamic Host Configuration Protocol) - Here's your IP (and mask, gateway, DNS). - Purpose: Dynamically get IP address information via DHCP - Pool of IP addresses. - Steps (DORA) 1. Discover: Client -> Server 2. Offer: Server -> Client 3. Request 4. Ack - Port: UDP (server port 67, client port 68) - SNMP (Simple Network Management Protocol) - SNMP agents run on network devices; a management station queries the SNMP agents to get information and alerts. Now we use Netconf and YANG models. - End-to-end IP communications - Bob going to www.mysite123.com 1. DNS - Application: DNS - Transport (L4): UDP header - source port, destination port - Network (L3): Source IP, destination IP - Data Link (L2): MAC addresses (48-bit, written as 12 hex digits of 4 bits each) - source (Bob), destination (the gateway's L2 address) - This data will be injected into the Physical layer (L1). - ARP - Learn the L2 address of another local IP address, including the local gateway. - Send a broadcast: "I'm looking for the MAC address of this IP address. Send me your MAC address if you have it." - Multilayer Switches ![](https://i.imgur.com/sYdVC32.jpg) - L2 switching + L3 routing in one physical device - Phantom IP Router = a logical router between L3 networks - SVI: Logical L3 interface. - Command - int vlan 10 - int vlan 20 - We can assign an IP address and an L2 MAC address.
- Commands - show int trunk - show ip int brief // Logical L3 interfaces will show up here. ex. VLAN 10 ## 53. Topology Overview and Lab - The link between core 1 and 2 is a trunk. It carries traffic for multiple VLANs using 802.1Q. - Commands - L2 - show vlan brief - show interfaces trunk - show interface // every interface - show interfaces vlan 10 - show mac address-table // incoming and outgoing packets' MAC addresses - L3 - show ip route - show ip int brief // 'ip int' - Firewalls - Forms: physical device, logical device (VM), software running on a personal device - Access Control List (ACL): allow or deny traffic 1. Packet Filtering at L2/L3/L4 - ex. Drop packets that have a destination of TCP 23. 2. Stateful Filtering - Outbound: The firewall remembers the L3/L4/port information, and the return path will match the stateful table. The firewall then dynamically allows that reply back in. - Inbound traffic: If someone outside tries to start a connection and send traffic in, the firewall doesn't have stateful information for it, so it will drop the traffic. - Inbound 2: Associate TCP 443 with the IP addresses of servers. Allow that specific traffic even if the traffic is initiated from outside. 3. NGFW (Next Gen Firewall) - Firepower, Firepower Threat Defense (FTD) appliance - Deep packet inspection: regardless of what the L3/L4 IP addresses and ports are, it can see what apps are in use. Identify the traffic and allow only the traffic we want. - Important - If we're using applications like Netconf (TCP 830), the network admin should allow those ports through the network, whether we're using packet filtering or stateful filtering. - IDS/IPS devices (also referred to as sensors) - Intrusion Detection System (IDS): monitor mode. Sends an alert, doesn't stop the traffic. - Intrusion Prevention System (IPS): configured to stop that traffic in its tracks. - How?
- Located between the firewall and the router - The IPS drops malicious packets based on anomalies, protocols, signatures (specific known patterns of traffic or within packets), and reputation (intelligence on blocks of addresses) - Network Redundancy - Active/Standby - Active(30%)/Active(30%) - Problem: The packet can take a different path going out than the reply takes coming back, e.g. out through FW1 and back in through FW2. - Solution: Replication of state information between FW1 and FW2, or symmetrical routing (go out and come back on the same path) - Load balancer - Functions 1. Redundancy, fault tolerance - If one server goes down, the load balancer will send traffic to a server that's working. 2. Load distribution across multiple servers - Connect the load balancer to the servers and monitor them: how many connections they're currently supporting, their current state. - Send a new connection to the least busy server. Then track sessions or use cookies to keep the connection on the same server while the connection's running - Network Address Translation (NAT) - Swap a private network address for a globally routable address - Private networks - 10.0.0.0/8 - 172.16.0.0/12 (172.16-31.x.x) - 192.168.0.0/16 - How? - Static NAT: Configure the translation beforehand - 10.16.0.10 -> x.x.x.x - Dynamic NAT: Wait for the traffic before translating - 10.16.0.10 -> a pool of IP addresses - What is being translated? - Two ways. In the initial flow of traffic, what are we swapping: the source IP address or the destination IP address? - Source NAT - Private network -> public network - Destination NAT - Public network -> private network (public-facing server) - PAT (Port Address Translation) / Overload - A subset of NAT. - Share one globally routable IP address among ~65K private addresses. - The NAT device tracks all internal users that are going out to the internet based on port. ## 54.
Explain the Management, Control and Data Planes - Planes - Control plane - How to forward - For example - Routing protocols - MAC address table - STP: Where to forward, where to block - Data plane - Forwarding the traffic - Management plane - For example - SNMP, NETCONF, SSH - OSPF: Control (OSPF), data (ping), management (commands) - Control plane: When I turn on OSPF to use that protocol, it'll dynamically make connections with other devices which are also running OSPF. - Commands - conf terminal - router ospf 1 // 1 is a process ID - network 0.0.0.0 255.255.255.255 area 0 // Enable OSPF on all interfaces on this router - show ip protocols // Show any dynamic routing protocol that's running on this device - show ip ospf neighbor - show ip ospf interface // Which interfaces on this router actually have OSPF enabled - show ip ospf interface brief - Spanning Tree - Blocks ports to prevent loops. Controls L2 forwarding. - Control plane - On Cisco switches, spanning tree is turned on by default. - Commands - show spanning-tree vlan 10 - conf t - spanning-tree vlan 10 priority 4096 // Management plane. Changed configuration to modify the forwarding decision. - ping 10.16.0.10 // Data plane ## 55. Describe Network Elements that Impact Applications - What makes the network fail? - L2 frame forwarding not working correctly - L3 routing not working correctly - Latency, jitter - real-time applications - NAT/PAT - Other elements that can cause the network to fail? - Firewall, IPS, ACL, Proxy - Proxy: A proxy server is a computer system or application that lets a client connect indirectly to other network services through it. Acting as an intermediary that communicates on the client's behalf, between server and client, is called 'proxying', and the system providing that relay function is called a proxy server. - MTU size: If the traffic size is bigger than what a switch, router or any network device can handle, the device will fail to forward it. - Port filtering by a firewall - ACL (Access Control List) - Ex.
Goal: Block Telnet connections to 2.2.2.2:23 using an ACL - conf t - ip access-list extended No-Telnet - deny tcp any host 2.2.2.2 eq 23 // Deny TCP traffic coming from anywhere going to 2.2.2.2 port 23 - permit ip any any - int gig 0/0 - ip access-group No-Telnet in // Apply No-Telnet to inbound traffic - end - show access-lists - Insecure Protocols - Plain text, no encryption - HTTP and Telnet are examples of insecure protocols - Solution 1: Use secure protocols: HTTPS, SSH. - Solution 2: Use a VPN that is correctly configured to allow access from the user to the device. - Things that need to be considered - split tunneling, split DNS, allowing the correct ports and transport protocols. - A VPN tunnel provides protection. But a VPN blocks the connection when it has a wrong configuration for which ports or IPs should be allowed. - Split tunneling: Use the VPN for the IP addresses we allow, and for the rest of the addresses, use the direct external connection - Split DNS: Giving different DNS answers to the internal network and the external network, for internal security.
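The ACL behavior above — rules evaluated top-down, first match wins, with Cisco's implicit deny at the end of every list — can be sketched in Python (a toy model for illustration, not a real packet filter; the rule tuples mirror the No-Telnet ACL configured above):

```python
# Toy model of an extended ACL: rules are checked top-down, the first
# matching rule decides, and an unmatched packet hits the implicit deny.
ACL_NO_TELNET = [
    # (action, protocol, dst_ip, dst_port) — 'any'/None act as wildcards
    ("deny",   "tcp", "2.2.2.2", 23),    # deny tcp any host 2.2.2.2 eq 23
    ("permit", "ip",  "any",     None),  # permit ip any any
]

def acl_decision(acl, protocol, dst_ip, dst_port):
    for action, proto, ip, port in acl:
        if proto != "ip" and proto != protocol:
            continue                      # rule is for a different protocol
        if ip != "any" and ip != dst_ip:
            continue                      # rule is for a different host
        if port is not None and port != dst_port:
            continue                      # rule is for a different port
        return action                     # first match wins
    return "deny"                         # implicit deny at the end of every ACL

# Telnet to 2.2.2.2 is blocked; everything else matches 'permit ip any any'.
print(acl_decision(ACL_NO_TELNET, "tcp", "2.2.2.2", 23))   # deny
print(acl_decision(ACL_NO_TELNET, "tcp", "2.2.2.2", 443))  # permit
```

Note that without the explicit `permit ip any any` line, the implicit deny would drop all traffic on the interface — the same reason it's required in the real ACL.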