# AZ 204 Notes

## Virtual Machines

* infrastructure as a service, complete control down to the actual file system

### Basics

* set subscription
* set a resource group
* set a name that you did not use before
* set a region
  * use a region that is closest to your user base
  * beware of regional restrictions when it comes to storing data
  * this will change other available options
* set the availability option, i.e. whether servers should be far apart or close together
* set an image, like Windows Server 2019 Datacenter
* set the performance size; better costs more, obviously
* set admin account username and password
* set port rules
  * default is no open inbound ports
  * you may open ports like RDP, HTTP etc.

### Disks

* set the OS disk type
  * Premium SSD is the default and the way to go
  * other options are Standard SSD and HDD
* set the encryption type
  * default is that Azure takes care of it
  * you may also bring your own key stored in something like Azure Key Vault
* add optional additional disks
* the size of a disk also affects its maximum IOPS and maximum throughput
* setting 'use managed disks' to 'no' is not recommended for pretty much any scenario

### Networking

* set the virtual network; you may use an already existing one
* set the subnet
* NIC network security, default is 'Basic'

### Management

* set monitoring options
  * boot diagnostics
  * OS guest diagnostics
  * diagnostics storage account
* set auto shutdown
  * this will prevent unnecessary costs by turning off the machine automatically

### Advanced

* nothing super relevant here
* you may add extensions
* you may set the VM generation, but G1 is generally recommended, since G2 does not support everything (e.g. certain encryption features)

### Tags

* create custom tags that are handy for things like billing and figuring out what system a resource is used for
* examples could be 'billingcode:a101-9009', 'environment:dev' and 'createdby:ChrisMcFury'

### Review

* includes a summary with all the options and a cost summary
* you can download your setup as a template, which is essential for the exam

### Deployment

* there will be a summary of all resources and their deployment status afterwards
* you may click on any of them to see its details
* a 'next steps' section tells you about recommended things to set up, like auto shutdown

### Connect to a VM using RDP

* click on the connect section in the top left corner of the resource overview, or within the settings tab in the dock on the left
* there are different options including:
  * RDP
  * SSH, which is for Linux machines
  * Bastion, which lets you connect without the RDP port open, but it needs to be provisioned first
* open the port to be able to connect via remote desktop

### Opening the port

* go to the resource group and select the 'network security group' resource
* select the 'inbound security rules' within the 'settings' section
* click 'add' and fill out a new inbound security rule
  * set the service to RDP
  * set the priority; the lower the number, the higher the priority
  * set the name
* you may connect via remote desktop afterwards with your admin login credentials

### Setting up the VM as a web server

* click 'add roles and features' within the Server Manager Dashboard
* pick the web server role
* select features like ASP.NET
  * do not select features you won't need, for security reasons
* finish setup

### ARM Templates

* templates are expressed as JSON
* the JSON contains mandatory sections
  * schema
  * content version
  * resources
* to deploy resources using a template you can add it to a library
* when you create a resource using your template you can autofill all the parameters by clicking 'edit parameters', then 'load parameters', then selecting the parameters file that you downloaded with the template
  * username and password are not autofilled and have to be added manually
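Templates do not have to go through the portal library; the same template can be deployed from the command line. A minimal sketch, assuming `template.json` and `parameters.json` are the files downloaded at the Review step (file and parameter names are placeholders):

```shell
# deploy an exported ARM template into an existing resource group;
# secure values like the admin password are passed in separately
az deployment group create \
  --resource-group MyResourceGroup \
  --template-file template.json \
  --parameters @parameters.json \
  --parameters adminPassword='<your-password>'
```

This mirrors the portal flow above: the parameters file fills in everything except the credentials, which are supplied manually.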
### Creating a VM in PowerShell

* use 'Connect-AzAccount' if you are using your external console
* steps to create a VM:

1. create a new resource group

``` shell
New-AzResourceGroup -Name MyResourceGroup -Location EastUS
```

2. create a VM

``` shell
New-AzVm -ResourceGroupName MyResourceGroup -Name MyVm -Location EastUS -VirtualNetworkName "myVmNet" -SubnetName "default" -SecurityGroupName "mySecGroup" -PublicIpAddressName "myPubAddress" -OpenPorts 80,3389
```

3. input username and password

### Creating a VM in Azure CLI

* creating the VM with the Azure CLI is way better in my opinion, because you do not need to log in or anything and also have CLI-specific commands
* commands are used in the bash console

1. create the resource group

``` shell
az group create --name cligroup --location eastus
```

2. create the VM

``` shell
az vm create --resource-group cligroup --name aznewvm2 --image win2016datacenter --admin-username aztestuser
```

3. set a password

## Azure App Service

* platform as a service, an abstraction on top of infrastructure as a service with less control, cloud based

### Create a Web App

* add a new resource, then select the 'Web' tab and pick 'Web App'
* enter subscription and resource group
* give it a name that is unique across Azure
* choose the publishing type, code or Docker container
* pick a programming language
* pick the operating system
* set a region

#### App Service Plan

* default size is Standard S1
* changing the size has many different benefits and greatly changes the included features and hardware, like automatic scaling and Azure compute units
* [check the different plans](https://azure.microsoft.com/en-us/pricing/details/app-service/windows/) and reasons why you would use basic, standard or premium

### Configuration

* scale up => change the plan itself
* scale out => change the number of instances manually or via autoscale
* you may set different scaling rules, like adding/removing instances based on CPU load

### Web Jobs

* can be applied to a web app
* run on a set interval on your application, like Hangfire

### Deployment Slots

* allow you to easily swap systems
* next to no downtime
* you can easily revert back to a production state if anything went wrong
* allow you to set traffic rules, so that e.g. 90% of the traffic goes to prod and 10% goes to test
* a swap can also be scripted; see the CLI sketch after the PowerShell walkthrough below

### Diagnostics

* in a Web App => Overview => Diagnose and solve Problems
* the monitoring section in the sidebar lets you:
  * set up alerts
  * view metrics
  * create log files

### Diagnostics logs

* the example shows how to set up saving log files in a blob storage
  * set a container after initialization is finished
* log stream
  * temporary logging feature that shows events in a console in real time

### Create a Web App in PowerShell

* see available commands for web apps

``` shell
Get-Command *AzWebApp*
```

1. create a resource group

``` shell
New-AzResourceGroup -Name "powershellwebapp" -Location "EastUS"
```

2. create an app service plan

``` shell
New-AzAppServicePlan -ResourceGroupName "powershellwebapp" -Name "aznewasp123" -Location "EastUS" -Tier "Free"
```

3. put the web app into the plan

``` shell
New-AzWebApp -ResourceGroupName "powershellwebapp" -Name "azsjdwebapp121212" -Location "EastUS" -AppServicePlan "aznewasp123"
```
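As referenced in the Deployment Slots section, a swap can be scripted too. A minimal CLI sketch, assuming a web app on a plan tier that supports slots (Standard or higher); `cliwebapp`, `mynewwebapp` and `staging` are placeholder names:

```shell
# create a staging slot next to production
az webapp deployment slot create --resource-group cliwebapp --name mynewwebapp --slot staging

# swap staging into production (and production into staging)
az webapp deployment slot swap --resource-group cliwebapp --name mynewwebapp \
  --slot staging --target-slot production
```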
### Create a Web App in CLI

1. open cloud shell and set it to bash
2. create a group

``` shell
az group create --name cliwebapp --location eastus
```

3. create the app service plan

``` shell
az appservice plan create -g cliwebapp -n mynewasp
```

4. create the web app (-g, -n and -p are short for --resource-group, --name and --plan)

``` shell
az webapp create -g cliwebapp -n mynewwebapp -p mynewasp
```

### Additional CLI setup

1. open cloud shell in bash state
2. one command creates a group, plan and web app

``` shell
az webapp up --location eastus --name azsjdnewapp13049134 --html
```

### The Web App Console

* side navigation section Development Tools
* use the console to check the deployed files, for example
* advanced tools => Go => Kudu Dashboard
  * overview with uptime and tools for your deployed web app
  * you can access a logfiles directory

### Quiz

* What language is not supported? Windows binaries
* What is Continuous Deployment? Your code is deployed to the App Service as soon as it's saved to GitHub/Docker/Dropbox
* Does Azure App Service support Linux? Yes
* Which of the following are NOT considered part of Azure App Services? Virtual Machines (this is considered Infrastructure as a Service, not Platform as a Service)
* How many applications can you deploy to a single App Service plan? Up to the limit of the app service plan you have chosen (10 free, 100 shared and unlimited for basic+)
* What does ACU stand for? Azure Compute Unit
* You start the process of creating a Windows Web App to deploy a Python application. The application works on your local machine, but you want to ensure all the Python dependencies on your Windows machine are also available for the Web App when deployed as an App Service. What is required to get the Python app to run inside a Windows Web App? You need to configure the Runtime stack of the Instance Details when creating a web app in the Azure Portal
* How do you run background scripts in the Platform as a Service model? Deploy them as a web job attached to a web app

## Containers

* containers are a different way of deploying your code
* packaged pieces of code that have all dependencies
* downside is that they don't scale well
* upside is that they are packaged images, which lets you deploy them on different systems easily

### Install Docker for Windows

* install Docker Desktop
* Visual Studio has Docker support
* set up solution Docker support

1. right click solution => Add => Docker Support...
2. choose Linux or Windows
3. optionally install Docker Desktop if you haven't already

### Publish to Azure Container Registry

* creating a Docker Container Registry in Azure (see the CLI sketch below for the scripted equivalent)

1. create resource
2. choose 'Container Registry'
3. choose a resource group, set a name and a location
4. optionally set up networking and encryption
5. finish setup
6. in Visual Studio, choose publish target Docker Container Registry and select your newly created Azure resource
   * on the publish overview you may use the 'Edit Image Tag' action to mark the image for dev/test/prod or alike
7. finish and wait for the image to be built
   * you will now see your container image in the 'Services/Repositories' section in the Azure navigation
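The registry-and-push flow referenced above also works without Visual Studio. A minimal sketch, assuming Docker runs locally and `myregistry123`/`myapp` are placeholder names:

```shell
# create the registry (admin user enabled right away for the later ACI example)
az acr create --resource-group cligroup --name myregistry123 --sku Basic --admin-enabled true

# authenticate the local Docker client against the registry
az acr login --name myregistry123

# tag a locally built image for the registry and push it
docker tag myapp:latest myregistry123.azurecr.io/myapp:v1
docker push myregistry123.azurecr.io/myapp:v1
```

After pushing, the image shows up under 'Services/Repositories' just like in the Visual Studio flow.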
### Azure Container Instances

* enable admin user access in 'Settings/Access keys' in the Container Registry resource
* for this example a Container Instance resource is created
  * set 'Image source' to 'Azure Container Registry'
  * set your image and finish setup
* ACI => Azure Container Instance

### Web App Containers

* set up a web app resource and choose 'Single container', 'Docker Container Registry' and your container image
* slower deployment than with an Azure Container Instance

## Function App

* functions are files and have to be hosted inside a storage account
* they can be published via code or a Docker container
* code is restricted to the following runtime stacks
  * .NET Core
  * Node.js
  * Python
  * Java
  * PowerShell Core
  * Custom Handler (new; supports other languages like PHP)
* supports Linux and Windows
* the serverless option means that you pay for the number of executions; you have no control over CPU, server instances or anything
* stateless
* always triggered by something
* short lived, will shut down after a certain amount of time unless it is durable
* functions are intended for small pieces of code that perform a task in a very short amount of time

### Hello World Function App

1. go to your function azure resource
2. select 'Functions' in the 'Functions' section of the sidenav
3. click 'Add'
4. set a development environment
   * VS Code
   * Any Editor + Core Tools
   * Develop in portal
5. select a template
   * HTTP trigger
   * timer trigger
6. give it a name
7. set the authorization level
   * anonymous (url)
   * admin (master key)
   * function (access key)

* if you check the default code, you will notice multiple files
  * function.json (settings)
  * index.js (function code)

### Triggers and Bindings

* go to the 'Integration' tab of your function; you will see the following sections
  * Trigger
  * Input
  * Function
  * Outputs
* types of triggers include
  * HTTP trigger
  * timer trigger
  * blob storage trigger
  * email trigger
  * etc.

### Testing Functions

* a timer trigger is added
* run the built-in Test/Run feature
* check the Filesystem Logs to see if the function runs successfully

### Creating a Function that writes a Blob

* add an Output binding in the integration tab of your timer trigger
  * binding type: Azure Blob Storage
  * set parameter name, path and the storage account connection
  * the container of the path must already exist
  * set a random name by using 'container/{rand-guid}'
* you can now see your binding in the function.json file
* context gives you access to the bindings (e.g. context.bindings.MyBlobBindingName)
* run the function; it will now write blob files with the given content
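Everything above was clicked together in the portal; for reference, the function app itself (plus the storage account every function app needs) can also be created from the CLI. A minimal sketch with placeholder names:

```shell
# every function app needs a storage account behind it
az storage account create --name funcstorage98765 --resource-group cligroup \
  --location eastus --sku Standard_LRS

# create a consumption-plan (serverless) function app on the Node.js stack
az functionapp create --resource-group cligroup --name mynewfuncapp98765 \
  --storage-account funcstorage98765 --consumption-plan-location eastus \
  --runtime node --functions-version 4
```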
### Function App Settings

* added settings will be written to the host.json file
* deployment slots can be used just like the previously learned swap system
* you may set application settings under 'Settings/Configuration'
* you may set a daily usage quota
* you may not change the runtime language, but you can change the version
* you can set up authentication
* you can set a custom domain
* you can use TLS/SSL only
* you can put your function into a network
* a function will scale automatically by default
* API management lets you set up daily quotas and so on

### Durable Functions

* stateful
* can be running for a long time
* can suspend while waiting for something like an API call
* durable functions consist of three things
  * client (sets up the orchestrator and gets it running)
  * orchestrator (traffic hub, actual code)
  * activity (performs the task)
* function patterns
  * function chaining: F1 calls F2 calls F3
  * fan out / fan in: F1 calls 3xF2 calls F3
  * asynchronous API: F1 calls F2, both return a status which is checked by F3 in an interval
  * monitor: function waits for something to happen
  * human interaction: F1 waits for user input and then calls F2 or F3

### Create Durable Functions

1. create a function app like before
2. go to 'Development Tools/App Service Editor' in the sidenav
3. go into the console and create a file

```shell
touch package.json
open package.json
```

4. edit the file as follows

```json
{
  "name": "mydurablefunction",
  "version": "1.0.0"
}
```

5. switch back to Microsoft Azure and go to 'Development Tools/Console'
6. install the durable functions package by entering

```shell
npm install durable-functions
```

### Code a Durable Function

1. create a new function and select a 'durable' template
2. create an orchestrator function with the 'durable function orchestrator' template
3. create an activity function with the 'durable function activity' template

* the client function starts the orchestrator, the orchestrator calls the activity functions
* run it and see what it does depending on your code

### Add Delays and Timers to a Durable Function

1. go to 'Development Tools/Console'

```shell
npm install moment
npm install typescript
```

2. go back to the function code
3. add some code to add an hour between activity function calls

* this will put the activity call into a hibernation state that does not require resources and will automatically come back to life

```js
const moment = require("moment");

// inside the orchestrator's generator function:
const deadline = moment.utc(context.df.currentUtcDateTime).add(1, 'h');
yield context.df.createTimer(deadline.toDate());
```

* do NOT use .Sleep or the like!
* DF used to have a 7 day limit

### Function Core Tools

* DO IT AGAIN
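Until that lesson is redone, a minimal local-development sketch with the Core Tools, assuming v4 of the tools, npm and a Node.js project (all names are placeholders):

```shell
# install the Azure Functions Core Tools globally via npm
npm install -g azure-functions-core-tools@4

# scaffold a Node.js function project containing one timer-triggered function
func init MyFunctionProj --worker-runtime node
cd MyFunctionProj
func new --name MyTimer --template "Timer trigger"

# run locally, then publish into an existing function app in Azure
func start
func azure functionapp publish mynewfuncapp98765
```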
### Custom Handlers

* allow you to use languages that are normally not supported
* set up a function app with the 'Custom Handler' runtime stack
* you have to manually deploy your code from something like VS Code
* the important file is 'host.json', which points to your 'handler.exe'

### Extra

[List of Supported Bindings](https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings?tabs=csharp)

## Azure Storage Accounts

### Creating a Storage Account

#### Basics

* one very important thing during setup of a storage account is the region, because it affects performance, which features you have access to, and prices
* if you want super fast latency you choose premium
* account kind should be StorageV2 in 99% of the cases
  * you limit yourself by using v1, because it is not going to be supported anymore and only makes sense when you want to add to a pool of already set up v1 accounts
* replication options include
  * LRS: locally redundant storage, Azure will keep 3 copies of each file in the same location and datacenter
  * GRS: geo-redundant storage, will keep 6 copies of your files, 3 as in LRS and then another 3 in another location/datacenter
  * RA-GRS: read-access geo-redundant storage, like GRS but the secondary copies are also readable via their own URL
  * ZRS: zone redundant storage
  * etc...
* access tiers are hot and cool, hot for many accesses and cool for files that are mostly just stored but not accessed frequently

#### Networking

* connectivity methods are Public endpoint (all networks), Public endpoint (selected networks) and Private endpoint
* public endpoint means that outside sources can see the locked door
* private endpoint means that they can not even access the URL, even if they had the key for the door

#### Advanced

* secure transfer required should be on
* you may set up soft delete to keep deleted files for a certain number of days
* you can set up a data lake with a hierarchical filesystem

### Create a Blob Container

* setting a container access level is very important; there is private, blob and container, and anything other than private is rather dangerous

### Access Keys and Shared Access Signatures

* the primary endpoint is read/write while the secondary is read only
* you access the storage account via access keys and/or connection strings
* SAS is the best way to give access to your storage
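A SAS can be generated from the CLI as well as from the portal. A minimal sketch, assuming a container `mycontainer` in the storage account `funcstorage98765` (placeholder names; the expiry is an example):

```shell
# create a read/list SAS for one container, valid until the given expiry
# (az may need --account-key if it cannot look the key up itself)
az storage container generate-sas \
  --account-name funcstorage98765 \
  --name mycontainer \
  --permissions rl \
  --expiry 2030-01-01T00:00Z \
  --https-only
```

The output is the SAS token, which gets appended to the blob/container URL.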
## CosmosDb

Generally, there are 5 different API types for Cosmos databases in Azure:

1. Core (SQL)
   * uses JSON documents and supports common SQL statements
1. Azure Cosmos DB API for MongoDB
1. Cassandra
1. Azure Table
   * this is not the same as a blob table storage
1. Gremlin (Graph)
   * very unique type that is used for data that has many relations, like people with professions with communication and so on

* the MongoDB and Cassandra APIs are primarily for migration or for using existing code without modifications

### Creating a Core type DB

* set subscription, resource group, name and location
* capacity modes are 'Provisioned throughput' and 'Serverless'
* Geo-Redundancy will add another instance of your DB in another location, based on your selected location; doubles the cost!!
* Multi-region Writes will give you the ability to write to all of your globally created databases; also has an impact on cost
* network options include 'All networks', 'Public endpoint' and 'Private endpoint'
* by default a Backup Policy is already set up, which will periodically store your data in 2 copies
  * you may adjust all these options, like the interval and so on
* the sidenav has 'Access control (IAM)' to grant other people access to the db
* under 'Replicate data globally' you may set up as many regions as you like, for example US East, EU West and Japan East
* you can find your connection keys/strings under 'Keys'

### Creating a Container

* set db name, autoscale, max RU/s, container id and the partition key
* the partition key is very important, because it has an impact on your RU/s and general performance; it should be something that makes sure the objects you want to store are evenly spread out; for example, if you make an 'employee' container, the key should not be '/country', because you might have a company that is only located in the US, so querying would always result in hitting the entire partition

### Creating a Document

* when you create a document, there will be some properties that are automatically created by the db, e.g. '_rid'
* in the example, querying just 2 documents cost 2.28 RUs
* adding a 'WHERE' clause to your query will actually increase the RU cost

### Data Consistency Options

This is used to manage how fast data is replicated across regions.

* go to 'Default consistency' in the sidenav
* there are 5 different consistency options
  * strong (reads are guaranteed to see the most recent write)
  * bounded staleness (you may set up a maximum lag time in days, hours, minutes and seconds)
  * session (default; based on user session, the same user will have no delays in different regions)
  * consistent prefix (guaranteed that the data will be replicated in the same order, but not at what exact time)
  * eventual (wild; no time consistency is required)

### Managing Change Feed Notifications

* you may add a function with the 'Azure Cosmos DB trigger' template
* set up function name, cosmos connection, db name, collection name and the output collection name ('leases' by default)
* you can see changes inside the live log console
* this does not get triggered on deletion

### Quiz

1. What is the downside of setting "strong consistency" on a Cosmos DB database with a lot of replicas? There might be delays in reading data as it waits for the data to be replicated.
1. What is one major advantage that Cosmos DB offers over a Storage Account table storage? Sub-10ms latency guaranteed
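To close out the chapter: account, database and container (including the partition key discussed above) can be scripted. A minimal sketch with placeholder names:

```shell
# create a Cosmos DB account with the Core (SQL) API and session consistency
az cosmosdb create --name mycosmosacct123 --resource-group cligroup \
  --locations regionName=eastus failoverPriority=0 \
  --default-consistency-level Session

# create a database plus a container whose partition key spreads data evenly
az cosmosdb sql database create --account-name mycosmosacct123 \
  --resource-group cligroup --name employeedb
az cosmosdb sql container create --account-name mycosmosacct123 \
  --resource-group cligroup --database-name employeedb \
  --name employees --partition-key-path "/employeeId" --throughput 400
```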
## SQL Database

* best solution for most situations, easy to migrate data and so on

### Creation

* you must create or select a server the database will be hosted on
  * creating it will let you set a server admin login user; set a region
* single database or elastic pool
  * an elastic pool is a sharing of resources; you may have multiple databases which will allocate more resources automatically when needed
* DTUs were used before to determine how much performance your DB has
  * less flexible, since you can not set CPU or memory options
* nowadays there are also vCore options which let you determine how many CPU cores the database is going to use; this is much more expensive than the DTU-based option
  * this has a serverless option too
* networking options include No access (default), Public endpoint and Private endpoint
* you may set up advanced data security, which is basically a firewall

### Geo Replication

* go to 'Settings/Geo-Replication' inside a SQL database resource; you will see a list of all regions that are set up for the database
* click on a region you want to set up the database in; it will open a wizard where you will add another server and everything
* you can set up failover policies; in case a db goes down it will access it in a different region

### Blob Containers

* you basically know everything about this already
* AzCopy is basically the same as Azure Storage Explorer, but for console hipsters
* some AzCopy commands
  * az login
  * azcopy /Source:CONTAINERURL /Dest:DIRECTORYURL /Pattern:"FILENAME"
* acquiring a lease on a blob is basically a lock; anyone that wants to do any operation with the blob will need the lease id afterwards

### Access Tiers

* pricing strategy based on accesses
* hot: used for lots of accesses of the blobs
* cool: fewer accesses, keeps the file for a minimum of 30 days
* archive: the least accessible, 180 days retention
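The access tier can also be switched per blob from the CLI. A minimal sketch (account, container and blob names are placeholders):

```shell
# move a rarely accessed blob down to the archive tier
az storage blob set-tier \
  --account-name funcstorage98765 \
  --container-name mycontainer \
  --name backup-2020.zip \
  --tier Archive
```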
## Azure Active Directory

* identity => Azure Active Directory (different from Windows Active Directory)
* used for single sign-on, for example
* relevant for registration and so on, also social integrations
* supports multi factor auth
* [Github Quick Start Code](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2)

### Creation

* set organisation name, domain name and a region
* a new AAD has no subscriptions

### Structure

1. Open your website
1. Redirect
1. Show MS login
1. Return access token
1. Verify access token

### Registering an App

* set supported account types
* optionally set the redirect URI

## Azure Access Control

* almost every resource within Azure has the following access control options

### RBAC

* Role based access control (RBAC): get the work done with the minimum amount of privileges
* you can go to any subscription within Azure and see the roles and such under Access control (IAM)
* there is a giant list of different roles you can assign to a user

### SAS

* using shared access signatures (SAS)
* you can customize the SAS in many ways, like setting the services, resource types, permissions, a time window and the allowed protocols
  * services include Blob, File, Queue and Table
  * resource types are Service, Container, Object
  * permissions are Read, Write, Delete, List, Add, Create, Update and Process
  * protocols are HTTPS only and HTTP/HTTPS

## Secure Data

### Storage Accounts

* encryption at rest: storage accounts are encrypted by default, the only thing you can set up is your own key
* encryption in transit: in the storage account 'configuration' section you can turn on 'Secure transfer required'

### SQL

* on the server level of a SQL database there is a 'Security/Transparent data encryption' section; this is the same on the database level, where encryption is on by default

### Azure Key Vault

* creation includes setting a name, subscription, resource group, location, pricing tier (A1 Standard/P1 Premium), access policies and virtual network access
  * only the P1 tier has an HSM (hardware security module) option
  * very cheap
* within a key vault the main sections are Keys, Secrets and Certificates
* your code can use a key vault client to receive keys and secrets from Azure
  * generally a good idea to not have hard-coded keys/secrets in config files
* [Code Sample](https://github.com/Azure-Samples/key-vault-dotnet-manage-key-vaults)

## Scaling Apps and Services

* one of the key components of using the cloud
* scale up: moving up or down on service plans
* scale out: the number of instances of the application in this service plan
* ACU is the relative measure of performance; S1 has 100, S2 has 200 and S3 has 400, being the highest for a single instance

### Manual Scaling of Azure App Service

* B1 is the first tier supporting manual scaling
* you may set the number of app instances in the 'Scale out' section of your app service

### Automatic Scaling of Azure App Service

* S1 is the first tier that supports auto scaling
* now you can set autoscaling rules
* scaling can be based on a metric or a specific instance count
* rule metric sources include:
  * current resource (newfreeplan)
  * storage queue (classic)
  * storage queue
  * service bus queue
  * application insights
  * other resource
* you can define a metric name for a metric source; in the example 'current resource' is used, which includes:
  * cpu percentage
  * memory percentage
  * disk queue length
  * http queue length
  * data in
  * data out
* you generally want to set an up and a down rule so that both directions of scaling are covered

### Virtual Machine Scale Sets

* provide a group of identical virtual machines that you can set up to scale in and out (see the CLI sketch further below)
* support up to 1000 instances
* let you set autoscale rules during creation, options include:
  * min/max number of VMs
  * CPU thresholds for increasing/decreasing the instance count

### Scaling a Single Virtual Machine

* changing the size of a VM will disrupt operations, because the current VM has to be copied over, shut down and rebooted on another machine before it is available again
* you can add a 'Load Balancer' which will automatically balance traffic across virtual machines
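As referenced in the scale set section, a scale set plus a CPU-based autoscale rule can be scripted. A minimal sketch with placeholder names and values:

```shell
# create a scale set with two instances (a load balancer is created by default)
az vmss create --resource-group cligroup --name myscaleset \
  --image Win2019Datacenter --instance-count 2 \
  --admin-username aztestuser --admin-password '<your-password>'

# attach an autoscale profile (1-5 instances) and a scale-out rule at 70% CPU
az monitor autoscale create --resource-group cligroup --resource myscaleset \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name myautoscale --min-count 1 --max-count 5 --count 2
az monitor autoscale rule create --resource-group cligroup \
  --autoscale-name myautoscale \
  --condition "Percentage CPU > 70 avg 5m" --scale out 1
```

Per the up-and-down advice above, a matching scale-in rule (`--scale in 1` on a low-CPU condition) would normally be added as well.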
### Handling Transient Faults

* an example: you have multiple instances running and one of them is currently processing an operation that would take 15 seconds, but because of scaling, that specific instance is turned off, which leads to the operation being aborted
* add retry/back-off logic
* use queues, databases and other messaging systems, basically decoupling the individual systems

## Caching and Content Delivery Networks

* Redis is an open source application used for 'in memory' caching and allows you to store data, for example the user data of your website, which can be read with extremely low latency from that point onward
* a CDN is a content delivery network, which reduces load for your server and increases user performance by copying content files, which can then be served from an external server

### Creating a Redis Cache

* set DNS name, subscription, resource group, location and pricing tier
* it makes sense to have your cache in the same location as the app or whatever it is for
* pricing tiers include a lot of options, anything from C0-C6 (Basic/Standard) to P1-P5 (Premium)
  * premium prices can reach as high as almost 12k dollars
* do not make a Redis cache your primary data source
* the premium tier has an availability zone option

### Reading and Writing to Redis in .NET

* the nuget package 'StackExchange.Redis' is better than the official one from MS, because there is way more documentation for it
* hooked up with a connection string
* the two primary methods used in the examples are 'StringGet' and 'StringSet'
* other methods allow you to store geolocation, hashes, lists, sorted sets and streams

### Creating a CDN Profile

* content delivery network
* will take any content like images, audio, videos and other files and store them on another server, which will dramatically increase performance for users and your server, because it can be loaded from another source
* creation includes setting a name, subscription, resource group and a pricing tier
* there are multiple companies offering this service: Microsoft, Verizon and Akamai

### Creating a CDN Endpoint

* you can add endpoints after creation
* you have to alter your app so that it uses the newly created endpoint instead of its own
* an endpoint has an origin type, which includes:
  * storage
  * cloud service
  * web app
  * custom origin
* has protocol options for http and https and their ports
* whenever you need to change a static resource that has already been copied, you have two ways of going about updating it:

1. 'purge' your CDN, which you can also do with an automatic API call via code
1. you can also simply change the file name and both files will exist
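Profile, endpoint and the purge from option 1 can all be scripted. A minimal sketch, assuming the web app from earlier as the origin (names are placeholders):

```shell
# create a CDN profile and an endpoint that fronts the web app
az cdn profile create --resource-group cligroup --name mycdnprofile --sku Standard_Microsoft
az cdn endpoint create --resource-group cligroup --profile-name mycdnprofile \
  --name myendpoint13049134 --origin mynewwebapp.azurewebsites.net

# purge cached copies after a static resource has changed
az cdn endpoint purge --resource-group cligroup --profile-name mycdnprofile \
  --name myendpoint13049134 --content-paths '/*'
```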
## Monitoring and Logging

### Virtual Machine Logging

* the 'Monitoring/Diagnostics settings' section inside the VM resource has a monitoring option that has to be manually enabled
* in the 'performance counters' tab you can set what kind of information you want to log
  * things that can be logged include CPU, memory, disk, network and so on
  * the default logging interval is 60 seconds
* in the 'logs' tab you may set log levels for application, security and system
* in the 'sinks' tab you may set up application insights to store the data as well

### Function App Logging

* a function has a 'Monitor' tab that lets you set up application insights
* you then have a simplistic graph overview for your function

### Azure Monitor

* new service and meant as a central dashboard
* has activity logs, alerts, metrics, logs and is linked to application insights
* you can turn on diagnostics in the 'Settings/Diagnostic settings' tab for individual resources
* there are three ways of storing diagnostics
  1. storage account
  1. stream to an event hub
  1. send to log analytics
* if you want to use log analytics you have to set up logs in the 'Monitoring/Logs' section of the resource itself (might only be for VMs)
* this should be the correct answer on the test for any visibility/logging part

## Consuming Azure Services

### Logic Apps

* not in the test anymore

### Azure Search

* lets you implement a search engine based on indexes
* Basic is the first tier that is useful, because it has scaling and load balancing (replicas)
* anything from the Standard price tiers up also offers partitions for data

### API Management

* basically a portal that sits in front of your APIs
* allows you to throttle, rate limit and quota your APIs
* monitor the health of APIs and have all kinds of analytics
* pricing tiers affect SLA, AAD integration, virtual networks, regions, scaling, cache size and requests per second
* there is a consumption plan

### Configure API Management

* you can add 'Products' to the API Management, which kind of works like a role that you can set to require a subscription or approval, for example; in the example a Partner and a Public level are set up this way
* supported APIs are OpenAPI, WADL, WSDL, Logic App, API App and Function App
* you can find a section under 'API Management/APIs' that lets you set inbound/outbound and frontend/backend policies; some examples include:
  * creating custom headers
  * filtering IP addresses
  * limiting call rates
  * mocking responses
  * setting query parameters
  * CORS
* has a version history
* the URL to the API frontend can be found in the overview

### Swagger / OpenAPI

* one of the sections in the exam is importing an API using the OpenAPI specification
* you can add an 'OpenAPI' and enter an OpenAPI specification URL that will automatically pull any information, like the name and description, and fill out the fields within Azure
* you can even manage external APIs this way

### Events

* an event is a notification that contains a small amount of information, usually used for triggers

#### Event Grid

* event grid is inside of Azure
* structure: event sources - topics - event grid - event subscriptions - event receivers
* event sources:
  * blob storage
  * media services
  * azure subscriptions
  * resource groups
  * event hubs
  * IoT hub
  * service bus
  * custom topics
* event receivers:
  * azure functions
  * logic apps
  * azure automation
  * webhooks
  * queue storage
  * hybrid connections
  * event hubs

#### Event Hub

* event hub is for external sources outside of Azure
* there are different hubs like event hubs, IoT hubs and so on
* can be the layer in front of an event grid
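Wiring an event source to a receiver can be scripted as well. A minimal sketch that subscribes a webhook to blob-created events on the storage account from earlier (the endpoint URL is a placeholder and must answer Event Grid's validation handshake):

```shell
# look up the storage account's resource id to use as the event source
storageid=$(az storage account show --name funcstorage98765 \
  --resource-group cligroup --query id --output tsv)

# subscribe a webhook receiver to blob-created events from that source
az eventgrid event-subscription create \
  --name blobcreatedsub \
  --source-resource-id "$storageid" \
  --endpoint https://example.com/api/events \
  --included-event-types Microsoft.Storage.BlobCreated
```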
## Application Messaging

* a way for different systems to communicate with each other; you leave a message and it will be processed at some time in the future

### Azure Storage Queues

* store small bits of data for an application to read
* security works similar to other storage related features, using connection strings
* maximum message size is 64 KB
* XML or JSON is recommended
* basic queuing system
* cheap

### Service Bus Queue

* enterprise level queue
* managed service
* FIFO
* more expensive
* more features than a storage queue
* maximum message size 256 KB
* has different service plans that affect message size, topics, etc.
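Namespace and queue can be scripted. A minimal sketch with placeholder names (Standard tier, since Basic does not include topics):

```shell
# a Service Bus queue lives inside a namespace
az servicebus namespace create --resource-group cligroup \
  --name mysbns13049134 --sku Standard

# create the queue itself
az servicebus queue create --resource-group cligroup \
  --namespace-name mysbns13049134 --name myqueue
```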
## Case Study Question Tips

* don't read the case study scenario right away; instead look at the question to see what they want to know first, then go back to the text and look for clues
* most questions are asking about a specific function or feature within Azure

## Practice Tests

### Test 1

* Prevent anonymous usage and use OpenID
* Validate JWT
* CosmosDB, millions of documents without distinct values for partitioning, scaling strategy for individual containers, select partition keys
  * a concatenation of multiple property values with a random suffix appended
  * a hash suffix appended to a property value
* log all changes to blobs and blob metadata
  * enable the change feed on the storage account and process all changes for available events
* API management: strip response of version info
  * new policy
* consume messages from an ASB publish-subscribe topic in code
  * subscriptionClient.RegisterMessageHandler(ProcessMessageAsync, messagehandleroptions)
* store blobs for several years but always available immediately
  * cool
* provide read access to a service principal for reading resources from a group, what steps:
  1. navigate to the resource group
  1. select Access control (IAM)
  1. add a role assignment with the role set to Reader
* application executes a short lived process on a schedule
  * Azure Function
* Azure RBAC roles to upload images to Azure Container Registry
  * AcrPush
  * Contributor
* modify an app service API to use a queue that scales and handles request spikes
  * create a queue-triggered Azure Function to process the queue and move the extraction logic to the Azure Function
* policy questions:
  1. The XML segment is part of the <inbound> section of the API policy
  2. An error will be thrown if the body size is >265K
  3. The policy will retain the higher version if the request is http://preparationlabs.com/api/8.2/
  * yes no no
* register an app in Azure Active Directory
  1. select the Azure AD instance
  2. select new registration in app registrations
  3. create a new application and provide the name, account type and redirect url
* access multiple subscriptions, store and retrieve a storage account key and secret from Azure Key Vault, PowerShell commands:
  1. Get-AzStorageAccountKey -ResourceGroupName $resGroup -Name $storAcct
  2. Get-AzKeyVaultSecret -VaultName $vaultName
  3. Get-AzSubscription
  4. $secretvalue = ConvertTo-SecureString $storAcctKey -AsPlainText -Force Set-AzKeyVaultSecret -VaultName $vaultName -Name $secretName -SecretValue $secretvalue
  5. Set-AzContext -SubscriptionId $subscriptionID

### Test 2

* ARM template sections
  * parameters
  * outputs
  * variables
  * user defined functions
  * resources

# Extra

## Messaging/Event Systems and their characteristics

### Event Grid

Event Grid is an eventing backplane that enables event-driven, reactive programming. It uses the publish-subscribe model. Publishers emit events, but have no expectation about how the events are handled. Subscribers decide which events they want to handle.

Event Grid is deeply integrated with Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications. Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated.

* Dynamically scalable
* Low cost
* Serverless
* At least once delivery of an event

### Event Hubs

Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to various stream-processing infrastructures and analytics services. It's available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing, and repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.

* Low latency
* Can receive and process millions of events per second
* At least once delivery of an event

### Service Bus

Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that cannot be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.

Service Bus is a brokered messaging system. It stores messages in a "broker" (for example, a queue) until the consuming party is ready to receive the messages. It has the following characteristics:

* Reliable asynchronous message delivery (enterprise messaging as a service) that requires polling
* Advanced messaging features like first-in and first-out (FIFO), batching/sessions, transactions, dead-lettering, temporal control, routing and filtering, and duplicate detection
* At least once delivery of a message
* Optional ordered delivery of messages

[Resource here](https://docs.microsoft.com/en-us/azure/event-grid/compare-messaging-services?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fservice-bus-messaging%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json)

### Managed Identities

Here are some of the benefits of using managed identities:

* You don't need to manage credentials. Credentials aren't even accessible to you.
* You can use managed identities to authenticate to any resource that supports Azure AD authentication, including your own applications.
* Managed identities can be used without any additional cost.

There are two types of managed identities:

* System-assigned. Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Azure AD. The identity is tied to the lifecycle of that service instance. When the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.
* User-assigned. You may also create a managed identity as a standalone Azure resource. You can create a user-assigned managed identity and assign it to one or more instances of an Azure service. For user-assigned managed identities, the identity is managed separately from the resources that use it.

[Resource here](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
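A system-assigned identity plus an access grant is enough to let e.g. a web app read Key Vault secrets without storing any credentials. A minimal sketch, assuming a vault that uses access policies (all names are placeholders):

```shell
# turn on the web app's system-assigned identity and capture its principal id
principalid=$(az webapp identity assign --resource-group cligroup \
  --name mynewwebapp --query principalId --output tsv)

# grant that identity read access to secrets in an existing key vault
az keyvault set-policy --name mykeyvault123 \
  --object-id "$principalid" --secret-permissions get list
```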
## Blank Logic App

1. Create a blank Logic App
1. Add a Logic App trigger that fires when one or more messages arrive in the queue
1. Add an action that reads IoT temperature data from the Service Bus queue
1. Add a condition that compares the temperature against the upper and lower thresholds
1. Add an action that sends an email to specified personnel if the temperature is outside of those thresholds

## Authentication and Authorization

* Authentication is the process of proving that you are who you say you are.
* Authorization is the act of granting an authenticated party permission to do something. It specifies what data you're allowed to access and what you can do with that data.

## Change Feed Processor Solution

* The monitored container: The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
* The lease container: The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
* The host: A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name.
* The delegate: The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
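The lease container described above can be provisioned ahead of time. A minimal sketch, reusing the placeholder account and database from the earlier CLI example (by convention the change feed processor expects the lease container's partition key to be `/id`):

```shell
# create the lease container the change feed processor uses for state and coordination
az cosmosdb sql container create --account-name mycosmosacct123 \
  --resource-group cligroup --database-name employeedb \
  --name leases --partition-key-path "/id" --throughput 400
```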