Terrence Chou

@terrencec51229

Find Your Greatness and Keep Your Faith; we are not only seeking what we are good at, but also continuing to grow, however slowly.

Joined on Sep 11, 2019

  • Background: About Myself, The Value of CCIE / Core Infrastructure: Why We Need Segment Routing, Pure Accelerate 2019 Re-cap / Cloud Infrastructure / Networking
  • <span class="fontColor2">Because of fascinating features and thorough services, organisations are more willing to embrace public cloud platforms (AWS, Azure, GCP, etc) to enlarge their business footprint. A robust defence and recovery framework against cybersecurity incidents is always a key to ensure business continuity. Intrinsically, every technical stuff could be fully controlled in that people define each of them; in other words, every employee could be a potential exposure point, accidentally revealing their business environment to the external world. According to numerous public researches, it is really. All of us have to keep in mind that modernisation not only means service frameworks but also cyberattack patterns! Every catastrophic disruption could be resulted from the organisation's pillars and one of them is identity!</span> In general, the types of attack we have learned could be summarised below: To temporarily stop your services functioning - The most common case is the DDoS attack and it happens quite often, especially when a bunch of competitors share the same commercial market(s). Intrinsically, this kind of disruption would be a short-term period in that the intention behind the scenes does not aim to completely destroy your service(s), but temporarily stop you gaining revenue from the specific event(s) instead. The common DDoS attack is volume-based, meaning that your business continuity primarily relies on how many atypical requests you could mitigate before they get in your core infrastructure; for instance, leverage your ISP's Anti-DDoS offering or deploy the RTBH (Remote-triggered Black Hole) architecture to achieve. To infiltrate your business environment(s) and exfiltrate sensitive data - Compared to the DDoS attack, the infiltration attack is much more difficult to prevent in that it could get in your environment(s) via a variety of manners; for instance, visiting a suspicious website or opening a phishing e-mail without too many awareness. An unexpected daemon/process could reside in your house and steal your treasures silently. When you notice that there is something wrong, it does not mean that the event just gets started, but comes to the end instead. In order to gain more granular visibility and formulate a robust runbook whenever an event comes up, most organisations typically implement the NDR (Network Detection and Response) and EDR (Endpoint Detection and Response) solutions to strengthen. When we look at an essential of those solutions, they enrich both the observability and security on the infrastructure layer without any doubt; however, they do not take too many focus on the application layer. We have lived in an era where every modernised cyberattack aims to manipulate your service(s) and even tamper with your business data instead of taking over your core infrastructure. <u>The most effective and easiest manner is to penetrate any of the identities's permission (as a trojan horse) within your organisation</u>. Is it FEASIBLE?! Our resources are well-protected via a number of security frameworks across different tiers! However, the truth is that this kind of tragedy has happened over and over again, it is just because you have not been aware of.
  • <span class="fontColor2">For most organisations, one of the intentions to kick off their cloud journey is to get rid of maintaining any underlying infrastructure component entirely due to a variety of considerations, for instance, tedious hardware lifecycle and different technology focus. However, does that mean the host-based cloud platform could not present any value to organisations? Not at all! Because everything depends on the use case nonetheless.</span> Because of agility and elasticity, most organisations get started on their cloud journey and get involved in the whole cloud ecosystem widely. Other than those factors, the most fascinating point is the pay-as-you-go model which gives organisations another way to stretch their service capacity for supporting any short-period/temporary situation without investing in traditional infrastructure as they used to. Everything looks quite rational, doesn't it? However, one key sometimes is missed from the outset - <u>Which framework will you adopt for either cloud extension or cloud migration? Rehost, Replatform, or Refactor?</u> We have heard that embracing the cloud is an inevitable trend quite often, however, the influence it makes is more than just a trend. <u>The whole cloud ecosystem not only breaks the traditional boundaries (roles and responsibilities) but also forms a brand-new working model.</u> This new norm changes each technical team's ownership significantly because each of them is able to provision resources, grant accesses, expose services, and even more without involving other teams as they used to. But, this transition also forms inconsistency and confusion which could potentially result in several side effects, for instance, increasing operational difficulty and unwanted spending, especially when the organisation is a large-scale enterprise. From the above-mentioned situations, we are able to correlate them with an extremely prominent concern: <u>Do we just want to launch/move every single workload to the cloud without too many changes? Or, are we keen to refactor the service framework completely?</u> <span class="fontColorH2">Is Anything Discrepant in Cost Across A Decade?</span> From my personal perspective, moving to the cloud could not be just for reducing whatever cost because this is a tremendous misunderstanding if you put "cloud is cheap" into your mindset. If it really is, <u>why does the FinOps principle come into play?</u>
  • <span class="fontColorH2">Find Your Greatness and Keep Your Faith</span> This slogan is excerpted by Nike and I just want to remind myself whenever I look at it. We are not only just to find out what are we good at, but also have to keep growing up continuously although slowly. We have been involved in the new era of mixed territories, such as networking, security, storage, and virtualization. They are not standalone anymore, closed to each other instead. Therefore, my goal is not only to keep what I have been familiar with, but also to learn what is news, and evaluate if it is worth to try to optimize the existing environment or resolve the known issues. To be not just a Networking! <span class="fontColorH2">The Driver of Write-down</span> As of today, I have been benefited from several useful blogs such as Nicholas Russo, Cody Hosterman, and even any contents could be found by Google. I have had an idea to push my notes to be readable since a long time ago, therefore, I might be able to assist someone in the future as well as those guys. The progress is not so fast, but I will fill this place as my best as possible. Intrinsically, all of my posts would much focus on Networking and Cloud spaces.
  • <span class="fontColor2">Leased line or IPSec VPN? These two terminologies always arise when you want to bridge two locations. In this cloud-massive era, IPSec VPN could still be easily deployed in any architecture because what it needs in essence is the Internet; however, when we turn to the leased line, what we could do since we have nothing in the on-premises?</span> Leased line or IPSec VPN? The adoption primarily depends on one of the following factors, some of them, or even all of them if technologies are met, e.g. protocol/feature: Permanent/temporary use - If the scenario is a PoC, you typically would not prefer ordering a dedicated circuit to verify your requirement from either the schedule aspect, the cost aspect, or both. However, if the scenario is opposite, which means it is a production environment, the Internet-based VPN typically is not the first priority for consideration. With/without SLA - Not every case you would need a commercial agreement to safeguard your business even if the scenario is a production environment in that the requirement highly depends on what is the magnitude of the service. Cost - Every solution, no matter it is open-source-based or commercial-based, could be divided into two pieces: CapEx and OpEx. The CapEx primarily focuses on how much budget I need to scope and how much expenditure I need to pay. The OpEx primarily emphasises what needs to beware if leveraging any existing resource, e.g. capacity or reliability. When we turn to technical requirements, e.g., do we have sufficient infrastructure resources to deliver (router/switch/firewall), do those resources have sufficient licenses to support (BGP/GRE/IPSec), we typically do not concern them too much because they are easily qualified by the existing environment.
  • :::info :bulb: The following outlines do not cover ALL the events. I captured only the ones worth keeping in my mind. ::: <span class="fontColorH2">FlashArray</span> <span class="fontColorH3">DirectMemory</span> The main functionality of DirectMemory is to accelerate READ performance. Based on the current design of FlashArray, <span class="fontColor">X70</span> and <span class="fontColor">X90</span> are the only supported models. Intrinsically, DirectMemory serves a caching purpose, so it is not protected by RAID. If one of the DirectMemory caches malfunctions during reading, it affects performance only (higher latency than usual).
  • <span class="fontColorH2">What Is Supercloud?</span> What is Supercloud? You probably have several questions, e.g., Is it a new terminology? What does it differ from the Private Cloud, Public Cloud, and Multi-cloud? Before we dive into each of them, let us retrospect the transition between the cloud models at the outset. Typically, most organisations get started from on-premises data centres/colocations where aka the Private Cloud; their operation team fully handles the underlay infrastructure routines, e.g. procurement, installation, or optimisation. Because the benefits introduced by cloud computing have been accepted widely, more and more organisations kick off their digital transformation journey and move their business on either Amazon, Microsoft, or Google these Public Cloud vendors.
  • <span class="fontColorH2">Outline</span> I have had an idea that to share my perspective of what the infrastructure role will be after passing the AWS Solutions Architect - Professional exam. My intention is certainly not flaunting how the honor it is, instead, it is more about sharing the following thoughts in the cloud world specifically within these years. Moved from an infrastructure owner to an infrastructure consumer. To be more widely involve in various territories. Obviously, one of the common strategies in the cloud world is that to <span class="fontColor">use</span> the services offered by each provider instead of <span class="fontColor">managing</span> them, no matter the solutions you adopted are IaaS, PaaS, or SaaS. For this reason, a bunch of the infrastructure routines offloaded for instance; Renew the warranty of each hardware/software resource. Assess the capacity of each component such as networking, server, storage, and data center facilities.
  • <span class="fontColorH2">Preface</span> Let us forget anything about the cloud before we get started the subject today. In the past, when we needed to launch any service on top of the IT infrastructure, what we did could be summarised below typically; Evaluate how much capacity is required to afford the loading. Consider how to safeguard applications that expose to the Internet with minimum compromise or without compromising the degraded performance. Consider how to govern communications between applications across environments with elasticity and granularity. When we look at the cloud era nowadays, the first task is completely offloaded to the CSPs, hence it is no longer a concern (well, the only thing that you definitely need to care about is how to ensure that you will not get surprised when you receive the bill :expressionless:). However, the rest of the tasks are still our responsibility. Before considering anything about protection, management, or both, you need to build a place to accommodate those business-critical applications; but, what does the landing zone differ from the on-premises infrastructure design? Because a landing zone is able to deem an SDDC (software-defined data centre) in essence. That is a great question, is not it :sunglasses:? ++Concisely, the traditional infrastructure focuses on reachability; however, the landing zone much focuses on application-driven design.++ What does it mean? The following scenarios will discover more.
  • <span class="fontColorH2">Outline</span> Because of the following reasons (mainly from my aspect), more and more organisations have considered moving and launching their businesses on the cloud. <span class="fontColor">++Pay as you go++</span> - Obviously, it is absolutely attractive from the cost management perspective. The payment is only asked whenever you launch any of the services on the cloud.One thing needs to keep in mind is that not everything is covered by this charge model due to <span class="fontColor">the storage is an exception</span>. When you turned off an EC2 instance, you would not be charged for compute resources, e.g. CPU, memory, or the OS license, until you turn them up; however, the disk space is allocated so that it would be charged nonetheless. ++Unlimited resources (fewer depedencies)++ -  How long does a set of products or services to be deployed in the on-premises environment? The Product team only cares about how quickly the Infrastructure team could fulfil their requirement. However, from the Infrastructure team aspect, they have to ensure that all of their managed resources are sufficient to be occupied beforehand. What if the resource is absent? Does the Product team accept any compromise? How long does the new procurement to be in place? Luckily, you no longer need to take the above-mentioned points into account in the cloud world due to they are completely managed by each CSP. In other words, you just need to focus on what is the most cost-effective/full-tolerance design and carry it out afterwards. ++Rapid deployment (more flexibility)++ - You could provision anything whenever and wherever you are, then decommission everything once they are no longer required. All of your applications are able to serve across multiple AZs and even regions easily. Do you aim to launch a new application in a single region? Or even, do you plan to stretch your business across multiple regions? If so, what you need to do is just carry them out without any constraint. Besides building everything from scratch on the cloud, the quickest/easiest way is to have a copy there. Therefore, not only AWS-native services but also several 3rd party solutions are able to make it happen and worth keeping an eye on as well. So, let us get started.
  • <span class="fontColorH2">Where Is It Comes From Before we dig into what is Distributed Cloud, let us see two concise explanations first. <span class="fontColorH3">Gartner</span>Distributed cloud is the distribution of public cloud services to <span class="fontColor">different physical locations</span>, while the operation, governance, updates and evolution of the services are the responsibility of <span class="fontColor">the originating public cloud provider</span>. <span class="fontColorH3">IBM Cloud</span>Distributed cloud is a public cloud computing service that lets you run public cloud infrastructure in multiple different locations  -  not only on your cloud provider's infrastructure but <span class="fontColor">on premises, in other cloud providers' data centers, or in third-party data centers or colocation centers</span>  -  and manage everything from a single control plane.With this targeted, centrally managed distribution of public cloud services, your business can deploy and run applications or individual application components in a mix of cloud locations and environments that best meets your requirements for performance, regulatory compliance, and more. <span class="fontColor">Distributed cloud resolves the operational and management inconsistencies that can occur in hybrid cloud or multicloud environments.</span>Maybe most important, <span class="fontColor">distributed cloud provides the ideal foundation for edge computing</span> - running servers and applications closer to where data is created. In summary, there are three key factors from the abovementioned definitions. Distributed Cloud is an extension of the region where the cloud service provider has not launched any service yet.
  • <span class="fontColorH2">Retrospect</span> In my old post, The Evolution of Cloud Networking on AWS I elaborated on what and why Transit Gateway could revamp your network transport. Although Transit Gateway has been generally available since November 2018 (ready to 4^th^ anniversary), it is still the most powerful feature in the Cloud Networking space across the board. If you think that Transit Gateway will be the last fascinating networking offering then you are definitely wrong! <span class="fontColorH2">New Launch</span> In July 2022, AWS formally announced another cool feature called Cloud WAN. As the matter of fact, I was a bit confused about its name due to WAN typically means external/public networks; however, what Cloud WAN is responsible for is not really about WAN, instead, ++it is more about globally consolidating all of your network ingredients, e.g. VPC, Transit Gateway, Site-to-Site VPN, and SD-WAN across regions into a single and unified management console++. Other than the name, I was also a bit confused about the key differences when compared with Transit Gateway in terms of their functionalities, especially after I read the Preview post. As a result, the following questions came up in my brain;
  • <span class="fontColorH2">Business Model Nowadays</span> As of today, more and more organizations have either evaluated their cloud adoption strategy or investigated the requirement of the multi-cloud; obviously, the cloud world is no longer blurred. Other than Application Modernization (serverless or containerization), another topic that has been paid attention is the multi-cloud networking. The multi-cloud networking has been limelighted by two threads; Multi-cloud network transit. Multi-cloud network management. From the technical standpoint, the multi-cloud network transit is covered with the multi-cloud network management; when turning to the commmercial standpoint, they are supported by a different way.
  • <span class="fontColorH2">Native Cloud</span> Typically, from the architecture design perspective, you would not consider putting everything together within a single VPC. In most cases, the resources would be segregated from the role of the environment. For instance, Production, Developer, and Testbed. Although we certainly know what granular management that tagging rendered, however, the division is required for the non-technical considerations, which is more about the management purpose instead. As a result, the whole environment would be composed of a bunch of VPCs and even across different accounts/organisations. Therefore, AWS has rendered a couple of managed services to bridge all of them together. :::success :bulb: Since this post much focuses on the VPC-level solution rather than the Endpoint-level solution, hence PrivateLink is not covered here. ::: <span class="fontColorH3">VPC Peering</span>
  • <span class="fontColorH2">The Control Plane Simplification</span> Intrinsically, Segment Routing (SR) could be treated as NG-MPLS. As the matter of fact, it really is. The main drivers for embracing SR instead of traditional MPLS (LDP/RSVP) are summarized below. <span class="fontColor">The optimization of the signaling. It is carried out by decoupling additional protocols.</span> ECMP supported. It is one of main shortages of RSVP. Easily distinguish the traffic is being stuck on which node from the troubleshooting aspect. The reason why SR optimizes the signaling is becauase none of the signaling protocol is required to make SR function, the IGP (OSPF/ISIS) extentions take it over instead. Therefore, RSVP is not required at all for fulfilling MPLS TE in the SR domain. <span class="fontColorH3">What Does Segment Routing Differ Traditional LDP</span>
  • :::info :bulb: This content is re-posted from my LinkedIn article, which was posted in March 2020. ::: "The Value of CCIE" - this subject has been debated for a long time, and it is refreshed almost every year after the Top #N certificates of the year are released. I am not trying to create yet another place to debate it; I just want to share what is on my mind instead. I think that when people commence planning the entire proposal for the CCIE journey, the primary motivation is to treat it as a milestone or an achievement. The reason I exclude the salary is that it does not 100% make sense from my perspective. In order to get there, a couple of things will take place. Firstly, you have to spend time becoming more and more familiar with what you are and are not good at. As a result, something needs to be compromised or even sacrificed. Secondly (continuing from the first point), you have to prepare most of the required materials to ensure that you can be qualified for this title. Therefore, additional expenditures are unavoidable.
  • <span class="fontColorH2">Before We Get Started...</span> Is access leak easy to penetrate in my environment? The answer is Yes without any doubt. Therefore, another question may arise in your mind: if it really is a spotlight that is worth keeping an eye on, why did I not adopt any reaction in the past accordingly? The answer is cloud. Before cloud adoption gets popular, every entry point of access is well-defined by either Security or the well-trained operation team. All kinds of management accesses, e.g. HTTP or SSH, could only be granted from specific sources, no matter the request is initialised from either the internal environment or the Internet. In addition, all bidirectional communications must be passed through the peripheral appliance, e.g. firewall. Obviously, there are not too many ways to accidentally expose your services with unwanted profiles. However, what happened after cloud adoption had explosive growth? We all understand one of the cloud essentials is convenience, because of this strength, developers no longer require collaborating with the infrastructure team to publish their ideas over the world, they could do everything they want by themselves completely. On the other hand, this convenience results in unnecessary exposure. When we turn to the previous scenarios, they would look like below: