Cloud Essentials+ Journey – Request for What?

Purchasing and implementing new solutions is a big part of the game for IT departments. Any time we deal with purchasing new gear or solutions, we are working with vendors, partners, and resellers. When working with these outside entities, there are different requests that we can draft to send to our partners/resellers/service providers. These are named request for information (RFI), request for proposal (RFP), and request for quote (RFQ). Here are my interpretations of these three request types.

Request for Information (RFI)
A request for information (RFI) is sent to vendors, partners, resellers, or service providers when an organization knows at least at a high level the business need, but they require more information about how a solution functions, the problems it solves, and how it is implemented. An RFI will generally contain at least the following information:

- Description of the need or definition of the project at hand.
- Any necessary background information on the organization or the need/project.
- Any qualifications or caveats for selection of the solution/product.
- List of objectives for the need, project, or initiative.
- Timeframe of when the organization wants to purchase and implement the potential solution.

Request for Proposal (RFP)
To me, an RFI would be sent early on, in the investigation process of a potential project, when an organization is trying to find out what potential solutions are out there and available. A request for proposal (RFP), on the other hand, is sent out when a business is more serious about a particular solution. An RFP is sent out when a customer wants to see the specifics of how a vendor, partner, or service provider will implement a given solution and how it will fit and operate within the customer’s environment. At a high level, the RFP should outline:

- The project or need in question.
- The customer’s budget for the project/need.
- The timeline in which the customer wants to implement the proposed solution.

Request for Quote (RFQ)
My interpretation here is that a request for quote (RFQ) is similar to an RFP, but more focused on the associated costs of a solution. The RFQ is sent out to vendors/partners/resellers when an organization is ready to make a decision on a potential solution and wants to see not only how the given solution will be implemented, but also the associated costs. Once a solution is selected as a result of an RFQ, that is when the contract/purchasing process will begin.

Rounding it Out
Unfortunately, products and solutions do not just magically show up on our doorsteps when we need something. For more in-depth solutions, there may be multiple phases to the purchasing process, and we can use request types such as the RFI, RFP, and RFQ to help us along the journey to meeting a business need.

Security+ Journey – The Beginning

From a learning and growth perspective, I have decided (at least for now) to lean more toward “going wide” than “going deep/narrow” in facets of information technology. I have enjoyed changing things up and broadening my knowledge and skillsets. Especially for the role that I am in currently, it makes sense to gain at least a base level of knowledge in different topics outside of network infrastructure, which is where I have primarily “lived” for the last ten years or so. That is why I jumped into the base cloud concepts recently in preparing for the CompTIA Cloud Essentials+ (CLO-002) exam. Now that I have prepared for and completed that exam, I want to start my next journey. I have found that with the projects and initiatives I have worked on, I have thought about best practices around security early on. If security is neglected from the start, we are likely to regret it (maybe more than once). That being stated, much of my knowledge around security has been picked up through experience over time. That is definitely not a bad way to learn and grow, but I want to take it a step further. I want to take some time now and go back to basics to build up a more foundational knowledge of information security concepts. That is why I am starting my preparation for the CompTIA Security+ (SY0-601) exam. I looked at the exam objectives and I think it will benefit my learning journey, especially in my current role.

To prepare for the Security+ exam, I am using the following materials:

  • CompTIA Security+ (SY0-601) Exam Prep Bundle
    • Official CompTIA Security+ Study Guide – eBook
    • CompTIA CertMaster Practice for Security+
    • Exam voucher and retake voucher
  • CompTIA CertMaster Labs for Security+
  • CBT Nuggets Security+ (SY0-601) Online Training
  • Anki flashcards throughout the study process for review

I am looking forward to digging in and building a stronger foundational knowledge of information security. Feel free to follow me along this journey here as I document my interpretation of concepts as I go through my studies.

Cloud Essentials+ Journey – Cloud Migration Phases

Writing from experience, “moving to the cloud” can be a nebulous concept when you do not know much about it. It seems easy to just think, “hey, we have applications hosted in our data center today, then tomorrow they will be in the cloud.” However, a cloud migration is much more than that. A fair amount of work needs to go into the strategy and planning of a cloud migration, and not all of it is technical. The goals and desires of the business need to be taken into account early to make sure that moving to the cloud is the right business choice. A cloud migration should have multiple phases, with the early phases focusing largely on discovery and analysis, with the goal of determining whether certain applications and services should be migrated to the cloud and whether a cloud migration aligns with business goals and requirements. A cloud assessment addresses these early phases. At a high level, these cloud migration phases can include the following.

  • Resource Discovery
    • Before applications and services can be moved to the cloud, we need to know what we are running and supporting currently. I think of this as the inventory phase. This is where we do a lot of information gathering to see what we have today.
  • Applicability of Existing Resources in the Cloud
    • Once we have our application and service inventory, we can begin the analysis to see which applications, if any, make sense to be cloud deployed. Not all applications make sense to be cloud delivered. As a recommendation, it may be good to avoid migrating the following types of applications to the cloud, since they may not fit cloud structures well (there is a small screening sketch after this list):
      • Legacy/”old” applications.
      • Applications that have been heavily modified/changed/customized.
      • In-house developed (“home-grown”) applications.
  • Aligning Existing Services to Available Cloud Solutions
    • When applications and services are deemed suitable to be cloud deployed, we now need to see what cloud options are available that fit our existing needs well. Some applications may just be a lift and shift migration from on-premises into the cloud, while in other cases, a rip and replace or repurchase migration might make more sense. For instance, an on-prem application might satisfy a business need today, but it may make more sense to purchase a new cloud solution to meet the need instead of trying to migrate the existing application to the cloud.
  • Application/Service Migration to the Cloud
    • This can be thought of as the actual deployment phase. There are seven main types of cloud migrations that I will plan to dig into in another post in this series:
      • Rehost (lift and shift)
      • Replatform (lift, tinker, and shift)
      • Refactor (rip and replace)
      • Repurchase (drop and shop)
      • Retire
      • Retain
      • Hybrid
  • Ongoing Support and Day-to-Day Operations
    • In my opinion, day-to-day operations really should be seen as a migration phase for the main reason of remembering that it exists. Once an application has been migrated to the cloud, that does not mean the work is done. Just like on-premises applications, there is still maintenance and support in one form or another that exists in the cloud.
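
To make the applicability screening mentioned above a bit more concrete, here is a minimal Python sketch of how that step could be approached. The application attributes, names, and flags are hypothetical examples of my own, not a formal framework; a real assessment would weigh many more factors than these three.

```python
# A minimal sketch of the "applicability" screening step described above.
# The attributes and example applications are hypothetical.
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    is_legacy: bool            # "old" application on an aging or unsupported stack
    heavily_customized: bool   # significant modifications from the vendor baseline
    home_grown: bool           # developed in-house


def flag_for_review(app: Application) -> list[str]:
    """Return the reasons an application may not fit the cloud well."""
    reasons = []
    if app.is_legacy:
        reasons.append("legacy platform")
    if app.heavily_customized:
        reasons.append("heavy customization")
    if app.home_grown:
        reasons.append("home-grown application")
    return reasons


inventory = [
    Application("payroll", is_legacy=True, heavily_customized=True, home_grown=False),
    Application("ticketing", is_legacy=False, heavily_customized=False, home_grown=False),
]

for app in inventory:
    reasons = flag_for_review(app)
    verdict = "review before migrating" if reasons else "good cloud candidate"
    print(f"{app.name}: {verdict} {reasons}")
```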

As you can see from the list above, there is a fair amount of time and effort that goes into a cloud migration, much of which happens well before the actual migration. Another important thing to remember when thinking through a potential cloud migration is the objectives of the business. Something that should be done early on is a feasibility study that looks not only at the technical side, checking the feasibility of applications to be cloud delivered, but also at the business side. For example, does it make sense for the organization to shift from primarily capital expenditures to primarily operating expenditures? As alluded to in the beginning, there is much more to a cloud migration than just flipping a switch.
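
As a simple illustration of that CapEx versus OpEx question, here is a rough sketch comparing an amortized hardware purchase to an estimated monthly cloud bill. Every number here is made up for illustration; a real feasibility study would factor in staffing, power, licensing, data egress, refresh cycles, and much more.

```python
# Hypothetical on-premises CapEx versus cloud OpEx comparison for one workload.
hardware_capex = 60000          # up-front server/storage purchase
hardware_lifespan_years = 5
onprem_annual_support = 6000    # maintenance contracts, power, etc.

cloud_monthly_cost = 1800       # estimated IaaS bill for the same workload

onprem_annual = hardware_capex / hardware_lifespan_years + onprem_annual_support
cloud_annual = cloud_monthly_cost * 12

print(f"On-prem (amortized): ${onprem_annual:,.0f}/year")
print(f"Cloud (OpEx):        ${cloud_annual:,.0f}/year")
```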

2022 – The Year of Books

Up until this year, other than technical-related material, I had never been much of a reader. I could never be trusted to sit down on a consistent basis and get through books. That being stated, over the last few years I have been looking for ways to continue to better myself, not just on a technical/career level, but as a human in general. I had been listening to a number of podcasts and wanted to take that a step further. I so badly wanted that next step to be books. Books of different varieties and genres to continue to open my mind to different thoughts and perspectives. But…sentence number two still haunts me.

Thinking through the fact that I was getting through a fair amount of podcasts while doing random things around the house, while driving, etc., I wondered if I could apply the same strategy to books. Trust me, if there is a path of least resistance out there, I am bound to find it. My laziness knows few boundaries. I took to Twitter to get recommendations about audiobook platforms. Thanks to the recommendations of a couple kind folks, I was introduced to the Libby platform. Libby provides you a way to connect with your local library to check out audiobooks for free! I did not even have a library card, but through my local library I was able to get a digital version online and start with Libby right away, using the app on my phone. There are plenty of audiobooks available, and if one is not available at the time, you can put it on hold and the app will alert you when it is ready. I am not sure if it differs per library, but all books I have checked out give me two weeks to finish them.

Thanks to this platform, I have been able to get through over thirty-five books this year. This is pretty big for a person who has barely delved into books at all in the past. I know I am not “reading” in the traditional sense, but I am still consuming the content, and that is the important part, to me at least. Here is the list of books I have gotten through this year. All but two have been through the Libby app. The other two were actual paper books, and knowing who I am, getting through two actual paper books in a year is quite the feat (the bar is fairly low here). Some of the books in this list do not have their full titles listed, but a quick online search should point you in the right direction if you are interested in any of them.

  1. American Sniper – Chris Kyle, Scott McEwen, Jim DeFelice
  2. Anxious People – Fredrik Backman
  3. The President’s Daughter – Bill Clinton, James Patterson
  4. What Got You Here Won’t Get You There – Marshall Goldsmith
  5. Ready Player One – Ernest Cline
  6. Dare to Lead – Brene Brown
  7. Killing the Mob – Bill O’Reilly and Martin Dugard
  8. Unwinding Anxiety – Judson Brewer
  9. Little Fires Everywhere – Celeste Ng
  10. The Bomber Mafia – Malcolm Gladwell
  11. You Are a Badass: How to Stop Doubting Your Greatness and Start Living an Awesome Life – Jen Sincero
  12. Ready Player Two – Ernest Cline
  13. Deep Work – Cal Newport
  14. Call Sign Chaos: Learning to Lead – Bing West, James Mattis
  15. The Last Thing He Told Me – Laura Dave
  16. Range: Why Generalists Triumph in a Specialized World – David Epstein
  17. Where the Deer and the Antelope Play – Nick Offerman
  18. The Guest List – Lucy Foley
  19. In Cold Blood – Truman Capote
  20. Now, Discover Your Strengths – Donald O. Clifton, Marcus Buckingham
  21. Keep Sharp – Sanjay Gupta
  22. The Tattooist of Auschwitz – Heather Morris
  23. The Dark Hours – Michael Connelly
  24. Extreme Ownership – Jocko Willink and Leif Babin
  25. The Power of Habit – Charles Duhigg
  26. The Wright Brothers – David McCullough
  27. Shadows Reel – C.J. Box
  28. Malibu Rising – Taylor Jenkins Reid
  29. No Cure for Being Human – Kate Bowler
  30. How the Word is Passed – Clint Smith
  31. Killing the Killers – Bill O’Reilly and Martin Dugard
  32. The Wish – Nicholas Sparks
  33. The Power of Regret – Daniel H. Pink
  34. The Pioneers – David McCullough
  35. The Nineties: A Book – Chuck Klosterman
  36. Nine Perfect Strangers – Liane Moriarty
  37. The Subtle Art of Not Giving a F*ck – Mark Manson
  38. Killers of the Flower Moon – David Grann
  39. The Recovery Agent – Janet Evanovich
  40. Talking to Strangers – Malcolm Gladwell

I am so glad that I leveraged this app throughout the year and am looking forward to continuing next year. I have tried to get a decent mix of fiction, historical, and career-related/self-improvement books. I try to keep a decently open mind about things, and I think books are a great way to get different perspectives. I encourage others to check out not only the Libby platform, but any method that allows you to consume content from books. Books not only allow your brain to think through concepts, they are also great conversation starters. Happy reading!

Cloud Essentials+ Journey – What’s Your ‘Objective’?

In the previous post in this series, titled Be Ready for ‘Anything’, we introduced the concept of disaster recovery. While organizations can do their best to design and implement redundant, highly available systems, disaster recovery plans still need to be made to handle the recovery process from large downtimes and/or disasters when they happen. Disaster recovery planning can be a daunting task, and it can even be difficult to know where to start. We need tools and methods to help us determine what types of solutions to implement and how much we may need to invest in disaster recovery. As I have brought up before, any type of technology planning and implementation should start with knowing and understanding the business requirements, and DR is no different. A couple of concepts that help us plan for and implement an efficient DR process are recovery time objective (RTO) and recovery point objective (RPO). I will be honest, these are two concepts that I had come across before and had no idea what they meant, so I was very glad to dig into their meanings as part of this Cloud Essentials+ journey. In the rest of this post I will cover what I have learned about the definitions of RTO and RPO.

Recovery Time Objective (RTO)
In my opinion, recovery time objective (RTO) is the more straightforward of the two recovery objectives. Just as it is there in the name, time is the key concept with RTO. The goal of recovery time objective is to understand the amount of time a system or application can be unavailable before it is highly detrimental to the business and customers. Another way to look at it is that RTO is the amount of time in which applications/services must be back online after an outage or disaster.

Recovery Point Objective (RPO)
While RTO is focused strictly on understanding the amount of time in which recovery needs to happen, recovery point objective (RPO) is centered around data loss tolerance. There is still a time concept with RPO, but it is directly related to data loss. RPO helps us understand how far out of date our data can be when it is restored so that an application or service is still relevant and usable. Understanding how far out of date data can be upon restoration ties directly to how often we need to take backups of our data. A more direct description of RPO is that it is the amount of time between the last known good backup and the outage or disaster. Let’s get into a quick example. An organization has an application that they support. The application’s database server is currently backed up once per day at midnight. That database server fails at 10:00 AM and must be restored from backup. Once that server is restored, there would be 10+ hours of data lost. The organization can use the recovery point objective concept to determine if that is good enough or if backups need to be taken more frequently.
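
Here is a minimal sketch of the reasoning in that example, assuming a hypothetical target RPO of four hours. The point is simply that the backup interval bounds the worst-case data loss, which can then be compared against the RPO the business has agreed to.

```python
# Worst-case data loss for the backup scenario above, versus a hypothetical RPO.
from datetime import datetime, timedelta

last_good_backup = datetime(2022, 6, 1, 0, 0)   # nightly backup at midnight
failure_time = datetime(2022, 6, 1, 10, 0)      # database server fails at 10:00 AM
target_rpo = timedelta(hours=4)                 # assumed tolerance for data loss

data_loss = failure_time - last_good_backup
print(f"Data lost in this failure: {data_loss}")

if data_loss > target_rpo:
    print("Backups are not frequent enough to meet the RPO; back up more often.")
else:
    print("The current backup schedule meets the RPO for this failure.")
```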

Rounding Out Our Objectives
Understanding and leveraging the concepts of recovery time objective (RTO) and recovery point objective (RPO) can help us ensure that we are designing and implementing effective and efficient disaster recovery plans. In order to understand the requirements that go into determining RTO and RPO values, I feel that it is important for IT leaders to communicate with business leaders. IT departments need to properly understand the needs of the business to make sure they are delivering technology solutions, including disaster recovery, that meet the needs of the business.

Cloud Essentials+ Journey – Be Ready for ‘Anything’

An application or service is really only good to a consumer if it is functional and available for use. As much as we may wish they wouldn’t, IT systems break, go offline, and thus become unavailable. With this in mind, we know that if our business and customers cannot tolerate certain amounts of downtime, we have to plan for that and implement systems that can withstand certain levels of adversity. As with anything in IT, knowing the business-level requirements is highly important so that you can design systems that meet and exceed expectations. The collective “we” have been designing and implementing redundant and highly available systems in on-premises data centers for years. The problem of things breaking does not go away in the cloud. Even in the cloud, we still need to understand our requirements and design, build, and deploy systems with redundancy, high availability, and disaster recovery in mind. In the rest of this post, I will do my best to give my interpretation of the definitions of these concepts.

Redundancy
If nothing ever broke down or had major issues, then we would not really need to worry about redundancy. Redundancy means that you are taking potential failures into account and removing single points of failure in the environment. The concept is that if one device or component goes down, there is another such device or component ready to carry the load, preventing large, lengthy downtimes and service interruptions. Given my background, I will give a networking example. In traditional Layer 2 campus networks, within a building we may have access layer switches where endpoints attach to the network, which then connect to a distribution layer that aggregates the access layer. If business requirements call for minimal downtime, and budget allows, we will most likely have two switches in that distribution layer. Each access layer switch will have at least one link to each distribution switch. If there is a link failure between the access and distribution layers, or if one of the distribution switches goes down, the other is still available to pass traffic. Now, just because there is redundancy does not mean there is no downtime. In the traditional Layer 2 network scenario, if a distribution switch goes down, some clients may still experience a service interruption while the Spanning Tree Protocol does its thing and the network re-converges. However, that downtime should be relatively brief and much better than if there were only one distribution switch and clients/devices were down until the switch could be repaired or replaced. If we want to further lessen the impact of device/system failure, we may need to take redundancy a step further and investigate high availability options.

High Availability
The goal with high availability is to reduce the impact of a failure as much as possible. In my experience, devices configured in HA share configuration and operational state information so that if one device goes down, the other takes over immediately. I have come across two main types of HA systems: active/standby and active/active. With active/standby systems, both devices are synced, but only one (the active) is handling traffic. The active and standby are constantly communicating so that the standby knows if the active goes down. Once that happens, the standby takes over immediately, limiting the amount of impact and downtime. In the active/active scenarios that I have seen (and think I understand), both devices actively pass traffic for the data plane, but only one device handles the control plane functions. If the active control plane device fails, the standby takes over. Because the standby device was actively participating in the data plane before the failure, impact should be limited. (Note: there are actions you can take with network devices to further speed up the control plane switchover and reduce that control plane impact as well.)
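
To illustrate the active/standby idea, here is a toy sketch of the heartbeat-and-failover decision. It is a simplification built on my own assumptions (real HA pairs also sync configuration and session state and use vendor-specific timers and protocols), but it shows the basic logic of a standby promoting itself when the active stops responding.

```python
# Toy model of an active/standby pair: the standby watches for heartbeats from
# the active node and promotes itself when they stop. Timer values are arbitrary.
import time

HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before failover


class StandbyNode:
    def __init__(self):
        self.role = "standby"
        self.last_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        # Called each time the active peer checks in.
        self.last_heartbeat = time.monotonic()

    def check_peer(self):
        # Promote to active if the peer has been silent too long.
        if self.role == "standby" and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"
            print("No heartbeat from active peer; standby taking over as active.")


standby = StandbyNode()
standby.receive_heartbeat()   # active peer is healthy
time.sleep(3.5)               # simulate the active peer going down
standby.check_peer()          # standby promotes itself
print(f"Current role: {standby.role}")
```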

Disaster Recovery
While organizations can do their due diligence in designing redundant, highly available systems, they still need to plan for failures, outages, and disasters. That is where disaster recovery enters the picture. At a high level, disaster recovery involves having specific plans and procedures for restoring services and operations after an outage or disaster. DR plans really need to be catered to the needs and requirements of the business. Two tools to help us in the disaster recovery planning process are recovery time objective (RTO) and recovery point objective (RPO). These will be covered in a subsequent post.

In Closing
Being ready for anything is a tall order and probably not feasible. What we can do is be as prepared as possible for adversity and things to go wrong (as they will). An important step is working with business leaders to understand their operations and what the availability requirements are for the different systems in the organization. Once those requirements are known and understood, we can better serve our organizations through the concepts of redundancy, high availability, and disaster recovery.

Cloud Essentials+ Journey – Software-Defined Networking

Networks have definitely been evolving over the years. While you can certainly still build and manage networks the good old-fashioned way with the command line interface, interacting with your network on a device-by-device basis, there are definitely alternatives. There are technologies, applications, and systems that will automate your network operations and even assist with the initial build! Or, if you are not ready for that or don’t quite have the budget, you can still script the typical manual device interaction process to make it quicker, easier, and more streamlined. In this post, we are going to dig into the main alternative that I mentioned above. I am referring to the concept of software-defined networking (SDN). This was a phrase and concept that was very nebulous to me for quite some time (and it still is some days). I think you may get different answers from different people when you ask them to define software-defined networking. My high level definition, or at least my interpretation of SDN, is this: software-defined networking abstracts the underlying network from the applications it supports in an attempt to have a dynamic, flexible infrastructure that supports varying business needs while remaining stable and resilient. Alright, I know what you’re thinking, “wow, it doesn’t get much more nebulous than that”. I guess that is my fancy way of saying “hooray for overlays”! I think that the keys to SDN are more so in the characteristics than the definition, but I at least wanted to get my high level interpretation out there. In the rest of this post, we will go over the characteristics of and a use case for software-defined networking.

SDN Characteristics
My interpretation of the SDN characteristics is the following.

  • Centralization of Management/Configuration
    • A software-defined network should contain a separate management plane. This would typically be a set of redundant servers that admins and engineers interact with to configure, manage, and monitor the network. A dedicated management plane centralizes the configuration of the network and, through automation, removes the need for manual, per-device administration.
  • Centralization of Monitoring
    • By having a centralized management plane, you can also have centralized monitoring of your network. The individual network devices can be configured to stream logs and telemetry to the management plane server(s) so that admins and engineers have a single source for monitoring and correlation.
  • Separation of Control and Data Planes
    • This characteristic is directly related to the “hooray for overlays” comment I made earlier. With traditional networking, all routers need to hold a route in their routing tables for all possible destinations in the network. Also with traditional networking, when you add routers to the mix, you isolate networks and remove the ability for clients to roam the network and maintain their Layer 2 adjacencies and IP addresses.
    • By removing at least most of the control plane from the individual routers, you allow for more efficient routing tables. SDN also allows for IP mobility through the use of overlay tunnels. Essentially, a native packet gets encapsulated into an overlay packet to be forwarded to the destination router. Rather than routers needing to know about all possible destinations in the network, they leverage a centralized control plane. Routers inform the centralized control plane of the devices connected to them, and when a router needs to forward a packet to a destination it does not know about, it queries the control plane. The control plane process informs the source router which destination router holds the destination host, and the source router creates a tunnel (overlay) with the destination router to forward the packet across the underlay routed network. A simplified sketch of this flow follows this list.
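
Here is that very simplified sketch of the lookup-and-encapsulate flow, loosely in the spirit of LISP-style mapping. The class names, structures, and addresses are purely illustrative assumptions on my part and do not represent any vendor’s actual implementation.

```python
# Simplified model of the control/data plane split: edge routers register their
# attached hosts with a central control plane and query it when they need to
# forward to an unknown destination.

class ControlPlane:
    def __init__(self):
        self.host_to_router = {}

    def register(self, host_ip: str, router: str):
        # An edge router reports a host it has learned locally.
        self.host_to_router[host_ip] = router

    def resolve(self, host_ip: str):
        # Return the edge router that holds the destination host, if known.
        return self.host_to_router.get(host_ip)


def forward(control_plane: ControlPlane, source_router: str, dest_ip: str, payload: str):
    dest_router = control_plane.resolve(dest_ip)
    if dest_router is None:
        print(f"{source_router}: no mapping for {dest_ip}, dropping packet")
        return
    # Encapsulate the native packet in an overlay header addressed to the
    # destination edge router; the underlay only needs routes to the routers.
    overlay_packet = {"outer_dst": dest_router, "inner_dst": dest_ip, "payload": payload}
    print(f"{source_router} -> {dest_router} (overlay): {overlay_packet}")


cp = ControlPlane()
cp.register("10.1.1.10", "edge-router-A")
cp.register("10.2.2.20", "edge-router-B")

forward(cp, "edge-router-A", "10.2.2.20", "hello")
```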

Use Case
The major use case that I want to bring up for SDN is around stability and mobility. To me, a network design dream is to extend Layer 3 to the access layer to remove the complexity and potential stability issues around relying on large spanning tree domains. However, with traditional networking, if you extend Layer 3 to the access layer, you remove the ability for clients on separate switches to be on the same Layer 2 domain and subnet. There are legacy applications and systems that rely on this being possible. With software-defined networking, network engineers can deliver stable, Layer 3 underlay networks while still supporting this legacy mobility with overlay technologies such as LISP and VXLAN.

In Closing
Software-defined networking has really changed how we manage and operate networks over the years. One thing to keep in mind is that while SDN adds many benefits, there are potential downsides to consider. While SDN can remove the complexity of large spanning tree implementations, it adds complexities with overlay technologies. Yes, SDN products can handle that complexity for you, but if something goes wrong, you may need to understand how these overlay technologies operate to be able to troubleshoot quickly.

You Only Have So Much Energy

On the Art of Network Engineering podcast, we talk often about the importance of mental health and taking care of yourself, not only to then be able to take care of others, but to also enjoy a healthy and fulfilling life. One of the things I struggle with on a daily basis is what I refer to as overthinking. I have this constant fear that if I am not doing something “productive” (whatever that really means), I am wasting opportunities to better myself, and that will surely come back to bite me at some point. I feel like I constantly need to be looking for that “next big thing” to be working towards. While growth and self improvement are important, this adds a ton of stress to my daily life, and honestly, I have a hard time turning off this mindset. As with many concepts, balance is key.

I recently listened to the audiobook version of The Subtle Art of Not Giving a F*ck by Mark Manson. While the title certainly draws one’s attention, try not to let it fool you. This book is not a guide to simply not caring about anything so you can live a carefree life. Instead, it is more about understanding that you cannot possibly care about and spend time on every potential thought and action that comes your way. You need to understand what is important to you in life and focus your energy on thoughts and actions that support the important people and things. Here are some of my key takeaways from this book, as well as notes on what this book has made me think about.

Understand What is Important
As stated in the introduction of this post, I (and I am sure many others out there) am constantly flooded with thoughts about what actions and directions to take in life on a daily basis (and honestly much more frequently than that). While social media definitely has its benefits, I think a side effect is that it can leave us wanting more. We constantly see what others are doing and getting, and it can lead to us instantly comparing our lives with the lives of others. I think that has the potential to quickly lead us down a rabbit hole of despair. Understanding that it does not make sense, and really is not fathomable, to care about every single thing that comes our way, we need a method to filter out the stuff that should not consume our energy and add to our stress levels. One way to accomplish this that really makes sense to me is to first understand what is important to you in life. I feel that if you do not understand what is important to you in life, then you really do not have a good way of knowing where to direct your energy and what you should care about or let get to you. I think this could lead to caring about too much and eventually making yourself miserable. But if you understand what is important to you, that gives you the filter you need to let go of the stuff that does not align with your values and goals.

Be Less Wrong
This concept was really interesting and intriguing to me. My default mindset toward my career and learning in general has been that I need to get to the level of understanding where things just make sense and I feel as close to an expert as possible. If I do not get to that point, I feel like something is wrong and I cannot give myself true credit for growth until I get there. Wherever “there” actually is; sometimes I am not sure. I think I set these lofty, sometimes not fully defined goals for myself. And maybe that is part of the problem; I probably need to be better about setting clear expectations for myself. Anyway, back to the concept of being less wrong. My interpretation of this is that rather than striving for absolute perfection and accepting nothing less (similar to my default mindset mentioned earlier), we accept growth as a continuous process of getting better at different aspects of life and becoming less wrong. To me, it is allowing yourself to celebrate small wins rather than only being satisfied with achieving a massive goal that might not even be feasible. This really spoke to me.

How This Book Applies to Me
I will be honest, I was drawn to the title of this book. That being said, I am glad I spent time on it, as I really found it helpful. I think I put too much pressure on myself for constant growth and to always be moving in the right direction. A problem with that is I do not always take the time and effort to understand what it actually is that I am trying to accomplish and what value it will really bring. While I feel like I have a decent idea about what is important to me in life, I find that I have this constant desire for “more” that often does not align with my core values. I am constantly searching for more happiness when it is often already right in front of me, staring me in the face. My good buddy Andy Lapteff brought up the concept of gratitude lists, which I have been writing in as close to daily as possible to help keep things in perspective. In closing, I will state that this book was helpful to me and I have already started applying some of these concepts when the dreaded overthinking strikes. I only have so much energy; I need to apply it efficiently.

Cloud Essentials+ Journey – Content Delivery Networks

One of the major benefits of leveraging cloud services is the ability to spin up resources quickly. Depending on what consumers are deploying in the cloud, they, or perhaps their customers, may also want fast access and good performance to those services and data. For instance, let’s say you run a video streaming platform. You would want to make sure that your customers experience fast access to that video content with low latency. You would not necessarily want all of that content centrally located for consumption. Performance might be satisfactory for customers geographically close to the centralized cloud data center, but could be unacceptable for customers further away. This problem is solved with a concept called the content delivery network.

A content delivery network is a system of distributed compute and storage environments. The goal behind content delivery networks is to get data hosted as close to the consumers as possible so that they have a good experience when they access the content. At a high level, content is created originally on a system, then replicated and synchronized to the endpoints participating in the content delivery network, to be consumed by customers. Content delivery networks have the following characteristics/requirements:

  • Content synchronization mechanism – As stated above, the content needs to be as close to the consumers as possible. A CDN must have a defined method for replicating and synchronizing that data amongst the different cache locations in the network. You as a content owner or distributor want to make sure that a consumer can get to the same content no matter which system they end up accessing within the content delivery network. The look and feel must be the same.
  • Global cache locations – To provide the most value, a CDN must have compute and storage environments in as many strategic locations as possible. In my opinion, content creators and distributors would want to leverage a CDN that can give them the furthest reach to potential consumers.
  • CDN-enabled namespace – How do CDNs steer traffic to make sure that consumers are requesting content from the closest possible source? This is accomplished with the Domain Name System (DNS). A CDN-enabled namespace provides support to resolve hostnames to different IP addresses depending on where the DNS queries are geographically sourced from. For instance, if I am a consumer trying to stream a cool new TV show from a service and I am in the northeastern part of the United States, I hope that when my device does a DNS query for the streaming service, it will resolve to a CDN location relatively close to where I am physically located.
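
To make the CDN-enabled namespace idea a bit more concrete, here is a toy sketch of geography-aware name resolution. The regions, hostname, and addresses are made-up examples (the IPs come from documentation ranges), and real CDNs use far more sophisticated geolocation, latency measurement, and health checking than a simple lookup table.

```python
# Toy geo-steering: answer the same hostname with a different edge address
# depending on where the client's DNS query comes from.

EDGE_LOCATIONS = {
    "us-northeast": "198.51.100.10",
    "us-west": "198.51.100.20",
    "europe": "203.0.113.30",
}
DEFAULT_EDGE = "198.51.100.10"


def resolve_cdn_hostname(hostname: str, client_region: str) -> str:
    """Return the edge IP 'closest' to the client's region for the CDN hostname."""
    return EDGE_LOCATIONS.get(client_region, DEFAULT_EDGE)


# A viewer in the northeastern US gets steered to a nearby cache location.
print(resolve_cdn_hostname("video.example-streaming.com", "us-northeast"))
# A viewer in Europe gets a European edge for the same hostname.
print(resolve_cdn_hostname("video.example-streaming.com", "europe"))
```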

To me, content delivery networks, which are essentially distributed compute and storage environments, are a huge value proposition from cloud service providers. CDNs enable organizations to deliver content quickly and efficiently to consumers without having to build out their own, potentially global, presence. On the other side, consumers are able to experience good performance by having content delivered from close proximity.

Cloud Essentials+ Journey – Shared Responsibility

Although it goes without saying, especially these days: when it comes to information technology infrastructure and data, security is paramount. That goes for on-premises infrastructure and data as well as cloud-hosted. I want to dive more into the cloud side of this thought. People have been running workloads, applications, and services in private, on-premises data centers for years and years, so it seems obvious that we have certain security responsibilities and concerns there. In the cloud, it might not always be clear, and the fine print needs to be read and understood. I think it would be easy for a consumer to think, “well, this application is delivered ‘as a service’, so I don’t really need to worry about the security of my data; it’s all just taken care of for me.” That being stated, there is one concept I have found that breaks down this potentially nebulous “security in the cloud” idea well. That is the shared responsibility model.

At a high level, the shared responsibility model helps you understand where the responsibilities lie between the cloud service provider and the consumer in a cloud deployment. The phrase that I have found that seems to explain the shared responsibility model very well is: when it comes to security, the cloud service provider is responsible for security of the cloud and the consumer is responsible for security in the cloud. I cannot quite remember if something along those lines is a direct quote from someone or some organization, but I like how it is laid out. Now, let’s dig into that statement a bit deeper. My interpretation of this is that cloud service providers are responsible for securing the services and the underlying infrastructure, while the consumers are responsible for securing the data and potentially the applications that run in the cloud. I say potentially when it comes to applications, because I think it depends upon the cloud service model in question. If it is a Software as a Service application, then the CSP would be responsible for security of the application. However, in the scenario of Infrastructure as a Service, the consumer would be responsible for application and operating system security. Ah yes, my favorite statement when it comes to information technology: it depends. In any event, the consumer really is responsible for ensuring data security and compliance in the cloud. Something we seem to hear often in the news is that researchers continue to find unauthenticated, unsecured cloud-hosted data storage on the internet. If we are following the shared responsibility model, this would be the fault of the consumer, rather than the cloud service provider.
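
To summarize my interpretation, here is a small sketch of how the responsibility for each layer might shift across the service models. This table is a generalization for illustration only; the exact split varies by provider and by service, so always check the CSP’s own documentation.

```python
# Generalized shared responsibility lookup across IaaS, PaaS, and SaaS.
# Not any CSP's official matrix; the layers and splits are illustrative.

SHARED_RESPONSIBILITY = {
    #                     IaaS         PaaS         SaaS
    "physical/facility": ("provider", "provider", "provider"),
    "virtualization":    ("provider", "provider", "provider"),
    "operating system":  ("consumer", "provider", "provider"),
    "application":       ("consumer", "consumer", "provider"),
    "data and access":   ("consumer", "consumer", "consumer"),
}

MODEL_INDEX = {"IaaS": 0, "PaaS": 1, "SaaS": 2}


def who_secures(layer: str, model: str) -> str:
    """Return which party is generally responsible for securing a given layer."""
    return SHARED_RESPONSIBILITY[layer][MODEL_INDEX[model]]


print(who_secures("operating system", "IaaS"))  # consumer
print(who_secures("application", "SaaS"))       # provider
print(who_secures("data and access", "SaaS"))   # consumer, in every model
```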

I think that when it comes to cloud computing, especially relating to security, it is important to take the time to fully understand what you are doing and how you are implementing different services. While what you are consuming is being delivered “as a service”, you should still understand that you may have extra responsibilities and actions to take to properly secure your applications and data. When in doubt, take a look at managed/professional services as an option to help you out. To me, it isn’t always feasible for a company to have experts in every facet of technology, including cloud services. There is no shame in asking for help.