Cloud Essentials+ Journey – Shared Responsibility

It almost goes without saying, especially these days: when it comes to information technology infrastructure and data, security is paramount. That goes for on-premises infrastructure and data as well as cloud-hosted. I want to dive more into the cloud side of this thought. People have been running workloads, applications, and services in private, on-premises data centers for years and years, so it seems obvious that we have certain security responsibilities and concerns there. In the cloud, it might not always be clear, and the fine print needs to be read and understood. I think it would be easy for a consumer to think, “well, this application is delivered ‘as a service’, so I don’t really need to worry about the security of my data; it’s all just taken care of for me”. That being stated, there is one concept I have found that breaks down this potentially nebulous cloud security question well: the shared responsibility model.

At a high level, the shared responsibility model helps you understand where the responsibilities lie between the cloud service provider and the consumer in a cloud deployment. The phrase I have found that explains the shared responsibility model best is: when it comes to security, the cloud service provider is responsible for security of the cloud, and the consumer is responsible for security in the cloud. I believe AWS uses phrasing along those lines in its own shared responsibility documentation, and I like how it is laid out. Now, let’s dig into that statement a bit deeper. My interpretation is that cloud service providers are responsible for securing the services and the underlying infrastructure, while consumers are responsible for securing the data and potentially the applications that run in the cloud. I say potentially when it comes to applications because it depends upon the cloud service model in question. If it is a Software as a Service application, then the CSP is responsible for security of the application. However, in the scenario of Infrastructure as a Service, the consumer is responsible for application and operating system security. Ah yes, my favorite statement when it comes to information technology: it depends. In any event, the consumer really is responsible for ensuring data security and compliance in the cloud. Something we seem to hear often in the news is that researchers continue to find unauthenticated, unsecured cloud-hosted data storage on the internet. Under the shared responsibility model, that is the fault of the consumer, rather than the cloud service provider.

I think that when it comes to cloud computing, especially relating to security, it is important to take the time to fully understand what you are doing and how you are implementing different services. While what you are consuming is being delivered “as a service”, you should still understand that you may have extra responsibilities and actions to take to properly secure your applications and data. When in doubt, take a look at managed/professional services as an option to help you out. To me, it isn’t always feasible for a company to have experts in every facet of technology, including cloud services. There is no shame in asking for help.

Cloud Essentials+ Journey – Cloud Deployment Models

Most recently in this series, we took a look at the different available cloud service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). At a high level, cloud service models describe what is being delivered to consumers. In this post, we’ll explore cloud deployment models. In my opinion, in contrast to cloud service models, cloud deployment models describe more of how the different cloud services can be deployed. Before we get into the individual cloud deployment models, I think it is important to revisit the characteristics of cloud computing, as defined by NIST. Remember, the cloud isn’t just “somebody else’s data center”. For brevity, I’ll just list them here, but please visit the previous post on cloud characteristics.

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Now that we have had our cloud characteristics refresher, let’s jump into the different available cloud deployment models.

Private Cloud
As I learned from Cloud Bart, private cloud is all about exclusivity. A misconception I had before I started this learning journey was that private cloud meant it had to be a data center that you, as the consumer, owned and operated. While that can be the case, it does not have to be that way. As alluded to in the first sentence of this section, private cloud is less about location and more about the exclusivity of services. At a high level, private cloud essentially means that you are not sharing resources and storage with other customers. You also maintain control of your data. This also means that private cloud will likely be the highest cost deployment model solution for you as the consumer. The private cloud model makes sense for organizations that need to abide by certain regulations that require that level of exclusivity.

Public Cloud
Public cloud is very similar to private cloud, just without the exclusivity factor. With the public cloud deployment model, consumers receive hosted services from a cloud service provider. The resources the consumer leverages in this model are pooled by the CSP and distributed as necessary among the different, unaffiliated consumers of the CSP’s offerings. Consumers leveraging public cloud also have less control over where their data is stored. In the cloud assessment phase, consumers need to understand their requirements to make sure that their data can be stored in the public cloud model. Public cloud can typically be thought of as the least expensive cloud deployment model.

Community Cloud
It is probably not the best explanation, but I think of the community cloud deployment model as somewhat of a hybrid between private and public cloud. The community cloud model is multiple organizations with similar requirements or regulations going in together on a cloud solution. The cloud deployment and resources are exclusive to the organizations in the community cloud. In my opinion the biggest benefit to community cloud is that the participating organizations share the cost of the cloud deployment.

Hybrid Cloud
Hybrid cloud seems to be a common and popular scenario for different organizations. The concept of hybrid cloud is that a consumer leverages services in at least two of the different cloud deployment models. For instance, an organization could run a government regulated application in private cloud, and a public facing website in the public cloud. Hybrid cloud is sometimes referred to as the “best of both worlds” model.

A key takeaway that I have gotten from my Cloud Essentials+ studies is the contrast between cloud service models and cloud deployment models. From my interpretation, cloud service models deal with what is being delivered from a cloud service provider, and cloud deployment models focus on how those services are being delivered or deployed. The defined cloud deployment models are private, public, community, and hybrid.

Cloud Essentials+ Journey – Cloud Service Models

So far in this series, we have covered a definition of the cloud and gone through the various cloud characteristics. In this post, we will get into the different ways cloud services can be delivered or consumed. These are known as the different cloud service models. Cloud service models describe what specifically is being delivered to the consumers, as well as which components are the responsibility of the cloud service provider and which components are the responsibility of the consumer. There are three main cloud service models out there with which to become familiar:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

One key takeaway here is, as these service models are listed above, from top to bottom, the consumer has the least amount of responsibility with SaaS, and the most amount of responsibility with IaaS. Let’s dig into each of the above cloud service models.
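That responsibility gradient can be made concrete with a small sketch. This is my own simplification of the usual layer diagram (the layer names and boundary indices below are illustrative, not an official CompTIA breakdown):

```python
# Illustrative stack, bottom to top; real offerings can blur these lines.
STACK = ["facilities", "networking", "storage", "compute",
         "virtualization", "operating system", "platform/runtime",
         "application", "data"]

# Index into STACK at which consumer responsibility begins (my assumption).
CONSUMER_BOUNDARY = {"SaaS": 8, "PaaS": 7, "IaaS": 5}

def consumer_layers(model):
    """Return the layers the consumer manages under a given service model."""
    return STACK[CONSUMER_BOUNDARY[model]:]
```

Under this split, a SaaS consumer really only owns their data, while an IaaS consumer owns everything from the operating system up, which matches the least-to-most responsibility ordering above.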

Software as a Service (SaaS)
With the SaaS cloud service model, consumers receive a turnkey software solution. The software is hosted by the cloud service provider, and the consumer merely has to connect to it with a client device and they are off to the races. The CSP manages software updates and takes care of all of the underlying infrastructure (including compute, storage, networking, and facilities). As with many other things, there are trade-offs with SaaS. While you as the consumer have no responsibility for the underlying infrastructure, you also have no control over what that infrastructure is or how it is maintained. That being said, in my opinion, most of the time that probably does not matter much. Examples of SaaS platforms include Google Workspace, Microsoft Office 365, and Cisco Webex. The main target market for Software as a Service applications is end users.

Platform as a Service (PaaS)
Platform as a Service moves the consumer responsibility and control down a layer from Software as a Service. With PaaS, the consumer is provided with a virtual machine (VM), preloaded with an operating system and certain platform software. The consumer can interact with the platform software on top of that VM’s operating system. While the consumer is responsible for running the platform software, they do not own responsibility for the operating system, the virtualization layer, or any of the underlying infrastructure. The target audience for PaaS is software developers and database administrators. The appeal of PaaS is that developers and database admins can focus on their core competencies without having to build and maintain the underlying VM and infrastructure. Examples of Platform as a Service offerings include Google App Engine, Heroku, and AWS Elastic Beanstalk.
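To make the PaaS split tangible, the artifact a PaaS consumer typically deploys is just application code; everything beneath it belongs to the CSP. A minimal, hypothetical sketch using Python's standard WSGI interface, the sort of thing a platform like App Engine or Heroku would host:

```python
# A minimal WSGI application -- roughly the deployable unit a PaaS consumer
# owns. The runtime, OS, virtualization, and hardware are the CSP's problem.
def application(environ, start_response):
    body = b"Hello from a PaaS-hosted app!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

In practice the platform supplies the web server and process management; the consumer is only on the hook for the application logic (and, per the shared responsibility model, the data it handles).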

Infrastructure as a Service (IaaS)
Infrastructure as a Service is the closest thing to running virtual machines in your own data center. In fact, if you hear the term “lift and shift” in regard to migrating on-premises VM workloads to the cloud, IaaS is what would be in play from the cloud service provider. With IaaS, the CSP essentially delivers a blank virtual machine to the consumer. All of the underlying infrastructure, up through the virtualization layer, is provided and supported by the CSP. The consumer is responsible for loading and maintaining the operating system and any software on top of it. An important thing to remember with IaaS (and any other cloud offering in which the consumer is responsible for software) is licensing. As a consumer, you are responsible not only for maintaining the OS and other software, but also for ensuring all software is properly licensed. While you have the most control with IaaS as a consumer, you also have the most responsibility of all the cloud service models. Examples of Infrastructure as a Service offerings include Microsoft Azure, Amazon EC2, Rackspace, and DigitalOcean. The target audience for IaaS is IT/systems administrators.

Cloud service providers definitely give us options to choose from when it comes to cloud services. It is important during the cloud assessment and cloud design phases to determine which model(s) make the most sense for you and your organization. Keep in mind that you might not leverage just one service model. You could run SaaS for your email and collaboration applications, PaaS for your development team, and IaaS for the apps and servers where it makes sense. Hopefully these interpretations made cloud service offerings less cloudy for you! (terrible pun, yes I know)

Cloud Essentials+ Journey – Cloud Characteristics

As I stated in the introductory post for this series, I had thought the cloud was really just someone else’s data center, and that’s it. So, if I’m hosting someone’s applications in my data center, I’m the cloud, right? Well…maybe. While hosting data and applications is a big piece of the puzzle, there is much more to the cloud than that. I am not a big fan of nebulous concepts and topics. I typically like structure, detail, and boundaries. One thing I have really appreciated about my cloud studies so far is that we can put some high-level characteristic definitions to cloud computing.

So, what is the cloud? At a high level, it is a way to deliver applications and services to consumers in an on-demand, scalable, and measurable way. The National Institute of Standards and Technology (NIST) is a US government entity with the following mission: “To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” NIST developed and documented a list of characteristics to define cloud computing services. Here, I will list the characteristics, giving my interpretation of them inline. For a service to be labeled a cloud service by NIST definition, it should have the following characteristics.

  • On-demand self-service
    • Consumers of the service should be able to leverage it and spin up (and down) resources on their own without the need of IT or cloud service provider assistance. An example is having a self-service portal that consumers can leverage to get access to products and services.
  • Broad network access
    • Cloud services should be accessible from many different client types and operating systems, across many different network types (private connectivity, internet, etc.).
  • Resource pooling
    • Compute and storage resources are aggregated in bulk and divided and assigned dynamically, as needed to consumers. Specifically where or how the resources are allocated from the pool is abstracted from the consumer. The consumer does not need to know where the resources are coming from, they just need to know that they have access to the resources that they need, when they need them.
  • Rapid elasticity
    • Compute and storage resources should be able to be scaled up or down, either on demand or dynamically. This is a huge benefit and a key differentiator between cloud computing and traditional on-premises computing. In the traditional model, resources typically had to be purchased outright, which made scaling more difficult. To ensure you had enough resources for peak times, you had to scale your purchases to match peak demand; outside of peak times, those resources sat unused and the investment was less effective. With cloud, you can scale up and down much more efficiently. During peak times, you can get access to more resources (you just have to pay more), and in non-peak times you can scale down to conserve resource usage and cost.
  • Measured service
    • A big value proposition of cloud services is that they are typically pay-as-you-go. There are no upfront capital costs to purchase resources; you only pay for the resources you consume. For a service to be deemed a cloud service, there must be a way to meter, document, and bill for the resources consumed.
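The measured-service idea can be sketched as a toy pay-as-you-go bill: meter what was consumed, multiply by a rate, and charge nothing for what wasn't used. The resource names and rates below are made up for illustration; real provider pricing is far more granular:

```python
# Toy pay-as-you-go billing: only consumed resources generate charges.
# Rates are illustrative, not any real provider's pricing.
RATES = {"vm_hours": 0.05, "gb_storage_hours": 0.0001, "gb_egress": 0.09}

def monthly_bill(usage):
    """Sum metered usage * rate for each resource actually consumed."""
    return round(sum(RATES[resource] * qty for resource, qty in usage.items()), 2)
```

For example, one VM running all month (720 hours) plus 10 GB of egress would bill 720 * 0.05 + 10 * 0.09 = 36.90 under these made-up rates, and a scaled-down month simply meters, and bills, less.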

I feel that these documented characteristics give me a much better understanding of what makes the cloud, the cloud. With these definitions in mind, I feel more confident having discussions about cloud computing, knowing that it is much more than just “someone else’s data center”.

Cloud Essentials+ Journey – The Beginning

Much of my career has been focused on network administration and network engineering. I won’t say that I have ever specialized in any one given aspect of network engineering, but I will say that I have gotten the most enjoyment out of routing and switching. I think the reasoning behind that is that many high-level concepts of routing and switching have seemed fairly straightforward to me. I am not a big fan of being confused (even though I typically have a confused look on my face) and many of these concepts and technologies, while some difficult to grasp and learn, really seem to stick once they are there. That’s probably what I mean by saying “fairly straightforward”. I am by no means saying “networking is easy”. I typically need to take a fair amount of time and effort to learn and retain new concepts. Earlier this year, I took on a systems architect position and shortly after that obtained the CCNP Enterprise certification. I then found myself asking, “alright, where do I go from here?”.

There were a couple of factors that helped me answer that question. First was stepping into more of an architecture role. It had been a goal of mine for a while to become more strategy and design focused as an IT professional. I wanted to provide value by trying to help bridge the gap between business and technology. Sometimes that will be directly related to network infrastructure, and sometimes maybe not. Also, in October of 2021, we interviewed Eyvonne Sharp on Episode 68 of The Art of Network Engineering podcast. Eyvonne gave some advice that has stuck with me ever since. I’m paraphrasing here, but the idea is that you get to a point in your career in which to continue to grow, you can either “go deep or go wide”. The premise behind this is that it might make sense to either highly specialize with a narrow focus or to spend more time generalizing on adjacent topics to what you have been used to in the past. While I’m passionate about network engineering, I don’t see myself specializing heavily in any one given technology (not at this point in my career at least). Given where my career has taken me, I’ve decided to spend some time going off my familiar network engineering path. Don’t worry, I’m still a network infrastructure lover through and through, I just feel that given my role and what I want to be doing on a day to day basis, I can provide more value if I start at least getting familiar with IT concepts that are not necessarily focused directly on the network infrastructure itself.

Given its enormous presence over the last number of years, I’ve decided to start getting familiar with cloud concepts. Now, I want to be clear. I’m not looking to get deep in the weeds and jump in feet first with a particular vendor, pushing all of the cool buttons. I had found that I could not really even explain what “the cloud” is or keep up with others in high-level conversations about cloud computing. I just thought that the cloud was purely “someone else’s data center”. Now, it can be that, but it is also so much more. I also had no idea what RPO and RTO meant (I’ll be covering some of these topics in this series). I felt that needed to change. I wanted to find a learning path that focused more on high-level concepts, both technical and business related. I wanted to start building a foundation. To assist me down this path, I landed on preparing for the CompTIA Cloud Essentials+ certification. So far, I have found that preparing for this exam has been a decent entry point into learning about high-level cloud technology and business principles. The goal of these posts is twofold.

  1. I have found that either writing or speaking about topics I’m learning help solidify knowledge for myself.
  2. Hopefully others will find these posts helpful. We’re all in the “thing” together. It’s all about the journey.

To round this post out, I want to share the materials that I have been using to cover the curriculum for this exam.

As you may have seen me post on Twitter many times, I love to #LearnInPublic. Join me as I start my journey of banging my head against the cloud.

Bridging the Gap – IP Helpers

In a recent post, I wrote about a helpful feature on switches named DHCP snooping. Writing that article got me thinking, what else is helpful in regards to DHCP? Ah yes, IP helpers. In that post, I made mention of a concept called a DHCP relay agent, but I did not go into detail on that specific aspect of the DHCP process. In this article, I want to dive into what an IP helper is, why we may need one (or many), and take a look at some sample configuration.

*Note: I use the terms DHCP relay agent, IP helper, and DHCP helper interchangeably.

First off, as we dig into this post, I am going to cheat. Before we get into the definition of IP helpers and the need for them, I want to recap the individual steps of the DHCP mechanism, also known as the DORA process (which stands for Discover, Offer, Request, Acknowledgement). I’m cheating because I’m pulling this section directly from my article on DHCP snooping.

  • Discover
    • source: client (0.0.0.0 because it does not have an IP yet) – UDP port 68
    • destination: Broadcast – 255.255.255.255 (UDP port 67) – to be received by an available local DHCP server or DHCP relay agent.
    • general message: “Hi! Would you please assign me an IP address so that I can communicate on the network? Watching cat videos on YouTube is important to me!”
  • Offer
    • source: DHCP server – UDP port 67
    • destination: Either broadcast – 255.255.255.255 or the unicast address of the DHCP relay agent (router) if the client is on a different subnet from the DHCP server. Either way, the destination port is UDP 68.
    • general message: “Welcome! I do have an available IP for you! How does 192.168.100.50 sound?”
    • note: Although the destination is broadcast, the client’s source MAC is in the actual DHCP message so that when the client processes the DHCP Offer broadcast message, it knows that it is for that specific client.
  • Request
    • source: client (0.0.0.0 because it does not have an IP yet) – UDP port 68
    • destination: Broadcast – 255.255.255.255 (UDP port 67) – to be received by an available local DHCP server or DHCP relay agent.
    • general message: “Yes, I would love to have 192.168.100.50. Would you please allocate it to me?”
  • Ack (acknowledgement)
    • source: DHCP server
    • destination: Either broadcast – 255.255.255.255 or the unicast address of the DHCP relay agent (router) if the client is on a different subnet from the DHCP server. Either way, the destination port is UDP 68.
    • general message: “Sure thing, buddy! 192.168.100.50 is all yours (for the specified lease time, that is). Enjoy those internet kittens!”
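The four DORA steps above can be sketched as a toy walkthrough in Python (a pure simulation for illustration; real DHCP packets carry transaction IDs, options, MAC addresses, and much more):

```python
SERVER_IP = "192.168.100.1"  # hypothetical local DHCP server

def dora_exchange(offered_ip="192.168.100.50"):
    """Return the DORA messages as (src_ip, src_port, dst_ip, dst_port, info)."""
    return [
        # Discover: client has no address yet, so it broadcasts from 0.0.0.0.
        ("0.0.0.0", 68, "255.255.255.255", 67, "DISCOVER"),
        # Offer: server answers toward UDP 68 (broadcast here; unicast to a
        # relay agent is also possible when the client is on another subnet).
        (SERVER_IP, 67, "255.255.255.255", 68, "OFFER " + offered_ip),
        # Request: client still has no bound address, so it broadcasts again.
        ("0.0.0.0", 68, "255.255.255.255", 67, "REQUEST " + offered_ip),
        # Ack: server confirms the lease for the specified lease time.
        (SERVER_IP, 67, "255.255.255.255", 68, "ACK " + offered_ip),
    ]
```

Notice the port symmetry: the client always sources from UDP 68 toward 67, and the server sources from UDP 67 toward 68.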

Alright, we are refreshed on the DHCP DORA process. So now what? What is this IP helper/DHCP relay agent thing, and why do we need it? Let’s start with the DHCP Discover message that begins the whole process of a client attempting to dynamically acquire an IP address. This is an interesting conundrum, a chicken-or-the-egg scenario, if you will. A client needs an IP address to communicate on the network. It needs to send a message to signal that it needs an IP address, but it does not have one yet, so how will the client find a DHCP server, and how will the DHCP server talk back to the client? That is where the broadcast mechanism comes into play. As you can see in the above explanation of the DHCP Discover message, the client just uses a source IP of 0.0.0.0. Because the Discover message leverages the broadcast mechanism, the destination is 255.255.255.255. This essentially means, “Hey, anyone on this local subnet, please process my DHCP Discover message, and hopefully one of you is a DHCP server!”.

This is all well and good if there actually is a DHCP server on the local subnet with the client. In my experience, that is not typically the case. DHCP servers tend to be either centralized in data centers or distributed to small compute clusters closer to the clients, but not necessarily directly in each of the client subnets. Because DHCP relies on broadcast messages, and routers do not forward broadcasts, how will a client ever receive an IP address if there isn’t a DHCP server local to the VLAN/network segment in which the client resides? It sounds like we need a “helper” (bad attempt at a joke, but thanks for hanging in there!). We need a way for these DHCP messages to be forwarded between the client and the DHCP server, and that is where IP helpers enter the picture. At a high level, an IP helper is a feature that runs on a router interface to bridge the gap, as the article title states, between the client and the DHCP server that lives on another segment of the network. When configured, the router local to the client will process that DHCP Discover message and forward it to the specified DHCP server(s) as a unicast message, with the local router’s IP address as the source of the message (the same goes for the DHCP Request message from the client). Conversely, when the DHCP server sends the Offer and Ack messages, they are sent to the unicast IP of the router running the IP helper process, to then make it on to the client.
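Here is a rough sketch of that relay behavior, using hypothetical addresses and messages modeled as plain dictionaries. One detail worth knowing: the relay also stamps its receiving interface's IP into the DHCP packet's giaddr (gateway address) field, which is how the server knows which scope to allocate from:

```python
# Toy DHCP relay: turn a client's broadcast into a unicast to the helper
# address, sourced from the relay's own interface IP. Hypothetical values.
HELPER_ADDRESS = "172.16.1.1"   # configured DHCP server
RELAY_IP = "10.1.1.1"           # router SVI in the client's subnet

def relay_to_server(msg):
    """Rewrite a broadcast Discover/Request for unicast forwarding."""
    assert msg["dst_ip"] == "255.255.255.255", "relay only handles broadcasts"
    forwarded = dict(msg)
    forwarded["src_ip"] = RELAY_IP        # server replies back to the relay
    forwarded["dst_ip"] = HELPER_ADDRESS  # routed unicast, crosses subnets
    forwarded["giaddr"] = RELAY_IP        # tells the server which scope to use
    return forwarded
```

Because the forwarded packet is now ordinary unicast traffic, it can be routed across as many hops as needed to reach the DHCP server.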

Let’s take a quick glance at what this looks like with some example configuration (with Cisco IOS/IOS-XE code). Similar to DHCP snooping, there is not much that we have to configure. In fact, it is even less. In this example, we will show that the local router for the client is a VLAN interface (SVI) on a multi-layer switch.

interface vlan 10
  ip address 10.1.1.1 255.255.255.0
  ip helper-address 172.16.1.1

In the above configuration example, we used the ip helper-address command to configure the SVI to send DHCP messages from clients in the 10.1.1.0/24 network to a DHCP server with IP address 172.16.1.1.
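One thing worth noting: you can configure multiple ip helper-address statements on the same interface, and the router will forward the client’s DHCP broadcasts to each configured address. A sketch with a hypothetical second server added for redundancy:

```
interface vlan 10
  ip address 10.1.1.1 255.255.255.0
  ip helper-address 172.16.1.1
  ip helper-address 172.16.1.2
```

In this case, both DHCP servers receive the forwarded Discover, and the client simply takes the first valid offer that comes back.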

In Closing
DHCP is a vital part of IPv4 networks, so I think it’s important to understand at least the high-level concepts, especially the DORA process. IP helpers are an important part of making sure clients receive IP addresses when a DHCP server is located on another network segment. The reason lies in the nature of how clients must request IP addresses. Because they have no IP address, they must send a broadcast message to all devices on the VLAN. Broadcast messages are not forwarded by routers, so the IP helper function forwards those DHCP messages to the configured destination DHCP server. IP helpers can also be used to send these DHCP messages to Cisco’s Identity Services Engine (ISE) to assist in profiling devices. Thank you for reading, and definitely let me know if you have any feedback, especially if I got something wrong.

Book Reaction: ‘The Power of Regret’

I think we’ve all heard these two words uttered by different people we’ve come across, and maybe even said them ourselves a few times when asked about the past: “no regrets”. Why does this seem to be said so often? I think it’s human nature to try not to dwell on the past when something doesn’t go our way, when we do something we should not have done, or when we were idle and did not act when we should have. What happens when we think about those events? We feel that regret, and the initial feeling can be rough. We don’t want to intentionally feel bad all the time, so a common response is to live the “no regrets” philosophy and look toward the future with a clean slate instead of living in the past. That being stated, is regret really all that bad? I recently read (the audiobook version, hence the italicized font) The Power of Regret: How Looking Backward Moves Us Forward by Daniel H. Pink, and according to the author and the research conducted, the answer is a resounding “no”.

The common response brought up in the introduction of this post definitely makes sense on the surface. Why continue to think through tough situations from the past and continue to beat yourself up? Just accept things for what they are and move on, right? Well, this book suggests that isn’t always the best way to go, and that we should dig deeper into regret. Rather than just pushing regret to the side and setting sights on the present and a better future, the author and the corresponding research suggest an alternative: embrace regret as a learning tool. Take the time to understand what went wrong and use that understanding to either make it right or know with confidence how to handle similar situations in the future. Now, as with anything, balance is key. If all you do is continue to run through regret in your mind and feel the pain of it, you may not get the intended positive effects. When used with good intentions and the goal of growth, I definitely see how you can make regret work in your favor.

I definitely recommend this book to get another perspective than what seems to be the common one on regret. I like the phrase another perspective here. That is what I really enjoy about books like this one. Books like this that shine some light on counter-arguments that may not be as common can be very enlightening and cause you to audit and analyze certain aspects of your own life. I’m all about growth in the different facets of my life, and this book gave me some useful takeaways and tools to continue to grow.

DHCP Snooping – The Helpful Nosy Neighbor

It can turn into a nightmare scenario in a heartbeat. Your phone starts erupting with calls and incidents that people in a specific area simply “cannot connect to the network”. After you dust yourself off from coming to terms that you first have to determine what cannot connect actually means, you frantically get to work. You see no visible alarms, and your switches, routers, and DHCP servers all seem to be in order. You get a hold of the first customer, verify they have physical connectivity to the network, then walk them through the process to help you determine if they have an IP address. They do indeed have an IP address, but it is nowhere near valid for the segment of the network they are on currently. After validating from a few other folks that they too seem to have addresses in the same bogus subnet, you have your correlation. There is most likely a rogue DHCP server connected to that local network segment and when new workstations are coming onto the network or renewing their leases with DHCP, this server is receiving those local broadcasts of the DHCP Discover messages and responding by handing out a bogus IP address.

As you can see, this can be a serious problem, but never fear: a solution exists, and its name is DHCP snooping. DHCP snooping can stop these issues from happening altogether. Just to note, a scenario like the one above does not have to be malicious in nature. There could be a new system being installed that has DHCP server capabilities, and if it is misconfigured or the DHCP services are left enabled by default, it could absolutely interfere as a rogue DHCP server if DHCP snooping is not enabled on the network switches.

How does DHCP snooping save the day? At a very high level, when configured, DHCP snooping blocks DHCP offer messages as they enter untrusted switchports; ports that lead toward legitimate DHCP servers or relays are explicitly configured as trusted, so their offers are allowed through. The result is that if a rogue DHCP server gets plugged in and starts trying to hand out IP addresses to unsuspecting clients, those DHCP offers get knocked down at the point of entry: right at the port where the rogue server connects. Since we’re talking about DHCP offer messages, let’s do a breakdown of the different DHCP messages and why blocking offers from rogue servers entering the network is important. There are four DHCP messages to consider in the process of clients requesting and receiving IP addresses. This is the DORA process, as some of us are taught.

  • Discover
    • source: client (0.0.0.0 because it does not have an IP yet) – UDP port 68
    • destination: Broadcast – 255.255.255.255 (UDP port 67) – to be received by an available local DHCP server or DHCP relay agent.
    • general message: “Hi! Would you please assign me an IP address so that I can communicate on the network? Watching cat videos on YouTube is important to me!”
  • Offer
    • source: DHCP server – UDP port 67
    • destination: Either broadcast – 255.255.255.255 (UDP port 68) or, if the client is on a different subnet from the DHCP server, the unicast address of the DHCP relay agent (router) on UDP port 67; the relay agent then forwards the offer on to the client’s subnet on UDP port 68.
    • general message: “Welcome! I do have an available IP for you! How does 192.168.100.50 sound?”
    • note: Although the destination is broadcast, the client’s MAC address is carried in the DHCP message itself (the chaddr field), so when the client processes the broadcast DHCP Offer, it knows the offer is meant for it.
  • Request
    • source: client (0.0.0.0 because it does not have an IP yet) – UDP port 68
    • destination: Broadcast – 255.255.255.255 (UDP port 67) – to be received by an available local DHCP server or DHCP relay agent.
    • general message: “Yes, I would love to have 192.168.100.50. Would you please allocate it to me?”
  • Ack (acknowledgement)
    • source: DHCP server
    • destination: Either broadcast – 255.255.255.255 (UDP port 68) or, if the client is on a different subnet from the DHCP server, the unicast address of the DHCP relay agent (router) on UDP port 67; the relay agent then forwards the acknowledgement on to the client’s subnet on UDP port 68.
    • general message: “Sure thing, buddy! 192.168.100.50 is all yours (for the specified lease time, that is). Enjoy those internet kittens!”
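To make the four-step handshake concrete, here is a minimal Python sketch of the DORA exchange. This is purely an illustrative simulation of the message flow; the class and field names (TinyDhcpServer, chaddr, yiaddr, and the message dictionaries) are my own stand-ins, not a real DHCP library or packet code.

```python
# Illustrative simulation of the DORA message sequence (not real packets).

class TinyDhcpServer:
    """Hands out addresses from a small pool (simulation only)."""
    def __init__(self, pool):
        self.pool = list(pool)
        self.leases = {}  # client MAC (chaddr) -> offered/assigned IP

    def handle(self, msg):
        if msg["type"] == "DISCOVER":
            # Re-offer an existing lease, or take the next free address.
            ip = self.leases.get(msg["chaddr"]) or self.pool.pop(0)
            self.leases[msg["chaddr"]] = ip
            return {"type": "OFFER", "chaddr": msg["chaddr"], "yiaddr": ip}
        if msg["type"] == "REQUEST":
            return {"type": "ACK", "chaddr": msg["chaddr"],
                    "yiaddr": self.leases[msg["chaddr"]]}

def dora(server, mac):
    """Run Discover -> Offer -> Request -> Ack for one client; return its IP."""
    offer = server.handle({"type": "DISCOVER", "chaddr": mac})
    ack = server.handle({"type": "REQUEST", "chaddr": mac,
                         "requested": offer["yiaddr"]})
    return ack["yiaddr"]

server = TinyDhcpServer(["192.168.100.50", "192.168.100.51"])
print(dora(server, "aa:bb:cc:dd:ee:01"))  # first client receives .50
```

Note how the server keys everything off the client MAC, mirroring how a real client matches a broadcast Offer to itself via the chaddr field.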
Just to have a visual, here is a look at a Wireshark capture from my local workstation during the DHCP process.

Now that we have a baseline understanding of the DHCP process, why is it important to block those offer messages? Because DHCP Discover messages are sent to a destination of 255.255.255.255, any local DHCP server (including a rogue one) will hear and process them. Each server will then respond with an offer message, which the client will process and answer with a request. If the offer message from the rogue server is stopped as it enters the switchport, however, the client will never receive it and will only receive offers from a legitimate DHCP server.

Wait, how do we determine which DHCP servers are legitimate and which are rogue, if DHCP snooping is applied to all switchports? DHCP snooping relies on a concept of trusted and untrusted switch ports. When DHCP snooping is enabled on a switch, all ports are in an untrusted state by default. This means that if a DHCP offer message is received on any port, it will be blocked. To allow the DHCP process to function properly, you need to explicitly configure the ports that connect to DHCP servers as trusted. These would either be switch ports that connect directly to local DHCP servers or (probably more commonly) uplink ports. This allows you to inherently thwart the threat of rogue DHCP servers wreaking havoc on your network, while still allowing legitimate DHCP servers to “do their thing” by providing IP addresses to clients on the network.
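The trusted/untrusted decision can be sketched in a few lines of Python. This is a hypothetical model of the per-port check a snooping switch makes, not vendor code; the class name, port names, and message-type strings are all my own illustration.

```python
# Hypothetical model of the per-port DHCP snooping decision.

SERVER_MESSAGES = {"OFFER", "ACK", "NAK"}  # messages only servers originate

class SnoopingSwitch:
    def __init__(self):
        self.trusted = set()  # with snooping enabled, ports default to untrusted

    def trust(self, port):
        """Mark a port (e.g. an uplink toward the real DHCP server) as trusted."""
        self.trusted.add(port)

    def admit(self, port, msg_type):
        """Return True if a DHCP message may enter the switch on this port."""
        if msg_type in SERVER_MESSAGES and port not in self.trusted:
            return False  # rogue server blocked right at the point of entry
        return True       # client messages (Discover/Request) always pass

sw = SnoopingSwitch()
sw.trust("Gi1/1/1")                     # uplink toward the legitimate server
print(sw.admit("Gi1/1/1", "OFFER"))     # True  - legitimate server's offer
print(sw.admit("Gi1/0/7", "OFFER"))     # False - rogue offer on an access port
print(sw.admit("Gi1/0/7", "DISCOVER"))  # True  - normal client traffic
```

The key design point is that client-originated messages are always allowed, so hosts on untrusted access ports keep working normally; only server-originated messages are filtered.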

Sample Configuration
I’m familiar with Cisco products, so this configuration will be a sample from Cisco IOS/IOS-XE software. What is nice is that there really is not a whole lot of effort involved in setting up and enabling DHCP snooping on a switch. You’ll want to configure any necessary trusted ports where DHCP offer messages are expected, define the VLANs on the switch that are in scope for DHCP snooping, then enable the DHCP snooping process. Here is an example:

configure terminal
interface GigabitEthernet1/1/1
  ip dhcp snooping trust
exit
ip dhcp snooping vlan 10
ip dhcp snooping
end
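Once applied, you can verify the configuration from the same Cisco IOS/IOS-XE CLI (exact output varies by platform and software version):

```
show ip dhcp snooping
show ip dhcp snooping binding
```

The first command shows whether snooping is enabled, which VLANs are in scope, and which interfaces are trusted. The second shows the binding table of client MAC/IP/lease/interface entries that the switch learns from snooped DHCP traffic.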

Wrapping It Up
While DHCP snooping solves a problem that we hope would be infrequent and rare, if a rogue server does pop up either maliciously or accidentally, it can cause major problems if the mitigation is not in place. Having DHCP snooping enabled is a really low-lift way to protect your network from rogue DHCP servers causing what ends up being network connectivity issues for clients. DHCP snooping is also used in fabric/overlay technologies to ensure that DHCP messages get to the proper switch, and ultimately onto the proper clients. But, that’s a whole other blog, for a whole other post. Thanks for reading, and if I got any of this wrong, please leave a comment so I can correct it!

ENSLD Journey – The Intro

I have a 2022 goal of achieving the CCNP Enterprise certification. Late last year, I passed the ENCOR exam and now need to pass one of the concentration exams to achieve the full CCNP certification. Due to my interests, I have decided to go for the design specialization. To achieve this specialization, I need to pass the Designing Cisco Enterprise Networks (300-420 ENSLD) exam. Throughout my study plan for this exam, I plan to create blog posts. The reasoning behind this is to both help solidify my knowledge of the appropriate material and as an attempt to help you in your journey toward the Cisco design specialization. For this exam, my main study materials will include:

  • CCNP Enterprise Design ENSLD 300-420 Official Cert Guide Premium Edition and Practice Test (OCG)
  • CBT Nuggets ENSLD course
  • Pluralsight ENSLD course

I plan to go through each high-level section in all of the materials listed above, creating and reviewing Anki flashcards along the way. Please join me on this journey. I am open to advice and questions! Happy studying!

Challenges and Opportunities – Year Over Year

At the end of 2020, I made a conscious decision that 2021 was going to be different. Now, I should say that things had been going well. I had been happy with my career and home/family life. However, there was something missing. I had never really set solid goals for myself. I would always just try to “do the right thing” career-wise and all would be good. For the most part it had been, but I wanted to take more control over my career and what comes next in my life. So, at the end of 2020, I decided to set goals. Really, there was only one that I truly wanted, and that was to take and pass the Implementing and Operating Cisco Enterprise Network Core Technologies (350-401 ENCOR) exam.

The ENCOR Mountain
Ultimately, I want to become a Cisco Certified Network Professional in the Enterprise track. I had decided that this certification makes sense for my career now, as well as for potential growth. With that being two exams, and given my track record for the time it takes me to prepare for exams, I wanted my goal to be attainable, so I decided that my 2021 goal was to pass the ENCOR exam. Now, I should state that my preparations for ENCOR did not begin at the start of 2021. No, they actually started at the beginning of 2020. However, all throughout 2020, I feel like I made a critical mistake. I was really just going through different material, and never really reviewing anything. So, let’s say for a week or two, I would go through material that covered Spanning Tree Protocol. When I finished that, I would just move on to the next thing, then the next after that. The problem was that I was never reviewing anything, so after a short time, my study time around Spanning Tree was pretty much for nothing because I really didn’t retain anything. By the time I realized this “plan” (if you could even call it that) wasn’t working for me, it was already toward the end of the year. Thanks to the recommendation to leverage the digital flashcard concept with the Anki platform that I received from the Art of Network Engineering podcast, I decided to reset my studying in 2021 and go at this with a better plan for review. You can read my full ENCOR journey story here. That reset and review plan was really what I felt was the difference maker. It took me most of the year, but I did take and pass the ENCOR exam toward the end of November 2021.

The Podcast
I had mentioned the solid advice around the digital flashcard concept that I received from the AONE podcast. I had started listening to the show shortly after it started in the summer of 2020 and really enjoyed it. I joined their ‘It’s About the Journey’ Discord community and started communicating with like-minded people in there. I started writing for the AONE blog, then in February of 2021, got to be on the show. I was so excited for that. This podcast and community really started giving me a sense of involvement and accomplishment. Later in the year, I was invited back for a few episodes here and there, and eventually was made a full-time co-host. I really cannot thank the team enough for bringing me into this incredible experience. I had the pleasure of meeting A.J. Murray in person and am really looking forward to doing the same with Andy Lapteff and Dan Richards. I cannot wait to see what the future holds for this team.

The Year Ahead
With how 2021 is ending, I am really excited for 2022. I have two main career related goals for the upcoming year. The first is to pass the Designing Cisco Enterprise Networks (ENSLD) exam. This will get me the CCNP certification which has been something that I have wanted for a while now. Secondly, I want to write more and become a finalist in the Cisco IT Blog awards. As of now, after the CCNP goal is reached, I’m thinking about diving into learning Python. I am excited for 2022, and hope you are as well! Remember, it’s all about the journey!