ENCOR Journey Series – Quality of Service

Bandwidth is infinite and we never need to worry about congestion, slowness, and packet drops, right? Of course, the answer to this is “no”. Networks transport many different kinds of data, and the different types may need to be handled in different ways to provide the best user experience during times of congestion. When I say “times of congestion”, I’m referring to periods in which the amount of traffic is greater than the amount of available bandwidth. This is where Quality of Service (QoS) becomes very important. QoS consists of a modular framework: classify traffic, create a policy on how to handle the different traffic classes during congestion, then apply that policy to an interface. That policy can include marking classified packets, dropping certain types of traffic while allowing others, queueing traffic until bandwidth is available, and setting aside amounts of bandwidth for different traffic classes during congestion. I think it’s important to note that the actual enforcement of the traffic policy only engages when there is congestion, or essentially, not enough bandwidth to serve the current amount of traffic. In the rest of this post, we will take a deeper dive into the modular QoS approach.

Traffic Classification

In order to provide the proper service and experience to certain traffic types during congestion, you need a way to categorize, or classify, different traffic types. From a configuration standpoint, this is where class maps come in. Class maps provide you a way to match on certain traffic. You can then call on those class maps when creating policy (policy maps) in the next step. There are many different options for traffic classification at the different layers of the OSI model. Here is a high-level look, with a configuration sketch after the list.

  • Layer 1 – interface/subinterface or port selection.
  • Layer 2 – MAC address, class of service (CoS) markings.
  • Layer 3 – IP address, IP Precedence (IPP) markings, Differentiated Services Code Point (DSCP) markings.
  • Layer 4 – TCP/UDP port.
  • Layer 7 – Application recognition with deep packet inspection – Network Based Application Recognition (NBAR).
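To make the class map concept concrete, here is a minimal Cisco IOS sketch. The class names and the access list name (VOICE-ACL) are hypothetical and purely for illustration; the match criteria shown are common Layer 3/4 classification options.

configure terminal
 ! Hypothetical class matching voice traffic already marked with DSCP EF
 class-map match-any VOICE
  match dscp ef
 ! Hypothetical class matching traffic identified by a named access list
 class-map match-any VOICE-SIGNALING
  match access-group name VOICE-ACL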

Policy Creation

Once you have classified the important traffic that needs to be handled in certain ways during times of congestion, you now need to figure out what that specific handling is going to be. As stated earlier, this is where we can leverage policy maps in the configuration. At a high level, policy maps call upon the created class maps, then define the policy to be enforced when needed. Here are some examples of policy actions, with a policy map sketch after the list.

  • Traffic marking – Marking is used to apply a tag to packets. These tags signify specific classified traffic and can be used by other network devices to apply policy.
    • Layer 2 – At Layer 2, these are called CoS markings and are carried in the 802.1Q tag across trunk links.
    • Layer 3 – At Layer 3, packets can be marked in the type of service (ToS) field of the IP header using one of the following.
      • IP Precedence (IPP) – IPP is the legacy method, which uses three bits of the ToS field in the packet header. This allows for up to eight different markings and was designed to be a direct mapping of CoS markings at Layer 2.
      • Differentiated Services Code Point (DSCP) – DSCP uses six bits of the ToS field and allows for up to sixty-four different QoS markings.
  • Policing – With policing, packets that exceed the configured rate are dropped (or re-marked) so that the traffic conforms to the set bandwidth limitation.
  • Shaping – With shaping, packets are queued during congestion so that they can be held and sent as bandwidth becomes available. Shaping is typically done at the edge of the network when connecting to service providers. It works best with TCP applications that are not sensitive to latency and jitter. UDP applications such as voice and video are sensitive to latency and jitter, so we want those packets to get through as soon as possible and not be queued.
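Here is a minimal policy map sketch that builds on the hypothetical VOICE class above. The policy name and numbers are assumptions for illustration, not recommendations: voice gets a low-latency priority queue, and everything else is fair-queued.

configure terminal
 policy-map WAN-EDGE
  class VOICE
   ! Low-latency priority queue; the 512 kbps value is an assumption
   priority 512
  class class-default
   ! Remaining traffic shares the default queue fairly
   fair-queue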

Policy Application

Alright, so we’ve classified our traffic and created a QoS policy, now what? Now, the policy map must be applied to an interface so that it can be enforced. In Cisco IOS, this is referred to as a service policy and is applied to the interface in either an input or output direction. A minimal sketch follows.
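Continuing with the hypothetical WAN-EDGE policy map from above, applying it outbound on a WAN-facing interface might look like this (the interface name is an assumption):

configure terminal
 interface GigabitEthernet0/1
  ! Enforce the WAN-EDGE policy on traffic leaving this interface
  service-policy output WAN-EDGE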

Conclusion

One of the biggest things for me to remember around QoS is that the policy enforcement only needs to kick in during times of congestion. If there is always enough bandwidth, then packets never need to be policed or shaped. QoS is there to ensure that the most important traffic is given preferential treatment during those heavy utilization times when there is not enough bandwidth to service all of the current traffic.

ENCOR Journey Series – Multicast Operation

In the last post, we covered the background of multicast: what it is, and why it’s leveraged in networks. In this article, we will dig deeper into multicast operation and routing. When we think in terms of unicast, we think about delivering data toward the destination. We have a destination IP address that belongs to a single device, and we are trying to get packets to it, directly. With multicast, we think in terms of forwarding data away from the source, rather than to a destination. I know that sounds somewhat like the same thing, but I think the big difference here is that with multicast, data is being sent to a grouping of devices, and that grouping can be dynamic. Multicast receivers (or clients) can come and go. So, the big idea is to just forward traffic away from the source and rely on multicast routing to properly deliver the packets to those interested receivers.

Multicast Delivery

So, how do we get packets from the senders (or sources) to the interested receivers, leveraging multicast? First, let’s go over two multicast delivery models, dense mode and sparse mode.

Dense Mode

With dense mode, the multicast routers assume that all clients in the network want to receive the multicast traffic. This can be thought of as a “push” model because the multicast routers push the traffic to all clients in the network. Then, the routers can “prune” (or stop) traffic if they do not have any interested clients connected to them. However, that prune period is only temporary, and the multicast traffic will get flooded again. Dense mode is really only efficient in networks where multicast receivers are densely populated in all corners of the network. Because this isn’t the most efficient delivery method for multicast traffic, dense mode is not used as much these days.

Sparse Mode

Sparse mode requires that clients request to receive multicast traffic before traffic is sent down the multicast distribution tree (MDT) to them. Where dense mode is a push model, sparse mode can be thought of as a “pull” model. Clients must request to become members of specific multicast groups so that they can pull the data from the senders. Sparse mode was designed to be leveraged in environments where receivers are sparsely located throughout the network, but it also works well in networks that have densely populated receivers. This is a more efficient method for delivering multicast because the traffic is only forwarded to segments of the network where there are interested receivers.

Tree Types

We just brought up the concept of the multicast distribution tree (MDT). The multicast source can be thought of as being at the root of the tree with the receivers attached at the edge of the branches. There are two main tree types with multicast, source trees and shared trees.

Source Trees

Source trees build paths from the true source (or sender) of the multicast traffic all the way to the receivers. Source trees are leveraged in dense mode by default because the source is known by all routers. Because traffic is being forwarded to all points in the network, it flows directly from the source, and every router learns the identity of the source right away. Source trees are also leveraged in sparse mode after initial multicast traffic is received and the source is known, and with source specific multicast (SSM). We’ll get into these shortly. Multicast routers notate source trees in the multicast routing table in (S,G) format (pronounced “S comma G”), which stands for source and group. The source is the IP address of the sender and the group is the IP address of the multicast group that is receiving the traffic.

Shared Trees

Shared trees are leveraged with sparse mode (at least initially), and build two separate paths. The first is actually a source tree from the multicast source to a central multicast delivery router called a rendezvous point (RP). Then, a shared tree is built from the RP to the receivers. It is called a shared tree because multiple groups can share that rendezvous point. Why exactly do we need shared trees with an RP? Well, multicast receivers don’t always know (or aren’t programmed to know) the actual source of the traffic they need. They may just know the group that they need to join so that they can receive traffic. In this case, the router that connects to the receivers needs to join the shared tree, or in other words, register itself to the RP to receive traffic for the specific multicast group. Multicast routers notate shared trees in (*,G) format (pronounced “star comma G”). Whenever “*” is used, you know that an RP is involved. The RP can be learned by routers in multiple ways that we aren’t going to get into in this post. To make the notation concrete, a sketch of both entry types follows.
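For illustration, here is roughly what the two entry types look like in the multicast routing table (show ip mroute) on a Cisco router. The addresses, timers, and interface names are hypothetical; the (*,G) shared tree entry and the (S,G) source tree entry are the parts to focus on.

(*, 239.1.1.1), 00:03:12/00:02:48, RP 10.0.0.1, flags: S
  Incoming interface: GigabitEthernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    GigabitEthernet0/1, Forward/Sparse, 00:03:12/00:02:48

(10.2.2.10, 239.1.1.1), 00:01:05/00:02:55, flags: T
  Incoming interface: GigabitEthernet0/2, RPF nbr 10.1.2.1
  Outgoing interface list:
    GigabitEthernet0/1, Forward/Sparse, 00:01:05/00:02:55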

Let the Traffic Flow!

I now want to go through the high-level process of getting multicast traffic to a receiver with the sparse mode model. Remember that sparse mode leverages a pull model, so the process really starts with the clients that want to receive the multicast traffic. When someone opens an application on a device that leverages multicast to receive data, the client sends an IGMP (Internet Group Management Protocol) membership report message to the multicast group address of the group they want to join. That membership report is received by the client’s local router, which is referred to as the last hop router (LHR). It is a last hop router because it can be seen as the last hop in the path of the multicast traffic’s journey away from the source.

If a source is not specified in the IGMP membership report, the LHR needs to join the shared tree. This essentially means that the LHR needs to register to the RP that it wants to receive traffic for the specific multicast group. While IGMP is used from the receivers to the LHRs, PIM (Protocol Independent Multicast) is used between the last hop routers and the RP. The LHR creates a (*,G) entry, where G is the multicast group that the client requested to join, and sends a PIM join message toward the RP. The LHR just uses the unicast routing table to find how to reach the RP. Once the interface toward the RP is determined, the router will send a PIM join message out that interface to the multicast address of 224.0.0.13 (All PIM Routers). All interfaces in the path need to be configured for PIM sparse mode (PIM-SM). This process goes on from router to router until it reaches the RP.

I learned something from Tim McC’s Cisco Live Barcelona session that really stuck with me. Rather than looking at this as a “tree growing”, the IGMP and PIM join process can be seen as trenching a canal that starts from the receiver and works its way up toward the RP or the source (depending on shared or source tree). When the IGMP membership report (join request) reaches the last hop router, the interface that the request was received on is considered an outgoing interface, and the interface out which the PIM join is sent toward the RP is the incoming interface. We can look at this as multicast traffic flowing from a sender into an incoming interface and out the outgoing interface toward the receiver that requested it. Once our canal is built, multicast traffic will then flow from the RP down that canal (out the outgoing interfaces of each router) until it reaches the receivers.

I’ve glossed over the process of the RP building that source tree with the first hop router (FHR), but that is a necessary process as well. When multicast traffic is generated from a source, it is sent to the specified multicast address of the application and received by the FHR (the first router in the path of the multicast traffic). The FHR then works with the RP to build that source tree so multicast traffic can flow to the RP, the RP can be that central point of multicast delivery, and other routers can build shared trees with it.

I think it’s important to know that while multicast has its own routing protocol and tables, the process still leverages the unicast routing table to find the path toward the RP or the direct source of the traffic. The unicast routing table is also used to perform Reverse Path Forwarding (RPF) checks when multicast traffic enters an interface. The router checks the source of the packet against the unicast routing table to make sure that the packet should have been received on that interface. This is a loop prevention mechanism.
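For context, getting basic PIM sparse mode running on a Cisco IOS router is fairly compact. This is a minimal sketch assuming a statically configured RP at the hypothetical address 10.0.0.1; the interface names are also assumptions.

configure terminal
 ! Enable multicast routing globally
 ip multicast-routing
 ! Every interface in the multicast path runs PIM sparse mode
 interface GigabitEthernet0/0
  ip pim sparse-mode
 interface GigabitEthernet0/1
  ip pim sparse-mode
 ! Statically define the rendezvous point (one of several ways routers can learn the RP)
 ip pim rp-address 10.0.0.1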

The Switch-over

Alright, that last section got a bit heavy. So, what happens in a shared tree model if the best, most efficient path to the source of the multicast traffic is not through the RP? Once the canal is built and multicast traffic is received by the last hop router, the actual source IP of the multicast sender will be in the packet headers. The LHR can then add an (S,G) entry to the multicast routing table and build a source tree (leveraging the unicast routing table to find the best path to the source) directly to the multicast sender. This is the same PIM join process that was used to build that canal to the RP earlier, but now the canal is being built directly to the source. Once the new canal is built, the LHR can prune its (*,G) entry toward the RP and only use the (S,G) path toward the actual source.

Source-Specific Multicast (SSM)

While the shared tree concept works, and the switch-over mechanism makes the process eventually efficient, there is a chance that we can rely on the applications themselves to make this process efficient right away. If the applications (and the network) support it, Source-Specific Multicast (SSM) can be used. Basically, with SSM the applications are configured to request multicast traffic from specific sources (or to exclude specific sources). On the network side, SSM is leveraged with IGMPv3, so the LHR would need to support that version. When the LHR receives the IGMPv3 membership report from the client, if a requested multicast source is specified, the router can build that (S,G) source tree directly to the source without needing to build the shared tree through the RP first. This makes the whole process more efficient. A minimal configuration sketch follows.
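Here is a minimal IOS sketch of the network side of SSM on the LHR. It assumes the default SSM range of 232.0.0.0/8 and a hypothetical client-facing interface; IGMPv3 is what allows receivers to name the desired source in their membership reports.

configure terminal
 ! Reserve the default SSM range (232.0.0.0/8) for source trees only
 ip pim ssm default
 interface GigabitEthernet0/3
  ip pim sparse-mode
  ! IGMPv3 lets the receiver include the desired source in its membership report
  ip igmp version 3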

Conclusion

I’ve said recently that there is a lot to unpack with multicast, and I’m only scratching the surface here, but I think learning the basics can really get you far with wrapping your head around multicast. For more in-depth learning, I encourage you to take a look at that Cisco Live presentation that I referenced above. There are also some good Cisco whitepapers around multicast out there as well. Feedback is definitely welcome. Please let me know where I got it wrong or could have explained something better. I am using this blog as a tool to help me better understand concepts. Thanks!

ENCOR Journey – Multicast Background

Unless otherwise specified, I think it is easy to go into a routing scenario or situation assuming that the traffic flows are unicast. What do I mean by this? Unicast traffic flows are what I will call the “typical” end user type communications in a network. One device needs to talk to another device. The source IP is the address of the originating host and the destination IP is the address of the “far end” device (for example, a server). Let’s give an example of a unicast traffic flow. A person opens up a web browser and types http://www.bogusexamplewebsite.com (I don’t know if this is real or not, this is just an example) into the address bar. As we flow down the stack (or OSI model) from Layer 7 (the Application Layer), in order for the packets to be delivered, a destination IP address needs to be known. In this case, DNS will be leveraged to translate the website name into an IP address. Once that process is complete, we now have our unicast conversation: one source IP address to one destination IP address. This is a basic unicast conversation on a network. This seems fairly straightforward, so this must be the way that all conversations happen on a network, right? Well, not always.

Let’s bring another example in. Let’s say that an organization has a need to deliver real-time, streaming video to employees for “daily check-in” meetings. A server gets set up for this, and every day at 9:00 AM, live video is streamed to the organization, and employees can use an application on their devices to “tune in” to this stream if they wish. This could get out of hand in a hurry if we had to leverage unicast for this method of streaming video delivery. If unicast was used, there would be a separate session from the server for every client that requested the stream. This does not scale well, as it could heavily consume computing resources on the server and bandwidth on the network. See the example diagram below that gives a visual for how this stream would look if unicast was leveraged for packet delivery.

Video stream delivery with unicast

In the image above, there are four total workstations in the network that request the video stream. With unicast, the video server receives four different requests and sends the same stream four different times. This method can be resource intensive and may not scale well in large environments. Thankfully, another option exists. When leveraging multicast, we can have a streaming traffic flow that looks more like this image below.

Video stream delivery with multicast

While unicast gives us a “one-to-one” packet delivery mechanism, multicast is leveraged to support “one-to-many” use cases. The case of the “daily check-in” meetings is definitely a scenario in which a one-to-many delivery mechanism makes sense. Well, this sounds great, let’s just “turn it on” and walk away, right!? As with most things, there is more to it than that. First off, what are the requirements to support multicast?

  • The end user applications must be programmed to request data from a multicast group address.
  • The server application must be programmed to send data to a multicast group.
  • The network must be configured to support multicast routing.

In the requirements listed above, I made reference to a multicast group address. A multicast sender (server) sends data to a multicast group address instead of individual client unicast IP addresses. Multicast receivers (end user applications) “subscribe” to a multicast group and request to receive data that is sent to that multicast group address from the multicast sender. The routers in between facilitate the multicast registration and packet delivery. Multicast addresses live in the Class D IP space of 224.0.0.0/4, which gives a range of addresses from 224.0.0.0 to 239.255.255.255. Within the Class D space, different ranges are specified for different purposes of multicast.
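To give a few well-known examples of those purpose-specific ranges (these particular assignments are standard IANA allocations):

  • 224.0.0.0/24 – Local network control block; link-local addresses such as 224.0.0.13 (All PIM Routers).
  • 232.0.0.0/8 – Reserved for Source-Specific Multicast (SSM).
  • 239.0.0.0/8 – Administratively scoped addresses, for use within an organization (similar in spirit to private unicast ranges).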

Conclusion

Multicast provides us the ability for more efficient packet delivery for certain use cases. These use cases include applications that provide a “one-to-many” functionality. An example use case is a server delivering streaming video to clients. When applications support multicast, and someone opens the app to request a stream, in modern multicast deployments, the application “subscribes” to a multicast group, which registers the device with its local router, saying “Hey, please send me traffic destined to multicast IP address X.X.X.X”. The local router then registers itself with upstream routers, requesting subscription to receive multicast packets for that specified multicast group. Then, as the stream is sent to that multicast group address, it is forwarded down the multicast distribution tree (MDT) to the routers that have registered. Those routers keep track of the downstream interfaces from which the multicast requests were generated, and continue forwarding the packets to their destination. There is more to come, as there is a lot to unpack with multicast! Please join me in future posts.

Automation Journey – Python: The Basics

The title of this post really says it all. I want to become a more automation/programmability minded person, and to kick off this journey, I decided to start by getting familiar with Python. As time permits, I am going through Al Sweigart’s “Automate the Boring Stuff with Python” book and Udemy course, and plan to document here what I learn along the way. I have started learning the very basic programming concepts and am starting to get familiar with the Python shell and basic syntax. Traditionally, I’ve definitely been a “slow and steady wins the race” kind of guy. I like to take my time and try not to get overwhelmed. Luckily for me, Al’s book and course seem to cater to that. You’re going to have to bear with me during these posts, I’m really starting from square one here. I’m hoping that seeing my documented struggles and hopeful eventual successes might help others along the way. In the rest of this post, I want to go over my explanation/take on the basic terminology that I have learned. And yes, even though this isn’t geared toward exam study, I am making Anki flashcards. Going through these cards is really helping me soak up and retain the new concepts that I’m learning. Here are some concepts that I’m learning, in question/answer form.

  • What is Python? Python is a combination of a programming language along with an interpreter that reads and takes action on the code written in the Python language.
  • What are expressions? Expressions take one or more values and evaluate them down to a single value. A basic example is 2 + 1, which evaluates to 3. This expression leverages the “+” operator to evaluate the two values of “2” and “1” down to the single value of “3”.
  • What is a data type? We can think of a data type as a category or classification of a value. A specific value can only belong to one data type. Three common data types in Python are:
    • integers – Whole numbers.
    • floating point numbers – Numbers with decimal points such as 17.2, 9.7, 256.3.
    • strings – Text values made up of characters (letters, numbers, symbols, and spaces).
  • What is a variable? A variable is a named piece of memory in a program used to store a value. The variable can then be called upon later in the program. Al explained variables in a way that I really like. A variable can be thought of as a labeled box that is used to store something. In a program, that “something” can be set statically or dynamically. For example, a variable could be set as a result of someone typing some input into the program in a certain spot.
  • What is an assignment statement? This is how a value gets assigned (or stored) to a variable. An example is age = 23. In this assignment statement, a variable named age is created and assigned an integer value of 23. In the assignment statement, the = sign acts as the assignment operator. Variables contain one value at a time. So, after we set age to 23, if we then type age = 42, the value of 23 is overwritten by the value of 42 in the age variable.
  • What is # used for? The # symbol is used in Python to write comments in your code. Essentially, anything typed on a line after the # is ignored by the Python interpreter when the code is run. Here are a couple of use cases for “commenting out” Python code.
    • For documentation purposes to explain what your code is trying to accomplish.
    • For debugging purposes to “disable” certain lines of code to see if they are causing issues.
  • What is a function? A function is a named, reusable set of code that your program can call upon to run within your code. Many functions come already built and established for you to use. Built-in Python functions include print() and input().
  • What is an argument? In the point directly above, I explained at a high level what a function is along with its purpose. Well, an argument is a value that is passed to a function. Let’s take a look at a simple example with the print() function.
>>> print('Hi, I am Tim and I am a complete Python/coding beginner. Please Help!')
Hi, I am Tim and I am a complete Python/coding beginner. Please Help!

In the example above, I am passing the string value of ‘Hi, I am Tim and I am a complete Python/coding beginner. Please Help!’ to the print() function. That string value is an argument. In other words, it is the value being passed to the function.
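To tie several of these terms together, here is one more small example in the same interactive shell style. The variable names and the typed input are made up for illustration:

>>> age = 23            # assignment statement: the variable age now stores 23
>>> age = 42            # the old value of 23 is overwritten
>>> name = input()      # input() returns whatever the user types, as a string
Tim
>>> print(name)
Tim
>>> print(age + 1)      # an expression: age + 1 evaluates to a single value
43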

Summary

As you can tell, I’m learning the basics of the basics here. Also, I’m going through this on an “as time permits” basis, which probably isn’t the best approach, but building and reviewing the digital flashcards is helping me retain the knowledge, at least around the terminology, which is good. If you are learning Python as you’re reading this, then I hope this helps.

Automation Journey – The Beginning…Again

It’s time. Time that I make a conscious effort to start learning concepts and leveraging tools to dive into network automation. I have heard a lot of advice for a while now that a great way to become a better network engineer is to automate repetitive tasks so that you can spend time providing value elsewhere. The concept of providing value is important to me. Whenever I struggle with thoughts around not knowing where to start, I figuratively step back and tell myself to just find where you can use your skills (and build new skills) to provide value. I also want to position myself to continue to remain relevant, and learning automation seems like a great way to go.

As you can probably gather from the title, I’ve been down this road before. I’ve dabbled for a bit and dropped it, but now is the time to break the cycle. I am currently working toward achieving the CCNP Enterprise certification and do not plan on taking time away from that goal to study automation. As much as I would love to dive into Cisco DevNet and eventually work toward the DevNet Associate certification, I do not plan to start that yet. While I work toward CCNP Enterprise, I would like to start my automation journey and find some quick wins. I feel (it might be ‘right’ and it might be ‘wrong’) that I can do that by getting familiar with Python. I have decided to start by going through Al Sweigart’s “Automate the Boring Stuff with Python” book and Udemy course. I am really excited to dig in. I plan to build upon this series and really hope that I can look back on this first post sometime in the future and be proud of my progress. I also encourage you to learn along with me. Or better yet, TEACH ME EVERYTHING YOU KNOW!

ENCOR Journey – STP Features

In the last installment of this series, I keyed in on the STP feature of PortFast. In this post, I wanted to highlight two more STP features, or “add-ons”, that I think are very important for controlling and securing the Layer 2 domains. Those two features are BPDU guard and root guard. Both features are leveraged to react to Spanning Tree BPDUs in similar ways, but in different scenarios, and for different reasons.

First, BPDU guard is leveraged on access ports configured with PortFast. BPDU guard protects against switches being plugged into access ports and potentially causing Layer 2 loops. Remember, PortFast allows interfaces to immediately transition to the forwarding state, which is dangerous if switches are being plugged in. When BPDU guard is enabled on PortFast interfaces, if a BPDU is received on the port, the port will be placed into an err-disabled state (effectively shut down). BPDU guard can be configured either globally on all PortFast-enabled interfaces, or explicitly on specific interfaces, with the following commands.

  • Global
    • configure terminal
    • spanning-tree portfast bpduguard default
  • Interface Specific
    • configure terminal
    • interface interface-id
    • spanning-tree bpduguard enable

Next, root guard is a mechanism to prevent switches that should not become the root bridge from becoming the root bridge. STP root guard is configured on designated ports that connect to downstream switches that should never become the root bridge. If a superior BPDU is received on a port configured with root guard, rather than the port transitioning to become a root port, the port is placed into a root-inconsistent (blocked) state to protect the current root bridge and to prevent a topology change from occurring; the port recovers automatically once the superior BPDUs stop arriving. Well-designed Layer 2 topologies should have defined primary and secondary root bridges, and leverage root guard if necessary to protect against unnecessary topology changes due to misconfigured or rogue switches. Root guard can be enabled on STP designated ports with the following commands.

  • configure terminal
  • interface interface-id
  • spanning-tree guard root
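To put both features in context, here is a minimal sketch on a hypothetical access switch: BPDU guard enabled globally for all PortFast edge ports, and root guard on a downlink to a switch that should never become root. The interface names are assumptions.

configure terminal
 ! Err-disable any PortFast-enabled port that receives a BPDU
 spanning-tree portfast bpduguard default
 ! Access port facing an end host
 interface GigabitEthernet1/0/10
  switchport mode access
  spanning-tree portfast
 ! Designated port toward a downstream switch that must never become root
 interface GigabitEthernet1/0/24
  spanning-tree guard root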

I have enjoyed gaining a deeper knowledge of STP, including the additional features. I see BPDU guard and root guard as protection mechanisms that help promote a stable topology and assist in the prevention of unnecessary or unwanted topology changes.

Podcast Review – ZigBits Episode 70

  • Podcast: ZigBits Network Design Podcast
  • Episode: 70 – Demystifying the Role of the Network Engineer with Ethan Banks

For a while now on the podcast, Zig has been doing a series of “Demystifying the Role of the Network Engineer” episodes with various guests. In this episode, Zig sits down with Ethan Banks to get his take on the role of the network engineer. This was a fun one for me because I’ve been a Packet Pushers fan for a little while now and really enjoy Ethan’s perspective. I took some notes while listening to this episode, so I am going to list out some paraphrased key topics that were said during the episode, and give my thoughts/interpretations inline.

  • A network engineer is a builder.
    • In an organization, the network engineer is the individual (or individuals) that actually build and implement the network infrastructure.
    • Depending on size and structure, the engineer will take and interpret a design from a network architect, develop a design of their own, or work with a contractor for the design process.
  • When seeking help or mentorship, do your homework first.
    • Unless there is an outage/emergency, you aren’t really doing yourself any favors by seeking help from others without trying to do some research of your own. This will help you learn and build confidence in your own abilities.
    • Mentors and more experienced individuals will probably be more willing to help someone that has proven that they are serious about learning something.
  • Build trust, don’t just try to be the smartest person in the room.
    • If you are lucky enough to be a member of a team, do what you can to make the team stronger, rather than just make yourself look good.
    • You are one person, we are stronger together.
    • Working with different team members brings different perspectives. You never know when someone else might catch or see something that you did not.
  • Repetition and practice are not always easy to accomplish, so proper documentation is vital.
    • There may be certain technologies or skill sets that we do not get to tap into on a daily basis. Having good documentation or at least knowing where to obtain good documentation is important.
  • Do not lose sight of the business, speak the language.
    • I think that it’s an easy trap to fall into (I’m guilty, myself): thinking that “Well, I’m in IT, I just do ‘IT things’ and do not need to understand the core business purpose and principles”. We are more than just IT workers. We are contributors to a business, working toward business goals. It can be difficult to provide value to a business if you do not understand the whats/whys/hows of the company.
  • Communicate effectively.
    • I am a firm believer in communicating clearly and often to customers, team members, and management. Not only so that people do not have to wonder about the status of certain things, but I feel that it is a way of showing that you care about what you do and the task at hand. If I can provide information or an update before someone has to ask, then I feel that I am communicating at least decently.
  • Have the ability and drive to learn.
    • Network engineers and other IT contributors work in ever-changing environments. We need to have the ability, desire, and drive to continuously learn, adapt, and grow.
  • Ethan still loves networking.

I love hearing people’s stories, experiences, and advice. Listening to these two incredibly experienced and intelligent individuals provided just that. This was a great discussion with a lot of solid advice for network engineers and I definitely recommend giving it a listen.

ENCOR Journey – STP Portfast

I have been going through Spanning Tree Protocol study for the last week or so and wanted to highlight a key part of the PortFast feature that had not really hit me (and stuck) until now. I’ve known that a big reason for, and benefit of, the PortFast feature is that it moves interfaces (typically access ports) to the forwarding state immediately. However, another big reason and benefit that I’ll say I didn’t realize or remember, but is just as important in my opinion, is the suppression of topology change notification (TCN) BPDUs for interfaces with PortFast enabled.

By default, switches will create and send a TCN BPDU toward the root bridge when they detect a topology change (for example, an interface going to a down state). Upon receipt of the TCN BPDU, the root bridge will create a configuration BPDU with the topology change flag set and flood it to all other switches in the topology. When the non-root bridges receive a configuration BPDU with the topology change flag set, they shorten the MAC address aging time to the STP forward delay timer and flush out all MAC addresses older than that time period. This is done to prevent frames from being forwarded out interfaces toward MAC addresses that may no longer be reachable out those ports. By default, this process could happen very often due to end devices potentially connecting and disconnecting frequently. That could cause a lot of churn and inefficiency in the Layer 2 topology. If PortFast is enabled on all access ports connecting to individual end hosts, this TCN behavior is suppressed and only leveraged on non-PortFast interfaces (switch-to-switch connections). To me, this seems very important.

STP portfast can be enabled on individual interfaces or globally on all access ports.

  • Interface configuration
    • interface interface-id
      • switchport mode access
      • spanning-tree portfast
  • Global configuration
    • spanning-tree portfast default

ENCOR Journey – The Plan

I’m not particularly proud of this, but I actually began this ENCOR journey at the end of 2019/beginning of 2020. I spent a lot of time studying throughout 2020, and I won’t call that time wasted by any means, but where I feel I failed is that I did not have a proper plan for absorbing and reviewing content. I would go through multiple content sources for each topic, but did not have a review process to go back and keep the knowledge “fresh”. One thing I was proud of in 2020 was that I spent a decent amount of time on lab practice of different concepts. All of this being said, something had to change, and I needed a “real” plan. One tool that I have been really excited about (thanks to @artofneteng for the advice) is the digital flashcard concept with Anki. I have incorporated flashcards into most of the steps in my plan. Here is the current plan I’m working through for each topic or grouping of topics.

  1. OCG chapter (creating flashcards along the way).
  2. OCG chapter key topic review.
  3. OCG key terms (create flashcards).
  4. OCG command reference (create flashcards).
  5. Cisco On-demand learning (creating flashcards along the way).
  6. CBT Nuggets (creating flashcards along the way).
  7. Labs as necessary.
  8. Flashcard review.

As far as flashcard content, from the sources I’m leveraging, I create cards for quiz questions, key topics, key terms, and anything else that I deem necessary. One thing that I have found myself doing is starting each study session by going through flashcards. A benefit I see to this is that I don’t go over just the current topic’s flashcards; older topics stay in the rotation too. I feel that is the review concept that I was missing last year. I feel really good about this plan and am very excited to continue down this path. Due to the recent @artofneteng podcast episode titled “Goal Hacks”, I am also now considering adding in practice exams sooner rather than later. On your path to your goals, I think you need to find what steps work best for you, but I do recommend making a plan and sticking to the core concepts of that plan.

ENCOR Journey – OSPF LSDB

I have been around OSPF for some time now, but I had not until recently (while studying for ENCOR) done a deep dive into understanding how to read the link state database (LSDB). The LSDB essentially contains reachability information for all known OSPF routes in a network and is built from different link state advertisement (LSA) types. The six main types are listed and described (with my notes) here.

  • Type 1 – Router LSA
    • Each router in an area generates a single Type 1 LSA that describes all OSPF enabled links.
    • The foundation LSA of OSPF.
    • Type 1 LSAs do not get advertised outside of the area of origin.
  • Type 2 – Network LSA
    • Type 2 LSAs describe the network and attached routers of multi-access links.
    • Like Type 1 LSAs, Type 2 LSAs are not advertised outside of the area of origin.
  • Type 3 – Summary LSA
    • Generated at area border routers (ABRs).
    • Describe networks that originate from another area.
    • Type 1/2 LSAs are recreated at ABRs as Type 3 LSAs to be advertised to other areas.
  • Type 4 – Summary ASBR LSA
    • Generated at area border routers (ABRs).
    • Describe how to reach an autonomous system boundary router (ASBR).
    • Used in conjunction with Type 5 LSAs.
      • Because Type 5 LSAs describe networks that are advertised by ASBRs, routers need to know how to reach the ASBR. The Type 4 LSA is advertised by the ABR to the local area so the local routers know that to reach an ASBR, they need to route to the ABR.
  • Type 5 – AS external LSA
    • Describe external routes that are redistributed into OSPF.
    • Generated at autonomous system boundary routers (ASBRs).
    • The only LSA type that is advertised intact throughout multiple areas.
  • Type 7 – NSSA LSA
    • Describe routes that are redistributed by an autonomous system boundary router within a Not-So-Stubby Area.

The LSDB can be a very powerful tool for discovery. Because all routers within an area maintain an identical LSDB, you can draw out a topology map of that entire area just by logging into a single router and looking at its LSDB. When doing this, you would just need to focus on the Type 1 (router) and Type 2 (network) LSAs.

  • Suggested commands
    • show ip ospf database
    • show ip ospf database router
    • show ip ospf database network
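To give a sense of what you would be reading, here is an abbreviated sketch of show ip ospf database output. The router IDs, ages, and sequence numbers are hypothetical, and the exact columns can vary slightly by IOS version.

Router# show ip ospf database

            OSPF Router with ID (1.1.1.1) (Process ID 1)

                Router Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum Link count
1.1.1.1         1.1.1.1         512         0x80000004 0x00A1B2 2
2.2.2.2         2.2.2.2         498         0x80000003 0x00C3D4 3

                Net Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum
10.1.1.2        2.2.2.2         498         0x80000001 0x00E5F6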