NFD26 – Performance Monitoring w/ NetBeez

The NFD26 series of posts are my commentary from specific presentations held for Networking Field Day 26, a virtual event presented by Tech Field Day.

Let’s face it. None of us like getting that call. You know which one I’m talking about. The one that makes you cringe the instant your brain processes it. The problem description is simply “the network is slow”. I don’t know about you, but this is one of those mood-changing incidents. You are immediately drawn into an internal debate of “what does this even mean, and how do I get the proper information to even start troubleshooting!?”. I’m sure it can feel like an interrogation to the person reporting the issue, getting asked so many questions. It can definitely be frustrating for everyone involved. Wouldn’t it be nice to have a fair amount of relevant information at your fingertips before asking a single question? Well, NetBeez aims to provide just that with multiple products across their portfolio.

NetBeez is a network performance monitoring company that can provide visibility with flexible deployment options.

I really appreciate the flexibility that the NetBeez solution provides. You can have a combination of physical wired/wireless sensors, integrated agents into network infrastructure platforms, cloud agents, as well as endpoint software. While I’m not always a fan of loading up more software on endpoint devices, if the software is lightweight, and provides crucial information to assist in resolving issues, I’m all for it. The NetBeez endpoint agent can do just that, and when you are supporting staff that can be working from anywhere, it is essential to be able to find the root cause of issues quickly and efficiently. If you can resolve an issue for someone without having to ask many (or any) questions, that is incredible.

Let’s take a look at the NetBeez Dashboard, via their demo environment. When you first log in, you are greeted by the high level incidents and performance graphs. This can give you a quick, holistic view into your environment.

The rest of the tabs provide you different methods to gain performance information from the sensors in your network or in the cloud. The Agents tab lists information about all of the different agents in your environment as well as the high-level performance statistics for different tests being run by the agents. As far as agent information, you’ll see items such as the agent name, type (wired, wireless, etc.), if there are any incidents associated, and the agent MAC and IP address.

In the Targets tab, we can see the various tests that are set up to the configured destinations. As I’m going through the demo now, it looks like there is one alert and fourteen warnings for the configured Baidu destination.

As we drill down deeper, we can see that a specific device had an HTTP test issue with the destination, which has since cleared.

The last major item that I want to cover is some of the data that you can see from the workstation agents. As stated above, companies are adopting remote/distributed working strategies now more than ever. IT support organizations have to be able to support distributed employees in environments that IT does not control. An example of this is the work-from-home environment. IT may own the asset that the employee is using at home, but often they have no control over the network path to reach the employee’s home, nor the actual network infrastructure in the home. Being able to provide network performance visibility from a client perspective can be crucial in determining causes of issues and pain points for employees. If someone is at home and connected via wireless, that can prove even more challenging. In the AGENTS > Remote Workers screen, we can see a list of remote worker NetBeez agents. Within this screen, there is a WIRELESS tab that shows statistics about the agent’s wireless connectivity. We can see important information such as:

  • Protocol
  • Frequency/Channel
  • Bit rate
  • Link quality percentage
  • Signal strength

Bert’s Brief

Operating and supporting enterprise networks can be a daunting task when you feel like you are flying blind when issues get reported. NetBeez aims to give you visibility into the performance of your networks through path analysis and active testing across multiple form factors. Their pivot into the workstation client agent has surely been a welcome addition for customers over the last year and a half. Meaningful visibility matters, and NetBeez has modular offerings to provide that analysis. I’m looking forward to seeing what they have in store next.

NFD26 – Scaling Discussion w/ Arista

The NFD26 series of posts are my commentary from specific presentations held for Networking Field Day 26, a virtual event presented by Tech Field Day.

Something that really piqued my interest during Arista Networks’ NFD26 presentation was the time they spent discussing scaling caveats and opportunities in modern data centers. Debates continue, both in the data center and the campus, around which is better: fixed or chassis-based switches? Now that is obviously a broad, open-ended question, so let’s focus it up a little bit. In terms of data center scale, what makes the most sense: chassis-based switches, or a distributed line-card model leveraging a fixed (or chassis-based, depending on port density) spine/leaf architecture? Arista definitely came prepared to discuss this and spent time on a couple of slides contrasting the two options.

Arista described the two routing architecture scaling methods as either scaling up or scaling out. Scaling up describes a chassis-based solution in which physical line cards can be added to the base (backplane) chassis when additional capacity is required. Conversely, scaling out describes a spine/leaf topology that leverages physically diverse routers/switches that can be distributed throughout the data center. The spines are the physically disaggregated backplane that provides connectivity between the leaf nodes. The leaf nodes are the edge nodes where endpoint connectivity takes place. Each leaf connects to each spine and leverages ECMP, allowing for efficient throughput. Also, because each leaf is connected to each spine, any leaf-to-leaf communication just needs to traverse up to a spine and down to the destination leaf. When bandwidth scaling is needed, spines can be added to provide it. Here are the slides that Arista used to contrast the two scaling methodologies.
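To put rough numbers on the scale-out side, here is a back-of-the-envelope sketch in Python. The port counts and speeds are invented for illustration and are not from Arista's presentation; the one real property it demonstrates is that, with every leaf wired to every spine, the number of equal-cost leaf-to-leaf paths equals the spine count.

```python
# Hypothetical two-tier spine/leaf capacity math (illustrative numbers only).

def fabric_summary(spines, uplink_gbps, edge_ports, edge_gbps):
    """Each leaf connects once to every spine, so ECMP width == spine count."""
    ecmp_paths = spines                      # leaf-to-leaf equal-cost paths
    uplink_bw = spines * uplink_gbps         # per-leaf uplink capacity (Gbps)
    edge_bw = edge_ports * edge_gbps         # per-leaf edge capacity (Gbps)
    oversub = edge_bw / uplink_bw            # edge:uplink oversubscription
    return ecmp_paths, uplink_bw, round(oversub, 2)

# 4 spines with 100G uplinks; 48 x 25G edge ports per leaf
print(fabric_summary(4, 100, 48, 25))        # (4, 400, 3.0)
```

Adding a fifth spine in this model widens ECMP to five paths and lowers the oversubscription ratio, which is exactly the "add spines for bandwidth" point above.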

Bert’s Brief

I don’t necessarily think this is a “one size fits all” scenario by any means, but I come at it from two different standpoints: design and operational. First, from a design standpoint, when it comes time to scale in the scale-up methodology, in relation to port count, you need a whole new chassis and line card(s) to get started. On the bandwidth side, as stated in the above slide, the chassis fabric determines the bandwidth scale. With scale out, if more leafs are needed (as long as you have spine ports available), you just need to add leafs. If a fixed spine is at capacity, you can physically upgrade spines one at a time in a fairly simple fashion. From a bandwidth scaling perspective, it might just be a fixed spine that you need to add; connect your leafs and away you go. From an operational standpoint, I think about support. Granted, chassis architectures can grow and improve over time, but it scares me that a failure in the actual chassis (backplane) could require you to replace the chassis and migrate line cards over to the new one. In a spine/leaf architecture leveraging fixed gear alone, you just replace the single device without needing to mess with the “backplane”. To me, the distributed modular method that the spine/leaf architecture provides makes a lot of sense from both design and operational standpoints. While not directly product driven, I appreciate Arista spending time going through this topic.

NFD26 – Experience First Networking w/ Juniper

The NFD26 series of posts are my commentary from specific presentations held for Networking Field Day 26, a virtual event presented by Tech Field Day.

Before getting started, I want to put some high-level definitions on some terms used throughout this post.

  • MistAI: Juniper’s cloud hosted platform that aggregates analytical data from a customer’s Juniper platforms (wireless, wired, SD-WAN, and data center) and presents it in forms that are meaningful and actionable.
  • Marvis: Juniper’s virtual network assistant (VNA) that allows customers to have a guide to interact with their analytics hosted in the MistAI cloud. Marvis can also be leveraged to dynamically make proactive changes to the network if the need arises (if enabled by the customer, so that Skynet implications can be avoided!).

One of my takeaways from Juniper Networks’ presentation at NFD 26 was their take on “Experience First Networking”. I took this as Juniper’s stance that user experience is top of mind and that it is difficult to provide a solid user experience without visibility. Juniper provides this visibility by crowdsourcing metrics data from across their platform line and aggregating it into the MistAI cloud for customers.

From MistAI, customers can see the health of their networks and clients by viewing the aggregated data from their Juniper platforms, in different catered formats.

For root cause analysis, MistAI breaks monitoring data down into multiple service level metrics and attaches a percentage score to each metric. From there, you can drill down into a metric to see it broken into specific classifiers and get a better idea of issues happening in your network. For example, in the following image, root cause analysis data is being viewed based on the Throughput service level metric. Once Throughput is selected, it is broken out into different classifiers, each with a percentage attached based on the amount of time that specific classifier caused throughput-related issues.

How exactly do customers get this valuable data from their Juniper platforms into MistAI and how is the different platform data aggregated, once in MistAI? I asked those questions and received responses from Juniper staff, depicted below.

Bert’s Brief

Day-to-day network operations and troubleshooting can definitely be difficult. A tough and sometimes tedious task is just the initial information gathering, then later on needing more info, having to get back in touch with the person reporting the problem, and so on. It can get frustrating for everyone involved. Having as much information as possible gathered for you at all times, and presented in formats that are meaningful and actionable, is incredibly valuable. For example, instead of having to walk someone through finding their IP or MAC address, you could find all relevant data just from the hostname of their device (which is hopefully on a sticker on the device in some cases). This can make things much easier. Juniper calls this method Experience First Networking and accomplishes it by aggregating data sent from a customer’s Juniper platforms and providing relevant data back to the customer.

Networking Field Day 26

I am honored to have been invited as a delegate to the virtual Networking Field Day 26 event from September 14th – 16th. What is Networking Field Day (NFD)? NFD is a series of events put on by Tech Field Day with the purpose of getting network-minded technical individuals in front of the sponsoring vendors that make products, so that ideas and solutions can be presented, and the invited delegates can ask questions and provide feedback. There are also other separate, similar events put on by TFD that are geared toward security, cloud, storage, and AI. For NFD 26, the sponsoring vendors include Arista, Cisco, Kentik, Juniper Networks, Riverbed, PathSolutions, and NetBeez.

How do you become a Tech Field Day delegate? The easiest thing to do is to go to the Tech Field Day website, hover over Delegates, and select Become a Field Day Delegate (or click this link to take you directly there). This will walk you through the requirements and next steps. I’ve known about TFD for some time now, but was always hesitant to apply due to a deep feeling of imposter syndrome. The delegates that I have seen selected are either seasoned IT professionals with lots of deep experience on what is being presented, or very strong up-and-comers. I haven’t felt that I really fit into either category. That all changed when we had the opportunity to interview the one and only Tom Hollingsworth on episode 57 of the Art of Network Engineering podcast, which I encourage everyone to check out. To summarize, Tom gave some very encouraging words that TFD wants a diverse group of experience and skillsets at these events. Plus, they go through an in-depth selection process where they research the potential delegates. At a high level, they want individuals that have a strong technical background, or are working toward building one, and have a means to communicate what they experience at the event (in my case, this will be my blog). So, once they go through that process, if you are selected, they WANT you to be there (sorry for the all caps, I try to keep that to a minimum) and the imposter syndrome should be out the window. That really resonated with me, and shortly after that episode recording, I was invited to join in on NFD26. Hopefully, this will give you some encouragement to check out TFD as well, and apply to be a delegate. Stay tuned for my updates on the event!

ENCOR Journey Series – Quality of Service

Bandwidth is infinite and we never need to worry about congestion, slowness, and packet drops, right? Of course, the answer is “no”. Networks transport many different kinds of data, and the different types may need to be handled in different ways to provide the best user experience during times of congestion. When I say “times of congestion”, I’m referring to periods in which the amount of traffic is greater than the amount of available bandwidth. This is where Quality of Service (QoS) becomes very important. QoS consists of a modular framework of classifying traffic, creating a policy on how to handle the different traffic classes during congestion, then applying that policy to an interface. That policy can include marking classified packets, dropping certain types of traffic while allowing others, queueing traffic until bandwidth is available, and setting aside amounts of bandwidth for different traffic classes during congestion. I think it’s important to note that the actual enforcement of the traffic policy only engages when there is congestion, or essentially, not enough bandwidth to serve the current amount of traffic. In the rest of this post, we will take a deeper dive into the modular QoS approach.

Traffic Classification

In order to provide proper service and experience to certain traffic types during congestion, you need a way to categorize, or classify, the different traffic types. From a configuration standpoint, this is where class maps come in. Class maps provide you a way to match on certain traffic. You can then call on those class maps when creating policy (policy maps) in the next step. There are many different options for traffic classification at the different layers of the OSI model. Here is a high-level look.

  • Layer 1 – interface/subinterface or port selection.
  • Layer 2 – MAC address, class of service (CoS) markings.
  • Layer 3 – IP address, IP Precedence (IPP) markings, Differentiated Services Code Point (DSCP) markings.
  • Layer 4 – TCP/UDP port.
  • Layer 7 – Application recognition with deep packet inspection – Network Based Application Recognition (NBAR).
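To illustrate how matching at different layers might be modeled, here is a small Python sketch. This is not Cisco class-map syntax; the class names, packet fields, and address range are all made up for the example. Like a real classifier, it falls through to a default class when nothing matches.

```python
# Toy traffic classifier: first matching "class map" wins (illustrative only).

CLASS_MAPS = [
    ("VOICE", lambda pkt: pkt.get("dscp") == 46),                        # L3 DSCP (EF)
    ("WEB",   lambda pkt: pkt.get("dst_port") in (80, 443)),             # L4 TCP port
    ("MGMT",  lambda pkt: pkt.get("src_ip", "").startswith("10.0.0.")),  # L3 IP
]

def classify(pkt):
    for name, match in CLASS_MAPS:
        if match(pkt):
            return name
    return "class-default"        # everything unmatched

print(classify({"dscp": 46}))             # VOICE
print(classify({"dst_port": 443}))        # WEB
print(classify({}))                       # class-default
```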

Policy Creation

Once you have the important traffic classified that needs to be handled certain ways during times of congestion, you now need to figure out what that specific handling is going to be. As stated earlier, this is where we can leverage policy maps in the configuration. At a high level, policy maps call upon the created class maps, then define policy to be enforced when needed. Here are some examples of policy actions.

  • Traffic marking – Marking is used to apply a tag to packets. These tags signify specific classified traffic and can be used by other network devices to apply policy.
    • Layer 2 – At Layer 2, these are called CoS markings and are made across 802.1Q trunk links.
    • Layer 3 – At Layer 3, packets can be marked in the type of service (ToS) field in the IP header using one of the following.
      • IP Precedence (IPP) – IPP is the legacy method, which uses three bits of the ToS field. This allows for up to eight different markings and was designed to be a direct mapping of CoS markings at Layer 2.
      • Differentiated Services Code Point (DSCP) – DSCP uses six bits of the ToS field and allows for up to sixty-four different QoS markings.
  • Policing – With policing, packets are dropped to conform to the set bandwidth limitation.
  • Shaping – With shaping, packets are queued during congestion so that they can be held and sent as bandwidth becomes available. Shaping is typically done at the edge of the network when connecting to service providers. It is best suited to TCP applications that are not sensitive to latency and jitter. UDP applications such as voice and video are sensitive to latency and jitter, so we want those packets to get through as soon as possible and not be queued.
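The IPP/DSCP numbers above fall straight out of the bit layout of the ToS byte: IPP is the top three bits, and DSCP is the top six bits of the same byte, which is why the two schemes overlap. A quick sketch:

```python
# Extract IPP (top 3 bits) and DSCP (top 6 bits) from a ToS byte.

def ipp(tos_byte):
    return tos_byte >> 5          # 3 bits -> values 0..7

def dscp(tos_byte):
    return tos_byte >> 2          # 6 bits -> values 0..63

tos = 0xB8                        # DSCP 46, Expedited Forwarding (EF)
print(ipp(tos), dscp(tos))        # 5 46
```

Because IPP occupies the top three bits of the DSCP field, a packet marked DSCP 46 (EF) looks like IPP 5 to a legacy device, which is the backward compatibility the list above alludes to.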

Policy Application

Alright, so we’ve classified our traffic and created a QoS policy, now what? Now, the policy map must be applied to an interface so that it can be enforced. In Cisco IOS, this is referred to as the service policy and is applied to the interface in either an input or output direction.
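To make the policing-versus-shaping distinction concrete, here is a minimal token-bucket policer in Python. A token bucket is a common way policing is implemented, but this class and its rate and burst values are invented for illustration; the point is that traffic exceeding the configured rate is dropped immediately rather than queued.

```python
# Illustrative token-bucket policer: excess packets are dropped, not queued.

class Policer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # refill rate in bytes/second
        self.capacity = burst_bytes       # maximum bucket depth
        self.tokens = burst_bytes         # bucket starts full
        self.last = 0.0                   # timestamp of last packet

    def conform(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                   # conforms: transmit
        return False                      # exceeds: drop

p = Policer(rate_bps=8000, burst_bytes=1500)   # 1 KB/s refill, 1500 B burst
print(p.conform(0.0, 1500))   # True: initial burst allowed
print(p.conform(0.1, 1500))   # False: only ~100 bytes refilled, so drop
```

A shaper would instead hold that second packet in a queue until enough tokens accumulated, which is why shaping adds latency and policing adds loss.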


One of the biggest things for me to remember about QoS is that policy enforcement only needs to kick in during times of congestion. If there is always enough bandwidth, then packets never need to be policed or shaped. QoS is there to ensure that the most important traffic is given preferential treatment during those heavy utilization times when there is not enough bandwidth to service all of the current traffic.

ENCOR Journey Series – Multicast Operation

In the last post, we covered the background of multicast. What it is, and why it’s leveraged in networks. In this article, we will dig deeper into multicast operation and routing. When we think in terms of unicast, we think about delivering data toward the destination. We have a destination IP address that belongs to a single device, and we are trying to get packets to it, directly. With multicast, we think in terms of forwarding data away from the source, rather than to a destination. I know that somewhat sounds like the same thing, but I think the big difference here is that with multicast, data is being sent to a grouping of devices and that grouping can be dynamic. Multicast receivers (or clients) can come and go. So, the big idea, is to just forward traffic away from the source and rely on multicast routing to properly deliver the packets to those interested receivers.

Multicast Delivery

So, how do we get packets from the senders (or sources) to the interested receivers, leveraging multicast? First, let’s go over two multicast delivery models, dense mode and sparse mode.

Dense Mode

With Dense mode, the multicast routers assume that all clients in the network want to receive the multicast traffic. This can be thought of as a “push” model because the multicast routers push the traffic to all clients in the network. Then, the routers can “prune” (or stop) traffic if they do not have any interested clients connected to them. However, that prune period is only temporary, and the multicast traffic will get flooded again. Dense mode is really only efficient in networks where multicast receivers are densely populated in all corners of the network. Because this isn’t the most efficient delivery method of multicast traffic, dense mode is not used as much these days.

Sparse Mode

Sparse mode requires that clients request to receive multicast traffic before traffic is sent down the multicast distribution tree (MDT) to them. Where dense mode is a push model, sparse mode can be thought of as a “pull” model. Clients must request to become members of specific multicast groups so that they can pull the data from the senders. Sparse mode was designed to be leveraged in environments where receivers are sparsely located throughout the network, but it also works well in networks that have densely populated receivers. This is a more efficient method for delivering multicast because the traffic is only forwarded to segments of the network where there are interested receivers.

Tree Types

We just brought up the concept of the multicast distribution tree (MDT). The multicast source can be thought of as being at the root of the tree with the receivers attached at the edge of the branches. There are two main tree types with multicast, source trees and shared trees.

Source Trees

Source trees build paths from the true source (or sender) of the multicast traffic all the way to the receivers. Source trees are leveraged in dense mode by default because the source is known by all routers; traffic is being flooded to all points in the network, so it flows directly from the source and everyone knows the identity of the source right away. Source trees are also leveraged in sparse mode after initial multicast traffic is received and the source is known, as well as with source-specific multicast (SSM). We’ll get into these shortly. Multicast routers notate source trees in the multicast routing table in (S,G) format (pronounced “S comma G”), which stands for source and group. The source is the IP address of the sender and the group is the IP address of the multicast group that is receiving the traffic.

Shared Trees

Shared trees are leveraged with sparse mode (at least initially) and build two separate paths. The first is actually a source tree from the multicast source to a central multicast delivery router called a rendezvous point (RP). Then, a shared tree is built from the RP to the receivers. It is called a shared tree because multiple groups can share that rendezvous point. Why exactly do we need shared trees with an RP? Well, multicast receivers don’t always know (or aren’t programmed to know) the actual source of the traffic they need. They may just know the group that they need to join so that they can receive traffic. In this case, the router that connects to the receivers needs to join the shared tree, or in other words, register itself with the RP to receive traffic for the specific multicast group. Multicast routers notate shared trees in (*,G) format (pronounced “star comma G”). Whenever “*” is used, you know that an RP is involved. The RP can be learned by routers in multiple ways that we aren’t going to get into in this post.

Let the Traffic Flow!

I now want to go through the high-level process of getting multicast traffic to a receiver with the sparse mode model. Remember that sparse mode leverages a pull model, so the process really starts with the clients that want to receive the multicast traffic. When someone opens an application on a device that leverages multicast to receive data, the client sends an IGMP (Internet Group Management Protocol) membership report message to the multicast group address of the group they want to join. That membership report is received by the client’s local router, which is referred to as the last hop router (LHR). It is a last hop router because it can be seen as the last hop in the path of the multicast traffic’s journey away from the source. If a source is not specified in the IGMP membership report, the LHR needs to join the shared tree. This essentially means that the LHR needs to register with the RP to receive traffic for the specific multicast group. While IGMP is used between the receivers and the LHRs, PIM (Protocol Independent Multicast) is used between the last hop routers and the RP. The LHR creates a (*,G) entry, where G is the multicast group that the client requested to join, and sends a PIM join message toward the RP. The LHR just uses the unicast routing table to find how to reach the RP. Once the interface toward the RP is determined, the router will send a PIM join message out that interface to the (All PIM Routers) multicast address. All interfaces in the path need to be configured for PIM sparse mode (PIM-SM). This process goes on from router to router until it reaches the RP. I learned something from Tim McC’s Cisco Live Barcelona session that really stuck with me. Rather than looking at this as a “tree growing”, the IGMP and PIM join process can be seen as trenching a canal that starts at the receiver and works its way up toward the RP or the source (depending on shared or source tree).
At the beginning, when the IGMP membership report (join request) reaches the last hop router, the interface that the request was received on is considered an outgoing interface, and the interface out of which the PIM join is sent toward the RP is the incoming interface. We can look at this as multicast traffic flowing from a sender into an incoming interface and out the outgoing interface toward the receiver that requested it. Once our canal is built, multicast traffic will then flow from the RP down that canal (out the outgoing interfaces of each router) until it reaches the receivers. I’ve glossed over the process of the RP building that source tree with the first hop router (FHR), but that is a necessary process as well. When multicast traffic is generated from a source, it is sent to the specified multicast address of the application and received by the FHR (the first router in the path of the multicast traffic). The FHR then works with the RP to build that source tree so multicast traffic can flow to the RP, the RP can be that central point of multicast delivery, and other routers can build shared trees with it. I think it’s important to know that while multicast has its own routing protocol and tables, the process still leverages the unicast routing table to find the path toward the RP or direct source of the traffic. The unicast routing table is also used to perform Reverse-Path Forwarding (RPF) checks when multicast traffic enters an interface. The router checks the source of the packet against the unicast routing table to make sure that the packet should have been received on that interface. This is a loop prevention mechanism.

The Switch-over

Alright, that last paragraph got a bit heavy. So, what happens in a shared tree model if the best, most efficient path to the source of the multicast traffic is not through the RP? Once the canal is built and multicast traffic is received by the last hop router, the actual source IP of the multicast sender will be in the packet headers. The LHR can then enter an (S,G) entry in the multicast routing table and build a source tree (leveraging the unicast routing table to find the best path to the source) to the multicast sender. This is the same PIM join process that was used to build that canal to the RP earlier, but now it is being built directly to the source. Once the new canal is built, the LHR can prune its (*,G) entry toward the RP and only use the (S,G) path toward the actual source.

Source-Specific Multicast (SSM)

While the shared tree concept works, and the switch-over mechanism makes the process eventually efficient, there is a chance that we can rely on the applications themselves to make this process the most efficient right away. If the applications (and the network) support it, Source-Specific Multicast (SSM) can be used. Basically, with SSM the applications are configured to request multicast traffic from specific sources (or excluding sources). On the network side, SSM is leveraged with IGMPv3, so the LHR would need to support that version. When the LHR receives the IGMPv3 membership report from the client, if a requested multicast source is specified, the router can build that (S,G) source tree directly to the source without needing to build the shared tree through the RP first. This makes the whole process more efficient.


I’ve said recently that there is a lot to unpack with multicast, and I’m only scratching the surface here, but I think learning the basics can really get you far in wrapping your head around it. For more in-depth learning, I encourage you to take a look at that Cisco Live presentation that I referenced above. There are also some good Cisco whitepapers around multicast out there as well. Feedback is definitely welcome. Please let me know where I got it wrong or could have explained something better. I am using this blog as a tool to help me better understand concepts. Thanks!

ENCOR Journey – Multicast Background

Unless otherwise specified, I think it is easy to go into a routing scenario or situation assuming that the traffic flows are unicast. What do I mean by this? Unicast traffic flows are what I will call the “typical” end user type communications in a network. One device needs to talk to another device. The source IP is the address of the originating host and the destination IP is the address of the “far end” device (for example, a server). Let’s give an example of a unicast traffic flow. A person opens up a web browser and types a website name into the address bar. As we flow down the stack (or OSI model) from Layer 7 (the Application Layer), in order for the packets to be delivered, a destination IP address needs to be known. In this case, DNS will be leveraged to translate the website name into an IP address. Once that process is complete, we now have our unicast conversation: one source IP address to one destination IP address. This is a basic unicast conversation on a network. This seems fairly straightforward, so this must be the way that all conversations should happen on a network, right? Well, not always.

Let’s bring another example in. Let’s say that an organization has a need to deliver real-time, streaming video to employees for “daily check-in” meetings. A server gets set up for this, and every day at 9:00 AM, live video is streamed to the organization, and employees can use an application on their devices to “tune in” to this stream if they wish. This could get out of hand in a hurry if we had to leverage unicast for this method of streaming video delivery. If unicast was used, there would be a separate session for every client from the server that requested the stream. This does not scale well as it could heavily consume computing resources on the server and bandwidth on the network. See the example diagram below that gives a visual for how this stream would look if unicast was leveraged for packet delivery.

Video stream delivery with unicast

In the image above, there are four total workstations in the network that request the video stream. With unicast, the video server receives four different requests and sends the same stream four different times. This method can be resource intensive and may not scale well in large environments. Thankfully, another option exists. When leveraging multicast, we can have a streaming traffic flow that looks more like this image below.

Video stream delivery with multicast
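Some back-of-the-envelope math for the two diagrams, using an invented 5 Mbps stream rate: under unicast the server's load grows linearly with the number of viewers, while under multicast it stays flat because the network replicates the single stream.

```python
# Server-side load: unicast scales with viewer count, multicast does not.

stream_mbps = 5     # hypothetical stream rate
viewers = 4         # the four workstations in the diagrams above

unicast_load = stream_mbps * viewers    # one copy sent per viewer
multicast_load = stream_mbps            # one copy, replicated by the network
print(unicast_load, multicast_load)     # 20 5
```

At four viewers the difference is modest; at four thousand viewers, the unicast server would need to push 20 Gbps while the multicast server still sends 5 Mbps.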

While unicast gives us a “one-to-one” packet delivery mechanism, multicast is leveraged to support “one-to-many” use cases. The case of the “daily check-in” meetings is definitely a scenario in which a one-to-many delivery mechanism makes sense. Well, this sounds great, let’s just “turn it on” and walk away, right!? As with most things, there is more to it than that. First off, what are the requirements to support multicast?

  • The end user applications must be programmed to request data from a multicast group address.
  • The server application must be programmed to send data to a multicast group.
  • The network must be configured to support multicast routing.

In the requirements listed above, I made reference to a multicast group address. A multicast sender (server) sends data to a multicast group address instead of to individual client unicast IP addresses. Multicast receivers (end user applications) “subscribe” to a multicast group and request to receive data that is sent to that multicast group address from the multicast sender. The routers in between facilitate the multicast registration and packet delivery. Multicast addresses live in the Class D IP space of 224.0.0.0/4, which gives a range of addresses from 224.0.0.0 to 239.255.255.255. Within the Class D space, different ranges are specified for different multicast purposes.
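Python’s standard library can confirm the Class D boundaries, which makes for a quick sanity check:

```python
import ipaddress

# The Class D / multicast block is 224.0.0.0/4.
multicast_block = ipaddress.ip_network("224.0.0.0/4")
print(multicast_block[0])    # 224.0.0.0 (first address in the range)
print(multicast_block[-1])   # 239.255.255.255 (last address in the range)

# The ipaddress module also flags multicast addresses directly.
print(ipaddress.ip_address("239.1.1.1").is_multicast)   # True
print(ipaddress.ip_address("192.0.2.10").is_multicast)  # False
```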


Multicast gives us a more efficient packet delivery mechanism for certain use cases, namely applications that provide “one-to-many” functionality, such as a server delivering streaming video to clients. When an application supports multicast and someone opens the app to request a stream, the application “subscribes” to a multicast group, which registers the device with its local router, effectively saying “Hey, please send me traffic destined to multicast IP address X.X.X.X”. The local router then registers itself with upstream routers, requesting subscription to that multicast group. Then, as the stream is sent to that multicast group address, it is forwarded down the multicast distribution tree (MDT) to the routers that have registered. Those routers keep track of the downstream interfaces from which the multicast requests were received and continue forwarding the packets toward their destinations. There is more to come, as there is a lot to unpack with multicast! Please join me in future posts.
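Before moving on, here is what that “subscribe” step looks like from an end host’s perspective, sketched with Python’s socket module. The group address and port are hypothetical placeholders, and the join itself may fail on hosts without a multicast-capable route, so it is wrapped defensively:

```python
import socket
import struct

GROUP = "239.1.1.1"  # hypothetical multicast group for the video stream
PORT = 5004          # hypothetical UDP port

# The mreq structure names the group to join (first 4 bytes) and the local
# interface to join on (second 4 bytes; 0.0.0.0 lets the OS choose).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

try:
    # This is the "subscribe": the OS sends an IGMP membership report so the
    # local router starts forwarding traffic for GROUP toward this host.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(f"Joined {GROUP}; ready to receive stream packets on UDP {PORT}")
except OSError as exc:
    print(f"Could not join {GROUP}: {exc}")
finally:
    sock.close()
```

A real receiver would then sit in a `recvfrom()` loop pulling stream packets; the point here is only that the application, not the network engineer, initiates the group membership.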

Automation Journey – Python: The Basics

The title of this post really says it all. I want to become a more automation/programmability minded person, and to kick off this journey, I decided to start by getting familiar with Python. As time permits, I am going through Al Sweigart’s “Automate the Boring Stuff with Python” book and Udemy course, and I plan to document here what I learn along the way. I have started learning the very basic programming concepts and am getting familiar with the Python shell and basic syntax. Traditionally, I’ve definitely been a “slow and steady wins the race” kind of guy. I like to take my time and try not to get overwhelmed. Luckily for me, Al’s book and course seem to cater to that. You’re going to have to bear with me during these posts; I’m really starting from square one here. I’m hoping that seeing my documented struggles and (hopefully) eventual successes might help others along the way. In the rest of this post, I want to go over my explanation/take on the basic terminology that I have learned. And yes, even though this isn’t geared toward exam study, I am making Anki flashcards. Going through these cards is really helping me soak up and retain the new concepts I’m learning. Here are some of those concepts in question/answer form.

  • What is Python? Python is a combination of a programming language along with an interpreter that reads and takes action on the code written in the Python language.
  • What are expressions? Expressions take multiple values and evaluate them down to a single value. A basic example is 2 + 1 = 3. This expression leverages a “+” operator to evaluate the two values of “2” and “1” down to a single value of “3”.
  • What is a data type? We can think of a data type as a category or classification of a value. A specific value can only belong to one data type. Three common data types in Python are:
    • integers – Whole numbers.
    • floating point numbers – Numbers with decimal points such as 17.2, 9.7, 256.3.
    • strings – Text values made up of characters (letters, numbers, symbols).
  • What is a variable? A variable is an allocation of memory in a program used to store a value, which can then be called upon later in the program. Al explained variables in a way that I really like. A variable can be thought of as a labeled box that is used to store something. In a program, that “something” can be set statically or dynamically. For example, a variable could be set as a result of someone typing some input into the program in a certain spot.
  • What is an assignment statement? This is how a value gets assigned (or stored) to a variable. An example is age = 23. In this assignment statement, a variable named age is created and assigned the integer value 23. In the assignment statement, the = sign acts as the assignment operator. A variable contains one value at a time. So, after we set age to 23, if we then type age = 42, the value 23 is overwritten by the value 42 in the age variable.
  • What is # used for? The # symbol is used in Python to write comments in your code. Essentially, anything typed on a line after the # is ignored by the Python interpreter when the code is run. Here are a couple of use cases for “commenting out” Python code.
    • For documentation purposes to explain what your code is trying to accomplish.
    • For debugging purposes to “disable” certain lines of code to see if they are causing issues.
  • What is a function? A function is a named, reusable block of code that your program can call upon to run. Python includes many pre-built functions that are ready for you to use in your own code, including print() and input().
  • What is an argument? In the point directly above, I explained at a high level what a function is along with its purpose. Well, an argument is a value that is passed to a function. Let’s take a look at a simple example with the print() function.
>>> print('Hi, I am Tim and I am a complete Python/coding beginner. Please Help!')
Hi, I am Tim and I am a complete Python/coding beginner. Please Help!

In the example above, I am passing the string value of ‘Hi, I am Tim and I am a complete Python/coding beginner. Please Help!’ to the print() function. That string value is an argument: it is the value being passed to the function.
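To tie several of these terms together, here is a short snippet exercising expressions, data types, variables, assignment, and comments. Nothing here goes beyond the concepts defined above:

```python
# An expression: the + operator evaluates 2 and 1 down to the single value 3.
total = 2 + 1
print(total)          # 3

# Three common data types.
print(type(3))        # <class 'int'>
print(type(17.2))     # <class 'float'>
print(type('hello'))  # <class 'str'>

# A variable holds one value at a time; assigning again overwrites it.
age = 23
age = 42
print(age)            # 42
```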


As you can tell, I’m learning the basics of the basics here. Also, I’m going through this on an “as time permits” basis, which probably isn’t the best approach, but building and reviewing the digital flashcards is helping me retain the knowledge, at least around the terminology, which is good. If you are learning Python as you’re reading this, then I hope this helps.

Automation Journey – The Beginning…Again

It’s time. Time that I make a conscious effort to start learning concepts and leveraging tools to dive into network automation. I have heard for a while now that a great way to become a better network engineer is to automate repetitive tasks so that you can spend time providing value elsewhere. The concept of providing value is important to me. Whenever I struggle with thoughts around not knowing where to start, I figuratively step back and tell myself to just find where I can use my skills (and build new skills) to provide value. I also want to position myself to remain relevant, and learning automation seems like a great way to go.

As you can probably gather from the title, I’ve been down this road before. I’ve dabbled for a bit and dropped it, but now it is time to break the cycle. I am currently working toward achieving the CCNP Enterprise certification and do not plan on taking time away from that goal to study automation. As much as I would love to dive into Cisco DevNet and eventually work toward the DevNet Associate certification, I do not plan to start that yet. While I work toward CCNP Enterprise, I would like to start my automation journey and find some quick wins. I feel (it might be ‘right’ and it might be ‘wrong’) that I can do that by getting familiar with Python. I have decided to start by going through Al Sweigart’s “Automate the Boring Stuff with Python” book and Udemy course. I am really excited to dig in. I plan to build upon this series and really hope that I can look back on this first post sometime in the future and be proud of my progress. I also encourage you to learn along with me. Or better yet, TEACH ME EVERYTHING YOU KNOW!

ENCOR Journey – STP Features

In the last installment of this series, I keyed in on the STP feature of PortFast. In this post, I wanted to highlight two more STP features, or “add-ons”, that I think are very important for controlling and securing the Layer 2 domains. Those two features are BPDU guard and root guard. Both features are leveraged to react to Spanning Tree BPDUs in similar ways, but in different scenarios, and for different reasons.

First, BPDU guard is leveraged on access ports configured with PortFast. BPDU guard prevents switches from being plugged into access ports and potentially causing Layer 2 loops. Remember, PortFast allows interfaces to immediately transition to the forwarding state, which is dangerous if switches are being plugged in. When BPDU guard is enabled on PortFast interfaces, if a BPDU is received on the port, the port will be placed into an err-disabled state (effectively shut down). BPDU guard can be configured either globally on all PortFast enabled interfaces, or explicitly on specific interfaces with the following commands.

  • Global
    • configure terminal
    • spanning-tree portfast bpduguard default
  • Interface Specific
    • configure terminal
    • interface interface-id
    • spanning-tree bpduguard enable
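Putting the interface-level commands together, an access port with PortFast and BPDU guard enabled might look like the sketch below (the interface name is just a placeholder):

```
configure terminal
 interface GigabitEthernet1/0/10
  switchport mode access
  spanning-tree portfast
  spanning-tree bpduguard enable
 end
```

With this in place, a rogue switch plugged into that port will trigger an err-disabled state the moment its first BPDU arrives.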

Next, root guard is a mechanism to prevent switches that should not become the root bridge from doing so. STP root guard is configured on designated ports that connect to downstream switches that should never become the root bridge. If a superior BPDU is received on a port configured with root guard, rather than the designated port transitioning to become a root port, the port is placed into a root-inconsistent (blocking) state to protect the current root bridge and to prevent a topology change from occurring; the port recovers automatically once the superior BPDUs stop arriving. Well designed Layer 2 topologies should have defined primary and secondary root bridges, and leverage root guard where necessary to protect against unnecessary topology changes caused by misconfigured or rogue switches. Root guard can be enabled on STP designated ports with the following commands.

  • configure terminal
  • interface interface-id
  • spanning-tree guard root
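As a combined sketch, enabling root guard on a downstream-facing designated port and then checking for ports blocked by it might look like this (again, the interface name is a placeholder):

```
configure terminal
 interface GigabitEthernet1/0/24
  spanning-tree guard root
 end
show spanning-tree inconsistentports
```

The `show spanning-tree inconsistentports` output lists any ports currently held in a root-inconsistent state, which is a quick way to spot a downstream switch advertising a superior BPDU.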

I have enjoyed gaining a deeper knowledge of STP, including the additional features. I see BPDU guard and root guard as protection mechanisms that help promote a stable topology and assist in the prevention of unnecessary or unwanted topology changes.