In the last post, we covered the background of multicast: what it is and why it’s leveraged in networks. In this article, we will dig deeper into multicast operation and routing. When we think in terms of unicast, we think about delivering data toward the destination. We have a destination IP address that belongs to a single device, and we are trying to get packets to it directly. With multicast, we think in terms of forwarding data away from the source rather than to a destination. I know that sounds like nearly the same thing, but the big difference is that with multicast, data is being sent to a grouping of devices, and that grouping can be dynamic. Multicast receivers (or clients) can come and go. So, the big idea is to just forward traffic away from the source and rely on multicast routing to properly deliver the packets to the interested receivers.
So, how do we get packets from the senders (or sources) to the interested receivers, leveraging multicast? First, let’s go over two multicast delivery models, dense mode and sparse mode.
With dense mode, the multicast routers assume that all clients in the network want to receive the multicast traffic. This can be thought of as a “push” model because the multicast routers push the traffic to all clients in the network. The routers can then “prune” (or stop) traffic if they do not have any interested clients connected to them. However, that prune period is only temporary, and the multicast traffic will get flooded again. Dense mode is really only efficient in networks where multicast receivers are densely populated in all corners of the network. Because this isn’t the most efficient way to deliver multicast traffic, dense mode is not used much these days.
Sparse mode requires that clients request to receive multicast traffic before traffic is sent down the multicast distribution tree (MDT) to them. Where dense mode is a push model, sparse mode can be thought of as a “pull” model. Clients must request to become members of specific multicast groups so that they can pull the data from the senders. Sparse mode was designed to be leveraged in environments where receivers are sparsely located throughout the network, but it also works well in networks that have densely populated receivers. This is a more efficient method for delivering multicast because the traffic is only forwarded to segments of the network where there are interested receivers.
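To make the push-versus-pull contrast concrete, here is a minimal sketch of the per-interface forwarding decision each model implies. The interface names and the `pruned`/`joined` sets are illustrative stand-ins for real router state, not an actual router API.

```python
# Hypothetical sketch of the forwarding decision under each delivery model.

def forward_dense(interface: str, pruned: set) -> bool:
    """Dense mode (push): flood out every interface unless the downstream
    router has pruned it (and prunes expire, so flooding resumes)."""
    return interface not in pruned

def forward_sparse(interface: str, joined: set) -> bool:
    """Sparse mode (pull): forward only where a join has been received."""
    return interface in joined

# Dense mode floods by default; sparse mode stays quiet by default.
print(forward_dense("Gi0/1", pruned=set()))
print(forward_sparse("Gi0/1", joined=set()))
```

The defaults are the whole story: dense mode must be told where *not* to send, while sparse mode must be told where *to* send.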
We just brought up the concept of the multicast distribution tree (MDT). The multicast source can be thought of as being at the root of the tree with the receivers attached at the edge of the branches. There are two main tree types with multicast, source trees and shared trees.
Source trees build paths from the true source (or sender) of the multicast traffic all the way to the receivers. Source trees are leveraged in dense mode by default because the source is known by all routers: since traffic is being flooded to all points in the network, it flows directly from the source, and every router learns the source’s identity right away. Source trees are also leveraged in sparse mode after initial multicast traffic is received and the source is known, as well as with source-specific multicast (SSM). We’ll get into these shortly. Multicast routers notate source trees in the multicast routing table in (S,G) format (pronounced “S comma G”), which stands for source and group. The source is the IP address of the sender, and the group is the IP address of the multicast group that is receiving the traffic.
Shared trees are leveraged with sparse mode (at least initially), and build two separate paths. The first is actually a source tree from the multicast source to a central multicast delivery router called a rendezvous point (RP). Then, a shared tree is built from the RP to the receivers. It is called a shared tree because multiple groups can share that rendezvous point. Why exactly do we need shared trees with an RP? Well, multicast receivers don’t always know (or aren’t programmed to know) the actual source of the traffic they need. They may just know the group that they need to join in order to receive traffic. In this case, the router that connects to the receivers needs to join the shared tree, or in other words, register itself with the RP to receive traffic for the specific multicast group. Multicast routers notate shared trees in (*,G) format (pronounced “star comma G”). Whenever “*” is used, you know that an RP is involved. The RP can be learned by routers in multiple ways that we aren’t going to get into in this post.
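A toy multicast routing table helps show the two notations side by side. The addresses, interface names, and helper functions below are made up for illustration; real routers keep far more state per entry (timers, flags, RPF neighbor, and so on).

```python
# Toy multicast routing table keyed the way routers notate entries:
# ("*", group) for shared-tree state, (source, group) for source-tree state.

mroute_table = {}

def add_shared_tree(group: str, iif: str, oil: list):
    """(*,G): traffic for this group arrives via the RP."""
    mroute_table[("*", group)] = {"iif": iif, "oil": oil}

def add_source_tree(source: str, group: str, iif: str, oil: list):
    """(S,G): traffic arrives directly from a known source."""
    mroute_table[(source, group)] = {"iif": iif, "oil": oil}

add_shared_tree("239.1.1.1", iif="Gi0/0", oil=["Gi0/1"])
add_source_tree("10.0.0.5", "239.1.1.1", iif="Gi0/2", oil=["Gi0/1"])

for (s, g), state in mroute_table.items():
    print(f"({s},{g}) iif={state['iif']} oil={state['oil']}")
```

Note that the same group can have both a (*,G) and an (S,G) entry at once, which is exactly the situation during the shared-to-source-tree transition described later.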
Let the Traffic Flow!
I now want to go through the high-level process of getting multicast traffic to a receiver with the sparse mode model. Remember that sparse mode leverages a pull model, so the process really starts with the clients that want to receive the multicast traffic. When someone opens an application on a device that leverages multicast to receive data, the client sends an IGMP (Internet Group Management Protocol) membership report message to the multicast group address of the group they want to join. That membership report is received by the client’s local router, which is referred to as the last hop router (LHR). It is a last hop router because it can be seen as the last hop in the path of the multicast traffic’s journey away from the source. If a source is not specified in the IGMP membership report, the LHR needs to join the shared tree. This essentially means that the LHR needs to register with the RP to receive traffic for the specific multicast group. While IGMP is used between the receivers and the LHRs, PIM (Protocol Independent Multicast) is used between the last hop routers and the RP. The LHR creates a (*,G) entry, where G is the multicast group that the client requested to join, and sends a PIM join message toward the RP. The LHR just uses the unicast routing table to find how to reach the RP. Once the interface toward the RP is determined, the router will send a PIM join message out that interface to the multicast address 224.0.0.13 (All PIM Routers). All interfaces in the path need to be configured for PIM sparse mode (PIM-SM). This process goes on from router to router until it reaches the RP. I learned something from Tim McC’s Cisco Live Barcelona session that really stuck with me. Rather than looking at this as a “tree growing”, the IGMP and PIM join process can be seen as trenching a canal that starts from the receiver and works its way up toward the RP or the source (depending on shared or source tree).
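The hop-by-hop join can be sketched as a simple walk up the unicast routing table. The three-router topology, interface names, and group address below are entirely hypothetical; the point is that each router installs (*,G) state and then hands the join to its own unicast next hop toward the RP.

```python
# Toy walk of the PIM join "canal digging" from LHR toward the RP.
# Each entry maps a router to its unicast next hop and outgoing
# interface toward the RP; the RP itself has no next hop.

unicast_rib = {
    "LHR": {"next_hop": "R2", "iface": "Gi0/0"},
    "R2":  {"next_hop": "RP", "iface": "Gi0/1"},
    "RP":  None,
}

def send_pim_join(start: str, group: str) -> dict:
    """Propagate a (*,G) join hop by hop until it reaches the RP,
    installing shared-tree state on every router along the path.
    The interface facing the RP becomes each router's incoming
    interface for the eventual multicast traffic."""
    state = {}
    router = start
    while unicast_rib[router] is not None:
        iface = unicast_rib[router]["iface"]
        state[router] = {("*", group): {"iif": iface}}
        router = unicast_rib[router]["next_hop"]
    state[router] = "RP reached"
    return state

tree = send_pim_join("LHR", "239.1.1.1")
print(tree)
```

Once the loop terminates at the RP, every router in the path has a (*,G) entry, and multicast traffic can flow back down the same interfaces in reverse.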
At the beginning, when the IGMP membership report (join request) reaches the last hop router, the interface that the request was received on is considered an outgoing interface, and the interface out of which the PIM join is sent toward the RP is the incoming interface. We can look at this as multicast traffic flowing from a sender into an incoming interface and out the outgoing interface toward the receiver that requested it. Once our canal is built, multicast traffic will then flow from the RP down that canal (out the outgoing interfaces of each router) until it reaches the receivers. I’ve glossed over the process of the RP building that source tree with the first hop router (FHR), but that is a necessary process as well. When multicast traffic is generated from a source, it is sent to the specified multicast address of the application and received by the FHR (the first router in the path of the multicast traffic). The FHR then works with the RP to build that source tree so multicast traffic can flow to the RP, the RP can be that central point of multicast delivery, and other routers can build shared trees with it. I think it’s important to know that while multicast has its own routing protocol and tables, the process still leverages the unicast routing table to find the path toward the RP or direct source of the traffic. The unicast routing table is also used to perform Reverse-Path Forwarding (RPF) checks when multicast traffic enters an interface. The router checks the source of the packet against the unicast routing table to make sure that the packet should have been received on that interface. This is a loop prevention mechanism.
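The RPF check itself is simple enough to sketch. The unicast table below maps source prefixes to the interface the router would use to reach them; the prefixes and interface names are made up, and a real router would do a longest-prefix match rather than the flat lookup shown here.

```python
# Minimal sketch of a Reverse-Path Forwarding (RPF) check.
import ipaddress

# Illustrative unicast table: prefix -> interface used to reach it.
unicast_table = {
    "10.0.0.0/24": "Gi0/0",
    "10.0.1.0/24": "Gi0/1",
}

def rpf_check(source_ip: str, arrival_iface: str) -> bool:
    """Pass only if the packet arrived on the same interface the router
    would use to send unicast traffic *back* toward the source."""
    src = ipaddress.ip_address(source_ip)
    for prefix, iface in unicast_table.items():
        if src in ipaddress.ip_network(prefix):
            return iface == arrival_iface
    return False  # no route back to the source: fail the check

print(rpf_check("10.0.0.5", "Gi0/0"))  # True: expected interface
print(rpf_check("10.0.0.5", "Gi0/1"))  # False: wrong interface, drop
```

A packet failing the check is dropped, which is what stops multicast traffic from circulating in a loop.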
Alright, that last paragraph got a bit heavy. So, what happens in a shared tree model if the best, most efficient path to the source of the multicast traffic is not through the RP? Once the canal is built and multicast traffic is received by the last hop router, the actual source IP of the multicast sender will be in the packet headers. The LHR can then enter an (S,G) entry in the multicast routing table and build a source tree (leveraging the unicast routing table to find the best path to the source) to the multicast sender. This is the same PIM join process that was used to build that canal to the RP earlier, but now it is being built directly to the source. Once the new canal is built, the LHR can prune its (*,G) entry toward the RP and only use the (S,G) path toward the actual source.
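That switchover can be sketched as a small state change on the LHR: once the real source address is learned from the arriving packets, install (S,G) state toward the source and prune the (*,G) entry. The addresses and interface names below are illustrative.

```python
# Toy sketch of the LHR switching from the shared tree to a source tree.

# Initial state: shared tree via the RP.
mroutes = {("*", "239.1.1.1"): {"iif": "Gi0/0"}}

def switch_to_source_tree(source: str, group: str, iif_to_source: str):
    """Install (S,G) state toward the learned source, then prune the
    (*,G) shared-tree entry that pointed toward the RP."""
    mroutes[(source, group)] = {"iif": iif_to_source}  # new canal to S
    mroutes.pop(("*", group), None)                    # prune RP path

# Source 10.0.0.5 was learned from the packet headers; the unicast
# table says the best path to it is via Gi0/2.
switch_to_source_tree("10.0.0.5", "239.1.1.1", iif_to_source="Gi0/2")
print(mroutes)
```

After the switchover, only the (S,G) entry remains, and traffic takes the best unicast path from the source rather than detouring through the RP.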
Source-Specific Multicast (SSM)
While the shared tree concept works, and the switch-over mechanism makes the process eventually efficient, we can sometimes rely on the applications themselves to make the process maximally efficient right away. If the applications (and the network) support it, Source-Specific Multicast (SSM) can be used. Basically, with SSM the applications are configured to request multicast traffic from specific sources (or to exclude specific sources). On the network side, SSM is leveraged with IGMPv3, so the LHR needs to support that version. When the LHR receives the IGMPv3 membership report from the client, and a requested multicast source is specified, the router can build that (S,G) source tree directly to the source without needing to build the shared tree through the RP first. This makes the whole process more efficient.
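The LHR's decision can be sketched as a simple branch on whether the membership report names a source. The report fields and addresses below are illustrative (232.1.1.1 falls in 232.0.0.0/8, the default SSM range), not a real IGMP packet format.

```python
# Toy LHR decision for SSM: if the IGMPv3 report names a source,
# install (S,G) state straight away and skip the RP entirely;
# otherwise fall back to (*,G) shared-tree state.

def handle_igmp_report(report: dict) -> tuple:
    group = report["group"]
    source = report.get("source")  # present only with an IGMPv3/SSM join
    if source:
        return (source, group)     # build the source tree directly
    return ("*", group)            # join the shared tree via the RP

print(handle_igmp_report({"group": "232.1.1.1", "source": "10.0.0.5"}))
print(handle_igmp_report({"group": "239.1.1.1"}))
```

With a source in the report, the router never has to touch the RP, which is why SSM skips the shared-tree detour entirely.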
I’ve said recently that there is a lot to unpack with multicast, and I’m only scratching the surface here, but I think learning the basics can really get you far in wrapping your head around multicast. For more in-depth learning, I encourage you to take a look at the Cisco Live presentation that I referenced above. There are also some good Cisco whitepapers on multicast out there as well. Feedback is definitely welcome. Please let me know where I got it wrong or could have explained something better. I am using this blog as a tool to help me better understand concepts. Thanks!