MBONE: Multicasting Tomorrow's Internet
The answer is the use of multicast traffic. Ordinary IP traffic is called unicast: one source, one destination. When a source sends unicast packets across a series of networks, as shown in Figure 10-1, the traffic travels through a variety of routing nodes to finally arrive at its destination. The routing nodes only pass the traffic along; the destination node alone grabs and keeps the packets.
Figure 10-1: Sample path of a unicast packet.
Another type of traffic is called broadcast traffic (see Figure 10-2). For a given source, the destination is everybody on the local network. Broadcast traffic is thus already a way to send a packet to more than one destination; the problem is that destinations that do not want the packet still receive and process it. Since broadcast packets are not allowed to pass through routers, they are confined to the local network and cannot be used for the MBONE. This is a fortunate feature of broadcast packets. Imagine the havoc a person could create by flooding the Internet with broadcast packets. Even a user on a PPP link would get them, and if the broadcast packets were numerous enough, the user's PPP connection would be seriously disrupted. With a sufficient number of people, any kind of link could be disrupted this way.
Multicast traffic is a better way to send packets to multiple destinations. When a source sends a packet into a complex network (see Figure 10-3), each router that supports multicast routing compares the packet's destination group against its own membership list and forwards the packet toward the members of that group. A single packet is therefore able to reach multiple hosts around the world. That is real bandwidth economy! Only the people who want the packet will receive it. There is one more notion to understand, though.
Figure 10-2: Sample path of a broadcast packet.
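Group membership is what makes this selectivity work: a host explicitly joins a multicast group, and only then does traffic for that group reach it. Here is a minimal sketch using the modern Python socket API (not one of the original MBONE tools; the group address and port are made-up examples):

```python
import socket
import struct

# A receiver joins a multicast group; joining is what puts this host on
# the "member list" that multicast routers consult before forwarding.
# Group address and port are hypothetical.
GROUP = "224.1.2.3"
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# The membership request names the group plus the local interface
# (0.0.0.0 lets the kernel choose the interface).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # some minimal environments lack multicast support

# Once joined, sock.recvfrom(2048) would deliver any packet sent to the
# group; a host that never joins simply never sees that traffic.
sock.close()
```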
Because a multicast packet does not have one precise destination, it never really arrives anywhere. Imagine that you are driving a car on a trip to no particular place, deciding along the way whom you will visit before continuing the trip. You could go on like this for a very long time; at some point you could even pass back through your point of origin, only to continue your trip over the same roads you already used. Now imagine thousands and thousands of cars on the same kind of trip. For regular people in a hurry to get somewhere, the roads would rapidly become too crowded for anybody to drive on. The solution to this dilemma is called Time-To-Live (TTL), and it can be compared to the car's gas tank.
Every time a multicast packet goes through a multicast router, the packet's TTL value is decreased by one. So, if you have a packet with a TTL value of 4, the packet only goes through four multicast routers before dying. Using a small TTL, you can actually send all the multicast traffic you want, and the traffic does not burden network links that should not be burdened with that traffic.
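On the sending side, the TTL is just a socket option. A small sketch in Python (the value 4 matches the example above; this is the standard IP_MULTICAST_TTL option, not anything MBONE-specific):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A TTL of 4 means the packet dies after crossing four multicast
# routers, so the traffic never burdens links beyond that radius.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
sock.close()
```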
To summarize in one sentence, the MBONE uses multicast traffic because no other way of sending packets to multiple destinations provides us with these two features: only the hosts that have asked for the traffic receive it, and the TTL mechanism keeps the traffic within a chosen scope.
Now, what if you are an Internet provider who wants to be a feed for customers? Or you are a commercial site that wants to provide MBONE connectivity to many areas of your business?
If your network does not support native multicast routing, you will have to set up tunnels. Native multicast routing is the ability of routers to route multicast traffic themselves. A few brands of routers (Cisco, Proteon) can already do this with special versions of the software that runs on them, and more brands will certainly follow. Native routing is the best way to save bandwidth, because traffic destined to a multicast group crosses a given link only once, whatever the number of participants.
If your network does not support native multicast routing, a program called mrouted allows you to work around this limitation, but the price to pay is more bandwidth consumption for the local network to which the machine that runs mrouted is connected.
This mrouted program runs on a variety of UNIX platforms, and it works by encapsulating multicast packets inside regular unicast packets. Two multicast routers that encapsulate multicast packets this way and talk to each other are said to communicate via a tunnel.
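As a sketch of what this looks like in practice, each endpoint of a tunnel names itself and the other end in its own mrouted.conf (the addresses here are hypothetical):

# on the machine at 10.1.1.1 (hypothetical addresses)
tunnel 10.1.1.1 10.2.2.2 metric 1 threshold 8

# on the machine at 10.2.2.2, the mirror-image line
tunnel 10.2.2.2 10.1.1.1 metric 1 threshold 8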
If your network provider does not support native multicast routing, it will send you the MBONE traffic via a tunnel, and if you choose to feed customers or sites, you will feed them via a tunnel as well. Along with tunnels, mrouted also supports physical interfaces. A physical interface is basically the Ethernet adapter on your machine. Now you're going to ask, "But what will I do with a physical interface?" I am glad you asked, because a physical interface is the main way to provide MBONE connectivity to all the machines that sit on a network. Defining a physical interface simply instructs mrouted to send multicast traffic out that interface. If you remember the car-trip example, by being sent to the physical interface, MBONE traffic becomes available to all the machines that share the physical network attached to that interface.
The mrouted program learns about the tunnels and physical interfaces it has to maintain through the use of a configuration file. This file is normally named mrouted.conf and resides in /etc (see Figure 10-4).
At the beginning of the template configuration file that comes with the program, you can see the general syntax of the phyint and tunnel lines. The syntax here is not quite up-to-date because my configuration file is pretty old; the manual pages that accompany the software describe all the possible options.
# $Id: mrouted.conf,v 1.3 1993/05/30 02:10:11 deering Exp $
#
# This is the configuration file for "mrouted", an IP multicast
# router.
# mrouted looks for it in "/etc/mrouted.conf".
#
# Command formats:
#
# phyint <local-addr> [disable] [metric <m>] [threshold <t>]
# tunnel <local-addr> <remote-addr> [srcrt] [metric <m>]
#        [threshold <t>]
#
# any phyint commands MUST precede any tunnel commands
#
phyint 123.456.78.90 metric 1 threshold 1
# Department of Algorythms
tunnel 123.456.78.90 123.456.3.4 metric 1 threshold 8
# Faculty of Harmonies
tunnel 123.456.78.90 123.456.141.16 metric 1 threshold 8
# Department of Finding New Ways to Fly
tunnel 123.456.78.90 123.456.186.10 metric 1 threshold 8
# Department of Having a Look Inside Human Bodies
tunnel 123.456.78.90 123.456.101.182 metric 1 threshold 8
# A very collaborative site
tunnel 123.456.78.90 123.456.210.1 metric 1 threshold 16
# University of The Other Side of the City
tunnel 123.456.78.90 123.457.90.90 metric 1 threshold 16
# University of Far To the East
tunnel 123.456.78.90 220.127.116.11 metric 1 threshold 32
# University of Good Ole Earth
tunnel 123.456.78.90 18.104.22.168 metric 1 threshold 32
# My Feed from which I get the MBONE Traffic
tunnel 123.456.78.90 22.214.171.124 metric 1 threshold 64

Figure 10-4: mrouted configuration file.
The threshold is a numeric value that specifies the minimum TTL a packet must have to be allowed through. For example, if you create an MBONE event with a TTL of 20, and you have the exact same configuration file I do, then all sites at the other end of my tunnels with thresholds of 8 and 16 can receive it, as well as my physical interface. The sites at the other end of my tunnels with thresholds of 32 and 64 would not be able to receive the event. Why did I organize my thresholds this way?
Figure 10-5 shows an example of an asymmetric tunnel configuration.
Figure 10-5: An asymmetric tunnel configuration.
My tunnels with thresholds of 8 are all at my own site. My tunnels with a threshold of 16 are all in the province of Quebec. My tunnels with a threshold of 32 are inter-province tunnels. Finally, my tunnel with a threshold of 64 is my MBONE feed to the United States.
This configuration gives me pretty fine granularity in choosing the scope of my MBONE events. If I want to create an event relevant to McGill people only, I would choose a TTL of 15 or less (but at least 8, so it still crosses my own-site tunnels). The event would not pass through tunnels with a threshold of 16 and thus would not flood the links to other universities. The same principle applies to events that I want to remain within the city of Montreal or within Canada. Only for events that I want to send to the whole world would I choose a TTL of 64 or more. In this way you can use the MBONE internally even if you have a slow link to the outside.
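The forwarding rule behind this scoping is simple enough to simulate. A sketch in Python (the region names and thresholds mirror the configuration described above; the rule itself is just "the TTL must meet the threshold"):

```python
# mrouted forwards a packet onto a tunnel only if the packet's TTL
# meets that tunnel's threshold
def forwards(ttl, threshold):
    return ttl >= threshold

# thresholds as organized above: own site, province, country, world
thresholds = {"own site": 8, "Quebec": 16, "inter-province": 32, "US feed": 64}

# an event sent with TTL 20 stays within the province
reach = [name for name, t in thresholds.items() if forwards(20, t)]
```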
Something else to know is that thresholds must be symmetric: if you configure a tunnel with a threshold of 8, then the other end of the tunnel must be configured with the same threshold. Otherwise, if you create an event and a user at the other end of the tunnel wants to participate, one of you will not be able to receive the other's data, which could be a bit annoying. For example, you could talk all you wanted and the other person would not hear you.
In Figure 10-5, one end has the tunnel configured with a threshold of 8 and the other end has it with a threshold of 16. Suppose you create an event with a TTL of 12. Because the TTL is 12 and the threshold on your end is 8, you will be able to talk and send video, and the other end will receive it perfectly. However, because the threshold on their end is 16, a TTL of 12 is not enough to pass through the tunnel, and they will never be able to send you any data. The fix, of course, would be for them to set their threshold to 8.
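The one-way breakage is easy to check with a tiny rule: mrouted forwards a packet onto a tunnel only if the packet's TTL meets the tunnel's threshold. A sketch in Python using the numbers from this example:

```python
# a packet crosses a tunnel only if its TTL meets that end's threshold
def forwards(ttl, threshold):
    return ttl >= threshold

EVENT_TTL = 12
mine_ok = forwards(EVENT_TTL, 8)     # your end: audio and video get through
theirs_ok = forwards(EVENT_TTL, 16)  # their end: their data never reaches you
```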
You can see from my configuration file that I have nine tunnels and one physical interface. For each packet I receive, I duplicate this packet nine times: once for the physical interface and eight times for the eight tunnels from which the packet did not come. The packet has to be encapsulated and sent out for each one of these tunnels.
Now imagine that my router does that for the whole MBONE traffic. It would receive 500 Kbps from one tunnel and send it out again to eight other tunnels and one physical interface. 500 Kbps of bandwidth is taken up receiving the traffic, and because I duplicate it nine times and all of it crosses one local network (the tunnels go through that physical network before they reach the router), this physical network endures 10 times the traffic I receive, which is 5000 Kbps (or 5 Mbps). With these kinds of numbers, saturating an Ethernet is easy; in fact, my multicast router is alone on its Ethernet. If it shared its Ethernet with other machines, the traffic from those machines would disrupt the MBONE here, and vice versa. And that figure doesn't take into account the packets sent by the sites I feed.
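The arithmetic behind that saturation figure, as a quick check (all numbers come from the paragraph above):

```python
feed_kbps = 500   # MBONE traffic arriving from the upstream tunnel
copies = 9        # 8 other tunnels + 1 physical interface
# every tunnel crosses the same physical Ethernet before reaching the
# router, so the wire carries the original feed plus all nine copies
ethernet_kbps = feed_kbps * (1 + copies)
ethernet_mbps = ethernet_kbps / 1000   # half of a 10 Mbps Ethernet
```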
A solution to this would be to put multiple Ethernet adapters on my multicast router. In fact, I do have two Ethernet adapters, but the second one is not yet in use; it is destined to feed my own department. Later, a third Ethernet adapter will be used to feed the engineering buildings. Of course, you need a corresponding number of Ethernet networks to connect those interfaces to. Because such machines are expensive and the number of ports on them is limited, running a big MBONE router can easily become costly.
Fortunately, the current situation is not yet this drastic. It is very rare that my downfeeds, taken together, participate in all the events at once. Remember that mrouted supports pruning, and that alone almost guarantees that I will never have to route all the traffic.