This is a result of an e-mail exchange I had with John Carmack, and some thinking I've done about a video over ATM experiment I helped a friend of mine run a few years ago.
Quality of service protocols worry me a lot because they seem mainly a way for the communications industry to decommoditize bandwidth so they can charge more for it. Perhaps this is cynical of me, but I believe the music industry, with its ever increasing prices for new distribution media despite lower production costs, provides a justification for my cynicism.
The only problem is, what do you do if you don't have quality of service protocols? How do you deliver your interactive video stream in a reasonable, timely fashion? How do you have a conversation over the Internet?
This is only a partial answer to that question, and I suspect quality of service protocols will have to figure in somewhere. I only hope they're designed with the goal of making it hard for telecommunications companies to charge a premium for high quality streams at the expense of having any network bandwidth left over for normal traffic.
Why does this have anything to do with the topic? Well, multiplayer 'twitch' games have very high timeliness demands. They typically require round trip delays of less than 200 milliseconds for adequate play, and preferably less than 100 milliseconds. Most people playing them, though, only have access to the Internet over a modem, and it's difficult to get round trip delays of less than 200 milliseconds over a modem.
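To see why, here's some back-of-the-envelope arithmetic in Python. All the numbers (line rate, packet size, modem processing time, backbone delay) are assumptions I've chosen to be plausible for a V.34 modem, not measurements:

```python
# Hypothetical latency budget for a modem link (illustrative numbers only).
MODEM_LINK_BPS = 28_800        # assumed V.34 modem line rate
PACKET_BYTES = 200             # assumed small game update packet, with headers

# Serialization delay: time to clock the packet onto the phone line.
serialize_ms = PACKET_BYTES * 8 / MODEM_LINK_BPS * 1000

# A round trip sends a packet each way, plus per-direction modem
# processing and a typical Internet path delay (both assumed).
modem_processing_ms = 30       # compression/error-correction buffering, each way
internet_rtt_ms = 80           # assumed backbone round trip

round_trip_ms = 2 * (serialize_ms + modem_processing_ms) + internet_rtt_ms
print(f"serialization one way: {serialize_ms:.1f} ms")
print(f"estimated round trip:  {round_trip_ms:.1f} ms")
```

Even with a generous backbone allowance, serialization and modem buffering alone push the round trip past the 200 millisecond budget.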
That is, unless you adopt a couple of strategies. To understand these strategies, a brief description of how modern modems work is in order.
Most modems nowadays do data compression and error correction using an HDLC-based protocol called LAPM. This requires the modem to packetize the data it sends to the remote modem, which in turn requires the modem to buffer data before it's sent. This increases latency, which is bad for ping times. The modem typically talks to the computer at a higher bitrate than it uses to communicate with the other modem, which acts to reduce latency somewhat. Still, in order to compress well, a modem should try to compress as much data at once as possible. This requires the modem to wait a short time before actually starting to send the data, so that if any more data is supposed to arrive, it does.
One obvious improvement is to communicate with the modem at an even higher bitrate. For legacy reasons, it's very difficult for most Windows programs to work with a modem that doesn't communicate through a serial line limited to about 115200 bps. Of course, since Microsoft writes the dialup networking drivers, they can write ones that talk to a USB modem, or one connected directly to the PCI bus. Either interface would provide bitrates 10 to 100 times that of a serial line, which should reduce that kind of latency.
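A quick illustration of what the host-to-modem hop alone can cost. The 1500-byte packet size is my assumption (a full Ethernet-sized frame), and I'm comparing the legacy serial rate against USB 1.1's 12 Mbps:

```python
# Time to move one 1500-byte packet from the host to the modem at
# different interface speeds (illustrative, assumed packet size).
PACKET_BYTES = 1500
times_ms = {}
for name, bps in [("serial 115200", 115_200),
                  ("USB 1.1 (12 Mbps)", 12_000_000)]:
    times_ms[name] = PACKET_BYTES * 8 / bps * 1000
    print(f"{name}: {times_ms[name]:.2f} ms")
```

Over the serial line the transfer takes on the order of 100 milliseconds; over USB it's about a millisecond, which is where the "10 to 100 times" figure comes from.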
We're still left with two more avenues for latency improvements. The modem generally treats PPP data as an opaque stream to be sent to the other modem; it doesn't look at the data being sent at all. If it looked at the PPP packet to determine where it ended, it would gain a valuable cue as to when to stop waiting for data to compress and start transmitting. Also, it could strip the PPP headers off, since the modem is doing its own error correction and packetization; the remote modem would add them back before passing the data to the remote computer. This would save on latency by allowing the total amount of data sent to be smaller.
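The savings from stripping the PPP framing are small but real. Here's a rough figure, assuming about 8 bytes of PPP framing overhead per packet (flag, address, control, protocol, and checksum fields) and a 33.6 kbps line rate, both of which are my assumptions:

```python
# Latency saved per packet by stripping PPP framing before the modem's
# own LAPM framing takes over (byte count and line rate are assumed).
LINE_BPS = 33_600
PPP_FRAMING_BYTES = 8   # assumed: flags, address, control, protocol, FCS

saved_bytes = PPP_FRAMING_BYTES
saved_ms = saved_bytes * 8 / LINE_BPS * 1000
print(f"{saved_bytes} bytes saved -> {saved_ms:.2f} ms per packet")
```

Under two milliseconds per packet, but for a game sending tens of small packets a second, it adds up.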
One of these solutions is solving a hardware interface problem. The other, though, has an interesting property: a lower layer protocol (HDLC) is using knowledge about an upper layer protocol to optimize performance. We already see this in PPP with header compression. It's an interesting pattern that bears some attention.
Time-sensitive streaming data is handled poorly by the Internet. If you use TCP/IP to send your stream, you end up with a lot of latency problems, especially if there's network congestion. Your OS will wait to deliver data to you until all the data preceding it has arrived. Often, this is too late to do anything with any of the data.
Non-interactive streams handle this by buffering several seconds of data before beginning to play it, so there's a big reserve to draw on while waiting out network delays and the reassembly of out-of-order packets.
Interactive streams have no such luxuries. Typically, introducing more than a few milliseconds of delay is going to be noticed. This is particularly true of 'twitch' video games, where the reaction times of the players figure prominently into gameplay.
So, what about UDP/IP? Most twitch games have switched to UDP because of its lower latency characteristics. UDP packets have no requirement to be delivered in order, so the OS will give you the packet that just arrived, even if one that was sent earlier hasn't shown up yet. UDP's unreliability, its habit of losing packets, is unimportant here. If you need to retransmit a packet, it will probably arrive too late to be useful anyway.
UDP does have a problem though. A packet can arrive out of order. You can usually just ignore the packet that was supposed to have arrived before the one you just processed, but that's not the problem. The problem is the network bandwidth wasted in getting stale data to you. If the network is congested enough to delay packets, you probably want to have it drop as many of those packets as possible to relieve congestion and make timely delivery of subsequent data more likely.
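As a sketch of the receiver side of this, here's a minimal UDP loop that tags packets with sequence numbers and ignores stale arrivals. The 4-byte big-endian sequence header is my own invention for illustration, not any game's real protocol, and the loopback self-send is just a way to demonstrate the filtering:

```python
import socket
import struct

# A game-style UDP receiver that discards packets whose sequence
# number is older than the newest one already processed.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.settimeout(0.5)

def handle(data, last_seq):
    seq, = struct.unpack_from("!I", data)
    if seq <= last_seq:
        return last_seq, None          # stale or duplicate: ignore it
    return seq, data[4:]               # fresh: hand payload to the game

# Feed ourselves packets out of order to show the filtering.
addr = sock.getsockname()
for seq, payload in [(1, b"a"), (3, b"c"), (2, b"b")]:
    sock.sendto(struct.pack("!I", seq) + payload, addr)

last = 0
accepted = []
for _ in range(3):
    data, _ = sock.recvfrom(2048)
    last, payload = handle(data, last)
    if payload is not None:
        accepted.append(payload)
print(accepted)  # packet 2 arrives after 3, so it's dropped
```

Dropping the stale packet at the receiver is easy; the point of this section is that by then the damage, wasted link bandwidth, is already done.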
This is particularly acute if your data is arriving through the thin straw of a modem connection. While that stale data was being sent, timely data might have arrived at the other end. It would've been much better to either abort the sending of the stale data as soon as the fresh data arrived, or to have avoided sending the stale data at all.
This imposes an interesting protocol layer problem. If your stream is an interactive MPEG video stream, there are certain rules about which packets you want to drop and which ones you want to keep. For an interactive video game, the rules are different. Link and network layer behavior need to depend on information that is only available to the application layer.
One solution is to define a new kind of transport. This transport would have the property of guaranteed order of delivery, but not guaranteed delivery. Each outgoing packet would be tagged with a sequence number that would be preserved if the packet was fragmented. This would allow routers to throw away packets with old sequence numbers, keeping stale data from cluttering up their queues. Better to drop a frame so that the next frame can get there on time.
A further optimization would be to promote packets with higher sequence numbers to the spots occupied by the packets with older sequence numbers. Actually, you'd have to implement this optimization, or you might end up with every packet of the stream being dropped. New packets would arrive, cause the old ones to be discarded, and then the new ones would be queued at the rear, and if they didn't make it out in time, the same thing would happen to them.
I don't have the equipment or time to test these ideas, so they are thought experiments. I'm not sure if the optimizations I outline would actually help, but my experience with networking suggests that they should.
If you have any ideas, e-mail me and I'll put your idea in here and give you credit.
Eric Hopper <email@example.com> | My homepage