Was USB invented to milk users for money?

Some time ago Compaq, Digital Equipment, IBM, Intel, Microsoft, NEC, and Northern Telecom got together and decided to develop a replacement for RS-232 and the various other peripheral busses used in personal computing. Although the USB initiative is to be lauded for making the resulting standard "Open and Royalty Free," there are several inconsistencies in the resulting product that make me wonder whether the interests of the end-user were actually well represented in the development of this standard.

Since its inception, USB has grown from a simple serial port fanout to a fully functional network technology, and is even used to some extent to transport IP data. The technology suite has hubs, adaptors, and endpoints, and is pretty much a layer 2 of its own stripe. Coming from a networking background, I took a look at USB from that perspective, and could only reach one conclusion: several much better solutions were readily available to the designers of USB. Why weren't they used, and what would we have today, had they been?

Coming down to the wire

One of my favorite home-grown platitudes used to be that no matter how complex or high-end a network technology is, it all comes down to wires (using the term loosely to include fibers) in the end. This was something many network "professionals" seemed to gloss over, showing no comprehension of cable plant issues or the importance of anything below OSI layer 2. Despite the fact that they all basically do the same thing in modern computer systems -- move bits from point X to point Y -- there is a huge variety of wiring standards and layer 1 protocols that run on them (not to mention kludged-up systems for encapsulating one protocol in another).

When the USB group got together, one of the major choices facing them was what kind of wires USB would use. First, let's take a look at what USB was designed to replace: peripheral busses. These used the following standards:
Name        Max speed    Max length   Wire
RS-232      115.2 Kbps   20 meters    5-8 strands
IEEE-1284   3 Mbps       25 feet      36 strands
MIDI        31.25 Kbps   50 feet      5-6 strands
gameport    n/a          ?            6-15 strands
LocalTalk   230.4 Kbps   1000 feet    4 strands
HIL         2 Mbps       ?            4 strands
The standard they chose, initially, was a 4-strand design that ran at up to 12Mbps, had a length limit of 5 meters, and provided power to attached devices. Obviously this length limit falls well short of many of the above protocols. So the immediate question is, why did they not choose a standard that would meet the needs of the existing user base of peripheral components?
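To put the 5 meter limit in concrete terms, here is a quick back-of-the-envelope comparison (just a throwaway sketch in Python; the figures are the ones from the table above, with feet converted to meters):

    # Maximum cable lengths from the table above, in meters
    # (feet converted at 0.3048 m/ft).
    legacy_max_length_m = {
        "RS-232":    20.0,
        "IEEE-1284": 25 * 0.3048,     # ~7.6 m
        "MIDI":      50 * 0.3048,     # ~15.2 m
        "LocalTalk": 1000 * 0.3048,   # ~304.8 m
    }
    USB_MAX_LENGTH_M = 5.0

    for name, length in legacy_max_length_m.items():
        shortfall = length / USB_MAX_LENGTH_M
        print(f"{name:>9}: {length:6.1f} m ({shortfall:.0f}x USB's 5 m limit)")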

802.3 -- the obvious choice

At the time that USB was being designed, a popular technology called 802.3 was in use. This technology allowed 10Mbps transmission over two twisted pairs of wires (4 strands) and had a 100 meter maximum length. It already supported access by multiple devices, and integrated circuits for using 802.3 were readily available from several major vendors and very cheap. 802.3 was not a closed standard, and additions to the standard to bring the speed up to 100Mbps were just hitting the market.

Why haven't you heard about 802.3? You have. The other name for it is ethernet.

So if the USB people could have taken off-the-shelf components and used them to build a faster, longer, universal bus, then why didn't they? I'll let you answer that question yourself, as you read through the rest of this article.

USB and ethernet could have been merged

Now, two things I want understood. One is that I am by no means an "ethernet fanatic." I hate the way some people seem to think ethernet is some sort of superhero among network protocols, and try to do things with ethernet, like wide-area networking, that are much better suited to other protocols.

The other thing I need to make clear is that there were two options to consider here. The USB working group could have used the physical layer of ethernet, 802.3, without "running input devices over ethernet switches." They could have done everything else exactly the same as they did, with dedicated USB hubs, special cable connectors, etc., just using the 802.3 physical layer protocol, and had they changed just that one thing, USB would not have been limited to 5 meter long cables.
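To make the layering concrete, here is a rough sketch in Python of what that hybrid might have looked like: a genuine USB 1.1 token packet (the field layout and CRC below follow the real spec, though on-the-wire bit ordering is glossed over), handed to an imaginary 802.3-style transceiver instead of USB's own 5-meter line drivers. The send_over_8023_phy function is, of course, hypothetical -- no such part was ever specified:

    # A genuine USB 1.1 token packet, pushed toward an imaginary 802.3-style
    # transceiver. The packet format and CRC are the real USB ones; the
    # "PHY" at the bottom is pure hypothesis, since this hybrid was never built.

    def usb_crc5(value: int, nbits: int) -> int:
        """USB's CRC5 (polynomial x^5 + x^2 + 1, seeded all-ones, inverted)."""
        crc = 0x1F
        for i in range(nbits):
            bit = (value >> i) & 1          # USB sends fields LSB first
            if bit ^ ((crc >> 4) & 1):
                crc = ((crc << 1) ^ 0x05) & 0x1F
            else:
                crc = (crc << 1) & 0x1F
        return crc ^ 0x1F

    def usb_token_packet(pid: int, address: int, endpoint: int) -> bytes:
        """SYNC byte, PID + complement, 7-bit address, 4-bit endpoint, CRC5."""
        fields = (address & 0x7F) | ((endpoint & 0x0F) << 7)
        token = fields | (usb_crc5(fields, 11) << 11)
        pid_byte = (pid & 0x0F) | (((~pid) & 0x0F) << 4)
        return bytes([0x80, pid_byte, token & 0xFF, token >> 8])

    def send_over_8023_phy(packet: bytes) -> None:
        """Stand-in for the 10BASE-T transceiver USB never had."""
        print("would clock out over 802.3 wiring:", packet.hex())

    IN_PID = 0x9   # host polls a device for data
    send_over_8023_phy(usb_token_packet(IN_PID, address=3, endpoint=1))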

But let's take this a step further, shall we? What if they had thought to "run input devices over ethernet switches?" In this case, all your devices, like keyboards, mice, printers, and joysticks, would have come with a standard ethernet port on the back, and to hook them to a computer, you would just plug the computer and all of the devices into a commodity mini-hub or mini-switch and a power source. Then we would not have needed special USB hubs (unless we wanted to avoid having a separate power connector). Nor would computers have needed special USB ports -- even older computers could have been equipped for "USB" using standard ethernet plug-in cards.
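For the sake of argument, here is roughly what a keyboard built this way might put on the wire -- a hypothetical sketch using Linux raw sockets. The EtherType 0x88B5 is one of the IEEE "local experimental" values, and the two-byte report layout is entirely my invention, since no such standard was ever written:

    # A hypothetical ethernet-native keyboard announcing a keypress.
    # Requires Linux and root (AF_PACKET raw sockets). The frame layout
    # after the ethernet header is made up; only the HID scancode is real.
    import socket
    import struct

    INTERFACE = "eth0"                        # adjust to taste
    HOST_MAC = bytes.fromhex("001122334455")  # the computer's MAC (example)
    ETHERTYPE_EXPERIMENTAL = 0x88B5           # IEEE local experimental value

    def keypress_frame(src_mac: bytes, scancode: int, pressed: bool) -> bytes:
        """dst MAC + src MAC + ethertype, then a made-up 2-byte key report."""
        header = struct.pack("!6s6sH", HOST_MAC, src_mac, ETHERTYPE_EXPERIMENTAL)
        return header + struct.pack("!BB", scancode, 1 if pressed else 0)

    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    sock.bind((INTERFACE, 0))
    my_mac = sock.getsockname()[4]
    sock.send(keypress_frame(my_mac, scancode=0x04, pressed=True))  # "a" down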

Now granted, there are some things ethernet is not good at, and one of them is "Quality of Service." USB has a feature called isochronous transfers that would not have worked too well if you had just plugged a USB device into your home or office LAN. But if all the devices on an ethernet switch were standard USB devices, this problem could have been avoided, because the devices would all have cooperated, in much the same way that they do when sharing a USB hub today. So even if we couldn't have mixed regular ethernet networks with USB networks, if the USB working group had taken this route, we could today be building USB networks with standard ethernet parts.
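The cooperation in question is mostly bandwidth bookkeeping. A full-speed USB host divides time into 1 millisecond frames and refuses new isochronous reservations once roughly 90% of a frame is spoken for; a switch full of cooperating "USB" devices could have enforced the same budget. Here is a rough sketch of the arithmetic in Python (the USB 1.1 figures are real; the ethernet analogy is mine):

    # Admission control of the kind a USB 1.1 host performs today. The
    # figures are real: 12 Mbps full speed, 1 ms frames, and at most 90%
    # of each frame reservable for periodic (isochronous) traffic.
    FRAME_BYTES = 12_000_000 // 8 // 1000      # 1500 bytes per 1 ms frame
    PERIODIC_BUDGET = int(FRAME_BYTES * 0.9)   # 1350 reservable bytes

    reservations = {}   # device name -> bytes reserved per frame

    def reserve(device: str, bytes_per_frame: int) -> bool:
        """Admit an isochronous stream only if the frame budget allows it."""
        if sum(reservations.values()) + bytes_per_frame > PERIODIC_BUDGET:
            return False                       # sorry, bus is spoken for
        reservations[device] = bytes_per_frame
        return True

    print(reserve("speakers", 192))    # 48 kHz 16-bit stereo audio: True
    print(reserve("webcam", 1200))     # a fat video stream on top: False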

Now think of this from the perspective of the manufacturers of motherboards. Nowadays, when you buy a motherboard, you get 1 or 2 ethernet ports built right into the board. You also usually get 2 to 10 USB ports built in. If USB had been based on ethernet to begin with, motherboard manufacturers would not have had to support these two different circuits, and the economy of scale for 802.3 ports would have been much better.

Moreover, users could have decided whether to use an ethernet port as a USB port, or for networking, making the motherboard manufacturer's product much more flexible. In addition, the ease of expanding a "USB" network to cover entire buildings would have opened opportunities for much cheaper solutions for businesses in need of KVM switches, security cameras, and a host of other applications.

The very existence of USB-based DSL and cable modems is a prime example of the perversity that has resulted from the way USB was pushed onto the marketplace. USB was never meant to be a networking technology, but because it was ubiquitous, modem vendors used it as an alternative to requiring an add-on ethernet card, back before ethernet ports started to become standard equipment on motherboards.

Plugging away

The USB folks also developed special plugs for their devices. Let us compare what they came up with to ethernet plugs.

Ethernet plugs, at the time USB was invented, were (and still are) almost always 8-connector RJ-45 plugs. Four of the connectors were not used, which left them free for power. These plugs were the same on each end, and except for cables between two hubs/switches, all the cables were the same. Anyone can make these cables with a pair of $35 crimpers.
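For reference, here is the actual 10BASE-T (and 100BASE-TX) pin usage, sketched as a small Python table, showing exactly which pins a wiring scheme like this would have had free:

    # Real 10BASE-T / 100BASE-TX pin usage on an RJ-45 plug. Two of the
    # four pairs carry data; the other four pins are the spares later put
    # to work by 802.3af Power over Ethernet.
    RJ45_PINS = {
        1: "TX+ (data)",
        2: "TX- (data)",
        3: "RX+ (data)",
        4: "spare (paired with 5)",
        5: "spare (paired with 4)",
        6: "RX- (data)",
        7: "spare (paired with 8)",
        8: "spare (paired with 7)",
    }

    spares = [pin for pin, use in RJ45_PINS.items() if use.startswith("spare")]
    print("pins free for power delivery:", spares)   # [4, 5, 7, 8]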

USB decided to have two different plugs. This meant that there were three possible combinations of cables: A to A, A to B, and B to B. These cables and connectors were new, so everyone buying USB devices had to buy special USB cables for their devices. Homemade USB cables are also harder to make, because they use shielded wire and are very sensitive to electromagnetic interference.

An ethernet-based version of USB would have been better for consumers, because they could have used old ethernet cables for most things. Most ethernet cables contain all 8 wires, so power could have been delivered to devices over the spare pairs. Now granted, the Power over Ethernet standard, 802.3af, had not been completed at the time USB was invented, but since this was a standards body, they could have tackled that problem instead, and modern ethernet switches would have had PoE, in the form of "USB support," much sooner; as it is, PoE is only hitting the market this year.
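The power arithmetic is worth a quick look. Here is a worked comparison in Python (the figures are the published ones: USB supplies 5 V at up to 500 mA per port, while 802.3af guarantees about 12.95 W at the powered device):

    # Per-port power budgets, using published figures: USB allows 5 V at
    # up to 500 mA; 802.3af guarantees 12.95 W delivered to the device.
    USB_VOLTS, USB_MAX_AMPS = 5.0, 0.5
    usb_watts = USB_VOLTS * USB_MAX_AMPS      # 2.5 W per port

    POE_PD_WATTS = 12.95                      # 802.3af, at the powered device

    print(f"USB port budget:   {usb_watts:.2f} W")
    print(f"802.3af budget:    {POE_PD_WATTS:.2f} W "
          f"({POE_PD_WATTS / usb_watts:.1f}x USB)")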

So what is the deal?

I said I'd let you make up your mind as to why the USB group did not choose ethernet, an obvious candidate, for USB... well, just in case you need to be hit over the head with it, it might have something to do with your wallet.

By introducing a new protocol the USB group created a new industry for themselves, selling components, certifications, and, lest we forget, all those $15 cables. Was this just a SNAFU of the "design by committee" system, or was it a way to bilk the consumer? You decide.

A future for 802.3af?

The Power over Ethernet standard is currently hitting the marketplace in many products within the private consumer's price range, including minihubs, IP phones, IP video cameras, wireless access points, and various building automation systems like RFID readers, door locks, and NTP-synchronised wall clocks (see streetprices and poweroverethernet.com to keep track of what's available).

Whether we will see input devices move to ethernet in the future is anyone's guess. With the hold USB has over the market, it is doubtful that any major manufacturer will see a profit in developing such products. In addition, there is no plug-and-play standard defined for such a hypothetical ethernet-based peripheral bus -- all the work in that area got sunk into USB. Things do not look good, but as gadgets that use PoE make their way into the marketplace, they could well diversify to include all the types of devices currently found with USB interfaces. Also, since more than one ethernet port is becoming standard on motherboards, the playing field has started to level out.

But for now, USB is still a parasitic diversion to the entire industry.

Copyright (c) October 2004 Brian S. Julin