Talk:ARCNET

Comment

I'm getting a weird diff showing up for this page. The "last" version between my edit (Oct 6, 3.33 UTC) and Maury's has a few changes incorporated which I did not make. I presume they were Maury's, but anyway, a little strange. I've refreshed pages etc. to see if there was some sort of caching issue. Aldie 03:46 Oct 6, 2002 (UTC)

Does this show only the changes you made? It looks like your edit was based on the earliest version of the article, so you undid Maury's subsequent changes. Did you by any chance look at the history list, accidentally click on the last one in the list (which is the earliest chronologically), and then hit edit from there, without noticing the warning that you would overwrite later changes? It's also possible that you loaded the edit page when that first version was current, and it somehow got cached by your browser or an intermediate caching server and remained when you came back to edit it again; so you could have been presented with the old text in your edit box and no warning about old versions. --Brion 05:17 Oct 6, 2002 (UTC)
Based on the evidence, I'd have to agree with your first point. However, I don't consciously recall doing that. I'll put this down as a braino of mine unless it happens again. I'll fix up the article now. Incidentally, for the last week I've been on Australia's Optusnet ISP, so if any similar situation arises with transparent caching maybe there is an ISP-specific issue... but I doubt it. And thanks for your investigation. Aldie 06:19 Oct 6, 2002 (UTC)

I would have thought the type of Ethernet used in modern manufacturing (process control) would be optical, to avoid electrical interference, and by extension be switched and full duplex. This type of Ethernet is exceptionally reliable, not to mention fast, cheap, and able to go long distances. Perhaps the article is referring to Ethernet on shared media or something? Aldie 18:21 Dec 21, 2002 (UTC)


The seventh paragraph confuses me, especially with this sentence:

Since the token is modified, no one else will "see" it.

This sentence looks like it belongs after the one it precedes; but even then, I do not understand why the nodes between the sender and receiver would not see the message.

The article hints at more features of ARCNET than it details. Perhaps an ARCNET expert could elaborate on 'process control', for example; or should the people who care about ARCNET already know what that means?

Detailed ARCnet info here

(I was at Datapoint R&D before and while ARCnet was developed. Some of this may be wrong but it is more right than on the main page. -- Not having used Wikipedia before, I'm limiting my input to this page. -- nuff noise said.)


ARCnet is a star-bus networking system. The physical connection is via an extended star: think of 10BaseT Ethernet with lots of hubs for the physical connection. The communication over the net, though, was token bus. The original concept of ARCnet was by Harry Pyle, Victor Poor, Gary Asbell, William Morgan (though others may disagree), and John Murphy. John designed and implemented the first system as a two-board device that sat on the Datapoint processor's I/O bus. A later version was implemented inside the Datapoint 3800 (which I was involved in). Datapoint decided to keep ARCnet proprietary, and it was only the need (for obvious economic reasons) for a chip version that led to it becoming available to others. SMC (Standard Microsystems) would not build the chip unless it could make it available.

ARCnet was designed from the start to allow multiple Datapoint machines to be networked together. They were sharing hard drives (not floppies) and printers. Datapoint's Operating System (DOS) was extended by Gordon Peterson, using a mount command to add a remote drive as if it were locally connected. They operated in a peer-to-peer fashion so that any machine could share its hard drives with any other machine. In fact, once it was working, with one exception, all computers in R&D were diskless, sharing space on drives in a server room. Diskless workstations were made possible by a rework of the system ROM (think BIOS) to boot off the network if there was no local disk drive.

There are a maximum of 255 IDs / nodes in an ARCnet, built in an extended star configuration. To have more, a computer needs two ARCnet boxes, allowing it to sit on two networks (each of 255 nodes max) and forward packets between them.

The ARCnet protocol does not use massaged tokens (that, I think, is IBM's Token Ring). ARCnet does not add small delays to a message (though this is arguable); Ethernet also adds delays, in its case 48 or more bits/bytes preceding each and every message for carrier sense / collision detection. The actual basic ARCnet protocol follows.

Given a machine with ID 'm', it knows which is the next logical machine ('n') in the token ring ('m' < 'n', base 256). It is not necessarily the next physical machine; in fact, the deterministic algorithms developed by John Murphy assumed each machine was the worst possible distance away from its neighbours (through a number of hubs with maximum cable length between them). Let us assume 'm' wishes to send a packet to 'o'. Then, assuming 'm' has just received the token, the messages are as follows:

  • 'm' sends 'o' a 'Can you receive a packet?' request.
  • 'o' sends back a 'Yes I can' - ACK.
If it sends back a NACK, the next few steps are of course skipped.
  • 'm' sends the data packet to 'o'
  • 'o' acknowledges receipt of the packet
If there was a CRC error in transmission it replies as such.
  • 'm' now sends the token to 'n', giving it a chance to send a packet.

This continues, with each node in turn acting as 'm', until the token finally returns to our initial 'm'.

Only one packet to one other machine can be sent each time the token is received. Status bits in the sender and receiver are updated with send-state information. Error conditions note whether receiver 'o' exists and whether the packet had CRC errors in transmission.
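
For readers who prefer code, here is a minimal simulation sketch of the exchange described above. The five-step sequence (enquiry, ACK/NAK, data, receipt, token pass) and the one-packet-per-token rule come from the description; the class and method names are invented for this sketch and do not correspond to any real ARCnet driver.

```python
# A minimal, illustrative sketch of the ARCnet token-pass exchange above.
# Everything about how the "ring" is modelled is an assumption of the sketch.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.outbox = []      # queued (dest_id, data); at most one sent per token
        self.inbox = []
        self.status = None    # like the status bits updated after a send

    def can_receive(self):
        # "Can you receive a packet?" -> ACK if a buffer is free, else NAK
        return len(self.inbox) < 1

    def on_token(self, ring):
        # Holding the token: send at most one packet, then the token moves on.
        if self.outbox:
            dest_id, data = self.outbox.pop(0)
            dest = ring.get(dest_id)
            if dest is None:
                self.status = "error: no such node"
            elif not dest.can_receive():
                self.status = "NAK: receiver busy"   # data steps are skipped
            else:
                dest.inbox.append(data)              # the data packet
                self.status = "ACK: delivered"       # receipt acknowledged
        # (CRC errors are not modelled here; a real node would report them.)

def rotate_token(ring):
    """One full rotation: each node, in ID order, gets one chance to send."""
    for node_id in sorted(ring):
        ring[node_id].on_token(ring)

# Example: three nodes, node 1 has a packet queued for node 3.
ring = {i: Node(i) for i in (1, 2, 3)}
ring[1].outbox.append((3, b"hello"))
rotate_token(ring)
print(ring[1].status, ring[3].inbox)   # ACK: delivered [b'hello']
```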

If an ARCnet box is set to address 0 (zero), it can see all packets but is not allowed to send or to be in the token loop. This was useful in diagnostics and performance analysis.

ARCnet was deterministic because the worst-case delay between a processor requesting the transmission of a packet and its being received was known: the theoretical worst-case time for the token to go completely around the ring. That, I might add, was much, much longer than any actual worst-case time, because no real system could place nodes at worst-case distances from each other with every node requesting packet transmission all the time.
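
A back-of-the-envelope version of that argument, with purely illustrative timing numbers (the per-hop and per-packet times below are assumptions for the sketch, not measured Datapoint figures):

```python
# Worst-case token rotation: assume every one of the 255 nodes wants to send a
# maximum-size packet across a worst-case path. All timing values below are
# illustrative placeholders, not real ARCnet figures.
NODES = 255
TOKEN_PASS_US = 30        # assumed cost of passing the token one hop (microseconds)
FULL_EXCHANGE_US = 2000   # assumed enquiry + ACK + max packet + ACK (microseconds)

worst_case_us = NODES * (TOKEN_PASS_US + FULL_EXCHANGE_US)
print(f"upper bound on the wait for the token: {worst_case_us / 1000:.1f} ms")
# With these guesses the bound is roughly half a second; the point is that it
# is a fixed, computable number, which CSMA/CD cannot offer.
```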

Ethernet, on the other hand, because it is CSMA/CD, actually has decreasing throughput as it approaches worst-case conditions. This is why multiple small networks connected by routers are so necessary for it to operate with reasonable performance.


BTW: a few years after ARCnet was first sold, a system using high-powered infrared diode communication was developed, allowing ARCnet to communicate over large line-of-sight distances (e.g. between buildings in NY) where an underground connection would have been too long. Four separate ARCnet channels could be combined into the optical signal. There was also a voice channel for technician alignment of the transmitters/receivers.

(If you wish to discuss this, contact me soon, before this email address is no more: HenriS AT ix DOT netcom DOT com)

Ethernet - a proprietary protocol?

I think it is misleading to imply that Ethernet was a proprietary protocol until the mid 80s. The DIX consortium (Digital, Intel and Xerox) published the Ethernet standard on September 30, 1980 (The Blue Book). The intention was to open up implementation to other companies. This work was taken further by the IEEE and the ISO. It was never under the control of 3Com in the way that IBM controlled the Token Ring standard.

In the late 70s, IBM was by far the largest computer company in the world and its proprietary networking architecture, SNA, was viewed as a grave threat to the smaller companies - the BUNCH (Burroughs, Univac, NCR, Control Data and Honeywell) in the USA and Bull, ICL, Olivetti, Philips and Siemens in Europe. Because IBM controlled the development of SNA, it was always in the best position to exploit new features or to tailor the standard to IBM systems. To counter this, a rival network architecture, Open Systems Interconnection (OSI), was developed as a series of International Standards, starting with the OSI Reference Model in 1979.

Ethernet was adopted by this group as an open standard for LANs that was not controlled by a single company. Implementation was rapid - I was involved in early ICL developments from 1981. The number of companies involved in developing Ethernet created a large market that encouraged innovation and led to ever reducing costs. This is what effectively killed off ARCNET and Token Ring, IBM's attempt to dominate LANs. John of Groats 11:10, 27 May 2006 (UTC)

Theoretical and Practical Maximum Performance of Ethernet

I'm not buying the "Ethernet can collapse at high loads" and "maximum practical load was 40-60%" arguments on this page; I think it's perpetuating the "Ethernet Load Myth" that probably deserves its own page on Wikipedia. The paper that goes into this in detail is Boggs, David R. and Mogul, Jeffrey C. and Kent, Christopher A. (1988), "Measured capacity of an Ethernet: myths and reality", which is referenced from the Ethernet page. —The preceding unsigned comment was added by 125.100.126.202 (talk) 05:11, 26 March 2007 (UTC).

I can't read the paper you've referenced as I don't have an ID on the ACM portal and I'm not going to give up my email address to get one, but it doesn't matter, as I am very familiar with this debate and agree that by 1988 10 Mbit 'flat' Ethernet outperformed ARCnet, even on a heavily loaded network. The last big flat 10 Mbit Ethernet I worked on was in the early-to-mid '90s; it was 900 nodes. A 20-30% collision rate was a fact of life, but even so, actual usable bandwidth was far greater than ARCnet's 2.5 Mbit. Of course, ARCnet wasn't really even comparable in an environment like that, as you couldn't build a flat ARCnet network with more than 254 nodes. 74s181 10:55, 26 March 2007 (UTC)
Anyway, the thing that allowed ARCnet to outperform Ethernet in the early days (prior to 1985 or so) was not the collisions themselves, but rather the latency introduced by the time it took for the slower processors and less intelligent NICs of the day to recover from collisions. By 1988 the 80386 was replacing 80286s in servers, the 80286 had replaced most 8088 desktops, most 3c501s had been replaced, and performance was no longer a justification for ARCnet. Reliability continued to be a major ARCnet advantage until 10BaseT came into common use around 1990, eliminating all the nasty issues you got from putting a T connector on the back of a desktop PC. Even after that, ARCnet continued for a while: slower, but fast enough, and much cheaper. That didn't last, though. Increased demand, manufacturing quantities, and VLSI integration eventually led to the $5.00 commodity 10/100 NIC of today: Ethernet so cheap that server-class machines have two gigabit NICs built in whether you need them or not. 74s181 10:55, 26 March 2007 (UTC)
I updated the article about a month ago to more accurately reflect the historical 2.5 Mbit ARCnet vs. 10 Mbit Ethernet performance issues under heavy load. 74s181 10:55, 26 March 2007 (UTC)
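
To put rough numbers on the comparison above: even under a deliberately pessimistic model that writes off every collided attempt as wholly wasted bandwidth (which, as the Boggs/Mogul/Kent paper cited above argues, real CSMA/CD does not), 10 Mbit Ethernet still clears ARCnet's 2.5 Mbit by a wide margin. The percentages are the ones quoted in the comment; the model itself is only an illustration.

```python
# Crude illustration: treat the quoted 20-30% collision rate as pure loss
# (an overestimate; collisions abort early and are retried) and compare the
# remainder against ARCnet's 2.5 Mbit.
ETHERNET_MBIT = 10.0
ARCNET_MBIT = 2.5
for collision_rate in (0.20, 0.30):
    usable = ETHERNET_MBIT * (1 - collision_rate)
    print(f"{collision_rate:.0%} collisions -> ~{usable:.1f} Mbit usable, "
          f"vs {ARCNET_MBIT} Mbit ARCnet")
```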

Needing clarification

The allusion to Cat 3 is misleading. Most people assume RJ-45 4-pair (or 3-pair) connectors, but "Category 3" refers to data transmission quality, equivalent to high-quality telephone wiring. One point in ARCnet's popularity is that the network cards usually had two RJ-11 (single/dual-line telephone) connectors, which could be daisy-chained or could use building telephone wiring with an active or passive hub in the telephone wiring closet. This spared the expense, and the permits, of running plenum wiring in leased space. I had a client with twisted-pair ARCnet connecting CNC machinery, the electrical noise in the shop precluding Ethernet. Another oversight may be not noting that ARCnet was generally restricted to the NetBEUI protocol. CNEs have told me IPX and even IP could run on it, but that has not been my experience. There was a product called Promise LAN or Moses LAN that used ARCnet chips and protocols but had a higher clock.

As to speed (insert 60 mph highway at rush hour story): I haven't seen a larger ARCnet installation than those factories, but the machinery requires real-time controller response and there were no spoiled parts due to latency. The problem with Ethernet is that, unlike a token-passing network where the transactions continue (each machine sees slower speed but the network stays at maximum throughput), Ethernet resends cause the network to slow down, which makes each machine's network speed slow even more. When I was at a software support call center with several hundred techs, periodically Ethernet would break down to less than 1 Mbit on 10Base and less than 5 Mbit on 100BaseT. On the 10Base we usually had IP/IPX/NetBEUI running (many Unix systems were still IPX); 100Base didn't overload as often.

Shjacks45 (talk) 15:44, 9 December 2007 (UTC)

cost

The article has much discussion of cost comparisons vs. thin Ethernet, some of which I suspect is WP:OR. However, there is none on the cost of cable. The RG/58 cable used by thin Ethernet is fairly common in radio and electronics lab work, and so is often reasonably priced. (Plenum-rated cable is expensive in all cases.) As far as I know (without doing any WP:OR), RG/62 is more expensive. That could easily make up the difference in other costs. Gah4 (talk) 01:19, 6 October 2019 (UTC)