Desmon wrote:
@redrumloa
If the project really is that critical, then replacing all the Arcnet cable with thin ethernet coax is gunna get to be more costly than replacing it with Cat5 AND supplying ethernet cards to all the machines.
Agreed, only again noting the special case of those 'unbaluns.' Those are pretty simple (i.e., shouldn't be failure-prone) gadgets, so if the coax is in good shape, it might still be economical - if you can just find a source for enough, with enough spares just-in-case, which seems to be the hard part.
Then there's the matter of what you put at the 'top' of the network- AFAIK, industrial-strength switches can (still) be had with all 10base2 cards (though this is more of a "call Cisco" issue than a "go to CompUSA" one) - and at the edges, going with the dirt-cheap Arks I mentioned will only give shared, not switched, ethernet among their ports at each drop, so it depends what each drop will be serving. You could always plug a cheapo 10/100 switch into each hub (or just use media converters, but the Ark hubs make cheaper media converters than the media converters), if it makes sense (e.g., if most drops serve one office, and that office's computers will mostly talk to themselves anyway)...
Running a spreadsheet on costs of hardware vs. costs of labor to pull cable would be a good idea, if this is an option.
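To make that spreadsheet idea concrete, here's a minimal break-even sketch - every price in it is a made-up placeholder (the `unbalun_price`, `labor_per_drop`, etc. numbers are not real quotes), so swap in actual vendor and contractor figures before deciding anything:

```python
# Rough cost comparison: reuse the existing coax vs. pull new Cat5.
# ALL prices below are hypothetical placeholders, not real quotes.
drops = 50                      # number of network drops (assumed)

# Option A: keep the coax, buy unbaluns (plus a spares stock)
unbalun_price = 15.0            # per unit, hypothetical
spares = 10                     # just-in-case stock
coax_cost = (drops + spares) * unbalun_price

# Option B: pull Cat5 and supply ethernet cards
cable_per_drop = 20.0           # materials per drop, hypothetical
nic_price = 25.0                # per machine, hypothetical
labor_per_drop = 60.0           # paying someone to fish cable, hypothetical
cat5_cost = drops * (cable_per_drop + nic_price + labor_per_drop)

print(f"coax reuse: ${coax_cost:.2f}")
print(f"cat5 pull:  ${cat5_cost:.2f}")
```

The point of writing it out is that labor usually dominates the Cat5 column, so the comparison swings on how painful the cable runs are, not on the hardware prices.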
One other advantage is that the Cat5 can be run to a central switch (or hub if you prefer) without all the hassles of a peer 2 peer network.
ArcNet is a star topology, even on coax, so all his runs are heading back to central point(s). However, you'll never break 10mbit on coax with ethernet, so it's also a question of what the facility is meant to be used for, meant to be future-proofed against, etc. (You could, of course, tie various 'workgroup'ish areas at the switch end, linking various 10mbit networks with 100mbit switches, routers, etc...)
If they survived on ArcNet until now, one has to wonder about their bandwidth needs. OTOH, any 'normal' contractor would just come in, shrug, and send guys around fishing CAT5 or fiber (MCSEs because "That's what you need for computers!" and UNIX guys because they know you'll need the bandwidth *someday*), so it's all about the cost-benefit work.
Questions to ask yourself are, obviously:
- Is this an organizational network (e.g., workgroups, backups, etc.), or "just" a way to get internet to a lot of people?
- How cheap is wireless going to be / is it an option for the facility / 5 years down the line?
- Any RFI issues (welding machines, medical equipment) that make fiber look really good?
- How long *are* these runs?
- Does anyone want centralized backups / massively centralized file storage / anything else that'll saturate even 100mbit for minutes at a time?
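That last point - saturating even 100mbit for minutes at a time - is easy to sanity-check with back-of-envelope arithmetic. The backup size below is hypothetical; the math just converts gigabytes to wire-time, ignoring protocol overhead:

```python
# Wire-time for a bulk transfer, ignoring protocol overhead.
def transfer_minutes(gigabytes: float, link_mbit: float) -> float:
    bits = gigabytes * 8 * 1000**3     # decimal GB -> bits
    return bits / (link_mbit * 1e6) / 60

backup_gb = 20  # hypothetical nightly backup size
for link in (10, 100):
    mins = transfer_minutes(backup_gb, link)
    print(f"{backup_gb} GB over {link} Mbit/s: {mins:.1f} min")
```

Even a modest 20 GB backup monopolizes a 100mbit link for close to half an hour, and a 10mbit segment for hours - which is exactly why the centralized-backup question matters before settling on a topology.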
Stupid thought- If these are *really* long (>200m, is it?) runs, maybe some of the cable-modem DOCSIS stuff is something to look into... (Google for "CMTS" and similar.) You'd (still) need (different - 93->75 ohm) unbaluns in that case, a handy knowledge of RF/CATV to install it, and appropriate modems for the edges, but it might be a way to get 'modern' speeds without changing the whole cable plant. May also survive ArcNet 'passive hubs.'
Sorry for the disorganized response; just trying to raise even more issues to worry about. Just hanging around the building my father's office was in, I saw all sorts of goofy stuff - one floor with some law offices wired for 100mbit, at much cost, only to dangle a Linksys router off of it... and down the stairs, the booking/IT department for a travel agency, filling the dumpster with cable every dozen months as they went from generic UTP to CAT3, CAT3 to CAT5, CAT5 to 5e... The moral is, plan! (And if recycling the coax is a fraction of the cost of rewiring the place, you could always do it just to see what the usage will really look like, before committing to any new topologies.)
Edit: Oh yeah, and @redrumloa- No problem, and good luck!
