On 22/05/2020 15:20, The Natural Philosopher wrote:
75 ohm coax is better in terms of attenuation characteristics but not so
good in terms of power handling.
While it isn't used, 30 ohm coax would be better for power handling but
offers poor attenuation performance.
50 ohm is a compromise between power handling and attenuation.
If I remember the history, 75 ohm was the early choice (from dipoles -
it is near to the 72 ohm of the feed point Z of ideal dipole in free
space) but, in about 1930, someone worked out the power
handling/attenuation trade off and 50 ohm became a new standard, or at
least an alternative.
It also stopped people trying to be smart arses and using TV cable instead of
the official spec yellow cable. The point here is that the DC resistance is
also important, since it is a factor in collision detection on the shared cable.
AIUI, the bit waveforms on the cable were designed so that no bit pattern
would change the average DC voltage level (can't remember which encoding has
that property - perhaps many do). But, if a collision was happening, this no
longer held true, so collision detection was done by checking the DC voltage
on the wire. The signal might have to propagate 500 metres, so the DC
resistance shouldn't be too high, else the collision detection would no longer work.
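To illustrate that DC-balance property, here's a toy sketch (the 0 to -2 V levels and the simple averaging are illustrative assumptions, not the exact 10BASE5 figures): Manchester coding gives every bit cell equal time at each level, so one transmitter's DC average is the same for any data pattern, while two colliding transmitters shift it.

```python
def manchester(bits, lo=-2.0, hi=0.0):
    """Manchester-encode bits as two half-cell voltage levels per bit.
    Levels are illustrative (coax Ethernet drives roughly 0 to -2 V)."""
    out = []
    for b in bits:
        out.extend((lo, hi) if b else (hi, lo))
    return out

def dc_level(samples):
    """Average voltage on the cable over the sampled window."""
    return sum(samples) / len(samples)

# One transmitter: the DC average is -1 V regardless of data pattern.
one = manchester([1, 0, 1, 1, 0, 0, 1, 0])
assert dc_level(one) == -1.0

# A collision: the two transmitters' signals add on the shared coax,
# shifting the DC level to -2 V -- which a threshold detector can catch.
other = manchester([0, 1, 1, 0, 1, 0, 0, 1])
collided = [a + b for a, b in zip(one, other)]
assert dc_level(collided) == -2.0
```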
We saw this at SLAC, with Thinnet, the second iteration of coax which was then
superseded by twisted pair. It was still 50 ohm, but was thinner so the length
allowance was less. Some smart arse physicists didn't really understand how
Ethernet worked, so they decided to lengthen one segment, the one in their
building, using their own 50 ohm cable - the stuff they used in their
experiment electronics - quite a long length of it. Result: collision rate
went UP and therefore overall data rate went DOWN. They got a bollocking for that.
They also decided they didn't like that the Thinnet cable had T connectors in
it which connected directly to the rear of the computer. They thought it would
be "tidier" to add a length of cable between the T connector and the one on
the computer. For impedance reasons I no longer understand, this caused the
cable to present as 25 ohm either at the computer or the T, can't remember
now. Thus, many signal reflections which further degraded their segment.
Cue them calling the Operations Group, which investigated and junked their
add-ons (and bollocked them). Cue amusement, later, elsewhere in the Computer Center.
That article doesn't actually say very much; it's bare-bones, but it does say
"On a shared, electrical bus such as 10BASE5 or 10BASE2, collisions can be
detected by comparing transmitted data with received data or by recognizing a
higher than normal signal amplitude on the bus." Of these two detection
methods, the former would be done by the transmitter, but the latter would be
the method used by listening stations.
Meanwhile I dug out this long article. In 1983 a colleague attended a DECUS
meeting, and taped a talk given by Rich Seifert on "Engineering Tradeoffs in
Ethernet Configurations or How to Violate the Ethernet Spec and Hopefully Get
Away with It". This is a transcription of the talk, which I got a copy of and
include below. If you scroll down it 180 lines or so you'll see the bit about collision detection.
To me this is interesting as a historical note about how we did LANs in the
early 80s, and the state of electronics nearly 40 years ago. It's best read
using a fixed-width font.
=============
The following is an approximately literal transcription of Rich Seifert's
Fall 83 Decus talk on rationale behind the IEEE 802 Ethernet spec. The
headings are based on the slides. The title of the talk was Engineering
Tradeoffs in Ethernet Configurations or How to Violate the Ethernet Spec
and Hopefully Get Away with It.
The assumption is that you know something about the spec. Now we'll see
how you can change some of the parameters.
When you design one of these there are lots of options, in fact, too
many. You have to choose how long, how fast, how many, how close
together, how far apart, what performance, how many users, how many
buildings, how long to ship the product, how much does it cost? You have
to make trade-offs among network length, speed, performance, cost,
satisfying your boss, shipping the product on time. The important thing
is to standardize all the critical parameters so everyone agrees to the
groundrules so that you can have compatibility between products.
Remember the intent is to design a network that is open, where you
publish the specs and you let everyone connect to the network and you
tell them exactly how to do it. You tell them how to design a
transceiver. You want everyone to make products. It's not that we are
encouraging competition but we want the market to be bigger for all of
us. To do that you must worst case the specs so that no matter how they
are applied the system still works. You don't want to give someone
enough coax cable to hang themselves. It's like squeezing a sausage, you
can pull in the specs one place but they pop out somewhere else. You are
trading off among interacting parameters. You can't make independent
decisions about the minimum and maximum packet size and the maximum cable
length. Or the cable length and the propagation delay. You have to deal
with all these things in one giant sausage.
II. DATALINK LAYER PARAMETERS.
There are decisions that have to be made in both layers of the ethernet;
datalink and physical. There are also decisions to be made at higher
layers as well but ethernet does not address those. Datalink layer
decisions will affect your network performance. The decisions as to how
many bits in a crc, how many type fields, how big are the addresses,
what's the minimum sized packet, what's the maximum sized packet, how
many stations allowed on the network - those are datalink layer
decisions. The datalink parameters you have to play with; the slot time
(the maximum roundtrip time on the cable, how long to wait to know
there's no collision). The longer the slot time the longer the cable can
be. But if slot time is longer you're vulnerable to collision longer and
that reduces performance. The interframe gap time (how long must
stations be quiet after sending a packet) - You want that short to reduce
idle time on the network but if it's too short it's a burden on
controllers in receive mode because they have to receive packets back to
back to back with no time in between with no time to recover and post
their buffers and unload them through their DMA engine and get status
recorded and get ready for the next packet. The frame length - I'd like
a one byte minimum frame so I can send one byte without padding but if I
do that I can't have a very long network and detect collisions. For crc
I'd like guaranteed error detection, a very robust algorithm, 32 bits or
better. But 32 bits takes up more bits than 16 bit crc and it's much
more complicated to implement, it takes more chips. In VLSI that's not a
big problem but the first products were not made in VLSI. The backoff
limits - how long do I keep backing off before I give up. If I backoff
for ever service time goes out the roof. Currently backing off the
maximum amount of time for 16 times would be 300 to 400 milliseconds.
That's not bad but it limits the number of stations on the network
believe it or not. There's a tradeoff between how far you let stations
backoff and the maximum number of stations you put on the network. This
session isn't going to discuss those datalink layer tradeoffs in any more
detail than this.
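The "300 to 400 milliseconds" figure can be reproduced with a quick sketch, assuming the classic Ethernet parameters: a 51.2 microsecond slot time, a backoff window that doubles up to 2**10 slots, and 16 attempts before giving up. The constant names are mine.

```python
SLOT_TIME_US = 51.2      # Ethernet slot time in microseconds
MAX_ATTEMPTS = 16        # attempts before the controller gives up
BACKOFF_LIMIT = 10       # window stops doubling at 2**10 slots

def worst_case_backoff_ms():
    """Cumulative backoff if every retry draws the maximum delay."""
    total_slots = 0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        window = 2 ** min(attempt, BACKOFF_LIMIT)  # draw from 0..window-1
        total_slots += window - 1                  # worst case: max draw
    return total_slots * SLOT_TIME_US / 1000.0

print(worst_case_backoff_ms())   # comes out a little over 400 ms
```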
III. PHYSICAL LINK LAYER PARAMETERS.
We are going to look at the physical layer. The physical layer decisions
won't affect network performance so much except to the extent that
physical layer decisions affect the datalink layer decisions. The
datalink minimum packet size is a function of the physical length of the
network. What you do in the physical layer (cable lengths, number of
transceivers, speed of the network) is going to affect how you configure
networks. We wanted to make the system easy to configure, so you can't
hang yourself. What do you have to tradeoff in the physical layer? For
starters, the speed. Remember we are designing a new network. I can
make it one megabit and that makes it easy to design the product but the
product life won't be very long. I can make it a 100 megabit system and
be sure it will last through my lifetime but I don't want to have to
design that product.
I'd like the coax length to be long but cables have attenuation so I
don't want them too long. I can run 10 megabits per second through a
barbed wire fence for 10 miles but how much are you willing to pay for
the decoder? I want transceiver cables to be long but I don't want
expensive interfaces at both ends. I want lots of transceivers on the
network but I don't want them to be too expensive or have noise margins
go down and lose data because there are lots of transceivers and lumped
loads. I want the total network length to be as long as possible but you
get less performance and more delay. You have to decide where you want
to be - somewhere between a computer bus and a wide area network. That's
what a local area network is. It's longer and slower than a computer bus
but shorter and faster than a wide area network. There's a lot of area
in there for squeezing sausages.
Let's look at how you decide how fast to make the network. I want it as
fast as possible. I want to push the technology as far as possible and
still ship on time. I want enough bandwidth to support lots of stations.
If I have a megabit of bandwidth and want to hookup a thousand stations
(and maybe you can electrically) that gives an aggregate bandwidth of
only a kilobit per station on average. Now that may be ok for people
used to 1200 baud terminals but it's not what I expect when I want to do
file transfers or run graphic applications. If I have 10 megabits and
1000 stations now I have an average aggregate bandwidth of 10 kilobits
per station and that's probably ok averaged over time but it also says
when I want that big bandwidth to move a file or fill a screen with bits,
it happens very quickly. Unfortunately I also have to design
transceivers and VLSI coder/decoders and cables and it makes that job
easier if it's a slower product. I can design simpler devices if I don't
worry about fancy filtering, I don't worry about cutoff frequencies of
transistors, or cable attenuation at slow speeds. That's why you can run
one megabit DMR cables long distances - because they are a tenth the
speed of an ethernet. Those are the tradeoffs and I've always got cost
to consider. You don't want to pay a lot.
The 10 megabit per second data rate is implementable. We've done it, in
both VLSI and MSI. We have to worry about both. We couldn't have said
here's the spec - you can have it in two and a half years when we build
the chips. You wouldn't want to hear that and neither would my boss. I
can do 70 megabits too, I did the CI, but you don't want to pay CI prices
for a local area network. It is pushing the technology to do 10 megabits
in VLSI but I can do it. The encoder/decoder is typically a bipolar chip
(some manufacturers are looking at doing it in CMOS) but it's not a
problem to do it bipolar. However the protocol chip (the ethernet chip)
you don't want to do in bipolar unless you want a die the size of a bread
box. It's not easy to do 10 megahertz in NMOS, CMOS or any of the dense
technologies. That is a real consideration in getting ethernet chips to
work at speed. It's pushing the IC technology for this type of device.
10 megabits supports traffic for a larger number of stations than most
people realize. A 10 megabit pipe is a very fat pipe. We have a network
up in our Spitbrook Road software development facility with 75 computers
hooked up to a single ethernet. About 50 VAXs and 25 PDP-11s running RSX.
The average utilization of that ethernet is under 5%. That's software
development, file transfer, remote terminals and more mail than you could read.
Many people said "Why did you do 10 megabits? I don't need it. Give me
one megabit and make it cheaper." There is some effort in the IEEE 802
committee on the product we affectionately call Cheapernet. The question
was, "Do you want to leave it at 10 megabits and give up some performance
or make it slower and maintain performance?" Both being ways to make it cheaper.
As we try to violate the ethernet spec and still be ethernet this is
the one thing you can't violate and still be ethernet - you can not have
coexisting stations on an ethernet running at different speeds. The
problem is the implementation of the encoder/decoders. If I'm listening
at 10 megabits and you are sending at one megabit that's not going to
work very well. Also there are design optimizations you might call them
or design characteristics that literally limit it to 10 megabits. Not up
to 10 megabits but exactly 10 megabits. There are filters in the
transceivers, low pass as well as high pass. There are delay lines or
phase locked loops in the decoder that have fairly narrow capture ranges.
They are looking for a 10 megabit signal within .01%. If you get much
outside of that you can't guarantee that your decoder is going to synch up
in time. You've only got 64 bits to do it in. You can make a baseband
CSMA/CD network that runs at another speed but it's not ethernet.
V. COAX CABLE SEGMENT LENGTH.
Let's look at coax cable length. The maximum coax cable length according
to ethernet spec is 500 meters. That's between terminators. That can be
any number of shorter pieces of cable connected with end connectors and
barrels. Why 500 meters? There are a number of characteristics of the
cable that are going to limit the length. Number one is cable
attenuation. You lose signal voltage and current as you transmit down
the cable. It gets weaker and weaker. At 10 megahertz the attenuation
at 500 meters is 8.5 db. That means you get about a third of your
signal. If you transmit 2 volts at one end you get six or seven hundred
millivolts at the other. I can design a transceiver that can tolerate
that sort of dynamic range. I don't want to have to tolerate a whole lot more than that.
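As a sanity check on those numbers, the 8.5 dB loss converts to a voltage ratio via the standard 20*log10 relationship (the helper name is mine):

```python
def db_to_voltage_ratio(db_loss):
    """Convert a loss in dB to the surviving fraction of the voltage."""
    return 10 ** (-db_loss / 20.0)

# 2 V launched into 500 m of coax with 8.5 dB attenuation at 10 MHz:
v_out = 2.0 * db_to_voltage_ratio(8.5)
print(v_out)   # about 0.75 V -- roughly a third of the signal survives
```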
I'm limited, believe it or not, by the DC resistance of the cable. This
is separate from the attenuation which is for high frequency and is based
on the skin effect losses in the center conductor primarily. It's not
dielectric loss. The DC resistance is important because that's how I do
my collision detection. I'm looking for DC voltage to do collision
detection and if there is a lot of resistance in the cable I get less of
my voltage and I'm not guaranteed to detect collisions.
I'm limited by the propagation delay, in other words by the speed of
light. I've got some guys in research working on the warp drive and
we'll have faster than light cables as soon as we get negative delay
lines. The restriction that ethernet can't exceed 2.8 kilometers is
really the restriction that the propagation delay can't exceed 46.4
microseconds(1). If you had faster cables you could have them longer.
The ethernet cables are pretty fast as cables go. Typical plastic cables
are 66% propagation velocity. Ethernet cables are 77-80% because they are
foam. You can get a little better but over 80% is a pretty nifty cable.
It's faster than an optical fiber.
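The delay numbers follow directly from the velocity factor. A small sketch, using 0.78c as an illustrative value from the 77-80% range quoted above:

```python
C_M_PER_US = 299.792458   # speed of light, metres per microsecond
VELOCITY_FACTOR = 0.78    # illustrative foam-dielectric coax figure

def one_way_delay_us(length_m, vf=VELOCITY_FACTOR):
    """One-way propagation delay of a cable run, in microseconds."""
    return length_m / (C_M_PER_US * vf)

print(one_way_delay_us(500))    # ~2.1 us for one 500 m segment
print(one_way_delay_us(2800))   # ~12 us one way over the full 2.8 km
```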
Finally, timing distortions. Cables introduce timing distortions in the
signals. It's what's called intersymbol interference in
telecommunications. But because that's an often misused term we just
call it timing distortion. When you are decoding the signal you want to
see where the signals cross through zero. That's how I recover the clock
in Manchester decoding and its how I get my data. If those zero
crossings shift too far I can't properly decode the data and I get all kinds of errors.
-STRETCHING THE COAX
How can I stretch the cable? The 500 meter limit is based primarily on
attenuation. The other factors, timing distortion, propagation delay,
the DC resistance and the cable attenuation all will limit it at some
point. But the one that limits it for my purposes is the attenuation.
The signal gets weaker beyond 500 meters. What happens if you exceed 500
meters? You start to lose signal but unfortunately I don't lose noise.
The more cable I have, in fact, the more noise I'm going to pick up.
It's a big antenna. The signal to noise ratio will start to decrease as
I get over 500 meters and I'm going to have increased error rates. The
ethernet was designed for a one in ten to the ninth (a billion) bit error
rate in a 14 db signal to noise ratio. That's pretty good and a 14db
signal to noise ratio is pretty low, you normally have much much better
than that. So under light noise environments where you are not in a
factory or near a broadcast radio station (eg in an office) your signal
to noise ratio is going to be a lot better than that. If you don't have
100 transceivers on the network your signal ratio will be a lot better
than that. If your cable is one piece rather than many your signal to
noise ratio will be a lot better than that. Worst case if you did
everything bad, if you made it out of lots of little pieces and you put
all the transceivers on and you had 500 meters and a high noise
environment you still have a 14db signal to noise ratio and you can still
run it over 500 meters. But if everything isn't all that bad you can go
a little longer and not impair the system. But you have to know what you
are doing! Now it can't be configured by dummies anymore. You have to
understand the tradeoffs.
What else happens if I keep going? Suppose I can tolerate the signal to
noise ratio. Then the DC resistance starts to hit me. I start to lose
collision detect margin. Maybe I can't guarantee collision detection any
more when the resistance increases. The loop resistance of a 500 meter
coax is about 4 ohms plus and my limit is 5 ohms after which I can't
guarantee collision detection - unless you don't have 100 transceivers on
the network. Then you can go a little farther. This is the sausage. If
there isn't as much meat in the sausage I can stuff more in the casing.
As I keep going I get more and more timing distortion. The ethernet coax
introduces about plus or minus 7 nanoseconds of timing distortion. I've
got 25 to play with in the system - 5 for my decoder, 7 for my coax, 1 is
for my transceiver cable, 4 is for my transceiver and the rest is for
noise margin. Well you can start eating up your noise margin.
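The arithmetic on that budget, as a sketch (the labels are mine, the numbers are the talk's):

```python
# The +/-25 ns timing-distortion budget, carved up as in the talk.
budget_ns = {
    "decoder":           5,
    "coax":              7,
    "transceiver cable": 1,
    "transceiver":       4,
}

noise_margin_ns = 25 - sum(budget_ns.values())
print(noise_margin_ns)   # 8 ns left over as noise margin
```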
Obviously, if I go too far beyond 500 meters I pass my 46.4
microsecond(1) round trip and that's the end of the ball game right
there. I'm now no longer on an ethernet because you can't guarantee to
detect collisions. You'll detect some collisions but the stations out in
the suburbs will be running on a carrier sense aloha system with respect
to some of the stations. They will be carrier sense collision detect
with respect to those nearby and the stations in the center of the net
will be carrier sense collision detect with respect to every one. This
may not sound catastrophic but I would hate to do the performance
analysis of that system. I couldn't guarantee that the system is stable,
in fact. That's what I consider hitting the wall.
The 500 meter restriction results in, number one, collision detection and
without that you don't have a stable network. Also, adequate decoder
phase margin, I need 5 nanoseconds to play with. With a 100 nanosecond
bit cell you've got 25 nanoseconds of timing distortion before you start
strobing the bit in the wrong place. And I need 5 nanoseconds of that
for my phase lock loop.
VI. TRANSCEIVER CABLE LENGTH
Let's look at the transceiver cable length. This is the drop cable
between the transceiver and the controller. The tradeoffs here are
roughly the same. Currently it's 50 meters maximum and that is also a
function of the attenuation. The attenuation of that cable is a function
of the wire gauge and a number of manufacturers are making transceiver
cables of different wire gauge. The spec is for 50 meters and if you
make it out of 30 gauge that's not a big enough pipe for reasons which we
will discuss. The DC resistance is also important not because of
collision detect but because I'm powering the transceiver at the other
end of that cable out of the controller. If I have too much resistance I
have too much voltage drop in the cable and I can't guarantee that the
transceiver will power up correctly depending on wire gauge.
-PROPAGATION DELAY AND TIMING DISTORTION
The transceiver cable length also affects propagation delay although the
main contributors are coax length, fiber optic repeater length and the
repeaters themselves. The cable also affects timing distortion but here
it's much much less significant than the coax. Even if you doubled the
length the distortion would only be 2 nanoseconds which is almost
negligible. So the same parameters apply here as on the coax but in
differing degrees. It's not so sensitive to propagation delay or timing
distortion but very sensitive to DC resistance and attenuation.
The 500 meter coax limit is based on attenuation while the 50 meter
transceiver cable length is based on DC resistance. The cable is spec'ed
to have maximum loop resistance between power supply and transceiver of 4
ohms. The cable is about 3.5 of those ohms and that's if it's 20 gauge
wire. If it's 22 gauge wire I don't believe you can run 50 meters so you
must be careful at least on the power pair. As you increase the cable
length the transceiver may not power up or it may blow a fuse.
Transceivers are negative resistance devices. As you decrease the
voltage it increases the current and blows the fuse (up to a certain point).
If you extend the length and remain functional say by having more power,
then you hit the signal to noise ratio and your error rates go back up.
It's the same situation as with the coax cable. But you have this other
nasty thing called a squelch circuit. Communications systems designers
always design squelch circuits. Any time you are at the end of a long
communications channel you don't want to do anything unless you are sure
what you are hearing is signal and not noise. You don't want to turn on
the amplifiers for noise. On the coax that's fairly easy to do since my
signaling is DC. I'm unipolar signaling and I can simply have a DC
threshold detector. There's no such thing as DC noise. If there was we
could tap into it and turn on the lights in Chicago. Noise by definition
always has zero average, there's just as much positive as negative -
something about entropy. So on the coax I'm using DC levels and I can
just pick up the DC levels and my squelch is very easy. On the
transceiver cable I'm not so lucky. Since I'm transformer coupled (that's where
isolation is done) I can't send any DC through the transformer. I have
to use the signal itself to determine if there is any signal. That's
like chasing your own tail. What it says is that I assume I am going to
get at least so much signal at the end of the transceiver cable in the
worst case. If I don't, I don't even turn on the receiver in the
transceiver. If the attenuation is too great in the transceiver cable
not only do I have a worse signal to noise ratio, I might not even have
enough signal to turn on the squelch circuits which again makes it
non-functional. With the ethernet spec numbers in the worst case with
transceiver cable with 4 db loss, etc you are guaranteed plus or minus
400 millivolts at the far end of the cable. That's not a whole lot.
Less than that and I can't guarantee the squelch circuit turns on.
50 meters of 20 gauge wire will result in more than 9.4 volts DC
available to power the transceiver. The transceiver has got to power up
with that. It does not have to power up with 9.3. If you've got half an
amp and 4 ohms that's a two volt drop that says you had 11.4 at the
sending end, which is a 12 volt supply minus 5%, which is what most of
you have. 50 meters
maintains my signal to noise ratio and degrades beyond that. I keep my
decoder phase margins and I'm guaranteed of detecting collisions (the
last wall, the one you can't run through). It's harder to stretch the
transceiver cable than to stretch the coax because stretching the coax
only gets you more errors (only if you are worst case) whereas stretching
the transceiver cable makes you non-functional. That's where you have
least flexibility but it can be done if you use heavier gauge wires or
design your own transceiver or retune the squelch circuits or have more
than 12 volts to power it.
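The voltage arithmetic above can be checked directly (variable names are mine; the figures are the talk's):

```python
SUPPLY_V = 12.0 * 0.95   # worst-case supply: 12 V minus 5% = 11.4 V
CURRENT_A = 0.5          # transceiver draw: half an amp
LOOP_OHMS = 4.0          # max loop resistance of 50 m of 20-gauge pair

# Ohm's law drop across the transceiver cable's power pair:
v_at_transceiver = SUPPLY_V - CURRENT_A * LOOP_OHMS
print(round(v_at_transceiver, 1))   # 9.4 V -- the floor the transceiver
                                    # is required to power up at
```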
VII. NUMBER OF TRANSCEIVERS ON A SEGMENT.
There's a limit of 100 transceivers on a cable. The number is limited by
the shunt resistance of the cable (not the capacitance). Each transceiver
is a resistor across the cable. The resistance should be as high as
possible, the spec says at least 100K ohms. The DEC H4000 is typically
250-300K minimum. Each one of those shunt resistors bleeds off a little
of that DC current I'm using to detect my collisions so I don't want too
many. Also the number of transceivers is limited by the input bias
current. When I'm powered on I'm drawing a little bit of current out of
the cable partly due to the resistance and partly due to the electronics
since I can't perfectly back bias a diode. Diodes are leaky, they have
leakage. I'm allowed 2 microamps which is not much but when multiplied
by 100 transceivers, 200 microamps is starting to be some real current.
So that limits me for collision detect reasons and for no other reasons.
Also I've got a tolerance on the drive level. I'm driving it (I think)
with 64 milliamps but I can't hold that very accurately. I defy anyone
to design a 10 megahertz, 25 nanosecond slew rate limited high frequency
current driver that's held to very tight tolerances. I can make the
receivers pretty good but it's hard to make the drivers that accurate.
So I've got weak transmitters and strong transmitters and I've got to
detect collisions between all of them. In the worst case I'm a strong
transmitter and 500 meters away is a weak transmitter and I have to make
sure that even in the presence of my own strong signal I can hear his
weak signal. So that all ties into it and into DC resistance on the cable.
VIII. PLACEMENT OF TRANSCEIVERS.
Where I can place the transceivers is limited by the shunt capacitance.
This is the 2.5 meter rule. I don't want big AC loads on the cable
because I get big reflections. This is the exact same problem, by the
way, as with boards in a unibus backplane or in any backplane. You don't
want all the loads in one place. You want them distributed so that
there's time between the reflections.
So how can we get around all this? The 100 transceiver limit is primarily
based on the shunt resistance. Remember we were limited by shunt
resistance, bias current and transmit level tolerance but the first wall
you hit is shunt resistance believe it or not. That 100K ohms when
multiplied by 100 is 1000 ohms and that is a lot of leak, it's a leaky
pipe. If I lose more than that I lose my assurance of detecting
collisions. The placement is limited by shunt capacitance and it assumes
the 100 transceiver limit. Actually it turns out the worst case for
placement is not 100 but 30-40 transceivers. In fact, 100 is better than
30-40 because some of the reflections start to cancel each other out.
This can be done in simulation and we can prove this. As you increase
the number of transceivers per segment over 100 I start to lose collision
detect margins and can no longer guarantee collision detection and that
blows me out of the water. If I vary from the 2.5 meter placement rule I
get reflection increases. This doesn't stop things from working. It
introduces more errors into the system. If I have error margin, i.e. I am
not using 500 meters of cable, I am not in a high noise environment and I
haven't segmented my cable in lots of little pieces, then I can start to
violate the 2.5 meter rule. I can lump a few transceivers here and some
there, etc. The 100 transceiver limit will hit the collision detect
limit. The 2.5 meter spacing hits the signal to noise ratio all other
things being equal.
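A quick check on the "leaky pipe" arithmetic (helper names are mine):

```python
def parallel_shunt_ohms(n, r_each):
    """Equivalent resistance of n identical resistors in parallel."""
    return r_each / n

# 100 transceivers at the 100 kohm spec minimum load the coax with:
print(parallel_shunt_ohms(100, 100_000))   # 1000.0 ohms across the cable

# The bias-current limit tells a similar story:
print(100 * 2e-6)   # 2 uA each, 200 uA total drawn from the DC level
```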
IX. TOTAL NETWORK LENGTH.
Now let's look at total network length. I'm allowed 2.8K meters or 46.4
microseconds(1) end to end. That doesn't mean I can't have more than
2.8K meters of cable in the system. Within a particular topology with a
maximal end to end path I may be able to stretch, say, a 500 meter
segment but I would have to take some off somewhere else to stay within
the 2.8K limit. For example I could have 2 1000 meter segments connected
with a repeater if I could live with the signal to noise ratio. But I'm
back to doing some engineering on the system.
We wanted ease of configuration. If you stick by the rules (500 m coax,
50 m transceiver cable, 1000 m fiber, no more than 2 repeaters in the
maximum path and no more than 100 transceivers spaced at 2.5 m) and don't
violate any of them you can hit the limit on every one of those rules in
one network and we guarantee the configuration works. You will have
adequate noise margin, detect collisions all the time, you will not have
excessive timing distortion, you will not exceed the propagation delay
limit and you won't have excessive reflections. It's a worst case design
system. If you need to break any of those rules it's no longer a worst
case design system. That doesn't mean that it won't work in your
configuration, only that it is no longer guaranteed to work in the general case.
We wanted the minimum frame length to be as short as possible. In fact,
there was a movement while we were writing the spec to cut it from 2800
meters maximum down to 1500 meters in order to shorten that minimum frame
length and get a little more performance out of the network. But you
turn out to be on the knee of the curve at that point. You don't get
that much more performance and we concluded the extra kilometer was more
than worth it. The 2800 meters is a tradeoff between your need for a
long network and performance. With the specs it is impossible to exceed
the round trip delay constraint.
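The coupling between minimum frame length and network length can be sketched numerically: a transmitter must still be sending when a collision from the far end gets back, so the frame must outlast the round trip. Using the 46.4 microsecond figure from the talk (the actual spec rounds the slot up to 512 bits, i.e. 64 bytes):

```python
BIT_RATE_MBPS = 10.0   # ethernet data rate; Mb/s times us gives bits

def min_frame_bits(round_trip_us):
    """Smallest frame that is still on the wire when a worst-case
    collision signal returns from the far end of the network."""
    return round_trip_us * BIT_RATE_MBPS

print(round(min_frame_bits(46.4)))   # 464 bits; the spec's slot is 512
```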
You can squeeze the sausage, in conclusion, if you know how strong the
casing is, if you know where the limits are and what happens when you
start hitting those limits. If you try and stuff too much in the sausage
the casing will pop out. How? Maybe it won't work. Maybe the
transceiver won't power up, maybe too many errors, maybe you won't detect
collisions all the time. If you exceed the 2800 meter limit you won't
guarantee collision detect by stations in the suburbs. To exceed the
2800 meter limit you have to violate something else. The configuration
constraints are based on real physical limits; speed of light, electronic
circuits, timing distortion, phase locked decoders and cable design.
Thorough engineering went into the design of this system. You can break
the rules of the system if you can understand and do over again the
engineering. You can stretch the cables, put it in high noise
environments, put on more transceivers or closer together, just make sure
you know what you are doing.
As always there's no free lunch. The free lunch they serve you here
you've paid for already. There are two kinds of free lunches in
engineering, those you have already paid for and those you have not yet paid for.
At this point I'll open the floor to questions.
X. QUESTIONS AND ANSWERS.
Question: What's the two repeater limit? Can you go to three, four?
Answer: The spec says no more than two repeaters in tandem between any
two stations. The reason is to keep from violating the 2800 meter limit.
If I allow three, even sticking by all the other rules you could violate
the 2800 meter rule. Other nasty things start to happen when you have
more repeaters. It turns out that the interpacket gap shrinks when you
go through a repeater. It's virtually time dilation. When you go
through two properly designed repeaters the 9.6 microsecond interpacket
gap you started with shrinks to about 6.3 microseconds. If you want to
go through more repeaters you will shrink that gap more in the worst
case. You have the possibility that stations will not be able to receive
back to back packets. If you can live with that you can use more than
two repeaters. Clearly, you don't want to shrink the gap to less than
zero. If you use more than two repeaters the 2800 meter end to end
restriction still holds. A repeater in itself with its squelch circuits
and its buffers and its read time is the equivalent of about 200 meters
of cable. So to have a third repeater you are giving up 200 meters of
cable length.
Q: Experimentally what coax cable lengths have you successfully
transmitted with, say, a dozen transceivers on the cable?
A: I haven't. Folks at 3COM have run transceivers over almost 1000
meters without a repeater. But that's a fairly lightly loaded net, a
small handful of transceivers. I generally don't violate the rules
because I don't trust myself. I don't want to have to go through the
analysis every time I configure an ethernet. Doing an ethernet I want it
to be expandable. I want to be able to hang 100 transceivers on it when
I want it.
Q: Presumably you could add a repeater when you hit the limit adding
stations?
A: Absolutely. But how do you know when you've hit there? Do you have the
maintainability primitives built into your system to detect when you are
no longer able to detect collisions? How do you know when you are not
detecting collisions? How do you know when the errors you are getting are
too many? Or what they are caused by? It's very hard to maintain a system
that's breaking the rules because you can't tell whether the system is
not working because you've broken the rules or because something else is
broken.
Q: You mentioned the antenna effect. Can you comment on interference
generated by ethernet cables and the susceptibility to noise interference
in, say, the environment we have where the cable runs close to closed
circuit TV cables?
A: The ethernet system with a little good engineering easily meets FCC
requirements. Current tests show that it in fact may meet Tempest EMI
requirements. The cable shielding is unbelievable. You've got quadruple
shield on the coax and triple shield on the drop cables. I've tested the
ethernet system under 5 volt per meter and, in fact, 10 volt per meter rf
field strengths up to a gigahertz and get absolutely no errors under
those conditions. I've tested it under static discharge up to 20,000
volts directly discharging to the shield of the coax. In fact, I was
able to draw St Elmo's fire off one of the terminators and not only did I
not blow up any of the equipment, I did not get any CRC errors. It's a
very robust system.
Q: My question deals with the H4000 transceiver. Should you be able to
plug in a transceiver to the DEUNA with the system running and not cause
a transient power surge that would pull DC low down on you?
A: The DEUNA has a bulkhead assembly that comes with it which is
specifically designed to limit surge currents into the transceiver so you
can do exactly what you described. There's an SCR/RC surge limiter and
circuit breaker built into that bulkhead for exactly that reason.
Q: I'd just like to comment that I thought this was an excellent
presentation.
A: Thank you.
Q: What type of throughput could I expect given maximum packet size?
A: The throughput of an ethernet is much more a function of your higher
layer software than it is of the data rate of the ethernet. You have to
know what other stations are trying to use it, what your packet formats
are, what the controllers are doing. What I'm saying is, "That's not an
easy question to answer." The answer is, "Under what conditions?" Using
DEUNAs? Under what load? What software? It's not something that can be
answered with a number. I wish I could. It's the kind of thing that you
really have to measure in your own configuration.
Q: Can you give an approximate answer for only two systems?
A: We have run VAXs with DEUNAs through the ethernet at continuous
transmission and reception of between 1.2 and 1.5 megabits per second.
And that clearly doesn't use up a whole lot of the ethernet. There's
room still for two more VAXs to do the same thing. That's limited more
by the DEUNA than anything else.
Q: What effect does a DELNI have on all these considerations?
A: Good point. The DELNI is another piece of the physical channel. You
can have transceiver cable between the DEUNA and the DELNI. You can have
transceiver cable between the DELNI and the H4000. The DELNI has some
propagation delay and another copy of the squelch circuits. The net
effect is that on an absolutely maximally configured ethernet you can't
put the DELNIs on the very very ends of the horizon (the suburbs) because
of the additional delay. The configuration guidelines in the DELNI
documentation describe all that.
Q: 3COM is pushing this thin ethernet cable for their IBM PC connections
and they say all I'm giving up is maximum coax length. What kind of
trouble am I asking for if I use that in a network with real ethernet
cable?
A: Well, in fact, what 3COM is doing is prereleasing the Cheapernet
product. I was out there a couple weeks ago talking to Ron Crane who, by
the way, is one of the developers of the ethernet. They are pretty smart
people. You are giving up a little more than the maximum cable length,
you are giving up signal to noise ratio. The design center for the
Cheapernet is two orders of magnitude worse than for ethernet. The basic
error performance of the channel is significantly worse.
Q: Am I going to get bad reflections at the point at which I switch these
cables? In particular, one's tempted to run thick cable for long runs
then hit a cluster of offices and have a little bit of thin cable then
switch back to thick. Would it be bad to do that three or four times?
Q: Every picture we see of legal ethernet has one point to point segment
in it exactly 1000 meters long. Is it really the case that I can have as
many of those point to point segments as I want coming off of a base rib
as long as the total of the two longest ones is not more than 1000
meters?
A: Yes. You can have 100 repeaters connected to the center segment each
with 500 meter fiber links to a separate segment and it works just fine.
Q: Thank you. Would you tell that to all the people that are selling
ethernet for DEC?
A: The reason that the charts are drawn that way is because it's easier
to draw the charts that way.
Q: The problem is that the local people don't seem to understand that
and keep telling me that's not a legal ethernet.
A: I promise the next ethernet presentation I give will have that chart
with multiple fiber optic repeaters.
Q: Variation of the same question. What is the likely impact of optical
technology on your strategy for the introduction of routers in an
internet context? I presume they are not bound by the same attenuation
constraints?
A: Right. Routers do store and forward. As such they are not part of
the 46 microsecond collision detect. Fiber technology is super for long
distance point to point links. In fact, that's why we specify it for the
long distance point to point link. They are suitable for even longer
distances. The phone companies use them regularly. The problem with
fiber is it's special. You have to pull it between buildings or where
you are going to be routing. Most people aren't prepared to do that.
They want to use leased lines or X.25 or some publicly available
channel. If you are willing to pull fiber it should not be a problem
having high speed links between routers using fiber. That's a technology
we are very interested in.
Q: I can infer from that the distance consideration will be considerably
relaxed?
A: Absolutely. That's right.
Q: I need to hook up an ethernet between three different buildings and I
hear that there can be potential grounding problems. Could you talk a
little about that?
A: Sure. You generally want to avoid grounding a piece of wire in two
separate buildings because there can be differences in the ground
potential between those buildings and you'll get lots of amperes flowing
through the shield of that cable. If the voltage difference between the
buildings is low enough and the source impedance is high enough you can
ground the cable at both ends. Where it enters the building is the
preferable place to do that for lightning protection. If you can't tell
by measurements (I would find a qualified technician or electrician to
make those measurements) I would ground it in one building and put a
lightning arrester in the other building to prevent the lightning hit
from propagating through.
Q: You connect to one of the barrel connectors and physically ground it
there?
A: That's correct, I would put a barrel where the cable enters the
building and ground it to the frame of the building where it enters.
That's exactly what I did in a number of the field test sites for
ethernet.
Q: How about the ground rods most computer rooms have?
A: I would do it where the cable enters the building for maximum
lightning protection.
Q: We are getting a VAX cluster and hope in the next year to plan for
ethernet. I would like to know how a VAX cluster fits on to the
ethernet and what its growth potential is.
A: The VAX cluster doesn't directly connect to the ethernet. You can
connect VAX cluster interfaces to the VAXs and you can connect ethernet
interfaces to the VAXs and DECNET will route between the VAX cluster
and the ethernet. The VAXs will be acting as routers but there is no
direct connection between the VAX cluster and the ethernet. However, if
anyone has noticed, the cable in the VAX cluster is the ethernet cable.
It's the same coax.
Q: Is there any plan to put anything into the star coupler or the HSC
to connect it to the ethernet?
A: No. The reason is that there's an enormous difference between
the two. You've got speed conversions and protocol conversions. The CI
datalink is very very different from the ethernet datalink because it
was designed for a different application and the speeds are seven to one
difference. There would have to be a computer in there.
Q: This may not be a proper forum to ask this question but how has DEC
violated the rules in its own installation?
A: I have a number of ethernets in my lab that have the transceivers too
close. I have some that have transceivers that exceed their own specs.
I generally don't stretch the transceiver cables beyond 50 meters. I have
hooked up five repeaters in tandem, two of them being fiber optic.
Q: How about cable length?
A: I usually don't go beyond 500 meters because I usually don't need to.
(1) The Blue Book says 44.9 microseconds.
: So it doesn't work, but why not? I don't have anything with the output
: circuit of a 10B-2 Ethernet card around, but I'm sure the answer lies
: there. Maybe someone familiar with what goes on the coax can explain the
: problem the higher load impedance causes.
Since I'm the guy who selected the cable impedance for Ethernet in the
first place, let me explain the rationale. (I have been asked this
question many times.)
Earlier posts have properly explained that 75 ohm cable will not
perform correctly, due to the design of the collision detect threshold;
however the more basic question is valid--why 50 ohm? It would have
been possible to design Ethernet to use 75 ohm cable in the first place.
There are a few reasons for 50 ohm:
1: For a given outside diameter, a 50 ohm cable has lower DC resistance
than 75 ohms. This is because the impedance is basically a function of
the ratio of the outer-to-inner diameter. For a given outer diameter, the
inner conductor will be larger for a 50 ohm cable.
(Zo = (138/sqrt(e)) log (D/d), where e is the dielectric constant of the
insulator, D is the outer diameter and d is the inner diameter.)
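The formula above can be checked numerically. This sketch assumes solid polyethylene (e around 2.3) and a representative 7.2 mm dielectric diameter purely for illustration:

```python
import math

# Characteristic impedance from Zo = (138 / sqrt(e)) * log10(D / d),
# and the inverse: what inner diameter d gives a target impedance
# for a fixed outer diameter D. e = 2.3 (solid PE) assumed.

def z0(D: float, d: float, e: float = 2.3) -> float:
    """Coax characteristic impedance in ohms (D, d in the same units)."""
    return (138.0 / math.sqrt(e)) * math.log10(D / d)

def inner_for(z: float, D: float, e: float = 2.3) -> float:
    """Inner diameter giving impedance z for a fixed outer diameter D."""
    return D / (10 ** (z * math.sqrt(e) / 138.0))

D = 7.2  # mm, a representative dielectric outer diameter
print(f"50 ohm inner: {inner_for(50, D):.2f} mm")
print(f"75 ohm inner: {inner_for(75, D):.2f} mm")
```

For the same outer diameter, the 50 ohm cable's center conductor comes out nearly twice as thick as the 75 ohm cable's, which is exactly the lower-DC-resistance argument being made here.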
Thus, a 50 ohm cable has a larger center conductor, and a lower DC
resistance. The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.
2: The lower DC resistance also significantly decreases the effective
"rise-time" of the cable. Coax, when used for digital signals such as
in Ethernet, is a "skin-effect limited" cable. Thus, the step response
at the end of some length is limited by the skin effect resistance.
Again, a larger center conductor reduces the skin effect resistance and
the rise time. This is an important factor in the round-trip propagation
delay, which affects slot time, minimum frame lengths, etc.
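To make "skin-effect limited" concrete, here is the standard skin-depth calculation for copper; the constants are textbook values, not anything specific to Ethernet cable:

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)) for a non-magnetic
# conductor. At 10 MHz the current flows in only a ~20 um shell, so
# a fatter center conductor directly lowers the AC resistance too.

RHO_CU = 1.68e-8          # resistivity of copper, ohm-metres
MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth_m(freq_hz: float) -> float:
    """Skin depth in metres at freq_hz for copper."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU0))

print(f"at 10 MHz: {skin_depth_m(10e6) * 1e6:.1f} um")
```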
3: There are available, low-cost, off-the-shelf, constant-impedance
connectors for 50 ohm cable. The conventional 75 ohm connectors
(F connectors, as used in CATV, or UHF as used by hams) are not 75 ohms,
and would cause small reflections. The Type N connectors used in 10Base-5
and the BNCs used in 10Base-2 are 50 ohm connectors.
4: The signal budget for Ethernet is affected by the shunt impedance
presented by attached transceivers. Each transceiver presents some
lumped capacitive load, and some resistive shunt as well.
For a given shunt impedance (less than infinity), the effect will be
less when shunting 50 ohms than 75 (e.g., 8 pf degrades the signal in
a 50 ohm system less than in 75 ohms, since the time constant is 1/3
lower). Thus, a 50 ohm system can support more devices in parallel on
the cable.
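Point 4 can be sanity-checked numerically. The 8 pF figure is from the text; the Zo/2 source impedance is my assumption, since a tap on a matched line sees the cable in both directions in parallel:

```python
# Time constant of a lumped capacitive transceiver tap on a matched
# line. The tap sees the two halves of the cable in parallel, i.e. a
# source impedance of Zo/2.

def tap_tau_ns(z0_ohms: float, c_farads: float = 8e-12) -> float:
    """RC time constant of a shunt capacitance on a matched line, ns."""
    return (z0_ohms / 2) * c_farads * 1e9

t50, t75 = tap_tau_ns(50), tap_tau_ns(75)
print(f"50 ohm: {t50:.2f} ns, 75 ohm: {t75:.2f} ns")
print(f"ratio: {t50 / t75:.3f}")   # 2/3, i.e. one-third lower
```

The 50-to-75 ratio is exactly 2/3, matching the "time constant is 1/3 lower" claim above.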
5: Most lab test equipment is designed for 50 ohms (signal generators,
spectrum analyzers, scopes, etc.). It is much easier to make measurements
in a 50 ohm system.
That's why I chose 50 ohms... We considered all of the options at the time.
Rich Seifert Networks and Communications Consulting
firstname.lastname@example.org (408) 996-0922
On 23/05/2020 12:13, The Natural Philosopher wrote:
I want you to not accept what an expert has said, but explain WHY what
he says, is true?
Especially since I can find no other reference anywhere to 'DC' or
'cable *resistance* in any paper on CSMA/CD systems. Cable impedance,
yes, resistance no.
And today's systems still implement CSMA/CD but are universally coupled
in with transformers. That don't pass DC..
And DC won't propagate any faster than someone else's pulse train anyway.
AIUI collision detection is done by sensing that the output on the wire
which you have 'grabbed' is not exactly what you are putting on it.
Cable *impedance* and of course attenuation matters but as long as they
are within tolerance that's OK, what is crucial however is that you
don't have excessive propagation delays and that is what screws you with
long cable runs. Two stations - one at each end - can transmit, see
their packets go clear and unbuggered before detecting each other's
transmission if the cable is too long.
- impedance is determined as your article says by ratio of inner to
outer conductor diameter, modified by the dielectric in between - air
cored polythene usually.
- Attenuation is a function of resistance, which given the above is a
function of cable core circumference and material - usually silver plate
on quality cables, or just copper. (fat cables lose less).
- propagation delay is determined by the dielectric and the cable length
- the nearer a vacuum the nearer the speed of light the signal travels.
As it happens for all cables and electronics in use then and now,
propagation delay is the limiting factor. Because you have to have a
line that is clear of all other travelling waves to transmit on.
Collision detection is a simple matter of receiving stuff you did not
transmit. No DC involved at all.
Same as wifi today. No DC involved at all.
No Apple devices were knowingly used in the preparation of this post.
Yes, at least indirectly.
It is due to the Velocity factor changing with frequency and Vf is related
to the dielectric constant of the insulation. Vf = 1/(Er)^0.5
As a result, in for example a square wave (composed as you know of odd
harmonics of the fundamental), the different frequencies are propagated at
different speeds and the shape is distorted. (A trivial example, but
easy to visualize.)
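The Vf = 1/sqrt(Er) relation is easy to tabulate. The Er values below are typical textbook figures for common dielectrics, assumed for illustration:

```python
import math

# Velocity factor from relative permittivity: Vf = 1 / sqrt(Er).
# Er values are typical textbook figures, not from any datasheet.

def vf(er: float) -> float:
    """Velocity factor of a line with relative permittivity er."""
    return 1.0 / math.sqrt(er)

for name, er in [("air", 1.0006), ("foam PE", 1.5), ("solid PE", 2.3)]:
    print(f"{name:8s} Er={er:<7} Vf={vf(er):.2f}")
```

Solid polyethylene gives roughly 0.66c, foam about 0.82c, which is why air-spaced cable propagates signals closest to the speed of light, as the earlier post notes.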
On 23/05/2020 12:13, The Natural Philosopher wrote:
So the guy who actually made the decision on the cable type for use on
ethernet, a world-renowned IEEE expert, who was on several occasions
the Task Force chairman and editor of 802.3x standards documents,
explains why the DC resistance is an important parameter for the CD
mechanism,
and some decades later, TNP says:
What a dilemma, who should we believe?
Well I am sure he will be pleased to have your validation.
But I imagine the important part of the impedance is the imaginary (*)
part - due to effects of L and C of the cable - rather than the DC
resistance. Maybe I'm wrong.
(*) The part that's multiplied by i / j / square-root-of-minus-one. I
remember getting into a heated discussion in a pub quiz which asked "what
letter is used to denote square-root-of-minus-one?" and they would only
accept "i" and not "j" - the latter being used in electronics because "i" is
used for instantaneous current. I won my point, but it was a hard fight -
Google to the rescue! I worked with a guy called John William Taylor, aka
Bill Taylor but referred to universally as J-Omega (as in the algebraic term
j-omega-t that occurs all over in electronics).
in terms of maintaining wave shape, it is, but in terms of attenuation
it's the actual (skin) resistance that counts.
I remember doing that ghastly calculus of an infinite number of Ls in
series and Cs in parallel and showing that in the limit it looked just
like a resistor...
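That ladder limit comes out as Zo = sqrt(L/C). A quick sanity check with typical per-metre values for RG-58-class coax (roughly 250 nH/m and 100 pF/m, figures assumed here for illustration) lands right on 50 ohms:

```python
import math

# Characteristic impedance of a lossless line as the limit of the
# infinite LC ladder: Zo = sqrt(L/C), with L and C per unit length.

L_PER_M = 250e-9   # henries per metre (assumed typical for RG-58)
C_PER_M = 100e-12  # farads per metre (assumed typical for RG-58)

z0 = math.sqrt(L_PER_M / C_PER_M)
print(f"Zo = {z0:.0f} ohms")
```

Note the unit length cancels out of the ratio, which is why a cable's characteristic impedance doesn't depend on how long it is.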
“Ideas are inherently conservative. They yield not to the attack of
other ideas but to the massive onslaught of circumstance"
No. The impedance is real, although brought about by Ls and Cs. But
the real energy passed into this impedance is not dissipated in the
(hypothetically and practically) negligible resistance but propagated
away until it gets to the matched resistive load at the end of the
cable. If the cable has not got a matched load at the end then you *don't*
see the characteristic impedance at the beginning. But the impedance
you see with a matched load *isn't* the load itself but the cable's
own characteristic impedance.