Mixing cat5e and cat6 sockets? X-post

In the article, John Rumm wrote:

+1. Also anyone who backs up to a NAS.
Reply to
Mike Tomlinson

I think it must be a good 10 years since I initially upgraded the NAS box and my win2k box (within 2 feet of each other) to Gbit ethernet (with a 5 port Gbit switch in between) specifically to get remote mapped drive transfer rates more on a par with the internal drives.

The rest of the CAT5 network was upgraded to Gbit working over the following years as and when the other PCs were upgraded (or I decided to upgrade the LAN ports with spare PCI Gbit adapters after the arrival of the VM superhub with its Gbit lan ports).

Most of the older kit only saw a mere 60 to 90% speed boost by adding Gbit PCI adapters. Initially, with my own win2k box, I only saw a 2 or 3 hundred percent boost until I upgraded the MoBos in both the NAS and win2k boxes to ones with built-in Gbit lan ports 4 or 5 years back.

Even so, I'm only getting a disappointing 50 MB/s read and 64 MB/s write performance out of the current setup (I blame the current fashion for eliminating the speed-boosting jumbo frame option for this lacklustre performance).
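For what it's worth, a back-of-envelope check suggests jumbo frames alone can't account for a gap that size. Assuming the standard per-frame wire overhead (preamble, MAC header, FCS, inter-frame gap) and plain IPv4/TCP headers, the rough payload ceilings work out as:

```python
# Rough sketch: how much extra TCP payload throughput do jumbo frames
# buy on gigabit ethernet? Figures assume standard ethernet framing
# overhead and optionless IPv4/TCP headers - a simplification that
# ignores ACK traffic and any driver/CPU limits.

WIRE_OVERHEAD = 8 + 14 + 4 + 12   # preamble + MAC header + FCS + inter-frame gap
TCPIP_HEADERS = 20 + 20           # IPv4 + TCP, no options

def payload_rate_mbps(mtu, link_mbps=1000):
    """TCP payload throughput in Mbit/s for a given MTU."""
    payload = mtu - TCPIP_HEADERS
    on_wire = mtu + WIRE_OVERHEAD
    return link_mbps * payload / on_wire

std = payload_rate_mbps(1500)    # standard frames
jumbo = payload_rate_mbps(9000)  # jumbo frames
print(f"MTU 1500: {std / 8:.0f} MB/s   MTU 9000: {jumbo / 8:.0f} MB/s")
```

That comes out at roughly 119 MB/s versus 124 MB/s, so the framing overhead saved by jumbo frames is only a few percent; a setup stuck at 50 MB/s is being limited by something else.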

The sad truth of the matter is that the ethernet speed boosts scale very poorly compared to CPU performance. If it had followed the same scale as for CPUs, we'd have seen Gbit performance using MoBos powered by nothing more powerful than an 80486SX33 cpu (10Mbps ethernet links could easily be saturated on 8MHz clocked 80286 AT PCs of the mid-80s).

It's a sobering thought that it takes something like a millionfold boost in cpu performance to support a mere hundredfold boost in ethernet speeds. I have great hopes that my next round of MoBo upgrades sometime during the next 12 months will at last allow the TB sized HDDs to fully saturate the Gbit ethernet links.

Reply to
Johny B Good

Keep in mind that there are plenty of hard drives about that won't sustain data rates much above those figures anyway (assuming you are talking megabytes and not bits)

You can hope ;-) Not sure I can saturate the gig ethernet with a feed from a SSD since the receiving end usually can't write it that quickly.

Reply to
John Rumm

In the article, John Rumm wrote:

It doesn't have to be writing to storage; it might be streaming video, perhaps to several devices, for instance. That'll easily saturate gig ethernet.

Reply to
Mike Tomlinson

I was referring to megabytes per second. All the HDDs can top out around the 10 to 120 MB/s mark in CrystalDisk's large sustained transfer rate tests.

In fact, when I run CrystalDisk on the remote drive mappings to test the server performance I see a fairly consistent 75 MB/s read _and_ write performance regardless of whether there's only 1 or 2 percent of free space left or 30 or 40 percent. The 4K random speeds are a cross between SSD and HDD local disk performance figures.

I'm guessing the curiously slow 50 odd MB/s read speeds are due to a limitation in the win2k box's disk write performance when the source is the ethernet port. Either some crazy win2k limitation[1] or a MoBo deficiency.

Even the 50MB/s read speed is a sevenfold improvement on Debian's rather indifferent 6 to 7 MB/s over fast ethernet performance that I saw compared to win2k to win2k back to back transfer rates of 10 MB/s over the same 100Mbps network over a decade ago.

Even today, on contemporary hardware with dual core CPUs, Linux only achieves half the speed of the FreeBSD based N4F server box with SMB transfers.

[1] The precedent for this 'throttling' effect by the OS was amply demonstrated with Netware 3.12 when, even after upgrading the cpu power a thousandfold, it refused to go above 1MB/s read speeds over fast ethernet links and only maxed out at 4MB/s on writes until the ram cache buffers filled up. There seemed to be a deliberate speed limit built into the OS.
Reply to
Johny B Good

In the article, Johny B Good wrote:

The CPU isn't the bottleneck, it's the PCI bus. Motherboards with "integrated" (=onboard) gigabit such as the older ones you mention still have the NIC hanging off the PCI bus. Gigabit is quite capable of saturating PCI, try using a PCIe adapter to see a boost.

Your quoted speeds of ~50MB/s are pretty damn good for a PCI adapter, so I don't see why you are complaining. What are you using to measure it?

More modern motherboards have the NIC integrated into the chipset, where it hangs directly off the system bus or PCIe. Those can achieve very high data transfer rates.

If you buy one with a NIC connected to PCIe or a chipset-integrated one, yes.

Reply to
Mike Tomlinson

Pretty much the same story with PCIe adapters.

Both Mobos are PCIe with integrated NICs. I use a stopwatch to time how long it takes to move or copy gigabyte sized movie files.
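The stopwatch method is easy enough to automate so the timing error disappears. A minimal sketch (the file paths are hypothetical placeholders, and note that OS write caching can flatter the result unless the file is bigger than free RAM):

```python
import os
import shutil
import time

def copy_rate_mb_s(src, dst):
    """Copy src to dst and report throughput in MB/s (decimal
    megabytes, matching the figures quoted in this thread)."""
    start = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = time.monotonic() - start
    return os.path.getsize(src) / 1e6 / elapsed

# Hypothetical usage - point these at a big movie file and a mapped share:
# print(f"{copy_rate_mb_s('movie.mkv', '/mnt/nas/movie.mkv'):.1f} MB/s")
```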

Even PCI could max out in excess of 100 MB/s (but only one adapter at a time - there was no point in having both a GBit ethernet adapter _and_ a PCI SATA adapter fitted to an older PCI MoBo).

In that case, my hopes are dashed. The current MoBos in question are both PCIe boards with SATA 2 built in, as well as built-in Gbit lan ports which either hang off the PCIe bus or more directly off the chipset (testing with add-in PCIe adapters that still have jumbo frame support showed very little performance difference; if anything, a slight drop-off).

I can't see SATA 3 making any noticeable improvement even though most of the disk drives support 6Gbps. Hell, even when the server was somehow only choosing to use 1.5Gbps for the 3TB Cool Spin drive, it made no discernible difference to the benchmarks (stopwatch timings _or_ CrystalDisk).

One of the things that worries me is the rather piss poor performance of Linux with SMB shares since I'm planning on installing some version of *nix as the host OS with whatever windows versions I feel the need of installed into VBox VMs.

I don't expect any MoBo driver support for win2k and loathe the idea of entrusting my sanity to the tender mercies of the later more deeply flawed windows versions since Microsoft's pinnacle with win2k.

Perhaps I should look to something based on FreeBSD (it works superbly as a NAS OS when it comes to SMB network performance - Linux ime has demonstrated the same lacklustre performance whether running as a server or desktop client).

Reply to
Johny B Good

Weird. I get lightning fast NFS transfers on Linux. SMB not so good, I agree.

May be in the TCP as opposed to UDP layer.

When I used to do that crap for a living, we used to play around with various tunables.

Made a huge difference.

Reply to
The Natural Philosopher

You MAY find that adjusting packet sizes works some magic.
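One family of tunables along those lines can be exercised per-socket rather than system-wide: the kernel send/receive buffer sizes, which bound the TCP window. A minimal sketch using the standard `SO_RCVBUF`/`SO_SNDBUF` socket options (on Linux the kernel clamps requests at `net.core.rmem_max`/`wmem_max` and reports back double the requested value, so always read the setting back):

```python
import socket

# Request larger kernel buffers on a TCP socket, then read back what
# the kernel actually granted - it may clamp or double the request.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # ask for 1 MiB
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)

actual_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
actual_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"rcvbuf={actual_rcv} sndbuf={actual_snd}")
s.close()
```

Whether this helps a given SMB or NFS setup depends on the server and the round-trip time, so it's very much a measure-before-and-after exercise.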

Reply to
The Natural Philosopher

snip

IME 50-60MB/s is pretty good for home kit, especially if that includes small files.

The best I've had is 30MB/s average. That's dropped to 15MB/s now that everything's connected (Cat6/mid-NAS).

Even so, 1GB/min isn't the end of the world, especially for chucking a lot of large film files about.
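Since the thread mixes GB/min and MB/s, a quick conversion helps keep the figures comparable (decimal units assumed throughout): 1 GB/min is about 16.7 MB/s, so the 50-60 MB/s figure quoted above is roughly 3-3.6 GB/min.

```python
def gb_per_min_to_mb_s(rate_gb_min):
    """Convert a transfer rate from GB/min to MB/s (decimal units)."""
    return rate_gb_min * 1000 / 60

for rate in (1, 3, 4.5):
    print(f"{rate} GB/min = {gb_per_min_to_mb_s(rate):.1f} MB/s")
```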

Reply to
RJH

That's just about the speed I can achieve on a good day with the old Acer laptop kitted out with a cardbus Gbit adapter (about a 60% or so boost on its internal fast ethernet port - the laptop is now getting on for 8 years of age, IDE HDD upgraded from 80GB to 250GB).

I got a similar boost fitting a Gbit PCI card to a desktop PC with a 12 year old MoBo. The 50 or 60 percent boost might seem a rather pathetic performance boost considering the tenfold potential of the Gbit adapters but, as they say, "Every little helps." :-)

I much prefer the 3 to 4 GB per min I can see between my own desktop and the NAS box. It would be even nicer to see the 4.5GB/min implied by the CrystalDisk benchmarks but that's never going to happen.

This, of course, is for large file (half GB or more) transfers. As always, the rate drops significantly when transferring sub MB sized files in any meaningful quantities due to the extra FS processing overheads. That's an inescapable fact of life and a good reason to pack such file collections into larger compressed archive files.
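The packing suggested above turns thousands of per-file round-trips into one large sequential stream. A minimal sketch using Python's standard tarfile module (the directory and archive names are hypothetical):

```python
import tarfile
from pathlib import Path

def pack_directory(src_dir, archive_path):
    """Bundle a directory of small files into one gzipped tar, so a
    network copy becomes a single large sequential transfer instead of
    many per-file metadata round-trips."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)

# Hypothetical usage:
# pack_directory("photos_2014", "photos_2014.tar.gz")
```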

Reply to
Johny B Good

Oh great! I have just put a load of CAT6 cable into our garage in preparation for the builder starting its conversion into a habitable room. Sounds like it's going to be a pain when I come to connect it up to the face plates!

Alan

Reply to
AlanC
