Cat5e or what?

Makes sense.

Like 'Winmodems' ;-)

Ok, good tip, thanks.

They may of course ... but what are the chances of an integrated NIC (even an Intel one) on a desktop board specifically being as capable as an add-on card, in the same way that onboard video is rarely as capable as even the simplest add-on video card (as demonstrated by the small heatsinks, or complete lack of them, on onboard video solutions)?

OOI, is there a utility that is good for doing such network throughput tests, or is it more 'real world' to transfer a largish block of data (as I believe you mentioned previously) and just time the result?

Cheers, T i m

Reply to
T i m

Gigabit is pretty mature technology here. It doesn't take much silicon area, or much processing power these days: the link is likely to be the limiting factor. I think it's approximately a cut and paste of the silicon IP from the external NIC chip onto the motherboard chipset. (Bearing in mind the ex-server 1G NICs you might buy on ebay are probably 10 years old, but still do the job)

If you're talking 10/40/100G then smarts in the NIC make a bigger difference. You don't get those integrated (some mobos have 10G, but it's an extra chip). That's also where thermals make more of a difference.

I don't know of a tool, but in networking the relevant number is packets per second, not Gbps. Most of the overhead is on dealing with packet headers and so on, rather than shovelling data out the door. Lots of small packets are more work than a few large ones.
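(If you want to roll your own 'transfer a block and time it' test, a few lines of Python will do. A minimal sketch only - the 192.168.1.10 address, port 5001 and 1 GiB payload are arbitrary placeholders, and a dedicated tool such as iperf does this job properly. Run it with 'server' as the argument on the receiving machine first, then 'client' on the sender.)

  import socket, sys, time

  HOST, PORT = "192.168.1.10", 5001   # hypothetical address/port - change to suit
  SIZE = 1 << 30                      # 1 GiB test payload
  CHUNK = 1 << 16                     # 64 KiB per send/recv

  if sys.argv[1] == "server":         # receiving end: just drain the socket
      with socket.create_server(("", PORT)) as srv:
          conn, _ = srv.accept()
          with conn:
              got, t0 = 0, time.monotonic()
              while got < SIZE:
                  data = conn.recv(CHUNK)
                  if not data:
                      break
                  got += len(data)
              secs = time.monotonic() - t0
          print(f"{got / secs / 125e6:.2f} Gbit/s ({got} bytes in {secs:.1f} s)")
  else:                               # sending end: shovel zero-filled chunks out the door
      buf = bytes(CHUNK)
      with socket.create_connection((HOST, PORT)) as s:
          sent = 0
          while sent < SIZE:
              s.sendall(buf)
              sent += CHUNK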

Theo

Reply to
Theo

And you feel that is still the case between most on-board offerings and server-centric add-on cards, Theo? I wonder why they sell such things then (over and above basic add-on NICs for people who want to replace a faulty on-board port or add another, etc.).

Ok.

Ok.

I didn't consider 'thermals' and NICs though, just video cards.

Understood. I ask because I haven't any real idea what the throughput is here; all I know is that I can generally do what I need to do without waiting too long, including moving a few GB's worth of .iso images about. ;-)

Alternative NICs are academic here as this PC is a Mac Mini, my server is an Atom board with no spare slots and only an onboard NIC, and the rest are phones, laptops and tablets that connect via WiFi in any case.

The server I built running WHS V2 to replace the old WHS V1 is on a more standard motherboard, so there could be room for a better NIC in there (so I'll check out the cheap Intel NICs you mentioned). ;-)

Cheers, T i m

p.s. The only problem is I don't know whether disabling the onboard LAN would save enough power to offset whatever extra an add-on NIC consumes, or what the advantage would be in general.

Reply to
T i m

That ought not be an issue with anything made during the past decade. NICs have been forced to use DMA ever since the advent of 8MHz-clocked 80286 processors - that Novell server box was built on such a system board and was able to max out the 10Mbps cheapernet link with ease. So much so that the older 286 machines donated to my children, with pre-IDE HDDs fitted, could load the Doom game faster from the server than from their own local HDDs (300 to 450 KB/s HDD transfer rates back then, circa 1997).

All NICs since those days of ISA slot cards use DMA. However, transfer protocol techniques have become a lot more sophisticated with the advent of Fast and Gbit Ethernet adapters, which may well add to the CPU overhead in the cheaper brands compared with the more sophisticated silicon on the Intel adapters (somewhat similar to a host-controllerless modem being way better than the s**te 'winmodem' but not quite as good as a standalone external modem).

When network speeds were limited to 10 and 100Mbps, the then-current 'wisdom' was that you didn't need much by way of CPU 'grunt' in a (file) server box (now referred to as a NAS box), since the long-established use of DMA for both HDDs and NICs offloaded the I/O 'donkeywork' from the CPU (which even then was several thousand times more powerful than the humble 80286 that such local network server technology had started out with).

Indeed, when I was upgrading my 'server' (now to be known as a NAS box) back in 2010, I rather thought the 2.2GHz-clocked Sempron CPU was 'serious overkill' for the task in hand (I hadn't counted on the abysmal scaling of Gbit Ethernet adapters' CPU requirements), hence my underclocking it to trade performance for reduced power consumption, courtesy of undervolting the core voltage.

These days, any modern 'entry level' MoBo + dual/quad core CPU + 4 to 8 GB (or more) of RAM with an unheatsinked built-in graphics chip, even when using software RAID, should have ample reserve with regard to I/O throughput (at least as far as a SoHo home server box on a Gbit LAN is concerned). Let's face it, even a 6-year-old micro-ATX board with SATA2 ports and its own built-in Gbit LAN port seems to be quite capable of this trick, at least when it's blessed with a dual core Athlon 64 - and that test with a decently specced Win7 client machine suggests that even a single core CPU may have sufficed (I didn't bother testing this at the time).

That's an interesting site, but I think the real problem with the alternatives, even those with equal on-chip support for checksum offloading, VLANs and so on, is the rather spotty driver support in the *nix-based NOSes used in a lot of commercial NAS boxes (as well as in "The Usual Suspects" for home-brewed NAS/server boxes - Debian Linux and derivatives, NAS4Free/FreeNAS and so on). With an Intel adapter you're more or less guaranteed good driver support; with other makes, less so (none, or broken, driver support IME).
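(On a Linux box you can at least check which offload features the driver actually enables. A rough sketch, assuming a Linux host with the ethtool utility installed and an interface named eth0 - both assumptions:)

  import subprocess

  # 'ethtool -k' dumps the offload settings the driver currently reports
  out = subprocess.run(["ethtool", "-k", "eth0"],
                       capture_output=True, text=True, check=True).stdout

  for line in out.splitlines():
      # typical lines: 'rx-checksumming: on', 'tcp-segmentation-offload: off'
      if "checksum" in line or "offload" in line:
          print(line.strip())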

There doesn't seem to be the same issue with HDD interface driver support, at least not for the more mature chipsets. The latest 'bleeding edge' stuff will always be problematic with the open-source OSes - usually resolved in time, but sometimes never, especially if it's a short-lived 'fad' of a development. That can be tough if you had the misfortune to buy into what seemed to be the latest last word in MoBo hardware, only to discover it was merely a short-lived intermediate step towards an even better, longer-lived technology (RIMM anyone?).

NAS (Network Attached Storage) boxes are even more orientated towards offering 'services' than their 'storage' title would suggest - more so even than the earlier breed of 'servers'. If your main concern is file serving, you'd be well advised to make sure all unnecessary services are disabled (DLNA/UPnP, iTunes/DAAP, Dynamic DNS, webserver, SNMP, Unison, FTP, TFTP, NFS and so on, to quote just half of what's built into a NAS4Free box).
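(A quick way to confirm nothing unnecessary was left listening is to probe the well-known ports from another machine. A minimal sketch - the NAS address is a placeholder and the port-to-service map is illustrative, not exhaustive; UDP-only services such as TFTP won't show up with a TCP probe:)

  import socket

  NAS_HOST = "192.168.1.10"   # hypothetical NAS address - change to suit
  SERVICES = {                # illustrative defaults, not exhaustive
      21:   "FTP",
      80:   "Webserver",
      111:  "NFS portmapper",
      548:  "AFP",
      3689: "iTunes/DAAP",
      8200: "DLNA/UPnP (MiniDLNA default)",
  }

  for port, name in SERVICES.items():
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.settimeout(1.0)
          listening = s.connect_ex((NAS_HOST, port)) == 0
      print(f"{name:30s} port {port:5d}: {'OPEN' if listening else 'closed'}")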

Of course, if you plan on using the NAS as a media server, you'll need DLNA/UPnP and its ilk, but don't make the mistake of enabling any transcoding features unless your NAS box has a higher spec than your desktop workstation. Just make damn sure your chosen media-streaming client is capable of handling your chosen media file types without needing such a 'crutch' (or else avoid the more obscure, less well-supported media formats in the first place).

For anyone who uses torrent sources, enabling the BitTorrent client on the NAS is a no-brainer (assuming the NAS is left to run 24/7). A torrent client demands very little by way of system resources, even on an underpowered NAS toy. Indeed, anyone heavily into tying up their desktop PC overnight accumulating torrents could justify a toy NAS, with its built-in torrent client service (pretty well all of them have one), on this one feature alone.

Harking back to the topic of cabling choice, I'd have to say that CAT5 or CAT5e is the correct answer in the OP's case. Gbit ethernet is likely to remain viable for the next decade, by which time fibre optic kit should become cheap enough to consider using the CAT5 cables as 'draw strings' to pull optical fibre cable through, especially if the CAT5 was installed with this in mind in the first place.

Reply to
Johnny B Good

Apart from gaming or CAD machines (with a PCIe graphics card or two) there is unlikely to *be* a separate graphics chip these days, the GPU inside the CPU has reached the point of being good enough ...

Reply to
Andy Burns

Last time I played with vlans at home I locked every computer out and couldn't get access to the router to sort out my mistake without a factory reset IIRC.

Another case of too much knowledge and too little experience causing big problems at the press of a button. :)

Another reason I take router config dumps before fiddling with the networking dark side... (most of the time)

Reply to
www.GymRatZ.co.uk

I've been buying servers this week, as it happens. Every single one has a 1G port (or two or three) on the motherboard. The higher end ones have 10G ports on the board. Some of those also have modules for 40G.

You don't buy additional 1G NICs for these servers because they're 'better' than the mobo, you buy them because you want more ports. If the mobo manufacturer put on a lame NIC, that's a good sign to skip that board.

Another factor here is remote access. Often an onboard NIC can be used for remote management/KVM over IP of the motherboard, possibly selectable. That doesn't work with a third-party external NIC.

At 10G it's a different question, partly because your choice of NIC depends on your cabling (copper, optical, SFP+?), your driver stack and your application (are you going to offload any work to the NIC). But at 1G you don't need to worry: even an Atom should be able to keep up.

1G is just so slow.

Theo

Reply to
Theo

Ok.

Ok.

Wow.

Ok?

And that was my point, Theo. An onboard NIC on a basic desktop may well be 'lame' as far as server duties are concerned, but it could also be perfectly adequate for a std desktop role?

So that was the point ... if we are taking basic desktop hardware and turning it into servers (as I and others here seem to have done), *could* we be limiting the maximum output from the server by not fitting a 'better' NIC?

I'm not stating that desktop motherboards all have lower-performance network interfaces than server-focused boards; I'm asking. Given that you can buy add-on NICs for both desktop and server roles, why are there two types if there isn't a difference? You may have already answered this by saying 'there isn't a difference *today*', which would mean the exact same on-board NIC is supplied on desktop and server solutions today.

I thought I had seen an add-on NIC supplied with a small cable that allowed it to do that (or maybe it was for something else)?

So, 10 workstations streaming video from a real server and the same number streaming from a basic Atom board will see the same throughput on the 'server' NIC (genuine question)?

Replacing the on-board NIC on the Atom board with a server specific NIC wouldn't improve matters at all (assuming the data bottleneck wasn't elsewhere)?

Cheers, T i m

Reply to
T i m

There are some different cases:

Recent Intel CPUs have an onboard Intel NIC. There's no point replacing that.

Older boards may have a discrete NIC chip from a random vendor.

'Embedded' stuff (Atoms, Microservers, etc) may have a discrete NIC because they don't have one on-die, and the board vendor may have chosen a cheap one.

Extra NICs may not be Intel (eg a recently purchased 'gaming' board has one Intel and one Realtek).

If you have heavy workloads, or are worried about drivers, then replacing the NIC may make sense in the latter groups. But what I'm getting at is that it doesn't, in general, make sense to buy a new PC and immediately replace the NIC - unless you picked a particularly lame PC to begin with.

Some are, some use one of the existing ports via the BMC CPU (usually an ARM). Sometimes you have to add a 'key' with firmware or a licence code, but the functionality is already on the mobo.

For simply pushing packets out the door (eg as a router), 1G is probably the bottleneck. If you're doing other work (like fetching that stuff off disc, encoding, etc) then sooner or later you'll run into headroom issues, but they likely aren't due to the network stack (on *nix anyway, I don't know about Windows).
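(To put rough numbers on the packets-per-second point from earlier: each frame on the wire costs an extra 8 bytes of preamble plus a 12-byte inter-frame gap, so the line-rate frame arithmetic at 1 Gbit/s works out as follows - standard figures, nothing assumed:)

  LINK_BPS = 1_000_000_000        # gigabit Ethernet line rate
  OVERHEAD = 8 + 12               # preamble + inter-frame gap, bytes per frame

  for frame in (64, 512, 1518):   # min, mid and max standard frame sizes
      wire_bits = (frame + OVERHEAD) * 8
      print(f"{frame:5d}-byte frames: {LINK_BPS / wire_bits:10,.0f} packets/s")

That's roughly 1.49 million packets/s at minimum frame size against about 81,000 packets/s at maximum - nearly 20x the header-processing work for the same link speed, which is why pps rather than Gbps is the number that hurts.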

It may help - don't underestimate how bad a NIC can be made in the name of cost-reduction. But that's different from the original question of swapping an onboard Intel NIC for an add-in Intel NIC - there's no point in that.

Theo

Reply to
Theo

I'm not sure that was actually my original question (or it wasn't meant to be). It was more: 'is there any point replacing an onboard NIC (Intel or otherwise, but I'll go with Intel for these purposes) with a more server-orientated NIC when considering a (home) server?'

Now, there may be no such thing as a 'server-specific NIC' these days, in which case I could see how there would be no advantage. However, there do seem to be such things offered by the likes of Intel themselves, claimed to offer high performance, but maybe you are right and it's just a sales exercise (these days)?

Cheers, T i m

Reply to
T i m

You buy them for servers when you need more ports than are on board. Have a look at a recommended setup for a hypervisor with no single points of failure. That's the market - it's not a sales exercise.

Reply to
Clive George

So, you buy a 'std' card or a 'server' one?

Cheers, T i m

Reply to
T i m

I just get the ones the supplier sells with the server, thus avoiding problems with people denying responsibility for the things working.

Intel don't appear to sell cards not described as "Server Adapters", so if you're buying Intel, that is the standard one.

Reply to
Clive George

Yes, I was wondering whether or not to 'shoehorn' that reference in. :-)

I gave up figuring out how to and just left it out.

Reply to
Johnny B Good

That's not a bad plan, if you aren't building your own etc.

Right, so that doesn't really help us determine whether there is a difference between std desktop and server-grade NICs then.

Unless some other manufacturer still makes both types, and if they do, I'd still question the differences?

Cheers, T i m

Reply to
T i m

In article , Clive George wrote:

They do/did a range of "desktop" adapters too. I suspect the difference is just in the naming and, perhaps, they use the same controller chip and disable some of the more advanced features in the "desktop" version drivers.

I have a "Pro/1000 PT Desktop Adapter" (i82572) in my bits box. This is a superb PCIe adapter. I would be using it but the motherboard I have has an i217-LM chip on it, and I haven't been able to find out if it would be worth replacing with the Pro/1000 - suspect not.

In the earlier days of Linux, it was a relief not to have to battle with the awful drivers for 3Com cards and just ram in an Intel 82556/82557 based card - they Just Worked (tm).

Reply to
Mike Tomlinson

I was looking at older versions of those, and the difference was that the Desktop adapters were PCI and the Server adapters PCI-X. So there was a performance difference, but only because servers had a faster slot (one that wouldn't take PCI cards).

On the current ones the difference seems to be virtualisation, multiple ports and fibre options - all hardware features that the average home user doesn't care about.

Theo

Reply to
Theo

In article , Theo wrote:

Can't have been, because I have a Pro/1000 desktop adapter right here that is PCI-e.

I also have a few HPaq PCI-X quad-port server adapters - those use 4 x i82557 controllers.

Reply to
Mike Tomlinson

PCI = traditional 32-bit shared parallel bus
PCI-X = PCI expanded to a 64-bit shared parallel bus, different connector
PCI Express (PCI-e) = point-to-point serial links, 1/2/4/8/16 lanes

Pro/1000 covers PCI, PCI-X and PCI-e. All modern cards are PCI-e. The list I was looking at had things like the Pro/1000 MT Desktop as PCI and the Pro/1000 MT Server as PCI-X - I assume this is a general trend, but I didn't check every model.

Are you sure? The 82557 is a 32 bit PCI chip, so would be wasted on a 64 bit PCI-X board. I suppose they might have done that if your server only has a PCI-X slot, or they couldn't route all 64 data lines on the board.
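(Back-of-envelope peak figures for those slot types, ignoring bus protocol overhead, show why the slot mattered more than the card:)

  MB = 1_000_000

  buses = {                                           # theoretical peaks
      "PCI 32-bit/33MHz":    32 * 33_000_000 // 8,    # shared by all PCI cards
      "PCI-X 64-bit/100MHz": 64 * 100_000_000 // 8,
      "PCI-X 64-bit/133MHz": 64 * 133_000_000 // 8,
      "PCIe 1.0 x1":         250 * MB,                # per lane, per direction
  }
  GIG_E = 125 * MB    # one gigabit port at full tilt, each direction

  for name, bw in buses.items():
      print(f"{name:22s} {bw / MB:5.0f} MB/s (~{bw / GIG_E:4.1f}x one gigabit port)")

A single gigabit port flat out eats most of a plain PCI bus on its own, while even one PCIe lane leaves it headroom - and a multi-port card really does want PCI-X or PCIe.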

Theo

Reply to
Theo

In article , Theo wrote:

I do know the difference, thanks :) and the one I have here is a "Pro/1000 PT Desktop adapter"

formatting link

Appears to be a fairly basic PCI-e gig adapter for desktops, but gets very good reviews for performance.

The two-letter suffix indicates features available on the card - TCP offload, CPU balancing, compatibility with hardware virtualisation, etc. "Server" cards have more features.

I've used several Compaq NC3134 4-port 64-bit PCI-X cards in Proliant servers. Those used Intel i82559 chips.

ebay 361048145403 shows one with and without the daughterboard.

Apologies, it's the i82559, not the i82557, but it's still a PCI device.

Not really - if four of them are used and active, the extra bandwidth of 64-bit 133MHz PCI-X is required. I can't remember if some form of bridge or load-balancing chip is used.

Proliant DL380s, with three PCI-X slots. The rules for fitting cards are a bit arcane - if you fit a 33MHz PCI card, it reduces the speed of all three slots to match.

There is only one 133MHz slot - the other two were 100MHz.

I used LSI Logic U320 PCI-X SCSI cards in them to drive external 16-bay SCSI/SCSI disk arrays, and later a SCSI/SATA 24-bay array. This one:

formatting link

a very good card which worked well with Linux. An Adaptec 29320A-R intermittently locked the machine hard under heavy load.

Think that's enough ancient history for one day :)

Reply to
Mike Tomlinson
