ATX motherboards.

I notice some recent ones have an 8-pin power connector on the board (CPU?); older ones more usually have 4. Which explains why my fairly recent PS has an 8-pin that can be split into 4+4.

But it also has an additional 8-pin (PCI?) connector which looks similar, and can be split into a 6+2. Judging by the wire colours, it could cause a rather big bang if mixed up with the other one. They do seem to have shaped pin shrouds, but I'd guess that wouldn't stop a determined push.

Reply to
Dave Plowman (News)

For some reason this reminds me of a Russian rocket failure where an accelerometer had been fitted upside down, despite being shaped to fit only one way.

The crash report noted that it took the engineers "some effort" to get it into place :)

Reply to
Jethro_uk

Yup, so-called standby power...

Normally to feed extra power to a graphics card.

They are in theory keyed / polarised to prevent that - but as they say, you can never make anything foolproof since fools are so ingenious!

Reply to
John Rumm

The extra 4 pins are just in parallel with the first 4 pins of the ATX12V connector, and only become necessary with the greedier CPUs (10, 12, 14, 16 core); the really high-power servers use different sockets.

Again, the 8-pin vs 6-pin are only needed for top-end GPUs, though they're often fitted "for show" on graphics cards that don't come close to needing 2x 8-pin power cables.

Reply to
Andy Burns

About 75% of the way down this page is info on the 4+4.

formatting link

The 2x2 part, with two 6 amp wires per rail, gives 12V * 12A = 144W of power transfer. If the VCore converter was 90% efficient, this is sufficient to run a 130W processor.

If using all eight wires, then that would cover power needs up to 260W of actual CPU power.
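As a back-of-envelope check, here is that arithmetic in a few lines of Python. The 6 A per wire and 90% VCore efficiency figures are the assumptions from above, not hard specs:

    # Rough EPS/ATX12V connector arithmetic.
    # Assumptions from the discussion: 6 A per 12 V wire, 90% efficient
    # VCore converter. Real ratings vary by pin and wire gauge.
    def cpu_power_watts(n_12v_wires, amps_per_wire=6.0, vrm_efficiency=0.90):
        connector_watts = 12.0 * n_12v_wires * amps_per_wire
        return connector_watts * vrm_efficiency

    print(cpu_power_watts(2))  # 4-pin (2x2): 129.6 -> ~130 W of CPU power
    print(cpu_power_watts(4))  # 8-pin: 259.2 -> ~260 W of CPU power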

Modern processors (if you're building a system today) can stretch past their TDP. If you check ark.intel.com, the power number there may not be informative enough to make the selection of 4 versus 8 pins very scientific.

And there are some CPUs up at the 250W level, so that 8-pin capacity could get used in such cases. Maybe an AMD Epyc or so.

The wires take 6 to 10 amps. The wires can carry less current safely when there are more Molex pins side by side, which is why the main connector has a 6 amp rating on its wires. The smaller connector could carry more current, especially if the wire gauge were beefed up a bit. So when drawing 260W of usable CPU power from VCore, there's probably still room for a bit more; the 260W figure is probably conservative.

At idle, Intel processors can use somewhere around 13W (12V @ 1.1A or so). That leaves plenty of headroom on the ampacity of the connector's wiring when the machine is idle.

"Standby power" is the +5VSB pin on the main connector. Usually power supplies range up to 3A on that pin. When the computer sleeps, a four memory slot computer uses 5W (5V @ 1A). If charging an Apple iPad, at an additional 2A, that could use all the available

3A of power on +5VSB. And the fan does not spin while the power supply is making standby power (convection cooled). If the computer has eight DIMM slots, the power used during sleep might be 7.4W (a different generation of RAM using less power). And then you'd have less current for device charging. The power used by the DIMMs, is for auto-refresh while the computer sleeps.
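A toy version of that +5VSB budget, using only the figures above (the 3 A limit, ~1 A of DRAM refresh and 2 A of tablet charging are this thread's examples, not universal values):

    # Toy +5VSB budget. Figures are the examples from the discussion.
    VSB_LIMIT_A  = 3.0        # typical +5VSB rating
    dram_sleep_a = 5.0 / 5.0  # 5 W at 5 V for four DIMM slots in S3
    tablet_a     = 2.0        # iPad charging at 2 A

    headroom = VSB_LIMIT_A - (dram_sleep_a + tablet_a)
    print(f"+5VSB headroom: {headroom:.1f} A")  # 0.0 A -- fully used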

If the industry had used static RAM, such as the MK4028 in some old pinball machines, a device with no clock would draw essentially zero static power. With the denser DRAM devices, we pay for the convenience of the extra capacity in the form of power usage during S3 sleep. I get my power numbers in cases like this with a Kill A Watt meter on just the PC (it allows estimating things without opening the computer case).

And Usenet posters claim to have forced wrong things into holes, but I think they're pulling my leg :-) One individual, for example, claimed to be stupid enough to take a DIMM into the shop and cut an extra notch in it, so it would fit the socket on the board he wanted it in. The keys for UDIMM versus RDIMM are adjacent to one another, which makes it theoretically possible for a clever sort to claim to be modifying the DIMM "so it'll fit". As if the factory "made a mistake". Ow, my leg.

At one time, the 12V rails, like 12V1 and 12V2, adhered to the low-voltage safety limit "per rail" of 12V @ 20A. Modern supplies can have a single transformer making up to 70A of DC available, and you'd ask "well, what happens if all 70A goes down X wires in a short?". There should be current-flow monitors on the various looms, to still enforce some limits and prevent a gooey mess of melted insulation or glowing wires. They could, if they wanted, monitor the current on the 4+4, pretend it's two separate rails, and cap the current at 20A for each 4, or 40A at most. The entire 70A should not be able to flow without the PSU shutting off. You can test these behaviours on a professional ATX load tester (Anandtech, Tomshardware and JonnyGuru have them). It would be more difficult to do on your own lab bench with some ceramic resistor banks.

The supply in that example would have a 70A limit enforced somehow as an overall limit, but it also has monitors on the separate looms (12V1, 12V2, 12V3 and so on). Only a few brands bother with a diagram showing details like that. The 12V1, 12V2 and 12V3 rails all start from the same place, so it's not possible for currents to flow from 12V1 to 12V2 if you mix them. But if you're clever, you could partially defeat any 20A limiters in the path. And the limiters aren't exactly 20A either; they could trip at some higher level such as 26A. Only a bench test (preferably with a real ATX tester, on someone else's bench) would dig out the exact details. An examination of the distribution PCB (on the modular supplies) might hint at how the current limits are enforced.
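A minimal sketch of that per-loom over-current idea, treating the 26A trip point and 70A overall limit from above as assumptions rather than any particular supply's real behaviour:

    # Toy model of per-loom over-current protection (OCP).
    # Assumed figures from the discussion: 26 A trip per monitored loom,
    # 70 A overall limit. Real supplies differ.
    LOOM_TRIP_A  = 26.0
    TOTAL_TRIP_A = 70.0

    def psu_should_shut_down(loom_currents):
        """loom_currents: amps measured on each monitored 12V loom."""
        if any(i > LOOM_TRIP_A for i in loom_currents):
            return True  # one loom exceeded its limiter
        return sum(loom_currents) > TOTAL_TRIP_A

    print(psu_should_shut_down([18.0, 12.0, 5.0]))  # False -- within limits
    print(psu_should_shut_down([30.0, 12.0, 5.0]))  # True  -- 12V1 tripped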

Paul

Reply to
Paul

This also happened on the Stardust mission, when the last chute did not deploy: the accelerometer should have triggered the drogue deploy, and would have if it had not been fitted upside down. Brian

Reply to
Brian Gaff (Sofa)

Thanks - I'd sort of guessed that.

Never had one that needed it. ;-)

But if I were designing such a device, I'd make sure you couldn't inadvertently fit a connector with the wrong polarity. Surely there are plenty of connectors out there which could have been used? Even more so given they fitted a different connector for SATA power.

Reply to
Dave Plowman (News)

I did Google it and got lots of hits.

Understandable if a replacement PS had an 8-pin and a 4-pin connector, and the 8-pin wasn't needed for the graphics card. The natural reaction would be to use the 8-pin one.

However, with a dead short across the 12V rail, I'd expect a decent PS not to start up?

Reply to
Dave Plowman (News)

SATA was designed with enterprise servers in mind. It allows slide-in/slide-out drive installation, and the connector mates without a fuss.

The cabling for desktop computers to work with SATA was an afterthought. The first generation of desktop cabling had no retention feature, and the connectors would fall off. Something they would have noticed if they'd been awake.

Paul

Reply to
Paul

Or thermal cycling means that all of a sudden the PC starts logging all manner of funny disk errors, which are fixed by removing and re-inserting the cable on the drive and M/B. They don't even seem to bother with gold-flashing the contacts on the cable now either.

Reply to
Andrew

My new one arrived yesterday.

I was hoping to use my old graphics card. It has the same PCI slot fitting, but on switching on I got multiple bleeps and nothing. Not even the BIOS page.

The other thing is that multiple boot isn't as easy. I fitted a new HD for Win 10 and was hoping to be able to boot the old Win 7 one as an alternative too. Something to do with EFI.

Reply to
Dave Plowman (News)

Is this a PCI Express video card?

The Playtool site has information on AGP.

formatting link

Vanilla PCI has 3.3V keying and 5V keying; if both notches are present, the card is universal and works with either kind of motherboard. Most of the time the key in the PCI slot is the 5V one.

formatting link

PCI Express: there were some old revisions that wouldn't work with newer systems because the parties could not auto-negotiate properly. A workaround at the time was some sort of "flashing" exercise that would force the video card to run at the lowest rate, bypassing the negotiation process in an attempt to make things work.

*******

Windows 7 doesn't support Secure Boot as far as I know. To do a dual boot with UEFI, I think I had to set the BIOS to "Other OS", which roughly translated means "Disable Secure Boot". While that setting is normally discussed in the context of Linux dual boot, in some cases Windows 7 is also in that situation.

Paul

Reply to
Paul

Yes. An Asus EN7300TC512. But this new motherboard says it has PCI Express 3; no clue if earlier (1 or 2) cards are compatible.

Thing is, my KVM switch and monitor are DVI, as is this computer. I've found an HDMI to DVI lead which sort of works, but the picture from the MB HDMI is nothing like as good as of old.

Something else I'll have to investigate.

Reply to
Dave Plowman (News)

Yes they are.

Does BIOS/UEFI have a setting to prefer PEG (PCI Express Graphics) or IGD (Integrated graphics device)?

Reply to
Andy Burns

The problem was that inserting the graphics card turned the MB into a bleeping wreck. Couldn't even get to the BIOS page.

After some experimenting with it unplugged, I changed the PCIe bifurcation support from auto to PCIe x8/x8, so it at least now boots. Just got to find the drivers for it now. ;-)

The old MB had a nice simple BIOS page. This one looks designed to keep geeks happy.

Reply to
Dave Plowman (News)

Back to square one. It's showing the 'VGA' fault LED on the MB again, and refusing to start.

Could it be a PS thing? It's a relatively recent one, and 500W, which the MB booklet says should be OK.

The MB has an 8-pin socket for 12V to the CPU and another 4-pin one alongside it. My PS only has the 8-pin one. My DVM shows all those sockets just wired in parallel.

With everything plugged up and working, I'm seeing 5.2V and 12.3V at a spare Molex.

The other odd thing is that it won't power up with no monitor connected to the onboard HDMI connector (no video card in place).

Reply to
Dave Plowman (News)

You mentioned the GPU model, but not the M/B model as far as I can see?

Sounds like it's not certain if it should be using PCI or onboard graphics ...

Reply to
Andy Burns

It's a Gigabyte Z490 Vision G, Intel Core i7-10700, 32GB HyperX Fury. Graphics says Intel UHD 630.

The internal graphics should be OK for my purposes - except that my other bits (KVM) and monitor are DVI, which is what this computer uses. Hence wanting to use the DVI graphics card.

I've got several HDMI to DVI leads, but the HDMI output on the MB doesn't like them and throws up the warning light.

Reply to
Dave Plowman (News)

I hope you're doing all this fine testing *without* the KVM in the circuit.

Test first with monitor direct to passive adapter cable.

I can't suggest a DP to dual-link DVI-D in this case, because for an item which is actually an active device, the price they ask can be almost as much as a video card. The cheap ones (with "Active" printed in white paint on the housing) are obviously not active and are a scam. You can do DP to HDMI to single-link DVI digitally and passively, at a guess, and that could be what the cheap ones are actually selling. (The only way to get true dual-link DVI is with an active conversion. That gets you, say, 2560x1600.)

Your old card EN7300TC512 may be DVI-I. That gives two possible ways the monitor can (eventually) pick up a signal through your old cabling and KVM. The DVI is single link. That also means the passive HDMI to DVI (which is single link) should work from a resolution perspective, because that's what you were driving out with before. You could check your cabling to see if it's DVI-D only, or DVI-I (capable of either), on the output of the KVM side.

Video features:
  Maximum digital resolution: 1920 x 1200
  Maximum VGA resolution: 2048 x 1536

If the monitor was actually using the VGA signal, the KVM would probably place a ruination on such a signal, and cable reflections tend to make a mockery of VGA at extreme res. At 1920x1080, DVI-D and VGA are about equal. Above that, the VGA might look a bit worse.
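As a rough sanity check on what single-link DVI can carry: the spec caps the pixel clock at 165 MHz, and the ~25% blanking overhead below is a round-number assumption, not exact CVT timing:

    # Rough single-link DVI feasibility check (165 MHz pixel-clock cap).
    # Assumes ~25% blanking overhead -- a round number, not CVT timing.
    def fits_single_link(width, height, refresh_hz=60, blanking=1.25):
        pixel_clock_mhz = width * height * refresh_hz * blanking / 1e6
        return pixel_clock_mhz <= 165.0

    print(fits_single_link(1920, 1080))  # True  -- ~155 MHz
    print(fits_single_link(1920, 1200))  # False -- needs reduced blanking
    print(fits_single_link(2560, 1600))  # False -- needs dual link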

A KVM can sometimes interfere with DDC/CI and the reading of EDID.

I'm sure the dialing in of this setup is coming soon... :-) You have the materials. Knowing what crockery hides inside the KVM is part of the fun. If the signal looks like shit, maybe the KVM is doing DVI to VGA?

I have an HDMI to VGA and a DP to VGA here (active devices), and both of them look fine on VGA. But the older stuff might not be as good. Adapters like that are more reasonably priced than DP to dual-link DVI-D (for no particular reason).

You should have put the video card in the slot closest to the processor, and that would have removed the dramatics of the bifurcation logic. It should "just work" in slot 1.

I don't know why you're getting a VGA warning when it can't detect a monitor. The Intel graphics, like any other video card, should be able to do impedance sensing and know some 100 ohm loads are on the diff pairs for the RGB signals. It should really drive out a VESA signal, even without EDID. The impedance tells it a monitor is there, and the monitor should be "safe" with 800x600 or 1024x768 on it.

Paul

Reply to
Paul

At the moment it's set to auto.

They are just cables. Some seem to have all the DVI pins, some only some of them.

None work via the KVM switch, which worked just fine on the DVI output of the original graphics card - hence wanting to use it now.

Reply to
Dave Plowman (News)
