PC boot time

Managed to trip the workshop breaker today which crashed out the old PC running Win7.

When I restarted it, it said 'resuming Windows' and got to the desktop far quicker than normal. Why?

Reply to
Dave Plowman (News)

Hybrid sleep?

When enabled, if the PC goes to sleep it also writes its state to disk in case power is lost; if that happens, it's like waking from hibernate rather than booting from scratch.
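Roughly how that plays out at wake-up, as a minimal Python sketch (names and structure assumed for illustration, not Windows internals):

# Minimal sketch of the hybrid sleep idea above: the session is written to
# disk *and* kept in RAM, so a power cut degrades to a hibernate-style resume.

def enter_hybrid_sleep(session, hiberfile):
    hiberfile["image"] = dict(session)   # safety copy written to disk
    # the RAM copy stays powered, as in an ordinary sleep

def wake(session_in_ram, hiberfile):
    if session_in_ram is not None:
        return "instant resume from RAM"
    # power was lost: the RAM image is gone, so restore the on-disk copy
    return "'Resuming Windows' from the disk image: " + str(hiberfile["image"])

hiberfile = {}
session = {"open_apps": ["workshop spreadsheet"]}
enter_hybrid_sleep(session, hiberfile)
print(wake(None, hiberfile))   # breaker tripped -> RAM lost -> resume from disk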

Reply to
Andy Burns

It's in safe mode? Brian

Reply to
Brian Gaff

The PC was already in a low-power hibernate mode with the session saved to disk when the breaker went.

If the PC had been active when the power went off suddenly then it would have needed to rebuild some files and take an age or two to do it.

They boot even faster if you swap spinning rust for solid state.

Reply to
Martin Brown

Amen to that. I invested £30 and my old Dell now boots in seconds

Reply to
stuart noble

I think I am down to 7 seconds, two of which are BIOS checks, and three are setting up the desktop and X window environment after logging in.

CPU bound really, as the laptop is a lot slower.

The most incredible boot is Windows XP in a VM. It's about a second to 'resume' from image, and about 5 from cold.

Reply to
The Natural Philosopher

It has got an SSD. Can't say it is noticeably faster.

Reply to
Dave Plowman (News)

Something is wrong if it isn't at least a factor of 2 faster, and normally nearly an order of magnitude faster (depending on how extensive your default power-up BIOS checks are). Some BIOSes these days offer an option to save the working RAM image to SSD during controlled shutdown, so that you can quite literally restart from exactly where you were before.

My SSD practically maxes out a 6G SATA link whereas my spinning rust disk barely scrapes past 10% of the bandwidth of a 3G SATA.
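To put rough numbers on that, a back-of-envelope Python sketch (assuming the usual SATA 8b/10b encoding, i.e. 10 bits on the wire per data byte):

def sata_usable_mb_per_s(link_gbit):
    # raw bits/s, divided by 10 for 8b/10b encoding, converted to MB/s
    return link_gbit * 1e9 / 10 / 1e6

ssd = sata_usable_mb_per_s(6)          # ~600 MB/s ceiling on a 6G link
hdd = 0.10 * sata_usable_mb_per_s(3)   # "barely 10%" of a 3G link, ~30 MB/s

print(f"6G SATA ceiling: ~{ssd:.0f} MB/s")
print(f"HDD at 10% of 3G SATA: ~{hdd:.0f} MB/s, roughly {ssd/hdd:.0f}x slower on sequential transfers")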

Reply to
Martin Brown

Well, with my laptop when I first got it (thanks Mr Rumm) and fitted the SSD, it was extremely fast to boot. But with normal use and all the Windows updates it has slowed down considerably.

This workshop PC is a lot older, but running the same OS. Processor is an Athlon 64 3500+ 2.21GHz. And can't say the boot time improved noticeably when fitting the SSD. Although it has another HD too, with XP on it.

Reply to
Dave Plowman (News)

Years ago, well, the early 2000s, I heard that some buses would only run at the speed of the slowest device, so if you had a CD/DVD drive on the same IDE chain then your HDs would run at the same speed as the CD drive, so not very fast.

Reply to
whisky-dave

I just leave my laptop on all the time as it sometimes won't recognise the HDD on bootup. The screen goes black after 5 minutes.

Reply to
Max Demian

Things like additional programs loaded at startup can have quite an effect. If you run "autoruns" from live.sysinternals.com you can see exactly what is being loaded...

Things like virus scanners can have a big hit on boot performance.

There are several components of boot time, including how long it takes to get through the various BIOS initialisation stages before it even gets to start loading stuff from disk. Some older machines can be quite slow there (especially if they have other hardware that needs initialisation - like one of my machines that probably adds 10 secs just doing the SCSI init and bus scan).

CPU performance will also have an effect on booting times, since a windows boot will load lots of separate executable images that need to initialise and run. On a modern processor the boot will be mainly IO bound, but on older ones, the lack of processor oomph becomes far more noticeable - SSD upgrades will often then just shift you a little further along the road to the next bottleneck.

Lastly WinXP can have a number of issues on SSD drives. Not supporting the trim command can mean progressively slower write operations (although that should not affect boot too much).

Also the XP disk preparation routines did not align partitions to suit drives with 4K sectors. That's bad news for modern drives of all types, since they are then forced to do a read/modify/write cycle on two sectors for every single sector written by the OS, creating a "write amplification" effect. That's another big performance hit on writes, and a smaller one on reads (mostly on random reads). It's also an additional problem on SSDs, since it burns through the flash page write cycle limit more quickly and gives the drive's wear levelling algorithm more to do. (Having said that, the write cycle limit on modern SSDs is pretty high, so you will be hard pushed to actually wear out an SSD.)
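A quick Python illustration of the alignment point (LBA figures are the usual defaults, assumed here for illustration): XP's first partition starts at LBA 63 in 512-byte units, which is not a multiple of 4K, so every 4K cluster write straddles two physical sectors.

PHYS = 4096  # physical sector size on an advanced format drive

def physical_sectors_touched(partition_start_lba, cluster_index, cluster_size=4096):
    # byte range of one OS cluster, given the partition's starting LBA (512-byte units)
    start = partition_start_lba * 512 + cluster_index * cluster_size
    end = start + cluster_size - 1
    return end // PHYS - start // PHYS + 1

print("XP default, LBA 63:  ", physical_sectors_touched(63, 0))    # 2 -> read/modify/write
print("Aligned,    LBA 2048:", physical_sectors_touched(2048, 0))  # 1 -> clean single write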

Reply to
John Rumm

That's not why an SSD is faster.

With a conventional spinning rust disk, to read something off the disk you have to:

1. Send it a command
2. Have it work out what you mean
3. Move the heads across the disk to the right track (like selecting a track on an LP)
4. Wait for the disk to go around to the right position
5. Read the data off the disk
6. Transfer the data to the PC.

The old PATA CD slowdown thing affected #6 only.

On a spinning rust disc 3 & 4 will both take several milliseconds, and are the slowest part of the whole process.

On an SSD steps 3 & 4 are missing from the sequence.
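Putting typical figures (assumed, not measured) on those steps for a 7200 rpm drive, in Python:

avg_seek_ms = 9.0                       # step 3: move the heads
avg_rotation_ms = 0.5 * 60000 / 7200    # step 4: wait half a revolution on average
transfer_ms = 4096 / 150e6 * 1000       # step 5: read 4 KB at ~150 MB/s off the platter

hdd_ms = avg_seek_ms + avg_rotation_ms + transfer_ms
ssd_ms = 0.1                            # typical flash read latency; no steps 3 & 4

print(f"HDD random 4K read: ~{hdd_ms:.1f} ms "
      f"(seek {avg_seek_ms} + rotation {avg_rotation_ms:.1f} + transfer {transfer_ms:.2f})")
print(f"SSD random 4K read: ~{ssd_ms} ms, roughly {hdd_ms/ssd_ms:.0f}x quicker")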

SSDs _do_ have one problem spinning rust discs don't, but you rarely see it - if you write a _LOT_ of data they can run out of spare blank space to write to, and they have to slow down while they clean a bit more. The trim command (hi John!) is designed to let it know about free space it can erase in advance.

Andy

Reply to
Vir Campestris

This is more or less bollocks.

SSDs can only write ENORMOUS blocks at a time.

So changing one bit in a file will, in the end, result in maybe 10K bytes of write, normally to a fresh block to minimise wear, which is all down to writes.

This is not down to how MUCH data is written, but simply to how often it happens.

In practice modern SSDs have a lot of cache RAM inside, to minimise writes, and of course modern operating systems (even Windows) will also cache writes in RAM.

What this means is that lots of data does not slow down SSDs at all. Only if all the cached writes in the disk and the operating system get full will the disk actually write anything at all, and SSD writes are FAST.

Since they are done in HUGE chunks.

Reply to
The Natural Philosopher

I never said it was the reason, and I didn't say anything about the speed of SSDs.

No, what happened, I think, was with ATAPI ver. 4: the max bus speed of 66MB/s went down to 33MB/s if a CD drive was on the same bus.

Irrelevant, and I think those that use the term 'spinning rust' might not know what they are talking about.

So.

They don't have to slow down. What happens with SSDs is that the data can't be overwritten (currently), so what you need to do is actually erase the data, rather than just the link to that file as you did with HDs.

Not really a problem nowadays.

Reply to
whisky-dave

No, actually its a pretty good explanation.

Only if you think 2K to 16K is "enormous".

Typical page sizes on modern NAND flash devices range from 2K to 16K. These are arranged in blocks of typically 128 or 256 pages per block.

10K is "unlikely" (in the extreme, think powers of 2)

For modern advanced format drives 4K is the standard sector size used by the OS. So 4K is the smallest write size the OS will support. How that maps to flash pages will depend on the physical page size of the devices being used.

The write will usually be to a fresh *page* (not necessarily a fresh block). There are only limited modifications you can do to a page once written, and you can't erase a single page at a time. So page writes are typically to a fresh page within a block. If there are insufficient free pages in a block, then the drive *may* have to copy all remaining valid pages from the current block to a new one. (And when doing that, it would much rather not have to erase an existing used block full of invalidated pages first.)
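As a toy model of that behaviour in Python (simplified, not any particular drive's firmware; sizes taken from the typical figures above):

PAGE_SIZE = 4096          # bytes per page (real devices: 2K to 16K)
PAGES_PER_BLOCK = 128     # pages per erase block

class FlashBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased / free

    def write_page(self, data):
        # writes must go to a fresh page; a used page can't be rewritten in place
        for i, page in enumerate(self.pages):
            if page is None:
                self.pages[i] = data
                return i
        raise RuntimeError("block full: a block-level erase is needed first")

    def erase(self):
        # erase only works on the whole block, never on a single page
        self.pages = [None] * PAGES_PER_BLOCK

blk = FlashBlock()
blk.write_page(b"v1" + b"\0" * (PAGE_SIZE - 2))   # lands in page 0
blk.write_page(b"v2" + b"\0" * (PAGE_SIZE - 2))   # the "update" goes to page 1; page 0 is now stale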

*Mostly* down to writes, however flash does suffer from a "read disturb" characteristic that means the typical bit error rate for a block will tend to increase with the number of page reads performed on it. Modern flash controllers will also tend to keep count of this, and reallocate a whole block of pages when the limit is reached.

That makes no sense at all if you think about it. A single 1MB file write will take more flash write operations than twenty 1K file writes, even though the latter happens more "often".

Handy for random IO, but does not have much effect for larger sequential writes. Ultimately you can only safely cache in RAM for a few seconds.

The writes are fast - when there is a free page in a block and/or a free block to copy the existing block with modified pages into. The difficulty comes when there are no free blocks and the drive then needs to do a garbage collection and a block-level erase. It's the block-level erase that is slow.

Newer drives can mitigate this somewhat with background garbage collection (that tends to help more on typical workstation workloads than server workloads).

Trim support at the OS level helps keep the drive aware of which pages in any given block are actually valid. Thus it will reduce the time taken to garbage collect, and also reduce the number of pages that need to be copied to a new block when that time comes. That in turn reduces the requirement for new writeable blocks, and hence reduces the number of times an erase operation will need to be done in the middle of a disk IO operation.
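A sketch of that garbage collection step in Python (data structures simplified and assumed for illustration): pick the block with the fewest still-valid pages, copy those forward, then erase it; trimmed pages count as invalid, so there is less to copy.

def garbage_collect(blocks, trimmed):
    # blocks: block id -> set of valid page numbers
    # trimmed: set of (block id, page number) pairs the OS says it no longer needs
    valid = {b: {p for p in pages if (b, p) not in trimmed}
             for b, pages in blocks.items()}
    victim = min(valid, key=lambda b: len(valid[b]))   # cheapest block to reclaim
    pages_to_copy = len(valid[victim])                 # moved to a fresh block before the erase
    del blocks[victim]                                 # block erased and returned to the free pool
    return victim, pages_to_copy

blocks = {"A": {0, 1, 2, 3}, "B": {0, 1}}
print(garbage_collect(blocks, trimmed={("B", 0)}))     # ('B', 1): only one page to copy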

That's not necessarily the case; see above. The write speed for those "huge chunks" will vary enormously depending on the circumstances. It's why fresh new drives tend to perform better than more heavily used ones.

Reply to
John Rumm
