Defragging SSDs under Win 10, yes or no? Confused, plus Recovery partition

I finally got round to installing a WD Blue 500GB SSD in my £25 charity shop HP AIO PC. I just removed the two torx screws holding the multi-plug onto two pillars, connected it to the chassis, flattened out the SATA/power cable, and attached the SSD to the chassis with a plastic cable tie passed through a series of small holes in the chassis.

I was expecting to have to get into the UEFI but it just booted from the Media Creation USB drive and got on with it.

After about an hour I safely removed the USB drive and noticed that the metal outer shield of the USB thumb drive was really hot. Is this normal with USB 3? I haven't noticed this with USB 2 devices or connectors.

As per other folks' advice I have unticked the box that enables a regular 'optimize', but I have found this article with a reply from Microsoft. Who do I believe?

formatting link
Should an SSD be 'optimized' or not, and how often ?

Windows created a small EFI boot partition at the start of the drive, which I assume is where the EFI boot code lives? Then we have the C: drive taking most of the space, followed by another partition called Recovery. What is the latter for, and do I need it? I intend to create a system image stored on an external drive anyway.

Reply to
Andrew

Yes, USB 3 runs faster and some drives take more power. In that form factor there's not much heatsinking, so they can get hot. It's not a problem.

It doesn't really matter. Defragging won't take much time or cause too many spurious writes. So let it do whatever it wants to do.

Yes.

Recovery may be so you can boot into something if Windows is broken. I expect it's small, so do you really need to recover that extra few GB?

Theo

Reply to
Theo

The I/O runs at 5Gb/s, which contributes a small amount to the heating. In some cases an electrical defect can cause heating like that, only it gets even hotter. (Sometimes a tiny bypass cap is a dead short, making it obvious the USB stick was never tested at the factory.) Note that some DVB tuner sticks get roasting hot, but that's because the chip in the middle of the stick is just too hot to be confined to a plastic tube like that. Some of the older silicon tuners ran scalding hot.

Historically, in the newer versions of Windows, Microsoft has had a lot of trouble figuring out which devices are SSDs and which are hard drives. We'll assume, to start with, that it identified the device correctly (for a change).

Hard drives are defragmented; that is their style of optimization. Only smaller files are defragmented: files larger than 50MB are not (though a third-party defragmenter will do them if you want).

The optimization for an SSD is called TRIM. For the unused (white) space on the drive, the driver tells the SSD which parts of the drive are not really used. These are passed to the free pool and made ready to be used.
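
A toy sketch of the idea, in Python (the block count and pool structure here are invented for illustration, not how any real controller works):

```python
# Toy model of TRIM: the OS tells the SSD which LBAs no longer hold
# live data, so the controller can return their flash blocks to the
# free pool ahead of time instead of discovering this during a write.

class ToySSD:
    def __init__(self, total_blocks):
        self.mapped = {}                  # LBA -> flash block holding its data
        self.free_pool = set(range(total_blocks))

    def write(self, lba):
        block = self.free_pool.pop()      # take a pre-erased block
        self.mapped[lba] = block

    def trim(self, lbas):
        # The filesystem says these LBAs are unused (deleted files).
        for lba in lbas:
            block = self.mapped.pop(lba, None)
            if block is not None:
                self.free_pool.add(block) # block can be erased and reused

ssd = ToySSD(total_blocks=8)
ssd.write(100)
ssd.write(101)
ssd.trim([100])                  # file at LBA 100 was deleted
print(len(ssd.free_pool))        # 7: one block returned to the pool
```

Without the trim() call, the controller would only learn that LBA 100 was dead when something overwrote it.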

An optional behavior for SSDs consists of checking for Shadow (VSS) copies on the drive. These allow "versions" of C: to be frozen, for activity such as backup programs. However, a side effect of versioning is that the drive slows down. To fix the side effect of VSS (the slowness), the OS can actually defragment the SSD. This represents an unconventional reason for defragmentation (normally, SSDs don't need to be defragmented).

So most of the time, you should see a very short TRIM operation and then the SSD drive entry in the table will indicate the housework is done. However, if you had the right kind of backup software loaded, you *might* see a defragmentation session happen on the SSD. And this helps return performance to normal levels on the SSD.

The EFI boot partition is FAT32. You can use some tricks to examine the contents and see the Microsoft folder and the Ubuntu folder in there. The boot materials will be in each folder.

The Reserved partition is a "hidden NTFS" of type 0x27, and it contains a 600MB or so "WinRE.wim". That's the Windows Recovery Environment image, and it is a bootable OS. If the main C: partition is damaged and won't start, WinRE.wim may be booted while doing the "three pass repair" thing.

This command should point to the WinRE.wim container:

reagentc /info

That will print out a few details. Mine is "Partition 4", a 646MB partition.

Windows Recovery Environment (Windows RE) and system reset configuration information:

    Windows RE status:         Enabled
    Windows RE location:       \\?\GLOBALROOT\device\harddisk0\partition4\Recovery\WindowsRE
    Boot Configuration Data (BCD) identifier: 964e2ef1-3a60-11ed-81d3-5cf3707d2fda
    Recovery image location:
    Recovery image index:      0
    Custom image location:
    Custom image index:        0

REAGENTC.EXE: Operation Successful.

The third-party program "testdisk.exe" allows snooping around inside the hidden partitions.

On OEM machines that don't have a 15GB copy of the OS, you can make a USB stick, and it uses WinRE.wim as part of the construction. But that thing is relatively useless, as it puts back an empty OS with none of your programs. It is just as easy to download a copy of the OS from Microsoft if you want an empty OS. The service offered on a new machine isn't of the same quality as on the older machines.

If you bought a brand-new Windows 11 machine today, you would use a backup program and just image the machine, so you have a "factory" snapshot of the contents. Then, if you feel the need to "nuke and pave", your backup image is your source for that operation, since the manufacturer could not be bothered to do a good job of that for you.

Paul

Reply to
Paul

Yup, USB 3 has the bandwidth to better exploit the performance of an SSD, and so they can run hotter. It's usually less noticeable with a thumb drive, but you can see quite significant heating with an NVMe M.2 SSD in a USB 3 caddy.

Leave it alone and it will generally work fine. Not all optimisation is necessarily defragmentation.

Recovery can contain a recovery image used for auto repair should the thing fail to boot.

Reply to
John Rumm

Nope, don't get that with mine.

Reply to
Rod Speed

There is no point in defragging an SSD.

Reply to
Rod Speed

It is perfectly capable of doing that all by itself. And does so every time you access it.

'Defragmentation' is a technique used by physical hard drives to minimise seek time between logically adjacent data sectors.

SSDs have no seek times. Physical data sectors are in any case not directly accessible by the operating system, and are reallocated on a daily basis by the disk's internal firmware.

Any 'defragmentation' would almost certainly be ignored by the disk, or actually shorten its life, and would make no difference to performance.
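
The logical-to-physical remapping can be sketched as a toy flash translation layer in Python (entirely illustrative; real controllers are far more complex):

```python
# Toy flash translation layer (FTL): every write of a logical sector
# goes to a *different* physical block, chosen by erase count, so the
# OS's idea of "adjacent sectors" says nothing about physical layout.

class ToyFTL:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks
        self.map = {}                      # logical sector -> physical block

    def write(self, sector):
        # Wear levelling: pick the least-erased block not currently in use.
        in_use = set(self.map.values())
        candidates = [b for b in range(len(self.erase_count))
                      if b not in in_use]
        block = min(candidates, key=lambda b: self.erase_count[b])
        self.erase_count[block] += 1
        self.map[sector] = block

ftl = ToyFTL(4)
ftl.write(0)
first = ftl.map[0]
ftl.write(0)                     # rewrite the same logical sector
print(ftl.map[0] != first)       # True: the data moved to a new block
```

This is why a host-side "defrag" of an SSD rearranges nothing physical: the firmware will simply remap the rewritten sectors wherever wear levelling prefers.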

Reply to
The Natural Philosopher

Normally run automatically by a decent OS on a regular basis

> *might* see a defragmentation session happen on the SSD. And this helps return performance to normal levels on the SSD.

Utter dangerous bollocks

Reply to
The Natural Philosopher

They don't need defragging due to the way the filing system addresses them; spinning disks do, because if the data is together on the platter it can be read in a continuous stream, so it is much quicker. What Windows does by default is run the "Trim" function once a week, which is entirely different:

formatting link

Reply to
Jeff Gaines

Now, SSDs and USB drives are different. No way should you need to defrag an internal SSD. Occasionally it can be good to compact the registry, but nothing more. On thumb drives, once again, the more times you write memory locations the shorter the life will be. As for speed, well, if these genuinely are USB 3 devices, I'd suggest you do not defrag at all. How are they formatted? For maximum compatibility use FAT; you can get away with FAT32, but NTFS tends to restrict their use, for say playing audio in an audio player, as they seldom support that format.

In my view, the weakest link with USB drives is the USB-A plug. Sadly, although you can get USB-C sticks now, if you need to use one in an audio player you will need an adaptor lead with its associated connection issues!

Brian

Reply to
Brian Gaff

Not your strong point, tact. Right, you really should not defrag SSDs. As I said, Erunt and similar suites can compact the registry and you get a bit of a speed increase, but unless there has been some catastrophe on the machine, about once a year is enough. White space will be reclaimed naturally by the drive's housekeeping and you should never notice it. Note that most registry compactors only remove duplicates and know nothing of the recovery partition, and nor should they, since you want it pristine. Brian

Reply to
Brian Gaff

On 01/04/2023 at 18:34, Andrew wrote:

Unless you use good old fashioned FAT16/32, the answer is no.

Reply to
Ottavio Caruso

Yup, classic TNP :-)

And as has been pointed out the regular "optimise" option means different things for different media types. For a conventional HDD, windows will indeed run a traditional defrag. For SSDs it will instead run a regular "trim" operation - something that is required to avoid performance loss with SSDs.

Paul's comment was specifically about Volume Snapshot Service copies. Having VSS copies floating about can cause a performance hit on access to the SSD, since the kernel-mode VSS process can interrupt and pause disk access processes to handle the processing of shadow blocks. The performance hit is incurred in the kernel rather than in the physical access to the SSD. So backup apps (which make heavy use of VSS to manage backup of open files, shared files, etc.) need to do some housekeeping on cached VSS blocks to maintain good performance on the SSD.

Not sure what registry compaction has to do with any of this?

Reply to
John Rumm

The answer is yes - since in the context of a SSD, optimisation means running a trim process, not a defrag.

Reply to
John Rumm

SSDs have some seek time.


formatting link
formatting link
On the fragmented file, the SSD reads it at 229MB/sec. On the unfragmented file, the SSD reads it at 383MB/sec.

When a file is fragmented like that, a single file requires multiple $MFT (Master File Table) entries, and having to read the $MFT once in a while has a cost.

For the unfragmented file, the $MFT entry is 1KB and the list of LBAs is contiguous (it does not require lookups while the read is happening).
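
A back-of-envelope model of why the extra extents cost something (all cost numbers invented purely for illustration):

```python
# Toy cost model: streaming a cluster is cheap, but each extent in the
# file's run list adds a fixed metadata-lookup cost before its data
# can be streamed. Same total data, different number of extents.

LOOKUP_COST = 50      # hypothetical cost units per extent lookup
STREAM_COST = 1       # hypothetical cost units per cluster streamed

def read_cost(extents):
    """extents: list of (start_lba, length_in_clusters) runs."""
    return sum(LOOKUP_COST + length * STREAM_COST
               for _, length in extents)

contiguous = [(1000, 4096)]                        # one 4096-cluster run
fragmented = [(s, 512) for s in (1000, 9000, 20000, 31000,
                                 45000, 52000, 60000, 71000)]  # 8 runs

print(read_cost(contiguous))   # 4146
print(read_cost(fragmented))   # 4496: same data, extra lookup overhead
```

The gap is small in relative terms, which matches the benchmark above: the fragmented SSD read is slower, but nothing like the collapse you'd see on a spinning disk.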

NTFS only supports a finite number of fragments on a single file, and you can have a "write fail" if the file is no longer "representable" via the $MFT.

To do studies like that, the free PassMark fragmenter comes in handy. That's an easy way to chew up a file. When SSDs are involved, you run PassMark on a RAMDrive first, then clone the fragmented pattern to the SSD, to reduce wear and tear. Then it is ready for a read test.

Even the RAMDrive shows a small amount of slowdown.

Paul

Reply to
Paul

Not beyond the time needed to access any data anywhere on it.

There is NO RELATION WHATSOEVER between the logical position of an SSD 'SECTOR' and where in NVRAM it actually is.

The disk itself holds a table of logical to physical mappings and these are constantly shifting according to the wear levelling algorithms.

Attempting to access the raw sector information via the disk's interface is a notion that only someone who has never thought about what is going on in an SSD would think was either possible or worth doing.

Pure coincidence.

Reply to
The Natural Philosopher

There are two situations I can think of where an actual defrag *is* required on NTFS volumes, even on SSDs.

While NTFS reduces the likelihood of fragmentation occurring at all (using similar tricks to EXT4), you will still get some fragmentation on SSDs as well as on spinning rust. Conventional wisdom maintains that the performance effects of it on SSDs are insignificant, unlike those felt on spinning rust. This is almost true, but for the fact that the filesystem has to add metadata to keep track of the fragments. Processing this metadata uses storage space (in the MFT) and has a small performance impact. The show stopper, however, is that there is also an upper limit on the total amount of fragmentation metadata you can store in the MFT. Once this is reached, the volume as a whole has reached its fragmentation limit, and no further new fragments can be written.
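
That metadata limit can be sketched like this (the cap of 16 extents is invented; real NTFS limits depend on file record and attribute list sizes):

```python
# Toy model of the limit described above: the file record can only
# hold so many extent entries, so once a file accumulates too many
# fragments, a new fragmented write has to fail.

MAX_EXTENTS = 16          # invented cap, purely for illustration

def append_fragment(extent_list, new_extent):
    if len(extent_list) >= MAX_EXTENTS:
        raise OSError("no room left for fragment metadata: write fails")
    extent_list.append(new_extent)

extents = []
for i in range(MAX_EXTENTS):
    append_fragment(extents, (i * 1000, 8))     # 16 fragments fit

try:
    append_fragment(extents, (99000, 8))        # the 17th does not
except OSError as e:
    print(e)
```

The failure has nothing to do with the storage medium, which is why this case applies to SSDs just as much as to spinning rust.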

The second situation is when the Volume Snapshot Service is active (not always enabled on Windows desktop these days unless you enable restore points), but it is often used on servers to facilitate open-file backup and on shared volumes to eliminate concurrent-access block collisions. There, the "copy on write" mechanism is noticeably slower on a fragmented volume due to the extra OS overhead blocking drive accesses.
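
The copy-on-write mechanism can be sketched like this (a toy snapshot model, not the actual VSS implementation):

```python
# Toy copy-on-write snapshot, in the spirit of VSS: before a block is
# overwritten, its old contents are first copied aside so the snapshot
# still sees the original data. Every first overwrite costs an extra
# blocking write on top of the one the application asked for.

class ToySnapshotVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.shadow = {}              # block index -> pre-overwrite contents
        self.extra_writes = 0

    def write(self, index, data):
        if index not in self.shadow:          # first overwrite since snapshot
            self.shadow[index] = self.blocks[index]
            self.extra_writes += 1            # the blocking copy-out
        self.blocks[index] = data

    def snapshot_read(self, index):
        return self.shadow.get(index, self.blocks[index])

vol = ToySnapshotVolume(["a", "b", "c"])
vol.write(0, "A")
print(vol.blocks[0], vol.snapshot_read(0))   # A a
print(vol.extra_writes)                      # 1
```

On a fragmented volume, the blocks being shuffled aside are scattered across many extents, so each copy-out drags in the extra metadata handling described above.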

The Windows periodic optimisation will look out for these cases and, if necessary, do some actual defragging to eliminate the build-up of fragmentation metadata - this will be necessary on NTFS volumes regardless of the actual storage technology.

Reply to
John Rumm

You are looking at some of this from the wrong perspective. Sometimes this is not a physical access problem, but a file system problem. If you run out of space to track fragments in the file system, then you have a write failure.

Even on read access, if the OS must now process multiple additional lookups to follow a fragment chain, that takes the OS longer than with access to a contiguous LBA chain with no additional lookups beyond the first access to the file record.

There are hardware performance impacts from fragmentation on SSDs with heavy write requirements, since at the flash level there is an "erase on write" requirement (and flash erase is block-level and slow). Scheduled trim operations can keep on top of this in balanced or read-heavy workloads, but write-intensive operations force the block-erase delays into the IO process.
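
A toy model of that erase-before-write penalty (timing numbers invented; real erase latencies vary by flash type):

```python
# Toy erase-before-write: flash can only be written into a pre-erased
# block, and erases are slow. If the free pool is empty, a write must
# absorb the erase delay inline; keeping the pool stocked (which is
# what scheduled TRIM enables) hides that delay from the writer.

WRITE_TIME = 1
ERASE_TIME = 10       # invented: erases are an order of magnitude slower

def write_latency(free_pool):
    if free_pool:
        free_pool.pop()
        return WRITE_TIME            # pre-erased block was available
    return ERASE_TIME + WRITE_TIME   # must erase a block first, inline

trimmed = [0, 1, 2]                  # pool kept stocked ahead of time
print(write_latency(trimmed))        # 1
print(write_latency([]))             # 11: the erase stalls the write
```

Under a sustained write-heavy load the pool drains faster than background housekeeping can refill it, which is when the erase delays surface in the IO path.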

Then you get into the deeply complex and subtle interplay with VSS - especially on server workloads. If the fragmentation means that the OS must implement many extra blocking copy on write activities into a shadow copy to maintain the illusion of temporal record integrity, that requires both extra physical hardware writes, and blocking IO operations at the OS level.

Denial ain't just a river in Egypt...

Reply to
John Rumm

He is a little hard of understanding sometimes.

Exactly. NTFS is based on the VAX/VMS ODS-2 file system, which had similar problems. Fragmented files would fill up their 'file headers', which held the addresses of the fragments. You could add another file header, but these were limited by the size of the 'index file' in which all file headers lived. If there were no more file headers, why not extend the index file? Oh, there are no file headers left to extend the fragmented index file? Oops.

Reply to
Bob Eager

What you fail to understand is that the SSD itself can't be got into such a state by user intervention.

Reply to
The Natural Philosopher
