OT: memory sticks

Not true, in my experience. A heavily used NTFS partition can end up with most of the free space fragmented, which makes writing slow.

A classic example is when there are apps which write log files by just adding a block at a time (OK - an allocation unit is added to the file) - typically a log file in a poorly thought-through application. The file can end up very fragmented and interwoven with other files. Even if you delete the worst-case file, the freed space is scattered, with close to every AU in a random location.

Try writing to that - even when it is a file copy with nothing else going on.

Nice big AUs help. But you have to have realised they are sensible before you get into this position.

Defragging done as a defrag process might work but is very slow. I have often done a "poor man's defrag" by, for example, moving the largest file to another partition, then moving the next largest files by copying, renaming, and deleting the originals, and finally copying the biggest file back. Usually far quicker than a defrag process. (Easier still and much quicker: reformat, of course.)
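In command-prompt terms the shuffle is something like this - hypothetical paths, with D: as the fragmented partition and E: as one with spare room:

move D:\data\biggest.dat E:\stash\
copy D:\data\next.dat D:\data\next.tmp
del D:\data\next.dat
ren D:\data\next.tmp next.dat
move E:\stash\biggest.dat D:\data\

Each copy lands in the now much larger contiguous free space, so the files come back far less fragmented.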

I agree about SSDs - do nothing other than use the damn thing.

Reply to
polygonum_on_google

Because they never wrote them sequentially in the first place. The free space is fragmented from the time the second file is created - it will be, e.g., in the middle of the biggest free space.

AIUI EXT and HFS don't have one big free space, but lots of smaller fragmented free spaces, so MOST files will have spare sectors adjacent to them to extend into.

DOS FAT goes back to CPM and floppy disks - it's really 1970s disk write technology, but MSDOS and Windows got stuck with it.

Reply to
The Natural Philosopher

I am talking about an SSD, not about NTFS.

And I never mentioned NTFS - I mentioned EXT (2/3/4) and HFS. These do not use free space in the same way. Files are not extended by taking the next free block unless it's adjacent; they are extended by taking the middle of the biggest free space.

So every file has potential to grow until it runs out of its local free space.

On an SSD everything is deliberately randomly located.

CAN I REPEAT THAT ON AN SSD WHAT TRACK AND SECTOR YOU THINK YOU ARE WRITING BEARS *ZERO* RELATIONSHIP TO THE NVRAM BLOCK ACTUALLY WRITTEN

On an SSD a whole block will probably be erased somewhere on the NVRAM and the whole file re-written into it.

CAN I REPEAT ON AN SSD WHAT TRACK AND SECTOR YOU THINK YOU ARE WRITING BEARS *ZERO* RELATIONSHIP TO THE NVRAM BLOCK ACTUALLY WRITTEN

CAN I REPEAT ON AN SSD WHAT TRACK AND SECTOR YOU THINK YOU ARE WRITING BEARS *ZERO* RELATIONSHIP TO THE NVRAM BLOCK ACTUALLY WRITTEN.

AUs are simply irrelevant on an SSD.

Reply to
The Natural Philosopher

Other operating systems did some little tricks like looking for the smallest contiguous free space that was large enough for what was being requested. That tended to preserve the largest contiguous free spaces for the files that most benefited from being contiguous (at least, that was the theory).

Reply to
polygonum_on_google

NTFS hype didn't deliver on its claim to never need defragging. For all the theory and explanations, a heavily fragmented NTFS partition will improve markedly after a defrag.

There's additional theory concerning block address mapping (LBA) which says defragging has no value as the logical map is not the same as the physical layout on the hard drive (and there are mapped out defective sectors) -- but practice trumps theory.

I agree about SSDs, which is exactly why I posted that it's totally crazy of me to do it.

Reply to
Pamela

You don't have to bypass CCleaner - just deselect its 'secure delete' option. You might also consider copying the files you want to keep, then formatting the stick, then copying your files back.
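If you go the format route, something like the following would do it from a command prompt - example drive letters only, with E: as the stick and C:\stick-backup as scratch space:

robocopy E:\ C:\stick-backup /E
format E: /FS:FAT32 /Q
robocopy C:\stick-backup E:\ /E

(format pauses for confirmation before it touches the drive, which is a useful last check that E: really is the stick.)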

Reply to
Dave W

Actually, it is for reading on the same machine as created it (64 bit Windows 10). Does this change your answer?

Reply to
Scott

That would probably be fine. In the past, when I had work-related files on the machine, I wanted secure deletion of any files deleted from the 'D' drive, just in case. This is probably no longer needed, though my preference would be for files on the 'D' drive to be deleted securely but not those on the SSD or memory sticks.

Reply to
Scott

2.61 GB / 1738 files / 167 folders / FAT32
Reply to
Scott

[snip]

I may have partially answered my own question. The 'Permanently delete' option seems to speed the process along.

Reply to
Scott

Why wouldn't "dd" work ?

That's a sector by sector exact clone.

*******

Check the section here on RMB=0 versus RMB=1. USB stick products are available with both options. When one user claims he was able to do a thing, and another user claims it doesn't work, the difference can be the RMB value the manufacturer assigned on the stick.

formatting link

A Sony stick might be different than a Sandisk.

*******

dd.exe is available for Windows users.

Try this one. Start with dd.exe --list

formatting link

User manual/background.

formatting link

There is one bug to watch for (in 6b3), and it's likely to arise while cloning a USB stick. Normally you would do

dd.exe if=\\?\Device\Harddisk2\Partition0 of=\\?\Device\Harddisk3\Partition0

That would copy the third item in Disk Management to the fourth item in Disk Management. The "dd --list" output includes sizes, so you can correlate known size info to double-check the disk identifier.

You could *easily* overwrite C: if you're sloppy and use the wrong identifier or make a typing mistake.

Normally that command would copy 512 bytes at a time, from one device to the other, until the source runs out of blocks. Those are *not* particularly good choices for flash devices in any case.

But the "end detection" is broken for USB storage on that version of dd.exe. You measure the size to be transferred and include it in the command. Replace the XXX and YYY with the same long identifier strings as before - this is just to make the other new options stand out. Now I'm including a block size and a block count, so there's no possibility of the command doing any wheel spinning of its own.

dd.exe if=XXX of=YYY bs=1048576 count=1000

And that would be an example of transferring a gigabyte or so, from one stick to the other. By specifying a block size and a block count, the exact transfer the user wants is in plain sight. So the command cannot go nuts and try to transfer non-existent blocks.
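As a worked example with a made-up size: if "dd --list" reports the source stick as 4,194,304,000 bytes, that is exactly 4000 blocks of 1048576 bytes, so

dd.exe if=XXX of=YYY bs=1048576 count=4000

copies the whole stick. (count is the size in bytes divided by bs; if it doesn't divide evenly, drop bs to something that does - 512 always should - so the count covers every byte without running past the end.)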

If you are absolutely certain the material to be cloned is of limited length, you don't have to transfer the whole stick. If I make an Ubuntu boot stick for example, I only have to clone the ISO length put on there originally, to make my exact copy. For other live file systems you must copy the whole thing - only a fractional cylinder at the end could be discarded.
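For instance, with a made-up figure: if the ISO written to the stick was 2,900,000,000 bytes, then 2,900,000,000 / 1048576 is roughly 2765.7, so bs=1048576 with count=2766 rounds up just past the end of the ISO data and captures all of it - the stick itself is bigger, so the extra part-megabyte does no harm.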

There is no resize capability there (unlike Windows cloning software, which works at the partition level). The destination stick has to be the same size or larger for such transfers to work.

*******

You can hide stuff on storage media. So when a person claims "oh, this utility will copy *everything*", what they really mean is "except when an HPA prevents it".

formatting link

When you use DBAN to erase a hard drive, an HPA can prevent complete erasure. Thus, in such a case, you can never be certain whether some sensitive information has escaped, unless you remove the HPA and erase again. An HPA is, in effect, a way of changing the drive's declared size. It's sufficient to fool a lot of software.
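On Linux, hdparm can show whether a drive has an HPA set (the device name here is just an example):

sudo hdparm -N /dev/sdb

If the two sector counts it reports differ, part of the drive is hidden. The same option can remove the HPA, but read the man page first - setting it wrongly can make the drive unusable.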

This is not likely to be an issue with your USB sticks, but in general it's something to keep in mind as a "detail" you'll run into the odd time. Some Dells have used those. And there is one European OEM who has used every evil feature that the ATA spec supports at one time or another. There will be occasions when you need to know about those sorts of things.

The Linux "dd" doesn't have the running-off-the-end issue. I can copy sdc to sde this way, copying the whole stick. As long as sde is the same size as or larger than sdc, no information gets accidentally lost.

sudo dd if=/dev/sdc of=/dev/sde
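With a reasonably recent GNU dd, a bigger block size and a progress readout make that less tedious, and cmp can check the copy afterwards. A sketch - adjust the device names, and make sure neither stick is mounted:

sudo dd if=/dev/sdc of=/dev/sde bs=4M status=progress conv=fsync
sudo cmp -n "$(sudo blockdev --getsize64 /dev/sdc)" /dev/sdc /dev/sde

The cmp is limited to the length of the source, so a larger destination doesn't trigger a spurious mismatch.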

Paul

Reply to
Paul


You actually wrote:

"No OS apart from DOS/windows running FAT ever needed de fragging anyway."

I completely agree about SSDs - as I put in my reply.

Reply to
polygonum_on_google

NTFS is fine for Windows-only, but you won't see much, if any, performance gain.

Memory sticks are slow for random access and are connected by a (relatively) slow bus, so operations involving many items are slow.

Reply to
Chris Bartram

I should have mentioned that formatting only writes a new index - the 'deleted' files are untouched and can be found again. So if you're worried about that, you should overwrite the free space during or after formatting. That should be quicker than doing it with each deletion.
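A crude way to do that overwrite, sketched for a Linux box with the stick mounted at /media/stick (the path is just an example): write a zero-filled file until the stick is full, then delete it.

dd if=/dev/zero of=/media/stick/fill.bin bs=1M
sync
rm /media/stick/fill.bin

dd stops by itself with "No space left on device" once the free space has been consumed.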

Reply to
Dave W

That's also how I read what TNP wrote. It did seem rather exaggerated but that's his style. Now he's trying to defend it.

Reply to
Pamela

Good to know.

Reply to
Scott

Ah, so that's what it does. I've always wondered, and never looked into it.

Of course this means that if you write one big file it'll occupy the middle to the end of what was the biggest free block, then quite likely the beginning of it up to the middle. It also means that if you write two files to the disk they'll be nowhere near each other, whereas the MS algorithm would put them both near the beginning. It would be interesting to try that - if I had a Windows and a Linux system, both with spinning rust, write a couple of files of a meg or so then read them alternately.

These days of course they'd just fit in the cache. And I don't have the systems, and I don't care that much...
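For what it's worth, the Linux half of that experiment could be sketched like this (file names are made up; the drop_caches line is there so the reads actually hit the disk rather than the page cache):

dd if=/dev/urandom of=a.bin bs=1M count=2
dd if=/dev/urandom of=b.bin bs=1M count=2
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
time for i in $(seq 0 31); do
  dd if=a.bin of=/dev/null bs=64k count=1 skip=$i status=none
  dd if=b.bin of=/dev/null bs=64k count=1 skip=$i status=none
done

Two 2 MB files read alternately in 64 kB chunks - if the allocator has put them far apart on spinning rust, the seeks should show up in the timing.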

Elsewhere you wrote:

> DOS FAT goes back to CPM and floppy disks - it's really 1970s disk write technology, but MSDOS and Windows got stuck with it.

DOS FAT was a new invention for MS-DOS. CP/M had its own file system which was very different - one advantage of it was the handling of sparse files which could be truly sparse, while FAT had to write all the sectors in between. I could tell you more if you want, it's one of those things I can't forget.

Still 1970s though :)

Andy

Reply to
Vir Campestris
