OT: Defrag

+1. Macs are s**te if you step off the Shining Path.
Reply to
The Natural Philosopher

That's why Windows defrags in the background too, if you turn it on. It's not on by default, as it increases power use because the drives don't sleep as much.

Other than turning it on, Windows users don't have to worry either.

Reply to
dennis

Well, they might need to. It only defrags what it feels like, so it may not do a very good job. There are defraggers available for OS X and Linux, so some people think a better job needs doing.

Reply to
dennis

That's marketing; you should have bought Apple products and not expected them to work with real stuff.

Reply to
dennis

How does that help?

Reply to
Tim Watts

Interestingly, NTFS was inspired (not sure how much) by the feature set of the VMS native filesystem, Files-11/ODS-2. Where I was at uni, they used to run a regular defrag on that with a tool called RABBIT.

Reply to
Tim Watts

Well, they needed to get the maximum performance out of the machine, and defragging made disk access quicker. It makes disk access quicker on OS X, Linux and Windows too, but people probably don't notice on most tasks these days.

A defrag will certainly make boot times quicker, and that is probably the only time an average user will notice.

Defragging used to be more effective too: at one time you actually knew how the data was arranged on the disk, so you could move it to specific areas; now you don't know and can only guess.

Reply to
dennis

Nice and clearly put.

So the Mac defrags all the time, when you're trying to do something else, rather than you firing it off manually when your machine is idle. I guess that suits some people.

For those who claim to have a non-fragmenting file system:

Imagine you create 10,000 small files.

Then delete every other one.

How do you _not_ have fragmented free space?
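
A toy sketch in Python (a pretend block map, not any real filesystem) makes the point:

    # Model the disk as a list of blocks, one block per small file.
    # Create 10,000 files packed end to end, delete every other one,
    # then count the holes left in the free-space map.
    disk = ["file"] * 10_000
    for i in range(0, len(disk), 2):   # delete every other file
        disk[i] = None

    holes = sum(1 for i, b in enumerate(disk)
                if b is None and (i == 0 or disk[i - 1] is not None))
    print(holes)   # 5,000 separate one-block holes: fragmented free space

However clever the allocator, the free space is now in 5,000 pieces; the best a filesystem can do is be smart about how it reuses them.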

Andy

Reply to
Andy Champ

Which was produced by the people who used to (not sure about now) produce an NTFS defragmenter.

There is a *lot* of commonality between ODS-2 and NTFS. I have documents somewhere on the internals of both.

Reply to
Bob Eager

Not documents, but I've got the Helen Custer book somewhere; DEC foisted a copy on me after some NT 3.1 AXP demo.

Reply to
Andy Burns

There were more subtle reasons. Heavy fragmentation of the file system was a show-stopper. Fragmented files often needed extension headers in the index file (what became the MFT on NT) to describe all of their data blocks. Eventually the index file filled up, but it couldn't be extended, because that needed an extension header to be allocated - in the index file.
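
A toy model of that chicken-and-egg failure (my sketch, not real ODS-2 internals):

    # Every file's header lives in the index file - including any
    # extension header needed to extend the index file itself.
    index_file = ["hdr"] * 16   # index file full: every header slot used

    def extend_index_file(index):
        free = [i for i, slot in enumerate(index) if slot is None]
        if not free:
            # The extension header must be allocated *in* the index
            # file, which is precisely what has run out of space.
            raise RuntimeError("cannot extend index file: no free header slot")
        index[free[0]] = "ext-hdr"

    extend_index_file(index_file)   # raises: the deadlock described above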

Reply to
Bob Eager

And ZFS?

Got bitten by that last week - 40 million files in a filesystem, 15% free space = sick machine. Snapshots were taking *hours*, and write performance - well, there wasn't really any.

Cleared out a few GB of space and it's sorted, but Solaris isn't completely immune to fragmentation :-)

Darren

Reply to
D.M.Chapman

Quite. But in reality it is files which start small and regularly increase in size that seem to be the worst offenders. The classics are log files (e.g. every time something happens, a message gets added to the file).

In a previous existence I spent quite some time trying to optimise the performance of files and file systems. On hardware of that generation, files could be accessed using multi-block transfers, and fragmentation put a major brake on how often that could be exploited. A number of tactics proved helpful.

But the biggest single improvement was obtained by formatting disc space with allocation unit sizes appropriate to the files being stored. You only waste an average of around half an allocation unit per file. Temporary files in particular can have huge allocation unit sizes without actually 'using' any extra disc space (so long as they disappear). This technique is available under Windows, and can be quite effective. After all, double the allocation unit size and you halve the maximum possible number of fragments.
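
The trade-off is easy to put in numbers (a back-of-envelope Python sketch; the sizes are illustrative, not from any real system):

    # For one 64 MiB file, compare allocation unit (cluster) sizes:
    # the worst-case fragment count halves as the unit doubles, while
    # the average wasted slack in the last unit grows.
    file_size = 64 * 1024 * 1024

    for au in (4096, 8192, 16384, 65536):
        max_fragments = -(-file_size // au)   # ceiling division
        avg_waste_kib = (au / 2) / 1024
        print(f"AU {au:>6}: max fragments {max_fragments:>6}, "
              f"avg waste {avg_waste_kib:.0f} KiB per file")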

Reply to
polygonum

In Linux, that's where you START - with a fully fragmented disk.

The key is that the disk access system is geared to make that almost a non-problem.

By spreading files across the space, each one has lots of room to extend. It's only when the disk gets really full that finding the best slot to extend into becomes sub-optimal.

Back that up with clever disk caching and enough RAM, and as long as you are only working on a few files at a time, it's like greased lightning.

Where it falls over is when you have lots of users or processes all accessing files, a full disk, and not enough RAM.

Which is where specialist disk access daemons come in, like those used to access databases on large systems.
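
A toy illustration of the spreading idea (my sketch; a real allocator like ext4's is far more sophisticated):

    # Place 10 one-block files on a 1,000-block disk, then grow each
    # by 20 blocks. Packed files run into their neighbours and must
    # fragment; spread files can extend in place.
    DISK, FILES, GROWTH = 1000, 10, 20

    def fragmented(starts):
        starts = sorted(starts) + [DISK]
        return sum(1 for a, b in zip(starts, starts[1:]) if a + 1 + GROWTH > b)

    packed = list(range(FILES))                          # blocks 0..9, back to back
    spread = [i * (DISK // FILES) for i in range(FILES)] # one every 100 blocks

    print("packed:", fragmented(packed), "of", FILES, "files must fragment")  # 9
    print("spread:", fragmented(spread), "of", FILES, "files must fragment")  # 0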

Reply to
The Natural Philosopher

It uses sparse files everywhere by default.
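
For anyone who hasn't met them: a sparse file is one whose unwritten 'holes' take up no disk blocks at all. Easy to demonstrate (Python, on a Unix filesystem that supports holes - ext4, XFS, ZFS and friends):

    import os

    # Seek far past the end and write a single byte; the 100 MiB hole
    # in between is never allocated on a sparse-capable filesystem.
    with open("sparse.bin", "wb") as f:
        f.seek(100 * 1024 * 1024)
        f.write(b"x")

    st = os.stat("sparse.bin")
    print("apparent size:", st.st_size)           # ~100 MiB
    print("allocated    :", st.st_blocks * 512)   # a few KiB at most
    os.remove("sparse.bin")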

Reply to
Huge

Exactly. Once the disk gets up to 85% or so, it starts to fall apart. But at that usage level you should be looking at more disk space anyway.

That's when we started to look at a four-times-bigger disk to tack onto the system, and to work out which directory trees to move to it.

I used to run a usenet news server. The news files get... very big. In the end we had the data on one disk, the history on another, and the operating system on a third... well, till we RAIDed it, and then the RAID took care of all the smarts. Lord knows where the data actually was - on whatever disk was nearest, I suppose. It was full of its own RAM and CPU and sorted itself out...

Reply to
The Natural Philosopher

Who's they?

There's also "virus software", f*ck alone knows why. Doubtless the vendors sell bridges too.

Reply to
Tim Streater

Ok - I did not know that ;->

I'm looking forward to SANs supporting that new SCSI command that can unmap a block...

Reply to
Tim Watts

I seem to have acquired two copies of that - and never paid for either of them! I did go on a lot of DEC courses though - especially the device driver ones.

Reply to
Bob Eager

Can you suggest a size for that Max/Min setting, for a computer running XP with 1 GB RAM, used mostly for Internet browsing, e-mail etc., i.e. not heavily into games, CAD, music or large databases?

Reply to
Chris Hogg
