OT: Windows 2000 Pro to XP Pro upgrade without having to reinstall applications?

You can do quite well if you put the code in a STREAMS device driver. There are still some bits of kernel code that disable interrupts but they are actually quite short in SVr5. Quite a bit of the code in System X runs in such a manner.

Reply to
dennis

Only if it becomes a "hotfile". If it doesn't, it will stay fragmented unless you run a defragger.

Reply to
dennis

MM wrote: [snip]

More proof that you're an idiot. The current AMD processor range is woefully slow and sucks power.

Reply to
Steve Firth

Because the algorithm goes broadly like this (or one of them does):

Create the first file at the outside of the disk. Create the second file in the middle of the disk.

To create any new file, stick it in the middle of the largest available space.

This ensures that, by and large, files are always contiguous.
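The placement idea above can be sketched in a few lines. This is only a caricature of the heuristic the poster describes, not any real file system's allocator: the free-extent list and sizes are invented for illustration.

```python
# Toy "largest free extent" placement: put a new file in the middle of
# the biggest contiguous gap, leaving room to grow on both sides.

def place_file(free_extents, size):
    """free_extents: list of (start_block, length). Returns a start block."""
    start, length = max(free_extents, key=lambda e: e[1])  # biggest gap
    if size > length:
        raise OSError("no contiguous space large enough")
    # Centre the file in that gap.
    return start + (length - size) // 2

# Disk with two free gaps: blocks 0-99 and blocks 500-999.
extents = [(0, 100), (500, 500)]
mid = place_file(extents, 50)   # centred within the 500-block gap
```

Because each file sits in the middle of the largest hole, neighbouring files keep slack on both sides and can extend without fragmenting for quite a while.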

Then of course there is read ahead caching.

If getting part of a file, get a bit more as well, and stuff it in memory in case it's wanted.
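Read-ahead can be caricatured like this. The block size and read-ahead window are made up for the example; real kernels tune the window dynamically based on whether reads look sequential.

```python
# Toy read-ahead cache: when a block is requested, speculatively fetch a
# few blocks beyond it too, betting that sequential reads will want them.

READAHEAD = 4  # extra blocks to fetch per miss (invented figure)

def read_block(disk, cache, n):
    if n not in cache:
        # One "disk access" fills the requested block plus the window.
        for i in range(n, min(n + 1 + READAHEAD, len(disk))):
            cache[i] = disk[i]
    return cache[n]

disk = [f"block{i}" for i in range(16)]
cache = {}
read_block(disk, cache, 0)   # miss: fetches blocks 0-4 in one go
read_block(disk, cache, 3)   # hit: served from memory, no disk access
```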

And the block write daemon. When writing a file, don't update the disk, shove it in a buffer. When the head is near the part of the disk where a pending block belongs, write it. Otherwise leave it there until you have a spare moment.
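A minimal sketch of that write-back idea, assuming a flush pass that sweeps the disk in block order (a crude stand-in for the elevator-style scheduling the post alludes to); none of the names here come from any real kernel.

```python
# Toy write-back buffering: writes land in a dirty map and are flushed
# later, sorted by block number so the head sweeps in one direction
# instead of seeking back and forth for each write.

class WriteBuffer:
    def __init__(self, disk):
        self.disk = disk
        self.dirty = {}           # block number -> pending data

    def write(self, n, data):
        self.dirty[n] = data      # no disk I/O yet

    def flush(self):
        for n in sorted(self.dirty):   # one ascending sweep
            self.disk[n] = self.dirty[n]
        self.dirty.clear()

disk = [None] * 8
buf = WriteBuffer(disk)
buf.write(5, "e")
buf.write(1, "b")
# Nothing has hit the disk yet; flush() writes both in one ordered pass.
buf.flush()
```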

The genius of later versions of Linux is that it tends to say "any spare RAM is disk buffers", so provided you are not opening a brand new file, it tends to be the case that you never write directly to or read directly from the disk anyway. You do it from memory. Even the temporary-file policy helps: all log files and other temporary files are rotated daily and archived, or, if truly temporary, are erased on reboot. Or on a timer if you set it up.

Er no. SSDs have limited write lifetimes. You need even more careful strategies to avoid erasing them as much as possible.
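The wear-avoidance being argued about can be caricatured in a few lines. This is a deliberately naive sketch of wear levelling (pick the least-erased block for each write); real SSD firmware does this in the flash translation layer with remapping tables, and, as a later post notes, invisibly to the OS.

```python
# Naive wear-levelling sketch: route each write to the physical erase
# block with the fewest erases so far, spreading wear evenly.

def pick_block(erase_counts):
    """Return the index of the least-worn erase block."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

counts = [10, 3, 7, 3]        # invented per-block erase tallies
target = pick_block(counts)   # block 1 (first of the two least-worn)
counts[target] += 1           # this write costs that block one erase
```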

Reply to
The Natural Philosopher

No, OS X just moves files around for you to keep things unfragmented. Never any need for a defragger, or even to think about the matter.

Reply to
Tim Streater

This algorithm minimises the cost of an individual file being fragmented at the expense of ensuring that any collection of files written at the same time are scattered widely all over the disc. I bet we can find benchmarks favouring both approaches.

All of the above could equally well be written about Windows, with minor variations.

Er yes. The SSD itself deals with all this rubbish; it is totally invisible to the OS.

Andy

Reply to
Vir Campestris

Because sensible file systems don't try to write each file in sequential blocks starting at the first sectors on the disk.

File systems used with Unix-like operating systems only suffer from fragmentation when disk usage goes over 80% (ish).

SSDs have their own issues.

Reply to
Steve Firth

There are some specialist Linux kernels that have better responsiveness. However there is still a market for "proper" real-time OSs like VxWorks, OS/9, QNX etc.

Reply to
John Rumm

I usually try not to get too sucked into these "mine is better than yours" discussions, since they are fairly pointless, and there are perfectly valid reasons for selecting any one of them that will apply in different circumstances. (Not to mention that the actual quality and capability of the OS itself has little to do with its market penetration - else we would all be using BeOS, or AmigaDOS etc in place of Windows or OS-X!)

All of them are getting better, and in reality none of them are actually that bad any more (as distinct from those you dislike with a passion!)

It does seem that current trends are going to have a big destabilising effect on the status quo though, so we are probably past "peak Windows", and its percentage of installed systems will shrink (unless they decide to start giving it away - which I suppose is a possibility as MS see their future in selling SaaS).

Linux will continue to grow in the server, cluster, cloud and super computer market since it now often leads the development cycle in these things and the commercial *nix vendors are often playing catchup.

Casual users will migrate away from traditional PCs anyway, since they were never into content creation in a big way, and their needs are being increasingly met by tablets, phones, TVs, games consoles etc.

Linux on the desktop may or may not gain substantial support in the business community. Much will depend on where the software developers go, how the total cost of ownership works out, and how much sway legacy code maintains ("lots" being the normal answer). However, in a sense it does not matter, as there are now far more types of technology in people's hands than at any time in the past, unless you include the pre-PC early micro market where, no doubt, a good many of us first met computers.

Reply to
John Rumm

IME windows is growing in the server market. It makes me want to cry, but MS are doing a good job of getting themselves ingrained in business.

Reply to
Clive George

Until fairly recently I did all my work on Windows 98SE. That had lasted me for years past the "best before" date by which many other millions of users had already switched to XP. Then I used Windows 2000 for several years and that was fine. Now I'm using XP! I bought a license (from Microsoft) last week for Windows 7 Ultimate and built a rack so that I have it on the back burner for when XP gets too long in the tooth. I shall not bother with Vista or with Windows 8. Therefore, I expect to be using Windows 7 as the last version I shall ever use, given that I'm 68 and had a triple heart bypass in March! I just don't need the hassle of Linux, and I already sacrificed hundreds of hours on it -- until Ubuntu changed it at version 10.? from what seemed quite a usable system at version 8.?

MM

Reply to
MM

That's because their products are far superior in many ways. Business needs that. It doesn't need Fisher-Price concepts like "Dapper Drake".

MM

Reply to
MM

And a windmill could power up to a thousand homes. With additional hardware and software, an Apple II could send a man to the moon.

etc.

The point is that Windows does end up in a tangle eventually. Linux does not.

I wonder what operating system the SSD is running then :-)

Reply to
The Natural Philosopher

I quite fancy a Chromebook. Google rules the world anyway, so might as well go the whole hog.

Reply to
stuart noble

It doesn't move every file, it only moves "hotfiles". The OS uses some usage pattern to decide what a "hotfile" is.
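That "usage pattern" could be as simple as counting accesses over a window and promoting the most-read files, along these lines. This is only a guess at the shape of the heuristic; the file names and thresholds are invented, and the real Hot File Clustering logic in OS X is considerably more involved.

```python
# Toy hotfile detection: tally reads per file, then call the top few
# "hot" (candidates for relocation to a faster region of the disk).

from collections import Counter

reads = Counter()

def record_read(path):
    reads[path] += 1

def hot_files(top_n=2):
    """Return the top_n most frequently read paths."""
    return [path for path, _ in reads.most_common(top_n)]

for _ in range(50): record_read("/bin/bash")     # read constantly
for _ in range(40): record_read("/etc/passwd")   # read often
record_read("/tmp/scratch")                      # read once: not hot
```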

Reply to
dennis

I was under the impression, possibly entirely mis-guided, that there are two features.

Hot File Clustering currently available only on boot volumes

and

On-the-fly Defragmentation

Further, the 1000-file idea would be causing fragmentation of free space - a related but different problem to fragmentation of files.

Reply to
polygonum

Have a look here:

formatting link

Reply to
polygonum

The Novell network file servers I used for the past 15 years never needed defragging; I don't think there was even an option. The system slowed down if it ran short on RAM and disk space, so I always assumed on-the-fly rewriting was occurring.

Reply to
AnthonyL

The same way that Windows does what? Avoids fragmentation? NTFS is renowned for being poor at avoiding fragmentation.

Reply to
Huge

TNP opens his gob and rams his foot into it, up to the knee. Again.

Reply to
Huge
