Replace partitioned hard disk with SSD

Why on earth spread your data round so many partitions? It's pointless. It simply spreads the data out, and on a spinning disk it ensures the worst possible access times by keeping programs and data as far apart as possible, and it splits the free space into small chunks.

Personally I would just clone to an SSD as is. If you can afford one bigger than 500 GB, I would. If you keep data on a spinning disk, then when Windows rebuilds the search indexes, which it does after all big updates, you still have a PC that goes no faster than a bog snorkeller or a Trabant up the Grossglockner.

Stop pratting around and get it all on SSD.

My personal recommendation would be a Crucial MX500 1 TB.

I have several MX500s. The clone tool has worked faultlessly. The management tools let you monitor usage and life. They are a good price.

Dave

Reply to
David Wade

The only real point was to make sure that overflowing one area did not impact another.

And of course with multiple disks it has traditionally been hard to create a single flat virtual disk.

Once on SSD, all the tuning tricks for disk speed become irrelevant. Seek time is essentially zero. Fragmentation doesn't matter and can't be affected by defragging anyway, since SSD wear levelling creates an internal mapping (held in the drive's RAM) that bears absolutely zero resemblance to the track/sector model that the computer understands.

I'd second that. Well, not the make - I have zero experience of it - but if you can afford it, go SSD.

Reply to
The Natural Philosopher

Do they actually move data around? What if an SSD (or flash) is nearly full: won't all the writing and deleting be in the same place?

Reply to
Max Demian

What wears out HDDs? Is it the spinning, the head movement or the writing to the coating?

Reply to
Max Demian

Since the most constantly updated partition is the C drive, that's not the case. Most of what is on the E partition is write once, read many, so I don't believe it makes any difference. Programs are read into memory and generally stay there until I close them. This isn't a PC hammering away at huge volatile databases or games. Mostly the only things I use are a browser, Foxit and occasionally Star Office for spreadsheets.

The only thing I cannot use this PC for is looking at the blurb for the new iMac on the Apple website. That utterly brings my PC to a halt. The rest of Apple's website is fine.

Utterly pointless capacity for me. I barely use half of the existing 500 GB hard disk. The *only* reason I have started to notice is the amount of time the Win10 updates take to install. Apart from that I am quite happy with what it does.

Reply to
Andrew

No. The writing will always be to the least-used block. Deletion never happens - the data is retained until the block is needed again. Then, from the free pool, the least-used block will be erased and rewritten.

Or at least, that is my simplified understanding of how they do it.

So on a nearly full disk, some data that hasn't been changed since forever will be moved to an overused block, and the original block erased and overwritten with rapidly changing new data.

This does mean two writes instead of one, but one is local to the disk, so it isn't clogging the bus to the disc. And it is probably done in the background by holding the block contents in RAM until the original write is complete.

So: a request to write becomes a new block to modify or write. Select the least-used block; if it is full of data, read it into RAM, erase it, and write it with the new block's data. If there are spare blocks, erase the least-used one and write the old block's contents to it; otherwise use the only spare block.

I am not sure, but I *suspect* that blocks are also shuffled around from time to time even when no other activity is pending. SSDs have a CPU and an operating system and run a whole suite of background processes.
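To make that concrete, here is a toy sketch in Python of the simplified scheme described above. It is purely illustrative: the class and field names are invented, and a real flash translation layer tracks pages, erase blocks and far more state than this.

```python
# Toy model of the simplified wear-levelling idea described above.
# Purely illustrative; real FTLs are far more complicated.

class ToySSD:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks   # how many times each physical block was erased
        self.data = [None] * n_blocks        # physical block contents (None = spare)
        self.mapping = {}                    # logical block number -> physical block number

    def write(self, logical, payload):
        # Choose the least-worn physical block that is not currently mapped.
        free = [b for b in range(len(self.data)) if b not in self.mapping.values()]
        target = min(free, key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1       # "erase" the target...
        self.data[target] = payload          # ...then program it with the new data
        self.mapping[logical] = target       # the old copy simply becomes spare

ssd = ToySSD(n_blocks=8)
for i in range(20):
    ssd.write(logical=0, payload=f"version {i}")   # keep rewriting the same logical block
print(ssd.erase_counts)   # wear is spread across all blocks, not hammered on one spot
```

Even though the host keeps "overwriting" the same logical block, the physical erases end up spread fairly evenly across the whole device.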

Here is my oldest SSD:

Model Family:     SandForce Driven SSDs
Device Model:     KINGSTON SV300S37A120G

ID# ATTRIBUTE_NAME           FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate      0x0032 095   095   050    Old_age  Always  -           0/170227111
  5 Retired_Block_Count      0x0033 100   100   003    Pre-fail Always  -           0
  9 Power_On_Hours_and_Msec  0x0032 040   040   000    Old_age  Always  -           53046h+44m+08.470s
 12 Power_Cycle_Count        0x0032 100   100   000    Old_age  Always  -           555
171 Program_Fail_Count       0x000a 100   100   000    Old_age  Always  -           0
172 Erase_Fail_Count         0x0032 100   100   000    Old_age  Always  -           0
174 Unexpect_Power_Loss_Ct   0x0030 000   000   000    Old_age  Offline -           103
177 Wear_Range_Delta         0x0000 000   000   000    Old_age  Offline -           96
181 Program_Fail_Count       0x000a 100   100   000    Old_age  Always  -           0
182 Erase_Fail_Count         0x0032 100   100   000    Old_age  Always  -           0
187 Reported_Uncorrect       0x0012 100   100   000    Old_age  Always  -           0
189 Airflow_Temperature_Cel  0x0000 035   045   000    Old_age  Offline -           35 (0 235 0 45 0)
194 Temperature_Celsius      0x0022 035   045   000    Old_age  Always  -           35 (0 235 0 45 0)
195 ECC_Uncorr_Error_Count   0x001c 120   120   000    Old_age  Offline -           0/170227111
196 Reallocated_Event_Count  0x0033 100   100   003    Pre-fail Always  -           0
201 Unc_Soft_Read_Err_Rate   0x001c 120   120   000    Old_age  Offline -           0/170227111
204 Soft_ECC_Correct_Rate    0x001c 120   120   000    Old_age  Offline -           0/170227111
230 Life_Curve_Status        0x0013 100   100   000    Pre-fail Always  -           100
231 SSD_Life_Left            0x0013 096   096   010    Pre-fail Always  -           0
233 SandForce_Internal       0x0032 000   000   000    Old_age  Always  -           26634
234 SandForce_Internal       0x0032 000   000   000    Old_age  Always  -           10826
241 Lifetime_Writes_GiB      0x0032 000   000   000    Old_age  Always  -           10826
242 Lifetime_Reads_GiB       0x0032 000   000   000    Old_age  Always  -           5826

*It has never had a read error (attribute 1), and at 53,046 power-on hours it has been on continuously for 6 years and is still in perfect condition.*

The Wear_Range_Delta of 96 shows that some blocks have been much more used than others, I think.

(Linux has log files and a lot of code and data that never gets read! I never made any huge attempt to not use it for log files.)

And that's what SMART tells you. Compare it with a 1 TB spinning-rust disk that is in there just in case, but was retired from active service because it had too many read errors: the SSD is perfect after 6 years, whereas the rust is dying at 8 years powered on, with 108 million read errors and 106 million seek errors!

Now it's true that the SSD is only 120 GB and not a terabyte, but it's run this desktop for 6 years and looks like it's in perfect nick.

Reply to
The Natural Philosopher

Not entirely; they have more actual capacity than their official capacity, so there are more pages available than will be allocated when the drive is "full". Also, they can move and reallocate less frequently accessed files to free up relatively lightly written pages, or to partially "retire" more heavily used ones.
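As a rough worked example of the over-provisioning idea (the figures below are assumptions for illustration, not any particular drive's specification):

```python
# Illustrative over-provisioning sums; the numbers are assumed, not a real drive's spec.
advertised_bytes = 500 * 10**9      # sold as "500 GB" (decimal gigabytes)
raw_flash_bytes = 512 * 2**30       # e.g. 512 GiB of NAND actually fitted

spare = raw_flash_bytes - advertised_bytes
print(f"spare area: {spare / 2**30:.1f} GiB "
      f"({spare / raw_flash_bytes:.1%} of the raw flash)")
# -> spare area: 46.3 GiB (9.1% of the raw flash)
```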

Reply to
John Rumm

YES!

Spinning wears the bearings. Worn bearings give disc wobble, which means seek errors and read errors. Head movement is also on a bearing, and wear there again means seek/read errors.

Writing to the coating is not really a problem, unless the head contacts the coating...

So on an HDD, reading, writing and power-on hours wear them out.

On an SSD, writing *alone* wears them out, which is why you never defrag them.

As I said this PC is in daily use, the disk is 6 years old and looks set to do at least another 6, whereas my spinning rust is almost dead at 8.

(power up hours).

The nonsense about 'fail without warning' is simply not true. But you need to look at them every year or so using SMART tools; that gives you all the info about ageing and 'wear' you need to know.

Even my SSD that died with some sort of actual HW failure (not wear) could be read to get the data off once it warmed up.

Reply to
The Natural Philosopher

You must be doing something wrong if none last more than 10 years. My main is still spinning rust and is older than that.

Reply to
Alex

I assume you don't let MS index this. If it does index it, it is true when MS is building the search index: the search index is on "C" and the data is elsewhere, so there are lots of seeks while it's building the index.

Then, if you let Windows index your data, get a 500 GB and clone the whole disk. Otherwise get one that will hold your "C" drive, but given that on Amazon the price difference between a 250 GB and a 500 GB is less than a fiver, I think you would be better served with a 500.

You might want to do a bit of googling on the Windows search indexing tool. It can cause major slowing after an update.

Dave

Reply to
David Wade

I have never had a hard disk failure (touch wood). The current WD 500 Green is ~10 years old, but I have lost the Novatech documentation for it so I can't be sure. I bought an upgrade package of M/B, RAM and Win7 Pro OEM in 2011 and used the existing IDE disk for a while before getting a SATA drive instead. Does the serial number indicate the date of manufacture?

The 500 GB 2.5-inch bus-powered Samsung drive purchased in 2013 for my HD FOX-T2 STB was hammered for 8+ hours a day until 2019, because I had catch-up enabled so it was constantly recording the live programme until I changed channels, when it then looped around. You could hear the heads constantly clicking. Now I just use it for recording, to give it a rest.

Reply to
Andrew

Even where there is adequate airflow, they still fail.

Either way you haven't provided details of what "wears out".

Reply to
Fredxx

There is actually an annealing effect at high temp.

When you write your new flash drive, while it has gotten itself all hot and bothered, two things happen.

1) Write quality is degraded. That's what the manufacturer is concerned about. If you later read back what was written, more ECC operations are required to correct all the errors. Your 500 MB/sec SSD slows down to 300 MB/sec. The SSD drive has 50 bytes per 512-byte sector payload to compute an error corrector polynomial. In a sense, each sector is 562 bytes, 512 of which belong to the user.

2) The write is "less stressful" to the cell. If we were counting write operations per cell, and waiting for each cell to be written 3000 times, it's possible we could now take the drive to 3000.5 writes. The more annealing we do, the longer the drive could last. Obviously, the amount of annealing is minimal, but it does count as a "less stressful write".
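Using the sector figures quoted above, the ECC overhead works out roughly like this (a quick sketch; the exact ECC layout varies by controller and NAND generation):

```python
# Back-of-the-envelope ECC overhead, using the figures quoted above.
payload_bytes = 512                    # user data per sector
ecc_bytes = 50                         # error-corrector polynomial bytes per sector
raw_bytes = payload_bytes + ecc_bytes  # 562 bytes physically stored per sector

print(f"ECC adds {ecc_bytes / payload_bytes:.1%} on top of the user payload")  # ~9.8%
print(f"{ecc_bytes / raw_bytes:.1%} of each raw sector is ECC")                # ~8.9%
```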

formatting link
Lee Hutchinson - 11/30/2012, 1:29 PM

"The modification is a complex one and required substantial engineering, but the results are impressive - a brief and restricted jolt at 800C appears to "heal" the flash cell, removing its retained charge. Macronix estimates that this can be done repeatedly as needed, leading to a flash cell that could potentially last for 100,000,000 cycles, instead of the roughly 1,000 cycles that current 21nm TLC flash cells are rated to last. "

The other [undocumented] innovation is the invention of "enterprise NAND flash", which has extended the wear life past 3000 cycles. This is where some of the [announced] Chia SSD drives have come from; they're based on enterprise NAND. I'm still waiting for a story to tell me how this was done and at what price in terms of production cost. It would be useful to understand why all SSDs could not be made from enterprise NAND.

Paul

Reply to
Paul

It is entirely down to usage.

My server is on 24x7. Only power cuts or a new kernel or software reinstall takes it down.

That age is actual spinning hours. As I said, a friend who was not only running a machine 24x7 but was using his disk as virtual RAM, with around 500 GB being constantly read and written, didn't even get a year.

I seem to start getting increased error rates after about 5 years on the server.

Disk failure I don't need: I replace at that point

Obviously if you just fire up a PC once a day it's different.

Reply to
The Natural Philosopher

The Natural Philosopher wrote:

Nope.

So is mine.

Only power cuts take mine down.

So is mine.

He must have been doing something wrong too.

The massive great hard drive farms do a lot better than that.

I don't.

I don't, it's on all the time. Always has been.

Reply to
Alex

That's actually a correct estimate.

If you place a modern hard drive in a situation where the head is constantly seeking at the maximum possible rate, the drives average one year of operation to failure. You can ask people who maintain hardware for websites what kind of numbers they get.

The reason a Backblaze drive lasts longer is that their usage pattern is more sequential. There might not be an excessive number of seek attempts each and every second.

The home-usage pattern for hard drives is so non-threatening as to be almost non-existent. The drive in this example never spins down either - it has no power save - which makes this result all the more unbelievable. I have six or seven other drives at this capacity that are not nearly as healthy. Somebody forgot to put their personal dandruff into the cavity of this drive. One of the reasons for keeping this drive online is purely to see how long it will last before the FDB motor goes out on it.

formatting link
Paul

Reply to
Paul

What is its read error rate?

Reply to
The Natural Philosopher

As the (paraphrased) saying goes, there are two types of people: those who have had a hard drive failure, and those who are going to.

CrystalDiskInfo or just a look at the SMART data will tell you the number of hours it has been running.
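For example, on a machine with smartmontools installed, something like the sketch below pulls the power-on hours (the /dev/sda device path is an assumption; adjust it for your system, and on Windows CrystalDiskInfo shows the same attribute):

```python
import subprocess

# Rough sketch: read the SMART attribute table with smartctl (smartmontools)
# and print the Power_On_Hours raw value. /dev/sda is an assumed device name.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "Power_On_Hours" in line:
        print("Power-on hours (raw):", line.split()[-1])  # raw value is the last column
        break
```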

(I recently had a 4 year old 6TB drive fail in my NAS (just outside of warranty natch!), and I also just retired an 11 year old Hitachi 1TB drive that was still working but was reporting a reallocated sector).

Something to watch out for on the WD Green drives is that they frequently unload the heads and spin down when not in use, but they only have a fairly limited max load/unload count (about 600K cycles IIRC). In intermittent-use applications those can get eaten up fairly quickly, as the rough sums below illustrate.
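A rough feel for how quickly that budget can disappear (an assumption-laden sketch: the cycle rate and hours of use below are invented, and the 600K figure is the one quoted above):

```python
# How long a ~600K load/unload budget might last under intermittent use.
# Assumptions (illustrative only): one park/unpark cycle per minute while powered on,
# and the PC is on for 12 hours a day.
max_cycles = 600_000
cycles_per_hour = 60
hours_per_day = 12

hours_to_exhaust = max_cycles / cycles_per_hour
years = hours_to_exhaust / hours_per_day / 365
print(f"{hours_to_exhaust:,.0f} powered-on hours, about {years:.1f} years at {hours_per_day} h/day")
```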

Not usually, but the DATE or DOM field on the label will normally give you a date of manufacture.

Reply to
John Rumm

2
Reply to
Alex

Err, where do I find the 'SMART data', please?

I switch my PC on from cold almost every day and then generally leave it running until sometime in the evening and shut it down, so hopefully it just spins away doing very little for hours.

Reply to
Andrew
