SSD or HDD for video writing

I am currently using an old HDD to record video from four CCTV cameras, although not continuously.

I used to use a Samsung SSD, but that suddenly failed after about 18 months to 2 years.

After doing a bit of googling on this subject, I haven't found anything yet which goes against an SSD. I have a Samsung 860 EVO 250GB which I could use, as I would rather not spend money on an HDD if I don't have to.

So which do you suggest would be best: HDD or SSD?

Reply to
RobH

I did a fair amount of research on this, as SSDs are now widely available and not too dear.

Two things appeared to be more or less true.

  1. A small percentage of SSDs fail totally and unexpectedly, for no discernible reason, and you are just as f***ed as you would be with a hard drive.
  2. Conversely, the wear-levelling algorithms that spread write operations more or less evenly across all the flash cells actually result in *longer* life than a hard drive that is being thrashed.

Certainly all my SSDs (bar one that failed almost immediately) are in fine fettle and showing no sign of wear at all.

And, as with hard drives, SMART interrogation reveals when things are *starting* to go bad.
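For anyone who wants to automate that check, here is a minimal sketch (assuming Linux with smartmontools installed and that the SSD shows up as /dev/sda; the device path is an assumption):

```python
# Minimal sketch: ask smartctl for the drive's overall SMART health verdict.
# Assumes Linux, smartmontools installed, and enough privilege to query the disk.
import subprocess

DEVICE = "/dev/sda"  # assumption: substitute your own drive


def smart_health(device: str) -> str:
    """Return the overall-health line reported by 'smartctl -H'."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    )
    # smartctl prints something like:
    # "SMART overall-health self-assessment test result: PASSED"
    for line in result.stdout.splitlines():
        if "overall-health" in line:
            return line.strip()
    return "No health line found (is smartctl installed, and are you root?)"


if __name__ == "__main__":
    print(smart_health(DEVICE))
```

The full attribute table ('smartctl -A') is where the wear indicators live, but the pass/fail verdict is enough for a cron job that warns you when it changes.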

The one area where they are possibly inferior to spinning rust is data retention in an unpowered state.

So an archive needs to be powered up from time to time.

Another point is that while a spinning-rust disk has a mechanical lifetime, after which bearing wobble and so on limits it no matter how much is written to it, you can achieve extremely long SSD life by buying a very large capacity, most of which is *not used*. The wear levelling will ensure that the writes are spread out over all the blocks, so that any given block sees far fewer writes than the total number of writes to the drive.
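To put rough numbers on that, here is a back-of-envelope sketch (the per-cell cycle count and daily write volume are illustrative assumptions, not the spec of any particular drive, and it ignores write amplification):

```python
# Back-of-envelope wear-levelling arithmetic (illustrative assumptions only).
def years_of_writes(capacity_gb: float, writes_gb_per_day: float,
                    pe_cycles_per_cell: int = 1000) -> float:
    """Rough lifetime in years if wear levelling spreads writes evenly.

    Total write budget ~= capacity * P/E cycles per cell, so the bigger the
    (mostly empty) drive, the more blocks share each day's writes.
    """
    total_write_budget_gb = capacity_gb * pe_cycles_per_cell
    return total_write_budget_gb / writes_gb_per_day / 365


# Example: ~125 GB/day of camera footage (a figure borrowed from later in
# the thread) landing on drives of increasing size.
for cap_gb in (250, 1000, 4000):
    print(f"{cap_gb} GB drive: ~{years_of_writes(cap_gb, 125):.0f} years of write endurance")
```

With those assumed numbers a 250 GB drive is good for roughly 5 years of writes and a 4 TB drive for the better part of a century, which is the "buy big and leave it mostly empty" argument in miniature.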

In your application, I would unhesitatingly go with SSD.

And as big as you can afford.

Reply to
The Natural Philosopher

I just connected up an SSD that I hadn't used for 3 years, and it just 'worked'. I read a few files, and all seemed good.

Reply to
GB

Thanks for that, and as I mentioned, I have a 250GB 860 EVO SSD. In the machine where I plan to use the SSD there is a 120GB spinning-rust disk, of which almost 90GB is free space. So with a 250GB disk there would be about 220GB of free space, which is not used by anything.

Reply to
RobH

That is useful info.

Most people are saying 'be worried after ten years'.

Reply to
The Natural Philosopher

Sounds like a plan, then.

Reply to
The Natural Philosopher

That 250GB HDD or SSD can't hold that much footage from 4 CCTV cameras?

I have 8 cameras and 20 TB of storage and that gives me 83 days.....

So my 8 cams are producing almost 250GB of new data a day, so 4 cams would give you just 2 days before overwriting?
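For the record, the sums behind that, as a quick sketch (the per-camera rate is inferred from my figures above; yours will differ with resolution, framerate and motion settings):

```python
# Rough retention estimate from the figures above (8 cams, 20 TB, 83 days).
TOTAL_STORAGE_TB = 20
CAMERAS = 8
RETENTION_DAYS = 83

gb_per_day_all = TOTAL_STORAGE_TB * 1000 / RETENTION_DAYS   # ~241 GB/day
gb_per_day_per_cam = gb_per_day_all / CAMERAS               # ~30 GB/day/cam

# Scale to the OP's setup: 4 cameras writing onto a 250 GB drive.
op_gb_per_day = gb_per_day_per_cam * 4
op_days = 250 / op_gb_per_day

print(f"~{gb_per_day_all:.0f} GB/day across 8 cams, "
      f"~{op_days:.1f} days of retention on 250 GB with 4 cams")
```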

Hope you check your CCTV every day while you're next on a week's holiday! :-)

Reply to
SH

Ah, I see. So, SSDs are unsuitable for very long term archival storage, say. Is the issue that the charge simply leaks away over time? So, maybe the SSD will still work, but the data won't be readable?

Reply to
GB

Yes, but remember that as soon as it gets power, my *understanding* is that it will 'refresh' and be good for another ten years. I might be wrong on that, though.

Reply to
The Natural Philosopher

I wondered about that. If any bits are lost, your filesystem is not trustworthy; maybe you could reformat it and bring it back to life. But what if the drive itself stores internal metadata or firmware inside its own flash? It could be bricked...

Reply to
Andy Burns

I suspect it's not organised like that, but I don't have time to find out right now. Most filesystems will recover from the odd lost bit.

Reply to
The Natural Philosopher

I think that's a reasonable assessment.

At least one person took a 16-year-old NAND flash-based device and could not read it, which means some critical data table likely failed (like the mapper table).

SSDs are NOT engineered as time capsules. The gate-drain leakage behavior at ten years is the basic physics of the thing.

TLC cells can become mushy in as little as three months, and if you see the read speed drop, that means an ARM core is doing error correction on every sector being read out.

You can see this even on "brand new" TLC drives. If you HDTune-bench the thing before doing a "freshen" (write from end to end) procedure, the read rate seen could be 1/4 of the speed on the tin. Don't panic when that happens; use the drive a bit, or write to it, to improve the benchmark.

I've never heard any "squeaking" about gate-leak behavior having been defeated or extended in any way. It is possible to anneal a NAND flash cell and "make it last forever" in terms of write life; that would take my current 4TB drive from 2400TBW to infinity. However, we don't know how to implement annealing at an atomic level, which is why wear life is not going to get extended that way. And the "leaky-gate" problem is more or less a constant of the thing, which is why SSDs are not expected to be data capsules. We even get hints of this with NOR flash in motherboard BIOS, where a byte or two will corrupt in the chip and prevent POST. That could happen past the ten-year mark.

An Intel employee "hint" mentioned here indicates that Optane has even higher wear life than NAND (duh), but the time-capsule behavior is no good, and he allows that a NAND-based device might be better. The reason we would check and look up Optane is that the storage mechanism doesn't use a floating gate. At $3000 per drive, these are not items we will be burying in the ground anyway.

formatting link
The upper limit on "convenient" storage is likely M-DISC and the estimated lifespan of carefully preserved polycarbonate for the discs. The chemical doing the recording might have an extremely good life, but the survival of the polycarbonate needs to be taken into account. So when they say "a thousand years", they mean "if the polycarbonate actually lasts that long".

*******

There are rotating drives intended for video recording, and they will have a cache behavior suited to a fixed pattern of access. That's a "Purple" drive, versus the "Red" drive used for a NAS. Both drives stay spinning and don't park. The NAS drive assumes there is no pattern to the application, while the "Purple" drive has probably been studied in recorder equipment to handle the pattern. Maybe it means the file system can't have journaling, to simplify where the writes go.

A four-camera setup should be easy for a "Purple" drive to handle, even if you put the recording on the wrong file system. Then you have to decide how big a drive to buy. An 8TB Purple would be helium-filled; a 6TB Purple might be an air breather. What's the room air quality like? If high humidity, I might go with the helium. A helium drive is "guaranteed" <cough> to hold the gas for five years. Some helium drives actually have a pressure sensor, but the details of the SMART entry seem to have only been obtained by observation and reverse engineering. And if you drilled a hole to "let the gas out", well, now the drive is dead, because the flying behavior is fouled up (wrong flying height).

If the Helium drives had a "gas refill port", I might give them the benefit of the doubt.

I have one 500GB drive here which has lasted over 50,000 hours with little sign of degradation, and brothers of that drive which are showing issues at 5,000 to 10,000 hours. There's no predicting which drive will be a champ.

Within the last year, I purchased three 1TB WD Black drives ("Made in China"), and one of them would not spin when plugged in new. That's my *first* infant mortality on a hard drive here! So rather than "s**te in the year 2000", that's s**te from 2023, as a data point. That's not supposed to happen, as the drive has to be commissioned before it can be shipped, and the motor has to work. The boxes the drives ship in are of good quality, and we can't blame dropping the box, since the drives have a 300G rating and the box materials are soft enough that no box-drop could achieve 300G.

Paul

Reply to
Paul

SSD drives have critical tables, just as HDDs do.

The external LBA presented to the drive is the virtual address. The mapper table gives a physical address:

Sector 0 is stored at 0x12345678 and Sector 1 is stored at 0x3579ABCD.

For wear leveling, "fresh" blocks come from the free pool, and the drive "fragments" as time passes. Thus, after a period of time, the mapper table looks almost random. You might say to me: why isn't the table like this?

Sector 0 is stored at 0x00000000 and Sector 1 is stored at 0x00000001.

It might be at first, but if any sort of non-uniform or random write behavior comes along, that helps randomize the mapping.

Losing the mapping table is deadly for storage, and is one of the reasons that scoping a NAND flash chip and "trying to suck the data out of it" goes nowhere fast. You need the mapping table to even think about harvesting data from raw chips.
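A toy model of that mapping, just to illustrate the idea (a sketch of the concept only, not how any real controller lays out its table):

```python
# Toy flash translation layer: each logical sector maps to whichever physical
# block last came off the free pool, so the table scrambles as writes arrive.
import random


class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.free_pool = list(range(physical_blocks))
        random.shuffle(self.free_pool)
        self.mapping = {}    # logical sector -> physical block
        self.wear = {}       # physical block -> write count

    def write(self, logical_sector: int) -> None:
        """Write a sector: take a fresh block, retire the old copy to the pool."""
        old = self.mapping.get(logical_sector)
        if old is not None:
            self.free_pool.append(old)      # stale copy becomes reclaimable
        new = self.free_pool.pop(0)         # "fresh" block from the free pool
        self.mapping[logical_sector] = new
        self.wear[new] = self.wear.get(new, 0) + 1

    def read(self, logical_sector: int) -> int:
        """Without this table, the raw chips are just a jumble of blocks."""
        return self.mapping[logical_sector]


ftl = ToyFTL(physical_blocks=16)
for sector in range(8):                  # initial, orderly fill
    ftl.write(sector)
for _ in range(50):                      # random rewrites scramble the map
    ftl.write(random.randrange(8))
print("logical sector 0 now lives in physical block", ftl.read(0))
```

Run it a few times and sector 0 lands somewhere different each time, which is the point: lose that dictionary and the data is still in the chips, but you no longer know which block is which.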

A hard drive has mapping tables too, but they're used for reallocations. At one time, an IBM document claimed that 1MB of their 8MB cache RAM chip was used for a remapping table, for fast access to remap info.

Some hard drives used to die (predictably) roughly 30 days after purchase. This was due to a firmware bug in a critical data table, which could be prevented with a firmware flash before it struck. It turns out that SSDs have not been the only devices to suffer at the hands of "funky" firmware.

The reason SSDs recover from mushy sectors is the size of the error-corrector syndrome compared to the size of the data. A 512-byte sector now has 51 bytes of error syndrome; that's a 10% overhead. Maybe there's a Reed-Solomon being used there. Whatever the algo is, the corrector is implemented in firmware, and there seems to be no interest in a wire-speed dedicated error-corrector logic block. This is how we can tell "something is wrong": a drive that reads at 530MB/sec when new reads at 300MB/sec three months from now. Turn off the power for three months, do an HDTune bench, and see. Modern drives are TLC or QLC, and the "mush" behavior was first observed on TLC.
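The worked numbers behind that overhead, with a hedged guess at the correction capability (the actual code inside any given drive's firmware is not public, so treat the Reed-Solomon figure as an if-then):

```python
# Syndrome overhead for the 512-byte-sector example above.
DATA_BYTES = 512
SYNDROME_BYTES = 51

print(f"ECC overhead: {SYNDROME_BYTES / DATA_BYTES:.1%}")   # ~10.0%

# IF the code were Reed-Solomon over bytes, n - k = 51 parity symbols could
# correct up to floor(51 / 2) = 25 corrupted bytes per sector (the standard
# RS bound). Modern controllers often use LDPC instead, whose correction
# power isn't captured by a one-line formula, so this is illustration only.
print(f"Correctable bytes per sector, if RS: {SYNDROME_BYTES // 2}")
```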

Paul

Reply to
Paul

I suspect it may need to go through and rewrite all the data, i.e. it's not just a case of powering it up for 5 seconds and then powering it down again; it needs to be left powered. Then it's a question of whether the flash cells will refresh themselves, whether the controller will tell them to do it as part of the regular management tasks it does in the background, or whether they need external hints to do that. Maybe you need to do a full-drive read so that it fetches every block and then rewrites it in the process? Or maybe you need to explicitly ask it to rewrite each block?
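If a full read does turn out to be enough of a hint, the safe (read-only) version is trivial, something like this sketch (the device path is an assumption, it needs root on Linux, and whether the controller actually refreshes weak cells afterwards is exactly the open question above):

```python
# Read every block of the device once, read-only, so the controller at least
# fetches and error-corrects all the data; the reported throughput also hints
# at whether cells have gone "mushy". No writes are issued.
import time

DEVICE = "/dev/sdb"          # assumption: the archive SSD, not the boot disk
CHUNK = 16 * 1024 * 1024     # 16 MiB per read


def full_read(path: str) -> None:
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    print(f"Read {total / 1024**3:.1f} GiB at {total / elapsed / 1e6:.0f} MB/s")


if __name__ == "__main__":
    full_read(DEVICE)
```

Explicitly rewriting every block would be the stronger guarantee, but doing that in place is easy to get wrong, so I've left it out of the sketch.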

I'd want to understand this better before relying on it.

Theo

Reply to
Theo

I said it was only my *understanding*, not guaranteed fact!

There may well be a way that flash RAM re-charges itself without a write cycle. Remember, it's the erase cycle that is done on a block basis and is therefore slow.

It may be that a background process rewrites data to the cells that hold '1's...

Reply to
The Natural Philosopher
