OT: SSD drive prices have dropped drastically this week.

That's a deal...

Out of curiosity I checked regular laptop HD prices:

$80 for a 2.5" 1TB Seagate Momentus

formatting link

Decent hardware is getting really cheap. As cheap as the junk.

Yeah, tempts me, but it has to go at the end of the list.

It's a lot of laptop drive!

Reply to
pentapus

Seagate has been around for a long time, so to speak. Have you had any luck with that brand? I have had two computers with Seagate drives, one about 25 years ago and one as recently as 2 years ago. Both failed, and those are the only hard drives that have failed on me, IIRC. I have heard that this is not unusual for a Seagate drive.

Reply to
Leon

I routinely run Linux VMs with 8GB hard disks, and that leaves plenty of space to spare.

This is my domain email server (MX, DNS, IMAP server) for example:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.0G  2.1G  4.6G  32% /
/dev/vda1        99M   20M   75M  21% /boot
tmpfs           186M     0  186M   0% /dev/shm
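
If you want the same numbers from a script rather than from df, here is a minimal Python sketch; the mount points listed are just examples, not anything from the server above.

import shutil

# Report usage for a few mount points, roughly like `df -h`.
# The mount-point list is an assumption; adjust it for your own system.
GIB = 1024 ** 3
for mount in ("/", "/boot", "/dev/shm"):
    usage = shutil.disk_usage(mount)
    print(f"{mount:10s} size={usage.total / GIB:6.2f}G "
          f"used={usage.used / GIB:6.2f}G free={usage.free / GIB:6.2f}G")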

Reply to
Scott Lurndal

I'm not in the hardware business, so I can't really say, but my Seagate Barracuda ST2000DM001 2TB 7200 RPM is running fine. So is a friend's, bought a couple of years ago. Others aren't so happy with theirs. I have a Crucial SSD and 4 GB of RAM, so the drive is just handling data.

Seems like HD companies have their ups and downs. Both Maxtor and WD went through a really bad patch, and I got caught in it: I've had drives from both where not only the original failed, but the replacement did too! I've heard really bad things about the current WD Green drives.

Looks like Hitachi is the current champ:

formatting link

It's been said that good judgement comes from experience. And experience comes from bad judgement. You don't always know until after the fact!

Reply to
pentapus

True.

But I have an Android 4.2.2 device which lives happily inside a few GB. The Raspberry Pi is even smaller, and the OS on the BeagleBone Black (I believe it's Ubuntu) does fine in a few GB. All of them can run a browser, OpenOffice or something similar, and a photo editor. Most people don't need much more.

What amazes me are the Android apps. They download and install in seconds. How small is that?

Microsoft seemed to be more concerned with piracy, and they built an obtuse code collection with a giant registry which is a near model of obfuscation.

I write software, mostly in PHP, and I can tell you that 360K of source code is enormous.

Reply to
pentapus

That's quite arguable. IMO, DOS was a file system and program loader. It was not an operating system at all. It didn't manage memory or do much else that an operating system does.

No, the NT variants (which include everything from Win2K on) are/were not based on DOS.

It's arguable whether Windows 3.x was an OS or not, too. You're right, all it did was slap a GUI on top of DOS. WinNT, Win2K, XP, and all, are a very different thing.

Reply to
krw

Windows NT 3.51 predated Win2K, and Win2K itself was based on the NT 4.0 source base, which I was working with at the time (circa 1998/1999).

Both were based on Dave Cutler's new (VMS-like) operating system.

I believe 98 and ME were the last DOS-based releases.

Reply to
Scott Lurndal

Still a lot larger than a 360k floppy. ;~)

But where does one get an 8Gig HD????

Reply to
Leon

My last Seagate was an external, strictly for data; it would spin up but would cause my computer to freeze daily. I was unaware that it was the problem until I started seeing that data was occasionally not obtainable. Finally I unplugged it after 6 months of aggravation and the problem was solved. Hummmmmmm ;~)

Good to hear, that is what my new primary drive brand is.

Reply to
Leon

Now I know. But Win 95, maybe 98 too, and all versions up to that point loaded after DOS.

Somewhere around 1990-1992 I ordered a new Gateway computer with Win 3.0. IIRC, during boot up it asked if I wanted to load Windows or go to the C:\ prompt. I also recall that I could get out of Windows to run PCTools, a very good utility and backup program. At the C:\ prompt I could load PCTools and literally back up everything on the HD. When done, I could reload Windows. If I tried to do a backup with PCTools while in Windows, the program would ask me to exit Windows.

Reply to
Leon

My biggest problems were with the IBM drives quite a number of years ago. I had to buy multiples for my RAID setup, and those drives failed.

I was treated poorly by the IBM people (kids), and it was awful. They finally sent new drives (I was still under warranty) and those failed quickly. Long after that I saw a class action suit over those same drives. How PC Mag rated them the best is confusing.

I have had luck with both WD and Seagates.

Reply to
woodchucker

Thanks for the history lesson, but did you have a point?

I just said that.

Reply to
krw

The (in)famous "DeathStars". The IBM drives before those and after were quite good. Pretty much all of the innovation in disk drives came from IBM. Yes, they even lost the recipe for a while in the '90s. There was no money to be made anymore, so they sold the whole deal off to Hitachi.

They've had troubles, through the years, too. No one is immune when margins are that thin and the technology changing that fast.

Reply to
krw

Right. Win95, 98, and ME were all shells on top of DOS (though to varying degrees). All of the NT varieties (NT x.xx, Win2K, XP, Vista, 7, and 8) are quite different animals.

Yes, Win3.x *was* just a shell on top of DOS. The later versions (Win95 etc.) still had DOS in there but more and more for legacy reasons. That all went away with Win2K, which was really NT (that really worked ;-). (Just to be clear, NT was *not* DOS based, either)

Reply to
krw

You are absolutely correct. Cost is the be-all and end-all. Writing tight code is hard and even harder to verify. That's why techniques like "object oriented programming" are used extensively.

Reply to
krw

Mike Marlow wrote:

>>> While I agree, it was simple, it was an OS. I was only pointing out

I know I'm a Neanderthal but I still run programs that were designed to operate on DOS 2.0 and haven't been updated since 1990.

I have my bank programs set up as database programs, some of which go back to the early '80s and are less than 70K.

Biggest PITA is to remember to use <Alt><Enter> to get into full screen mode.

My customer file is a custom database file that contains 1,000 records with 50 fields and is only 200K.

I'd call that pretty tight code. What do you call it?

Lew

Reply to
Lew Hodgett

I call it bullshit. What sort of customer record is 4 bytes long? His initials + age? LOL!

Reply to
krw

In their case, it is not cost, just the way they think. In my world, performance is the biggest issue. Generally it is not thought about, just ignored, until it's too late. Then they go back and have to figure out how to make it run faster, which means identifying the bottlenecks, re-reading their code, making changes, testing, and pushing to production... (costly)
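
To make "identifying the bottlenecks" concrete, here is a minimal, hypothetical Python sketch using the standard-library profiler; process_orders and its data are made up purely for illustration, not anything from that company.

import cProfile
import pstats

def process_orders(orders):
    # Stand-in for a real workload; purely illustrative.
    return sorted(orders, key=lambda o: o["total"])

orders = [{"id": i, "total": (i * 37) % 100} for i in range(100_000)]

profiler = cProfile.Profile()
profiler.enable()
process_orders(orders)
profiler.disable()

# Show the ten most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)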

That's more expensive. When you do it right the first time you are thinking about it all the time. You do it because you know it will be an issue later.

I interviewed the other day with a small company, my first interview in many years. They are at that point now; that's why they need someone like me. As we were discussing things, I realized they never thought about perf at all. So their small customers are OK, but now they are in big places and it's failing to keep up. When I heard some of the things they did, they never made it scalable and never considered performance. I can already tell that many of the things they want me to do will be pushed back to them to make changes. They won't like that, but I can't solve their problems with a magic bullet; I have to educate them to do it on their own. I've done this so many times after the fact... it's cheaper to teach them how to code for performance and get it done when they write their code to begin with.

Before the breakup of the Bell companies, they identified some simple problems in IBM's logic: IBM was testing the most common logical case in reverse order. The most frequent case was being tested second, the least common first. Bell identified the problem and presented it to IBM. IBM made the change, and then the system was flying. The order of logical tests matters, and the lower in the code or OS you are, the more it matters. Certainly it doesn't matter if the code is rarely exercised. But if it is....
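
A toy Python sketch of that ordering point; the "local" vs. "long_distance" cases and the 95% frequency are invented for illustration, not taken from the Bell/IBM story itself.

import random
import timeit

random.seed(42)
# Assume roughly 95% of transactions are the common case.
transactions = ["local" if random.random() < 0.95 else "long_distance"
                for _ in range(100_000)]

def rare_case_first(txns):
    count = 0
    for t in txns:
        if t == "long_distance":   # least common case tested first
            count += 1
        elif t == "local":
            count += 2
    return count

def common_case_first(txns):
    count = 0
    for t in txns:
        if t == "local":           # most common case tested first
            count += 2
        elif t == "long_distance":
            count += 1
    return count

print("rare case first:  ", timeit.timeit(lambda: rare_case_first(transactions), number=20))
print("common case first:", timeit.timeit(lambda: common_case_first(transactions), number=20))

Testing the frequent case first means the second comparison is skipped most of the time, which is the same kind of change Bell suggested.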

Reply to
woodchucker

That's only if you use ASCII!

Reply to
Bill

Better check your calculations again. 200Kbytes / 1000 records = 200 bytes per record, not 4.

4 bytes is the average length per *field*, not per *record*.
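
Spelled out as a quick check in Python (taking "200K" as 200,000 bytes, which is just an assumption about what Lew meant):

records = 1_000
fields_per_record = 50
file_size_bytes = 200_000          # "200K", taken as 200,000 bytes

bytes_per_record = file_size_bytes / records              # 200 bytes
bytes_per_field = bytes_per_record / fields_per_record    # 4 bytes

print(f"{bytes_per_record:.0f} bytes per record, {bytes_per_field:.0f} bytes per field")
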
Reply to
Doug Miller
