Just looked, I've got about 10GB free; there's 20GB of data on it
somewhere, which includes a swap file. Win 7 Pro, MS Office 2010 full,
video and photo editing software, assorted browsers and a handful of
other programs. It's 2 or so years old. Data is on a 2 TB drive which is
filling up, mostly video and images.
Seems like most of what fills up a HD is crap; I clean it out every few
months. MS will stuff a HD if you give it a chance. And so will ...
With that said, I wouldn't mind a bigger boot drive, but it all still
fits and the i5 still flies.
I suppose if I did all my work on it, it would be stuffed, but the
entertainment has switched to the tablet and the lightweight stuff is
on the laptop.
MS could learn a thing or two from Android; computing is trending smaller.
Deleted temp files and gained about 9GB, so I'm about 2/3 full. Not that
I'd suggest buying a 64GB drive now. I remember running Windows on a
20MB drive, when 80MB was big! Along the way Windows has steadily gotten
to be a pig.
But I have an Android 4.2.2 device which lives happily inside a few GB.
The Raspberry Pi is even smaller, and the OS on the BeagleBone Black, I
believe it's Ubuntu, does fine in a few GB. All of them can run a
browser, OpenOffice or something similar, and a photo editor. Most
people don't need more than that.
What amazes me are the Android apps. They download and install in
seconds. How small is that?
Microsoft seemed to be more concerned with piracy, and they built an obtuse
code collection around a giant registry that's practically a model of obfuscation.
I write software, mostly in PHP, and I can tell you that 360K of source
code is enormous.
Multi-megapixel displays, photo and movie data, and the software to
handle them: modern data and its processing lead to larger software,
memory and storage needs. Not to say that software inefficiencies
haven't grown, but the hardware has improved more than enough to outpace those
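inefficiencies. To put rough numbers on that (my own ballpark figures, not
from anyone's post): a single 20-megapixel photo is about 20 million pixels
at 3 bytes per pixel, call it 60MB uncompressed, which is three of those
old 20MB hard drives' worth of data for one picture, and a few minutes of
HD video runs to gigabytes.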
I routinely run Linux VMs with 8GB hard disks, and that leaves plenty
of space to spare.
This is my domain email server (MX, DNS, IMAP server) for example:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.0G  2.1G  4.6G  32% /
/dev/vda1        99M   20M   75M  21% /boot
tmpfs           186M     0  186M   0% /dev/shm
On 6/6/2014 12:00 PM, firstname.lastname@example.org wrote:
Now I know. But Win 95, and maybe 98 too, and all versions up to that
point loaded after DOS.
Somewhere around 1990-1992 I ordered a new Gateway computer with Win
3.0. IIRC, during boot-up it asked if I wanted to load Windows or go to
the C:\ prompt. I also recall that I could get out of Windows to run
PCTools, a very good utility and backup program. At the C:\ prompt I
could load PCTools and literally back up everything on the HD. When
done I could reload Windows. If I tried to do a backup with PCTools
while in Windows, the program would ask to exit Windows.
Windows NT 3.51 predated Win2K, and Win2K itself was based on the
NT 4.0 source base, which I was working with at the time
(circa 1998/1999).
Both were based on Dave Cutler's new (VMS-like) operating system.
I believe 98 and ME were the last DOS-based releases.
The (in)famous "DeathStars". The IBM drives before those and after
were quite good. Pretty much all of the innovation in disk drives
came from IBM. Yes, they even lost the recipe for a while in the
'90s. There was no money to be made anymore, so they sold the whole
deal off to Hitachi.
They've had troubles through the years, too. No one is immune when
margins are that thin and the technology is changing that fast.
On 6/7/2014 11:25 AM, email@example.com wrote:
In my world, performance is the biggest issue. Generally it is ignored
and not thought about until it's too late. Then they have to go back and
figure out how to make it run faster, which means identifying the
bottlenecks, re-reading their code, making changes, testing, and pushing
to production... (costly).
That's more expensive. When you do it right the first time you are
thinking about performance all the time. You do it because you know it
will be an issue.
I interviewed the other day for a small company, my first in many years.
They are at this place now. That's why they need someone like me. As we
were discussing things, I realized they never thought about perf at all.
So their small customers are OK, but now they are in big places, and it's
failing to keep up... From some of the things they told me, they
never made it scalable, and they never considered performance. I can
already tell that much of what they want me to do will be pushed back
to them as changes. They won't like that, but I can't solve their
problems with a magic bullet; I have to educate them to do it on their
own. I've done this so many times after the fact... it's cheaper to teach
them how to code for performance and get it done when they write their
code to begin with.
Before the breakup of the Bell companies, they identified some simple
problems in IBM's logic: IBM was doing the most common logical tests in
reverse order. The most frequent case was being tested second, the least
common first. Bell identified the problem and presented it to IBM. They
made the change, and the system was flying. The order of logical tests
matters, and the lower in the code or OS it sits, the more it matters.
Certainly it doesn't matter if the code is rarely exercised. But if it is....
I know I'm a Neanderthal, but I still run programs that were designed to
operate on DOS 2.0 and haven't been updated since 1990.
Have my bank programs set up as database programs, some of which
go back to the early '80s and are less than 70K.
Biggest PITA is remembering to use <ALT><ENTER> to get into
full-screen mode.
My customer file is a custom database file that contains 1,000 records
with 50 fields each, and is only 200K.
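That works out to roughly 200 bytes per record, about 4 bytes per field
on average. A rough sketch of what such a fixed-width record layout could
look like in C (the field sizes are my guesses, not the actual file format):

/* Hypothetical fixed-width record layout, sized from the numbers above. */
#include <stdio.h>

struct customer {
    char fields[50][4];   /* 50 fields at 4 bytes each = 200 bytes */
};

int main(void)
{
    /* 1,000 records * 200 bytes/record = 200,000 bytes, i.e. ~200K. */
    printf("record: %zu bytes, file: %zu bytes\n",
           sizeof(struct customer), 1000 * sizeof(struct customer));
    return 0;
}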
I'd call that pretty tight code. What do you call it?