OT Win7 updates

When I interviewed for my current job one of the questions was prefaced with 'assume you have unlimited memory'. I flashed back to an 8049 project when I would have killed for a byte.

I enjoyed working with microcontrollers. You had a very good idea of what the thing was up to at all times, how many cycles each instruction took, and what was happening with the peripherals.

Reply to
rbowman

It looks like those Z8000 add-on chips for memory management and paging became available in the early 1980s. That puts it not in the era of the 8086, which was the time period of Don's claim, but much later. By that time the 286 with built-in memory management was out.

Reply to
trader_4

From what I see, the memory management and paging chips there, the Z8010 and Z8015, were introduced by Zilog in the early 1980s. That puts them in the timeframe of the 286, not the 8086, which was introduced in 1978. Your claim was relevant to the 8086, and years matter. Just a few years later, the 386 was out with memory management and protection onboard, where it really needs to be to work effectively, and the rest is history.

Reply to
trader_4

Yes. And "software inertia" is REALLY hard to overcome as it costs so much!

Good that you are aware of that! I was completely stumped by the whole Y2K thing: "How could people NOT realize the '19' would be changing to a '20'?" Likewise: "How can people NOT realize the upcoming rollover in 2038?" Yeah, this *product* might not survive to that date but the *code*/algorithm will! It would be like designing a math library that magically stops working due to some FORESEEABLE event...

I spent a lot of time thinking about my build environment. I've got a *huge* code base (my RTOS alone is a bigger piece of software than most folks would write in their entire career!). Not only do I have to worry about what hardware it will run on but also whether the tools will remain viable as well as the hosting environment.

My solution was to adopt technologies that are all open source -- so I can archive the sources for the tools themselves instead of having to archive *just* binaries (which will only run on a particular OS -- which means the binaries for the OS would have to be archived; which would only run on a particular hardware platform -- which means archiving the hardware platform; etc.)

And, the incentive to document the hell out of things -- the design is the biggest portion of a project ("coding" is just a tiny portion). Much less effort building a house from a plan than setting out with a hammer and some nails and a "dream" :-/

Reply to
Don Y

In my early projects, we would count *bits* and make maps of which bits of which bytes were being used by which routines (at which times): "Ah, b3 of Foo need not be preserved at this time! So, I can use that as a flag..." (you never wasted more than a bit on a boolean! When you only had a few hundred bytes TOTAL to work with -- including the stack, etc.)
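That bit-level frugality is easy to sketch even in a modern language. Here's a toy illustration (the flag names and masks are made up for the example) of packing several booleans into a single byte with masks, the way we'd claim individual bits:

```python
# Hypothetical flag byte: eight booleans share one byte, each
# claimed by a named mask (names invented for illustration).
DOOR_OPEN  = 0x01  # b0
MOTOR_ON   = 0x02  # b1
FAULT_SEEN = 0x04  # b2
# b3 "need not be preserved at this time" -- free to reuse as a flag!

flags = 0
flags |= MOTOR_ON                 # set a flag
flags &= ~DOOR_OPEN               # clear a flag
motor_running = bool(flags & MOTOR_ON)  # test a flag
print(hex(flags))  # 0x2
```

One byte, eight booleans -- versus the byte-per-boolean (or worse) that nobody thinks twice about today.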

[You would annotate each subroutine with a note as to its maximum stack penetration. So, you could figure out exactly how many bytes of stack were required, worst case, for the design. Can't afford to waste bytes that will never be accessed. Nor can you tweak the size of the stack after you've sold the product!]
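That bookkeeping amounts to a worst-case walk over the call graph. A tiny sketch (routine names and byte counts are invented; real numbers came from the annotations on each subroutine):

```python
# Hypothetical call graph: bytes each routine pushes (return address,
# saved registers, locals) and the routines it may call.
routines = {
    "main":           (2, ["read_adc", "update_display"]),
    "read_adc":       (4, ["delay"]),
    "update_display": (6, ["delay"]),
    "delay":          (2, []),
}

def max_penetration(name):
    """Worst-case stack bytes consumed by `name` and everything below it."""
    own, callees = routines[name]
    return own + max((max_penetration(c) for c in callees), default=0)

print(max_penetration("main"))  # 10 = main(2) + update_display(6) + delay(2)
```

(Assumes no recursion and no interrupts -- interrupt frames got added on top of the computed worst case.)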

I can recall being a few hundred bytes "into" needing another ($50) EPROM on one project. We went through the code and tabulated how many times each subroutine was invoked. Then, mapped the most commonly referenced (not "frequently" but "commonly") routines to shorter calling sequences (lengthening the time it took to invoke them but saving a byte on each invocation instance). We were thus able to shrink the size of the code.
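A back-of-the-envelope version of that tabulation, with made-up numbers (think 3-byte CALLs versus a 1-byte restart vector, as on an 8080-class part):

```python
# Hypothetical tally of call *sites* per routine (how often each is
# referenced in the code, not how often it executes at runtime).
call_sites = {"get_key": 41, "put_char": 37, "delay_ms": 29, "crc8": 5}

CALL_BYTES = 3   # a full "CALL addr16" on an 8080-class CPU
RST_BYTES  = 1   # a one-byte restart/vector instruction
VECTORS    = 2   # suppose only two short vectors are still free

# Map the most commonly referenced routines to the short vectors
# and total the code bytes saved across all their call sites.
by_sites = sorted(call_sites, key=call_sites.get, reverse=True)
saved = sum(call_sites[r] * (CALL_BYTES - RST_BYTES) for r in by_sites[:VECTORS])
print(saved)  # 156 = (41 + 37) * 2
```

Two bytes saved per call site adds up fast when a routine is referenced dozens of times -- at the cost of a slightly longer invocation (the vector has to dispatch to the real routine).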

And prayed we would never have to make any changes that ADDED to the code (yeah, right... when has code ever SHRUNK??)

That's considerably harder, nowadays. Pipelines are deeper, there's more magic between the CPU and the memory bus, etc. I.e., with support for newer memory technologies (DDR*), it's hard to predict how long a memory access will take as the memory interface is pulling in chunks of memory instead of individual locations.

Plus, compilers have advanced to the point where you can't even be sure the code will execute as you wrote it! All of the mealy-mouth words that you thought you could safely ignore in the language definition suddenly represent real possibilities that the compiler writer can "legally" exploit.

And, hardware is so cheap and tiny that you can now "afford" to do things in ways that previously would have been "mainframe class" implementations! E.g., my current project started out with lots of dumb "motes" that just allowed the various "interfaces" to be distributed around the house (instead of running thousands of signals back to "one big computer", I could process a dozen on a mote and ship the *data* they represented off to that "big computer").

But, you still need the interface logic, signal conditioning, some sort of MCU, network interface (to talk to the "big CPU"), local power supply, etc. At *each* of those "motes".

And, as the application grows, the "big computer" has to grow significantly; applying the design to a household with four occupants and to a commercial establishment with 400 employees are vastly different problems -- the "motes" design doesn't scale well because there is no "processing" done in the motes; they just perform data acquisition! So, the "big computer" has to get vastly larger, faster, etc. (imagine trying to carry on 400 different conversations simultaneously in addition to sensing and controlling all of the I/O's associated with that facility and those 400 employees!)

But, if you put some "capabilities" into those motes, then you can distribute the PROCESSING, as well! So, the "big computer" ends up being distributed instead of centralized. I can put a quarter of a gigabyte of memory and a 500MHz (32b) processor on each node for just a wee bit more than I could put a dumb microcontroller (remember, the microcontroller still needs to be capable of talking on the network, still needs power, signal conditioning, etc.).

Well, if you have those kinds of resources available "locally", you can put in a richer execution environment (e.g., demand paged memory, capabilities based protocols, etc.) that's more supportive of migrating parts of the application into that node! AS IF it was a piece of some "big computer".

And, as the deployment grows, the increased number of nodes that are required (to address the larger facility and increased number of users) automagically grows the (distributed) "big computer".

But, that means the nodes are now executing in environments where cache performance is an issue (not pertinent to a dumb microcontroller), where you can't infer how long something will take to execute just by "casual inspection", etc. And, as they are now parts of a bigger whole, you have to address the sharing that would be expected in that larger "entity": how do I share CPU cycles if part of the CPU is "way over there"? How do I share memory if the only "free" memory is located in another room?

It's sort of the *best* of both worlds -- and the *worst* of both worlds!

[Given how cheap and sophisticated hardware is becoming, I expect this to be the approach for larger projects -- esp IoT -- in the future. It's just impractical to deploy a single "big computer" to manage the tens of thousands of I/Os you might encounter in a facility!]
Reply to
Don Y
[snip]

There must have been a lot of progress since then. I booted an Ubuntu CD about 5 years later, and was looking at a web page a few seconds after booting (it helped that I already used Firefox). It was still a while before I started using Linux regularly (and not Ubuntu; I never liked Unity).

For most of those problems, I found help on the web.

I think the documentation that came with my first Linux explained about the root thing. There's a way to fix it if you want to use the old way.

[snip]
Reply to
Mark Lloyd

My father picked Beta because of the hi-fi audio. He didn't know VHS had the same thing.

Both 8086 & 8088 are 16-bit (data bus) internally. As you probably know, they had 20-bit address buses.

BTW, I think I did see one PC clone that used an 8086.

Also, there were 80186 & 80188 processors that had a few new features.

[snip]
Reply to
Mark Lloyd

The advantage of having the sources is that you can examine the "programs" that access these files to see what they expect to encounter. *That* is the definitive reference (not the man(1) page)

The bigger problem with any of the (free) Eunices is that there are lots of quirks that a casual user would not appreciate -- and could easily break. Too much of the "UN*X experience" RELIES on "experience". You can't just throw things together and hope it does what you want efficiently *or* effectively.

(Windows and its installers hide a lot of this from the casual Windows user)

The same is true of Windows. That's why there are patches released each week. (and that doesn't even address the applications/userland).

What "tips and advice" COULD you give someone -- without knowing how they will be using their machine as well as their goals?

What tips and advice would you give a car owner -- without knowing how they will be using their vehicle and what they expect/value from it?

Windows tries to be all things to all people (by being nothing to no one). The Eunices primarily target technical users. That audience gets a great deal of value from these offerings. For folks whose expectations end at the window manager, disappointment abounds! Trying to "back port" usability into a "product" that was designed for technical users is like trying to remove the hump from the camel...

Reply to
Don Y
[snip]

Some things are much better on Linux.

I have a Canon camera I bought in 2008. It has a USB port, but doesn't do anything on a Windows system unless I install the complex, bloated software which came with it. The software didn't work right, so I just used an SD card reader to transfer images. Laptops usually have those built in.

When I got Linux I just plugged the camera in and the camera appears as a drive. Just what I want.

[snip]
Reply to
Mark Lloyd

That will vary based on the distro you're using. And, the versions of the individual "programs" that are involved.

E.g., configuring BIND9 is significantly different than BIND8. The general concepts are the same (somewhat) but the settings in named.conf(5) are very different and have added functionality.

A persistent gripe, for me, is that systems are not "shipped" with fully populated configuration files. I.e., if the default setting for <some-option> is <some-value>, then it can't hurt to include: <some-option> = <some-value> in the configuration file. Yeah, it might slow down the startup of the program by a few milliseconds as another option has to be parsed. But, it clearly documents what *can* be set and what those settings actually *are* -- no need to chase down a (potentially out-of-date) man(1) page to figure it out!
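E.g., a fully populated file might look something like this -- the option names and "defaults" below are purely illustrative, modeled on BIND's syntax, not taken from any particular release:

```
options {
    directory "/var/named";      // the compiled-in default, stated anyway
    recursion yes;               // default -- but now *documented* in place
    listen-on port 53 { any; };  // default, spelled out where you can see it
    // ...every other settable option would likewise appear here, set to
    // its default, so the file doubles as its own reference manual
};
```

Then a diff between the shipped file and your edited one shows exactly (and only) what you've deviated from.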

[It's taken me the better part of a week to create named.conf(5) for my new system -- despite the fact that my old system was successfully providing that service up until the moment I "upgraded"]

And, there is seldom an explanation for why changes were made. At least, not in any comprehensive document ("Gee, what happened to the 'tty' UID? Why is it, apparently, no longer needed?")

[A good part of my building new systems is going through this painful -- though trivial -- exercise. Hence my inclination not to "fix something" unless it's truly BROKE!]

OTOH, Windows just makes changes and doesn't even let you know there WERE changes!

Reply to
Don Y

All you need is the .inf file that declares the Vendor ID (VID) and Product ID (PID) values for that specific "device". You can extract them from your Linux system (lsusb(8)) and fabricate a suitable .INF file (or, hack the registry manually).

Thereafter, when that VID/PID is encountered, Windows will automatically install the device as a "mass storage (disk)" device.
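A rough sketch of what such an .INF might contain -- the section layout follows the usual INF conventions, but the VID/PID and strings are placeholders you'd substitute from lsusb(8), and this is NOT a tested, signed driver package:

```
; Hypothetical .INF mapping one VID/PID to the stock USB
; mass-storage driver.  Substitute the VID/PID from lsusb(8).
[Version]
Signature = "$Windows NT$"
Class     = USB
Provider  = %Provider%

[Manufacturer]
%Provider% = DeviceList

[DeviceList]
%DeviceDesc% = USBSTOR_BULK, USB\VID_XXXX&PID_XXXX  ; <-- your device here

[Strings]
Provider   = "Local"
DeviceDesc = "Camera as mass storage"
```

(USBSTOR_BULK is the install section Windows' own usbstor.inf uses for generic bulk-only mass-storage devices.)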

[I've done this in the past with PMP's that insisted on being accessed via some third party app]

Where the free Eunices usually excel is with network and other services (that are usually an afterthought to MS -- or, that MS wants to "reinvent" in a proprietary way)

Reply to
Don Y

I've had 3 Canon cameras on Slackware Linux. Sometimes gotta use a card reader. Other times, jes plug into camera and use gphoto2 (digikam GUI). I've never used Canon's software, mainly cuz Canon is notorious amongst Linux users as being painfully anti-Linux, so they never provide their software ported to that OS. Gotta find another way.

Still, there's usually some way. ;)

nb

Reply to
notbob

What I find amusing in the Linux camp is their expectation that they are somehow ENTITLED to having access to this proprietary/trade secret information that is an inherent part of the vendor's IP.

If a vendor wants to share its software with a *portion* of its user base, then more power to them. But, thinking that the user SHOULD share it is awfully presumptuous.

Does your employer think you should (freely) SHARE your expertise with him? Why do you insist on being paid for it? Think of the Greater Good that would come from your sharing it freely -- your employer could then share its PRODUCTS more freely -- everyone wins! (not! :> )

If it's important enough to you, you'll choose a product from another vendor; one whose products "play nicely" with your tools. Of course, if that product doesn't have the features/quality that you want... The *masses* limit your choices to what THEY want (i.e., are willing to pay for).
Reply to
Don Y

To anyone who might know the answer to my question...

After reading this thread about MS trying to force installation of Windows 10, I turned off my automatic windows update yesterday, and thumbed through the past updates out of curiosity. I noticed multiple times in the fall of 2015 where MS tried to download and install Win10 and it said all attempts have failed. I'm glad they failed, and it may be because I changed some setting a while back (that I can't remember specifically what it was).

My question is, since I've now turned off auto updates, how often should I check for updates that I may actually need to allow to be downloaded, and will it say "Win10" in the list so I know to reject that d/l?

Reply to
Muggles

You have me blocked, so this will probably not get through. The 2nd Tuesday of every month is the regular patch day; you will miss emergency out-of-band patches, but they are rare.
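For what it's worth, the "second Tuesday" is easy to compute if you want a reminder -- a quick sketch using only Python's standard library:

```python
# Find "Patch Tuesday" (second Tuesday) for any given month.
import calendar

def patch_tuesday(year, month):
    """Day-of-month of the second Tuesday of the given month."""
    # calendar.weekday: Monday=0 ... Sunday=6; Tuesday is 1
    first_weekday = calendar.weekday(year, month, 1)
    first_tuesday = 1 + (1 - first_weekday) % 7
    return first_tuesday + 7

print(patch_tuesday(2016, 3))   # 8 -- March 2016's patch day was the 8th
```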

Reply to
FrozenNorth

| My question is, since I've now turned off auto updates, how often should
| I check for updates that I may actually need to allow to be downloaded,
| and will it say "Win10" in the list so I know to reject that d/l?

How often is up to you. MS is specifically saying that they're not going to detail patches anymore, so online gossip may be your only guide.

I never allow WU. With XP I stick with SP3. With Win7 I stick with SP1. The vast majority of security fixes are not really for Windows but rather for software: Internet Explorer, MS Office, etc.

There are people who will disagree and think it's very important to get all updates. I regard that as a blind "new is better" approach. However you decide to deal with it yourself, it's likely that you'll need to oversee updates very closely if you want to avoid spyware, Windows 10, and whatever else they decide to foist on Win7.

Reply to
Mayayana

FWIW, I've seen the term "patch Tuesday" for a long time, but my automatic updates always come on Thursdays.

Reply to
RonNNN

Why are you asking what M$ wants? You should be asking, "When do I get an operating system that updates precisely when I tell it to update?"

BTW, the answer is: Linux. ;)

nb

Reply to
notbob

Your machine will check at its own pace; if you force a check on the Tuesday, you will get them then.

Reply to
FrozenNorth

Tuesday is when the newest set of patches are RELEASED.

There are millions of Windows machines out there. If they all opted to contact MS's update servers on Tuesday, no one would ever get any updates!

Reply to
Don Y

HomeOwnersHub website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.