OT Win7 updates

On 05/19/2016 09:00 PM, Don Y wrote:

I still have a Captain Zilog t-shirt around someplace. The Boston IEEE had a 68000 seminar that I went to but I was rooting for the Z8000. The 8088 was a severe disappointment. I had done bank switching with a Z80 and figured the 8088 just moved a little external hardware on board.

I only worked with a TI chip on one project. Its big selling point was that rad-hard parts were available. Other than that, it didn't have much to recommend it.
On 5/19/2016 10:05 PM, rbowman wrote:

I didn't like the dichotomy of A-registers and D-registers in the 68k. Why the hell can't you do anything WITH anything? The same was true of the Intel parts (dating back to the 8080). It was always annoying to have to juggle registers to get *what* you wanted, *where* you wanted it! (back then, everything was ASM so this was very "manual").
The Z180 was... "kinda" an improvement. But, the way the "MMU" operated was hokey. I would love to understand the motivation for that choice! OTOH, I designed some big systems rather easily under Z180's by leveraging the use of the bank area for things like different libraries (with "far" calls that would allow me to access code/data anywhere in physical memory -- at a sizeable performance penalty! :< )
The 16032 was a breath of fresh air (on paper) as it eliminated a lot of the "arbitrary" constraints that these other processors had imposed.

I think the 8085 is still sold in a rad-hard variety. Likewise, the 6502. (I can't remember if the 1802 ever made that distinction)
It was very interesting to see the different approaches that were taken at the time (when there truly WAS "variety"). And, some of the parallels in the designers' personalities, corporate mentality, etc.
I think many had their own ideas as to how their devices were going to be used. And, they were often misguided. Or, premature (i.e., thinking that the devices would be used in much the same way that mainframes were being applied, only on a much smaller scale).
When you'd show concept drawings of how you *might* apply one of their devices, they'd be puzzled: "why are you using it like that? you could do this, instead!" "yes, but your way costs $X, mine costs $Y -- Y<X!"
[I built a memory array in one product that was expandable in one-bit-wide slices. The software would determine the mix of "slices" at POST, along with their respective sizes. So, the application just dealt with a "memory capacity" and didn't have to worry about how it was organized. This enabled us to offer different memory capacities (memory being expensive, "back in the day") for small incremental costs -- instead of just offering two sizes: small (affordable) and large (expensive).]
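A rough C sketch of that POST-time sizing idea (everything about the hardware here -- the base address, decode window, probe step, and the notion that each data-bus bit maps to one slice -- is an assumption invented for illustration, not the actual design):

    #include <stdint.h>
    #include <stddef.h>

    #define ARRAY_BASE   ((volatile uint8_t *)0x8000u)  /* assumed array window   */
    #define ARRAY_SPAN   0x4000u                        /* assumed decode span    */
    #define PROBE_STEP   0x0400u                        /* assumed depth increment */

    static size_t total_capacity_bytes;   /* all the application ever sees */

    /* Bits that can be written high AND low at this address are backed by RAM. */
    static uint8_t populated_lanes(volatile uint8_t *p)
    {
        uint8_t saved = *p, lanes;

        *p = 0xFF;  lanes  = *p;
        *p = 0x00;  lanes &= (uint8_t)~(*p);
        *p = saved;
        return lanes;
    }

    void post_size_memory(void)
    {
        size_t total_bits = 0;

        for (size_t off = 0; off < ARRAY_SPAN; off += PROBE_STEP) {
            uint8_t lanes = populated_lanes(ARRAY_BASE + off);

            /* Each populated one-bit-wide slice backs PROBE_STEP locations. */
            for (unsigned b = 0; b < 8; b++)
                if (lanes & (1u << b))
                    total_bits += PROBE_STEP;
        }
        total_capacity_bytes = total_bits / 8;
    }

The application then only ever consults total_capacity_bytes, never the slice layout.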
On Thu, 19 May 2016 22:58:55 -0700, Don Y wrote:

The 1st system I managed was a Varian 73. It had the quite innovative feature of 3 separate registers for the upper memory maps in use: one for the program counter, one for the fetch address, and one for the store address.
Thus, if the map registers were set up ahead of time, loading or storing the accumulator from or to memory at the same low address actually copied data between mapped memory locations *without* the need to change map registers.
On 5/19/2016 11:12 PM, Mike Duffy wrote:

In the early days of video gaming (arcade), address spaces were sparse (e.g., 64KB). And, CPUs were slow (e.g., 1MHz bus). Couple this with the need to update large areas of the screen "atomically" (staying out of the way of the "beam", as fighting it produces visual artifacts).
So, you didn't want to have to "bank" the memory. Yet, might need ~48KB just for the screen.
Thus, you put the video memory *under* the program memory and relied on the fact that you never wanted to WRITE to program memory (ROM-based) and could thus deflect the write cycle to the display memory.
Unfortunately, this meant you couldn't READ the display memory -- because it was the program memory that appeared in its place!
In other products (other industries), you'd play similar games. E.g., using an INput instruction to read the COLUMNS of a keypad switch array -- while driving the ROWS with a particular pattern to allow for sequential scanning of key closures in the matrix. I.e., you could let the matrix occupy a large portion of the address space because it was not the *program/data* address space that you were wasting on it!
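A hedged sketch of that trick in C, with an invented base address and a made-up 4x8 matrix -- the point being that the row-select pattern rides on the address bus while the columns come back on the data bus, so the keypad "spends" address space instead of program/data space:

    #include <stdint.h>

    #define KEYPAD_BASE  0xC000u   /* assumed: decoder only looks at the high address bits */
    #define NUM_ROWS     4
    #define NUM_COLS     8

    /* Low address bits form a one-hot row select; the data bus returns the columns. */
    static inline uint8_t read_row(unsigned row)
    {
        volatile uint8_t *port =
            (volatile uint8_t *)(uintptr_t)(KEYPAD_BASE | (1u << row));
        return *port;              /* a set bit marks a closed key in that row */
    }

    int scan_keypad(void)          /* returns a key number, or -1 if none pressed */
    {
        for (unsigned r = 0; r < NUM_ROWS; r++) {
            uint8_t cols = read_row(r);
            for (unsigned c = 0; c < NUM_COLS; c++)
                if (cols & (1u << c))
                    return (int)(r * NUM_COLS + c);
        }
        return -1;
    }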
I designed a product that used a single chip MCU as a "supervisory processor". Its role was to monitor the main (larger) processor and reset it (and the system) if the processor looked like it was misbehaving (a "watchdog" of sorts -- one with smarts!). But, by carefully interpreting bus control signals, I could also arrange for it to be a valid peripheral, accessible from the main CPU.
(This allowed it and the main CPU to reassure each other that everything was proceeding properly. If not, the MCU would force the CPU -- and system -- into a "safe" state)
With that capability, it was easy to enhance the interface so that the "boot ROM" for the main CPU was actually *inside* the single chip MCU -- just a portion of the MCU's memory set aside for that purpose! The main CPU would go to fetch its first instruction and I would redirect that memory access to the MCU -- which would look up the instruction in its internal ROM and feed it to the main CPU. The main CPU could then slowly build up a program image in main memory (RAM) based on the instructions in this "boot ROM".
This would continue until the main CPU would access the MCU *as a peripheral* and turn off the "boot ROM" flag. Thereafter, code would execute out of the RAM and the MCU would resort to its nominal functionality in the system.
One of my first products used information about bus cycles to allow me to discriminate between one of five "logically equivalent" ways that the contents of location 0x0000 were being accessed:
- the first opcode fetched after power-on reset
- the opcode fetched as the result of an "RST 0" instruction
- "interrupt 0" (almost the same as an "RST 0" but generated by hardware)
- a "soft reset" (JMP 0)
- the data read from location 0x0000
A stranger looking at the code would wonder how the same reference could be interpreted differently.
Nowadays, everything is plain vanilla, by comparison. You think nothing of setting aside a megabyte of address space to access a keypad. Another megabyte to access 16KB of RAM. etc.
Boring.
On 05/20/2016 12:55 AM, Don Y wrote:

When I interviewed for my current job one of the questions was prefaced with 'assume you have unlimited memory'. I flashed back to an 8049 project when I would have killed for a byte.
I enjoyed working with microcontrollers. You had a very good idea of what the thing was up to at all times, how many cycles each instruction took, and what was happening with the peripherals.
On 5/20/2016 7:09 AM, rbowman wrote:

In my early projects, we would count *bits* and make maps of which bits of which bytes were being used by which routines (at which times): "Ah, b3 of Foo need not be preserved at this time! So, I can use that as a flag..." (you never wasted more than a bit on a boolean! When you only had a few hundred bytes TOTAL to work with -- including the stack, etc.)
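In C it might look something like the following (names and bit assignments invented; back then it would have been assembler equates and a lot of discipline):

    #include <stdint.h>

    static uint8_t foo_state;          /* one precious byte of RAM */

    #define FOO_BUSY   (1u << 0)       /* b0: Foo has work pending          */
    #define FOO_RETRY  (1u << 1)       /* b1: retry requested               */
    /* b2 belongs to the timer routine -- hands off while it is armed!      */
    #define SCAN_FLAG  (1u << 3)       /* b3: unused while Foo is idle, so  */
                                       /*     the scanner borrows it        */

    #define SET(reg, bit)    ((reg) |=  (uint8_t)(bit))
    #define CLEAR(reg, bit)  ((reg) &= (uint8_t)~(bit))
    #define TEST(reg, bit)   (((reg) & (bit)) != 0)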
[You would annotate each subroutine with a note as to its maximum stack penetration. So, you could figure out exactly how many bytes of stack were required, worst case, for the design. Can't afford to waste bytes that will never be accessed. Nor can you tweak the size of the stack after you've sold the product!]
I can recall being a few hundred bytes "into" needing another ($50) EPROM on one project. We went through the code and tabulated how many times each subroutine was invoked. Then, mapped the most commonly referenced (not "frequently" but "commonly") routines to shorter calling sequences (lengthening the time it took to invoke them but saving a byte on each invocation instance). We were thus able to shrink the size of the code.
And prayed we would never have to make any changes that ADDED to the code (yeah, right... when has code ever SHRUNK??)

That's considerably harder, nowadays. Pipelines are deeper, there's more magic between the CPU and the memory bus, etc. I.e., with support for newer memory technologies (DDR*), it's hard to predict how long a memory access will take as the memory interface is pulling in chunks of memory instead of individual locations.
Plus, compilers have advanced to the point where you can't even be sure the code will execute as you wrote it! All of the mealy-mouth words that you thought you could safely ignore in the language definition suddenly represent real possibilities that the compiler writer can "legally" exploit.
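A small example of the sort of "legal" exploitation meant here, assuming a typical modern C compiler at a normal optimization level (the outcome genuinely varies by compiler and flags):

    #include <limits.h>
    #include <stdio.h>

    int wraps_around(int x)
    {
        /* The programmer "knows" INT_MAX + 1 wraps negative on two's-complement
           hardware... but signed overflow is undefined, so the optimizer may
           simply fold this expression to 1 (true). */
        return x + 1 > x;
    }

    int main(void)
    {
        /* May print 1 even though the wrapped value is "obviously" negative. */
        printf("%d\n", wraps_around(INT_MAX));
        return 0;
    }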
And, hardware is so cheap and tiny that you can now "afford" to do things in ways that previously would have been "mainframe class" implementations! E.g., my current project started out with lots of dumb "motes" that just allowed the various "interfaces" to be distributed around the house (instead of running thousands of signals back to "one big computer", I could process a dozen on a mote and ship the *data* they represented off to that "big computer").
But, you still need the interface logic, signal conditioning, some sort of MCU, network interface (to talk to the "big CPU"), local power supply, etc. At *each* of those "motes".
And, as the application grows, the "big computer" has to grow significantly; applying the design to a household with four occupants and to a commercial establishment with 400 employees are vastly different problems -- the "motes" design doesn't scale well because there is no "processing" done in the motes; they just perform data acquisition! So, the "big computer" has to get vastly larger, faster, etc. (imagine trying to carry on 400 different conversations simultaneously in addition to sensing and controlling all of the I/O's associated with that facility and those 400 employees!)
But, if you put some "capabilities" into those motes, then you can distribute the PROCESSING, as well! So, the "big computer" ends up being distributed instead of centralized. I can put a quarter of a gigabyte of memory and a 500MHz (32b) processor on each node for just a wee bit more than I could put a dumb microcontroller (remember, the microcontroller still needs to be capable of talking on the network, still needs power, signal conditioning, etc.).
Well, if you have those kinds of resources available "locally", you can put in a richer execution environment (e.g., demand paged memory, capabilities based protocols, etc.) that's more supportive of migrating parts of the application into that node! AS IF it was a piece of some "big computer".
And, as the deployment grows, the increased number of nodes that are required (to address the larger facility and increased number of users) automagically grows the (distributed) "big computer".
But, that means the nodes are now executing in environments where cache performance is an issue (not pertinent to a dumb microcontroller), where you can't infer how long something will take to execute just by "casual inspection", etc. And, as they are now parts of a bigger whole, you have to address the sharing that would be expected in that larger "entity"; how do I share CPU cycles if part of the CPU is "way over there"? how do I share memory if the only "free" memory is located in another room?
It's sort of the *best* of both worlds -- and the *worst* of both worlds!
[Given how cheap and sophisticated hardware is becoming, I expect this to be the approach for larger projects -- esp IoT -- in the future. It's just impractical to deploy a single "big computer" to manage the tens of thousands of I/Os you might encounter in a facility!]
On 05/20/2016 11:41 AM, Don Y wrote:

We still have a few bitmaps in the data structures. Then there are the shorts. Who would ever need more than 32767 objects? That was fun when we found out. Going to an unsigned short bought a little more breathing space. Then there are all the time_t variables. I don't plan on being around when that hits the fan.
It's been a nibble at a time along the way. Someone had the brilliant idea to look at the free disk space and exit if it looked too small. Life was grand until the first 4TB platter.
I get a chuckle when I see the real old stuff with variables declared 'register'. At least the compiler doesn't make snotty remarks when it finds them or K&R syntax.
On 5/20/2016 6:22 PM, rbowman wrote:

These weren't specifically bit fields. Rather, more like unions of random types. I.e., when Foo was running, it might be an 8 bit int. When baz was running, it might be the most significant byte of a 24 bit float (!). When bar was running, it might be two three bit fields and two one bit fields -- used by bar and cosmo at the same time.
Each "declared byte" of RAM had a paragraph commentary that explained how the byte was used in different scenarios. With three of us working on the codebase at the same time (one set of expensive development tools), it took a fair bit of planning to ensure no two of us would appropriate a particular resource at the same execution time.

Until you realize that you were counting on "-1" to signal errors...
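A sketch of that trap (assuming 16-bit shorts and 32-bit ints; the function and constant names are made up):

    #include <stdio.h>

    #define NOT_FOUND (-1)

    unsigned short find_object(int wanted)   /* was: short find_object(...) */
    {
        (void)wanted;
        return NOT_FOUND;                    /* quietly stored as 65535 */
    }

    int main(void)
    {
        unsigned short idx = find_object(42);

        /* idx promotes to int 65535 here, so the old error test never fires: */
        if (idx == NOT_FOUND)
            puts("not found");
        else
            puts("treated as a perfectly valid index of 65535");
        return 0;
    }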

In my case, time is measured in nanoseconds (because some of my control loops operate in the microseconds time frame) so I can represent ~300 year intervals.
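(Assuming that means a signed 64-bit count of nanoseconds, the arithmetic checks out:)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const double ns_per_year = 1e9 * 60 * 60 * 24 * 365.25;
        printf("%.1f years\n", (double)INT64_MAX / ns_per_year);  /* ~292.3 */
        return 0;
    }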
The harder problem is one of human expectations: It's 1830. If I say I have an appointment in 2 hours, how do I implement that "reminder"? Do I "do the math" and realize that it will be at 2030 and schedule an event for that time?
If so, what happens if something (me?) changes the current time of day so it now is 1820. My 2030 appointment is now 2:10 in the future!
Conversely, if I say I have an appointment at 2030, is it safe to set a 2 hour timer? If someone changes the current time, will I end up going to my appointment early/late?
Of course, you can make an implementation choice -- or even support both approaches! But, how do you come up with a language that allows the user to indicate which of these implementations (timer vs absolute time) a particular event schedule should use?
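For what it's worth, POSIX exposes exactly this relative-vs-absolute split; a minimal sketch of the distinction (standard calls only, not the scheduling interface described above):

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    /* "Remind me in 2 hours": a relative delay against CLOCK_MONOTONIC,
       which never jumps, so resetting the wall clock cannot move it. */
    void remind_in_two_hours(void)
    {
        struct timespec delay = { .tv_sec = 2 * 60 * 60, .tv_nsec = 0 };
        clock_nanosleep(CLOCK_MONOTONIC, 0, &delay, NULL);
    }

    /* "Remind me at 20:30": an absolute deadline against the wall clock,
       which *does* move if the time of day is changed -- the behavior you
       want for an appointment pinned to a stated clock time. */
    void remind_at(time_t appointment)
    {
        struct timespec when = { .tv_sec = appointment, .tv_nsec = 0 };
        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &when, NULL);
    }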

Exactly. How can people NOT foresee this stuff? It's like being surprised that XMAS falls on the 25th this year!
I designed my scripting language in large part to insulate the users from these sorts of issues. How do you explain overflow to a housewife? Or, cancellation to a plumber? They want to just deal with ideal operators and not have to worry about idiosyncrasies of an implementation!
(sort of like trying to explain why sqrt(3)*sqrt(3) != 3.000)
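Concretely, on a typical IEEE-754 double implementation:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = sqrt(3.0) * sqrt(3.0);
        printf("%.17g\n", x);         /* typically 2.9999999999999996 */
        printf("%d\n", x == 3.0);     /* typically 0 (i.e., not equal) */
        return 0;
    }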

Yeah, I am frequently arguing with folks who think they can code smarter than the compiler's optimizer. Write what you want the code to *do*. Let the compiler figure out how to do it! Concentrate on finding good algorithms, not trying to outsmart the compiler!
(NatSemi's GNX tools were amazingly good, considering the time frame! The quality of the optimization was stunning. Of course, the 32K was a highly orthogonal device so it was a lot easier to design an optimizer that didn't have to "color" registers, etc.)
It's sad when you see folks employing cryptic syntax in the (mis)belief that it will somehow result in "faster" code! Esp when this greatly increases the chance that they'll write something that is NOT what they intended!
<shrug> "Job security"?
On Thursday, May 19, 2016 at 11:01:05 PM UTC-4, Don Y wrote:

From what I see, the memory management and paging chips there, the Z8010, Z8015, were introduced by Zilog in the early 1980s. That puts them in the timeframe of the 286, not the 8086, which was introduced in 1978. Your claim was relevant to the 8086, and years matter. Just a few years later, the 386 was out with memory management and protection onboard, where it really needs to be to work effectively, and the rest is history.
On 05/20/2016 08:23 AM, trader_4 wrote:

Yes, they do. Get them right.
On Friday, May 20, 2016 at 9:40:20 PM UTC-4, rbowman wrote:

I have them right. Show us the date the Z8000 MMU and paging chips were introduced. Those specific chips, not the CPU itself, which was used without it. As I said, from what I see they came out in the early 80s, about the same time as the 286. You could try to build a half-assed multitasking system with paging using even an 8086. Even the 286 wasn't really up to the task, for a number of reasons, including overall performance. Don makes it sound like in 1978 there were microprocessor solutions that allowed you to build a robust, multitasking system with paging. Just a few years matter in an industry moving at speed.
On Thursday, May 19, 2016 at 10:00:42 PM UTC-4, rbowman wrote:

It looks like those Z8000 add-on chips for memory management and paging became available in the early 1980s. That puts it not in the era of the 8086, which was the time period of Don's claim, but much later. By that time the 286 with built-in memory management was out.
On 05/20/2016 08:11 AM, trader_4 wrote:

Time line:
Intel 8086: April 1978
Zilog Z8000 series: early 1979
Motorola MC68000: September 1979
IBM PC: August 1981
Intel 80286: February 1982
Intel 80386: October 1985
On 5/20/2016 6:40 PM, rbowman wrote:

The 8086 had NO CONCEPT OF MEMORY MANAGEMENT. It had the concept of an EXTERNAL floating point unit -- so boards were built with provisions for external FPU's (even though not sold with those installed).
The 68000 had mechanisms to rerun a bus cycle -- essential for a demand paged MMU. So, you could make provisions in your hardware and software to add support for the PMMU when it became available.
The same was true of the 32k (you could actually design a system that would "short out" the MMU if it was not present and "switch it in" if installed -- no settings or jumpers necessary!)
In Eggebrecht's book (architect and team leader for the IBM PC), he explained his reasons for selecting the 8086 family over other alternatives available at the time of the design.
The choices were essentially:
- 6502
- z80
- z8000
- 8086
- 68k
- "proprietary" chip (i.e., an "IBM special")
(Amusing that he didn't even consider the DEC parts -- perhaps fear that DEC might not be able to produce in the volumes they envisioned? Or, maybe just "out-of-the-question" to patronize a competitor?)
The 6502 and z80 were ruled out because they feared being seen as "followers" in the industry (instead of LEADERS) -- the z80 and 6502 (apple!) having already carved out markets.
The z8000 was too different from the z80 so no simple migration path from the HUGE z80 code base to that architecture. (amusing considering how hard it is/was to port code *to* the PC)
The proprietary solution was ruled out because the only tools available ran on IBM mainframes ("Buy one of our PC's! Then, buy one of our mainframes so you can write code for it!!")
The 68k saw a lot of attention. It was recognized as a much nicer architecture (more like an '11 -- OhMiGosh!). The reasons against adopting it boiled down to the fact that it had a 16b bus. They were more interested in pinching pennies than designing a real computer!
I.e., "We'll fix it in version 2"
In his words: "In summary the 8088 was selected because it allowed the lowest cost implementation of an architecture that provided a migration path to a larger address space and higher performance implementations. Because it was a unique choice relative to competitive system implementations, IBM could be viewed as a leader, rather than a follower. It had a feasible software migration path that allowed access to the large base of existing 8080 software. The 8088 was a comfortable solution for IBM. Was it the best processor architecture available at the time? Probably not, but history seems to have been kind to the decision."
I think he underestimates the "kindness" aspect! History has been TOLERANT of the decision. How many millions (?) of man years of discarded software have come and gone because of the endless contortions as Intel keeps trying to make an antique architecture work in a modern world! (each contortion causing software to become obsolete or extraordinary measures taken to allow it to live on for a short while longer...)
On 05/20/2016 10:49 PM, Don Y wrote:

At least we don't have to compile for 5 different memory models anymore.
A company I worked for had a 5110 with the PALM processor:
https://en.wikipedia.org/wiki/IBM_PALM_processor
It had nothing to do with the 5150 (IBM PC). My take was IBM fully expected the PC to be an epic fail and sent their expendables to Boca Raton to scrounge up parts and not disturb the adults. The damn thing survived so they had to back up and try again with the PCjr to prove there was no market for home computers.
On 5/20/2016 10:38 PM, rbowman wrote:

Now it's worse! You have to build for web deployment, smart phone, PC, Mac, etc.
In my case, I have to assume the UI can be haptic, visual or aural (or combinations of the above). I drew the line at English speaking, though -- way too much to learn to address those language and cultural issues! :<

Hmmm... interesting.

There isn't! Just "entertainment systems"! :-(
(I wonder how many people actually use computers for "computer stuff" vs. entertainment-related activities)
On 05/21/2016 03:08 PM, Don Y wrote:

Hopefully the solution we're proposing will be viable for a while. C#, WPF, and Xamarin.Forms for the mobile devices. Using an MVVM architecture makes it fairly easy to abstract the business logic from the presentation layer, and Xamarin handles the Android/iWhatever/Windows Phone end. Macs aren't a problem. PSAPs just don't do Apple desktops, although iPads and iPhones are popular.

I read a set of requirements recently that, when you parse the fine print, suggested the system had to be usable by deaf, dumb, and blind dispatchers. We've done Spanish, which must be a hoot to Spanish speakers. Google Translate only gets you so far. Native Spanish speakers don't exactly grow on trees here, particularly ones that know the right idiom. One man's camión is another man's autobus.
On 5/21/2016 3:40 PM, rbowman wrote:

You can't support these things after-the-fact. You have to plan on them in the initial design.
E.g., you'd design a microwave oven differently if you knew it had to be usable by blind/deaf folks and not just sighted ones. There's nothing inherent in the concept of a microwave oven that precludes addressing these users. It's just been simpler for folks to resort to displays and flat membrane keypads instead of indicators and controls that can be identified without vision.
(given the diabetes epidemic in this country, I suspect a lot of folks are going to find blindness "late in life" to be a real challenge! No time to learn how to adapt to the loss as you would if you'd grown up with it)
The keypad for the initial Reading Machine had no labels on it. Folks would walk up and ask, "But how do you know which key is which?" Smile politely and tell them to close their eyes and repeat the question...
(there was a button called "nominator" that was in an easily accessible location. Press it and it announces the functionality of the next button you press)
On 05/21/2016 05:15 PM, Don Y wrote:

We're already getting requests for the Senior Citizen GUI style. Dispatchers aren't getting any younger either.
On 5/21/2016 8:01 PM, rbowman wrote:

Good luck! Hearing loss, poor visual acuity (if not outright blindness), more easily confused, essential/Parkinsonian tremor, etc. At times, I think it would be easier to support all the languages of the world!