OT: Win7 updates

I think he means mice that can do things other than being a set of buttons and an XY displacement source. E.g., press this button to get finer-grained positioning; press this other button to click-lock; etc.

But it's implemented inconsistently in X! I.e., try copying from one xterm to another -- paying attention to how whitespace is treated!

One of the PowerToys adds multiple/virtual desktops. And, many of the multihead monitor cards drag in that sort of support as well. I get annoyed because Windows tries to be smarter than me: I configure three monitors with the center one as my main desktop (the desktop extends off to the left and right onto the other monitors). The thinking is that if I only want to run with one monitor, I can use the center monitor for that and power off the other two (particularly useful in the summer months!!)

But, powering down a monitor causes the desktop to be reshuffled -- anything that was on the powered-down monitor gets moved onto the remaining monitors (no doubt as an AID for me!). So, I have to reshuffle my desktop if I don't want to use all three monitors. (I would be content to leave whatever was on the powered-down monitor where it is and just not access it! Or, access it via a miniature version of the virtual screen -- dragging the tiny outline of a window onto the center monitor!)

Reply to
Don Y

I bought MS Office because a client could not see images embedded in docs from OpenOffice or LibreOffice. I figured for maybe $125 it was not worth the aggravation of trying to work around, and besides, it's tax deductible as a business expense.

It was extremely irritating, as I bought it from Amazon, which hooks you up to MS to get it, and it did not load properly. I had to call MS, turn the machine over to them and have them do it. They got it working fine but made me generate a new password to get on my computer, and they changed the opening screen. This was a year or two ago, and I recall the other option was annual renting of MS Office. Personally I have not seen any improvement over when I had it before, I don't like the cloud as the first option for saving docs, and while OpenOffice and LibreOffice can read all their stuff, MS Office cannot even read its own very old documents.

Reply to
Frank

So much for "compatible", eh? :>

I've not owned a copy of Office since the 4.2 days (W3.1? W95?). It was the perfect example of MS thinking they knew better than I how to solve *my* problem. At the time, I purchased Ventura Publisher to prepare my documents (including business correspondence!) and decided DTP was the right way to go for these authoring needs; Office (Word) was way too bloated for small documents and pretty inept for large ones (e.g., 500+ pages).

This is known as "utter contempt for customers". :> Let's not make this easy OR pleasant for the customer. Let's, instead, make it easy for *us*!

You don't even have to add "very" when talking about "old"! MS has never been particularly good at supporting its own formats! You'd have a better chance with a third party application "importing" the document -- and possibly saving it in a NEW Office format!

More examples of having to rebuy, relearn, redo stuff that you already thought was BEHIND you!

FrameMaker (DTP) has had a consistent interchange language for years. I can read version 5 documents in a version 13 program. And, if I choose to save in that interchange language/format, I can freely move changes made in the version 13 program BACK to the version 5 one.

The same sort of thing is true with AutoCAD.

These folks have made a commitment to this ability so you don't mind the thousands of dollars involved (well, you mind it a lot less than you would if it was MS abandonware!)

Reply to
Don Y

formatting link

Unfortunately, IBM was in a pissing contest with Exxon and Exxon bought Zilog in 1980.

Even for the Intel world, the i86 was supposed to be a temporary measure while they developed the i432. Intel has come up with some gems like the Itanic. They couldn't even kill the i86 with that.

The 68000 was another strong player although the Sun 1 had a homegrown MMU.

Reply to
rbowman

Some of our clients run 3 or 4 monitors, a great source of heartburn. The GUIs are actually using an XServer that gets confused easily.

Reply to
rbowman

Like the Franklin Ace? It only took Apple two months to sue Franklin for copyright infringement.

formatting link

There were a couple of years when Apple needed cash and licensed clones, but that didn't last. I'm always amused how the Apple fanbois tend toward the progressive end of the spectrum and pay premiums for some of the most locked down electronics on the planet.

Reply to
rbowman

If you can't do it with gVim, it don't need doing.

Reply to
rbowman

Amusing how misinformed folks can be. But, I guess if you don't have buying clout, your view of the world is a lot more "vanilla" than folks "on the inside". (when they fly the chip designers out to talk to you, you know you've got clout! :> )

The Z8000 was contemporary with the 8086. The 68000 similarly so. And, if you have IBM's purchasing clout, you'd be amazed at what you could "acquire" ahead of formal release schedules (I was sporting an MC68000 die "tie tack" in that time frame; designed a 68K system for *release* in 1980 -- though 10MHz parts were still pricey).

DEC also had the T11 (or perhaps it was the F11?) but in a very expensive package (4 cavities on a large ceramic carrier -- it made the ceramic 68K look *economical* by comparison!) OTOH, hard to imagine IBM and DEC crawling into bed together.

TI was pitching the 99000 to us but the "workspace" concept was too scary, given memory prices (and speeds) at the time. Would have made for some interesting OS implementations, though!

The 16032 (later 32016) was the finest of the affordable devices in that time frame. When you considered the cost of adding floating point hardware support, it was a lead-pipe cinch! But, NS has always had a lousy track record with CPUs (SC/MP, anyone?)

The 432 was stillborn. Almost as wonky an idea as the 99000. Given how much faster CPUs have become compared to memory speed increases, the 99000 would have aged poorly in that regard.

Z280 would have been the best "bang for buck" processor but Zilog could never recreate the magic of the Z80. The Z380 was a pipe dream, alongside the Z80000!

The 68000 didn't really implement restartable instructions correctly. A double fault would crash the processor (that was later fixed in the 68010).
Reply to
Don Y

I don't understand the problem.

I run an X server on each of my machines so I can work on the UN*X boxen "as convenient". I can size the X desktop to be as large as I like and span monitors readily.

[I've not checked to see how this behaves when I power off a monitor as I mentioned up-thread; I imagine it just moves the X server's window onto the remaining monitor(s) and lets some part of it "extend past the edge of the display"]

What are you (they) using for your (windows hosted) X Server?

[I'm currently debating replacing my Neoware X terminals with something home-grown; yet another example of a supplier using FOSS software (NeoLinux) and not making the sources available -- which would have let me FIX the problems instead of rolling my own!]
Reply to
Don Y

(sigh) I wish that were the case. But, even my small "tutorials" are typically 50 pp.

For my current project, lots of different media are involved in the documentation -- yet all are packaged in PDF "containers". E.g., want to know the difference between a back vowel and a front vowel? Here's the back... and here's the front. Wanna know what voice "creak" sounds like? Adjust this slider and click "PLAY"... Wanna know what the "help" gesture looks like? Click "PLAY" and watch the animation.

Etc.

In my case, I won't be available to "clarify things" so I have to make sure I can present all the necessary information as unambiguously as possible. I've a fair bit of experience reading other folks' documentation and have been sorely disappointed by the lack of clarity. As the original authors aren't available, that leaves me floundering trying to GUESS at their intent.

You'd think folks would want their work to be understood and *used* (else, why publish it??)

Reply to
Don Y

formatting link

It was Mortice Kern Systems (MKS) before PTC bought them for another product and inherited the XServer and Toolkit. The bulk of our code dates back to the AIX days and builds and runs on both Linux and Windows. IBM priced their way out of the market with the RS6000 boxes, so everyone went to Windows. I think we still have a couple of RS6000 boxes, but whether they would boot is a good question.

Legacy is grand.

Reply to
rbowman

We have tech writers but they have a tendency to cut'n'paste the programmer's notes from the work order. I suppose it's improved my writing skills since I know where the notes will wind up. I have worked with good tech writers in the past.

Reply to
rbowman

I still have a Captain Zilog t-shirt around someplace. The Boston IEEE had a 68000 seminar that I went to, but I was rooting for the Z8000. The 8088 was a severe disappointment. I had done bank switching with a Z80 and figured the 8088 just moved a little external hardware on board.

I only worked with a TI chip on one project. Its big selling point was that there were rad-hard parts available. Other than that it didn't have much to recommend it.

Reply to
rbowman

I don't spend much time "documenting the code". Rather, I have to spend time documenting the algorithms and the underlying technology.

For example, I use speech output in one of my interfaces (the interfaces are varied -- depending on the capabilities of the user). So, I first have to present an overview of speech synthesis (from text). This lets me present the problems that will be encountered without muddying up the discussion of a particular implementation. It lets me introduce a lexicon that I can later use as a shorthand to provide "back references" in the ensuing design discussions.

I can then address the individual pieces of a synthesizer -- text normalization, grapheme-to-phoneme conversion, stress assignment, prosody, waveform generation, etc. I can break a huge project down into more understandable components -- with this "front-end roadmap".

[In school, I was exposed to the concept of "complexity" with the understanding that anything that doesn't fit in a single braincase is "complex". So, I want to make sure I present issues in small enough pieces that the developer can "see the whole picture" -- even if he doesn't have all the details at his immediate grasp]

So, I can make a statement like "The parts-of-speech tagger helps disambiguate between homographs" -- and the reader understands WHY this is necessary (without a digression into that material). Later, when a pronunciation rule is qualified by a PoS tag, the developer shouldn't be puzzled over its purpose.
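In code, that qualification can be as simple as a tag on each rule entry. A toy sketch of the idea (the words, tags, and phoneme strings below are invented for illustration -- not a real lexicon or any particular synthesizer's internals):

  #include <stdio.h>
  #include <string.h>

  enum pos { POS_NOUN, POS_VERB, POS_ANY };

  struct pron_rule {
      const char *grapheme;   /* spelling */
      enum pos    tag;        /* PoS tag the rule is qualified by */
      const char *phonemes;   /* pronunciation if the rule fires */
  };

  /* entries are illustrative only */
  static const struct pron_rule rules[] = {
      { "lead",   POS_NOUN, "L EH D" },          /* the metal */
      { "lead",   POS_VERB, "L IY D" },          /* to lead   */
      { "record", POS_NOUN, "R EH K ER D"  },
      { "record", POS_VERB, "R IH K AO R D" },
  };

  /* first rule whose spelling matches and whose tag is compatible wins */
  static const char *pronounce(const char *word, enum pos tag)
  {
      for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
          if (strcmp(rules[i].grapheme, word) == 0 &&
              (rules[i].tag == POS_ANY || rules[i].tag == tag))
              return rules[i].phonemes;
      return NULL;   /* fall through to letter-to-sound rules */
  }

  int main(void)
  {
      printf("lead/NOUN -> %s\n", pronounce("lead", POS_NOUN));
      printf("lead/VERB -> %s\n", pronounce("lead", POS_VERB));
      return 0;
  }

The tag just becomes one more predicate the rule matcher tests before a rule is allowed to fire -- which is all the "qualified by a PoS tag" above amounts to.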

Then (in another document), I can present the different approaches to speech synthesis -- and the pros and cons of each. Otherwise, a future developer (maintainer?) wouldn't understand why a particular approach was chosen.

Then, for the particular implementation technology selected, I have to explain the various issues that pertain to the design of the actual implementation. E.g., why a particular data structure was chosen over another "better" (?) structure ("Here are the metrics for each approach...") This lets the developer know what he will be encountering when he starts digging through the sources.

Then, in the sources, I can make abbreviated references that remind the developer of what I'm trying to achieve -- without re-justifying the design choice: "Implement affix rules as CART tree".

My hope is that this anticipates questions like:

- why didn't you use a simple indexed/sorted list?

- why didn't you use a b-tree representation?

- couldn't you hash the input to expedite lookups?

etc.

Then, if the developer decides he wants to change the implementation, he at least has an explanation that he can apply to his new proposal ("Why are you NOT using the original approach? What have you discovered that invalidates it?")

I.e., I'm trying to give a back-of-the-napkin tutorial that gives some perspective to a project that most folks would probably be ill-prepared to tackle, "cold". I do this for all of the technologies that I am using as I don't imagine any *one* developer would have a handle on all of them. :<

Reply to
Don Y

I didn't like the dichotomy of A-registers and D-registers in the 68k. Why the hell can't you do anything WITH anything? The same was true of the Intel parts (dating back to the 8080). It was always annoying to have to juggle registers to get *what* you wanted, *where* you wanted it! (back then, everything was ASM so this was very "manual").

The Z180 was... "kinda" an improvement. But, the way the "MMU" operated was hokey. I would love to understand the motivation for that choice! OTOH, I designed some big systems rather easily with Z180s by leveraging the use of the bank area for things like different libraries (with "far" calls that would allow me to access code/data anywhere in physical memory -- at a sizeable performance penalty! :< )
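The trampoline amounts to something like this, rendered in C for clarity (the real thing was a few lines of assembly; the I/O helper is a stand-in, and the BBR address is from memory -- treat both as assumptions):

  typedef void (*bank_fn)(void);

  /* stand-in for a Z180 I/O write; real code would be an OUT instruction */
  static void io_write(unsigned char port, unsigned char val)
  {
      (void)port; (void)val;   /* hardware access elided in this sketch */
  }

  #define BBR 0x39   /* Bank Base Register -- relocates the bank area */

  static unsigned char current_bbr;   /* shadow copy of the BBR */

  /* "Far call": map the callee's physical page under the bank-area
   * window, call through it, then restore the caller's mapping.
   * Two MMU writes per call -- the performance penalty noted above. */
  static void far_call(unsigned char bank, bank_fn fn)
  {
      unsigned char saved = current_bbr;

      io_write(BBR, bank);    /* remap the bank area for the callee */
      current_bbr = bank;
      fn();                   /* runs out of the newly mapped window */
      io_write(BBR, saved);   /* put the caller's world back */
      current_bbr = saved;
  }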

The 16032 was a breath of fresh air (on paper) as it eliminated a lot of the "arbitrary" constraints that these other processors had imposed.

I think the 8085 is still sold in a radhard variety. Likewise, the 6502. (I can't remember if the 1802 ever made that distinction)

It was very interesting to see the different approaches that were taken at the time (when there truly WAS "variety"). And, some of the parallels in the designers' personalities, corporate mentality, etc.

I think many had their own ideas as to how their devices were going to be used. And, they were often misguided. Or, premature (i.e., thinking that the devices would be used in much the same way that mainframes were being applied, only on a much smaller scale).

When you'd show concept drawings of how you *might* apply one of their devices, they'd be puzzled: "why are you using it like that? you could do this, instead!" "yes, but your way costs $X, mine costs $Y -- and Y < X!"

Reply to
Don Y

The 1st system I managed was a Varian 73. It had the quite innovative feature of 3 separate registers for the upper memory maps in use: one for the program counter, one for the fetch address, and one for the store address.

Thus, if the map registers were set up ahead of time, loading or storing the accumulator from or to memory at the same low address actually copied data between mapped memory locations *without* the need to change map registers.
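In C terms, the trick looks roughly like this (a toy model -- the page size, page count, and names are invented for illustration, not Varian's):

  #include <stdint.h>

  #define PAGE_SIZE 2048u

  static uint8_t physmem[32 * PAGE_SIZE];   /* physical memory, paged */

  static unsigned map_load;    /* map register used for operand loads  */
  static unsigned map_store;   /* map register used for operand stores */
  /* (the third register would map the program counter's fetches) */

  static uint8_t load(uint16_t logical)
  {
      return physmem[map_load * PAGE_SIZE + logical % PAGE_SIZE];
  }

  static void store(uint16_t logical, uint8_t v)
  {
      physmem[map_store * PAGE_SIZE + logical % PAGE_SIZE] = v;
  }

  /* With the maps set up ahead of time, "load A / store A" at the SAME
   * low address moves data between two physical pages -- no map-register
   * changes needed inside the loop. */
  void copy_page(unsigned src_page, unsigned dst_page)
  {
      map_load = src_page;
      map_store = dst_page;
      for (uint16_t a = 0; a < PAGE_SIZE; a++)
          store(a, load(a));
  }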

Reply to
Mike Duffy

Ah! I think I have an old version of the toolkit archived here.

Yeah, my SPARCstation LX got to that point. It was frustrating to "set it free" as I had really maxed it out. But, at 50MHz, it was too pokey to hang onto. (OTOH, I kept my Voyager which wasn't much faster at 60MHz!)

I think it is an interesting aspect of technology.

I know what it costs me (effort/time/money) to maintain my legacy "investments". OTOH, I only have to answer to myself; if I want to replace/upgrade, I don't have to make a case to my employer, stockholders, customers, etc.

I've a colleague who does IT for a multimillion dollar enterprise, here. It is "educational" (for want of a better word) to watch how he pieces together solutions to leverage the existing software/hardware infrastructure. He's always squirreling away big iron that he comes across, as getting spares for his existing systems is virtually impossible.

He clearly could never tackle replacing everything; the effort and cost would be prohibitive (cheaper to sell the business and let that be someone else's "surprise"! :> ). They might end up with a cleaner implementation -- but business would grind to a halt for many years while the upgrade was being created (i.e., the IT would be frozen while it was reimplemented anew).

I often muse over where the "drop dead" point might be... when he'll be faced with a challenge that he can't cobble into his existing implementation. And, whether he will, at that time, propose the complete overhaul; or, yet another kludge to ALMOST upgrade...

[Kinda like a plumber repeatedly patching a leak... when does he give up and tell the homeowner major repairs are in order?]
Reply to
Don Y

In the early days of video gaming (arcade), address spaces were small (e.g., 64KB) and CPUs were slow (e.g., a 1MHz bus). Couple this with the need to update large areas of the screen "atomically" -- staying out of the way of the "beam", as colliding with it produces visual artifacts.

So, you didn't want to have to "bank" the memory. Yet, you might need ~48KB just for the screen.

Thus, you put the video memory *under* the program memory and relied on the fact that you never wanted to WRITE to program memory (ROM-based); you could thus deflect the write cycle to the display memory.

Unfortunately, this meant you couldn't READ the display memory -- because it was the program memory that appeared in its place!
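The decode amounts to steering reads and writes of the same region to different devices -- roughly like this (addresses and sizes invented for the sketch):

  #include <stdint.h>

  #define REGION_BASE 0x4000u   /* where the shared region starts */
  #define REGION_SIZE 0xC000u   /* ~48KB of screen under the ROM  */

  static const uint8_t rom[REGION_SIZE] = {0};  /* program image   */
  static uint8_t vram[REGION_SIZE];             /* display "under" */

  /* reads of the shared region always hit the ROM...
   * (assumes addr is within the shared region) */
  uint8_t bus_read(uint16_t addr)
  {
      return rom[addr - REGION_BASE];
  }

  /* ...while writes are deflected to the video RAM -- which is
   * exactly why the display memory ends up write-only */
  void bus_write(uint16_t addr, uint8_t data)
  {
      vram[addr - REGION_BASE] = data;
  }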

In other products (other industries) you'd play similar games. E.g., using an INput instruction to read the COLUMNS of a keypad switch array -- while driving the ROWS with a particular pattern to allow for sequential scanning of key closures in the matrix. I.e., you could let the matrix occupy a large portion of the address space because it was not the *program/data* address space that you were wasting on it!
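The scan loop is the classic one -- something like this, with invented port numbers and stubbed-out I/O helpers standing in for the IN/OUT instructions:

  #include <stdint.h>

  #define ROW_PORT 0x10   /* drives the row lines (number invented) */
  #define COL_PORT 0x11   /* reads the column lines back */

  /* stand-ins for the OUT/IN instructions; hardware elided here */
  static void io_write(uint8_t port, uint8_t val) { (void)port; (void)val; }
  static uint8_t io_read(uint8_t port) { (void)port; return 0; }

  /* Walk the rows, driving one at a time, and read the columns back.
   * Returns a bitmap of closed keys: bit (row*8 + col). */
  uint64_t scan_keypad(void)
  {
      uint64_t keys = 0;

      for (int row = 0; row < 8; row++) {
          io_write(ROW_PORT, (uint8_t)(1u << row));  /* select one row */
          uint8_t cols = io_read(COL_PORT);          /* closures read  */
          keys |= (uint64_t)cols << (row * 8);
      }
      return keys;
  }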

I designed a product that used a single-chip MCU as a "supervisory processor". Its role was to monitor the main (larger) processor and reset it (and the system) if the processor looked like it was misbehaving (a "watchdog" of sorts -- one with smarts!). But, by carefully interpreting bus control signals, I could also arrange for it to be a valid peripheral, accessible from the main CPU.

(This allowed it and the main CPU to reassure each other that everything was proceeding properly. If not, the MCU would force the CPU -- and system -- into a "safe" state)

With that capability, it was easy to enhance the interface so that the "boot ROM" for the main CPU was actually *inside* the single-chip MCU -- just a portion of the MCU's memory set aside for that purpose! The main CPU would go to fetch its first instruction and I would redirect that memory access to the MCU -- which would look up the instruction in its internal ROM and feed it to the main CPU. The main CPU could then slowly build up a program image in main memory (RAM) based on the instructions in this "boot ROM".

This would continue until the main CPU would access the MCU *as a peripheral* and turn off the "boot ROM" flag. Thereafter, code would execute out of the RAM and the MCU would resort to its nominal functionality in the system.
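The MCU's side of that boils down to two event handlers -- sketched here with invented names (the "boot ROM off" command value is made up, too; only the shape of the logic is from the description above):

  #include <stdint.h>
  #include <stdbool.h>

  /* loader image held in the MCU's internal ROM (contents elided) */
  static const uint8_t boot_rom[256] = {0};

  static bool serving_boot_rom = true;

  /* Called by the bus-watching logic when the main CPU starts a fetch.
   * Returns true if the MCU drove the data bus (i.e., the fetch was
   * satisfied from the internal "boot ROM" instead of main memory). */
  bool on_cpu_fetch(uint16_t address, uint8_t *data_out)
  {
      if (!serving_boot_rom)
          return false;               /* RAM answers normally */
      *data_out = boot_rom[address % sizeof boot_rom];
      return true;                    /* MCU fed the instruction */
  }

  /* Called when the main CPU addresses the MCU as a peripheral. */
  void on_peripheral_write(uint8_t value)
  {
      if (value == 0)                 /* "boot ROM off" (invented) */
          serving_boot_rom = false;   /* thereafter, code runs from RAM */
  }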

One of my first products used information about bus cycles to allow me to discriminate among five "logically equivalent" ways that the contents of location 0x0000 could be accessed:

- the first opcode fetched after power-on reset

- the opcode fetched as the result of an "RST 0" instruction

- "interrupt 0" (almost the same as an "RST 0" but generated by hardware)

- a "soft reset" (JMP 0)

- the data read from location 0x0000

A stranger looking at the code would wonder how the same reference could be interpreted differently.
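External logic for that classification might look like the following. The status inputs here are generic stand-ins, not any particular CPU's actual bus signals; the point is just that one address can be reached five ways and the cycle type tells them apart:

  #include <stdbool.h>

  enum access_kind {
      FIRST_FETCH_AFTER_RESET,
      RST0_OPCODE_FETCH,
      INT0_VECTOR_FETCH,
      SOFT_RESET_JMP0,
      PLAIN_DATA_READ
  };

  struct bus_cycle {
      bool opcode_fetch;       /* instruction fetch vs. data read  */
      bool int_ack;            /* cycle follows interrupt acknowledge */
      bool first_since_reset;  /* latched by the reset circuit     */
      bool prev_was_rst0;      /* watcher saw an RST 0 opcode go by */
  };

  enum access_kind classify_0000(const struct bus_cycle *c)
  {
      if (!c->opcode_fetch)      return PLAIN_DATA_READ;
      if (c->first_since_reset)  return FIRST_FETCH_AFTER_RESET;
      if (c->int_ack)            return INT0_VECTOR_FETCH;
      if (c->prev_was_rst0)      return RST0_OPCODE_FETCH;
      return SOFT_RESET_JMP0;    /* a JMP 0 is all that's left */
  }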

Nowadays, everything is plain vanilla, by comparison. You think nothing of setting aside a megabyte of address space to access a keypad. Another megabyte to access 16KB of RAM. etc.

Boring.

Reply to
Don Y

That's the problem with legacy software -- you do need to make a case to move on. Not many industries come with that long a trail. Another programmer and I are hacking out the next generation that will depart from the old code base. It's been a long project, mostly examining two competing technologies. We're well aware we're laying the groundwork for the next legacy. Flash forward to 2031: what idiot ever decided C# was a good idea? We still need to use Visual Studio 2025 to build the crap.

Reply to
rbowman

No, like these. Life didn't begin with the Mac. He made the claim that Apple's old machines were only available from Apple. Well, these are old Apple machines. He also has "closed system" conflated with "sole-sourced product".

formatting link

From Wikipedia, the free encyclopedia


The following is an incomplete list of clones of Apple's Apple II home computer. More details on some models are in Apple II series#Clones:

AES easy3 Agat Agat-4 Agat-7 Agat-8 Agat-9 Albert[1] AMI II Apco Arrow 1000 Asem AM 64e Aton II Ap II Base 48, Base 64, Base 64A Basis 108, Basis 208 Bee II BOSS-1 CCE Exato Pró Citron II CSC Euro Super Cubic 88 CB-777[2] Elppa II Formosa Microcomputer Formula II kit ("Fully compatible with Apple II+")[3] Franklin Ace Fugu Elite 5 Golden II IMKO 2 InterTek System IV ITT 2020 (Europlus) Ivel Z3 Laser 128 Laser 3000 Mackintosh MCP MC 4000 Mango II Medfly Microcraft Craft II Plus Microdigital TK-2000 Color (not 100% binary-compatible) Microdigital TK-2000 II Color (not 100% binary-compatible) Microdigital TK-3000 IIe Microdigital TK-3000 //e Compact Microengenho Multitech Microprofessor II (MPF II) Microprofessor III (MPF III) MicroSCI Havac Microcom IIe Mind II Multi-system computer O. S. Micro Systems Orange Panasia Peach Pearcom Pravetz series 8 Pravetz 8A Pravetz 8M Pravetz 8E Pravetz 8C Precision Echo Phase II Pineapple[4] RX-8800 Sekon (computer) Shuttle (computer) Space 83 Spring Spectrum ED Syscom 2 TK 2000 TK 3000 TK 8000[2] UNITRON AP II Unitronics Sonic VECTORIO [5](Japan?) Wombat[2] Zeus 2001

I'm always amused how the Apple fanbois tend ...

That's one reason I've never bought one. Their cell phones are another example. IDK of any other manufacturer where the battery can't be replaced by the user. If any other manufacturer tried to pull that, they wouldn't sell very many. But Apple does it and people stand in line for hours to get one. I've used other people's iPhones enough to know that it's very similar to the Android OS. In fact, I like the Android OS better.

Reply to
trader_4
