The hundreds of other companies who had also done it by then?
Although I'm not sure they'd necessarily copy a *company*, because enough algorithms and theory would have essentially been in the public domain by then anyway, and it wouldn't exactly be rocket science to just do a bit of bedtime reading and go from there.
They were slightly more canny than that even... they did a per-machine licensing price for the OEMs at a significant discount over the usual license cost. It meant that the maker paid a fixed license fee for every machine they sold regardless of what OS (if any) it had on it. Hence for them to go with another OS they always had to in effect pay twice, so it guaranteed that anything other than a free OS could never compete on price.
Well, as multitasking as Win 3.1 anyway... (in fact that is slightly unfair, since at least the Mac was not burdened with the need to run DOS apps that had no awareness of the need to share resources)
Not denying the usefulness of app switching even if it could not run them at the same time. Even Win 3.1 was a reasonable (if excessive) tool for switching between DOS apps. (DESQview was probably better, mind)
But MacOS of old was really showing its age below the waterline. I used to run it on an emulator on my miggy at one time (mainly for access to apps like Netscape that were not available as native apps). It was quite entertaining having the various benchmark tools try to work out what platform they were running on ;-) It looked like a Quadra, but with a 68060, a hard drive six times faster, and (emulated) video several times slower than the real '040 Mac.
ISTR there being an MMU available for the '010, too, although it was a bit clunky (and some vendors - e.g. Sun - implemented their own).
Several vendors implemented a kind of "elegant kludge" to do virtual memory on the humble 68000, too (which lacked the ability to restart a faulted instruction, so it could not implement virtual memory properly) - that involved actually running two CPUs out of phase with each other, so the second could take over when the first stalled on a page fault.
Segmented architecture, 16-bit registers and not many of them, loads of hard-coded use cases for various registers, weak stack handling, lame edge-sensitive interrupt controllers and prioritisation...
compared with flat memory, 32 registers, an orthogonal design, etc.
Than what? PL/M was better than Intel assembler, granted (especially if you had to use Intel's DOS-based tools, which were bug city). The '186 cleared away a few peripheral chips and was favoured by hardware guys who could not work out how to do address decoding logic that worked, but the performance was poor in comparison to the 68K.
If it's anything like doing non-bit-aligned ASN.1 (PER) encoding, then it's a mess regardless of the language! I wrote a set of routines for it in C which looked OK but were too slow, so I recoded them in 68K assembler.
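For anyone who hasn't met PER: the pain is that fields are packed at arbitrary bit offsets, so nearly every write straddles a byte boundary. A minimal sketch of the kind of bit-level writer such C routines need (this is a hypothetical illustration, not the poster's actual code - `BitWriter` and `put_bits` are made-up names):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical bit-level writer: tracks the next free bit in a
 * zero-initialised output buffer. */
typedef struct {
    uint8_t *buf;     /* output buffer (must start zeroed) */
    size_t   bitpos;  /* next free bit, counted from the start */
} BitWriter;

/* Append the low 'nbits' bits of 'value' (MSB first), straddling byte
 * boundaries as needed - the straddling is what makes PER slow in
 * straightforward C. */
static void put_bits(BitWriter *w, uint32_t value, unsigned nbits)
{
    while (nbits > 0) {
        size_t   byte  = w->bitpos >> 3;
        unsigned used  = (unsigned)(w->bitpos & 7); /* bits used in byte */
        unsigned room  = 8 - used;                  /* bits free in byte */
        unsigned take  = nbits < room ? nbits : room;
        uint32_t chunk = (value >> (nbits - take)) & ((1u << take) - 1u);

        w->buf[byte] |= (uint8_t)(chunk << (room - take));
        w->bitpos += take;
        nbits     -= take;
    }
}
```

The per-bit shifting and masking in the inner loop is exactly where a hand-tuned 68K assembler version could win big over what a compiler of that era produced.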
The '286 would have been OK(ish) if it were not for the legacy of PCs and DOS. You really needed the complexity of the '386 to tame those. The '386 was the first processor I met where it took 20 minutes to acclimatise to the new assembler, and then months to get a handle on all the wrinkles of the architecture.
I actually assumed he was just trolling, because surely even he isn't
*that* crazy.
x86 was awful, it really was. Didn't its choice largely stem from IBM using it on an earlier product? If only they'd gone for m68k instead, or even the ns32k...
But why? If Apple had been dominant they would have just said "this is fine, let's leave it as it is."
The PC market shows what happens with competition, and Apple played catch-up for an awful lot of that time. In fact, until someone else developed the Intel boxes for them they were essentially dead. Apple are not as innovative as some think they are.