Do you use any computer-based tool for doing project layout?

Now he does that. And the writers get their names on the cover.

Reply to
LDosser

Was that "Tage Frid Teaches Woodworking, Book 1: Joinery"? I found it very clear, in spite of some reproduction problems with the pix... though it's taken me about a year to resolve the cover illustration into a dovetail joint. Kept reading it as some abstract illustration -- no longer able to see that, now that I know what it's supposed to be!

Reply to
Steve

Oh, I see... a liberal.

Reply to
Robatoy

Huh?

Reply to
keithw86

I know you're stupid. You don't have to constantly demonstrate the fact.

Reply to
keithw86

On Thu, 15 Apr 2010 06:29:11 -0700 (PDT), " snipped-for-privacy@gmail.com"

You mean in the same way you demonstrate how to be a redneck?

Reply to
Upscale

I was just thinking about who could possibly be dumber than RoboTwat, and who shows up...

Reply to
keithw86

On Thu, 15 Apr 2010 06:55:54 -0700 (PDT), " snipped-for-privacy@gmail.com"

You really like confirming your status as ignorant trailer park trash, don't you? I know you can't help playing the simpleton idiot because of your inbreeding, but at least try to make a token effort, will you?

Reply to
Upscale

On Wed, 14 Apr 2010 15:26:58 -0700, the infamous "LDosser" scrawled the following:

Quite the evocative phrase, wot?

Reply to
Larry Jaques

On Wed, 14 Apr 2010 18:42:35 -0400, the infamous "Lee Michaels" scrawled the following:

And it would be equally artful.

Just saying...

Reply to
Larry Jaques

*wiping monitor and keyboard*
Reply to
Robatoy

very strange.

Reply to
J. Clarke

Must have been a rather flimsy stand -- the golf-ball mechanism wasn't that massive.

Reply to
Robert Bonomi

It was the standard table. The ball doesn't have much mass, but the carriage doing full returns against the stop as fast as possible made a lot of noise and did the deed.

Reply to
Doug Winterburn

Here's the data sheet for the printer showing the stand:

formatting link

Reply to
Doug Winterburn

That chip does -not- do *actual* subtraction. The spec sheet makes that fact abundantly clear (expressly stated in the 2nd para. of the description). That chip is an 'adder' with added logic to 'invert' one of the inputs, so that it can _simulate_ subtraction (by internal 'complement and add').

And _that_ is what the MC10H180, in fact, does.

Demonstrating, yet again, what you "don't know you don't know" about digital logic circuit design.
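
For anyone following along, here is a minimal C sketch of that 'complement and add' simulation of subtraction. The 8-bit width and the helper names are made up purely for illustration -- this shows the general trick, not the chip's actual ECL internals:

    #include <stdio.h>

    #define MASK 0xFFu   /* pretend 8-bit datapath, for illustration only */

    /* The only real arithmetic block: a plain adder with carry-in. */
    static unsigned add8(unsigned a, unsigned b, unsigned carry_in)
    {
        return (a + b + carry_in) & MASK;
    }

    /* Subtraction "simulated" by inverting one input and adding, with
       the carry-in supplying the +1 of the two's complement. */
    static unsigned sub8(unsigned a, unsigned b)
    {
        return add8(a, ~b & MASK, 1u);
    }

    int main(void)
    {
        printf("%u\n", sub8(9, 4));   /* 5   */
        printf("%u\n", sub8(4, 9));   /* 251, i.e. -5 as an 8-bit two's-complement pattern */
        return 0;
    }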

They are _not_ 'the same operations'. They cannot be implemented with the same logic. They cannot even be implemented with the same number of gates.

The -results- of the operations are "mathematically equivalent", but they are *NOT* the same operations. "In theory", this is a difference that should make no difference, but "in practice", there _is_ a difference when you have to implement it in the real world.

Useless? Do you know how many *BILLIONS* of dollars of scientific/engineering computers were built using that -exact- logic, over a period of several decades?

Those of us who actually _used_ those kinds of machines had to deal with this "useless" behavior on a day-to-day basis.

Those machines *all* used _NATIVE_SUBTRACTION_, with addition being 'simulated' by 'complement and subtract'.

To be absolutely explicit, on those machines _addition_ was done by running the second operand through an inverter and then feeding that result to the 'native' subtraction circuitry. It was -not- done by disabling inverters in front of a 'native' adder circuit. I'm sure even _you_ can see the stupidity of running a set of (front-end) inverters before a set of (internal to simulated subtraction logic) inverters that fed an 'adder' circuit to generate the result.
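
A rough C model of that arrangement -- a hypothetical 16-bit ones'-complement machine with a 'native' subtract unit, where addition is done by inverting the second operand and feeding the subtractor. The word width and helper names are invented for illustration; this is a sketch of the scheme, not any particular machine:

    #include <stdio.h>

    #define MASK 0xFFFFu   /* hypothetical 16-bit word */

    /* Model of a native ones'-complement SUBTRACT unit: a - b is formed
       as a + ~b with the end-around carry folded back into bit 0. */
    static unsigned oc_sub(unsigned a, unsigned b)
    {
        unsigned s = (a & MASK) + (~b & MASK);
        if (s > MASK)                /* carry out of the top bit...     */
            s = (s + 1u) & MASK;     /* ...wraps around into the bottom */
        return s;
    }

    /* Addition done the way described above: run the second operand
       through an inverter, then feed the native subtractor. */
    static unsigned oc_add(unsigned a, unsigned b)
    {
        return oc_sub(a, ~b & MASK);
    }

    int main(void)
    {
        printf("%04x\n", oc_add(0x0005u, 0x0003u));  /* 0008          */
        printf("%04x\n", oc_sub(0x0005u, 0x0003u));  /* 0002          */
        printf("%04x\n", oc_sub(0x0003u, 0x0005u));  /* fffd, i.e. -2 */
        return 0;
    }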

Those who claim otherwise are ignorant of the FACTS of computing history.

Reply to
Robert Bonomi

The "standard' algorithm, in computer _hardware_ for getting the

2s complement 'negative' of a value is to invert the bits (1's complement negative) and _ADD_1_ to that value. This algorithm, as does *any* other possible algorithm, *fails* when the value to be negated is the 'most negative value' that can be represented on the machine.`

Yup. on a 2's complement machine, "NOT" (1's complement negate) is *much* gaster than "NEG" (2's complement negate). On a 1's complement machine, the opcodes are synonyms (if the latter code even _exists_, that is).
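
A small C sketch of that invert-and-add-1 negate, and of the failure at the most negative value. The 16-bit word and the helper name are purely illustrative:

    #include <stdio.h>

    #define MASK 0xFFFFu   /* hypothetical 16-bit word */

    /* Hardware-style two's-complement negate: invert the bits, add 1. */
    static unsigned neg16(unsigned x)
    {
        return (~x + 1u) & MASK;
    }

    int main(void)
    {
        printf("%04x\n", neg16(0x0005u));  /* fffb, i.e. -5                      */
        printf("%04x\n", neg16(0x8000u));  /* 8000: the most negative value      */
                                           /* "negates" to itself -- the failure */
        return 0;
    }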

Reply to
Robert Bonomi

Liar. It is *extremely* *relevant* when discussing _real_world_ implementations of hardware to do the task.

Something you *obviously* know nothing about.

Reply to
Robert Bonomi

The 'ambiguous' bit-pattern for 'zero' *was* _the_ compelling reason that IEEE 'standardized' on 2's complement. The 'test for zero/non-zero' operation had to check for _two_ bit patterns (all zeroes, all ones), which either took twice as long as a single check, *or* used up a _lot_ more 'silicon real-estate'.

Even _worse_, a test for "equality" could not simply check for a bit-for-bit correspondence between the two values; it had to return 'equal' IF one value was all zeroes and the other was all ones. This was _really_ "bad news" for limited-capability processors -- you had to invert one operand, SUBTRACT, and _then_ perform the zero/non-zero check described above. Suddenly the test for 'equal' is 3 gate times *SLOWER* than a 'subtract'. This _really_ hurts performance. "Inequality" compares are also adversely affected, although not to the same degree.
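
To make the cost concrete, here's a tiny C sketch of what 'is zero' and 'is equal' have to cope with on a ones'-complement machine. The 16-bit word and the helper names are assumed purely for illustration:

    #include <stdio.h>

    #define MASK 0xFFFFu   /* hypothetical 16-bit ones'-complement word */

    /* Zero has two encodings: 0x0000 (+0) and 0xffff (-0). */
    static int oc_is_zero(unsigned w)
    {
        return w == 0x0000u || w == MASK;
    }

    /* Equality can't be a plain bit-for-bit compare: +0 must equal -0. */
    static int oc_equal(unsigned a, unsigned b)
    {
        return a == b || (oc_is_zero(a) && oc_is_zero(b));
    }

    int main(void)
    {
        printf("%d\n", oc_equal(0x0000u, 0xffffu));  /* 1: +0 == -0 */
        printf("%d\n", oc_equal(0x1234u, 0x1234u));  /* 1           */
        printf("%d\n", oc_equal(0x0001u, 0xfffeu));  /* 0: +1 != -1 */
        return 0;
    }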

For *big*, 'maximum performance' machines, the cost of the additional hardware for dealing with unique +0/-0 was small enough (relative to the _total_ cost of the machine) that it was easy to justify for the performance benefits. When the IEEE stuck its oar in, 'budget' computing was a fact of life -- mini-computers and micro-processors. It was _important_ to the *user* of computing that the results on 'cheap' hardware match *exactly* those obtained from using the 'high priced spread', and that _code_ developed on one machine run *unchanged* on another machine and produce exactly the same results.

At the vehement urging of the makers of 'budget' computing systems, as well as the users thereof, 2's complement arithmetic was selected for the IEEE standard, *despite* the obvious problem of a _non-symmetric_ representation scheme. Number 'comparisons' were much more common in existing code than 'negations', thus it 'made sense' to use a representation scheme that favored the 'more common' operations. In addition, the 'minor problem' of the 'most negative number' not having a positive counterpart was not perceived to be a 'killer' issue. "Real-world" data showed that only in *VERY* *RARE* situations did numeric values in computations get 'close' to the 'limit of representation' in hardware.

I *understand* the decision, although, still to this day, I disagree with it.

Reply to
Robert Bonomi

I would have to concede Keith's expertise on the subject of morons.

*NOBODY* has the degree of _first-hand_ experience on the matter that he does.
Reply to
Robert Bonomi
